halid: 01760448 | lang: en | domain: sde.ie, chim.anal, sdu.envi | year: 2017
url: https://hal.univ-lorraine.fr/hal-01760448/file/Abuhelou%20et%20al%2C%202017%2C%20ESPR.pdf

Fayez Abuhelou
Laurence Mansuy-Huault
Catherine Lorgeoux
Delphine Catteloin
Valéry Collin
Allan Bauer
Hussein Jaafar Kanbar
Renaud Gley
Luc Manceau
Fabien Thomas
Emmanuelle Montargès-Pelletier
Suspended Particulate Matter Collection Methods influence the Quantification of Polycyclic Aromatic Compounds in the River System
Keywords: Continuous Flow Centrifuge, Filtration, Polycyclic Aromatic Compounds, Suspended Particulate Matter
In this study, we compared the influence of two collection methods, filtration (FT) and continuous flow field centrifugation (CFC), on the concentration and the distribution of polycyclic aromatic compounds (PACs) in suspended particulate matter (SPM) occurring in river waters. SPM samples were collected simultaneously with FT and CFC from a river during six sampling campaigns over two years, covering different hydrological contexts. SPM samples were analyzed to determine the concentrations of PACs, including 16 polycyclic aromatic hydrocarbons (PAHs), 11 oxygenated PACs (O-PACs) and 5 nitrogen-containing PACs (N-PACs). The results showed significant differences between the two separation methods. In half of the sampling campaigns, PAC concentrations differed by a factor of 2 to 30 between FT- and CFC-collected SPM. The PAC distributions were also affected by the separation method.
FT-collected SPM were enriched in 2-3 ring PACs, whereas CFC-collected SPM had PAC distributions dominated by medium to high molecular weight compounds typical of combustion processes. This could be explained by the distinct cut-off thresholds of the two separation methods and strongly suggests the retention of colloidal and/or fine matter, particularly enriched in low molecular weight PACs, on the glass-fiber filters. These differences between FT and CFC were not systematic but were enhanced by high water flow rates.
Introduction
PACs constitute a wide group of organic micropollutants, ubiquitous in aquatic environments. They include the 16 PAHs identified as priority pollutants by the United States Environmental Protection Agency due to their mutagenic and carcinogenic properties [START_REF] Keith | ES&T Special Report: Priority pollutants: I-a perspective view[END_REF]) (e.g. benzo(a)pyrene, benzo(a)anthracene, chrysene, benzo(b)fluoranthene, benzo(k)fluoranthene, indeno(123-cd)pyrene and dibenzo(a,h)anthracene). PAHs originate from pyrolytic or petrogenic sources and are used as markers of combustion processes, fuel spills or tar-oil contaminations to trace inputs in the environment. Among PACs, oxygen (O-PACs) and nitrogen (N-PACs) containing polycyclic aromatic compounds are emitted from the same sources as PAHs but can also be the products of photochemical, chemical or microbial degradation of PAHs [START_REF] Kochany | Abiotic transformations of polynuclear aromatic hydrocarbons and polynuclear aromatic nitrogen heterocycles in aquatic environments[END_REF][START_REF] Bamford | Nitro-polycyclic aromatic hydrocarbon concentrations and sources in urban and suburban atmospheres of the Mid-Atlantic region[END_REF][START_REF] Tsapakis | Diurnal Cycle of PAHs, Nitro-PAHs, and oxy-PAHs in a High Oxidation Capacity Marine Background Atmosphere[END_REF][START_REF] Lundstedt | Sources, fate, and toxic hazards of oxygenated polycyclic aromatic hydrocarbons (PAHs) at PAH-contaminated sites[END_REF][START_REF] Biache | Bioremediation of PAH-contamined soils: Consequences on formation and degradation of polar-polycyclic aromatic compounds and microbial community abundance[END_REF]. These polar PACs have recently received increasing attention in the monitoring of coking plant sites because of their toxicity. More soluble than their parent PAHs, their transfer from soil to river should be enhanced but reports on their occurrence in aquatic environments are scarce [START_REF] Qiao | Oxygenated, nitrated, methyl and parent polycyclic aromatic hydrocarbons in rivers of Haihe River System, China: Occurrence, possible formation, and source and fate in a water-shortage area[END_REF][START_REF] Siemers | Development and application of a simultaneous SPE-method for polycyclic aromatic hydrocarbons (PAHs), alkylated PAHs, heterocyclic {PAHs} (NSO-HET) and phenols in aqueous samples from German Rivers and the North Sea[END_REF].
PACs enter the river systems through gas exchange at the air-water interface for the most volatile compounds, or associated to soot particles for the high molecular weight PACs, through atmospheric deposition and run-off or leaching of terrestrial surfaces [START_REF] Cousins | A review of the processes involved in the exchange of semi-volatile organic compounds (SVOC) across the air-soil interface[END_REF][START_REF] Heemken | Temporal Variability of Organic Micropollutants in Suspended Particulate Matter of the River Elbe at Hamburg and the River Mulde at Dessau, Germany[END_REF][START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF][START_REF] Gocht | Accumulation of polycyclic aromatic hydrocarbons in rural soils based on mass balances at the catchment scale[END_REF]).
These compounds partition among the entire water column, depending on their physical-chemical properties (solubility, vapor pressure, and sorption coefficient), and the hydrologic conditions in the river [START_REF] Zhou | The partition of fluoranthene and pyrene between suspended particles and dissolved phase in the Humber Estuary: a study of the controlling factors[END_REF].
The low molecular weight PAHs are found in the dissolved phase whereas the high molecular weight PAHs are associated with particulate or colloidal matter [START_REF] Foster | Hydrogeochemistry and transport of organic contaminants in an urban watershed of Chesapeake Bay (USA)[END_REF][START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF]. Although less studied than the sediments or the dissolved phase of the water column, suspended particulate matter (SPM) plays a major role in the transport and fate of hydrophobic micropollutants in rivers, and numerous studies focus on its characterization [START_REF] Fernandes | Polyaromatic hydrocarbon (PAH) distributions in the Seine River and its estuary[END_REF][START_REF] Bianchi | Temporal variability in terrestrially-derived sources of particulate organic carbon in the lower Mississippi River and its upper tributaries[END_REF][START_REF] Patrolecco | Occurrence of priority hazardous PAHs in water, suspended particulate matter, sediment and common eels (Anguilla anguilla) in the urban stretch of the River Tiber (Italy)[END_REF][START_REF] Maioli | Distribution and sources of aliphatic and polycyclic aromatic hydrocarbons in suspended particulate matter in water from two Brazilian estuarine systems[END_REF][START_REF] Chiffre | PAH occurrence in chalk river systems from the Jura region (France). Pertinence of suspended particulate matter and sediment as matrices for river quality monitoring[END_REF][START_REF] Meur | Spatial and temporal variations of Particulate Organic Matter from Moselle River and tributaries: A multimolecular investigation[END_REF]. In this perspective, the reliability of the sample collection process is a crucial prerequisite to ensure the quality of the analyses and of the conclusions that can be drawn from them.
Several methods are used to collect SPM from aquatic systems (e.g. [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. Filtration[END_REF][START_REF] Rossé | Effects of continuous flow centrifugation on measurements of trace elements in river water: intrinsic contamination and particle fragmentation[END_REF][START_REF] Ademollo | The analytical problem of measuring total concentrations of organic pollutants in whole water[END_REF]). Sediment traps and field continuous flow centrifugation rely on the size and density properties of particles to promote their separation from water, similarly to the sedimentation occurring in natural systems. Both methods offer the advantage of extracting SPM from a large volume of water (several hundred liters) and thus provide a large amount of SPM that is statistically representative because it integrates a large time window of at least several hours. Filtration is the most widespread technique for SPM collection since it is easy to handle in the field and in the lab. The separation is controlled by the pore size of the filters. It is generally performed on small volumes that represent only a snapshot of the river water. Several studies have pointed out the advantages and drawbacks of these different techniques. The distribution of organic compounds between the dissolved and particulate phases is strongly affected by the separation technique. An overestimation of organic compounds in the particulate phase can be observed when SPM are separated by filtration, attributed to colloid clogging of the membrane during filtration, but these differences seem to depend on the amount of suspended solids, the organic matter content and the ionic strength in the river [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. Filtration[END_REF][START_REF] Morrison | Filtration Artifacts Caused by Overloading Membrane Filters[END_REF][START_REF] Rossé | Effects of continuous flow centrifugation on measurements of trace elements in river water: intrinsic contamination and particle fragmentation[END_REF][START_REF] Ademollo | The analytical problem of measuring total concentrations of organic pollutants in whole water[END_REF]. However, most studies focus on the total concentrations of organic compounds in SPM and seldom discuss the influence of the separation technique on the distribution of organic compounds, although these distributions are often used to trace their origin, as in the case of PAHs.
In that perspective, we analyzed the concentration and the distribution of PACs in SPM collected in a river affected for more than one century by intense industrial activities (iron ore mining and steel-making plants) and the associated urbanization. Two sampling methods were compared, i.e. field continuous flow centrifugation (CFC) and filtration on glass-fiber filters (FT). The study covered different hydrological situations and several sampling sites where the two sampling methods were applied. Two groups of PACs were explored: 16 PAHs, representing a group of hydrophobic compounds (3.3 < logK ow < 6.75), and 11 oxygenated and 5 nitrogen-containing PACs, which represent a class of moderately hydrophobic compounds (2 < logK ow < 5.32).
Material and methods
Characteristics of sampling sites
The Orne River is a left-bank tributary of the Moselle River, in the northeast of the Lorraine region, France (Fig 1). It is a small river (Strahler order 4), with a watershed area of 1,268 km 2 covered by forest (26.5 %), agriculture (67 %) and urban land (6 %). The Orne is 85.8 km long and flows from an altitude of 320 to 155 m asl. The monthly averaged flow fluctuates between 1.5 and 19 m 3 s -1 with a mean flow of 8.1 m 3 s -1 and maximum flow rates reaching 170 m 3 s -1 . This river was heavily impacted by iron ore extraction and steel-making industries throughout the 20 th century. Five sampling sites were chosen in the lower part of the Orne River, on the last 23 km before the confluence with the Moselle River, based on criteria of representativeness and accessibility for the field continuous flow centrifuge: Auboué (AUB), Homécourt (BARB), Joeuf (JOAB), Moyeuvre-Grande (BETH) and Richemont (RICH). The BETH site at Moyeuvre-Grande is located at a dam that influences the river hydrology: the water depth ranges between 3 and 4 m, whereas it is about 1 m on average at the other sites, and the water velocity (<0.5 m s -1 at the dam) is 1.5 to 5 times lower than at the other sites.
SPM collection
SPM were collected at six periods of time between May 2014 and May-June 2016 covering different hydrological situations (Table 1). The field continuous flow centrifugation (CFC) and filtration (FT) were applied to obtain SPM CFC and SPM FT respectively. Additionally, in May and June 2016, water samples from the inlet and outlet of the CFC were collected and filtered back in the laboratory, to obtain SPM FT In-CFC and SPM FT Out-CFC .
The CFC operation, as already described by Le [START_REF] Meur | Characterization of suspended particulate matter in the Moselle River (Lorraine, France): evolution along the course of the river and in different hydrologic regimes[END_REF], started with river water being pumped to the mobile CFC (CEPA Z-41 running at 20,000 rpm, equivalent to 17,000×g), located 10-50 m from the river. The CFC feeding flow rate was set to 600 L h -1 . The cut-off threshold of the field centrifuge was shown to be close to 5 µm by measuring the grain size of the waters at the outlet of the centrifuge. Depending on the campaign, the CFC was run for 1.5 to 3 h in order to obtain representative samples in sufficient amounts (from several grams to 100 g of dry matter). The SPM CFC was recovered from the Teflon plates covering the internal surface of the centrifuge bowl, transferred into glass bottles, transported to the lab in an ice-box to be immediately frozen and freeze-dried, and stored at 4 o C for further use.
Depending on the water turbidity, and in order to collect a sufficient amount of SPM FT on the filters, 7.5 L of water were collected in amber glass bottles and, when necessary, an additional 10 or 20 L were collected. All water samples were brought back to the lab and filtered within 24 h. To facilitate the filtration process, especially for high-turbidity samples, waters were filtered sequentially on pre-weighed glass fiber filters, first on GFD (Whatman, 90 mm diameter, nominal pore size = 2.7 µm) and then on GFF (Whatman, 90 mm diameter, nominal pore size = 0.7 µm).
Filters were then wrapped in aluminum foil, frozen, freeze-dried and weighed individually to ± 0.01 mg, and the SPM content (in mg L -1 ) was determined from the difference between the filter weight before and after filtration, divided by the filtered water volume.
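As a worked illustration of this gravimetric determination, the sketch below uses placeholder filter weights and a placeholder volume, not measurements from the study.

```python
# Illustrative computation of the SPM content of one filtered sample.
# Filter weights (mg, weighed to +/- 0.01 mg) and filtered volume (L) are
# placeholder values only.
filter_before_mg = 125.43   # freeze-dried filter before filtration
filter_after_mg = 186.10    # freeze-dried filter after filtration
volume_filtered_L = 7.5     # volume of river water passed through the filter

spm_mg_per_L = (filter_after_mg - filter_before_mg) / volume_filtered_L
print(f"SPM content: {spm_mg_per_L:.1f} mg/L")   # ~8.1 mg/L
```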
Analytical methods
Global parameters and elemental content
Water temperature, electric conductivity (EC) and turbidity were measured using a portable multiparameter device (Hach®). The dissolved organic carbon (DOC) was measured with an automated total organic carbon analyzer (TOC-VCPH, Shimadzu, Japan) on filtered water (0.22 µm syringe filters) stored in brown glass flasks at 4°C. The particulate organic carbon (POC) was determined on carbonate-free freeze-dried samples of SPM CFC (1 M HCl; left to stand 1 h; shaken 0.5 h) and measured using a CS Leco SC144 DRPC analyzer and/or a CS Horiba EMIA320V2 analyzer at the SARM-CRPG laboratory. The grain size distribution of particles in raw waters (except for the campaign of May 2014) was determined using laser diffraction (Helos, Sympatec). The raw waters were introduced into the Sucell dispersing unit and ultrasonicated for 20 seconds. Duplicate or triplicate measurements were performed, with and without ultrasound treatment, to improve the measurement quality. The particle size distribution was then represented as a volumetric percentage as a function of particle diameter. In addition, the percentiles (Di) of the particles were calculated using the Helos software, Di being the i th percentile, i.e. the particle diameter (in µm) below which i % of the particles in the sample are found.
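A minimal sketch of how such percentiles can be derived from a volumetric size distribution is given below; the bin diameters and volume percentages are illustrative placeholders, not the Helos output.

```python
import numpy as np

# Illustrative volumetric particle-size distribution: bin upper diameters (µm)
# and volume percentage in each bin (placeholder values).
diameters_um = np.array([2, 5, 10, 20, 50, 100], dtype=float)
volume_pct = np.array([10, 25, 30, 20, 10, 5], dtype=float)

cumulative = np.cumsum(volume_pct) / volume_pct.sum() * 100.0
# Di = diameter below which i % of the particle volume lies (linear interpolation)
D10, D50, D90 = np.interp([10, 50, 90], cumulative, diameters_um)
print(D10, D50, D90)   # here: 2.0, 7.5, 35.0 µm
```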
Sample treatment
Up to 2 g of dry matter of SPM CFC and from 0.06 to 1.4 g of dry matter of SPM FT were extracted with an Accelerated Solvent Extractor (Dionex® ASE350). ASE cells were filled with activated copper powder (to remove molecular sulphur) and sodium sulfate (Na 2 SO 4 , to remove remaining water) and pre-extracted for cleaning. Samples were extracted at 130 o C and 100 bars with dichloromethane (DCM) with two cycles of 5 min [START_REF] Biache | Effects of thermal desorption on the composition of two coking plant soils: Impact on solvent extractable organic compounds and metal bioavailability[END_REF]. After adjusting the volume to 5 mL, a 1 mL aliquot was taken for the clean-up step. It was spiked with external extraction standards (mixture of 6 deuterated compounds: [ 2 H 8 ]Dibenzofuran, [ 2 H 10 ]Fluorene, [ 2 H 10 ]Anthracene, [ 2 H 8 ]Anthraquinone, [ 2 H 10 ]Fluoranthene and [ 2 H 12 ]Benzo[ghi]perylene) to control losses during sample preparation. The 1 mL aliquot was then evaporated to dryness under a gentle N 2 flow, diluted in 200 µL of hexane and transferred onto the top of a silica gel column pre-conditioned with hexane. The aliphatic fraction was eluted with 3.5 mL of hexane. The PAC fraction was eluted with 2.5 mL of hexane/DCM (65/35; v/v) and 2.5 mL of methanol/DCM (50/50; v/v). The latter fraction was spiked with 20 µL at 12 µg mL -1 of internal quantification standards (mixture of 8 deuterated compounds, including [ 2 H 8 ]Naphthalene, [ 2 H 10 ]Acenaphthene, [ 2 H 10 ]Phenanthrene, [ 2 H 10 ]Pyrene, [ 2 H 12 ]Chrysene, [ 2 H 12 ]Perylene and [ 2 H 8 ]9H-fluorenone) before evaporation, and the volume was adjusted to 100 µL with DCM. To improve the chromatographic resolution, the sample was derivatized by adding BSTFA (1:1; v/v), bringing the final volume to 200 µL, before injection into the gas chromatograph-mass spectrometer (GC-MS).
Validation and quality control
The quantitative analyses of PACs were carried out using internal calibration with family-specific standards (refer to supporting information S1). For each quantified compound, the GC/MS was calibrated between 0.06 and 9.6 µg mL -1 with 10 calibration levels. The calibration curves were drawn and satisfactory determination coefficients were obtained (r 2 > 0.99). To verify the quantification, two calibration controls (at lower and higher concentrations) were run every 6 samples and only deviations lower than 20% were accepted. The limits of quantification (LOQ) for an extraction of 1 g of sample were between 0.06 and 0.12 µg g -1 (refer to supporting information S1).
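The acceptance rule for the calibration controls amounts to a simple relative-deviation check; the sketch below uses hypothetical control values.

```python
def control_ok(measured_ug_mL, nominal_ug_mL, max_dev=0.20):
    """Accept a calibration control if its relative deviation is below 20 %."""
    return abs(measured_ug_mL - nominal_ug_mL) / nominal_ug_mL < max_dev

# Hypothetical low and high calibration controls run every 6 samples
print(control_ok(0.069, 0.06))   # True  (15 % deviation)
print(control_ok(7.0, 9.6))      # False (~27 % deviation)
```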
Experimental and analytical blanks were also monitored regularly to assess external contamination.
The whole analytical procedure was validated for PAHs using a reference material (SRM 1941a, NIST). For O-PAC and N-PAC analysis, no commercial reference material was available, so the laboratory took part in an intercomparison study on the analysis of O-PACs and N-PACs in contaminated soils [START_REF] Lundstedt | First intercomparison study on the analysis of oxygenated polycyclic aromatic hydrocarbons (oxy-PAHs) and nitrogen heterocyclic polycyclic aromatic compounds (N-PACs) in contaminated soil[END_REF]. The methodology was then adapted to sediments and SPM. The recoveries of the external standards added to each sample were checked, and the quantification was validated if they ranged between 60 and 125 % (refer to supporting information S2).
Results
Sampling campaign characteristics
The global parameters (Table 1) exhibited different hydrological situations in the successive sampling campaigns.
May 2014 and October 2015 corresponded to the lowest flow conditions, with a daily mean water discharge around 1.5 m 3 s -1 and rather high water temperatures (13 to 17°C). The turbidity and the SPM contents were lower than 3 NTU and 6 mg L -1 respectively. The water discharge in May 2015 ranged between 8 and 21 m 3 s -1 , with turbidity values around 30 NTU and SPM contents from 16 to 54 mg L -1 . Higher water discharges, although rather moderate compared to that of a biennial flood reaching 130 m 3 s -1 , were observed in November 2014 (22 m 3 s -1 the first day and 51 m 3 s -1 the second day) and February 2015 (50 m 3 s -1 ). The highest flow rates, of 120 m 3 s -1 , were observed during the flood of May 2016. The highest turbidity and SPM content were observed during the first high flow of the season on 5 November 2014 (109 NTU and 122 mg L -1 ) and during the flood of May 2016 (94 NTU and 90 mg L -1 ). The POC ranged between 3 and 12.5 mg g -1 , with the highest value recorded during the low flow event of May 2014. DOC varied between 4 and 11 mg L -1 , with the highest values observed in May 2015 and on 5 November 2014. Concerning the grain size distribution of raw waters, the median diameter D50 varied only slightly, from 5 to 15 µm, over the reported campaigns. The lowest D50 value (≈ 5 µm) was measured in February 2015 during a high flow event (Table 1 and Fig 2). The measured particle size distributions covered relatively narrow ranges, from 1.5 to 102 µm, and increases of the flow regime resulted in a clear increase of particle loading (SPM content) with no strong shift in particle size.
PAC concentration and distribution in SPM FT and SPM CFC
Table 2 displays the contents in PAHs, O-PACs and N-PACs and the main characteristics of their distributions according to the sampling method (CFC or FT) and to the sampling dates and locations. The comparison of the 16 PAH concentrations in SPM FT and SPM CFC revealed a contrasting effect of the separation techniques.
When the whole data set is considered, the sum of the 16 PAH concentrations in SPM CFC ranged between 2 and 27.7 µg g -1 with a median value of 4 µg g -1 . Despite a high value measured at the JOAB site in May 2014, the range of variation was rather narrow, the 1 st and 3 rd quartiles being at 3.6 and 5.3 µg g -1 respectively (Fig 3a). For SPM FT samples, the PAH concentrations varied between 1.3 and 39 µg g -1 with a median value of 18.4 µg g -1 , but a higher dispersion of the PAH concentrations was observed, the 1 st and 3 rd quartiles being at 7.2 and 3.4 µg g -1 respectively (Fig. 3b). For all samples, O-PAC and N-PAC concentrations were much lower than PAHs, accounting for 10 to 30% of the total PACs except at the BETH site in May 2014. The differences in O-PAC concentrations between SPM FT and SPM CFC were also less contrasted: they were in a very close range, between 0 and 5.4 µg g -1 and between 0.1 and 3.8 µg g -1 in SPM FT and SPM CFC respectively. However, as observed for PAHs, the dispersion of the O-PAC concentrations was higher in SPM FT than in SPM CFC (Fig 3c and 3d).
The discrepancies in PAC concentrations between SPM FT and SPM CFC appeared more clearly when the sampling campaigns were distinguished. The ratios of the PAH concentrations in SPM FT over those in SPM CFC (Fig. 4a) and of the polar PAC concentrations in SPM FT over those in SPM CFC (Fig. 4b) were calculated for each sample in order to highlight the differences in concentration according to the separation method and the sampling campaign. The whole campaign of May 2015, JOAB in May 2014 and BARB in November 2014 provided comparable PAH concentrations in SPM FT and SPM CFC , with a ratio close to 1. However, for all the samples of February 2015 and May-June 2016, for BETH in May 2014 and for JOAB and BETH in November 2014, the PAH concentrations in SPM FT were two to eleven times higher than in SPM CFC (Fig. 4a).
The comparison of polar PAC concentrations in SPM FT and SPM CFC also revealed discrepancies between the two sampling methods (Fig 4b). In February 2015 and May-June 2016, polar PACs were six to thirty times more concentrated in SPM FT than in SPM CFC . In May 2014, polar PAC concentrations at BETH were fifteen times higher in SPM FT than in SPM CFC .
The distribution of individual PAHs was also strongly and diversely affected by the sampling method. In SPM CFC , the 4 to 6 ring PAHs were easily detected and well represented in the distribution, accounting for 50 to 90% of all PAHs (except in May 2014), even though their abundance varied according to the sampling campaign (Table 2). In SPM FT , 2 to 3 ring PAHs were systematically more represented than in SPM CFC and accounted for 40 to 70% of the total PAH concentration, except during the November 2014 and May 2015 campaigns. The ratio of each individual PAH concentration in SPM FT over its concentration in SPM CFC was plotted against the log K ow of each PAH for all samples (Fig. 5). The ratio is close to 1 for the PAHs with log K ow higher than 5.5, having more than 4 aromatic rings, whereas it can vary from 0.5 to 38 for PAHs with 2 to 4 aromatic rings (log K ow < 5.2). The largest differences were observed in February 2015 and May-June 2016 and, to a lesser extent, in November 2014, May 2014 and October 2015. Thus, it appeared that the 2 to 3 ring PAHs and, to a lesser extent, the 4-ring PAHs were the molecules most affected by the sampling method.
In the same way, whenever a significant difference in O-PAC concentration was observed between the two sampling methods (February 2015 and May-June 2016), it could be attributed to a higher concentration of low molecular weight, three-ring O-PACs, mainly dibenzofuran, fluorenone and anthraquinone (Table 2).
Values of common PAH molecular ratios were compared (Table 2 and Fig. 6). Only the ratios based on 3- and 4-ring PAHs could be calculated in SPM FT and compared to SPM CFC . Whatever the sampling method, the values of Flt/(Flt+Pyr) were found within a rather narrow range of 0.5-0.66, assigned to pyrogenic inputs. The values of Ant/(Ant+Phe) ranged between 0.07 and 0.37 in SPM CFC , placing most of the samples in the pyrogenic domain and showing a variation in the contribution of these compounds according to the hydrology. Except for October 2015, the Ant/(Ant+Phe) ratios in SPM FT , ranging between 0.06 and 0.17, were systematically lower than in the equivalent SPM CFC , suggesting an influence of petrogenic PAHs.
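These diagnostic ratios are simple concentration ratios; the sketch below uses hypothetical concentrations, and the 0.5 and 0.1 boundaries follow the Yunker et al. (2002) convention cited for Fig. 6.

```python
def diagnostic_ratios(conc):
    """Compute two PAH source-diagnostic ratios from a dict of concentrations (µg/g)."""
    flt_ratio = conc["Flt"] / (conc["Flt"] + conc["Pyr"])
    ant_ratio = conc["Ant"] / (conc["Ant"] + conc["Phe"])
    return flt_ratio, ant_ratio

# Hypothetical sample; values are illustrative only.
sample = {"Flt": 0.60, "Pyr": 0.45, "Ant": 0.05, "Phe": 0.40}
flt_r, ant_r = diagnostic_ratios(sample)
# Flt/(Flt+Pyr) > 0.5 and Ant/(Ant+Phe) > 0.1 are commonly read as
# combustion-derived (pyrogenic) inputs.
print(round(flt_r, 2), round(ant_r, 2))   # 0.57 0.11
```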
PACs in the filtered SPM of the inlet and outlet waters of the CFC
The analyses of the matter collected by filtration of the CFC inlet waters (SPM FT In-CFC ) and of the matter collected by filtration of the CFC outlet waters (SPM FT Out-CFC ) allowed a better understanding of the partitioning of SPM by the CFC and then by the filtration process. This test was carried out at AUB, RICH and BETH in May and June 2016. The quantification of the SPM collected by filtration of the inlet and outlet waters showed that the CFC recovered 80% of the SPM contained in the inlet waters (Table 3). In the three tests, as already described in the previous paragraphs, the PAH concentrations were six to eleven times higher in SPM FT than in SPM CFC . The PAH concentration in the residual SPM collected by filtration of the CFC outlet waters (SPM FT Out-CFC ) was as high as, or even twice as high as, in the inlet water. The PAH distributions displayed in Figure 7 show that the fine matter recovered in the outlet waters contributes largely to the total SPM collected on the filters.
Discussion
Our results show that the two SPM collection methods strongly influence not only the concentration but also the distribution of PACs. PACs in SPM CFC were found in a narrow range of concentrations, independently of the sampling location and the hydrological situation, and the PAH distributions were dominated by 4 to 6 ring PAHs. On the contrary, in SPM FT , the spread of PAH concentrations was much larger, and the PAH distribution was dominated by low molecular weight compounds whenever a noticeable discrepancy with SPM CFC was observed.
From results reported in the literature, a non-exhaustive inventory of PAH concentrations and distributions according to the SPM collection method, regardless of the spatial and hydrological context, is summarized in Table 4. This inventory shows that the concentrations remain in a relatively narrow range (the maximum concentration does not exceed five times the minimum concentration) when the SPM collection method is CFC [START_REF] Wölz | Impact of contaminants bound to suspended particulate matter in the context of flood events[END_REF][START_REF] Meur | Spatial and temporal variations of Particulate Organic Matter from Moselle River and tributaries: A multimolecular investigation[END_REF], sediment traps [START_REF] Zhang | Size distributions of hydrocarbons in suspended particles from the Yellow River[END_REF][START_REF] Chiffre | PAH occurrence in chalk river systems from the Jura region (France). Pertinence of suspended particulate matter and sediment as matrices for river quality monitoring[END_REF] or a pressure-enhanced filtration system [START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF][START_REF] Ko | Seasonal and annual loads of hydrophobic organic contaminants from the Susquehanna River basin to the Chesapeake Bay[END_REF]. When the SPM are collected by filtration, the PAH concentration range can be much more widely spread, by a factor of one to 40 [START_REF] Deng | Distribution and loadings of polycyclic aromatic hydrocarbons in the Xijiang River in Guangdong, South China[END_REF]; Guo et al., 2007; [START_REF] Luo | Impacts of particulate organic carbon and dissolved organic carbon on removal of polycyclic aromatic hydrocarbons, organochlorine pesticides, and nonylphenols in a wetland[END_REF][START_REF] Maioli | Distribution and sources of aliphatic and polycyclic aromatic hydrocarbons in suspended particulate matter in water from two Brazilian estuarine systems[END_REF][START_REF] Mitra | A preliminary assessment of polycyclic aromatic hydrocarbon distributions in the lower Mississippi River and Gulf of Mexico[END_REF][START_REF] Sun | Distribution of polycyclic aromatic hydrocarbons (PAHs) in Henan Reach of the Yellow River, Middle China[END_REF][START_REF] Zheng | Distribution and ecological risk assessment of polycyclic aromatic hydrocarbons in water, suspended particulate matter and sediment from Daliao River estuary and the adjacent area, China[END_REF]. One can argue that this variation obviously depends on the river and the hydrological situation. However, if we compare the PAH distributions, it clearly appears that whenever the sampling method is filtration, 2 to 3 ring PAHs can largely dominate the distribution, as indicated by the LMW/HMW ratios reported in Table 5. In SPM collected by CFC or sediment traps, the low molecular weight PAHs seldom dominate the distribution.
Several studies have reported that filtration retains colloidal organic matter and the associated organic or metallic contaminants, leading to an overestimation of the compounds associated with particulate matter. [START_REF] Bates | Collection of Suspended Particulate Matter for Hydrocarbon Analyses: Continuous Flow Centrifugation vs. Filtration[END_REF] compared centrifugation and filtration to collect particulate matter from wastewaters and riverine waters and observed systematically higher concentrations of aliphatic hydrocarbons in SPM collected by filtration, together with a lower proportion of dissolved hydrocarbons in the filtered water compared to the centrifuged one. They attributed this to the adsorption of dissolved and colloidal matter onto the glass-fiber filter and onto the matter retained on its surface. [START_REF] Morrison | Filtration Artifacts Caused by Overloading Membrane Filters[END_REF] showed that membrane clogging during the filtration of riverine waters induces a decline of dissolved cation concentrations in the filtered waters. Our results show variations by a factor of 2 to 9 for PAH contents and 2 to 30 for O-PAC contents when SPM are separated by filtration, which could be explained by the retention of colloidal and fine matter on the filters. [START_REF] Gomez-Gutierrez | Influence of water filtration on the determination of a wide range of dissolved contaminants at parts-per-trillion levels[END_REF] tested the adsorption of various organic compounds on glass-fiber filters in synthetic waters as a function of DOC and salinity. They showed an increase of the adsorption of the more hydrophobic PAHs (4 to 6 rings) with increasing DOC and salinity, but a lower adsorption of low molecular weight PAHs. In our natural waters, comparing the PAC concentrations and distributions, we observed the opposite trend, with low molecular weight PAHs being more concentrated in SPM FT than in SPM CFC . The higher concentrations in SPM FT cannot be related only to PAH adsorption on the filters but might be due to the retention of colloidal or fine particulate matter (a few microns), both organic and mineral, particularly enriched in low molecular weight PAHs. This hypothesis is strongly supported by the abundance of low molecular weight PACs in the matter not retained by the centrifuge but collected by filtration of the outlet waters. [START_REF] Countway | Polycyclic aromatic hydrocarbon (PAH) distributions and associations with organic matter in surface waters of the York River, {VA} Estuary[END_REF] showed that high molecular weight PAHs were rather associated with soot and particles from sediment resuspension, whereas more volatile PAHs were associated with autochthonous organic matter. [START_REF] Wang | Monthly variation and vertical distribution of parent and alkyl polycyclic aromatic hydrocarbons in estuarine water column: Role of suspended particulate matter[END_REF] also observed an enrichment in low molecular weight hydrocarbons in the finer grain-size fractions of their sediments. Surprisingly, these differences are not systematic and only occur in half of the collected samples. For the samples of February 2015 and May and June 2016, the largest differences between SPM FT and SPM CFC are concomitant with high flow and the finest grain size distribution of the SPM (D50 < 10 µm).
In those cases, filtration recovered most of the fine particles and colloids, highly concentrated in PACs (18 to 43 µg g -1 ), while field centrifugation collected coarser matter with much lower PAC concentrations (2 to 5.5 µg g -1 ). Previous studies observed similar trends in other contexts. [START_REF] Wang | Monthly variation and vertical distribution of parent and alkyl polycyclic aromatic hydrocarbons in estuarine water column: Role of suspended particulate matter[END_REF] showed that small-size SPM (0.7-3 µm) collected from estuarine and riverine waters were particularly enriched in PAHs compared to large-size SPM (>3 µm). [START_REF] El-Mufleh | Distribution of PAHs and trace metals in urban stormwater sediments: combination of density fractionation, mineralogy and microanalysis[END_REF] separated sediments from storm water infiltration basins into several density fractions and showed that the PAH amounts were 100 times higher in the lighter fractions than in the denser ones. In their study of colloids and SPM in rivers, [START_REF] Ran | Fractionation and composition of colloidal and suspended particulate materials in rivers[END_REF] showed the increasing content of organic carbon and ions with decreasing particle size and highlighted the importance of colloidal matter in the concentrations of micropollutants. However, in our data set, no significant correlation could be observed between the high amounts of PACs in SPM FT and global parameters such as particle grain size, water discharge, organic carbon content of the SPM, water conductivity or SPM amount. The comparison of PACs in SPM FT and SPM CFC evidenced the crucial role of colloidal and fine particulate matter in the transfer of PACs. The predominant contribution of fine and/or colloidal matter to SPM FT in the February 2015 and May-June 2016 campaigns revealed that this matter mainly transfers low molecular weight PACs, compared to the coarser particulate fraction collected as SPM CFC . In addition, the molecular ratios suggest a different origin for the PACs in the colloidal and fine matter, with a higher contribution of petroleum products. This suggests that distinct transfer paths of PACs coexist in this river: PACs associated with particulate matter, with a rather homogeneous molecular signature assigned to combustion and corresponding to diffuse pollution in the catchment, and PACs associated with colloidal and fine matter, with a more variable molecular signature that could be assigned to petrogenic contributions and could enter the river as point sources during specific hydrological events.
Conclusions
Filtration on glass fiber filters (0.7 µm), the most commonly used technique, is easy to handle, inexpensive and suited to any field context, and the separation between particulate and dissolved matter is based on particle size. In this study, we showed that this method may also collect colloidal and fine matter that can significantly affect the amount of PACs measured in the SPM fraction, inducing higher concentrations and distributions enriched in low molecular weight compounds. These differences were not systematic over the two-year period of our investigation in a small industrial river system. In contrast, the second sampling technique tested on the same samples, CFC, provided a large amount of SPM collected from large volumes of water (500 L), with PAC concentrations that were quite stable from one site to another and from one hydrological condition to another. The PAC distributions were dominated by medium to high molecular weight compounds, which allowed various diagnostic molecular ratios to be calculated that are easier to interpret than with FT, where the low abundance of HMW PACs limited the interpretation of molecular ratios. Although filtration presents numerous advantages for collecting SPM, one must be very careful in interpreting some variations that can also be attributed to the retention of colloidal and fine matter, enriched in low molecular weight PACs, especially during high flow events. On the other hand, this gives access to supplementary information on the nature of the PACs transported by fine and colloidal matter. Thus, depending on the sampling method, the evaluation of the PAC distribution between the dissolved and particulate phases can be appreciably different. These results show that the choice of an SPM collection method is fundamental to comply with the objectives defined for the monitoring of surface waters.
Analysis
The instrument used was an Agilent 6890N gas chromatograph equipped with a DB 5-MS column (60 m × 0.25 mm i.d. × 0.25 µm film thickness) coupled with an Agilent 5973 mass selective detector operating in single ion monitoring mode. The molecules were detected with a quadrupole analyzer following electron impact ionization. The temperature program was the following: from 70 o C to 130°C at 15°C min -1 , then from 130 o C to 315 o C at 3 o C min -1 , and then a 15 min hold at 315 o C. 1 µL of sample was injected in splitless mode at 300°C. The carrier gas was helium at a constant flow of 1.4 mL min -1 .
These distributions are representative of those observed in most of the SPM FT and SPM CFC samples. PAH distributions in SPM CFC are characterized by the abundance of 4 to 6-ring PAHs, whereas these compounds are in very low abundance in SPM FT and not detectable in SPM FT Out-CFC . Thus, the CFC retains SPM with low PAC concentrations made of high molecular weight compounds, while the SPM not retained by the CFC but collected by filtration of the outlet waters is highly concentrated in PACs, mainly phenanthrene, fluoranthene and pyrene. The PAH distributions in SPM FT In-CFC and SPM FT Out-CFC are very similar.
Fig. 1 Lower part of the Orne River catchment, showing the five selected sampling sites and the land cover and use (Map source: CORINE Land Cover, 2012).
Fig. 2 Grain size distribution deciles (d10, d50 and d90) of raw waters measured for the campaigns from November 2014 to May 2016.
Fig. 3 Box plots of ΣPAH and ΣO-PAC concentrations in SPM CFC (a and c) and in SPM FT (b and d) for all samples. The boundaries of the box indicate the 25th and 75th percentiles; the line within the box marks the median; the + is the mean value; and the values at the top and bottom of the box indicate the minimum and maximum of the distribution.
Fig. 4 Comparison of the ratios of PAH content in SPM FT over PAH content in SPM CFC (a) and of polar PAC content (11 O-PACs + 5 N-PACs) in SPM FT over polar PAC content in SPM CFC (b) for each sample of the campaigns.
Fig. 5 Ratios of individual PAH concentrations in SPM FT over their concentrations in SPM CFC plotted against the log K ow of these PAHs. Black circles represent the campaigns of November 2014, February 2015 and May-June 2016; white circles represent the other sampling campaigns.
Fig. 6 Ant/(Ant+Phe) vs Flt/(Flt+Pyr) diagnostic ratios calculated in SPM CFC and SPM FT . Dashed lines represent the limits of the petrogenic/pyrogenic domains after Yunker et al. (2002).
Fig. 7
Table 2 PAC concentrations in SPM CFC and SPM FT (µg g -1 dw) and molecular ratios of PACs. <LQ: under the limit of quantification.
Date May 2014 Nov 2014 Feb 2015 May 2015 Oct 2015 May 2016 June 2016
Site JOAB BETH BARB JOAB BETH AUB BARB BETH AUB JOAB BETH RICH AUB RICH AUB RICH BETH
PACs in SPM CFC
Σ 16PAHs (µg g -1 ) 27.69 3.99 6.07 5.38 5.25 2.54 3.72 5.08 2.45 3.77 3.36 3.74 5.17 7.03 2.01 4.20 3.56
Σ O-PACs (µg g-1 ) 2.66 1.09 0.91 0.91 0.68 0.19 0.12 0.42 0.51 0.96 0.95 1.1 2.48 3.76 0.09 0.00 0.13
Σ N-PACs (µg g -1 ) 0.43 0.9 0.09 0.2 0.06 0.03 <LQ 0.05 <LQ 0.04 0.03 0.04 0.10 0.30 0.18 0.10 0.12
Σ All PACs (µg g -1 ) 30.77 5.99 7.07 6.49 5.98 2.76 3.84 5.55 2.96 4.77 4.35 4.88 7.75 11.09 2.28 4.30 3.81
Ant/(Ant+Phe) 0.15 0.18 0.24 0.3 0.37 0.21 0.18 0.15 0.32 0.32 0.34 0.36 0.07 0.20 0.20
Flt/(Flt+Pyr) 0.66 0.6 0.57 0.56 0.56 0.57 0.58 0.62 0.54 0.56 0.55 0.56 0.53 0.51 0.56 0.56 0.56
BaA/(BaA+Ch) 0.52 0.53 0.53 0.54 0.56 0.57 0.57 0.5 0.52 0.52 0.53 0.49 0.49 0.46 0.50 0.50
IP/(IP+Bghi) 0.63 0.55 0.55 0.57 0.6 0.62 0.62 0.52 0.53 0.54 0.54 0.55 0.52 0.51 0.51 0.51
2-3 ring PAHs (%) 85% 50% 20% 19% 14% 19% 36% 43% 9% 13% 11% 12% 21% 29% 13% 14% 16%
3-ring O-PACs (%) 100% 46% 34% 31% 15% 42% 100% 74% 36% 22% 20% 20% 21% 29% 100% 100%
PACs in SPM FT
Σ 16PAHs (µg g -1 ) 31.24 19.78 7.18 10.26 10.97 23.53 25.37 34.43 1.3 1.34 1.77 2.58 23.77 13.64 18.44 25.98 39.06
Σ O-PACs (µg g-1 ) 3.25 0.89 0.94 0.97 0.72 2.55 3.36 3.73 0.09 0.19 <LQ 0.11 3.30 5.43 1.89 4.07 3.93
Σ N-PACs (µg g -1 ) 1.78 29.41 0.12 0.11 0.12 0.11 0.26 0.14 0.16 0.23 <LQ 0.45 <LQ <LQ <LQ <LQ 0.15
Σ All PACs (µg g -1 ) 36.27 50.08 8.24 11.34 11.81 26.2 28.98 38.3 1.55 1.77 1.77 3.14 27.07 19.07 20.33 30.05 43.14
Ant/(Ant+Phe) 0.07 0.15 0.1 0.13 0.12 0.15 0.11 0.06 0.17 0.09 0.11 0.09
Flt/(Flt+Pyr) 0.53 0.59 0.6 0.62 0.61 0.65 0.56 0.61 0.58 0.57 0.58 0.61 0.59 0.53 0.52
BaA/(BaA+Ch) 0.55 0.6 0.59 0.59 0.59 0.59 0.54 0.59
IP/(IP+Bghi) 0.55 0.57 0.62 0.65 0.56 0.61 0.6 0.57 0.52 0.56
2-3 ring PAHs (%) 57% 33% 28% 31% 34% 79% 74% 84% 7% 25% 28% 24% 70% 55% 49% 49% 40%
3-ring O-PACs (%) 100% 100% 52% 63% 90% 96% 96% 100% 100% 100% 100% 100% 100% 100% 89% 100%
Table 3 PAC concentrations (in µg g -1 ) in the SPM CFC and in the SPM collected by filtration of the waters entering the CFC (SPM FT In-CFC ) and of the waters collected at the outlet of the CFC (SPM FT Out-CFC ). This campaign was performed at AUB, RICH and BETH in May-June 2016.
Acknowledgements
The authors would like to thank Long-Term Ecosystem Research (LTER) France, Agence Nationale de la Recherche (ANR) project number ANR-14-CE01-0019, RésEAU LorLux and Region Lorraine through the research network of Zone Atelier Moselle (ZAM) for partially funding the work, the Syndicat de Valorisation des Eaux de l'Orne (SVEO) and the city of Moyeuvre for granting us access to the sampling sites. We thank ERASMUS MUNDUS for funding the PhD of M. Abuhelou. The final manuscript was also improved by the valuable suggestions of four reviewers.
halid: 01760483 | lang: en | domain: info.info-ti | year: 2017
url: https://theses.hal.science/tel-01760483/file/TH2017PESC1224.pdf

Clément
Bruno
Bahman, Mathieu, Lâmân, Laurent David
Keywords: Lidar, multispectral imagery, fusion, feature selection, supervised classification, energy minimization, regularization, forest stand delineation, tree species
First, I would like to acknowledge Valérie Gouet-
Abstract
Forest stands are a basic unit for statistical forest inventory and mapping. They are defined as (large) forested areas (e.g., larger than 2 ha) that are homogeneous in terms of tree species and age. Their precise delineation is usually carried out by human operators through visual analysis of very high resolution (VHR) images containing an infrared channel. This task is tedious and time-consuming, and therefore needs to be automated for more efficient change monitoring and database updating. A method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing a dominant species (i.e., more than 75% pure). This is indeed an important preliminary task for updating the forest land-cover database.
The method is adaptable to the data and to the landscape under study. It consists of four steps, each analyzed in depth, that make the most of the different remote sensing data sources through multi-level fusion of the VHR optical images and the airborne 3D lidar point cloud, together with the analysis of the geographic database (BD Forêt) describing the French forest. Multimodal features are first extracted and their relevance is assessed. These features are then crossed with an over-segmentation in order to obtain object-level features. The objects may be trees (obtained from the point cloud) or any other objects of similar size and/or shape. Because of the high number of features, a feature selection is then performed. It reduces computation time, improves discrimination, and allows the relevance of the extracted features and the complementarity of the remote sensing data to be assessed. An object-based supervised classification is then carried out with the Random Forest algorithm. Special attention is paid to the creation of the training set in order to cope with potential errors in the forest database. Finally, the classification result is post-processed in order to obtain homogeneous areas with smooth borders. This smoothing is performed globally over the image by minimizing an energy, in which additional constraints are proposed on top of the standard formulations to build the energy function. The problem is reformulated as a graphical model and solved with a graph-cut approach.
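The object-level classification step can be sketched as follows; this is a minimal illustration rather than the exact pipeline of the thesis, and the feature matrix, labels and parameter values are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: one row per over-segmentation object, with multimodal
# (lidar + spectral) features aggregated per object; labels are hypothetical
# dominant-species classes taken from the forest database for training objects.
rng = np.random.default_rng(0)
X_objects = rng.normal(size=(500, 40))     # 500 objects x 40 features
y_objects = rng.integers(0, 5, size=500)   # 5 hypothetical species classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_objects[:400], y_objects[:400])          # training objects
posteriors = clf.predict_proba(X_objects[400:])    # per-class probabilities,
# which can later serve as data terms in the energy-minimization step
```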
A detailed study of the different parts of the proposed processing chain was carried out. The experimental results show that the proposed method provides very satisfactory results in terms of stand labeling and delineation, even for spatially distant regions with different landscapes. The proposed method also makes it possible to assess the complementarity of the remote sensing data sources (namely lidar and VHR optical images). Several fusion schemes are also proposed, depending on the desired level of detail and on possible operational constraints (computation time, data availability).
Keywords: lidar, multispectral imagery, fusion, feature selection, supervised classification, energy minimization, regularization, forest stand delineation, tree species.
Extended summary
1.1 Introduction
Information extraction over forested areas, in particular at the stand level, is driven by two main objectives: statistical inventory and mapping. Forest stands are the basic units and can be defined in terms of tree species or tree maturity. From a remote sensing point of view, stand delineation is a segmentation problem. For statistical forest inventory, segmentation is useful to extract statistically meaningful sampling points and reliable attributes (basal area, dominant height, etc.) [START_REF] Means | Predicting forest stand characteristics with airborne scanning lidar[END_REF][START_REF] Kangas | Forest inventory: methodology and applications[END_REF]. For land-cover mapping, segmentation is very useful for updating forest databases [START_REF] Kim | Forest Type Mapping using Object-specific Texture Measures from Multispectral Ikonos Imagery: Segmentation Quality and Image Classification Issues[END_REF]. Most of the time, for reliability reasons, each area is manually interpreted by human operators using geospatial images of very high spatial resolution. This work is very time-consuming; moreover, in many countries, the large variety of forest tree species (around 20) makes the problem more difficult.
The use of remote sensing data for automatic forest analysis is increasingly widespread, in particular through the combined use of airborne lidar and optical imagery (very high resolution multispectral or hyperspectral imagery) (Torabzadeh et al., 2014).
A few works proposing automatic delineation of forest stands from remote sensing data exist. First, the delineation can be performed with a single remote sensing source. A stand delineation technique using hyperspectral images is proposed in [START_REF] Leckie | Stand delineation and composition estimation using semi-automated individual tree crown analysis[END_REF]. Trees are extracted and classified into 7 tree species (5 conifers, 1 deciduous and 1 unspecified) using a maximum-likelihood classifier.
A stand mapping method using low-density airborne lidar data is proposed in [START_REF] Koch | Airborne laser data for stand delineation and information extraction[END_REF]. It comprises several steps: feature extraction, feature rasterization, and classification of the raster. Forest stands are created by grouping neighboring cells of the same class. Then, only stands reaching a predefined minimum size are accepted. Small neighboring areas of different species that do not reach the minimum size are merged into a nearby forest stand.
The forest stand delineation proposed in [START_REF] Sullivan | Object-oriented classification of forest structure from light detection and ranging data for stand mapping[END_REF] also uses low-density airborne lidar, with a segmentation followed by a supervised classification.
Three features (canopy cover, stem density and mean height) are computed and rasterized. The segmentation is performed by region growing: spatially adjacent pixels are grouped into homogeneous objects or regions. A supervised classification of the segmented image is then carried out using a Bhattacharyya classifier in order to determine stand maturity (the labels correspond to stand maturity).
A forest stand delineation using high-density airborne lidar data is also proposed in [START_REF] Wu | ALS data based forest stand delineation with a coarse-to-fine segmentation approach[END_REF]. Three features are first extracted from the point cloud: a tree size indicator, a forest density index and a tree species indicator. A coarse forest stand delineation is then performed on the feature image using the Mean-Shift algorithm, with high parameter values so as to obtain coarse, under-segmented forest stands. A forest mask is then applied to the segmented image in order to retrieve coarse forest and non-forest stands. This step may create a few small isolated areas, which are merged with their nearest neighbor until their size exceeds a user-defined threshold. The forest stands are then refined, using superpixels generated from the three features instead of the original pixels of the 3-band image. The stand refinement is obtained through region growing. This method provides relatively large stands. Other methods based on the fusion of different types of remote sensing data have also been developed. Two segmentation methods are proposed in [START_REF] Leppänen | Automatic delineation of forest stands from LIDAR data[END_REF] for a forest composed of Scots pine, Norway spruce and deciduous trees. The first one is a segmentation of the crown height followed by an iterative region growing on a composite of lidar and color-infrared images. The second method proposes a hierarchical segmentation of the crown height. Each image object is connected both to the objects of the upper and of the lower image levels. This allows the final segments to be considered from the finest segmentation levels, such as the individual trees in the area. Lidar and multispectral data are analyzed at three levels in [START_REF] Tiede | Object-based semi automatic mapping of forest stands with Laser scanner and Multi-spectral data[END_REF]. The first level represents small objects (individual trees or small groups of trees) that can be differentiated according to spectral and structural characteristics using a rule-based classification. The second level corresponds to the stand; it is built using the same classification process as the previous level, referring to the small-scale objects of level 1. The third level is generated by merging objects of the same forest development stage into larger spatial units. This method produces a map of the forest development phase (the labels do not correspond to tree species). Since stands are the basic unit for statistical inventory, some segmentation methods have been developed for that purpose in [START_REF] Diedershagen | Automatic segmentation and characterisation of forest stand parameters using airborne lidar data, multispectral and fogis data[END_REF] and [START_REF] Hernando | Spatial and thematic assessment of objectbased forest stand delineation using an OFA-matrix[END_REF].
In view of the existing methods, it appears that there is no stand segmentation method, in terms of species, able to satisfactorily handle a large number of classes (>5). It also appears that working at the object level (usually the tree level) in order to discriminate tree species yields better stand segmentation results. Several tree-level species classification methods have been studied in [START_REF] Heinzel | Investigating multiple data sources for tree species classification in temperate forest and use for single tree delineation[END_REF], [START_REF] Leckie | Stand delineation and composition estimation using semi-automated individual tree crown analysis[END_REF] and [START_REF] Dalponte | Semisupervised SVM for individual tree crown species classification[END_REF]. However, these delineations are likely to be imprecise, which leads to inaccurate tree classifications. Nevertheless, tree-level species classification can be used for forest stand delineation. A method for stand segmentation in terms of species is proposed in this document. The method comprises three main steps. Features are extracted at the pixel and object levels, the objects being determined by an over-segmentation. A classification is performed at the object level, as it significantly improves the discrimination results (about 20% better than pixel-based classification). This classification is then regularized through an energy minimization. The solution of this regularization, obtained with a graph-cut method, produces homogeneous tree species areas with smooth borders.
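As a point of reference for the regularization step, a minimal unary-plus-Potts energy of the kind commonly minimized with graph cuts can be written as below; this is a generic baseline, and the thesis adds further constraints to this standard formulation.

```latex
\[
E(\ell) \;=\; \sum_{i \in \mathcal{O}} -\log P\!\left(\ell_i \mid \mathbf{x}_i\right)
\;+\; \gamma \sum_{(i,j) \in \mathcal{N}} \mathbf{1}\!\left[\ell_i \neq \ell_j\right]
\]
```

Here O is the set of objects (or pixels), x_i their features, P(ℓ_i | x_i) the posterior returned by the classifier, N the set of adjacent pairs, and γ a weight controlling the amount of smoothing; such energies are typically minimized approximately with α-expansion graph cuts.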
Méthode proposée

Extraction des attributs
L'extraction des attributs comporte trois étapes :
• Calcul et rastérisation des attributs lidar.
• Calcul des attributs spectraux.
• Extraction des objets (sur-segmentation) et création des images à l'objet.
Attributs lidar
Les attributs lidar nécessitent de prendre en compte un voisinage pour être cohérents. Pour chaque point lidar, 3 voisinages cylindriques, avec le même axe vertical, sont utilisés (rayons de 1 m, 3 m et 5 m, hauteur infinie). Le cylindre est la forme la plus pertinente car elle permet de prendre en compte toute la variabilité de hauteur des points. Trois rayons sont utilisés afin de gérer les différentes tailles des arbres. Tout d'abord, deux indices de végétation, $D_1$ et $D_2$, sont calculés : le premier est fondé sur le nombre de maxima locaux dans les voisinages et le second est lié au nombre de points hors-sol dans le voisinage (les points au sol ayant été déterminés précédemment par filtrage). $D_1$ et $D_2$ sont calculés comme suit :
$$D_1 = \sum_{r_1 \in \{1,3,5\}} \; \sum_{r_2 \in \{1,3,5\}} N^{t}_{r_1,r_2}, \qquad (1.1)$$

$$D_2 = \frac{1}{3} \sum_{r \in \{1,3,5\}} \frac{N^{s}_{r}}{N^{tot}_{r}}, \qquad (1.2)$$
où $N^{t}_{r_1,r_2}$ est le nombre de maxima locaux extraits d'un filtre maximal de rayon $r_1$ dans le voisinage cylindrique de rayon $r_2$. $N^{s}_{r}$ est le nombre de points classés comme "sol" dans le voisinage de rayon $r$ et $N^{tot}_{r}$ est le nombre total de points dans le voisinage de rayon $r$.
La dispersion $S$ et la planarité $P$ sont aussi calculées en suivant la formulation de [START_REF] Weinmann | Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers[END_REF] :
$$S = \frac{1}{3} \sum_{r \in \{1,3,5\}} \frac{\lambda_{3,r}}{\lambda_{1,r}}, \qquad (1.3)$$

$$P = \frac{1}{3} \sum_{r \in \{1,3,5\}} 2 \times (\lambda_{2,r} - \lambda_{3,r}), \qquad (1.4)$$

où $\lambda_{1,r} \geq \lambda_{2,r} \geq \lambda_{3,r}$ sont les valeurs propres de la matrice de covariance des points dans le voisinage cylindrique de rayon $r$. Elles sont obtenues par une simple analyse en composantes principales.
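À titre d'illustration, l'esquisse Python ci-dessous (hypothétique, reposant uniquement sur numpy) montre comment ces deux attributs peuvent être obtenus à partir des valeurs propres de la matrice de covariance d'un voisinage ; le format d'entrée (un tableau N × 3 par rayon) et les noms de fonctions sont des hypothèses, et non l'implémentation exacte de la thèse.

```python
import numpy as np

def valeurs_propres_voisinage(points_xyz):
    """Valeurs propres (ordre decroissant) de la matrice de covariance
    d'un voisinage de points lidar fourni comme tableau N x 3."""
    cov = np.cov(points_xyz.T)          # matrice de covariance 3 x 3
    vals = np.linalg.eigvalsh(cov)      # valeurs propres en ordre croissant
    return vals[::-1]                   # lambda_1 >= lambda_2 >= lambda_3

def dispersion_planarite(voisinages):
    """Moyenne la dispersion S et la planarite P sur les 3 voisinages
    cylindriques (rayons 1, 3 et 5 m), comme dans les equations (1.3)-(1.4)."""
    S, P = 0.0, 0.0
    for pts in voisinages:              # un tableau N x 3 par rayon
        l1, l2, l3 = valeurs_propres_voisinage(pts)
        S += l3 / l1
        P += 2.0 * (l2 - l3)
    return S / 3.0, P / 3.0
```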
Des attributs statistiques, reconnus pour être pertinents pour la classification des espèces [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF] (Torabzadeh et al., 2015), sont aussi calculés. Pour chaque point lidar, les 3 mêmes voisinages cylindriques sont utilisés. Deux informations du lidar, la hauteur et l'intensité, sont utilisées afin de dériver des attributs statistiques. Un attribut statistique $f_d$, dérivé d'une information $f_o$ (hauteur ou intensité), est obtenu comme suit :
$$f_d = \frac{1}{3} \sum_{r \in \{1,3,5\}} f_s\left(p_{r,f_o}\right), \qquad (1.5)$$
où $f_s$ est une fonction statistique (minimum ; maximum ; moyenne ; médiane ; écart-type ; déviation absolue médiane d'une médiane (medADmed) ; déviation absolue moyenne d'une médiane (meanADmed) ; skewness ; kurtosis ; 10ème, 20ème, 30ème, 40ème, 50ème, 60ème, 70ème, 80ème, 90ème et 95ème centiles), et $p_{r,f_o}$ un vecteur contenant les valeurs de l'information $f_o$ des points dans le cylindre de rayon $r$. Toutes ces fonctions statistiques sont utilisées pour la hauteur. Seule la moyenne est utilisée pour l'intensité : il est difficile de savoir si le capteur a été correctement calibré et une correction des valeurs d'intensité dans la canopée n'a pas encore été proposée.
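L'esquisse suivante (hypothétique, avec un sous-ensemble des fonctions statistiques citées) illustre le calcul d'attributs statistiques de hauteur selon l'équation (1.5), en moyennant chaque statistique sur les trois voisinages cylindriques.

```python
import numpy as np

# Sous-ensemble des fonctions statistiques appliquees aux hauteurs.
FONCTIONS = {
    "minimum": np.min,
    "maximum": np.max,
    "moyenne": np.mean,
    "mediane": np.median,
    "ecart_type": np.std,
    "centile_90": lambda v: np.percentile(v, 90),
}

def attributs_statistiques(hauteurs_par_rayon):
    """hauteurs_par_rayon : liste de 3 tableaux 1D (rayons 1, 3 et 5 m)
    contenant les hauteurs des points du voisinage cylindrique."""
    attributs = {}
    for nom, f in FONCTIONS.items():
        # moyenne de la statistique sur les 3 voisinages (equation 1.5)
        attributs[nom] = np.mean([f(h) for h in hauteurs_par_rayon])
    return attributs
```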
24 attributs sont calculés au cours de cette étape : 2 liés à la densité de végétation, 2 liés à la distribution 3D locale du nuage de points et 20 attributs statistiques.
Les 24 attributs sont rastérisés à la même résolution spatiale que l'image multispectrale, en utilisant la méthode proposée dans [START_REF] Khosravipour | Generating pitfree canopy height models from airborne lidar[END_REF]. Cette méthode est intéressante car elle produit des images lisses, qui permettent d'obtenir de meilleurs résultats pour la classification et la régularisation (Li et al., 2013a). Le modèle numérique de surface (MNS) est aussi obtenu avec cette méthode, à la même résolution spatiale, à partir d'un modèle numérique de terrain (MNT) précédemment obtenu en filtrant le nuage de points. Le MNS est très important car il permet de déterminer la hauteur par rapport au sol et constitue un attribut très discriminant pour la classification [START_REF] Mallet | Relevance assessment of fullwaveform lidar data for urban area classification[END_REF][START_REF] Weinmann | Reconstruction and Analysis of 3D Scenes[END_REF]. Au total, 25 attributs lidar sont calculés.
Attributs spectraux
Les 4 bandes spectrales sont conservées et considérées comme des attributs spectraux. Trois indices de végétation pertinents, le NDVI, le DVI et le RVI, sont calculés car ils peuvent fournir plus d'information que les bandes originales seules [START_REF] Zargar | A review of drought indices[END_REF]. Comme pour les attributs lidar, des attributs statistiques sont calculés pour chaque bande et chaque indice de végétation en utilisant l'équation 1.5 (3 voisinages circulaires de rayons de 1 m, 3 m et 5 m). D'autres fonctions statistiques sont utilisées (minimum ; maximum ; moyenne ; médiane ; écart-type ; déviation absolue moyenne d'une médiane (meanADmed) ; déviation absolue moyenne d'une moyenne (meanADmean) ; déviation absolue médiane d'une médiane (medADmed) ; déviation absolue médiane d'une moyenne (medADmean)). Au total, 70 attributs spectraux sont calculés.
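Ces trois indices reposent sur les bandes rouge (R) et proche infrarouge (PIR) ; l'esquisse ci-dessous en donne une implémentation minimale avec les définitions usuelles (NDVI = (PIR − R)/(PIR + R), DVI = PIR − R, RVI = PIR/R), qui peuvent différer légèrement de celles retenues dans la thèse.

```python
import numpy as np

def indices_vegetation(rouge, pir):
    """Calcule les trois indices de vegetation a partir des bandes rouge (R)
    et proche infrarouge (PIR), fournies comme tableaux 2D de reflectance."""
    rouge = rouge.astype(float)
    pir = pir.astype(float)
    ndvi = (pir - rouge) / (pir + rouge + 1e-9)   # Normalized Difference Vegetation Index
    dvi = pir - rouge                             # Difference Vegetation Index
    rvi = pir / (rouge + 1e-9)                    # Ratio Vegetation Index
    return ndvi, dvi, rvi
```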
Extraction d'objets
L'extraction d'objets est une étape importante permettant d'améliorer les résultats de la classification. Cependant, les objets peuvent être extraits de différentes manières : il est possible d'effectuer une simple sur-segmentation sur l'un des 95 attributs disponibles (Watershed), d'utiliser des algorithmes plus avancés (SLIC, Quickshift, etc.) ou bien d'extraire les arbres directement du nuage de points. Il apparaît que le choix de la méthode de sur-segmentation n'impacte que très peu les résultats finaux. La sur-segmentation est effectuée sur le MNS. Une fois la sur-segmentation effectuée, les attributs sont moyennés sur les objets : la valeur $v_t$ d'un pixel appartenant à l'objet $t$ est :
$$v_t = \frac{1}{N_t} \sum_{p \in t} v_p, \qquad (1.6)$$
où $N_t$ est le nombre de pixels dans l'objet $t$, et $v_p$ est la valeur du pixel $p$.
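À titre d'illustration, l'équation (1.6) peut se traduire par la fonction suivante (esquisse hypothétique), qui moyenne une image d'attribut sur les objets issus de la sur-segmentation.

```python
import numpy as np

def moyenne_par_objet(attribut, etiquettes_objets):
    """Remplace la valeur de chaque pixel par la moyenne de son objet (eq. 1.6).
    attribut : image 2D d'un attribut ; etiquettes_objets : image 2D d'entiers
    issue de la sur-segmentation (un entier par objet)."""
    sortie = np.zeros_like(attribut, dtype=float)
    for t in np.unique(etiquettes_objets):
        masque = etiquettes_objets == t
        sortie[masque] = attribut[masque].mean()
    return sortie
```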
Classification
La classification est composée de deux étapes : tout d'abord, le nombre d'attributs est réduit en ne sélectionnant que les plus pertinents, puis la classification est effectuée sur les attributs sélectionnés.
Sélection d'attributs
En raison du grand nombre d'attributs disponibles, une étape de sélection d'attributs doit être mise en place. La sélection d'attributs comporte deux étapes : la première détermine le nombre d'attributs à sélectionner, la seconde sélectionne les attributs eux-mêmes. L'algorithme SFFS (Sequential Forward Floating Search) [START_REF] Pudil | Floating search methods in feature selection[END_REF] est utilisé pour les deux étapes. Cet algorithme présente deux avantages : (i) il peut être utilisé avec plusieurs scores de classification (dans cette étude, le coefficient Kappa), (ii) il permet d'accéder à l'évolution du score de classification en fonction du nombre d'attributs sélectionnés. L'algorithme SFFS sélectionne les $p$ attributs qui maximisent le score de sélection d'attributs (le coefficient Kappa).
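L'esquisse ci-dessous illustre le principe d'une sélection séquentielle ascendante guidée par le coefficient Kappa ; il s'agit d'une version simplifiée (sans l'étape « flottante » de retrait propre au SFFS) donnée à titre indicatif, et non de l'implémentation utilisée dans la thèse.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict

def selection_ascendante(X, y, n_attributs=20):
    """Selection sequentielle ascendante simplifiee, guidee par le Kappa.
    X : tableau (n_objets, n_attributs_total) ; y : etiquettes d'especes."""
    restants = list(range(X.shape[1]))
    selection = []
    while len(selection) < n_attributs:
        scores = []
        for j in restants:
            cols = selection + [j]
            pred = cross_val_predict(RandomForestClassifier(n_estimators=100),
                                     X[:, cols], y, cv=3)
            scores.append(cohen_kappa_score(y, pred))
        meilleur = restants[int(np.argmax(scores))]   # attribut maximisant le Kappa
        selection.append(meilleur)
        restants.remove(meilleur)
    return selection
```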
Classification
Un classificateur supervisé est utilisé afin de discriminer les essences forestières fournies par une base de données (BD) de couverture forestière existante. La classification est obtenue à l'aide de l'algorithme des forêts aléatoires. Cette méthode de classification s'est révélée pertinente dans la littérature [START_REF] Belgiu | Random Forest in remote sensing: A review of applications and future directions[END_REF] ainsi que dans une étude précédente la comparant aux Séparateurs à Vaste Marge (SVM) [START_REF] Dechesne | Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery[END_REF], car elle fournit des résultats similaires tout en étant plus rapide. Les résultats de la classification sont (i) une carte de labels et (ii) une carte de probabilités (probabilités de chaque classe pour chaque pixel). Cette carte de probabilités est nécessaire pour l'étape de régularisation suivante.
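Une mise en œuvre minimale de cette étape avec scikit-learn pourrait ressembler à l'esquisse suivante (les noms de variables et les hyperparamètres sont des hypothèses) : la forêt aléatoire est entraînée sur les objets étiquetés par la BD, puis fournit la carte de labels et la carte de probabilités utilisées par la régularisation.

```python
from sklearn.ensemble import RandomForestClassifier

def classification_objets(X_train, y_train, X_tous):
    """Entraine une foret aleatoire sur les objets etiquetes par la BD Foret,
    puis renvoie les labels et les probabilites pour l'ensemble des objets."""
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
    rf.fit(X_train, y_train)
    labels = rf.predict(X_tous)          # etiquette la plus probable par objet
    probas = rf.predict_proba(X_tous)    # probabilites par classe, pour le terme d'attache
    return labels, probas
```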
Régularisation
La classification peut ne pas être suffisante pour obtenir des zones homogènes avec des frontières lisses. Par conséquent, afin de s'adapter au modèle de peuplement, la régularisation de la classification au niveau des pixels est nécessaire. Elle peut être effectuée de manière locale (par filtrage ou par relaxation probabiliste). Cependant, une régularisation globale par minimisation d'énergie reste la solution la plus adaptée. De plus, une telle formulation permet d'insérer des contraintes supplémentaires pour une délimitation plus spécifique des peuplements forestiers.
Formulation de l'énergie
Le modèle énergétique proposé repose sur un modèle graphique : le problème est modélisé par un graphe probabiliste prenant en compte les probabilités de classe P et la carte d'attributs. L'énergie à minimiser s'écrit :

$$E(C) = \sum_{u} E_{data}(C(u)) + \gamma \sum_{(u,v) \in N} E_{pairwise}(C(u), C(v)), \qquad (1.7)$$

avec $E_{data}(C(u)) = f(P(C(u)))$ et :

$$E_{pairwise}(C(u) = C(v)) = 0 ; \qquad E_{pairwise}(C(u) \neq C(v)) = V(u, v).$$

$N$ désigne la 8-connexité, et $P(C(u))$ est la probabilité qu'un pixel $u$ appartienne à la classe $C(u)$.
γ est le paramètre de régularisation permettant de contrôler l'influence des deux termes et donc le niveau d'homogénéité des segments.
E data est le terme d'attache aux données. E pairwise est le terme de régularisation, permettant de mesurer la différence entre les attributs du pixel u et les attributs de ses 8 voisins. L'énergie exprime à quel point les pixels sont bien classés et à quel point les attributs sont similaires. D'autres modèles de champs aléatoires conditionnels pourraient être envisagés pour l'expression de l'énergie [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF][START_REF] Volpi | Semantic segmentation of urban scenes by learning local class interactions[END_REF]Tuia et al., 2016). E pairwise pourrait être exprimée relativement à plus de pixels.
La fonction $f$ liée à $E_{data}$ la plus efficace (voir Équation 1.7) est :

$$f(x) = 1 - x, \quad \text{avec } x \in [0, 1]. \qquad (1.8)$$
Cette fonction permet de contrôler l'importance à donner au résultat de la classification. Une simple fonction linéaire permet de contrôler l'énergie : quand la probabilité est proche de zéro, l'énergie est maximale. Inversement, quand la probabilité est forte, l'énergie est minimale. Cette fonction permet de conserver ce terme dans $[0, 1]$ et ainsi de simplifier le paramétrage de γ. D'autres formulations peuvent être envisagées mais il est apparu que cette formulation produit les meilleurs résultats pour notre problème.
Le terme de régularisation est une mesure qui contrôle la valeur de l'énergie en fonction de la valeur des attributs des 8 voisins. Deux pixels de classes différentes, mais avec des valeurs d'attributs proches, sont plus susceptibles d'appartenir à la même classe que deux pixels de classes différentes avec des valeurs d'attributs éloignées. La valeur de l'énergie doit être proche de 1 lorsque les valeurs des attributs sont assez similaires et décroître à mesure de leur différence. Le terme de régularisation $V$ le plus efficace est exprimé comme suit :
$$V(u, v) = \frac{1}{n_{opt}} \sum_{i=1}^{n_{opt}} \exp\left(-\lambda_i \left| A_i(u) - A_i(v) \right| \right). \qquad (1.9)$$

$\forall i \in [1, n_{opt}]$, $\lambda_i \in [0, \infty[$, où $i$ est l'indice correspondant à l'attribut. $A_i(u)$ est la valeur du $i$-ème attribut du pixel $u$. $\lambda = [\lambda_1, \lambda_2, ..., \lambda_{n_{opt}}]$ est un vecteur de longueur égale au nombre d'attributs ($n_{opt}$). Ce vecteur assigne des poids différents aux attributs. Si $\lambda_i = 0$, l'attribut $i$ ne sera pas pris en compte dans le processus de régularisation. Quand $\lambda_i$ est important, une petite différence entre les attributs impactera beaucoup l'énergie. Ainsi, l'attribut $i$ aura un fort impact dans la régularisation. Les attributs étant de différents types (hauteur, réflectance, densité de végétation, etc.), il est donc important de garder les termes dans $[0, 1]$ pour chaque attribut, même s'ils ne sont pas initialement dans la même gamme de valeurs. Dans notre étude, tous les poids $\lambda_i$ sont fixés à 1, $\forall i$. D'autres formulations pour cette énergie peuvent être envisagées (un simple modèle de Potts par exemple).
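À titre d'illustration, les termes d'énergie tels que reconstruits ci-dessus peuvent s'écrire comme suit pour un pixel et une paire de voisins (esquisse hypothétique, reposant uniquement sur numpy).

```python
import numpy as np

def energie_data(probas_pixel, classe):
    """Terme d'attache aux donnees : f(P(C(u))) = 1 - P(C(u))  (eq. 1.8)."""
    return 1.0 - probas_pixel[classe]

def terme_regularisation(attributs_u, attributs_v, lambdas):
    """Terme de regularisation V(u, v) entre deux pixels voisins (eq. 1.9) :
    proche de 1 quand les attributs sont similaires, decroit avec leur ecart."""
    diff = np.abs(np.asarray(attributs_u) - np.asarray(attributs_v))
    return float(np.mean(np.exp(-np.asarray(lambdas) * diff)))
```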
Minimisation d'énergie
La minimisation de l'énergie est réalisée en utilisant des méthodes de coupe de graphe. L'algorithme de coupe de graphe utilisé est l'optimisation pseudo-booléenne quadratique (QPBO) avec α-expansion. Il s'agit d'une méthode de coupe de graphe très répandue qui résout efficacement les problèmes graphiques de minimisation d'énergie [START_REF] Kolmogorov | Minimizing non-submodular functions with graph cuts-a review[END_REF]. L'α-expansion est une technique permettant de traiter les problèmes multi-classes [START_REF] Kolmogorov | What energy functions can be minimized via graph cuts?[END_REF].
Lorsque γ = 0, le terme de régularisation n'a aucun effet dans la formulation énergétique ; la classe la plus probable est attribuée au pixel, conduisant au même résultat que la sortie de classification. Lorsque γ ≠ 0, la carte d'étiquettes résultante devient plus homogène, et les bords des segments sont plus lisses. Cependant, si γ est trop élevé, les petites zones ont tendance à être fusionnées dans des zones plus grandes, en supprimant une partie des informations utiles fournies par la classification.
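Pour illustrer le principe de cette minimisation sans recourir à une bibliothèque de coupe de graphe, l'esquisse suivante utilise une approximation locale très simple (de type ICM) : chaque pixel reçoit itérativement l'étiquette minimisant son énergie locale. Il ne s'agit donc pas de la méthode QPBO avec α-expansion employée ici, mais seulement d'un schéma de principe.

```python
import numpy as np

def regularisation_icm(probas, attributs, gamma, lambdas, n_iter=5):
    """Approximation locale (ICM) de la minimisation de l'energie.
    probas : H x W x K (probabilites par classe) ; attributs : H x W x D."""
    H, W, K = probas.shape
    labels = probas.argmax(axis=2)            # initialisation : classification brute
    voisins = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8-connexite
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                couts = 1.0 - probas[i, j]    # terme d'attache aux donnees (eq. 1.8)
                for di, dj in voisins:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        diff = np.abs(attributs[i, j] - attributs[ni, nj])
                        v = np.mean(np.exp(-lambdas * diff))     # eq. 1.9
                        # penalise un changement d'etiquette par rapport au voisin
                        couts += gamma * v * (np.arange(K) != labels[ni, nj])
                labels[i, j] = int(np.argmin(couts))
    return labels
```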
Données
Les zones d'étude se situent dans des forêts de différentes régions de France présentant des paysages variés. Elles offrent une importante diversité des espèces d'arbres en présence. Chaque zone comprend un grand nombre d'espèces (4-5 espèces par zone), permettant de tester la robustesse de la méthode. Les images multispectrales aéroportées ont été acquises par les caméras numériques IGN [START_REF] Souchon | A large format camera system for national mapping purposes[END_REF]. Elles sont composées de 4 bandes : 430-550 nm (bleu), 490-610 nm (vert), 600-720 nm (rouge) et 750-950 nm (proche infrarouge), avec une résolution spatiale de 0,5 m.
Les données lidar aéroportées ont été recueillies en utilisant un dispositif Optech 3100EA.
L'empreinte était de 0,8 m afin d'augmenter la probabilité d'atteindre le sol. La densité de points pour tous les échos varie de 2 à 4 points/m². Nos données multispectrales et lidar sont en adéquation avec les normes utilisées dans de nombreux pays à des fins de cartographie forestière à grande échelle [START_REF] White | Remote sensing technologies for enhancing forest inventories: A review[END_REF]. Les données ont été acquises en conditions foliaires, en mai et juin 2011 pour les images multispectrales et les données lidar, respectivement. Le recalage entre le nuage de points lidar et les images multispectrales VHR a été réalisé par l'IGN à partir de points de contrôle au sol. Il s'agit d'une procédure standard de l'agence de cartographie française qui présente des résultats comparables aux solutions professionnelles standard. La BD Forêt est composée de polygones 2D délimités par photo-interprétation. C'est une base de données géographique nationale française pour les forêts, librement accessible. Elle est utilisée à la fois pour entraîner le classificateur et pour évaluer les résultats. Seuls les polygones contenant au moins 75% d'une espèce donnée sont utilisés pour la classification (c'est le seuil qui définit quand un peuplement peut être attribué à un type de végétation unique et considéré comme "pur"). Les polygones de végétation (lande ligneuse et formation herbacée) sont également pris en compte pour la classification. Comme cette étude ne s'intéresse qu'aux espèces, la vérité terrain utilisée ne couvrira donc pas l'ensemble de la zone (les zones contenant du mélange ne seront pas prises en compte).
Résultats
Sélection d'attributs
L'algorithme SFFS permet de déterminer le nombre optimal d'attributs à sélectionner, et de les sélectionner. Le nombre optimal d'attributs à sélectionner est ici de 20. Ce nombre a été conservé pour les autres zones et les résultats montrent que ce transfert n'a aucun impact sur la précision finale. Une fois le nombre optimal d'attributs déterminé, la sélection a été effectuée 40 fois sur toutes les zones de test afin de récupérer les attributs les plus pertinents. En moyenne, 61% des attributs sélectionnés proviennent de l'information spectrale et 39% de l'information lidar. Ceci montre la complémentarité des deux données de télédétection.
Pour les informations spectrales, les attributs dérivés des bandes originales sont plus pertinents que les indices de végétation. L'attribut statistique le plus pertinent pour l'information spectrale est le minimum (17% de la sélection spectrale). Le maximum (12%), la médiane (11%), la moyenne (11%) et l'écart-type (10%) sont également pertinents.
Pour l'information lidar, l'attribut le plus pertinent est étonnamment l'intensité, sélectionnée dans chacune des 40 sélections (12% de la sélection lidar, 5% de la sélection totale). L'écart-type de la hauteur (8% de la sélection lidar), le maximum de la hauteur (7%) et les densités (5% et 6% de la sélection lidar) sont également pertinents.
Classification
Les résultats de la classification et leur impact sur la segmentation finale sont illustrés dans les figures correspondantes.
Régularisation
Les résultats finaux sont présentés par la figure 1.3. La précision globale montre que la méthode donne des résultats satisfaisants en termes de discrimination des espèces d'arbres, dans la gamme des résultats existants de la littérature pour le même nombre d'espèces [START_REF] Leckie | Stand delineation and composition estimation using semi-automated individual tree crown analysis[END_REF][START_REF] Leppänen | Automatic delineation of forest stands from LIDAR data[END_REF]. De plus, les résultats sont améliorés jusqu'à 15% grâce au lissage. Malgré des précisions très élevées, les résultats doivent être considérés avec précaution. Ils sont comparés à une BD qui comporte des défauts et ne couvre pas l'intégralité des zones d'étude.
L'analyse visuelle des résultats montre que les scores de segmentation peuvent en fait être considérés avec une marge de ±5%.
Le réglage de γ est également une étape importante : quand il est trop bas (par exemple < 5), certains petits segments non pertinents peuvent rester dans la segmentation finale, ce qui entraîne un score de segmentation faible. Cependant, cette sur-segmentation peut être utile pour l'inventaire forestier.
Conclusion
Une méthode en trois étapes pour la délimitation des peuplements forestiers, en termes d'espèce, a été proposée. La fusion des données lidar aéroportées et des images multispectrales produit des résultats très satisfaisants puisque les deux modalités de télédétection fournissent des observations complémentaires.
Analysis of forested areas
Forests are a core component of the planet's life. They are defined as large areas dominated by trees. Hundreds of other definitions of forest may be used all over the world, incorporating factors such as tree density, tree height, land use, legal standing and ecological function [START_REF] Schuck | Compilation of forestry terms and definitions[END_REF][START_REF] Achard | Vital forest graphics[END_REF].
Forests are commonly defined as "land spanning more than 0.5 hectares (ha) with trees higher than 5 meters and a canopy cover of more than 10 percent, or trees able to reach these thresholds in situ.
It does not include land that is predominantly under agricultural or urban land use" according to the Food and Agriculture Organization (FAO) [START_REF] Keenan | Dynamics of global forest area: results from the FAO Global Forest Resources Assessment 2015[END_REF]. Forests are the dominant terrestrial ecosystem of Earth, and are distributed across the globe [START_REF] Pan | The structure, distribution, and biomass of the world's forests[END_REF]. They cover about four billion hectares, or approximately 30% of the World's land area. Since forest composition varies strongly from one ecozone to another, the study at a fine level (e.g. species composition) of forested areas must be restricted to a single ecozone at a time.
Human society and forests influence each other in both positive and negative ways [START_REF] Vogt | Global societies and forest legacies creating today's forest landscapes[END_REF]. On one hand, human activities, including harvesting forest resources, can affect forest ecosystems. On the other hand, forests have three main contributions to humans: ecosystem services, tourist attraction and harvesting.
Ecosystem services. Forests provide ecosystem services. They are involved in the provisioning of clean drinking water and the decomposition of wastes. Forests account for 75% of the gross primary productivity of the Earth's biosphere, and contain 80% of the Earth's plant biomass [START_REF] Pan | The structure, distribution, and biomass of the world's forests[END_REF]. They also hold about 90% of terrestrial biodiversity [START_REF] Brooks | Global biodiversity conservation priorities[END_REF][START_REF] Wasiq | Sustaining forests: A development strategy[END_REF]. Forests are also beneficial for the environment; they capture and store CO 2 [START_REF] Fahey | Forest carbon storage: ecology, management, and policy[END_REF] (see Figure 1.2). About 45% of the total global carbon is held by forests. They also filter dust and microbial pollution of the air [START_REF] Smith | Air pollution and forests: interactions between air contaminants and forest ecosystems[END_REF]. Finally, they also play an important role in hydrological regulation and water purification [START_REF] Lemprière | The importance of forest sector adaptation to climate change[END_REF] (see Figure 1.2).

Harvesting/wood resources. Wood from trees has many uses. It is still widely used for fuel [START_REF] Sterrett | Alternative fuels and the environment[END_REF]. In this case, hardwood is preferred over softwood because it creates less smoke and burns longer. Wood is still an important construction material [START_REF] Ramage | The wood from the trees: The use of timber in construction[END_REF]: elm was used for the construction of wooden boats. In Europe, oak is still the preferred variety for all wood constructions, including beams, walls, doors, and floors [START_REF] Thelandersson | Timber engineering[END_REF]. A wider variety of woods is also used such as poplar, small-knotted pine, and Douglas fir. Wood is also needed in the paper industry since wood fibers are an important component of most papers. Finally, wood is also extensively used for furniture or for making tools or musical instruments.
Tourist attraction and recreational activities.
Forests serve as recreational attractions. In France, there are hundreds of long-distance footpaths (∼ 60000 km) through forests. Other activities such as rock climbing, mountain biking or adventure parks are mostly practiced in forests.
The evolution of forests needs to be monitored in order to efficiently exploit the forest resources in a sustainable way (Paris Agreement 2015). For example, France is a significant wood importer (∼ 25 million m³ per year), while the French forest is the third largest in Europe in terms of volume. It is therefore necessary to better manage and exploit the French wood stocks.
The assessment of these stocks is all the more difficult because a large part (about 75%) of the French forest is private, leading to a more complex management and exploitation. Furthermore, the stocks can be assessed at different levels (i.e. forest, stand/plantation, tree) with more or less accuracy.
In order to evaluate the forest resources, a precise mapping combined with accurate statistics of forests is therefore needed. Such statistics are already operational at a national level and widely employed for the evaluation of forest resources. However, a precise mapping would allow refining the evaluation of forest resources. Forests are complex structures [START_REF] Pommerening | Approaches to quantifying forest structures[END_REF], for which information is needed for management, exploitation and, more generally, for public and private policies. Such information can be the tree species or the tree maturity of the forest. There are two ways to extract such information from forests: field inventory or remote sensing. Field inventories are very expensive to set up and are adapted for statistics (considering a limited set of inventory sites), but only at a regional or national scale. Remote sensing is a more relevant way to obtain such information since it allows extracting it at large scale.
In order to meet these needs, two synergistic products could be produced: a statistical inventory and a forest mapping.
Remote sensing for forested areas
Remote sensing through automatic Earth Observation image analysis has been widely recognized as the most economic and feasible approach to derive land-cover information over large areas and for a large range of needs. The obvious advantage is that remote sensing techniques can provide LC information on different levels of details in a homogeneous and reliable way over large scales. They can also provide bio-geophysical variables and change information [START_REF] Hansen | High-resolution global maps of 21st-century forest cover change[END_REF], in addition to the current cover of the Earth surface.
The analysis of forested areas from a remote sensing point of view can be performed at three different levels: pixel (straightforward analysis), object (mainly trees), plots or stands. In statistical national forest inventory (NFI), an automated and accurate tree segmentation would simplify the extraction of tree level features (basal area, dominant tree height, etc., [START_REF] Means | Predicting forest stand characteristics with airborne scanning lidar[END_REF][START_REF] Kangas | Forest inventory: methodology and applications[END_REF]), since there is no straightforward way to obtain them. Two kinds of features can be extracted, the ones estimated directly from remote sensing data and the ones interpolated using allometric equations.
The tree level is not the most suitable level of analysis for forest studies at a national scale but should be preferred for local studies. The plots correspond to the level of analysis in fields inventories; statistics are derived at this level and interpolated at a larger scale. When a joint mapping and statistical reasoning is required (e.g., land-cover (LC) mapping and forest inventory (Tomppo et al., 2008)), forest stands remain the prevailing scale of analysis [START_REF] Means | Predicting forest stand characteristics with airborne scanning lidar[END_REF][START_REF] White | Remote sensing technologies for enhancing forest inventories: A review[END_REF]. A stand can be defined in many different ways in terms of homogeneity: tree specie, age, height, maturity. Its definition varies according to the countries and agencies.
From a remote sensing point of view, the delineation of the stands is a segmentation problem. Forest stands are preferred, since they allow to extract reliable and statistically meaningful features and to provide an input for multi-source statistical inventory. For land-cover mapping, this is highly helpful for forest database updating [START_REF] Kim | Forest Type Mapping using Object-specific Texture Measures from Multispectral Ikonos Imagery: Segmentation Quality and Image Classification Issues[END_REF], whether the labels of interest are vegetated areas (e.g., deciduous/evergreen/mixed/non-forested), or, even more precisely, the tree species.
• Manual delineation. To obtain such information, most of the time in national forestry inventory institutes, for reliability purposes, each area is manually interpreted by human operators with very high spatial resolution (VHR) geospatial images focusing on the infra-red channel [START_REF] Kangas | Forest inventory: methodology and applications[END_REF]. This work is extremely time consuming and subjective (Wulder et al., 2008b). Furthermore, in many countries, the wide variety of tree species (e.g., >20) significantly complicates the problem. This is all the more true as photo-interpretation may not always be sufficient (a consensus may not even be reached between experts), even in the case of few species (3-5).
• Automatic delineation.
The design of an automatic procedure based on remote sensing data would speed up and ease such a process. Additionally, the standard manual delineation procedure only takes into account the species, and few physical characteristics (alternatively height, age, stem density or crown closure). Instead, an automatic method could offer more flexibility, not being limited to a visual analysis, and could use characteristics extracted from complementary data sources rather than only Colored Infra-Red (CIR) ortho-images.
The use of remote sensing data for the automatic analysis of forests has been growing in the last 15 years, especially with the synergistic use of airborne laser scanning (ALS) and optical VHR imagery (multispectral imagery and hyperspectral imagery) (Torabzadeh et al., 2014;[START_REF] White | Remote sensing technologies for enhancing forest inventories: A review[END_REF]. Several countries have already integrated such remote sensing data sources in their operational pipeline for forest management [START_REF] Tokola | Remote Sensing Concepts and Their Applicability in REDD+ Monitoring[END_REF]Wulder et al., 2008a;[START_REF] Patenaude | Synthesis of remote sensing approaches for forest carbon estimation: reporting to the Kyoto Protocol[END_REF] and characterization. Furthermore, they can be employed for forest management. ALS provides a joint direct access to the vertical distribution of the trees and to the ground underneath [START_REF] Holmgren | Prediction of tree height, basal area and stem volume in forest stands using airborne laser scanning[END_REF]. Hyperspectral and multispectral optical images are particularly relevant for tree species classification: spectral and textural information from VHR images can allow a fine discrimination of many species [START_REF] Clark | Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales[END_REF]Franklin et al., 2000). Multispectral images are often preferred due to their higher availability, and higher spatial resolution [START_REF] Belward | Who launched what, when and why; trends in global landcover observation capacity from civilian earth observation satellites[END_REF]. Multispectral and hyperspectral images can be acquired from airplanes or satellites. Spaceborne sensors allow to capture large areas with a higher temporal rate but generally suffer from a lower spatial resolution, even if the gap decreases every year (see Table 1.1). For a better spatial resolution, airborne multispectral images are preferred since they also allow to extract more relevant texture features for tree species classification (Franklin et al., 2000). The airborne linear Lidar technology has been widely used for remote sensing tasks [START_REF] Lim | LiDAR remote sensing of forest structure[END_REF][START_REF] Shan | Topographic laser ranging and scanning: principles and processing[END_REF][START_REF] Vosselman | Airborne and terrestrial laser scanning[END_REF]. Lidar has been successfully employed for many forest applications (Ferraz et al., 2016b). The new Geiger mode lidar (Ullrich et al., 2016) and single photon lidar [START_REF] Viterbini | Single photon detection and timing system for a lidar experiment[END_REF] are also very promising, allowing a significantly higher point density with different angles at a higher altitude, enabling the coverage of larger areas at a better cost than classic Lidar systems. Employing such 3D data could have an important impact, especially on the studies of forested areas [START_REF] Jakubowski | Tradeoffs between lidar pulse density and forest measurement accuracy[END_REF][START_REF] Strunk | Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables[END_REF]. Similarly to hyperspectral Lidar [START_REF] Kaasalainen | Toward hyperspectral lidar: Measurement of spectral backscatter intensity with a supercontinuum laser source[END_REF], additional research is required to accurately assess that relevance on our scope. 
Synthetic Aperture Radar (SAR) is widely employed for the evaluation of biomass, especially in forested environment [START_REF] Le Toan | Relating forest biomass to SAR data[END_REF][START_REF] Beaudoin | Retrieval of forest biomass from SAR data[END_REF]. With its ability to penetrate the vegetation, SAR in P-band (0.3-1 GHz) allows to estimate efficiently the aboveground biomass. Thus, SAR can be employed in order to extract relevant information of forests but is not well adapted for the discrimination of the species.
Context of the thesis
In France, the study of forests mainly consists of mapping and inventory. It can also be envisaged for the assessment of biodiversity, or of the impact of forests on human behaviors or on climate, etc.
The forest inventory of IGN allows obtaining an estimation of the wood stock and the forestation rate at different scales (national, regional, departmental, see Figures 1.3 & 1.4). Statistics such as volume per hectare, deciduous volume or conifer volume can then be derived. The inventory is performed through extrapolation of field inventories. Forest mapping is also interesting for the understanding of forested areas. It is traditionally provided through a national forest land-cover (LC) database (DB) (see Figure 1.5). In France, it is manually interpreted by human operators using VHR CIR ortho-images. It assigns a vegetation type to each mapped patch of more than 5000 m². The nomenclature is composed of 32 classes based on hierarchical criteria such as pure stands of the main tree species of the French forest. The forest LC should be updated in a 10-year cycle.
Objectives
Currently, the forest LC DB is obtained through remote sensing (namely photo-interpretation), but it is a time-consuming activity. A framework should be developed to update it automatically using remote sensing data processing. Since the forest LC is available, it can be used as an input for training a supervised classification (Gressin et al., 2013a). However, the learning process should be carefully addressed (Gressin et al., 2014). Indeed, some areas might have changed (e.g., forest cuts). Furthermore, the database is generalized by design [START_REF] Smith | Database abstractions: aggregation and generalization[END_REF]. Indeed, forests are not perfectly homogeneous in terms of species and there can be many gaps in the canopy, also leading to a noisy classification. Such a classification would then not be sufficient to retrieve homogeneous patches similar to the forest LC. In order to retrieve homogeneous patches, the classification should be regularized using smoothing methods [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF]. Furthermore, an automatic framework considering more data sources than only CIR ortho-images would allow enriching the LC, i.e. retrieving homogeneous tree species stands that are also homogeneous in terms of height.
Strategy
Two remote sensing modalities are available for the mapping of forested areas at IGN: VHR optical images and lidar point clouds. VHR optical images are acquired at the national level for various needs of IGN, while lidar is mostly employed in order to derive an accurate Digital Surface Model, especially in forested areas, since it is the best solution to obtain it in such an environment.
VHR optical images
In this thesis, the VHR ortho-images employed have a spatial resolution of 50 cm. The ortho-images have 4 bands (red, green, blue, near infra-red) captured by the IGN digital cameras [START_REF] Souchon | A large format camera system for national mapping purposes[END_REF] that exhibit very high radiometric and geometric quality. Such VHR optical multispectral images are available over the whole of France. They are captured every 3 years and are one of the components of the RGE (a public service mission of the IGN, that aims at describing the national land cover in a precise, complete and homogeneous way).
Airborne Laser Scanning
IGN also acquires 3D point clouds with laser scanning devices (Leica ALS 60 or Optech ALTM 3100). The point density for all echoes ranges from 2 to 4 points/m². Forested areas and areas subject to flooding are the main areas flown. About 40000 km² are acquired each year, with Digital Terrain Model generation as the main purpose.
The registration between airborne lidar point clouds and VHR multispectral images was performed by IGN itself using ground control points, following a standard production procedure of the French mapping agency, since IGN operates both sensors and also has a strong expertise in data georeferencing.
The combination of these two data sources is very relevant for the study of forests. Indeed, optical imagery provides the major information about the tree species (spectral and texture), while Lidar gives information about the vertical structure of the forests. Furthermore, Lidar allows extracting consistent objects such as trees, which can be used in the stand segmentation process, even if delineated coarsely.
The fusion of these two modalities is a way to extract the most information in order to retrieve forest stands. The fusion can be performed at different levels. Three levels are frequently considered:
• Low level (or observation level): It corresponds to the fusion of the observations, in this case, the reflectance from the optical images and the coordinates of the lidar points. It is a straightforward fusion method that does not really extract information from the data. It is also simple way to validate the complementarity of the data.
• Medium level (or attribute level): It corresponds to the fusion of features, derived from both sources, and merged together. It also corresponds to the cooperative understanding of the data; a feature is derived on a modality and applied to the other (e.g. segmentation of the point cloud applied to images). In this process, all the information from both data sources is directly exploited. However, attention should be paid to the choice of the employed features since it can lead to poor classification results.
• High level (or decision level or late fusion): It corresponds to decision fusion. Each data source has been processed independently (e.g. classified) and the final decision is an optimal combination of the classifications and the input data. This level of fusion is very important since it allows to refine the results and only keep the best from the intermediate results.
In this work, the fusion is mainly performed at the medium and high levels.
Structure of the thesis
This work is divided in 6 chapters.
• Chapter 2 presents and discusses the different existing methods for stand segmentation. Since stand segmentation lies at the interplay between different kinds of image processing methods, these methods are also analyzed, as they are employed in this work for stand segmentation.
• Chapter 3 describes the proposed framework. It is composed of three steps. The first one is related to the extraction of features, it is composed of two core elements. Firstly an object extraction is carried out in order to derive features at the object level (medium level fusion for the cooperative understanding of the data). The desired objects aim to have a size similar to trees. Secondly, features (∼ 100) are extracted from the two remote sensing modalities. The second part deals with the object-based classification. Here, a special attention is paid to the design of a training set. A feature selection is also carried out since it allows to validate the complementarity of data sources (medium level fusion) while reducing computation times. Finally, Random Forest classification is performed on the selected features (medium level fusion, since the features are spectral-based and lidar-based). This classification is then refined with regularization methods.
• Chapter 4 presents the results of the different experiments that have been carried out. Since many options can be envisaged for obtaining a relevant result (e.g. features, training set design, regularization), a large variety of experiments have been proposed for the contribution assessment of the different steps of the proposed framework.
• Chapter 5 focuses on the last step of the proposed framework. It aims at regularizing the classification. This step corresponds to the high level fusion. Indeed, the supervised classification does not allow retrieving consistent forest stands (according to the forest LC DB). Thus, a regularization process refines the results in order to obtain homogeneous segments with smooth borders that are consistent with the forest LC DB. Such regularization can be performed using local or global methods, both having their advantages and drawbacks.
• Chapter 6 summarizes and analyzes the different levels of fusion proposed in the framework. From the different fusion schemes possible, experiments are proposed in order to define what can be the optimal fusion schemes.
• Conclusions are drawn in Chapter 7. Finally, perspectives are proposed so as to alleviate remaining issues.
Forests are complex areas; thus, the mapping of such environments requires the use of different image processing methods. Indeed, the extraction of "homogeneous" forest stands is at the interplay between different kinds of image processing methods (see Figure 2.1).
Several methods have already been proposed for forest stand segmentation, depending on the definition of a forest stand (Section 2.1). They involve different image processing algorithms which will be considered in detail. Segmentation algorithms (Section 2.2) can be employed for a fine or coarse delineation of the principal components of the forests. Classification is also very useful to discriminate the different elements of the forest and especially to detect tree species (Section 2.3). Furthermore, with the important number of features that can be derived from the original data, feature selection algorithms are mandatory in order to improve the results while decreasing the computational load and times (Section 2.4). Finally, smoothing methods can be employed as a post-processing step in order to obtain a better labeling configuration. In particular, global smoothing methods aim at the configuration corresponding to a minimum of an energy. Such energy minimization processes are used for a refinement of raw results (Section 2.5).
Stand segmentation
A forest stand is defined as a contiguous group of trees that is uniform in species composition, structure, age and/or height, spatial arrangement, site quality or condition, so as to distinguish it from adjacent groups of trees.
One should note that the literature remains heavily focused on individual tree extraction and tree species classification [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF][START_REF] Kandare | A new procedure for identifying single trees in understory layer using discrete LiDAR data[END_REF][START_REF] Véga | PTrees: a point-based approach to forest tree extraction from lidar data[END_REF][START_REF] Dalponte | Semisupervised SVM for individual tree crown species classification[END_REF], developing site-specific workflows with similar advantages, drawbacks, and classification performance. Some authors have focused on forest delineation [START_REF] Eysn | Forest delineation based on airborne LIDAR data[END_REF][START_REF] Wang | Forest delineation of aerial images with Gabor wavelets[END_REF][START_REF] Radoux | A quantitative assessment of boundaries in automated forest stand delineation using very high resolution imagery[END_REF], which most of the time does not convey information about the tree species and their spatial distribution. Forest stand delineation methods have been proposed but they generally remain very specific to the study area and commonly only provide a binary mask as final output. Consequently, no operational framework embedding the automatic analysis of remote sensing data has yet been proposed in the literature for forest stand segmentation at large scale [START_REF] Dechesne | Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery[END_REF].
Hence, in the large amount of literature in the field, only few papers focus on the issue of stand segmentation or delineation. They can be categorized with regard to the type of data processed as presented in the following sections.
Stand segmentation using VHR optical images
A stand delineation technique using VHR airborne superspectral imagery (0.6 m spatial resolution, 8 spectral bands ranging from 438.5 nm to 860.7 nm with an approximate bandwidth of 26 nm) is proposed in [START_REF] Leckie | Stand delineation and composition estimation using semi-automated individual tree crown analysis[END_REF]. The trees are extracted using a valley following approach and classified into 7 tree species (5 coniferous, 1 deciduous, and 1 non-specified) with a maximum likelihood classifier. The classification is performed at the object (i.e. tree) level using statistical features (mean, maximum, standard deviation) and textural features. A semi-automatic iterative clustering procedure is then introduced to generate the forest polygons. The method produces relevant forests stands and consider many tree species. It shows the usefulness of an object-based classification using statistical and textural features. However, since experiments have been conducted on a small area (330 m × 800 m), no strong conclusion can be drawn.
A hierarchical and multi-scale approach for the identification of stands is adopted in [START_REF] Hernando | Spatial and thematic assessment of objectbased forest stand delineation using an OFA-matrix[END_REF]. The data inputs were the 4 bands of an airborne 0.5 m orthoimage (Red, Green, Blue, and Near Infra-Red) allowing to derive the Normalized Difference Vegetation Index (NDVI). The stand mapping solution is based on the Object-Based Image Analysis (OBIA) concept. It is composed of two main phases in a cyclic process: first, segmentation, then classification. The first level consists in over-segmenting (using the multi-resolution segmentation algorithm from eCognition) the area of interest and performing fine-grained land cover classification. The second level aims to transfer the vegetation type provided by a land cover geodatabase in the stand polygons, already retrieved from another segmentation procedure. The multi-scale analysis appears to have a significant benefit on the stand labeling but the process remains highly heuristic and requires a correct definition of the stand while we consider it is an interleaved problem.
Following the work of (Wulder et al., 2008b) with IKONOS images, Quickbird-2 panchromatic images are used in [START_REF] Mora | Segment-constrained regression tree estimation of forest stand height from very high spatial resolution panchromatic imagery over a boreal environment[END_REF] to automatically delineate forest stands. A standard image segmentation technique from eCognition is used and the novelty mainly lies on the fact that its initial parameters are optimized with respect to NFI protocols. They show that meaningful stand heights can be derived, which are a critical input for various modeled inventory attributes.
The use of VHR optical images is very interesting since it is meaningful for tree species discrimination. Furthermore, statistical and textural features, which can be computed thanks to the high spatial resolution, allow a better discrimination of tree species. Finally, Object-Based Image Analysis (OBIA) is also possible and preferred in order to obtain better results.
Stand segmentation using lidar data
A seminal stand mapping method using low density (1-5 point/m 2 ) airborne lidar data is proposed in [START_REF] Koch | Airborne laser data for stand delineation and information extraction[END_REF]. It is composed of several steps of feature extraction, creation and raster-based classification of 15 forest types. Forest stands are created by grouping neighboring cells within each class. Then, only the stands with a pre-defined minimum size are accepted.
Neighboring small areas of different forest types that do not reach the minimum size are merged into an existing forest stand. The approach offers the advantage of detecting 15 forest types (deciduous/coniferous and maturity) that match very well with the ground truth, but to the detriment of simplicity: the flowchart has to be highly reconsidered to fit other stand specifications. Additionally, the tree species discrimination is not addressed.
The forest stand delineation proposed in [START_REF] Sullivan | Object-oriented classification of forest structure from light detection and ranging data for stand mapping[END_REF] also uses low density (3-5 point/m²) airborne lidar data, still coupling an object-oriented image segmentation and a supervised classification procedure implemented in FUSION. Three features are computed and rasterized, also with the FUSION software. The segmentation is performed using a region growing approach. Spatially adjacent pixels are grouped into homogeneous objects or regions of the image. Then, a supervised discrimination of the segmented image is performed using a Bhattacharyya classifier, in order to determine the maturity of the stands.
The method proposed in [START_REF] Eysn | Forest delineation based on airborne LIDAR data[END_REF] aims at generating a forest mask (forested area label only) using low density airborne lidar. A Canopy Height Model (CHM) with a spatial resolution of 1 m is derived. The positions and heights of single trees are determined from the CHM using a local maximum filter, based on a moving window approach. Only detected positions with a CHM height above 3 m are considered. The crown radii are estimated using an empirical function. Neighboring trees are connected using a Delaunay triangulation applied to the previously detected tree positions. The crown cover is then calculated using the crown areas of three neighboring trees and the area of their convex hull for each tree triple. The forest mask is derived from the canopy cover values. While this is not a genuine stand delineation method, this approach could be easily extended to a multi-class problem and enlightens the necessity of individual tree extraction, even with limited point densities, as a basis for the stand-level analysis.
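As an illustration of the two core steps of this approach, the hypothetical Python sketch below detects candidate tree tops on a CHM with a moving-window maximum filter (keeping only positions above 3 m) and connects them with a Delaunay triangulation; the function name, window size and array conventions are assumptions, not the implementation of the cited work.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.spatial import Delaunay

def tree_tops_and_triangulation(chm, window=5, min_height=3.0):
    """Detect candidate tree tops on a Canopy Height Model (2D array, metres)
    and connect them with a Delaunay triangulation (neighbouring tree triples)."""
    # a pixel is a local maximum if it equals the maximum of its moving window
    local_max = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    rows, cols = np.nonzero(local_max)
    tops = np.column_stack([cols, rows])      # (x, y) pixel coordinates of tree tops
    tri = Delaunay(tops)                      # each simplex is a tree triple
    return tops, tri.simplices
```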
A forest stand delineation also based on airborne lidar data is proposed in [START_REF] Wu | ALS data based forest stand delineation with a coarse-to-fine segmentation approach[END_REF]. Three features are first directly extracted from the point cloud (related to tree height, density and shape).
A coarse forest stand delineation is then performed on the feature image using the unsupervised Mean-Shift algorithm, in order to obtain under-segmented raw forest stands. A forest mask is then applied to the segmented image in order to distinguish forest and non-forest raw stands. It may create some small isolated areas, iteratively merged to their most similar neighbor until their size is larger than a user-defined threshold in order to generate larger and coarse forest stands. They are then refined into finer level using a seeded region growing based on superpixels. The idea is to select several different superpixels in a raw forest stand and merge them. This method provides a coarse-to-fine segmentation with relatively large stands. The process was only applied on a small area of a forest in Finland, thus, general conclusions can not be drawn.
Stand segmentation using VHR optical images and lidar
The analysis of the lidar and multispectral data is performed at three levels in [START_REF] Tiede | Object-based semi automatic mapping of forest stands with Laser scanner and Multi-spectral data[END_REF], following a given hierarchical nomenclature of classes standard for forested environments. The first level represents small objects (single tree scale, individual trees or small groups of trees) that can be differentiated by spectral and structural characteristics using here a rule-based classification. The second level corresponds to the stand level. It is built using the same classification process which summarizes forest development phases by referencing to small scale sub-objects at level 1. The third level is generated by merging objects of the same classified forest-development stage into larger spatial units. The multi-scale analysis offers the advantage of alleviating the standard issue of individual tree crown detection and proposing development stage labels. Nevertheless, the pipeline is highly heuristic, under-exploits lidar data and significant confusions between classes are reported.
The automatic segmentation process of forests in [START_REF] Diedershagen | Automatic segmentation and characterisation of forest stand parameters using airborne lidar data, multispectral and fogis data[END_REF]) is also supplied with Lidar and VHR multispectral images. The idea is to divide the forests into higher and lower strata with lidar. An unsupervised classification (with an algorithm similar to the ISODATA) process is applied to the two images (optical and rasterized lidar). The final stand delineation is achieved by segmenting the classification results with pre-defined thresholds. The segmentation results are improved using morphological operators such as opening and closing, which fill the gaps and holes at a specified extent. This method is efficient if the canopy structure is homogeneous and requires a strong knowledge on the area of interest. Since it is based on height information only, it cannot differentiate two stands of similar height but different species.
In [START_REF] Leppänen | Automatic delineation of forest stands from LIDAR data[END_REF] a stand segmentation technique for a forest composed of Scots Pine, Norway Spruce and Hardwood is defined. A hierarchical segmentation on the Crown Height Model followed by a restricted iterative region growing approach is performed on images composed of rasterized lidar data and Colored Infra-Red images. The process was only applied on a limited area of Finland (∼ 70 ha) and prevents from drawing strong conclusions. However, the quantitative analysis carried out by the authors shows that lidar data can help to define statistically meaningful stands (here the criterion was the timber volume) and that multispectral images are inevitable inputs for tree species discrimination.
Challenges of stand segmentation
Table 2.1 summarizes the presented methods of forest stand segmentation. Firstly, it appears that the fusion of the two remote sensing modalities (optical images and lidar) improves the results for the problematic of forest stand delineation. However, the stands are not defined the same way in the different proposed methods, preventing general conclusions from being drawn.
Regarding the existing state of the art on forest stand segmentation, it appears that such a task remains very complex to implement, especially in an automatic way. Indeed, a simple segmentation of the VHR optical image or lidar point cloud is not sufficient since it does not allow retrieving consistent stands (in terms of species). However, segmentation algorithms are relevant for the extraction of small objects (ideally trees, or similar to trees). A classification is mandatory in order to obtain the tree species (i.e. semantic information). However, it is very difficult to discriminate species, since some look very similar (e.g. deciduous oak and beech), and the intra-class variability might be important (depending on age, maturity and other external features such as shape). Other issues related to the input data, such as shadows in VHR optical images, can also be reported. Finally, the desired stands are not totally pure; a certain level of generalization is desired in order to have a consistent mapping at large scale. Thus, a regularization process can be employed for such a purpose.
It also appears that the type of data employed has an impact on the results.
• The VHR optical images provide information about the tree species; furthermore, textural features are very relevant (Franklin et al., 2000) if no hyperspectral data are available.
• The lidar data provide information about the vertical structure of the forest that can also be useful for the discrimination of tree species [START_REF] Brandtberg | Classifying individual tree species under leaf-off and leaf-on conditions using airborne lidar[END_REF][START_REF] Hovi | LiDAR waveform features for tree species classification and their sensitivity to tree-and acquisition related parameters[END_REF](Li et al., 2013b).
It also brings information about the height, which allows forest stands of different ages to be separated. Most of the time, lidar is deeply under-exploited since it is used only as a simple DSM/CHM.
The segmentation of forest stands must be envisaged as a region-based segmentation problem.
Indeed, contour-based methods would be very difficult to employ since forested areas exhibit a high variability and finding relevant borders is almost impossible, especially in environments where no prior can be assumed. Thus, for an optimal segmentation of forest stands, the strategies employed in this work are the following:
• Extraction of small objects (similar to trees), in order to derive features at the object level, since it is very relevant for subsequent classification.
• Extraction of multiple features from the two data sources.
• Object-based classification, since it produces better results than a simple pixel-based classification.
• Regularization of the classification that leads to homogeneous forest stands with smooth borders.
Therefore, the fusion between VHR optical images and lidar is performed at these four levels, since they allow relevant forest stands to be obtained.
Main processing families adopted in this work are now presented and discussed with respect to our field of research.
Segmentation
The direct segmentation of the optical image and/or of the lidar point cloud is not sufficient to retrieve forest stands. Indeed, such segmentation methods cannot take into account the information needed to define the stands. However, with adapted parameters, segmentation algorithms might be useful to obtain objects that have an adapted size and shape for the desired study [START_REF] Dechesne | Semantic segmentation of forest stands of pure species combining airborne lidar data and very high resolution multispectral imagery[END_REF]. They can be divided into two categories:
• The "traditional" segmentation methods; in these methods, a specific attention must be paid to the choice of the parameters in order to obtain relevant results. Such segmentation can be applied on an image or a point cloud. Specific methods have also been developed for the segmentation of lidar point cloud [START_REF] Nguyen | 3D point cloud segmentation: a survey[END_REF].
• The superpixels segmentation methods: they natively produce an over-segmentation of the image. The parameters control the size and the shape of the resulting segments [START_REF] Stutz | Superpixels: An evaluation of the state-of-the-art[END_REF].
"Traditional" segmentation methods
The segmentation of an image can be performed using a large variety of techniques [START_REF] Wilson | Image segmentation and uncertainty[END_REF][START_REF] Nitzberg | Filtering, segmentation and depth[END_REF][START_REF] Pal | A review on image segmentation techniques[END_REF][START_REF] Zhang | Advances in image and video segmentation[END_REF].
The easiest way to segment an image is the thresholding of a gray level histogram of the image [START_REF] Taxt | Segmentation of document images[END_REF]. When the image is noisy or the background is uneven and illumination is poor, such thresholding is not sufficient. Thus, adaptive thresholding methods have been developed [START_REF] Yanowitz | A new method for image segmentation[END_REF].
The watershed transformation [START_REF] Vincent | Watersheds in digital spaces: an efficient algorithm based on immersion simulations[END_REF] is also a simple segmentation method that considers the gradient magnitude of an image as a topographic surface. Pixels having the highest gradient magnitude intensities correspond to watershed lines, which represent the region boundaries. Water placed on any pixel enclosed by a common watershed line flows downhill to a common local intensity minimum. Pixels draining to a common minimum form a catch basin, which represents a segment.
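As an illustration, a minimal sketch of such a watershed over-segmentation is given below (in Python, assuming scikit-image and SciPy are available); the synthetic array stands in for a real single-band raster such as a canopy height model, and all parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

def watershed_oversegmentation(band, min_distance=5):
    """Over-segment a single-band image (e.g. a rasterized CHM/nDSM) with the
    watershed transform applied to its gradient magnitude."""
    gradient = filters.sobel(band)                       # topographic surface
    # Seeds: local maxima of the band (e.g. tree tops on a CHM).
    seeds = feature.peak_local_max(band, min_distance=min_distance)
    markers = np.zeros(band.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    # One catchment basin (segment) grows around each seed.
    return segmentation.watershed(gradient, markers)

if __name__ == "__main__":
    chm = ndi.gaussian_filter(np.random.gamma(2.0, 4.0, (200, 200)), 2)  # placeholder CHM
    labels = watershed_oversegmentation(chm)
    print("number of segments:", labels.max())
```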
The segmentation can also be considered as an unsupervised classification problem. Algorithms considering such classification problems adopt an iterative process. The most popular algorithm is the k-means, or ISODATA, which is a variant of the k-means. Segmentation methods using spatial interaction models such as Markov Random Fields (MRF) [START_REF] Hansen | Image segmentation using simple Markov field models[END_REF] or Gibbs Random Fields (GRF) [START_REF] Derin | Modeling and segmentation of noisy and textured images using Gibbs random fields[END_REF] have also been proposed. Neural networks are also interesting for image segmentation (Ghosh et al., 1991) as they take into account the contextual information.
Conversely, the segmentation of an image can also be obtained from the detection of its edges [START_REF] Peli | A study of edge detection algorithms[END_REF]. The idea is to extract points where the intensity (or depth) values change significantly. Edges are local features determined from local information only, and are thus not suitable in our case.
Eventually, hierarchical or multi-scale segmentation algorithms can be employed. They analyze the image at several different scales. Their output is not a single partition, but a hierarchy of regions or a data structure that captures different partitions for different scales of analysis [START_REF] Baatz | Method of iterative segmentation of a digital picture[END_REF][START_REF] Guigues | Scale-sets image analysis[END_REF]Trias-Sanz, 2006). These methods allow to control the complexity of the segmentation, which was not the case for the above-mentioned methods. The algorithm of [START_REF] Guigues | Scale-sets image analysis[END_REF]) is a bottom-up approach that starts with an initial over-segmentation (e.g. segmenting almost each pixel on a different own region) and uses this level as an initialization for the construction of subsequent significant levels. The segmentation process is guided by an energy E of the form:
E = D + µC (2.1)
where D is a fit-to-data measure (how well the segmentation fits the original image; better fits give lower values of D); C is a measure of segmentation complexity (less complex solutions give lower values of C); and µ is a dimensional parameter, the scale parameter. The parameter µ balances between a perfect fit to the original data (µ = 0), consisting of one segmentation region for each pixel of the original image, and the simplest segmentation, consisting of a single region containing the whole image [START_REF] Guigues | Scale-sets image analysis[END_REF] (see Figure 2.2). The segmentation level can be adjusted gradually from the finest to the coarsest depending on the image complexity. The choice of a value of µ defines a specific energy, leading to a unique segmentation. Top-down approaches can also be employed for image segmentation. In [START_REF] Landrieu | Cut Pursuit: fast algorithms to learn piecewise constant functions[END_REF], working-set/greedy algorithms are proposed to efficiently solve problems penalized by the total variation on a general weighted graph. The algorithms exploit this structure by recursively splitting the level-sets of a piecewise-constant candidate solution using graph cuts.
Segmentation algorithms based on optical data have also been developed, especially for forest analysis, such as the approach proposed in [START_REF] Tochon | On the use of binary partition trees for the tree crown segmentation of tropical rainforest hyperspectral images[END_REF]. It proposes a method for hyperspectral image segmentation, based on the binary partition tree algorithm, applied to tropical rainforests. Superpixel generation methods are applied to decrease the spatial dimensionality and provide an initial segmentation map. Principal component analysis is performed to reduce the spectral dimensionality. A non-parametric region model based on histograms, combined with the diffusion distance to merge regions, is used to build the binary partition tree [START_REF] Salembier | Binary partition tree as an efficient representation for image processing, segmentation, and information retrieval[END_REF]. An adapted pruning strategy based on the size discontinuity of the merging regions is proposed. The resulting segmentation is coherent with the manually delineated trees; however, this method was proposed for tropical rainforests and might not be adapted to temperate forests.
Superpixels methods
Dozens of superpixel algorithms have been developed [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF]. They group pixels into perceptually meaningful atomic regions. Many traditional segmentation algorithms have been employed with more or less success to generate superpixels [START_REF] Shi | Normalized cuts and image segmentation[END_REF][START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF][START_REF] Comaniciu | Mean shift: A robust approach toward feature space analysis[END_REF][START_REF] Vedaldi | Quick shift and kernel methods for mode seeking[END_REF][START_REF] Vincent | Watersheds in digital spaces: an efficient algorithm based on immersion simulations[END_REF]. These algorithms produce satisfactory results; however, they may be relatively slow, and the number, size and shape of the superpixels might not be specified, leading to a potentially tedious parameter tuning step. The SLIC algorithm (Achanta et al., 2012) proposes a generation of superpixels based on the k-means algorithm. A weighted distance that combines color and spatial proximity is introduced in order to control the size and the compactness of the superpixels.
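A minimal sketch of SLIC superpixel generation with scikit-image is given below; the random array stands in for a real VHR RGB tile, and the number of segments and the compactness value are illustrative choices.

```python
import numpy as np
from skimage.segmentation import slic

rgb = np.random.rand(512, 512, 3)               # placeholder VHR RGB tile in [0, 1]

# n_segments controls the (approximate) number of superpixels, i.e. their size;
# compactness balances color proximity against spatial proximity (shape regularity).
superpixels = slic(rgb, n_segments=2000, compactness=10.0, start_label=1)

print("superpixels generated:", superpixels.max())
```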
Segmentation of point cloud
Segmentation methods dedicated to 3D point cloud have been proposed [START_REF] Nguyen | 3D point cloud segmentation: a survey[END_REF]. The aim is mainly to extract meaningful objects. Such extraction has two principal objectives:
• Objects are detected so as to ease or strengthen subsequent classification task. A precise extraction is not mandatory since the labels would be refined after.
• Objects are precisely delineated in order to derive features from these objects (e.g. surface, volume, etc.). A high spatial accuracy is therefore expected.
Several methods presented in previous sections can also be applied to 3D lidar point cloud.
In forested areas, the most reliable objects to extract are trees. The tree detection and extraction has been widely investigated [START_REF] Wang | International Benchmarking of the Individual Tree Detection Methods for Modeling 3D Canopy Structure for Silviculture and Forest Ecology Using Airborne Laser Scanning[END_REF][START_REF] Kaartinen | An international comparison of individual tree detection and extraction using airborne laser scanning[END_REF]. The tree extraction from lidar point cloud can be envisaged in two ways:
• Rasterize the point cloud and use image-based segmentation techniques to obtain trees.
• Direct segmentation of the 3D point cloud.
A lot of methods have been developed for single tree delineation [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF][START_REF] Véga | PTrees: a point-based approach to forest tree extraction from lidar data[END_REF][START_REF] Kandare | A new procedure for identifying single trees in understory layer using discrete LiDAR data[END_REF][START_REF] Reitberger | 3D segmentation of single trees exploiting full waveform LIDAR data[END_REF]. They all have their advantages and drawbacks, most of the time it is hard to assess the quality of the segmentation. None of them exhibit the ability to handle different kinds of forest.
Classification
A classification is a process that aims to categorize observations. The idea is to assign an observation to one or more classes. The classification can be unsupervised; in such cases the classes (i.e., the targeted labels) need to be discovered and the observations assigned. Such classification is similar to segmentation (see section 2.2) and is not further investigated here. The classification can be supervised; the target classes are known and observations with labels (employed for training and validation) are available. In our case, labels and training sets are given by the forest LC DB of interest.
Supervised classification: common algorithms
A great number of supervised classification algorithms have been developed and used for remote sensing issues [START_REF] Landgrebe | Signal theory methods in multispectral remote sensing[END_REF][START_REF] Lu | A survey of image classification methods and techniques for improving classification performance[END_REF][START_REF] Mather | Classification methods for remotely sensed data[END_REF]. There are two kinds of algorithms: the generative, often parametric, and the discriminative, often non-parametric.
The parametric methods assume that each class follows a specific distribution (mainly Gaussian). The parameters of the distribution are estimated using the learning set. This is the case for the maximum likelihood [START_REF] Strahler | The use of prior probabilities in maximum likelihood classification of remotely sensed data[END_REF] or maximum a posteriori [START_REF] Fauvel | Fast forward feature selection of hyperspectral images for classification with Gaussian mixture models[END_REF].
The non parametric methods do not make any assumption on the classes distribution. In this category of algorithms, very popular ones are the Support Vector Machines (SVM) [START_REF] Boser | A training algorithm for optimal margin classifiers[END_REF][START_REF] Scholkopf | Learning with kernels: support vector machines, regularization, optimization, and beyond[END_REF] and the Random Forest (RF) [START_REF] Breiman | Random forests[END_REF]. The deep based-methods are also efficient algorithms [START_REF] Hepner | Artificial neural network classification using a minimal training set-Comparison to conventional supervised classification[END_REF][START_REF] Atkinson | Mapping sub-pixel proportional land cover with AVHRR imagery[END_REF]. However, despite their great performance in terms of accuracy, they have several drawbacks: firstly, the training process is time consuming and good GPU cards or specific architectures are required in order to reach decent training times [START_REF] Dean | Large scale distributed deep networks[END_REF][START_REF] Moritz | Sparknet: Training deep networks in spark[END_REF]. Secondly, it requires an important amount of training data in order to correctly optimize the large number of parameters (e.g., hundred of millions). Simpler methods exist, such as the k-nearest neighbors [START_REF] Indyk | Approximate nearest neighbors: towards removing the curse of dimensionality[END_REF] or the decision trees [START_REF] Breiman | Classification and regression trees[END_REF] but they produces quite low accuracy results. The non parametric methods are more efficient for the discrimination of complex classes [START_REF] Paola | A review and analysis of backpropagation neural networks for classification of remotely-sensed multi-spectral imagery[END_REF]Foody, 2002), and are now considered as a basis for land cover classification [START_REF] Camps-Valls | Kernel methods for remote sensing data analysis[END_REF].
We chose to use the RF because of their widespread use and because they offer the possibility of obtaining the probability of belonging of a pixel to a class. These posterior probabilities can then be integrated into a smoothing process. They also report good results, similar to the SVM (see Chapter 4). The RF are described in section 2.3.2.
Random Forest
The RF have been introduced by [START_REF] Breiman | Random forests[END_REF] and are defined by the aggregation of weak predictors (decision trees). Here, we refer to the RF with random inputs proposed in [START_REF] Breiman | Random forests[END_REF].
The idea is to create an ensemble of sample sets $S_n^{\Theta_1}, \dots, S_n^{\Theta_k}$ randomly selected from an initial training set. A Classification and Regression Tree (CART) [START_REF] Breiman | Classification and regression trees[END_REF] is built on each sample set $S_n^{\Theta_i}$. Each tree is built using a random pool of m features among the M available features. The final classification is obtained by majority vote: each tree votes for a class and the class reaching the most votes wins (see Figure 2.4). This algorithm has two parameters: the number of trees k and the number of features m used to build a tree. The first parameter is arbitrarily fixed to a high value. The second is generally fixed to the square root of the total number of features (Gislason et al., 2006). Other parameters can be defined, such as the maximal depth of the trees (or the purity of the leaves). In this thesis, the parameters of the Random Forest have not been fine-tuned, since a standard tuning already yields very good results. Other improvements of the Random Forest have been proposed, such as the Rotation Forest [START_REF] Rodriguez | Rotation forest: A new classifier ensemble method[END_REF], the Random Ferns [START_REF] Bosch | Image classification using random forests and ferns[END_REF] or the Extremely Randomized Trees (Geurts et al., 2006). RF have shown classification performances comparable to or better than traditional Boosting methods [START_REF] Breiman | Random forests[END_REF] or SVM [START_REF] Pal | Random forest classifier for remote sensing classification[END_REF]. They are also able to handle large datasets with a high number of features. Furthermore, a measure of feature importance has been introduced in [START_REF] Breiman | Random forests[END_REF]. It allows the relevance of the features in the classification process to be qualified [START_REF] Strobl | Bias in random forest variable importance measures: Illustrations, sources and a solution[END_REF]. Several other feature importance metrics have been proposed.
Figure 2.4: The Random Forest workflow: bootstrap sample sets $S_n^{\Theta_1}, \dots, S_n^{\Theta_k}$ are drawn from the dataset, one CART is trained on each, and the final label is obtained by majority vote.
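A minimal sketch of this classification scheme with scikit-learn is given below; the feature matrix, the labels and the parameter values are placeholders, and the posterior probabilities (the vote fractions) are the ones later reused by the smoothing step.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                 # placeholder object-level features
y = rng.integers(0, 5, size=5000)               # placeholder tree species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# k trees, each built on a bootstrap sample, sqrt(M) features tested at each split.
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            oob_score=True, random_state=0, n_jobs=-1)
rf.fit(X_train, y_train)

proba = rf.predict_proba(X_test)                # posterior probabilities (vote fractions)
labels = rf.predict(X_test)                     # majority vote
print("OOB accuracy estimate:", rf.oob_score_)
```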
The importance of a feature $X_j$, $j \in \{1, \dots, q\}$ (with $q$ the number of features), is defined as follows.
Let $S_n^{\Theta_i}$ be a sample set and $OOB_i$ the set of observations that do not belong to $S_n^{\Theta_i}$. The error $errOOB_i$ on $OOB_i$ using $S_n^{\Theta_i}$ is first computed. A random permutation of the values of the $j$-th feature of $OOB_i$ is then performed in order to obtain $\widetilde{OOB}_i^j$, and the corresponding error $err\widetilde{OOB}_i^j$ is computed. The importance of feature $j$, $FI(X_j)$, is the mean of the difference of the two errors (see Equation 2.2).
$FI(X_j) = \frac{1}{k} \sum_{i=1}^{k} \left( err\widetilde{OOB}_i^j - errOOB_i \right)$ (2.2)
where $k$ is the number of CARTs.
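A minimal sketch of this permutation-based importance is given below. It relies on scikit-learn's permutation_importance, which shuffles each feature and measures the error increase on a held-out set rather than on the per-tree OOB samples of Equation 2.2; the principle is the same, and `rf`, `X_test` and `y_test` are assumed to come from the previous sketch.

```python
from sklearn.inspection import permutation_importance

# Shuffle each feature n_repeats times and record the mean accuracy drop.
result = permutation_importance(rf, X_test, y_test, n_repeats=10,
                                random_state=0, n_jobs=-1)
ranking = result.importances_mean.argsort()[::-1]
for j in ranking[:10]:
    print(f"feature {j:3d}  FI = {result.importances_mean[j]:.4f}")
```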
Dimension reduction and feature selection
Discriminative features are the basis for classification. Without assuming which ones are the most salient, the standard strategy consists in computing a large set of features, especially in the multi-modal case [START_REF] Tokarczyk | Features, color spaces, and boosting: New insights on semantic classification of remote sensing images[END_REF]. The feature selection methods try to overcome the curse of dimensionality [START_REF] Bellman | Adaptive control processes: a guided tour[END_REF][START_REF] Hughes | On the mean accuracy of statistical pattern recognizers[END_REF]. Indeed, an increasing number of features tends to decrease the accuracy of the classifiers. Furthermore, the computation times increase with the number of features. Thus, reducing the feature dimension is beneficial for the classification task.
Furthermore, in case of multi-modal features, feature selection allows to assess the contribution of the remote sensing modalities.
Two kinds of approaches exist: firstly, those based on the extraction of new features that summarize the information through a transformation of the data, generally using a projection onto another space of lower dimensionality; secondly, feature selection approaches, which aim at identifying an optimal subset of the features.
Dimension reduction: feature extraction
The most popular dimension reduction method is the Principal Component Analysis (PCA). It is an unsupervised method that aims to maximize the variance between data [START_REF] Jolliffe | Principal component analysis[END_REF]. However, it has been demonstrated that PCA is not optimal for the purpose of classification [START_REF] Cheriyadat | Why principal component analysis is not an appropriate feature extraction method for hyperspectral data[END_REF]. Other methods related to the PCA have been developed: the Independent Component Analysis (ICA) [START_REF] Jutten | Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture[END_REF] maximizes the statistical independence between data, and the Maximum Autocorrelation Factor (MAF) [START_REF] Larsen | Decomposition using maximum autocorrelation factors[END_REF] maximizes the spatial auto-correlation. When training samples are available, supervised methods exist, such as the Linear Discriminant Analysis (LDA), which tries to maximize both the intra-class homogeneity and the inter-class variance (Fisher, 1936; [START_REF] Lebart | Multidimensional exploratory statistics[END_REF]).
Feature selection
Feature selection (FS) aims at defining an optimal subset of features without modifying them.
Automatic methods have been proposed in order to obtain such a subset. One can explore the subsets of features, and a criterion needs to be defined to evaluate them. Furthermore, the selection can be supervised or unsupervised. The former aims at best discriminating the classes, while the latter looks for an optimal subset that contains the most informative and least redundant features. Many exploration methods for feature selection have been proposed in the literature. The naive exhaustive exploration of all the subsets can be envisaged only when the number of features is small.
Existing methods
The feature selection methods can be separated into 3 categories: filters, wrapper and embedded.
• Filters
The filter methods are independent from any classifier. Within the filter methods, one can distinguish the supervised and unsupervised case depending on whether the notion of classes is taken into account or not. When supervised, they consider the features according to their capacity to bring together elements of the same class and separate the different elements [START_REF] John | Enhancements to the data mining process[END_REF]). Separability measures (e.g., Fisher (Fisher, 1936), Bhattacharrya or Jeffries-Matusia) allow to determine whether a feature or a subset of feature is well adapted to discriminate the classes [START_REF] Bruzzone | A technique for feature selection in multiclass problems[END_REF][START_REF] Herold | Spectral resolution requirements for mapping urban areas[END_REF][START_REF] De Backer | A band selection technique for spectral classification[END_REF][START_REF] Serpico | Extraction of spectral channels from hyperspectral images for classification purposes[END_REF]. Among filters, ranking methods compute an individual importance score for each feature, classify the features according to this score and keep only the best. Such scores can be computed using training samples or not. Such methods are independent from a classifier and are used as preliminary step to classification. Statistical measures derived from information theory such as the divergence, the entropy or the mutual information have been proposed in the unsupervised case [START_REF] Martínez-Usó | Clustering-based hyperspectral band selection using information measures[END_REF][START_REF] Le Moan | A constrained band selection method based on information measures for spectral image color visualization[END_REF] or supervised case [START_REF] Battiti | Using mutual information for selecting features in supervised neural net learning[END_REF][START_REF] Guo | A fast separability-based featureselection method for high-dimensional remotely sensed image classification[END_REF][START_REF] Estévez | Normalized mutual information feature selection[END_REF][START_REF] Sotoca | Supervised feature selection by clustering using conditional mutual information-based distances[END_REF][START_REF] Cang | Mutual information based input feature selection for classification problems[END_REF].
To summarize, criteria and methods for filter selection are numerous and cover different approaches. The ranking filter methods, which sort features according to an individual importance score and retain only the n best, remain limited since they do not take into account the dependencies between the selected features. Approaches that directly associate relevance scores with feature sets are more interesting. A distinction is made between supervised and unsupervised approaches. The unsupervised criteria are interesting, but they present a risk of selecting attributes that would not all be useful for classification. An optimization method must then be employed in order to select the best subset.
• Wrapper
The wrapper methods weight the feature subsets according to their pertinence for the prediction [START_REF] Kohavi | Wrappers for feature subset selection[END_REF]. This weighting is related to the performance of a classifier. [START_REF] Estévez | Normalized mutual information feature selection[END_REF], [START_REF] Li | An effective feature selection method for hyperspectral image classification based on genetic algorithm and support vector machine[END_REF], [START_REF] Yang | Research into a feature selection method for hyperspectral imagery using PSO and SVM[END_REF], and [START_REF] Zhuo | A genetic algorithm based wrapper feature selection method for classification of hyperspectral images using support vector machine[END_REF] propose approaches with SVM classifiers. [START_REF] Zhang | Dimensionality reduction based on clonal selection for hyperspectral imagery[END_REF] and [START_REF] Fauvel | Fast forward feature selection of hyperspectral images for classification with Gaussian mixture models[END_REF] use maximum likelihood classifiers. The RF is also employed in [START_REF] Díaz-Uriarte | Gene selection and classification of microarray data using random forest[END_REF]. Data are separated into two subsets. The first is used for the training, while the second for the evaluation. The use of a classifier is a big advantage as it fits more to the envisaged problem and produces better results with less features than the filters methods.
However, the use of a classifier significantly increases the computation times. Furthermore, worse results could be obtained when the selected feature subset is used with another classifier.
• Embedded
Eventually, the embedded methods also involve a classifier, but they select the features during the training process [START_REF] Tang | Feature selection for classification: A review[END_REF] or using intermediate results from the training process. They have two advantages: since they take the training data into consideration, they share the advantages of the wrappers (selecting features related to the classification problem); furthermore, they are faster than the wrapper methods since they do not test feature sets on a test dataset. Many methods have been proposed. The RF allows the feature importance to be assessed [START_REF] Breiman | Random forests[END_REF] and is also natively embedded, since the irrelevant features are discarded while building the trees and will not be used in the classification process. Other methods are based on the SVM classifier; the SVM-RFE (Recursive Feature Elimination) (Tuia et al., 2009) recursively removes the least pertinent features according to a weight estimated from an SVM model.
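A minimal sketch of such an embedded selection is given below: features are ranked with the importance computed by the RF during training and only the top ones are kept. The data, the number of retained features and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 60))                 # placeholder object-level features
y = rng.integers(0, 5, size=2000)               # placeholder species labels

rf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
keep = ranking[:20]                             # embedded selection: keep the top-20 features
X_reduced = X[:, keep]
print("kept features:", np.sort(keep))
```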
Selection optimization
Some methods involve a specific optimization method but when considering a generic feature relevance score, the set of possible solutions is generally too large to be visited entirely. Thus, using heuristic rules allows to find a solution close enough to the optimal solution while visiting only a reasonable number of configurations. These optimization methods can generally be distinguished in sequential or incremental methods and stochastic methods.
• Sequential approaches
The first idea is to add features step by step (forward approaches), as in the Sequential Forward Selection (SFS) [START_REF] Marill | On the effectiveness of receptors in recognition systems[END_REF]. Alternatively, one can start from the entire feature set and remove features step by step (backward approaches), as in the Sequential Backward Selection (SBS) [START_REF] Whitney | A direct method of nonparametric measurement selection[END_REF]. A generalization of these methods has been proposed in [START_REF] Kittler | Feature set search algorithms[END_REF]. Finally, the forward and backward methods can be combined in order to improve the process: the Sequential Floating Forward Selection (SFFS) and the Sequential Floating Backward Selection (SFBS) [START_REF] Pudil | Floating search methods in feature selection[END_REF] propose such an improvement. A minimal sketch of SFS is given at the end of this section.
• Stochastic approaches
Stochastic algorithms involve randomness in their exploration of the solution space. The random initialization and search for a solution can therefore propose different solutions of equivalent quality from a single dataset. The generation of the subsets can be totally random [START_REF] Liu | Feature selection and classification-a probabilistic wrapper approach[END_REF].
Genetic algorithms are a possible solution. They propose to weight the subsets according to their importance (Goldberg, 1989) and allow a faster convergence to a more stable solution. The Particle Swarm Optimization (PSO) algorithm [START_REF] Yang | Research into a feature selection method for hyperspectral imagery using PSO and SVM[END_REF] is faster and selects relevant features. For finding an approximate optimal subset of features, simulated annealing [START_REF] De Backer | A band selection technique for spectral classification[END_REF][START_REF] Chang | A parallel simulated annealing approach to band selection for high-dimensional remote sensing images[END_REF] is also possible.
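The sketch announced above illustrates the Sequential Forward Selection in a wrapper setting, using scikit-learn's SequentialFeatureSelector with a RF scored by cross-validation; the data and the target number of features are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 30))                 # placeholder features
y = rng.integers(0, 4, size=1000)               # placeholder labels

rf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
# Greedily add the feature that best improves the cross-validated score.
sfs = SequentialFeatureSelector(rf, n_features_to_select=10,
                                direction="forward", cv=3, n_jobs=-1)
sfs.fit(X, y)
print("selected features:", np.flatnonzero(sfs.get_support()))
```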
Smoothing methods
Pixel-wise classification is not sufficient for both accurate and smooth land-cover mapping with VHR remote sensing data. This is particularly true in forested areas: the large intra-class and low inter-class variabilities of classes result in noisy label maps at pixel or tree levels. This is why various regularization solutions can be adopted from the literature (from simple smoothing to probabilistic graphical models).
According to [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF], both local and global methods can provide a regularization framework, with their own advantages and drawbacks.
Local methods
In local methods, the neighborhood of each element is analyzed by a filtering technique. The labels of the neighboring pixels (or the posterior class probabilities) are combined so as to derive a new label for the central pixel. Majority voting, Gaussian and bilateral filtering [START_REF] Perona | Scale-space and edge detection using anisotropic diffusion[END_REF] can be employed if it is not targeted to smooth class edges. The majority vote can also be used especially when a segmentation is available: the majority class is assigned to the segment. The vote can be weighted by class probabilities of the different pixels.
The probabilistic relaxation is another local smoothing method that aims at homogenizing the probabilities of a pixel according to its neighboring pixels. The relaxation is an iterative algorithm in which the class probabilities at each pixel are updated at each iteration in order to bring them closer to the probabilities of the neighbors (Gong et al., 1989; [START_REF] Smeeckaert | Large-scale classification of water areas using airborne topographic lidar data[END_REF]). It reports good accuracies with decent computing times and offers an alternative to edge-aware/gradient-based techniques that may not be adapted to semantically unstructured environments.
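A minimal sketch of such a relaxation is given below; `proba` stands for the (H, W, n_classes) stack of RF posterior probabilities (simulated here), and the 3 × 3 neighborhood, the mixing weight and the number of iterations are illustrative choices rather than the exact scheme of the cited works.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def relaxation(proba, n_iter=5, alpha=0.5):
    """Iteratively pull each pixel's class probabilities towards the mean
    probabilities of its 3 x 3 neighborhood, then renormalize."""
    p = proba.copy()
    for _ in range(n_iter):
        neigh = np.stack([uniform_filter(p[..., c], size=3)
                          for c in range(p.shape[-1])], axis=-1)
        p = (1.0 - alpha) * p + alpha * neigh        # move towards neighborhood consensus
        p /= p.sum(axis=-1, keepdims=True)           # keep valid probabilities
    return p

rng = np.random.default_rng(3)
proba = rng.dirichlet(np.ones(5), size=(100, 100))   # placeholder posteriors
smoothed_labels = relaxation(proba).argmax(axis=-1)  # smoothed label map
```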
Global methods
Global regularization methods consider the whole image by connecting each pixels to its neighbors. They traditionally adopt the Markov Random Fields (MRF, see Figure 2.5), the labels at different locations are not considered to be independent and the global solution can be retrieved with the simple knowledge of the close neighborhood for each pixel. The optimal configuration of labels is retrieved when finding the Maximum A Posteriori over the entire field [START_REF] Moser | Land-cover mapping by Markov modeling of spatial contextual information in Very-High-Resolution Remote Sensing Images[END_REF]. The problem is therefore considered as the minimization procedure of an energy E over the whole image I. Despite a simple neighborhood encoding (pairwise relations are often preferred), the optimization procedure propagates over large distances. Depending on the formulation of the energy, the global minimum may be reachable. However, a large range of optimization techniques allow to reach local minima close to the real solution, in particular for random fields with pairwise terms [START_REF] Kolmogorov | What energy functions can be minimized via graph cuts?[END_REF]. For genuine structured predictions, in the family of graphical probabilistic models, Conditional Random Fields (CRF, see Figure 2.5) have been massively adopted during the last decade. Interactions between neighboring objects, and subsequently the local context can be modeled and learned using an energy formulation. In particular, Discriminative Random Fields (DRF, [START_REF] Kumar | Discriminative random fields[END_REF]) are CRF defined over 2D regular grids, and both unary/association and binary/interaction potentials are based on labeling procedure outputs. Many techniques extending this concept or focusing on the learning or inference steps have been proposed in the literature [START_REF] Kohli | Robust Higher Order Potentials for Enforcing Label Consistency[END_REF][START_REF] Ladický | Inference Methods for CRFs with Co-occurrence Statistics[END_REF]. A very recent trend even consists in jointly considering CRF and deep-learning techniques for the labeling task [START_REF] Kirillov | A Generic CNN-CRF Model for Semantic Segmentation[END_REF].
In standard land-cover classification tasks, global methods are known to provide significantly more accurate results [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF] since contextual knowledge is integrated. This is all the more true for VHR remote sensing data, especially in case of a large number of classes (e.g., 10, [START_REF] Albert | Contextual land use classification: how detailed can the class structure be?[END_REF]), but presents two disadvantages. For large datasets, their learning and inference steps are computationally expensive. Furthermore, parameters should often be carefully chosen for optimal performance, and authors that managed to alleviate the latter problem still report a significant computation cost [START_REF] Lucchi | Are Spatial and Global Constraints Really Necessary for Segmentation?[END_REF].
Conclusion
In this chapter, several existing stand segmentation methods have been identified, leading to some important conclusions. Firstly, it appears that the fusion of optical images and lidar has already been investigated and identified as useful for forest stand segmentation. However, the stands are not defined the same way in the different proposed methods, which prevents general conclusions from being drawn for our problem. Thus, the proposed methods vary a lot depending on the definition of the stands (species, age, height, etc.). Besides, several existing methods rely heavily on human/operator interaction.
The segmentation of forest stands is often envisaged as a region-based segmentation problem, and specific strategies are employed in this work in order to retrieve forest stands. First, it appears that the extraction of features is relevant for tree species discrimination. Such features can be derived from both data sources. From the literature, it came out that working at the object level improves the tree species discrimination results. The extracted objects should therefore have a size and shape similar to trees. Many algorithms can be envisaged for the extraction of such objects. A supervised classification is then mandatory in order to discriminate the tree species. However, since many features have been derived, it is also interesting to select a limited number of features to train the classifier. Such a selection is also interesting in order to validate the complementarity of the data sources. Eventually, the classification might be noisy and can be smoothed through regularization methods. Such smoothing allows homogeneous segments with regular borders to be obtained, which fits the specifications of the desired forest stands.
All these problems are addressed in the following chapters.
Proposed framework
3
General flowchart
With respect to the methods mentioned in Chapter 2, it appears that there is no operational automatic forest stand segmentation method, where the target labels are the tree species, that can satisfactorily handle a large number of classes (>5). This problem has barely been investigated and only ad hoc methods have been developed. The proposed framework is fully automatic, modular and versatile for species-based forest stand segmentation. It strongly relies on the exploitation of an existing Forest LC DB (in order to update and improve it), taking into account the potential errors that it may contain, and subsequently provides a model adapted to the studied area. The proposed framework has the following characteristics:
• No heavy and sensitive parameter tuning;
• Several steps can be substituted with almost equivalent solution, depending on the requirements: accuracy, speed, memory usage, etc.
• Depending on the input and the prior knowledge, one can select the shape and the level of details of the output maps.
The framework is composed of four main steps. Features are first computed at the pixel level for the optical images and at the point level for the lidar data. An over-segmentation is then performed in order to retrieve small objects that will be employed for the subsequent classification. The objects extracted from the over-segmentation and the computed features allow features to be derived at the object level. Such features are then used for the classification of the vegetation types (mainly tree species).
Indeed, performing classification at the object level significantly improves the discrimination results compared to a pixel-based classification. Here, the training set is automatically derived from an existing forest LC database. Specific attention is paid to the extraction of the most relevant training pixels, which is highly challenging with partly outdated and generalized vector databases. Because of the high number of features, a feature selection is also carried out in order to have a more efficient classification and reduce the computational load and time, but also in order to assess the complementarity of the multi-source features (namely multispectral optical images/lidar point cloud). Finally, a regularization of the label map is performed in order to remove the noise and to retrieve homogeneous forest stands according to a given criteria (here tree species). Each step is presented in the next sections. A particular focus is made on the regularization methods in Chapter 5 and the discussion of the most relevant fusion scheme in Chapter 6. The flowchart of the framework is presented in Figure 3.1.
Feature extraction
The extraction of discriminative features is an important preliminary step in order to obtain an accurate classification. The features can be handcrafted; such a strategy has been extensively employed for remote sensing applications. The features could also be learned for a specific classification task using convolutional neural networks [START_REF] Demuth | Neural network design[END_REF]. Here, the proposed features have been derived manually, since they are physically interpretable. Most are standard in the literature. The lidar features are derived at the 3D level and the spectral features at the 2D (image) level. Two strategies can be envisaged:
• The rasterization of the 3D lidar features at the same spatial resolution as the spectral features.
• The projection of the 2D spectral features in the 3D point cloud.
The projection of the 2D spectral features raises three main issues:
• In the 3D space, it is impossible to attribute a relevant spectral value to a point below the canopy (i.e. not visible in the optical image).
• A single pixel can be attributed to many 3D points, so the information would be duplicated. As an example, for an image of 1 km² at a spatial resolution of 0.5 m there are 4 million pixels, while the number of 3D points is about 16 million.
• The processing of 3D points is tedious (especially for classification and regularization), while pixels are easier to handle.
On the contrary, the rasterization of the 3D lidar features has been widely employed, and many efficient rasterization algorithms have been proposed. This strategy was therefore adopted.
Point-based lidar features.
24 features are extracted during this step; 2 related to vegetation density, 2 related to the 3D local distribution of the point cloud (planarity and scatter), and 20 statistical features.
Lidar-derived features require a consistent neighborhood for their computation [START_REF] Demantké | Dimensionality based scale selection in 3D lidar point clouds[END_REF][START_REF] Filin | Neighborhood systems for airborne laser data[END_REF]. For each lidar point, 3 cylindrical neighborhoods, aligned with the vertical axis, are used (1 m, 3 m and 5 m radii, infinite height). A cylinder appears to be the most relevant environment in forested areas so as to take into account the variance in altitude of the lidar points.
Three radius values are considered so as to handle the various sizes of the trees, assuming a feature selection step will prune the initial set of attributes.
Density features.
Two vegetation density features, D 1 and D 2 , are computed: the first one based on the number of local height maxima within the neighborhoods, and the second one related to the number of nonground points within the neighborhoods (ground points were previously determined by a filtering step). D 1 and D 2 are calculated as follows:
$D_1 = \sum_{r_1 \in \{1,3,5\}} \sum_{r_2 \in \{1,3,5\}} N^t_{r_1,r_2}$, (3.1)
$D_2 = \frac{1}{3} \sum_{r \in \{1,3,5\}} \frac{N^s_r}{N^{tot}_r}$, (3.2)
where $N^t_{r_1,r_2}$ is the number of local maxima retrieved from a $r_1$ maximum filter within the cylindrical neighborhood of radius $r_2$, $N^s_r$ is the number of points classified as ground points within the cylindrical neighborhood of radius $r$, and $N^{tot}_r$ is the total number of points within the cylindrical neighborhood of radius $r$. $D_1$ describes how close the trees are to each other and gives information about the tree crown width. Such information is very discriminative for tree species classification [START_REF] Korpela | Tree species classification using airborne LiDAR-effects of stand and tree parameters, downsizing of training set, intensity normalization, and sensor type[END_REF]. $D_2$ provides information on the penetration rate of the lidar beam. It has been proven to be relevant for tree species classification (Vauhkonen et al., 2013).
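A minimal sketch of the computation of $D_2$ is given below; the point cloud and its ground classification are simulated, and the same 2D KD-tree queries would serve for $D_1$, with a local-maximum filter on top.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
xyz = np.column_stack([rng.uniform(0, 100, 20000),
                       rng.uniform(0, 100, 20000),
                       rng.uniform(0, 30, 20000)])   # placeholder (x, y, z) points
is_ground = rng.random(20000) < 0.2                  # placeholder ground classification

tree2d = cKDTree(xyz[:, :2])                         # cylinder = disk in the (x, y) plane
radii = (1.0, 3.0, 5.0)
d2 = np.zeros(len(xyz))
for r in radii:
    neighbors = tree2d.query_ball_point(xyz[:, :2], r)
    for i, idx in enumerate(neighbors):
        idx = np.asarray(idx)
        d2[i] += is_ground[idx].sum() / len(idx)     # ground / total returns in the cylinder
d2 /= len(radii)                                     # average over the three radii
```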
Shape features.
Additionally, the scatter S and the planarity P features are computed following [START_REF] Weinmann | Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers[END_REF]:
$S = \frac{1}{3} \sum_{r \in \{1,3,5\}} \frac{\lambda_{3,r}}{\lambda_{1,r}}$, (3.3)
$P = \frac{1}{3} \sum_{r \in \{1,3,5\}} 2 \times (\lambda_{2,r} - \lambda_{3,r})$, (3.4)
where $\lambda_{1,r} \geq \lambda_{2,r} \geq \lambda_{3,r}$ are the eigenvalues of the covariance matrix within the cylindrical neighborhood of radius $r$. They are retrieved with a standard Principal Component Analysis. They have been shown to be relevant for classification tasks [START_REF] Weinmann | Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers[END_REF].
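A minimal sketch of the computation of $S$ and $P$ for one point is given below; `xyz` and `tree2d` denote the point cloud and its 2D KD-tree, as in the previous sketch, and each neighborhood is assumed to contain at least three points.

```python
import numpy as np

def shape_features(point_index, xyz, tree2d, radii=(1.0, 3.0, 5.0)):
    """Scatter S and planarity P (Equations 3.3-3.4) averaged over the three radii."""
    s, p = 0.0, 0.0
    for r in radii:
        idx = tree2d.query_ball_point(xyz[point_index, :2], r)
        pts = xyz[idx]
        # Eigenvalues of the 3D covariance matrix, sorted so that l1 >= l2 >= l3
        # (real code should guard degenerate neighborhoods with fewer than 3 points).
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))
        s += l3 / l1
        p += 2.0 * (l2 - l3)
    return s / len(radii), p / len(radii)
```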
Statistical features.
Statistical features, known to be relevant for vegetation type (mainly tree species) classification [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF] (Torabzadeh et al., 2015), are also derived. For each lidar point, the same 3 cylindrical neighborhoods are used. Two basic pieces of information from the lidar data, namely height and intensity [START_REF] Kim | Classifying individual tree genera using stepwise cluster analysis based on height and intensity metrics derived from airborne laser scanner data[END_REF], are used to derive statistical features. A statistical feature $f_d$, derived from an original feature $f_o$ (normalized height or intensity), is computed as follows:
$f_d = \frac{1}{3} \sum_{r \in \{1,3,5\}} f_s(p_{r,f_o})$, (3.5)
where $f_s$ is a statistical function, and $p_{r,f_o}$ a vector containing the sorted values of the original feature $f_o$ within the cylindrical neighborhood of radius $r$. The statistical functions $f_s$ employed are standard ones (minimum; maximum; mean; median; standard deviation; median absolute deviation from median (medADmed); mean absolute deviation from median (meanADmed); skewness; kurtosis; 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, 90th and 95th percentiles).
All the statistical functions are used for the height. Only the mean is used for the intensity:
indeed it is hard to know how well the sensor is calibrated and a suitable correction of intensity values within tree canopies has not yet been proposed.
Pixel-based lidar features.
The lidar features are rasterized at the resolution of the multispectral image using the pit-free method proposed in [START_REF] Khosravipour | Generating pitfree canopy height models from airborne lidar[END_REF]. Indeed, the main problem for the rasterization of lidar features is that the point density is not homogeneous; thus, applying a regular grid can lead to a pixel-based feature map with pits. Therefore, a special pit-free rasterization method needs to be employed. The idea is to retain points within a cylindrical neighborhood. The axis of the cylindrical neighborhood is the center of the pixel, and the radius is chosen in order to have enough retained points (at least 10). The feature values of the points are weighted according to their distance to the center of the pixel. The value of the pixel is the mean of the weighted feature values of the points.
The rasterization method used here is interesting because it produces smooth images compared to a rough rasterization, and it will lead to better results for classification and regularization (Li et al., 2013a). This rasterization process at the feature level is valid since both datasets have approximately the same initial spatial resolution.
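A minimal sketch of this distance-weighted rasterization is given below; the grid origin, the 0.5 m resolution and the weighting function are illustrative, the naive double loop is kept for clarity, and the grid is assumed to lie within the extent of the point cloud.

```python
import numpy as np
from scipy.spatial import cKDTree

def rasterize(xyz, values, x0, y0, n_rows, n_cols, res=0.5, min_pts=10):
    """Inverse-distance weighted mean of a point feature on a regular grid."""
    values = np.asarray(values, dtype=float)
    tree2d = cKDTree(xyz[:, :2])
    raster = np.full((n_rows, n_cols), np.nan)
    for i in range(n_rows):
        for j in range(n_cols):
            cx, cy = x0 + (j + 0.5) * res, y0 + (i + 0.5) * res
            r = res
            idx = tree2d.query_ball_point((cx, cy), r)
            while len(idx) < min_pts:              # grow the cylinder until enough points
                r *= 2.0
                idx = tree2d.query_ball_point((cx, cy), r)
            pts = xyz[idx]
            d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
            w = 1.0 / (d + 1e-6)                   # closer points weigh more
            raster[i, j] = np.average(values[idx], weights=w)
    return raster
```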
A nDSM is also computed using this method, at the same spatial resolution using an existing 1 m Digital Terrain Model computed from the initial point cloud (Ferraz et al., 2016a). The nDSM is very important as it allows to derive the height above the ground and is known as a very discriminative feature for classification [START_REF] Mallet | Relevance assessment of fullwaveform lidar data for urban area classification[END_REF][START_REF] Weinmann | Reconstruction and Analysis of 3D Scenes[END_REF]. Some lidar features are presented in
Pixel-based multispectral images features.
The original 4 spectral bands of the VHR airborne optical image are kept and considered as multispectral features. The Normalized Difference Vegetation Index (NDVI) (Tucker, 1979), the Difference Vegetation Index (DVI) [START_REF] Bacour | Normalization of the directional effects in NOAA-AVHRR reflectance measurements for an improved monitoring of vegetation cycles[END_REF] and the Ratio Vegetation Index (RVI) [START_REF] Jordan | Derivation of leaf-area index from quality of light on the forest floor[END_REF] are also computed, as they are standard relevant vegetation indices [START_REF] Anderson | Evaluating hand-held radiometer derived vegetation indices for estimating above ground biomass[END_REF][START_REF] Lee | Forest vegetation classification and biomass estimation based on Landsat TM data in a mountainous region of west Japan[END_REF]. Many other vegetation indices have been proposed [START_REF] Bannari | A review of vegetation indices[END_REF] that have shown relevance for vegetation classification tasks. Indeed, they mainly provide information about the chlorophyll content and are therefore more relevant for vegetation discrimination than the original bands alone [START_REF] Zargar | A review of drought indices[END_REF] (emphasizing specific spectral behaviors).
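A minimal sketch of the three indices is given below; the band order and the radiometric scaling of the input image are assumptions.

```python
import numpy as np

def vegetation_indices(red, nir, eps=1e-6):
    """NDVI, DVI and RVI computed per pixel from the red and near-infrared bands."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    ndvi = (nir - red) / (nir + red + eps)   # Normalized Difference Vegetation Index
    dvi = nir - red                          # Difference Vegetation Index
    rvi = nir / (red + eps)                  # Ratio Vegetation Index
    return ndvi, dvi, rvi
```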
As for the point-based lidar features, statistical features are also derived from each band and each vegetation index according to Equation 3.5 (3 circular neighborhoods of 1 m, 3 m and 5 m radii). Other statistical functions are used (minimum; maximum; mean; median; standard deviation; mean absolute deviation from median (meanADmed); mean absolute deviation from mean (meanADmean); median absolute deviation from median (medADmed); median absolute deviation from mean (medADmean)). Such features are related to texture features, which are relevant for classification tasks [START_REF] Haralick | Textural features for image classification[END_REF].
Finally, the pixel-based multispectral feature set is composed of 70 attributes. Some of these spectral features are presented in Figure 3.2.
The pixel-based features (lidar and spectral) that have been derived are summarized in Figure 3.3.
Object-based feature map.
The pixel-based multispectral and lidar maps are merged so as to obtain a pixel-based feature map. Then, an object-based feature map is created using the over-segmentation and the pixel-based feature map. The value $v_t$ associated with an object $t$ in the object-based feature map is computed as follows:
$v_t = \frac{1}{N_t} \sum_{p \in t} v_p$, (3.6)
where $N_t$ is the number of pixels in object $t$, and $v_p$ is the value of pixel $p$. If a pixel does not belong to an object (e.g. when the extracted objects are trees), it keeps the value of the pixel-based feature map. Here, only the mean value of the pixels within the object is envisaged, but one could also consider other statistics (minimum, maximum, percentiles, etc.). They have not been considered here since they would drastically increase the total number of features and there is no warranty that such statistical values are relevant or are not redundant with the features already produced here.
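A minimal sketch of Equation 3.6 is given below; `segments` is assumed to be a label image in which 0 marks pixels outside any object, which therefore keep their pixel-based values.

```python
import numpy as np

def object_mean_features(segments, features):
    """Replace, for each object, the pixel-based feature values by their object mean."""
    out = features.astype(np.float64).copy()
    labels = segments.ravel()
    n_obj = segments.max() + 1
    counts = np.bincount(labels, minlength=n_obj)
    for f in range(features.shape[-1]):
        sums = np.bincount(labels, weights=features[..., f].ravel(), minlength=n_obj)
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        mapped = means[segments]                          # broadcast object means back to pixels
        out[..., f] = np.where(segments > 0, mapped, features[..., f])
    return out
```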
Other morphological features could also be directly derived from the lidar point cloud at the object level. For instance, an alpha-shape could be computed on the individual trees (Vauhkonen et al., 2010) and penetration features could be derived, as they can help to classify the vegetation type (mainly tree species) [START_REF] Ko | Tree genera classification with geometric features from high-density airborne LiDAR[END_REF]. However, the low point densities (1-5 points/m²) compatible with large-scale lidar surveys are not sufficient to derive such discriminative features; they are therefore not considered here.
An illustration of the pixel-based and object-based feature map is presented in Figure 3.4.
Over-segmentation
Over-segmentation proposes a full coarse partition of the area of interest. "Objects" are detected so as to ease and strengthen the subsequent classification task. An accurate object extraction is not mandatory since the labels are refined afterwards. Both 3D and 2D mono-modal solutions are investigated, depending on the input data and the desired level of detail for the objects. Multi-modal segmentation has also been proposed [START_REF] Tochon | Hierarchical analysis of multimodal images[END_REF] and could be adopted for over-segmentation. However, such methods are not employed here since they mostly aim at producing very accurate segmentation results, which are not needed in this framework. In this section, the over-segmentation aims at extracting objects at the tree scale (i.e. trees or objects of a size similar to trees). Individual Tree Crown (ITC) delineation is a complex task and no universal automatic solution has been proposed so far. Thus, different methods have been tested in order to extract consistent objects (i.e., objects similar to trees). Five existing image-based segmentation methods have been tested, either on VHR optical images or on rasterized lidar features. In addition, a coarse 3D tree extraction method has been developed.
Segmentation of lidar data
Two approaches can be envisaged: the direct segmentation in 3D of the point cloud or the segmentation of a given rasterized lidar feature using standard image-based segmentation algorithms.
The tree extraction directly from the 3D point cloud is a complex task that has been widely tackled and discussed [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF][START_REF] Véga | PTrees: a point-based approach to forest tree extraction from lidar data[END_REF][START_REF] Morsdorf | Clustering in airborne laser scanning raw data for segmentation of single trees[END_REF][START_REF] Kandare | A new procedure for identifying single trees in understory layer using discrete LiDAR data[END_REF][START_REF] Wang | International Benchmarking of the Individual Tree Detection Methods for Modeling 3D Canopy Structure for Silviculture and Forest Ecology Using Airborne Laser Scanning[END_REF]. Variations exist according to the tree species, the number of vegetation strata, the forest complexity, the location and the data specification. However, a precise tree extraction is not needed here, since the extracted objects are only needed to improve the classification task. A coarse and standard method is therefore adopted: the tree tops are first extracted from the lidar point cloud using a local maximum filter (Figure 3.5b). A point is considered as a tree top when it has the highest height value within a 5 meter radius neighborhood. Only the points above 3 meters are retained, as it is a common threshold in the literature [START_REF] Eysn | Forest delineation based on airborne LIDAR data[END_REF] and appears to be highly discriminative in non-urban areas. Points belonging to a tree are obtained through a two-step procedure, sketched in code after the two steps below.
1. If the height of a point within a 5 m radius is greater or equal than 80% the height of the closest tree top, it is aggregated to the tree top (Figure 3.5c).
2. If the distance in the (x, y) plane between an unlabeled point and the closest tree point is smaller than 3 m they are also aggregated (Figure 3.5d).
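The sketch below reproduces this coarse extraction (tree-top detection plus the two aggregation rules) in a simplified, single-pass form; the thresholds follow the text above, the loop is kept naive for clarity, and the random cloud is a placeholder.

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_tree_extraction(xyz):
    n = len(xyz)
    tree2d = cKDTree(xyz[:, :2])
    labels = np.zeros(n, dtype=int)                       # 0 = not assigned to a tree

    # Tree tops: points above 3 m that are the highest within a 5 m radius.
    tops = []
    for i in range(n):
        if xyz[i, 2] < 3.0:
            continue
        idx = tree2d.query_ball_point(xyz[i, :2], 5.0)
        if xyz[i, 2] >= xyz[idx, 2].max():
            tops.append(i)
    tops = np.array(tops)
    top_tree = cKDTree(xyz[tops][:, :2])

    # Step 1: attach points within 5 m of their closest tree top if their height
    # reaches 80% of that top's height (points below 3 m are left out here).
    d, nearest = top_tree.query(xyz[:, :2])
    step1 = (d <= 5.0) & (xyz[:, 2] >= 0.8 * xyz[tops[nearest], 2]) & (xyz[:, 2] >= 3.0)
    labels[step1] = nearest[step1] + 1

    # Step 2: attach remaining points lying within 3 m of an already labelled point.
    labelled = np.flatnonzero(labels > 0)
    lab_tree = cKDTree(xyz[labelled][:, :2])
    rest = np.flatnonzero(labels == 0)
    d2, nearest2 = lab_tree.query(xyz[rest, :2])
    close = d2 <= 3.0
    labels[rest[close]] = labels[labelled[nearest2[close]]]
    return labels, tops

rng = np.random.default_rng(5)
cloud = np.column_stack([rng.uniform(0, 100, 20000),
                         rng.uniform(0, 100, 20000),
                         rng.uniform(0, 25, 20000)])
labels, tops = coarse_tree_extraction(cloud)
print(len(tops), "tree tops,", (labels > 0).sum(), "points attached to trees")
```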
This delineation method allows low vegetation to be discarded, but buildings might be extracted and considered as trees. However, this is not a big issue, since the purpose of this segmentation is not to precisely extract trees but only to provide relevant objects for the subsequent object-based classification.
The segmentation of lidar data can also be performed by applying image-based segmentation algorithms to a given rasterized lidar feature, mainly the normalized Digital Surface Model (i.e. the true height of topographic objects above the ground). Thus, a method operating on a single feature must be employed. The watershed algorithm [START_REF] Vincent | Watersheds in digital spaces: an efficient algorithm based on immersion simulations[END_REF] with specific parameters allows a consistent over-segmentation of the image to be obtained quickly. A hierarchical segmentation [START_REF] Guigues | Scale-sets image analysis[END_REF] is also relevant, with the advantage that only one parameter controlling the segmentation level needs to be provided and that different suitable segmentations can be obtained. These two algorithms are detailed below.
Watershed.
Watershed has been proposed after the natural observation of water raining onto a landscape topology and flowing with gravity to collect in low basins [START_REF] Beucher | Use of watersheds in contour detection[END_REF]. The size of those basins will grow with increasing amounts of precipitation until they spill into one another, causing small basins to merge together into larger basins. Regions (catchment basins) are formed by using local geometric structure to associate points in the image domain with local extrema in some feature measurement such as curvature or gradient magnitude. This technique is less sensitive to user-defined thresholds than classic region-growing methods, and may be better suited for fusing different types of features from different data sets. The watersheds technique is also more flexible in that it does not produce a single image segmentation, but rather a hierarchy of segmentations from which a single region or set of regions can be extracted a-priori [START_REF] Vincent | Watersheds in digital spaces: an efficient algorithm based on immersion simulations[END_REF].
Hierarchical segmentation.
This segmentation method [START_REF] Guigues | Scale-sets image analysis[END_REF] introduces a multi-scale theory of piecewise image modeling, called the scale-sets theory, which can be regarded as a region-oriented scale-space theory. It relies on a general formulation of the partitioning problem that involves minimizing a two-term energy of the form D + µC, where D is a goodness-of-fit term and C is a regularization term. Such energies arise from basic principles of approximate modeling and relate to the operational rate/distortion problems involved in lossy compression. An important subset of these energies constitutes a class of multi-scale energies, in the sense that the minimal cut of a hierarchy gets coarser and coarser as the parameter µ increases. This allows a procedure to be defined that finds the complete scale-sets representation of this family of minimal cuts. Considering then the construction of the hierarchy from which the minimal cuts are extracted, one ends up with an exact and parameter-free algorithm to build scale-sets image descriptions whose sections constitute a monotone sequence of upward global minima of a multi-scale energy; it is called the "scale climbing" algorithm. This algorithm can be viewed as a continuation method along the scale dimension or as a minimum pursuit along the operational rate/distortion curve. Furthermore, the solution verifies a linear scale invariance property which allows the tuning of the scale parameter to be completely postponed to a subsequent stage.
Segmentation of optical images
Many algorithms have been developed for the over-segmentation of optical RGB (Red-Green-Blue) images. Superpixels methods [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF] are specific over-segmentation methods that put a special effort on the size and shape of the extracted objects. Pseudo-superpixels can be generated using segmentation algorithms [START_REF] Shi | Normalized cuts and image segmentation[END_REF][START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF][START_REF] Comaniciu | Mean shift: A robust approach toward feature space analysis[END_REF][START_REF] Vedaldi | Quick shift and kernel methods for mode seeking[END_REF][START_REF] Vincent | Watersheds in digital spaces: an efficient algorithm based on immersion simulations[END_REF]. A special attention must be paid to the choice of the parameters. These methods produce superpixels that might not be homogeneous in terms of size and shape but with a good visual delineation. Superpixels algorithms have then been developed.
They allow the control of one or several properties of the superpixels, namely their number, their size and their shape [START_REF] Moore | Superpixel lattices[END_REF][START_REF] Veksler | Superpixels and supervoxels in an energy optimization framework[END_REF][START_REF] Levinshtein | Turbopixels: fast superpixels using geometric flows[END_REF][START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF].
For the over-segmentation of the optical images, both "traditional" (i.e., non-superpixel) and superpixel methods are used. The three methods that have been employed for the segmentation of RGB VHR optical images are detailed below:
• "PFF": A segmentation algorithm based on graph-cut [START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF]),
• Quickshift: A segmentation algorithm based on the mean shift algorithm [START_REF] Vedaldi | Quick shift and kernel methods for mode seeking[END_REF],
• SLIC: A genuine superpixel algorithm working in the CIELab space [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF].
PFF.
This algorithm defines a predicate for measuring the evidence for a boundary between two regions, using a graph-based representation of the image [START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF]. An efficient segmentation algorithm based on this predicate is employed; although it makes greedy decisions, it produces a segmentation that satisfies global properties. The algorithm runs in near linear time with respect to the number of graph edges. An important characteristic of the method is its ability to preserve details in low-variability image regions while ignoring details in high-variability regions.
Quickshift
Quickshift [START_REF] Vedaldi | Quick shift and kernel methods for mode seeking[END_REF] is a kernelized version of a mode-seeking algorithm, similar in concept to mean shift [START_REF] Comaniciu | Mean shift: A robust approach toward feature space analysis[END_REF] (Fukunaga et al., 1975) or medoid shift [START_REF] Sheikh | Mode-seeking by medoidshifts[END_REF]. Quickshift is faster and reports better results than traditional mean shift or medoid shift for standard segmentation tasks. Given N data points $x_1, \dots, x_N$, it computes a Parzen density estimate around each point using, for example, an isotropic Gaussian window of standard deviation σ:
$P(x) = \frac{1}{2\pi\sigma^2 N} \sum_{i=1}^{N} e^{-\frac{\|x - x_i\|^2}{2\sigma^2}}$   (3.7)
Once the density estimate $P(x)$ has been computed, Quickshift connects each point to the nearest point in the feature space which has a higher density estimate. Each connection has a distance $d_x$ associated with it, and the set of connections for all pixels forms a tree, where the root of the tree is the point with the highest density estimate. To obtain a segmentation from the tree of links formed by Quickshift, a threshold τ is chosen and all links of the tree with $d_x > \tau$ are broken. The pixels belonging to each resulting disconnected tree form a segment.
SLIC superpixels.
The SLIC superpixel algorithm [START_REF] Achanta | SLIC superpixels compared to state-of-the-art superpixel methods[END_REF] clusters pixels in the combined five-dimensional color (CIELab color space) and image plane (x, y) space to efficiently generate compact, nearly uniform superpixels. It is basically based on the k-means algorithm. The number of desired clusters corresponds to the number of desired superpixels. The employed distance is a weighted sum of a color-based distance and an image-plane distance. This method produces superpixels achieving a good segmentation quality, measured by boundary recall and under-segmentation error. The benefits of superpixel approaches have already been shown, as they increase classification performance over pixel-based methods. A major advantage of this superpixel method is that the produced segments are compact and regularly distributed.
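The three over-segmentation algorithms described above are all available in scikit-image; the sketch below shows how they could be applied to an ortho-image restricted to its RGB channels. The file name and parameter values are indicative only and do not claim to reproduce the exact settings used in this work.

```python
import numpy as np
from skimage import io
from skimage.segmentation import felzenszwalb, quickshift, slic

# hypothetical input: a 4-band ortho-image tile at 50 cm resolution, RGB channels kept
image = io.imread("ortho_tile.tif")[:, :, :3].astype(np.float64) / 255.0

# PFF graph-based segmentation (Felzenszwalb et al.)
seg_pff = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)

# Quickshift mode-seeking segmentation (Vedaldi et al.)
seg_qs = quickshift(image, kernel_size=5, max_dist=10, ratio=0.5)

# SLIC superpixels computed in the CIELab space (Achanta et al.)
seg_slic = slic(image, n_segments=2000, compactness=10, start_label=1)

print(seg_pff.max() + 1, len(np.unique(seg_qs)), seg_slic.max(), "segments")
```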
Feature selection
Due to the high number of possible features involved (95), an automatic Feature Selection (FS) step has been also integrated. This selection is composed of two steps: the choice of the number of features to select and the selection of the feature subset itself. Indeed, the choice of the number of features is very important because it enables to greatly decrease the classification complexity and computation times without limiting the classification quality.
An incremental optimization heuristic called the Sequential Forward Floating Search (SFFS) [START_REF] Pudil | Floating search methods in feature selection[END_REF] algorithm is used for both steps. The SFFS algorithm has two main advantages:
• it can be used with many FS scores (in this study, the Kappa coefficient of the classification),
• it enables to access to the evolution of the classification score/accuracy according to the number of selected features.
Here, the classification accuracy is assessed through the Kappa coefficient of the RF classifier and the SFFS algorithm selects p features by maximizing this FS score criterion. In order to retrieve the optimal number of features, the SFFS algorithm was performed n times on different sample sets (i.e., training pixels from different areas of France that exhibit different tree species) with p equal to the total number of features (95, but in order to reduce the computation times of this step, one can choose a lower value). The classification accuracy a s is conserved for each selection of s features (s ∈ 1, p )
and averaged over the n iterations. The optimal number of features $n_{opt}$ is obtained as follows:
$n_{opt} = \underset{s \in \llbracket 1, p \rrbracket}{\operatorname{argmax}} \; \frac{1}{n} \sum_{i=1}^{n} a_s$.   (3.8)
It corresponds to the size of the selection of s features having the maximal mean accuracy.
The feature selection was carried out for different areas of interest (one selection for each area)
with p = n opt . The selected features then are used for both the classification (at the object level) and the energy minimization (at the pixel level) steps.
The feature selection has been carried out only once, on different training sets (i.e., training pixels from 3 different regions of France that exhibit different tree species, cf. Chapter 4, Section 4.1). The selected features are then retained as the only relevant features. It is therefore not necessary to compute the other features. Such a process greatly decreases the computation times, but it might also reduce the performance of the classification, since the selected features are relevant for many regions but not necessarily for a given area of interest.
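A minimal sketch of such a floating feature selection is given below, assuming the object-level features are stored in a samples-by-features matrix X with labels y. It relies on mlxtend's SequentialFeatureSelector in its floating variant and scores candidate subsets with the Kappa coefficient of a Random Forest; it mirrors the criterion described above without claiming to reproduce the exact implementation.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, make_scorer
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

def select_features(X, y, n_features=20):
    """Sequential Forward Floating Selection driven by the RF Kappa score."""
    rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    kappa = make_scorer(cohen_kappa_score)
    sffs = SFS(rf,
               k_features=n_features,   # size of the subset to select
               forward=True,
               floating=True,           # SFFS: allows conditional exclusions
               scoring=kappa,
               cv=3,
               n_jobs=-1)
    sffs = sffs.fit(X, y)
    return list(sffs.k_feature_idx_), sffs.k_score_
```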
Classification
The classification is performed using a supervised classifier, in order to discriminate the vegetation types (mainly tree species) provided by the existing forest LC DB. Only the "pure" polygons (i.e., polygons containing at least 75% of a single tree specie) are employed. The classifier used in this study is the Random Forest (RF), implemented in OpenCV [START_REF] Bradski | Learning OpenCV: Computer vision with the OpenCV library[END_REF], as it has been extensively shown relevant in the remote sensing literature [START_REF] Belgiu | Random Forest in remote sensing: A review of applications and future directions[END_REF][START_REF] Fernández-Delgado | Do we need hundreds of classifiers to solve real world classification problems[END_REF]. It was compared to Support Vector Machine (SVM) [START_REF] Dechesne | Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery[END_REF], and provided similar results while being faster. The RF has many advantages:
• It can handle a large number of classes. In our problem more than 20 classes can be envisaged but in most cases the studied areas exhibit between 4 and 7 different classes.
• It is an ensemble learning method, which brings a high level of generalization.
• It can handle a large number of features, even if they are derived from different remote sensing modalities (which is the case in this study).
• The feature importance can be easily assessed.
• The posterior probabilities/uncertainties are natively obtained.
• Only few parameters are needed, the tuning of the parameters is not investigated in this study since a standard parametrization produces satisfactory results.
• It is robust to noise and mislabels [START_REF] Mellor | The performance of random forests in an operational setting for large area sclerophyll forest classification[END_REF][START_REF] Mellor | Using ensemble margin to explore issues of training data imbalance and mislabeling on large area land cover classification[END_REF] (Mellor et al., 2015).
The SVM classifier is very efficient (Vapnik, 2013); however, the training of such a classifier is time consuming, especially when the number of training samples increases (which is the case when the learning is based on a database). Furthermore, when using different types of features (here spectral and lidar features), special attention should be paid to the employed kernel. The RF classifier is therefore preferred because it natively handles features of different types and behaves better when the number of samples increases.
The outputs of the classification are:
• a label map that allow to evaluate the accuracy of the classification,
• a probability map (posterior class probabilities for each pixel/object). This probability map is the main input for the subsequent regularization step.
In order to overcome the issues of the generalized and partly outdated forest LC DB, a strategy is proposed to automatically select the most suitable training set out of the existing forest land-cover map, subsequently improving the classification accuracy. Additionally, in order to reduce the classification complexity and computation times, a feature selection has previously been carried out to identify an "optimal" feature subset.
Training set design
Using an existing LC DB to train a model is not straightforward [START_REF] Pelletier | Assessing the robustness of Random Forests to map land cover with high resolution satellite image time series over large areas[END_REF] (Gressin et al., 2013b) [START_REF] Radoux | Automated Training Sample Extraction for Global Land Cover Mapping[END_REF][START_REF] Maas | Using label noise robust logistic regression for automated updating of topographic geospatial databases[END_REF]. First, it can locally suffer from a lack of semantic information (not all the classes of interest are present). Secondly, this database may also be semantically mislabeled and, more frequently, geometrically incorrect: changes may have happened (forest cut or growth, see Figure 3.6a) and the geodatabase may have been generalized (see Figure 3.6b), resulting in sharp polygon vertices that do not exactly correspond to the class borders. Thirdly, in many forest LC databases, polygons of a given vegetation type (mainly tree species) may contain other vegetation types in a small proportion. This is the case in the French forest LC DB, where a vegetation type is assigned to a polygon if the latter is composed of at least 75% of this type. Such errors might highly penalize the classification [START_REF] Carlotto | Effect of errors in ground truth on classification accuracy[END_REF], especially if random sampling is performed to constitute the training set. Deep neural networks can handle noisy labels without a specific approach, but they require larger amounts of training samples.
In order to correct the potential errors of the LC database, or to discard pixels that do not correspond to the class of interest, a k-means clustering has therefore been performed for each label over the training area (i.e., the area covered by the polygons of this label). It is assumed that erroneous pixels are present but in a small proportion, and that the main clusters therefore correspond to the class of interest. Let $p_{i-c,t}$ be the i-th pixel of the vegetation type (mainly tree species) t in the cluster c of the k-means. The pixels $P_t$ used to train the model for the vegetation type (mainly tree species) t correspond to the set:
$P_t = \{ p_{i-c,t} \mid c = \underset{c \in [1,k]}{\operatorname{argmax}} \; \operatorname{Card}(\cup_i \, p_{i-c,t}) \}$.   (3.9)
That is to say, only the samples belonging to the main k-means cluster among the training pixels of one class are kept in the training dataset. Such a selection is rather exclusive and does not allow much variability in the selected training pixels. Thus, a more reasonable way to select training pixels is to keep the pixels such that:
$P_t = \left\{ p_{i-c,t} \;\middle|\; \dfrac{\operatorname{Card}\left( \cup_{\forall i} \, p_{i-c,t} \right)}{\operatorname{Card}\left( \cup_{\forall i, \forall c_k} \, p_{i-c_k,t} \right)} \geq u_p \right\}$,   (3.10)
where $u_p$ is a proportion defined by the user. Such a selection is equivalent to keeping the pixels that belong to a cluster whose size represents a significant proportion of the total number of pixels labeled as t in the forest LC DB. In this case, the number of clusters for the k-means can be increased. In the experiments, $u_p$ was set to 0.25 (i.e., 25%) and k to 4. The second formulation is preferred since it allows some within-class variability to be kept.
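A minimal sketch of this training-set design is given below, assuming the pixel features of one vegetation type are stacked in a matrix; the cluster-proportion test follows Eq. (3.10) with the values of u_p and k given above, and the function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def design_training_set(features_t, k=4, u_p=0.25):
    """Keep only the pixels of class t that fall in sufficiently large k-means clusters.
    features_t: (n_pixels, n_features) array of pixels labeled as t in the forest LC DB."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features_t)
    labels = km.labels_
    proportions = np.bincount(labels, minlength=k) / len(labels)
    kept_clusters = np.where(proportions >= u_p)[0]   # clusters satisfying Eq. (3.10)
    keep_mask = np.isin(labels, kept_clusters)
    return features_t[keep_mask], keep_mask
```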
Training and prediction
For the prediction, the RF classifier is employed. It is performed at the object level. On each area, the RF model is trained using 1000 samples per class. The samples are randomly drawn from the pixels selected by the training set design. The selected features are employed for the construction of the model and for the prediction.
Four main parameters are needed for the construction of the RF model. Firstly, the maximum number of trees in the forest needs to be set. Typically, the more trees are employed, the better the accuracy; however, the improvement generally diminishes and plateaus past a certain number of trees, while the prediction time increases linearly with the number of trees. Here, 100 trees are computed in order to reach a sufficient accuracy without huge computation times. The maximal depth of the trees then needs to be set: a low value will likely underfit and, conversely, a high value will likely overfit. The optimal value can be obtained using cross-validation or other suitable methods, but a depth of 25 is commonly accepted. The number of features employed at each tree node to find the best split(s) is another parameter; it is commonly set to the square root of the total number of features. Since 20 features are employed for the classification, this number is here set to 4. Besides, the tree computation can be stopped when each leaf of the tree has reached a sufficient purity or a maximum population. Here, the tree computation stops when the OOB error is below 0.01. A sketch with an equivalent parametrization is given after the parameter list below.
To summarize, the parameters of the RF employed in this study are the following:
• Number of trees: 100.
• Maximum depth of the trees: 25.
• Sufficient accuracy (OOB error): 0.01.
• Number of features at each tree node: 4
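The sketch below illustrates a comparable parametrization with scikit-learn's RandomForestClassifier rather than the OpenCV implementation used in this work; the OOB-error stopping criterion has no direct equivalent there, so the OOB score is only reported. The balanced per-class sampling value and the variable names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_and_predict(X_train, y_train, X_objects, n_per_class=1000):
    """Train an RF on the designed training set, then predict object-level labels
    and posterior probabilities (the probability map used for the regularization)."""
    # balanced random draw of n_per_class samples per class
    idx = np.hstack([
        np.random.choice(np.where(y_train == c)[0],
                         size=min(n_per_class, np.sum(y_train == c)),
                         replace=False)
        for c in np.unique(y_train)])
    rf = RandomForestClassifier(n_estimators=100,   # number of trees
                                max_depth=25,       # maximal depth of the trees
                                max_features=4,     # features tested at each node
                                oob_score=True,
                                n_jobs=-1, random_state=0)
    rf.fit(X_train[idx], y_train[idx])
    labels = rf.predict(X_objects)                  # label map (per object)
    probas = rf.predict_proba(X_objects)            # posterior probability map
    return labels, probas, rf.oob_score_
```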
Regularization
The obtained classification might remain very noisy due to the complexity of the tree species discrimination task. The forest stand could not be clearly defined and a refinement should be employed in order to smooth the classification results. Many smoothing methods have been proposed and are evaluated in Chapter 5. They can be based on the classification label map results or on a class membership probability map. More advanced framework can include object contours [START_REF] Ronfard | Region-based strategies for active contour models[END_REF][START_REF] Chan | Active contours without edges[END_REF] or output of object detection for higher level regularization. Here for such unstructured environments, it does not appear to be relevant.
The smoothing is here performed at the pixel level. Both local and global methods have been investigated. The local methods only consider a limited number of pixels while the global methods consider all the pixels of the image.
The global smoothing method uses only a small number of relations between neighboring pixels (8-connectivity) to describe the smoothness. An energy is computed, and its minimum leads to a smoothed result. The energy E(I, C, A) for an image I is expressed as follows:
$E(I, C, A) = \sum_{u \in I} E_{data}(u, P(C(u))) + \gamma \sum_{u \in I,\, v \in N_u} E_{pairwise}(u, v, C(u), C(v), A(u), A(v))$,   (3.11)
where A(u) is a vector of feature values at pixel u, selected so as to constrain the problem according to a given criterion (height for instance), and $N_u$ is the 8-connectivity neighborhood of the pixel u. When γ = 0, the pairwise prior term has no effect in the energy formulation; the most probable class is attributed to each pixel, leading to the same result as the classification output. When γ > 0, the resulting label map becomes more homogeneous.
In spite of having only connections between local neighbors, the optimization propagates information over large distances [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF]. The problem is NP-hard to solve but an efficient algorithm called Quadratic Pseudo-Boolean Optimization (QPBO) allows to efficiently solve it.
The principal difficulty lies in the formulation of $E_{data}$ and $E_{pairwise}$; several formulations are investigated in Chapter 5. The ones that produce the best results and were finally retained are defined as follows:
$E_{data} = 1 - P(C(u))$.   (3.12)

$E_{pairwise}(C(u) = C(v)) = 0, \qquad E_{pairwise}(C(u) \neq C(v)) = \frac{1}{n} \sum_{i=1}^{n} \exp(-\lambda_i \, |A_i(u) - A_i(v)|)$,   (3.13)
where $A_i(u)$ is the value of the i-th feature at pixel u and $\lambda_i \in \mathbb{R}^{+*}$ is the importance given to feature i in the regularization ($\lambda_i$ is set to 1 for all i). To compute such an energy, the features first need to be normalized (i.e., zero mean, unit standard deviation) in order to ensure that they all have the same range.
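To make the energy terms concrete, the sketch below evaluates Eq. (3.11)–(3.13) for a label map and approximately minimizes the energy with a simple iterated conditional modes (ICM) sweep instead of QPBO/graph cuts, which would require a dedicated optimization library. It is only a sketch of the formulation under these assumptions, not the optimizer actually used in this work.

```python
import numpy as np

def pairwise_weight(feat_u, feat_v, lambdas):
    """Exponential-feature pairwise cost between two neighboring pixels (Eq. 3.13)."""
    return np.mean(np.exp(-lambdas * np.abs(feat_u - feat_v)))

def icm_regularize(probas, feats, gamma=10.0, n_iter=5):
    """probas: (H, W, n_classes) posterior map; feats: (H, W, n_feats) normalized features."""
    h, w, n_classes = probas.shape
    lambdas = np.ones(feats.shape[-1])
    labels = probas.argmax(axis=-1)                 # start from the raw classification
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]     # 8-connectivity neighborhood
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                costs = 1.0 - probas[y, x]          # unary term (Eq. 3.12) per class
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wgt = gamma * pairwise_weight(feats[y, x], feats[ny, nx], lambdas)
                        # penalize labels that differ from the neighbor's current label
                        costs += wgt * (np.arange(n_classes) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```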
Conclusion and discussions
In this chapter, an automatic framework for the extraction of forest stand has been proposed. It is composed of four main steps. An over-segmentation is firstly performed in order to retrieve small object that will be employed for subsequent classification. Multi-modal features are computed at the pixel level and object level. Because of the high number of features, a feature selection is also carried out in order to have a more efficient classification and reduce the computational load and time, but also in order to assess the complementarity of the multi-modal features. Classification is then performed at the object level since it improves the discrimination of tree species. When training this classifier, a specific attention is paid to the design of the training set, in order to cope with the errors of the Forest LC DB. Finally, a regularization of the label map is performed in order to remove the noise and to retrieve homogeneous forest stands according to a given criteria (here tree species).
Next chapters will come into details about the different steps of the proposed framework, and will investigate several variants, so as to justify the choices and to define at the end the best joint use of VHR optical images and lidar point cloud for forest stand delineation.
The contribution of our framework is multiple:
• The feature selection, which allows the computation times to be reduced. It also allows the relevance of the extracted features and the complementarity of the data sources to be assessed. It is performed using an incremental optimization heuristic called Sequential Forward Floating Search, based on the κ of the Random Forest.
• The classification is performed using the Random Forest classifier. Special attention is paid to the selection of training pixels using an unsupervised classification algorithm (namely k-means).
The training of the classifier is performed with standard parameters.
• The regularization can be performed using local or global methods (see Chapter 5). It is possible to integrate information from the derived features. The regularization smooths the classification, which might be noisy; it is a necessary step in order to obtain relevant forest stands.
The use of deep-based features could be interesting since they produce good discrimination results for remote sensing application [START_REF] Kontschieder | Deep neural decision forests[END_REF]. Furthermore, such methods can also be employed for classification, reporting good results [START_REF] Paisitkriangkrai | Semantic labeling of aerial and satellite imagery[END_REF][START_REF] Workman | A Unified Model for Near and Remote Sensing[END_REF]. However, since we wanted to draw the best from the data sources (especially lidar), handcrafted features have been preferred. Indeed, the integration of lidar is generally limited to a simple nDSM [START_REF] Audebert | Semantic segmentation of earth observation data using multimodal and multi-scale deep networks[END_REF].
In this chapter, the reliability of the proposed framework for the segmentation of forest stands is assessed.
First the data employed in this framework are introduced. The specifications the the remote sensing modalities (VHR optical images and airborne lidar point cloud) are presented, then, the nomenclature of the French Forest land cover (LC) database (DB) is detailed. The test areas where the framework has been employed are also presented.
Experiments are then conducted in order to optimize the first steps of the method (i.e., features reliability, over-segmentation, feature selection and classification). The last step (regularization) will be experimented with detail in Chapter 5.
After validation of the framework, it is also applied several areas from different regions of France, in order to validate its adaptability.
A simple and naive segmentation method is then proposed. It aims at justifying the relevance of our framework, showing that the problem of stand segmentation can not be envisaged as a simple segmentation problem.
Eventually, examples other interesting outputs of the framework, such as a semi-automatic update process and additional features for the enrichment of the Forest LC DB are proposed.
Data
In this section, the specifications of the different input data are presented. Firstly, the two remote sensing modalities are presented. The forest LC DB is then detailed. Finally, the areas where the framework has been tested are presented.
Remote sensing data VHR optical images.
IGN actually acquires airborne images over the whole territory within a 3 years update rate.
Such images are part of the French official state requirement. In this thesis, the images have been ortho-rectified at the spatial resolution of 50 cm. Such resolution allows to derive consistent statistical features from optical images (see Chapter 3). The ortho-images are composed of 4 bands (red: 600-720 nm, green: 490-610 nm, blue: 430-550 nm and near infra-red: 750-950 nm) captured by the IGN digital cameras [START_REF] Souchon | A large format camera system for national mapping purposes[END_REF] and with its own aircraft granting high radiometric and geometric quality.
Airborne Laser Scanning.
IGN also acquires 3D lidar point clouds over various areas of interest (forest, shorelines, rivers).
Here, forested areas have been acquired by an Optech 3100EA device. Such data are employed in order to derive an accurate Digital Surface Model since it is the better solution to obtain it in such environment. The footprint was 0.8 m in order to increase the probability of the laser beam to reach the ground and subsequently acquire ground points, below tree canopy. The point density for all echoes ranges from 2 to 4 points/m 2 . The points are given with an intensity value that correspond to the quantity of energy that came back to the sensor. Such information is hard to interpret since the beam might have been reflected multiple times and it is therefore difficult to calibrate the sensor. Data are usually acquired under leaf-on conditions and fit with the standards used in many countries for large-scale operational forest mapping purposes [START_REF] Kangas | Forest inventory: methodology and applications[END_REF][START_REF] Naesset | Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data[END_REF].
One may have note that superior densities are now recommended to boost metric extraction.
Data registration.
A prerequisite for data fusion is the most accurate alignment of the two remote sensing data sources (Torabzadeh et al., 2014). A frequently used technique is to geo-rectify images using ground controls points (GCPs). A geometric transformation is established between the coordinates of GCPs and their corresponding pixels in the image or 3D point in the point cloud. It is then applied to each pixel/point, so that coordinate differences on those points are reduced to the lowest possible level.
This method can be easily applied and is relatively fast in terms of computation time.
The registration between airborne lidar point clouds and VHR multispectral images was performed by IGN itself using GCPs. This is a standard procedure, since IGN operates both sensors and has also a strong expertise in data georeferencing. IGN is the national institute responsible for geo-referencing in France for both airborne and spaceborne sensors. No spatial discrepancies are notice in the processed areas.
National Forest LC DB: "BD Forêt R "
The IGN forest geodatabase is a reference tool for professionals in the wood industry and for environmental and spatial planning stakeholders.
The forest LC database is a reference vector database for forest and semi-natural environments.
Produced by photo-interpretation of VHR CIR optical images completed with extensive field surveys, the forest LC database is realized following the departmental division in the metropolitan territory.
• Forest LC DB, version 1
The version 1 of the forest LC DB was developed by photo-interpretation of aerial images in infrared colors. Its minimum mapped surface area is 2.25 ha. It describes the soil cover (through the structure and the dominant composition of wooded or natural formations), based on a departmental nomenclature ranging from 15 to 60 classes according to the vegetation diversity of the mapped department. Constituted until 2006 at the departmental level, it is available throughout the metropolitan territory. For more than half of the departments, several versions of this version 1 of the forest LC database are available. This version of the forest LC DB is not employed in this work since it highly depends on the area investigated.
• Forest LC DB, version 2
The forest LC DB version 2 has been produced since 2007 by photo-interpretation of VHR CIR optical images. It describes the forest at the national level: the classes are the same for the whole metropolitan part of the country, which was not the case for version 1 of the forest LC DB. It assigns a vegetation formation type to each mapped area of more than 5000 m². Its main characteristics are the following:
• A national nomenclature of 32 items based on a hierarchical breakdown of the criteria, distinguishing, for example, pure stands of the main forest tree species in the French forest (see Figure 4.1). Within the 32 classes, there are mixed forests (with no specific information on tree species) that are not employed in our analysis, and 2 classes of low vegetation formations that do not correspond to forest but are employed during the classification.
• A vegetation formation type assigned to each mapped area greater than or equal to 50 ares (5000 m²).
Areas of interest
In order to assess to relevance of the proposed framework, we have selected 3 different regions of France (see Figure 4.2) which exhibit specific characteristics. The color code of the Forest LC DB is presented in Table A.1.
Gironde.
In the Gironde department, and more generally in the Landes forests (South-Western Atlantic coast in France), the dominant tree specie is the maritime pine. A second specie, oaks, cohabits with maritime pines. In the XIX th century, maritime pines were planted in the Landes (called Landes de Gascogne). The goal was multiple: clean up the marshland, retain the dunes, and provide an interesting tree exploitation to a population having at the time few sources of income.
In the studied area, two main species (namely maritime pine (7, ) and deciduous oaks (1, )) are reported, but also a less common species: elm (labeled as other pure hardwood (6, )). A low vegetation class (namely woody heathland (17, )) is also represented. After a visual inspection of the forest LC DB and of the remote sensing data, it appears that the stands of maritime pine (7, ) exhibit two interesting aspects: • clear cuts, corresponding to areas labeled as maritime pine but that are not planted (or not replanted yet),
• variation in height of trees, that is to say, a single polygon can contain stands of different height.
Ventoux.
The Ventoux is a mountainous area in South Eastern of France. The vegetation is diverse because of its location and weather. Indeed, it is a mountainous formation in a Mediterranean environment.
Thus many tree species are represented, making it an interesting area for forest stand segmentation. Two areas were selected, since they exhibit various species. The first area (called Ventoux1) is large (2.4×2.5 km) and exhibits a large number of vegetation types (5 tree species and 2 herbaceous formation) and is therefore interesting in order to validate the framework. The second area (called Ventoux2) also exhibits a large number of vegetation types (4 tree species an 1 herbaceous formation) and is interesting in order to validate that under-represented classes can be well retrieved with the proposed framework. In the two proposed areas, the represented vegetation types are: Deciduous oaks (1, ), Evergreen oaks (2, ), Beech (3, ), Black pine (9, ), Mountain pine or Swiss pine (11, ), Larch (14, ), Woody heathland (17, ) and Herbaceous formation (18, ) (6 species, 2 low vegetation formations).
Vosges.
The Vosges are a mountainous area in Eastern France (up to 1,500 m). The vegetation is very varied because of the environmental conditions (altitude, climate, topography, soils type, etc.). Such forested areas are therefore very interesting for forest stand extraction since many species cohabit there. Five areas have been processed.
Vosges1 is the main area of interest of 1 km 2 . Most of the framework has been validated on this area. Indeed, it is a very interesting area for 3 reasons:
• It contains four tree species, which is quite important for a 1 km 2 .
• The stands are adjacent to each other, thus, it is possible to assess if the borders are well retrieved by the proposed framework.
• The stands exhibit height variation. Thus, stands of same specie but with different height might be extracted.
The second area (Vosges2) is a large area ( km 2 ) that exhibit a large number of species (6 species, 1 low vegetation formation) and is therefore interesting in order to validate the framework. In the other areas (Vosges3, Vosges4 and Vosges5) a relevant number of species are represented (4 or 5) but the stands are most of the time not adjacent. However, such areas allow to assess how the proposed framework operates in the mixed stands (i.e., the unlabeled areas). In the five proposed areas, the vegetation types represented are: Deciduous oaks (1, ), Beech (3, ), Chestnut (4, ), Robinia (5, ), Scots pine (8, ), Fir or Spruce (13, ), Larch (14, ), Douglas fir (15, ), Woody heathland (17, ) (8 species, 1 low vegetation formation).
Framework experiments
The proposed framework is composed of the different steps defined above; they have been evaluated on a single area (Vosges1). Indeed, this area is very interesting since it exhibits forest stands of pure species that are adjacent to each other. Thus, the retrieval of the borders can be assessed. In this section, each step is evaluated at 2 levels: the direct output is first considered, and the impact on the final segmentation is also investigated. In order to simplify the reading of the captions, when no specific information is provided, the framework is employed with the following parameters, defining a standard pipeline:
• Feature selection:
-Selection of training samples, to cope with the Forest LC DB errors, -Selection of 20 features, -Object-based features (lidar and spectral) obtained from the PFF over-segmentation, -Pixel-based features (lidar and spectral) for the regularization.
• Classification performed with:
-Selection of 500 training samples per class (k = 4, and clusters that account for more than 25% of the total labeled pixels of the processed class are kept), to cope with the Forest LC DB errors, -Object-based features (lidar and spectral) obtained from the feature selection.
• Regularization using global method with:
γ = 10, -Linear data formulation for the unary term,
-Exponential-feature model with pixel-based features obtained from the feature selection.
The validity of the framework is assessed by a visual analysis and 4 standard accuracy metrics that are derived from the confusion matrices. The employed metrics are the Intersection over Union (IoU), the mean F-score, the overall accuracy and the κ coefficient. Since they are standard metrics for the evaluation of a classification/object detection, they are not detailed here but in Section C.1 of Appendix C.
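As a reminder of how these four metrics relate to the confusion matrix, a minimal sketch is given below; it assumes flattened arrays of reference and predicted labels and relies on scikit-learn only for the confusion matrix and the κ coefficient.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_metrics(y_true, y_pred):
    """Overall accuracy, kappa, mean F-score and mean IoU from a confusion matrix."""
    cm = confusion_matrix(y_true, y_pred).astype(float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    overall_accuracy = tp.sum() / cm.sum()
    kappa = cohen_kappa_score(y_true, y_pred)
    f_score = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-9)   # per-class F-score
    iou = tp / np.maximum(tp + fp + fn, 1e-9)               # per-class IoU
    return overall_accuracy, kappa, f_score.mean(), iou.mean()
```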
Over-segmentation
The over-segmentation aims at extracting "objects" so as to ease and strengthen subsequent classification task. An accurate object extraction is not mandatory since the labels are refined after. Both 3D and 2D mono-modal solutions are investigated, depending on the input data and the desired level of detail for the objects.
At the beginning, the idea was to extract trees from the lidar point cloud. It seemed meaningful since trees are the basic units of forest stands. Thus, a simple bottom-up method for tree extraction has firstly been proposed. However, a precise extraction is hard to obtain whatever the adopted technique for a large range of forested environment [START_REF] Kaartinen | An international comparison of individual tree detection and extraction using airborne laser scanning[END_REF][START_REF] Wang | International Benchmarking of the Individual Tree Detection Methods for Modeling 3D Canopy Structure for Silviculture and Forest Ecology Using Airborne Laser Scanning[END_REF]. That is why a comparison with other segmentation algorithms has then been investigated to identify whether a tree detection was necessary or whether only homogeneous objects were sufficient. The idea here is to compare different over-segmentation methods. The advantage of using an object-based analysis instead of a pixel-based analysis is discussed in Section 4.2.3. The results of the over-segmentation provided by the several segmentation algorithms are presented in Figure 4.11.
In all the tested methods, the resulting segments are relevant since they all represent small homogeneous objects. In a given over-segmentation, objects are mostly not of the same size and shape, except for the SLIC superpixels (indeed, the aim of the methods being to obtain such uniform segments). The PFF algorithm produces objects with rough borders that follow the more precisely the borders observed in the image.
From a qualitative analysis, as expected, it appears that no segmentation method performs better than the others. Furthermore, it is impossible to evaluate the segmentation quantitatively since no over-segmentation ground truth is available. Thus, the different segmentation methods are compared through the results they produce after the object-based classification (which indicates how the objects are relevant for classification) and after the regularization (how the objects impacts the final results). These results are presented in Figure 4.12, 4.13 and 4.14.
From the results, it appears that the choice of the over-segmentation method does not significantly impact the final results (after regularization). The watershed applied to the nDSM tends to under-perform the other algorithms (94% of overall accuracy after regularization compared to 97% for the other methods, with similar results observed using other metrics). The most visible contribution of the choice of the over-segmentation method lies in the classification results (before smoothing). The hierarchical segmentation and the watershed have poorer classification results compared to the other methods (respectively 88% and 83% in terms of overall accuracy, compared to values ≥ 91% for the 4 other methods, with similar results observed using other metrics). From the experiments on the over-segmentation algorithms, several conclusions can be drawn:
• The watershed algorithm applied to the nDSM is not relevant for this framework.
• A precise tree delineation is not mandatory, a coarse extraction is sufficient in order to obtain relevant objects for a subsequent classification that will be regularized
• There is no preferential data type to operate the segmentation on, since both lidar (especially nDSM) and VHR optical image produce relevant over-segmentation that can be employed for classification.
It leads to the conclusion that the choice of the over-segmentation algorithm should not be guided by performance but on how complicated the tuning step is and how long it takes to operate it.
Feature selection
In the previous step, a high number of features has been derived (95). Thus, an automatic feature selection step is carried out for 4 main reasons.
• It allows to determine how many features are needed for an optimal classification.
• It shows the complementarity of the data sources (optical images and lidar).
• It permits to understand which features are interesting for tree species classification.
• It reduces the computational loads and times.
How many features for the classification?
It is aimed here at obtaining a number of features for an optimal classification. The idea is to run the feature selection N times using all the features. Let $a_{n,k}$ be an accuracy metric for the classification of the n-th iteration using the k best features ($k \in \llbracket 1, K \rrbracket$, where K is the total number of available features). The optimal number of features $k_{opt}$ is defined as follows:
$\forall k \in \llbracket 1, K \rrbracket, \quad \sum_{n=1}^{N} a_{n,k} \leq \sum_{n=1}^{N} a_{n,k_{opt}}$   (4.1)
Here, K = 95 and N = 50. This experiment has been carried out on different areas (i.e., using training and validation samples from different geographical areas), and the optimal number of features found is $k_{opt}$ = 20 (see Figure 4.15). When k is too low, poor classification results are obtained. When the number of selected features increases, the classification performs better, but when employing too many features (here, more than 20), the accuracy decreases. This is a well-known phenomenon [START_REF] Bellman | Adaptive control processes: a guided tour[END_REF][START_REF] Hughes | On the mean accuracy of statistical pattern recognizers[END_REF]. Thus, in all the following experiments, the classification is performed using only 20 features.
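A minimal sketch of this criterion, assuming the accuracy values of the N feature-selection runs are stored in an N×K array, could be:

```python
import numpy as np

def optimal_number_of_features(accuracies):
    """accuracies: (N, K) array, accuracies[n, k-1] = score of run n with the k best features.
    Returns k_opt maximizing the mean accuracy over the N runs (Eq. 4.1)."""
    mean_curve = accuracies.mean(axis=0)
    return int(np.argmax(mean_curve)) + 1   # +1 because k starts at 1
```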
Complementarity of data sources.
Once the optimal number of features was determined, the feature selection was performed 40 times over all the test areas in order to retrieve the most relevant features, in order to obtain statistically relevant results. The retained attributes are presented in Figures 4.16,4.17 and 4.18. On average, 61% of the selected features are derived from the spectral information and 39% from the lidar information for a single selection (i.e., in a random selection of 20 features, 12 are derived from VHR optical images and 8 from lidar). This shows the complementarity of both remote sensing data.
Features for tree species classification.
For the spectral information, over the 40 selections, the features derived from the original band set are more relevant than the ones generated from the vegetation indices: the near-infrared derived features represent 18% of the selected spectral features, 16% for the red and the green, 15% for the blue and the DVI, only 11% for the NDVI and 10% for the RVI. The most relevant statistical feature for the spectral information is surprisingly the minimum (17% of the spectral selection). The maximum (12%), the median (11%), the mean (11%) and the standard deviation (10%) are also particularly relevant. The other statistics are each selected less than 9% of the time (for more details, see Figure 4.17). For the lidar information, the most relevant feature is surprisingly the intensity, selected in each of the 40 selections (12% of the lidar selection, 5% of the total selection). This feature highly depends on the calibration of the sensor; thus, when using another sensor, the intensity might return different results and would not be adapted for an accurate classification. The standard deviation (8% of the lidar selection), the maximum (7%) and the densities (5% and 6%) are also relevant. The other lidar-derived features each count for less than 4%. For more details, see Figure 4.16.
Reduce computational loads and times.
It is obvious that processing a reduced number of features (20 instead of 95) reduces the computational loads and times. Furthermore, if an optimal feature subset is found, only the concerned features need to be computed. Thus, the feature computation step would be reduced to a minimum step by computing only the relevant features.
Classification
In order to discriminate the vegetation types (mainly tree species) provided by the existing forest LC DB, a supervised classification is carried out, since such information about tree species is not straightforward to extract. Here, the classification is composed of two steps, the selection of training samples to cope with database errors and generalization and the classification using the Random Forest classifier (i.e., model training). The classification is mainly performed using standard RF classifier [START_REF] Bradski | Learning OpenCV: Computer vision with the OpenCV library[END_REF]. Tests have also been conducted using a SVM classifier with RBF kernel (Vapnik, 2013), leading to similar results [START_REF] Dechesne | Forest stand segmentation using airborne lidar data and very high resolution multispectral imagery[END_REF]. The classification can be impacted by two factors:
• The use of object-based or pixel-based features.
• The pixel-based classification (Figure 4.19c) appears more noisy than the object-based classification (Figure 4.19d), as expected. However, the pixel-based classification already provides bad discrimination results (overall accuracy: 70.48%, κ: 0.5, mean F-score: 50%, IoU: 38.1%) since many confusions are reported. Conversely, even if the objects are roughly extracted (no specific attention is paid to the relevance of the extracted objects), the object-based classification produces more spatially consistent labels (overall accuracy: 93.14%, κ: 0.86, mean F-score: 88.19%, IoU: 79.79%). Such results impacts the final output (see Figures 4.19e and 4.19f and Tables C.17 and C.18). The regularization allows to greatly improve the bad results of the pixel-based classification (overall accuracy: 91.94%, κ: 0.84, mean F-score: 84.74%, IoU: 71.13%). However, when regularizing an object-based classification, the results are still better (overall accuracy: 97.44%, κ: 0.95, mean F-score: 94.04%, IoU: 88.97%).
Training set design.
The selection of the training pixels is an important step for the classification. It is a two sided problem:
• If the selected pixels are randomly taken from the polygons of the forest LC, some might not correspond to the target class, leading to confusions in the final classification.
• If the pixels are selected using a too discriminative criteria, the variability of the target class will not be taken into account, also leading to confusions in the final classification (over-fitting).
Figure 4.20 shows the training pixels that have been selected by the k-means algorithm. Here, k was set to 4; in order to conserve some variability in the selected set of pixels, the clusters that account for more than 25% of the size of the processed class are kept. Most pixels from the forest LC DB are retained. However, the pixels that are excluded from the training set visually appear as erroneous pixels. Indeed, they mostly correspond to:
• shadows in the optical images,
• gaps in the canopy (retrieved thanks to the lidar data), • pixels that are visually different from the other pixels of the considered class (i.e. other minor species).
The selection of training pixels is beneficial since it removes the obviously irrelevant pixels from the training set while maintaining a certain variability within classes. Without this selection, the classification results are worse (overall accuracy: 78.98%, κ: 0.62, mean F-score: 60.44% and IoU: 47.14%) and many confusions are reported (especially for Chestnut (4, ) and Robinia (5, )). The regularization attenuates the errors, but the result remains worse than the one obtained with the selection of training pixels (overall accuracy: 94.67%, κ: 0.89, mean F-score: 90.17% and IoU: 82.61%); confusions are still reported for Robinia (5, ).
Regularization
The obtained classification might remain very noisy due to the complexity of the tree species discrimination task. The forest stand could not be clearly defined and a refinement should be employed in order to smooth the classification results. Regularization is the final step: it smooths the former classification. Further discussions an analysis are proposed in Chapter 5 to justify the used regularization framework and its parametrization. The smoothing of the classification is performed using local or global methods. Here, we only aim to validate the relevance of this step showing examples of results and illustrating the influence of the smoothing parameter γ. Thus, only the best results are presented and discussed.
It appears that the global method produces the best results (+4% and +2% in terms of overall accuracy compared to filtering and probabilistic relaxation respectively). At this step, only the choice of the parameter γ is needed. It influences the final results and more precisely how smooth the borders of the resulting segments are.
Influence of γ
The effect of the parameter γ is presented in Figures
Computation times
In the proposed framework, the object-extraction step is the only step that exhibits different computation times depending on the over-segmentation method employed. All these methods are fast (less than 5 minutes for a 1 km 2 area), except for the tree extraction from the Lidar point cloud. Indeed, the tree extraction needs to iterate several times on all the points from the lidar cloud, leading to high computation times (about 1h30 for a 1km 2 area). These computation times, with regard to results of the over-segmentation methods confirm that the tree extraction is not necessary. It produces similar results compared to other methods, but with higher computation times.
For the other steps of the framework, the computation times are presented in Table 4.1. The computation of the features is the most time-consuming step (2 h). The feature selection (1 h) also appears quite long here, but it can easily be reduced if necessary (e.g., by employing another method; the implementation of the different steps of the framework can also be improved). These times are given for the retrieval of the best features within the whole feature set. However, once they are identified, only a limited number of optimal features needs to be computed, which results in a decrease of the computation times (fewer features are computed and no feature selection is carried out). Initially, the proposed framework takes 4 h for 1 km²; with the optimal features identified, it takes 1 h 30.
Step                          Computation time
Lidar features (all)          ∼ 1 h
Optical features (all)        ∼ 1 h
Object-based feature map      ∼ 10 min
Final results on multiple areas
In the previous section, results have been presented on a single zone in order to evaluate the different possibilities of the framework and find out optimal schemes for forest stands extraction. In this section, results are presented over different regions of France using only the best configuration (see Section 4.2).
Gironde
The area and the results of the framework are presented in Figure 4.24. The confusion matrices and accuracy metrics for the classification and regularization are presented in Table 4. In this area, a lot of confusions are reported. The main class (namely maritime pine (7, )) is well retrieved. The confusions are due to the over-representation of the maritime pines (7, ), which exhibit a high variability (different heights). Furthermore, some areas labeled as maritime pine (7, ) in the forest LC DB have probably been harvested. They clearly appear as bare soils that can be easily confused with Woody heathland (17, ). However, from a visual point of view, the results are coherent. Such errors are easily detected automatically and can be employed for the update of the forest LC DB or the detection of clearcut areas. Furthermore, it is interesting to note that the stand of elm, labeled as other hardwood (6, ), is well retrieved.
The poor results observed can be explained:
• The intense harvesting in this area, leading to many clearcut that can be confused with bare soil by the classifier.
• The major forest damage caused by Cyclone Klaus in January 2009.
Ventoux
The results of the framework over the two areas are presented in Figures 4.25 and 4.26. The confusion matrices and accuracy metrics of the classification and regularization are respectively presented in Tables 4.4 and 4.5 for Ventoux1, and in Tables 4.6 and 4.
Vosges
Similar results are also observed in this area (see Figures 4.27, 4.28, 4.29 and 4.30, and Tables 4.8, 4.9, 4.10, 4.11, 4.12, 4.13, 4.14 and 4.15).
The results on the large area Vosges2 allow the validation of the proposed framework. The classification is noisy, leading to many confusions (mean F-score: 76.92%, IoU: 63.03%). After regularization, the results are greatly improved, with a nearly perfect match with the forest LC DB (overall accuracy: 96.26%, κ: 0.95, mean F-score: 94.62%, IoU: 90.12%). Precision and recall rates are also improved for each class. Furthermore, small segments that do not exist in the forest LC DB are obtained and are visually coherent. The proposed framework is therefore relevant for the retrieval of forest stands, and it also provides further information about small isolated stands within a larger one. On the last area (Vosges5), the classification results are poor (overall accuracy: 74.7%, κ: 0.58, mean F-score: 66.57%, IoU: 50.52%), probably because of the differences in illumination and height within stands of the same species. Furthermore, the Scots pine (8, ) is composed of a thin strip, leading to the same problems discussed for Ventoux2. After regularization, the results are improved (overall accuracy: 89.76%, κ: 0.83, mean F-score: 85.33%, IoU: 75.84%) but remain similar to results that can be obtained right after classification. Here, two annoying effects are combined:
• a class (namely Scots pine (8, )) has a significant part represented as a thin strip, that is totally merged into an other class (namely Beech (3, )),
• the regularization process has eroded the border of a class (namely Chestnut (4, ))
Validity of the framework
The experiments on multiple areas allow to draw general conclusions on the proposed framework. The most important point is that the framework can be applied to any forested area. It will generally produce relevant results from a visual point of view. Three aspects have to be taken into account when quantitatively evaluating the results.
• The employed Forest LC DB is generalized and may contain errors (e.g., clear cuts). The proposed framework proposes the creation of an optimal training set to cope with such errors. However, such errors remain in the forest LC DB, which is employed for evaluation. Thus, the obtained results are biased by the employment of a potentially incorrect Forest LC DB for evaluation.
• Some stands might be or contain parts represented as a thin strip. The proposed regularization method is likely to merge such strip with the adjacent class, leading to poor results. In order to overcome this problem, the value of γ could be decreased. The strip will be kept but some small segment may also appear, giving a precise information of small forest stands within larger stands, but also probably decreasing the global results.
• Sometimes, the proposed regularization method erodes classes edges, leading to poorer results. Again, decreasing the γ parameter would allow to retrieve the borders more precisely while have the same effects mentioned above.
Can forest stands be simply retrieved?
A framework involving classification and global regularization was proposed and successfully employed to retrieve forest stands. This section aims at demonstrating this complex framework was necessary compared to simpler traditional segmentation tools.
Segmentation of remote sensing data
Two segmentation strategies are employed to try to retrieve the forest stand borders of the forest LC DB:
• The segmentation is applied to the VHR optical images. Thus, the resulting segments will correspond to "stands" that are homogeneous in terms of spectral reflectance. Since the optical images are employed by photo-interpreters in order to derive the forest LC, such segmentation is supposed to produce results similar to the forest LC.
• The segmentation is also applied to the rasterized normalized digital surface model (nDSM) (canopy height without ground relief). Such segmentation would produce "stands" that are homogeneous in term of height.
The results of the segmentation of the VHR optical image using the two segmentation algorithms are presented in Figure 4.32. In both cases, most of the borders found are not consistent with the forest LC DB, and no metrics can be computed. Visually, the hierarchical segmentation seems more relevant than the PFF segmentation. However, the hierarchical segmentation produces small segments due to the high variation of illumination in the image, while the PFF segments are all relatively large (thanks to the m parameter). The high spectral variability due to the very high spatial resolution of the optical images is a hindering factor that is not correctly handled by a direct segmentation technique.

The results of the segmentation of the rasterized nDSM using the two segmentation algorithms are presented in Figure 4.33. Just like for the segmentation of the VHR optical image, most of the found borders are not consistent with the forest LC database. Here, the PFF segmentation visually seems to perform better than the hierarchical segmentation, since the retrieved borders are close to the borders of the forest LC DB.

Since the segmentation of the VHR optical image and of the nDSM does not allow the borders of the forest LC database to be retrieved directly, different values of the parameter µ have been tested for the hierarchical segmentation of the VHR optical images (see Figure 4.34), in order to determine whether the choice of the parameters could be an issue for direct stand segmentation. It appears that decreasing µ does not allow the borders of the forest LC to be obtained; it only leads to an over-segmentation of the image. It can be recalled that such an over-segmentation is employed in the proposed framework, not as a relevant segmentation for stand delineation, but as an input for the object-based classification. It can be concluded that retrieving stand borders is not straightforward. The generalization of the forest LC DB and the level of detail it offers are not the cause of the wrong delineation obtained by the direct segmentation of the input data. The stands are not defined by their borders but by the tree species they are composed of, and segmentation algorithms do not grant access to such information.
The two proposed segmentation algorithms [START_REF] Guigues | Scale-sets image analysis[END_REF][START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF] are very efficient for image segmentation tasks but are not adapted to the retrieval of forest stands. However, they have shown their relevance for producing an interesting over-segmentation (see above).
Classification of the segments
The classification proposed in the framework gives information about the species at the object level. Since the segments extracted above are larger than the small classified objects/superpixels, a majority vote of pixel labels can be applied within each segment. The obtained label map could then be compared with the forest LC. The result of the classification for the area of interest is presented in Figure 4.35, the confusion matrix and other accuracy metrics for this classification are provided in Table C.34.
The results of the majority vote for the segmentation of the VHR optical images are also presented in Figure 4.35. From a visual inspection, it appears that adding semantic information to the obtained segments does not allow relevant stands to be retrieved, since the obtained segments are too generalized. Indeed, some classes are not represented: when they are under-represented in a segment, they are not taken into account and are thus removed from the final result. For the hierarchical segmentation, the overall accuracy increases when adding semantic information through the majority vote (81.94%, versus 81.75% for the classification). The κ (0.64) also indicates a good agreement. This shows the limitation of these two global metrics: the result appears to be good when inspecting them, while it is not. The IoU is more relevant for underlining the irrelevance of the majority vote on such a segmentation (41.1%, versus 56.4% for the classification). The mean F-score (50.68%, versus 70.6% for the classification) leads to the same conclusion.
The results of the majority vote for the segmentation of the nDSM are presented in Figure 4.35; the confusion matrices and other metrics are provided in Tables C.37 and C.38. The same behavior is observed: a majority vote applied to a segmentation whose segments are equivalent to stands (in terms of size) does not allow a relevant mapping of forested areas to be retrieved.
From this section, a clear conclusion can be drawn: the direct segmentation of the data does not allow relevant forest stands, in terms of species, to be retrieved. Even with the addition of semantic information from a classification, the results are not sufficient for a good mapping of the forest.
Thus, the proposed regularization framework is completely justified.
Besides, some metrics are not relevant for evaluating the results. Indeed, the overall accuracy and the κ are not sufficient; other metrics, such as the intersection over union and the F-score, are needed for a correct evaluation.
Derivation of other outputs
The outputs of the framework can be employed for further investigations in order to extract relevant information about the forest. Firstly, the obtained forest stands can be employed for the semi-automatic update or geometric enrichment of the forest LC. Furthermore, the parameter γ allows the forest LC DB to be refined by providing small homogeneous forest stands (see Section 4.2.4). Secondly, the information extracted from the original data can be employed to enrich the forest inventory, in the case of a multi-source inventory.
Semi-automatic update process
The semi-automatic update of the forest LC DB can be performed by the joint analysis of the final stand segmentation and the existing forest LC DB.
A change map can be derived from the stand segmentation results and changes can be prioritized according to their size and shape. Here 3 criteria are employed to characterize a change area:
• The number of pixels,
• The size of the rectangular bounding box (see Figure 4.36b),
• The size of the circular bounding box (see Figure 4.36c).

The changes are classified using only thresholds based on the size (in pixels) of the change s, the ratio r_1 between the size of the change and the size of the rectangular bounding box, and the ratio r_2 between the size of the change and the size of the circular bounding box. A change is defined as major when one of the arbitrary conditions (4.2) or (4.3) is verified.

s ≥ 100 and r_1 ≥ 0.3,    (4.2)

s ≥ 100 and r_2 ≥ 0.2.    (4.3)
These thresholds have been empirically defined since they produce visually consistent results. Further investigations are needed for a better description of change areas.
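To make the prioritization concrete, the sketch below applies conditions (4.2) and (4.3) to one connected change area. The use of OpenCV's minAreaRect and minEnclosingCircle for the two bounding boxes is an implementation convenience assumed for this sketch, not necessarily the exact computation used to produce the figures.

```python
# Sketch of the change prioritization rules (4.2)-(4.3).
import numpy as np
import cv2

def classify_change(change_mask):
    """change_mask: binary numpy array (1 = changed pixels of one connected area)."""
    ys, xs = np.nonzero(change_mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)

    s = pts.shape[0]                                   # number of changed pixels
    (_, (w, h), _) = cv2.minAreaRect(pts)              # rectangular bounding box
    rect_area = max(w * h, 1.0)
    (_, radius) = cv2.minEnclosingCircle(pts)          # circular bounding box
    circ_area = max(np.pi * radius ** 2, 1.0)

    r1 = s / rect_area
    r2 = s / circ_area

    major = (s >= 100 and r1 >= 0.3) or (s >= 100 and r2 >= 0.2)   # (4.2) or (4.3)
    return "major" if major else "minor"
```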
An "updated" forest LC can then be created, the major change areas being labeled as no data (see Figure 4.37). A difference map is also produced with two kinds of changes:
• The minor changes: they correspond to areas where the borders of the Forest LC DB do not exactly fit the borders obtained by the framework. This type of discrepancy is common because of the generalization level of the DB, since the borders from the Forest LC DB are mostly straight lines while the proposed framework tends to follow the natural borders of the forest because of the regularization.
• The major changes: they correspond to large patches that differ from the Forest LC DB. Two cases can be distinguished:
-The obtained results can be wrong.
-The forest has changed or has been exploited (cut or plantation).
k = m_k × n + n_k,    (4.4)

p_k = p_{n_k},    (4.5)

where n_k is the target class of the pixel in the original probability map, m_k is the mode of the pixel, and p_{n_k} is the probability of the class n_k in the original probability map.
Such a probability map can then be regularized in order to obtain a map of stands that are homogeneous both in terms of height and species (see Figure 4.41).
The redistribution of the probabilities according to the modes of the height distribution has two advantages. Firstly, it produces results similar to the regularization without redistribution (i.e., when considering the original classes); thus, such a process does not drastically affect the final segmentation results. Secondly, the obtained results are coherent with the nDSM. It is therefore possible to obtain stands of different maturity, and such information is very interesting for statistical inventory. The main drawback is that "new" classes are created, which increases the computation times.
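As an illustration, the redistribution of Equations (4.4)-(4.5) can be written in a few lines. The array names and shapes below, as well as the choice of giving zero probability to the classes associated with the other height modes, are assumptions made for this sketch.

```python
import numpy as np

def redistribute_probabilities(proba, modes, n_modes):
    """Expand a class probability map using the height mode of each pixel.

    proba : (H, W, n) original posterior probabilities for the n species classes
    modes : (H, W) integer height mode m_k of each pixel, in [0, n_modes - 1]
    Returns a (H, W, n_modes * n) map where the new class index is
    k = m_k * n + n_k (Eq. 4.4) and p_k = p_{n_k} (Eq. 4.5).
    """
    h, w, n = proba.shape
    new_proba = np.zeros((h, w, n_modes * n), dtype=proba.dtype)
    for m in range(n_modes):
        sel = modes == m
        # copy the original probabilities into the block of classes of mode m;
        # classes of the other modes keep a zero probability in this sketch
        new_proba[sel, m * n:(m + 1) * n] = proba[sel, :]
    return new_proba
```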
Conclusion and perspectives
Multiple experiments have been conducted in order to validate the choices made in the framework:
• The extraction of small objects is necessary for an efficient classification. The method employed for this extraction has a significant impact on the classification, but its influence on the final results (after regularization) is less pronounced.
• The feature selection allows the complementarity of both data sources to be validated. It also improves the classification results. Furthermore, an optimal feature set can be obtained with the feature selection.
• The design of the training set from the forest LC DB is interesting since it improves the classification results.
• The global regularization allows relevant segments (with respect to the Forest LC DB) to be obtained. Furthermore, tuning the parameter γ gives different segmentations that exhibit levels of detail that are interesting for a finer forest analysis. The regularization is further discussed in Chapter 5.
In order to validate these choices, the framework has been applied to areas from different regions of France. It turns out that our proposal is relevant and not specific to a single area.
The framework has been compared to a naive segmentation of the input data, highlighting that the extraction of forest stands is not a straightforward problem.
Finally, other contributions of the proposed framework are proposed. They consist in extracting quantitative information (such as stand tree density or stand mean height) from the obtained stands. This fully benefits from the integration of lidar data in the framework.
The proposed framework has been employed using VHR optical images and low density lidar point cloud. It could be interesting to investigate the use of hyperspectral or multispectral imagery since it might provide more spectral information that has shown to be relevant for tree species classification [START_REF] Dalponte | Tree species classification in boreal forests with hyperspectral data[END_REF][START_REF] Dalponte | Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data[END_REF][START_REF] Clark | Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales[END_REF]. Finally, the use of high density lidar could also be interesting. Firstly, with more points, trees could be delineated more precisely and structural shape features at the tree level could be derived. For instance, a convex envelope can be extracted for each tree and the volume of the envelope can be obtained as a new feature [START_REF] Lin | A comprehensive but efficient framework of proposing and validating feature parameters from airborne LiDAR data for tree species classification[END_REF]Li et al., 2013a;[START_REF] Ko | Tree genera classification with geometric features from high-density airborne LiDAR[END_REF].
In this chapter, several methods for classification smoothing are proposed and compared. The classification process has been previously presented, and it provides a label map for the areas of interest, accompanied with a class membership probability map, which provides, for each pixel of the image, the posterior class membership for all classes of interest. These are the necessary inputs for the methods that are described below.
However, although the classification has been performed at the object level, the resulting label map remains quite noisy. Forest stands often do not clearly appear on this map, as can be seen in Figures 4.19c and 4.19d. Thus, an additional smoothing step is necessary to obtain relevant forest stands. As shown in Section 4.4.2, the simple method consisting in a majority vote within the regions of a segmentation does not lead to the desired results. Thus, soft labeling smoothing methods [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF] have been considered. This chapter aims at exploring several possible solutions and at validating the retained ones.
Here, both local and global methods are investigated. For local techniques, majority voting and probabilistic relaxation are selected. For global methods, various energy formulations based on a feature-sensitive Potts model are proposed.
Local methods
Filtering
An easy way to smooth a classification label map or probability map is to filter it. All the pixels in an r × r moving window W are combined in order to generate an output label for the central pixel. The most popular such filter is the majority vote filter. First of all, the class probabilities are converted into labels, the label of a pixel x being the label of its most probable class. With L the set of labels,

C(x) = argmax_{c∈L} P(x, c).    (5.1)

From this label image, the final smoothed result is obtained by selecting the majority label in the local neighborhood W:

C_smooth(x) = argmax_{c_i∈L} Σ_{u∈W} [C(u) = c_i].    (5.2)

The majority filter does not take into account the original class posterior likelihoods (it works directly on labels). This explains the poor observed results, since uncertainty and heterogeneity are not taken into account. However, it is a straightforward method that gives a baseline for the analysis of further results. The main issue can be illustrated as follows: if, in a 5 × 5 neighborhood, 13 pixels have a probability of 51% for class Douglas fir and the 12 other pixels have a 99% probability for class beech, the vote will nevertheless prefer Douglas fir. There are variants which give pixels closer to the center more voting power, but they typically yield similar results. Other filters have been developed, such as the Gaussian filter, the bilateral filter (Paris et al., 2009b; Paris et al., 2009a) and the edge-aware filter [START_REF] Chen | Real-time edge-aware image processing with the bilateral grid[END_REF], but they take into account object contours, which are not fully relevant for an unstructured environment like forests. Alternatively, filters can be applied to the class probability maps.
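A minimal version of this baseline is sketched below. Relying on scipy's generic_filter is an implementation convenience (readable but slow), and the labels are assumed to be non-negative integers obtained with Equation (5.1), e.g. labels = proba.argmax(axis=-1).

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(labels, r=5):
    """Majority vote (Eq. 5.2) in an r x r window over an integer label map."""
    def vote(window):
        # window is the flattened neighborhood; labels must be non-negative
        values = window.astype(np.int64)
        return np.bincount(values).argmax()
    return generic_filter(labels, vote, size=r, mode="nearest")
```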
Probabilistic relaxation
Probabilistic relaxation aims at homogenizing the probabilities of a pixel according to its neighboring pixels. The relaxation is an iterative algorithm in which the probability at each pixel is updated at each iteration in order to make it closer to the probabilities of its neighbors (Gong et al., 1989). It was adopted mainly for its simplicity. First, good accuracies have been reported with decent computing times, which is beneficial at large scale [START_REF] Smeeckaert | Large-scale classification of water areas using airborne topographic lidar data[END_REF]. Secondly, it offers an alternative to edge-aware/gradient-based techniques that may not be adapted to semantically unstructured environments like forests. The probability P_k^t(x) of class k at a pixel x at iteration t is updated by δP_k^t(x), which depends on:
• The distance d_{x,u} between the pixel x and its neighbor u. Neighbors are defined as the pixels that are less than r pixels away from x.
• A co-occurrence matrix T_{k,l} defining an a priori correlation between the probabilities of neighboring pixels. The local co-occurrence matrix has been tuned arbitrarily. It can also be estimated using training pixels [START_REF] Volpi | Semantic segmentation of urban scenes by learning local class interactions[END_REF] if a dense training set is available. The matrix has 0.8 on its diagonal and p elsewhere, i.e., T_{k,l} = 0.8 if k = l and T_{k,l} = p otherwise, with p = 0.2 / (n_c − 1).
The update factor is then defined as:
δP_k^t(x) = Σ_{u∈W_x} d_{x,u} Σ_{l=1}^{n_c} T_{k,l}(x, u) × P_l^t(u).    (5.3)

In order to keep the probabilities normalized, the update is performed in two steps using the unnormalized probability Q_k^{t+1}(x) of class k at a pixel x at iteration t + 1:

Q_k^{t+1}(x) = P_k^t(x) × (1 + δP_k^t(x)),    (5.4)

P_k^{t+1}(x) = Q_k^{t+1}(x) / Σ_{l=1}^{n_c} Q_l^{t+1}(x).    (5.5)
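One relaxation iteration (Equations 5.3-5.5) can be sketched as follows; the wrap-around border handling and the inverse-distance weighting used for d_{x,u} are simplifying assumptions of this sketch.

```python
import numpy as np

def relaxation_step(P, T, r=1):
    """One update of the probabilistic relaxation.

    P : (H, W, nc) probability map at iteration t
    T : (nc, nc) co-occurrence matrix (0.8 on the diagonal, p = 0.2/(nc-1) elsewhere)
    r : neighborhood radius (neighbors closer than r pixels)
    """
    h, w, nc = P.shape
    delta = np.zeros_like(P)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx == 0 and dy == 0:
                continue
            d = 1.0 / np.hypot(dx, dy)          # weight d_{x,u}, here assumed 1/distance
            # neighbor probabilities; image borders handled by wrap-around for brevity
            Pn = np.roll(np.roll(P, dy, axis=0), dx, axis=1)
            delta += d * (Pn @ T.T)             # Eq. (5.3): sum_l T_{k,l} * P_l(u)
    Q = P * (1.0 + delta)                       # Eq. (5.4)
    return Q / Q.sum(axis=-1, keepdims=True)    # Eq. (5.5)

# the arbitrary co-occurrence matrix can be built as:
# T = np.full((nc, nc), 0.2 / (nc - 1)); np.fill_diagonal(T, 0.8)
```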
Global smoothing
Before moving on to the formulation of the global smoothing method, some notations and terminology need to be introduced. As usual, the pixel values of an image with k features are viewed as samples of a non-parametric function I : R² → R^k. The number of pixels is denoted by n, and individual pixel locations are referred to by two-dimensional vectors, denoted by x. The aim of classification is to assign each image pixel one of l possible class labels c_i, in order to obtain the final thematic label map C : R² → {c_1, ..., c_l}.
Finding the thematic map C with the highest probability consists in searching for the labeling which maximizes the likelihood P(C|I) ∼ P(I|C)P(C). That is to say,

C = argmax_C P(C|I),

with

P(C|I) = Π_x P(C(x)|I(x)) ∼ Π_x P(I(x)|C(x)) Π_x P(C(x)).    (5.6)

This corresponds to the minimization of the negative log-likelihood,

C = argmin_C −log P(C|I),

with

−log P(C|I) = −log P(I|C) − log P(C) + const. = Σ_x −log P(I(x)|C(x)) + Σ_x −log P(C(x)) + const.    (5.7)

Thus, minimizing the negative log-likelihood amounts to finding the minimum of an energy E(I, C) defined as:

C = argmin_C E(I, C),

with

E(I, C) = E_data(I, C) + E_smooth(I, C).    (5.8)
The energy consists of two parts:
• a "data term" which describes how likely a certain label is at each pixel given the observed data (this is actually the output of the classification), and decreases as the labeling fits the observed data better.
• a "prior term", which introduces a prior concerning the label configuration, and decreases as the labeling gets smoother.
Without a smoothness prior, the second term vanishes and the classification decomposes into per-pixel decisions which can be taken individually (P(C|I) ∼ P(I|C)). That is to say, maximizing the probability P(C|I) amounts to simply assigning each pixel its most probable class.
If smoothness is included, the labels at different locations x are no longer independent but form a random field. The energy of a given pixel depends not only on its data I(x), but also on the labels of the other pixels in its neighborhood (4-connectivity or 8-connectivity) or in its cliques. Since different cliques interact through common pixels, they can no longer be treated independently. The smoothness can be integrated with values chosen arbitrarily or depending on the features of the pixels. It is thus possible to add contextual information from the features. For Markov Random Fields, it is assumed that the label of each pixel only depends on its neighborhood, i.e.,

P(C(x)) = Π_{u∈N(x)} P(C(x)|C(u)) = Π_{u∈I} P(C(x)|C(u)),    (5.9)

where N(x) is a neighborhood of x (not necessarily the 4- or 8-connectivity).
In general, finding the labeling that globally minimizes E(I, C) is intractable: since there is no factorization into smaller problems, one would, at least conceptually, have to check all l^n possible labelings.
For random fields with only pairwise cliques (called first-order random fields), efficient approximation methods can find good minima. Such random fields are often represented as graphs: every pixel corresponds to a node with an associated unary potential (corresponding to the data term), and each neighbor pair corresponds to an edge linking the corresponding node pair, with an associated pairwise potential (corresponding to the prior term).
Over the entire resulting first-order random field, the maximization of the posterior probability leads to a smoother result. This is done by finding the minimum of the energy, argmin_C E(I, C, A), defined for an image I as follows:

E(I, C, A) = Σ_{u∈I} E_data(u, P(C(u))) + γ Σ_{u∈I, v∈N_u} E_pairwise(u, v, C(u), C(v), A(u), A(v)),    (5.10)

where A(u) is a vector of feature values at pixel u, selected so as to constrain the problem according to a given criterion (height for instance), and N_u is the 8-connectivity neighborhood of the pixel u (only the 8-connected neighborhood has been investigated). When γ = 0, the pairwise prior term has no effect in the energy formulation; the most probable class is attributed to each pixel, leading to the same result as the classification output. When γ > 0, the resulting label map becomes more homogeneous and, according to A(u) and A(v), the borders of the segments/stands are smoother.
However, if γ is too high, small areas are bound to be merged into larger ones, removing part of the useful information provided by the classification step. The automatic tuning of the parameter γ has been addressed, for instance, in [START_REF] Moser | Land-cover mapping by Markov modeling of spatial contextual information in Very-High-Resolution Remote Sensing Images[END_REF] but is not adopted here. There is a range of γ values that produce relevant results. This parameter controls the level of detail, and its value should be chosen with respect to the expected results. Discussions about the choice of γ are detailed in Section 5.5.2.
Despite connections limited to local neighbors, the optimization propagates information over large distances [START_REF] Schindler | An overview and comparison of smooth labeling methods for land-cover classification[END_REF]. The problem is NP-hard, but efficient approximate optimization algorithms exist [START_REF] Boykov | Fast approximate energy minimization via graph cuts[END_REF][START_REF] Kolmogorov | Convergent tree-reweighted message passing for energy minimization[END_REF]Felzenszwalb et al., 2006).
Here, two formulations of E_data (unary term) and four formulations of E_pairwise (pairwise prior term) are investigated.
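Before detailing them, the reference sketch below (deliberately slow, pure Python) shows how the energy of Equation (5.10) would be evaluated for a candidate label map; the unary and pairwise arguments are placeholders for the formulations given in the next two subsections.

```python
import numpy as np

def total_energy(labels, proba, features, unary, pairwise, gamma=10.0):
    """Evaluate E(I, C, A) of Eq. (5.10) for a candidate label map.

    labels   : (H, W) integer labeling C
    proba    : (H, W, nc) posterior probabilities P
    features : (H, W, n) feature maps A
    unary    : function(p) -> energy of assigning a pixel its class of probability p
    pairwise : function(c_u, c_v, a_u, a_v) -> energy of a neighboring pair
    """
    h, w, _ = proba.shape
    e_data = sum(unary(proba[y, x, labels[y, x]]) for y in range(h) for x in range(w))

    e_pair = 0.0
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # 8-connectivity, each pair counted once
    for y in range(h):
        for x in range(w):
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    e_pair += pairwise(labels[y, x], labels[ny, nx],
                                       features[y, x], features[ny, nx])
    return e_data + gamma * e_pair
```

For instance, unary = lambda p: 1.0 - p together with a Potts pairwise term (0 if the labels are identical, 1 otherwise) reproduces the configuration retained later in this chapter.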
Unary/data term
The data term E_data expresses how the classification results are integrated in the energy formulation. It is expressed according to the posterior class probabilities estimated by the classifier. This term decreases as P(u) increases (i.e., the higher the posterior probability, the smaller the energy).
• A widely used formulation for the unary term is the log-inverse formulation using the natural logarithm. It corresponds to the information content in information theory and is derived directly from the negative log-likelihood (i.e., −log P(C|I)). It is formulated as follows:

E_data = −log(P(u)).    (5.11)
It highly penalizes the low-probability classes but can increase the complexity with potential infinite values.
• Another simple formulation for the unary term is the linear formulation,

E_data = 1 − P(u).    (5.12)
It penalizes low probabilities less than the log-inverse formulation, but has the advantage of having values lying in [0, 1].
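Both unary formulations translate directly into code, as in the short sketch below; the clipping of the probability in the log-inverse variant is an extra assumption added here to avoid infinite values.

```python
import numpy as np

def unary_log_inverse(p, eps=1e-6):
    """Log-inverse data term (Eq. 5.11); eps avoids -log(0)."""
    return -np.log(np.clip(p, eps, 1.0))

def unary_linear(p):
    """Linear data term (Eq. 5.12), with values in [0, 1]."""
    return 1.0 - p
```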
Pairwise/prior term
The pairwise prior term integrates some a priori constraints on the neighborhood, weighting the relationship between two neighboring pixels. This weight is expressed according to contextual information from the concerned pixels such as the class assignment of two neighboring pixels. Furthermore, other information, such as features values, can also be taken into account.
In this work, the prior energy term has a value depending on the class of neighboring pixels. In the four proposed formulations, two neighboring pixels pay no penalty if they are assigned to the same class. First, two basic and popular priors, the Potts model and the contrast-sensitive Potts model (called here z-Potts model), are investigated.
• In the Potts model, two neighboring pixels pay the same penalty if they are assigned to different labels. Thus, the prior for the Potts model is:
E_pairwise(C(u) = C(v)) = 0,
E_pairwise(C(u) ≠ C(v)) = 1.    (5.13)

• In the z-Potts model, the penalty for a label change depends on the height gradient between the two neighboring pixels. The z-Potts model is a standard contrast-sensitive Potts model applied to the height estimated from the lidar point cloud. Here, since we are dealing with forest stands that are likely to exhibit distinct heights, the gradient of the height map is computed for each of the four directions separately. The maximum M_g over the whole image in the four directions is used to compute the final pairwise energy. A linear function has been used: the penalty is highest when the gradient is 0, and decreases until the gradient reaches its maximum value. The prior term of the z-Potts model is defined as:

E_pairwise(C(u) = C(v)) = 0,
E_pairwise(C(u) ≠ C(v)) = 1 − |z(u) − z(v)| / M_g,    (5.14)
where z(u) is the height of pixel u and z(v) the height of pixel v.
• Another investigated pairwise energy is formulated according to the feature values of the neighboring pixels. It is called here the Exponential-feature model. The pairwise energy is computed with respect to a pool of n features. When the features have close values, the penalty is high; it decreases when the features become very different. The pairwise energy in this case is expressed as follows:

E_pairwise(C(u) = C(v)) = 0,
E_pairwise(C(u) ≠ C(v)) = (1/n) Σ_{i=1}^{n} exp(−λ_i |A_i(u) − A_i(v)|),    (5.15)

where A_i(u) is the value of the i-th feature at the pixel u and λ_i ∈ R+* is the importance given to feature i in the regularization. To compute such an energy, the features first need to be normalized (i.e., zero mean, unit standard deviation) in order to ensure that they all have the same range.
• The last investigated formulation is also expressed according to the feature values of the neighboring pixels. It is called here the Distance-feature model. The pairwise energy is still computed with respect to a pool of n features. In this case, the energy is computed according to the Euclidean distance between the two neighboring pixels in the feature space: the penalty is high when the pixels are close in the feature space and decreases when they get distant. The pairwise energy in this case is expressed as follows:

E_pairwise(C(u) = C(v)) = 0,
E_pairwise(C(u) ≠ C(v)) = 1 − ||A(u); A(v)||_{n,2},    (5.16)
with:
||A(u); A(v)||_{n,2} = (1/√n) √( Σ_{i=1}^{n} λ_i (A_i(u) − A_i(v))² ),    (5.17)

where λ_i ∈ ]0; 1] is the importance given to feature i in the regularization. To compute such an energy, the features first need to be normalized (i.e., zero mean, unit standard deviation) in order to ensure that they all have the same range. They are then rescaled between 0 and 1 to ensure that ||A(u); A(v)||_{n,2} lies in [0; 1] for all (u, v).

A high number of features was extracted from the available lidar and optical images; the number n of features employed in the last two formulations can therefore be very large. Since a feature selection has previously been carried out, it is interesting to only use the selected features. Furthermore, they can also be weighted according to their importance (through λ_i). The Random Forest classification process is here very interesting since it natively provides the feature importance. Since the 20 most important features are almost all equally weighted, this does not bring additional discriminative information for these two models. The λ_i in the Exponential-feature model and the Distance-feature model were therefore set to 1 for all i.
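The four prior formulations can be sketched as follows; the feature vectors are assumed to be already normalized (and rescaled to [0, 1] for the Distance-feature model), and λ_i = 1 as retained above.

```python
import numpy as np

def potts(c_u, c_v, a_u=None, a_v=None):
    """Potts prior (Eq. 5.13)."""
    return 0.0 if c_u == c_v else 1.0

def z_potts(c_u, c_v, z_u, z_v, max_gradient):
    """Contrast-sensitive Potts model on the height (Eq. 5.14)."""
    if c_u == c_v:
        return 0.0
    return 1.0 - abs(z_u - z_v) / max_gradient

def exponential_feature(c_u, c_v, a_u, a_v, lambdas=None):
    """Exponential-feature prior (Eq. 5.15); features normalized beforehand."""
    if c_u == c_v:
        return 0.0
    lam = np.ones(len(a_u)) if lambdas is None else lambdas
    return float(np.mean(np.exp(-lam * np.abs(a_u - a_v))))

def distance_feature(c_u, c_v, a_u, a_v, lambdas=None):
    """Distance-feature prior (Eq. 5.16-5.17); features rescaled to [0, 1]."""
    if c_u == c_v:
        return 0.0
    lam = np.ones(len(a_u)) if lambdas is None else lambdas
    dist = np.sqrt(np.sum(lam * (a_u - a_v) ** 2)) / np.sqrt(len(a_u))
    return 1.0 - float(dist)
```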
Energy minimization
The random field can be expressed as a graphical model. Thus, the energy minimization can be performed using graph-cut methods. These methods are based on the observation that the binary labeling problem with only two labels (i.e., C(x) ∈ {0, 1}) can be solved to global optimality. To that end the graph of the random field is augmented with a source and a sink node, which represent the two labels and are connected to all pixels by edges representing the unary potentials. A large additive constant on those terms guarantees that the minimum cut of the augmented graph into two unconnected parts leaves each node connected to only the source or the sink. Computing the minimum cut is equivalent to finding the maximum flow from source to sink, for which fast algorithms exist.
The graph-cut algorithm employed here is the Quadratic Pseudo-Boolean Optimization (QPBO). QPBO is a popular graph-cut method that efficiently solves energy minimization problems such as the proposed ones. The problem is expressed as a graph and the optimal cut is computed over it [START_REF] Kolmogorov | Minimizing non-submodular functions with graph cuts-a review[END_REF]. Furthermore, standard graph-cut methods can only handle simple cases of the pairwise/prior term, expressed as follows:

E_pairwise(C(u) = C(v)) = 0,
E_pairwise(C(u) ≠ C(v)) = f(A(u), A(v)),    (5.18)

where f is a function of the feature values at pixels u and v. QPBO can integrate more constraints, since it can solve problems with a pairwise/prior term expressed as follows:

E_pairwise(C(u) = 0, C(v) = 0) = f_1(A(u), A(v)),
E_pairwise(C(u) = 1, C(v) = 1) = f_2(A(u), A(v)),
E_pairwise(C(u) = 0, C(v) = 1) = f_3(A(u), A(v)),
E_pairwise(C(u) = 1, C(v) = 0) = f_4(A(u), A(v)),    (5.19)

where f_1, f_2, f_3 and f_4 are functions of the feature values at pixels u and v.
With the binary graph-cut algorithm as a building block, multi-label problems can be solved approximately using α-expansion moves [START_REF] Kolmogorov | What energy functions can be minimized via graph cuts?[END_REF], which can be adopted for QPBO. In this method, each label α is visited in turn and a binary labeling problem is solved between that label and all the others, thus flipping the labels of some pixels to α; these expansion steps are iterated until convergence. The algorithm directly returns a labeling C of the entire image which corresponds to a minimum of the energy E(I, C, A). Such a method gives a hard assignment (a single label for each pixel), while other methods assign probabilities to pixels (soft assignment), which is also interesting [START_REF] Landrieu | Cut Pursuit: fast algorithms to learn piecewise constant functions[END_REF].
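The actual optimizer used in this work is QPBO with α-expansion, implemented through a dedicated graph-cut library. As a purely illustrative, self-contained stand-in, the greedy ICM-style loop below approximately minimizes the same energy; it should not be confused with the graph-cut solver actually employed.

```python
import numpy as np

def icm_minimize(proba, features, unary, pairwise, gamma=10.0, n_iter=5):
    """Greedy approximate minimization of Eq. (5.10) (Iterated Conditional Modes).

    Illustrative stand-in for the QPBO / alpha-expansion solver.
    """
    h, w, nc = proba.shape
    labels = proba.argmax(axis=-1)                    # start from the classification
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                costs = np.zeros(nc)
                for c in range(nc):
                    cost = unary(proba[y, x, c])
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cost += gamma * pairwise(c, labels[ny, nx],
                                                     features[y, x], features[ny, nx])
                    costs[c] = cost
                labels[y, x] = costs.argmin()
    return labels
```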
Constraints integration
The proposed energy minimization algorithm (namely QPBO) can still solve the problem when the energy is modified in order to add more constraints. Such investigations have been envisaged and two additional constraints have been tested.
Size constraint
Firstly, the size of a segment can be taken into account in the energy formulation. This is done by setting the pairwise term to a very large value v_s (ideally v_s = ∞) when a pixel belongs to a segment defined as too small by the user (in forested areas, a minimum stand size is usually 0.5 ha; further discussions are proposed in Section 5.5.3). Thus, Equation 5.13 becomes:

E_pairwise(C(u) = C(v)) = f(u, v),
E_pairwise(C(u) ≠ C(v)) = 1 + f(u, v),    (5.20)

with f(u, v) = 0 if u and v are not in a small segment, and f(u, v) = v_s if u or v is in a small segment.    (5.21)

Such a constraint can be applied in addition to the other proposed priors.
Border constraint
The second constraint is related to the borders. Indeed, it is possible to set the energy to a specific value in order to ensure that a border will be created. An a priori border needs to be defined. Let b(u, v) be a binary function that defines whether a border between pixels u and v is to be set: b(u, v) = 0 means that no border is to be set, and b(u, v) = 1 means that a border is to be set. Equation 5.13 becomes:

E_pairwise(C(u) = C(v)) = b(u, v) × v_b,
E_pairwise(C(u) ≠ C(v)) = 0,    (5.22)

with v_b a large value (ideally v_b = ∞).

Adding constraints increases the computational load and time but might be interesting in order to further refine the results. However, adding such constraints also leads to some issues. Firstly, even if the minimum size of a forest stand is clearly defined in the specifications of the forest LC database, forcing segments to have a minimum size could suppress some information (e.g., small pure segments) that is relevant for many thematic and ecological applications. In practice, such a generalization could also be obtained by increasing the γ parameter. Secondly, adding borders assumes that predetermined relevant borders can be retrieved. However, in practice, such borders cannot be straightforwardly extracted for this stand segmentation problem (as presented in Section 4.4.1 of Chapter 4). Thus, these constraints have only been considered in a limited way (validations have been performed on synthetic images).
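Both constraints amount to a modified pairwise term, as in the sketch below. Here in_small_segment and border_between are assumed to be precomputed helpers; the size-constrained term follows Equations (5.20)-(5.21) literally (it replaces the Potts prior), while the border term of Equation (5.22) is meant to be added on top of the chosen prior.

```python
def size_constrained_pairwise(c_u, c_v, u, v, in_small_segment, v_s=100.0):
    """Pairwise term with the size constraint (Eq. 5.20-5.21).

    in_small_segment(u) -> True if pixel u belongs to a segment below the minimal size.
    """
    f = v_s if (in_small_segment(u) or in_small_segment(v)) else 0.0
    return f if c_u == c_v else 1.0 + f

def border_constraint(c_u, c_v, u, v, border_between, v_b=100.0):
    """Additional pairwise term for the border constraint (Eq. 5.22).

    border_between(u, v) -> True if an a priori border must separate u and v.
    """
    if c_u == c_v:
        # identical labels across a wanted border are penalized so that a cut appears
        return v_b if border_between(u, v) else 0.0
    return 0.0
```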
How to efficiently smooth a classification?
After having presented several soft labeling smoothing methods, let us test them and define the optimal smoothing procedure. The classification result has already been presented in Chapter 4. The classification results are illustrated as a reminder in Figure 5.1.
Local methods
Two methods have been investigated for the local smoothing of the classification. They are very easy to implement and have low computation times. Firstly, a majority filter on labels has been employed. Since it does not take the probabilities into account, a probabilistic relaxation has also been tested. For both methods, the main problem is to define a relevant neighborhood size: if it is too small, the results remain noisy, and if it is too large, the results are over-generalized and the computation times explode.
Majority filter.
The results for the majority filter are presented in Figure 5.2. The filtering method performs the worst, with a gain of less than 1% compared to the classification, even with large filters. Furthermore, the larger the filter, the longer the computation times.
Probabilistic relaxation.
The results for the probabilistic relaxation are presented in Figure 5.3. The probabilistic relaxation also yields poor results (+5% compared to the classification) and also requires important computation times, since the iterative process has to converge.
Global methods
In the formulation of the energy, three aspects are taken into account. The two most important are the integration of the information from the classification (unary term) and how the features are integrated into the smoothing process (prior). The last aspect is the integration of other constraints, allowed by the use of the QPBO algorithm.
The global methods produce significantly better results than the local methods with an average gain of 2% in terms of overall accuracy.
Which unary/data term?
The choice of the unary/data term (fit-to-data term) has a major impact on the regularization results (see Figure 5.4). Indeed, the log-inverse formulation highly penalizes pixels with low probability; thus, small areas with high probabilities are kept, even for a large γ. In contrast, the linear formulation has a stronger smoothing effect. Both are interesting for forest analysis. The log-inverse formulation allows small areas with a high class confidence to be kept. Such areas are useful for forest inventory or DB enrichment, since they give information about small segments of pure species within large forested areas. Conversely, the linear formulation produces smooth segments that are more compliant with the present specifications of the forest LC DB.
In terms of accuracy, the two formulation lead to similar results. The log-inverse formulation and the linear formulation have similar overall accuracy (97.43% and 97.44% respectively) and κ (0.95).
The other scores (mean F-score and IoU) show that the log-inverse formulation is slightly better than the linear formulation (94.18% vs 94.04% for the mean F-score and 89.21% vs 88.97% for the IoU). Since the results are very similar, the linear formulation is preferred since it produces smoother segments.
Which prior energy term?
The choice of a prior has a weaker impact on the regularization results (see Figures 5.5 and 5.6).
All proposed models lead to similar results with an overall accuracy greater than 97.3%, a very strong agreement (κ > 0.94). The mean F-score (∼ 93%) and the IoU (∼ 88%) confirm that few confusions are reported.
After a more thorough inspection of the results, it appears that the Exponential-feature model leads to slightly better results (gain of 0.1% in term of overall accuracy, 0.002 for the κ, 0.3% for the mean F-score and 0.5% for the IoU). This improvement is not impressive, showing that a simple Potts model is sufficient. However, it also shows that adding feature information can improve the final results.
Constraints integration
The addition of constraints can easily be carried out with the QPBO algorithm. Constraints about strong borders can be added in order to retrieve them in the final segmentation. Unfortunately, such borders cannot easily be extracted (see Section 4.4.1). Thus, in practice, it is not very relevant to add such constraints for this stand segmentation problem, since they will not lead to accurate results.
A second constraint can be employed in order to ensure a minimal size for the final segments. However, this minimal size has to be defined. Even if such a minimal size is given by the specifications of forest stands (0.5 ha), a segment with a size of 0.5 ha plus one pixel will not be considered as a small segment.
In order to validate the formulation of the constraints, experiments have been carried out on synthetic data. An object-based probability map has been generated, as it is the required input for the regularization. Here, the unary term employed is the linear one and the prior is the Potts model (thus, no features are needed in order to compute the energy). Only a qualitative evaluation of the formulation is proposed here. The result using the standard energy formulation (with γ = 1) is presented first. We assume that strong borders (i.e., borders that we want to retrieve in the final result) are provided. Small segments are retrieved from the classification (in the proposed example, there is only one small segment). The applied constraints are presented in Figure 5.8. The results obtained when adding the constraints are illustrated in Figure 5.9. Three cases have been tested, depending on whether we want to retrieve borders, remove small segments, or both. The values of the constraints (v_s and v_b) are set to 100. The results show that the proposed model for the integration of constraints works well. The size constraint allows the small blue segment to be removed, as expected. When adding the border constraint, the borders are also more or less retrieved; indeed, it sometimes does not work. Moreover, when adding such a constraint, new small segments are created; the constraint map must thus be updated and the regularization processed again.
Conclusion
In this chapter, different methods for the spatial regularization of a classification have been proposed and evaluated.
• Local methods, such as filtering or probabilistic relaxation, are easy to implement but lead to poor results. The resulting segmentation is often not smooth enough.
• Global methods are more adapted to the forest stand segmentation problem. The level of smoothing can be controlled through the parameter γ. The integration of external data (such as features derived from both data sources) allows the problem to be constrained. Other constraints can be added to the model.
Furthermore, in global methods, constraints can be added thanks to the QPBO algorithm. However, such constraints are not straightforward to integrate since they require a priori knowledge of the studied area.
• Defining a minimal size for stand segments is possible, but if a segment is slightly larger than the defined threshold, it will not be affected.
• Obtaining persistent borders from the remote sensing data is not relevant, mainly because of the shadowing effect and canopy holes. Borders can also be obtained from other DBs, such as the roads, but they are not relevant borders for forest stands; indeed, a road can pass through a forest stand. The forest cadastre could also be used, but it appears to be highly fragmented, and most forest owners do not exploit their forest, leading to very heterogeneous stands.

In the previous chapters, a framework has been proposed for the extraction of forest stands by the joint use of airborne lidar and VHR imagery. The standard tuning brings relevant results, and its modularity allows us to investigate where the fusion is mandatory. This chapter focuses on the cooperative use of lidar and VHR optical images. Indeed, both data sources can be employed at different levels within the framework, so several fusion scenarii are possible. Further experiments are proposed for the evaluation of the fusion process in the proposed framework. The possible levels of fusion are first analyzed. This permits the design of a limited number of experiments in order to assess the contribution of the two remote sensing data sources at each step of the framework. Once such experiments have been processed, it is possible to define several fusion schemes producing results of different quality according to the desired level of detail and computation times.
Data fusion
Levels of fusion
In this framework, the fusion between the lidar and optical image information can be performed at multiple levels (see Figure 6.1):
• Data employed for the over-segmentation (called here object-level fusion),
• Features employed for the classification (called here classification-level fusion),
• Features employed for the regularization (called here regularization-level fusion).
Since 95 features are available and 6 over-segmentation methods have been proposed, the combinatorics are important and thousands of scenarii can be envisaged at the object-level fusion alone. The classification-level fusion and the regularization-level fusion can both be performed involving:
• only spectral features,
• only Lidar features,
• both spectral and Lidar features.
Object-level fusion
The over-segmentation can be performed on the lidar data, on the optical data, or on a combination of both. It is difficult to define an accurate data layer that combines both lidar and the optical image and that is relevant for an over-segmentation. Thus, the over-segmentation is only performed on the VHR RGB optical image or on the nDSM. Besides, as shown in Section 4.2.1, the solution retained for this segmentation step has little influence on the final results (after regularization).
The Quickshift and SLIC segmentation algorithms have been developed specifically for RGB images; thus, these two methods are employed here only for the over-segmentation of the VHR RGB optical images. The PFF algorithm has also shown good results for the segmentation of RGB images [START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF]. It is therefore the last segmentation algorithm employed for the over-segmentation of the VHR RGB optical image in these experiments.
The tree extraction can be performed either on the VHR optical images or using the 3D information of the point cloud. The tree extraction from the VHR optical images is quite complex since it needs to take into account sunlit and shadowed tree parts, whereas the tree extraction from the 3D point cloud can easily be performed. Thus, only the tree extraction from the lidar point cloud was employed. The watershed and the hierarchical segmentation are employed for the over-segmentation of the nDSM, since both are relevant for the segmentation of grayscale images (such as the nDSM).
To sum up, among the many possible scenarii for the over-segmentation, only 6 are retained (a minimal sketch of the image-based variants is given after the list):
• Over-segmentation of the VHR RGB optical image:
- PFF
- Quickshift
- SLIC
• Over-segmentation of the lidar point cloud:
- Tree extraction
- Over-segmentation of the nDSM:
* Watershed
* Hierarchical segmentation
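For the image-based over-segmentations listed above, the scikit-image implementations can be used directly, as in the sketch below; the parameter values are placeholders and not the ones tuned in this work.

```python
from skimage.segmentation import felzenszwalb, quickshift, slic

def oversegment_rgb(rgb):
    """Three candidate over-segmentations of a VHR RGB image (H, W, 3), values in [0, 1]."""
    seg_pff = felzenszwalb(rgb, scale=100, sigma=0.8, min_size=50)    # PFF graph-based
    seg_qs = quickshift(rgb, kernel_size=5, max_dist=10, ratio=0.5)   # Quickshift
    seg_slic = slic(rgb, n_segments=5000, compactness=10)             # SLIC superpixels
    return seg_pff, seg_qs, seg_slic
```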
Classification-level fusion
The classification can be performed in three different ways using different features:
• only lidar features,
• only spectral features,
• both lidar and spectral features.
In the classification process, a feature selection is supposed to have been carried out. However, in the case of the classification using only lidar features, no feature selection has been performed, since only 25 lidar features are available (furthermore, lidar features are less significant than spectral features for tree species classification, as will be presented later in Section 6.2). In the other cases, 20 features are selected.
Regularization-level fusion
In the proposed method, the energy models that take into account the feature values offer another level of fusion. Two prior models integrate the features, namely the Exponential-feature model and the Distance-feature model. Here, only the Exponential-feature model has been considered, since it has been shown to produce slightly better results.
In this model, one can choose to integrate:
• only Lidar features,
• only spectral features,
• both Lidar and spectral features.
Designing the best fusion scheme
Regarding the different fusion schemes that can be envisaged, only a limited number of scenarii are investigated. The idea is to define the fusion scheme giving the best results with respect to the Forest LC DB.
The impact of the choice of the data for the over-segmentation has already been investigated in Section 4.2.1. Employing an over-segmentation from VHR optical images (e.g. using PFF, Quickshift or SLIC) or from lidar data (e.g. using hierarchical segmentation on nDSM or tree extraction) produces similar results after final regularization.
In order to evaluate the impact of the data modality choice on the classification and regularization, 5 scenarii have been investigated:
• A lidar scenario: here, all 25 lidar features are employed and no feature selection is carried out:
- Hierarchical segmentation on the nDSM,
- Object-based classification (with selection of training samples to cope with Forest LC DB errors) using the 25 lidar features,
- Regularization using the 25 lidar features.
• A spectral scenario:
- PFF segmentation of the RGB VHR optical image,
- Object-based classification (with selection of training samples to cope with Forest LC DB errors) using a selection of 20 spectral features,
- Regularization using a selection of 20 spectral features.
• 3 interleaved scenarii at the regularization level, where the same over-segmentation and classification are kept and only the features involved in the regularization vary (only lidar features, only spectral features, or both).
Many other scenarii could be proposed. However, the 5 investigated ones allow a critical fusion path to be defined that produces the best results in terms of retrieval of the forest stands.
The overall accuracies reached for these 5 scenarii are presented in Table 6.1 for the area Vosges1, but similar results have been observed on other areas. The impact of the choice of the data on the classification step is obvious: the classification using a single data source performs worse than when using both. Furthermore, the spectral information tends to be more relevant than the lidar information. Such results are coherent with the results of the feature selection, since the spectral features represent 61% of the selected features and the lidar features 39%. Indeed, these two data sources are complementary: the spectral information can efficiently discriminate the tree species, while lidar gives additional information about the vertical structure of the forest, which is also helpful for a better discrimination of tree species.
The impact of the choice of the data on the regularization is less significant. We can first note that, even in the case of a single modality, the regularization can greatly improve the results starting from a poor classification (+17.4% for lidar and +16.1% for spectral). However, when the classification is already of high quality, the improvement is more or less the same whatever the scenario. The spectral information is slightly more beneficial (gain of 8.5% in terms of overall accuracy) than the lidar information (gain of 8.4% in terms of overall accuracy) at the regularization step. Fusing the two data sources or using only the spectral information in the regularization step does not significantly change the final results.
However, it is important to notice that the energy models that take into account the features values are only slightly more efficient than the Potts model. Thus it is not absolutely necessary to integrate features in this step to obtain consistent results.
Optimal fusion scheme
Regarding the different results, an optimal fusion scheme can be defined. Such scheme is obviously not unique:
• The over-segmentation can be performed on the Lidar data or on the VHR optical images. At this step, the choice of the modality does not impact the final result.
• The classification step is the most crucial. The classification employing only Lidar features leads to poor results, while using VHR optical images produces better results. Here, both Lidar and VHR optical features are needed to obtain the best classification scores.
• When employing a feature-sensitive regularization method (i.e., one that can take features into account), it appears that using both lidar and VHR optical features leads to the best results. However, using a single data source still produces good results.
Thus, three schemes are proposed for different levels of details and computation times:
• A low-cost scheme in terms of required data and computational load: no fusion is operated; the idea is to use only VHR optical images. This scheme allows forest stands to be extracted with a relatively good accuracy when no lidar information is available:
- the 75 spectral features can be derived,
- small objects are extracted using the PFF algorithm on the VHR RGB optical images,
- a selection of 20 spectral features is operated,
- the classification is performed with the selected features and an optimized training set (to cope with the Forest LC DB errors),
- the regularization is performed with a linear unary term and the Potts model as prior.
• A time-cost-effective fusion scheme: the idea is to compute the minimum amount of features in order to reduce the feature computation times and to skip the feature selection:
- 20 pre-defined features (spectral-based and lidar-based) are computed¹,
- small objects are extracted using the PFF algorithm on the VHR RGB optical images,
- the classification is performed using the 20 features and an optimized training set (to cope with the Forest LC DB errors),
- the regularization is performed with a linear unary term and the Potts model as prior.
• An efficient fusion scheme: the best results are reported, at the cost of a higher computational load and longer computation times:
- all the 95 features (spectral-based and lidar-based) are computed,
- small objects are extracted using the PFF algorithm on the VHR RGB optical images,
- a selection of 20 features (among the 95) is operated (the features are adapted to the current area),
- the classification is performed using the selected features and an optimized training set (to cope with the Forest LC DB errors),
- the regularization is performed with a linear unary term and the Exponential-feature model as prior.

¹ The selected features of the time-cost-effective scheme are: minimum of the green band, minimum of the blue band, maximum of the green band, maximum of the NIR band, median of the green band, standard deviation of the red band, standard deviation of the blue band, meanADmed of the red band, medADmean of the blue band, standard deviation of the NDVI, minimum of the DVI, mean of the RVI, density D_2, planarity, standard deviation of the height, medADmed of the height, 30th percentile, 50th percentile, 90th percentile and mean of the lidar intensity.
The results of the different schemes are presented in Figures 6.2, 6.3 and 6.4 and in Tables 6.2, 6.3 and 6.4. They have been obtained on a single area (Vosges1).

The low-cost scheme produces the worst results, especially for the classification (overall accuracy: 79.1%, κ: 0.63, mean F-score: 65.77%, IoU: 51.97%). However, the regularization greatly smooths the classification, leading to final results that are satisfactory (overall accuracy: 95.66%, κ: 0.91, mean F-score: 86.55%, IoU: 77.65%). This scheme also has the advantage of being faster than the standard scheme presented before, since fewer features are derived and processed: it runs in about 2 h 30 for 1 km², compared to 4 h (1.6 times faster). Here, since the features are only obtained from VHR optical images, several confusions are observed for Chestnut and Robinia, leading to relatively poor results. However, this scheme is very interesting since it shows that it is possible to obtain a relevant segmentation, according to the forest LC DB, when only using VHR optical images.

The time-cost-effective fusion scheme gives better results than the low-cost scheme. The classification shows relevant results (overall accuracy: 92.04%, κ: 0.84, mean F-score: 86.17%, IoU: 76.92%) and the regularization results are very close to the Forest LC DB (overall accuracy: 96.24%, κ: 0.92, mean F-score: 90.93%, IoU: 83.93%). It has another major advantage: only a limited number of features are computed (it takes about 30 minutes to extract them for 1 km²) and no feature selection is carried out. Thus, only "generic" features (i.e., features that are relevant for different geographical regions) are employed, leading to small confusions for Robinia. The results are sub-optimal but still very good regarding the computation times. Indeed, the whole algorithm runs in about 1 h 30 for a 1 km² area (2.6 times faster than the standard proposed procedure).

The efficient fusion scheme produces the best results, close to a perfect fit with the Forest LC DB (overall accuracy: 97.44%, κ: 0.95, mean F-score: 94.04%, IoU: 88.97%). This scheme should be preferred when lidar and VHR optical images are available and when there is no time constraint on the production of the results. Indeed, relevant features regarding the area of interest are employed in the classification and regularization process, leading to optimal results. Running such a scheme takes 4 h instead of 1 h 30 for the time-cost-effective fusion scheme (2.6 times slower).
Conclusion
In this chapter, the different levels of fusion have been more precisely presented and analyzed.
From the different conducted experiments, three schemes appear to be relevant for forest stand delineation.
• When no lidar is available, the single use of VHR optical image is possible in order to obtain quite relevant forest stands.
• For large-scale results (and when lidar is available), only a limited number of features need to be computed, leading to generalized results. However, such a scheme produces good results with very decent computation times.
• When a precise mapping is needed, the use of all the steps of the proposed framework allows a very relevant stand segmentation to be obtained (according to the Forest LC DB).
Conclusion
The framework proposed in this thesis allows strong conclusions to be drawn on forest stand segmentation and on the fusion of optical spectral images and lidar point clouds for such a task. Furthermore, lidar is not limited to being a labeling data source, since it can also provide quantitative information and biophysical features about the vertical structure of the forest.
An automatic framework, composed of several steps that can be optimized independently, has been proposed. It integrates operational constraints, such as errors in the Forest LC DB, and draws the best from all the data (namely VHR optical images, lidar and the Forest LC DB). The proposed framework has been validated on different areas with various landscapes. Several variants were also tested so as to identify the best "frameworks". The contributions of the thesis are the following:
• the development of a modular and versatile framework with few parameters,
• each step has been justified through multiple experiments,
• special attention has been paid to the regularization of the classification: standard formulations for global methods have been proposed and additional constraints have been investigated,
• a study of the best cooperative use of lidar and VHR optical images and of their relevance for forest stand retrieval, leading to three possible variants of the pipeline.
Extraction of forest stands
The proposed framework is composed of four main steps that can be divided in sub-steps:
• The feature computation step that aims at the extraction of:
-Spectral image based features: it mainly corresponds to the computation of statistical features (minimum, maximum, mean ...) using different neighborhood sizes, but also vegetation indices that have shown their relevance in vegetation discrimination.
- Lidar features at the point level: these features are mainly related to the height and the spatial distribution of the points. They have also shown their relevance for classification tasks in forested areas. The features derived at the point level are, like the spectral image-based features, extracted according to statistical functions. The point features are then rasterized at the spatial resolution of the optical images.
- Small objects that have a size and shape similar to trees (trees being the main components of the forest). They are needed in order to obtain features at the object level. Such objects are coarsely delineated, since they are only employed for an object-based classification that will be refined afterwards. The extraction of objects can be performed directly on the lidar point cloud (e.g., extraction of trees) or using over-segmentation methods (e.g., segmentation algorithms with adapted parameters or superpixel algorithms) on rasterized lidar features (mainly the nDSM) or on the optical images.
- Features at the object level. This is a simple averaging of the pixel-based features over each extracted object, but it has proven to increase the classification performance.
• A supervised forward feature selection step using the κ of the Random Forest as the feature set relevance score. The feature selection is driven by three main objectives. Firstly, it greatly decreases the computation times (instead of processing the entire feature set, only an optimal subset is needed). Secondly, it allows the curse of high dimensionality, which decreases the classification performance when the number of features increases, to be avoided. Lastly, since our features are derived from different remote sensing modalities, the feature selection is a way to assess the relevance of each modality in the classification process.
• The supervised classification step:
- The training is based on the forest LC DB, which is natively generalized. Such a generalization results in borders that do not follow the natural borders of the forest. Furthermore, forest stands are not 100% pure. A k-means is employed in order to retrieve only the main component of each forest stand, which is bound to correspond to the genuine tree species, while keeping a certain level of variability to avoid over-fitting.
- The classification itself is performed using the Random Forest. This state-of-the-art algorithm has shown good discrimination performance. It can handle a large number of features from different data sources. Furthermore, it generates the posterior probabilities for each target class, and a feature importance score is also natively obtained. The posterior probabilities can be employed for the subsequent smoothing.
• The regularization step, which can be envisaged at two different levels:
-The local level: in this case, only a local neighborhood is considered to smooth the final results. Methods range from a simple majority vote to the iterative probabilistic relaxation technique. Such local methods are straightforward to implement but do not lead to relevant results for forest stands; the results remain noisy.
-The global level: in this case, all the pixels of the image are taken into account. The aim is to minimize an energy composed of two terms, one related to the classification (unary or data term) and one related to the context (prior or pairwise term); a generic formulation is sketched after this list. Furthermore, additional information and strong thematic constraints can be added to such a model. The choice of the unary term exhibits the strongest influence, while the prior term has a weak impact on the final results. The regularization level (i.e., how smooth the results are) is controlled through the parameter γ (the only impacting parameter of the framework). This parameter could be tuned automatically. However, it appears that γ = 10 produces the most accurate results compared to the forest LC DB regardless of the concerned area. Besides, this parameter can be tuned differently depending on the expected outputs. When γ < 10, the resulting segmentation is less smooth and small pure segments are conserved (the accuracy decreases but small pure segments are detected). If γ > 10, the results are over-generalized. Such generalization can be interesting for a national mapping, since only the dominant species are highlighted.
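As referenced above, a generic form of this energy can be sketched as follows (a sketch only, consistent with the linear unary term and the Potts prior mentioned later in this chapter; the exact normalization and the neighborhood system, assumed here to be the 8-connected pixel grid, may differ in the actual implementation):

\[ E(C) \;=\; \sum_{u \in I} \big(1 - P(C(u) \mid u)\big) \;+\; \gamma \sum_{(u,v) \in \mathcal{N}} \mathbf{1}\big[C(u) \neq C(v)\big] \]

where P(C(u) | u) is the posterior probability delivered by the Random Forest for pixel u, \mathcal{N} is the set of neighboring pixel pairs, and γ weights the smoothness prior against the data term.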
The proposed framework allows the extraction of homogeneous forest stands that are relevant according to the French Forest LC DB. It has only a few parameters, the most influential being the γ of the regularization. The modularity of the framework is also a great advantage, since it can be fed with different inputs for comparison or improvement. Thus, when trying different configurations (i.e., testing different inputs for each step), one can seek the best fusion scheme.
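As a complement, a minimal sketch of the per-stand k-means filtering used to design the training set could look as follows (an illustration only: the number of clusters, the data layout and the function name are assumptions, not the exact implementation of this work):

```python
import numpy as np
from sklearn.cluster import KMeans

def select_training_pixels(features, stand_ids, n_clusters=3):
    """Keep, for each forest stand of the LC DB, only the pixels belonging to the
    dominant k-means cluster, as a simple way to discard the impurities of a
    generalized database while keeping some intra-stand variability."""
    keep = np.zeros(len(stand_ids), dtype=bool)
    for sid in np.unique(stand_ids):
        idx = np.where(stand_ids == sid)[0]
        if len(idx) <= n_clusters:        # too few pixels to cluster: keep them all
            keep[idx] = True
            continue
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(features[idx])
        dominant = np.bincount(labels).argmax()
        keep[idx[labels == dominant]] = True
    return keep
```

The pixels flagged by such a mask would then be used to train the Random Forest.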
Fusion schemes
The fusion is operated at different levels in each step of the framework.
• In the feature computation step, the data and algorithm employed for the object extraction correspond to a medium level of fusion and more precisely to a cooperative understanding of the data.
• In the classification step, the fusion is operated at both the low and medium levels. The feature selection retains a relevant subset of features among the 95 available (70 spectral-based + 25 lidar-based). The selected features are employed for the supervised classification and show the complementarity of both data sources.
• In the regularization step, the fusion can be performed at the high and medium levels (since the output of the classification and the features are jointly employed) when a global model that takes the features into account is used. In such a model, the classification output (posterior probabilities) and the feature values are both employed in order to obtain a final smooth decision.
Here, the modularity of the framework allows one to efficiently assess where the fusion is crucial. We can conclude that there is not a single best fusion scheme:
• In the feature computation step, the object extraction does not have an important impact on the final results; it only slightly impacts the classification results. Thus, any over-segmentation method can be employed. For better classification results, an over-segmentation based on the VHR optical image is recommended.
• In the classification step, both spectral and lidar features are required. Indeed, even with an unbalanced proportion of spectral features (70 out of 95) and lidar features (25 out of 95), 60% of the selected features are spectral-based and 40% are lidar-based (a sketch of the forward selection is given after this list). When employing only a single data source in the classification, the accuracy of the results greatly decreases. The fusion is crucial in the classification step: the two remote sensing modalities are complementary here.
• In the regularization step, when the features are not taken into account in the global methods, the obtained results already report very good accuracies. Slightly better results can be obtained when employing the feature values in the energy formulation. Here again, employing both data sources leads to the best results (but is not mandatory).
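As referenced above, the forward feature selection driven by the κ of a Random Forest can be sketched as follows (illustrative only: the validation split, the number of trees and the stopping criterion are assumptions, not the exact settings used in this work):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def forward_selection(X, y, n_selected=20, n_trees=100):
    """Greedy forward selection of features, scored by the kappa of a Random Forest."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    remaining = list(range(X.shape[1]))
    selected, history = [], []
    while len(selected) < n_selected and remaining:
        scores = []
        for f in remaining:
            feats = selected + [f]
            rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1,
                                        random_state=0)
            rf.fit(X_tr[:, feats], y_tr)
            scores.append(cohen_kappa_score(y_va, rf.predict(X_va[:, feats])))
        best = int(np.argmax(scores))
        selected.append(remaining.pop(best))
        history.append(scores[best])   # kappa as a function of the subset size
    return selected, history
```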
The proposed framework allows one to determine how the fusion should be carried out in order to obtain the best results. It also makes it possible to define what kind of results can be expected at different levels, or when employing only one data source.
• Employing only lidar-based features does not yield consistent results for forest stand segmentation in this context.
• Employing only image-based features allows relevant forest stands to be obtained; however, confusions between some classes remain.
In the end, three schemes are proposed:
S1: A low-cost fusion scheme that yields exploitable results with limited computing times but limited accuracy. This scheme only uses the spectral features of the VHR optical images; thus, no fusion is operated, but the results remain exploitable. It is straightforward to implement. It cannot be employed for a precise delineation but gives satisfactory results when no lidar is available.
S2: A time/cost-effective fusion scheme; very satisfactory results are reported for decent computation times:
-Only a subset of relevant features (spectral and lidar) is computed (they have been previously defined through a global feature selection).
-The objects are extracted employing an efficient segmentation algorithm (PFF) on the VHR optical images.
-The classification is performed using the object-based features.
-The regularization is performed using a global model with a linear unary term and a Potts prior, which is a standard formulation in many remote sensing applications.
S3: An efficient fusion scheme; the best results are reported, at the cost of high computation times and load:
-All the 95 features are computed.
-The objects are extracted employing an efficient segmentation algorithm (PFF) on the VHR optical images.
-The classification is performed using a selection of the object-based features.
-The regularization is performed using a global model with a linear unary term and an exponential-feature prior.
S3 is recommended for higher accuracy, S2 may be more suitable for scalability purposes, and S1 when no lidar data is available. A schematic summary of the three schemes is given below.
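For illustration, the three schemes can be summarised as configurations of the pipeline; the dictionary below is only a schematic description (the option names are ours, not identifiers from the actual implementation):

```python
# Schematic description of the three proposed fusion schemes (illustrative only).
FUSION_SCHEMES = {
    "S1": {  # low cost: VHR optical images only, no fusion
        "features": "spectral features only",
        "object_extraction": "over-segmentation of the VHR optical image",
        "classification": "Random Forest on object-based features",
        "regularization": {"unary": "linear", "prior": "Potts", "gamma": 10},
    },
    "S2": {  # time/cost-effective fusion
        "features": "pre-selected subset of spectral and lidar features",
        "object_extraction": "PFF segmentation of the VHR optical image",
        "classification": "Random Forest on object-based features",
        "regularization": {"unary": "linear", "prior": "Potts", "gamma": 10},
    },
    "S3": {  # most accurate, highest computational load
        "features": "all 95 features (70 spectral + 25 lidar), selected afterwards",
        "object_extraction": "PFF segmentation of the VHR optical image",
        "classification": "Random Forest on selected object-based features",
        "regularization": {"unary": "linear", "prior": "exponential-feature", "gamma": 10},
    },
}
```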
Quantitative features
In the previous workflow, it appears that lidar is only useful in the classification process. This is probably due to the low point cloud density. While the spectral information allows the tree species to be discriminated efficiently, lidar provides information about the vertical structure of the forest. Thus, it is possible to extract other relevant forest indicators from lidar. Once the stands are delineated, one can compute the tree density per stand. Such information can then be employed for forest exploitation, or to derive statistics and to help/improve inventory tasks. Furthermore, once the height and the tree species are determined, allometric equations can be employed in order to derive other relevant information from the stands [START_REF] Muukkonen | Generalized allometric volume and biomass equations for some tree species in Europe[END_REF]; a toy sketch of such per-stand statistics is given below. Here, lidar appears as a meaningful measurement tool for tree height that can be employed for purposes other than mapping [START_REF] Hyyppa | A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners[END_REF].
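For instance, once trees have been detected in the lidar point cloud and assigned to the delineated stands, per-stand indicators could be derived along the following lines (a sketch only; the allometric coefficients a and b are placeholders, real values being species-specific and taken from published allometric tables):

```python
import numpy as np

def stand_statistics(tree_height, stand_label, stand_area_ha, a=0.05, b=2.0):
    """Per-stand tree density, mean height and a generic power-law allometric estimate.

    tree_height   : (N,) heights of the detected trees, from the lidar nDSM [m]
    stand_label   : (N,) stand identifier of each detected tree
    stand_area_ha : dict {stand id: area in hectares}
    a, b          : placeholder allometric coefficients (species-specific in practice)
    """
    stats = {}
    for sid, area in stand_area_ha.items():
        h = tree_height[stand_label == sid]
        stats[sid] = {
            "density_trees_per_ha": len(h) / area,
            "mean_height_m": float(h.mean()) if len(h) else 0.0,
            # generic allometry: per-tree volume (or biomass) ~ a * height**b
            "estimated_volume": float(np.sum(a * h ** b)),
        }
    return stats
```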
Perspectives
The proposed framework allows forest stands to be extracted efficiently. However, some improvements can be envisaged. Firstly, other remote sensing data sources could be employed. Secondly, the different steps of the framework could be improved. Finally, experiments could be conducted by applying the proposed framework to different target classes.
Relevance of other remote sensing data sources
In this work, only standard VHR spectral optical images and a low-density (1-5 points/m2) lidar point cloud were used.
Employing hyperspectral images could be interesting, since the high spectral resolution would allow tree species to be discriminated more precisely [START_REF] Dalponte | Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data[END_REF][START_REF] Liu | Fusion of airborne hyperspectral and LiDAR data for tree species classification in the temperate forest of northeast China[END_REF]Torabzadeh et al., 2015;[START_REF] Clark | Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales[END_REF]. However, such data mostly have a lower spatial resolution, or a sufficient spatial resolution but over very limited areas. Thus, the statistical features that have proven efficient here for tree species classification could not be derived. Feature selection would allow the impact of a higher spectral resolution to be estimated despite an inferior spatial one.
A higher-density lidar acquisition would also be interesting. Firstly, with more points, trees could be delineated more precisely and structural shape features at the tree level could be derived. For instance, a convex envelope can be extracted for each tree, and the volume of the envelope or penetration indices can be derived [START_REF] Lin | A comprehensive but efficient framework of proposing and validating feature parameters from airborne LiDAR data for tree species classification[END_REF]Li et al., 2013a;[START_REF] Ko | Tree genera classification with geometric features from high-density airborne LiDAR[END_REF]. Again, feature selection would allow a precise and objective assessment of the relevance of such high-density lidar. Multi-wavelength lidar also appears to be very promising for forest analysis [START_REF] Budei | Identifying the genus or species of individual trees using a three-wavelength airborne lidar system[END_REF], since it provides both "spectral" and spatial information.
Improvement
As explained before, new features could be derived when employing other data sources. However, even with VHR optical images and low-density lidar, other meaningful features that have not been tested in this work could be envisaged. The most interesting and obvious possibility would be to derive deep-based features. Indeed, deep-learning algorithms allow features to be learned directly, which may be of interest in such an unstructured environment. Deep-based features are optimized for specific classification tasks and can be employed as input to traditional classification algorithms (such as RF or SVM) [START_REF] Kontschieder | Deep neural decision forests[END_REF].
The classification could also be performed directly using deep neural networks [START_REF] Paisitkriangkrai | Semantic labeling of aerial and satellite imagery[END_REF][START_REF] Paisitkriangkrai | Effective semantic pixel labelling with convolutional networks and conditional random fields[END_REF][START_REF] Wegner | Cataloging public objects using aerial and street-level images-urban trees[END_REF][START_REF] Workman | A Unified Model for Near and Remote Sensing[END_REF]. They have been shown to deliver better results than traditional classification algorithms. The main advantage of deep methods is that once a model is learned, it can be applied to other areas without being retrained. Finally, efforts could also be devoted to the smoothing methods. One can consider the fusion of different complementary classification outputs [START_REF] Ouerghemmi | A two-step decision fusion strategy: application to hyperspectral and multispectral images for urban classification[END_REF]. Thus, the integration of classifications from different data sources at different spatial resolutions could be assessed.
Application to other land-cover problems
The proposed framework has been developed specifically for forest stand segmentation, and especially to retrieve smooth, reasonably large stand segments, but it could be applied to other semantic segmentation problems. Indeed, in the end, no a priori knowledge about forested areas has been inserted, and all labels are considered equally and in an agnostic way. Thus, our pipeline exhibits rather general applicability. Experiments have been conducted on a small urban area (see C.39). Here, the framework has been applied using the same standard parameters as the ones employed in forested areas. Only the parameter γ has been tuned in order to obtain consistent segments with respect to the urban LC DB. Indeed, using too high a γ leads to over-smoothed results; consequently, smaller elements such as roads are removed.
The framework therefore also seems suited to urban semantic labeling. Indeed, all the global quality metrics indicate that the framework performs well in the discrimination of the urban classes, even with a high γ. However, some classes are poorly retrieved (e.g., roads). Since such classes are under-represented, a loss of accuracy on them barely decreases the global results. A visual inspection shows that the results do not give a relevant description of the scene (e.g., the roads are not continuous).
The results could be improved with two procedures:
• Extracting features that are more related to urban environments,
• Proposing a new formulation of the energy that takes into account the variation of the height gradient (i.e., integrating border constraints), the continuity of classes (e.g., roads are connected to each other) [START_REF] Tokarczyk | Features, color spaces, and boosting: New insights on semantic classification of remote sensing images[END_REF][START_REF] Wegner | A higher-order CRF model for road network extraction[END_REF], and/or meaningful transitions between classes [START_REF] Volpi | Semantic segmentation of urban scenes by learning local class interactions[END_REF]; a possible contrast-sensitive prior is sketched below.
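Regarding the first point, a possible contrast-sensitive pairwise term could modulate the Potts prior by the local height gradient of the nDSM (a sketch only; the Gaussian weighting and its bandwidth σ_h are assumptions):

\[ V_{u,v}\big(C(u), C(v)\big) \;=\; \gamma \, \exp\!\left(-\frac{\big(h(u) - h(v)\big)^{2}}{2\,\sigma_h^{2}}\right) \, \mathbf{1}\big[C(u) \neq C(v)\big] \]

where h(u) is the nDSM height at pixel u: label changes become cheaper across strong height discontinuities, so segment borders tend to snap to them.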
Final outlook
A fully automatic, modular workflow has been proposed for the extraction of tree-species forest stands using airborne VHR optical images and an airborne lidar point cloud. It involves different image processing algorithms, such as segmentation, classification and smoothing. Attention was also paid to feature extraction: meaningful features that have shown their efficiency for tree species classification were extracted. The proposed framework produces excellent results for forest stand segmentation while highlighting the complementarity of both remote sensing data sources. The framework fulfills most of the operational constraints of a national mapping agency: no critical parameters, decent computing times, automation, selection of the level of detail, and quantitative metrics for more in-depth forest analysis.
Recall or user's accuracy
For the class i ∈ [1, r], the recall (or user's accuracy) r_i is defined as follows:

\[ r_i = \frac{n_{ii}}{n_{.i}} = \frac{TP}{P} \]  (C.3)

It is the accuracy from the point of view of a map user, not the map maker. The user's accuracy essentially tells us how often the class shown on the map is actually present on the ground; this is referred to as the reliability of the map. The user's accuracy is the complement of the commission error (user's accuracy = 100% − commission error). When a class is not represented in the classification map, the recall cannot be computed.
Intersection over Union
The Intersection over Union (or Jaccard index) [START_REF] Jaccard | The distribution of the flora in the alpine zone[END_REF] measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets. It has been designed for the evaluation of object detection. For a class i, the Intersection over Union (IoU i ) is defined as follow:
\[ \mathrm{IoU}_i = \frac{n_{ii}}{n_{i.} + n_{.i} - n_{ii}} = \frac{TP}{P' + P - TP} \]  (C.4)

where P' = n_{i.} is the number of pixels predicted as class i and P = n_{.i} is the number of reference pixels of class i.
F-score
It is the harmonic mean of precision and recall: it considers both the precision p and the recall r to compute the score. The F-score can be interpreted as a weighted average of the precision and recall, reaching its best value at 1 and its worst at 0. The F-score (F_1) of the class i is defined as follows:

\[ F_{1,i} = \frac{2\, p_i\, r_i}{p_i + r_i} \]  (C.5)
Accuracy
The accuracy (A_i) (or relative observed agreement among raters) of the class i is computed as follows:

\[ A_i = \frac{TP + TN}{n} \]  (C.6)
Kappa coefficient
The Kappa coefficient [START_REF] Cohen | A coefficient of agreement for nominal scales[END_REF] (κ_i) is generated from a statistical test to evaluate the accuracy of a classification. Kappa essentially evaluates how well the classification performs compared to randomly assigning values (i.e., did the classification do better than randomness?). The Kappa coefficient can range from -1 to 1. A value of 0 indicates that the classification is no better than a random classification. A negative value indicates that the classification is significantly worse than random. A value close to 1 indicates that the classification is significantly better than random. It is computed from the observed agreement P0 and the chance agreement Pe as

\[ \kappa_i = \frac{P_0 - P_e}{1 - P_e} \]
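For reference, the per-class metrics above can be computed from a confusion matrix whose rows are the predicted classes and whose columns are the reference classes, as in the tables below. The following sketch is an illustration only (in particular, the per-class κ is computed here in a one-vs-rest fashion, which is an assumption about how the tables were produced):

```python
import numpy as np

def per_class_metrics(cm):
    """cm: square confusion matrix, rows = predicted classes, columns = reference classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    tp = np.diag(cm)
    pred_tot = cm.sum(axis=1)            # n_i.  pixels predicted as class i
    ref_tot = cm.sum(axis=0)             # n_.i  reference pixels of class i
    tn = n - pred_tot - ref_tot + tp
    precision = tp / pred_tot
    recall = tp / ref_tot
    iou = tp / (pred_tot + ref_tot - tp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / n
    p0 = accuracy                                                        # observed agreement
    pe = (pred_tot * ref_tot + (n - pred_tot) * (n - ref_tot)) / n ** 2  # chance agreement
    kappa = (p0 - pe) / (1 - pe)
    return {"precision": precision, "recall": recall, "iou": iou, "f1": f1,
            "accuracy": accuracy, "P0": p0, "Pe": pe, "kappa": kappa,
            "overall_accuracy": tp.sum() / n}
```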
C.2 Flowchart assessment
In this section, the confusion matrices of the different experiments are presented. They allow one to see precisely where confusions occur. Even though they are relevant for such an analysis, the metrics at the class level and at the global level are sufficient to evaluate the reliability of the method.
The confusion matrices are related to the area Vosges1.
C.2.1 Over-segmentation
The confusion matrices resulting from the different proposed over-segmentation methods are presented here.
Trees.
C.2.2 Classification
The confusion matrices resulting from the different experiments of the classification are presented here.
Pixel-based classification versus object-based classification.
C.2.4 Regularization
The confusion matrices resulting from the different experiments of the regularization are presented here. Both local and global methods are presented here.
Local methods
Majority filter.
Global methods
Impact of the parameter γ.
C.3 Can forest stands be simply retrieved?
A segmentation of the input data (i.e., VHR optical image or nDSM) is operated, and a majority vote is performed. The resulting confusion matrices are presented here.
C.4 Test on urban area
In this section, results are presented for a small urban area (Figure C.1), processed with exactly the same pipeline, data sources and data specifications; the results are presented in Figure C.2 and the associated table. The main problem of the urban ground truth is that roads and railways are represented as lines instead of polygons.
The tuning of the parameter γ is an important issue, since different values of γ might be acceptable depending on the level of detail expected for the segmentation. In forest inventory, having small regions of pure species is interesting for understanding the behavior of the forest. For generalization purposes (such as forest LC), the segments must have a decent size and may exhibit variability. The corresponding results are shown in Figures 4.23 and 4.22. When γ is low, the borders are rough and small regions might appear (Figure 4.23c). The match with the forest LC DB is very good (overall accuracy: 97.4%, κ: 0.95, mean F-score: 94.1% and IoU: 89.16%). Increasing γ smooths the borders with, again, very good results (overall accuracy: 97.44%, κ: 0.95, mean F-score: 94.04% and IoU: 88.97%). However, a too high value has a negative impact on the results, reducing the size of meaningful segments (Figure 4.23e) or even removing them (Figure 4.23f), drastically reducing the match with the forest LC DB (overall accuracy: 96.06%, κ: 0.92, mean F-score: 89.02% and IoU: 81.32%).

[Figure 4.22: regularization accuracy metrics (IoU, F-score, overall accuracy and κ, in %) as a function of γ, for γ ranging from 5 to 20.]
TABLE 4.1: Average computation times of the different steps of the framework for a 1 km2 area.
Confusion matrix
Label 1 6 7 17 Precision
1 179476 2 4968 0 97.31
6 3417 419132 87366 0 82.2
7 615727 534859 16777216 1943289 84.43
17 0 0 36207 111376 75.47
Recall 22.47 43.93 99.24 5.421
Accuracy metrics
Label 1 6 7 17 Overall
IoU 22.33 40.12 83.89 5.327 37.92
F-score 36.51 57.26 91.24 10.11 48.78
Accuracy 96.99 96.98 84.44 90.44 99.25
P0 0.97 0.97 0.84 0.9 0.84
Pe 0.95 0.93 0.79 0.9 0.79
κ 0.3558 0.5585 0.2575 0.08904 0.275
TABLE 4.2: Confusion Matrix and accuracy metrics of the classification.
TABLE 4.3: Confusion Matrix and accuracy metrics of the regularization.
TABLE 4.4: Confusion Matrix and accuracy metrics of the classification (Ventoux1).
Confusion matrix
Label 1 2 3 9 11 17 18 Precision
1 58218 0 3247 2759 331 0 0 90.18
2 0 154297 772 79 186 439 2404 97.55
3 21038 85103 3827279 66812 188176 41743 32783 89.78
9 0 11572 175368 3043399 292148 4760 4766 86.17
11 139 45061 384619 294910 3083873 43374 9964 79.85
17 0 0 1045 0 5905 104020 13 93.73
18 0 0 31 829 154 126 29367 96.26
Recall 73.33 52.12 87.13 89.28 86.36 53.49 37.03
Accuracy metrics
Label 1 2 3 9 11 17 18 Overall
IoU 67.91 51.45 79.27 78.09 70.91 51.64 36.51 62.25
F-score 80.89 67.94 88.44 87.7 82.98 68.11 53.49 75.65
Accuracy 99.77 98.79 91.68 92.9 89.48 99.19 99.58 85.69
P0 1 0.99 0.92 0.93 0.89 0.99 1 0.86
Pe 0.99 0.96 0.54 0.59 0.57 0.97 0.99 0.31
κ 0.8077 0.6738 0.8194 0.827 0.7538 0.6773 0.5332 0.7929
Confusion matrix
Label 1 2 3 9 11 17 18 Precision
1 62834 0 6 1715 0 0 0 97.33
2 0 150450 5533 0 2194 0 0 95.11
3 6556 0 4185074 24676 30802 15531 295 98.17
9 0 0 28668 3463198 40147 0 0 98.05
11 0 5123 145885 193255 3500313 17364 0 90.64
17 0 0 236 0 5770 104977 0 94.59
18 0 0 0 517 0 0 29990 98.31
Recall 90.55 96.71 95.87 94.02 97.8 76.14 99.03
Accuracy metrics
Label 1 2 3 9 11 17 18 Overall
IoU 88.36 92.13 94.19 92.3 88.82 72.96 97.36 89.45
F-score 93.82 95.9 97.01 95.99 94.08 84.37 98.66 94.26
Accuracy 99.93 99.89 97.85 97.6 96.34 99.68 99.99 95.64
P0 1 1 0.98 0.98 0.96 1 1 0.96
Pe 0.99 0.97 0.54 0.58 0.57 0.98 0.99 0.31
κ 0.9379 0.9585 0.9533 0.9428 0.9143 0.8421 0.9866 0.9364
TABLE 4.5: Confusion Matrix and accuracy metrics of the regularization (Ventoux1).
Similar results are observed on the small area. The classification reports high precisions (overall accuracy: 92.06%, κ: 0.64, mean F-score: 71.06%, IoU: 58.08%); however, the recall rates are not sufficient for Larch (14), non-pectinated fir (labeled as Other conifer other than pine, 16) and Herbaceous formation (18). The global metrics also report good results. After regularization, most of the precision and recall rates are improved, except for the Herbaceous formation (18). Indeed, this class is represented as a thin strip that is totally merged with Black pine (9). This is quite a specific situation in the Forest LC DB (usually, objects are less elongated/thin). The global metrics confirm the good results (overall accuracy: 98.09%, κ: 0.87, mean F-score: 84.58%, IoU: 77.91%); the mean F-score reflects the discussed confusion in the final result.
TABLE 4.6: Confusion Matrix and accuracy metrics of the classification (Ventoux2).
Confusion matrix
Label 3 9 14 16 18 Precision
3 55365 250 0 0 0 99.55
9 2055 2325662 8149 5015 4876 99.14
14 0 4198 91899 0 0 95.63
16 0 208 0 20308 0 98.99
18 0 24004 0 0 11183 31.78
Recall 96.42 98.78 91.85 80.2 69.64
Accuracy metrics
Label 3 9 14 16 18 Overall
IoU 96 97.95 88.16 79.54 27.91 77.91
F-score 97.96 98.96 93.71 88.61 43.64 84.58
Accuracy 99.91 98.09 99.52 99.8 98.87 98.09
P0 1 0.98 1 1 0.99 0.98
Pe 0.96 0.85 0.93 0.98 0.98 0.85
κ 0.9791 0.8696 0.9345 0.885 0.4315 0.8733
TABLE 4.7: Confusion Matrix and accuracy metrics of the regularization (Ventoux2).
Confusion matrix
Label 1 3 8 13 14 15 17 Precision
1 2428277 0 16432 102640 0 66 35097 94.03
3 2 898716 239 15216 7345 0 5734 96.92
8 26741 36035 1937428 37313 0 2553 0 94.97
13 60069 27193 32422 6088861 5109 4358 50605 97.13
14 0 370 0 0 181000 0 0 99.8
15 1066 0 1237 7285 0 468298 0 97.99
17 281 0 0 0 0 0 234597 99.88
Recall 96.5 93.39 97.47 97.4 93.56 98.53 71.95
Accuracy metrics
Label 1 3 8 13 14 15 17 Overall
IoU 90.92 90.7 92.68 94.68 93.38 96.58 71.89 90.12
F-score 95.25 95.12 96.2 97.27 96.58 98.26 83.65 94.62
Accuracy 98.09 99.28 98.8 97.31 99.9 99.87 99.28 96.26
P0 0.98 0.99 0.99 0.97 1 1 0.99 0.96
Pe 0.68 0.86 0.73 0.5 0.97 0.93 0.96 0.32
κ 0.9405 0.9473 0.9549 0.9461 0.9653 0.9819 0.8329 0.9454
TABLE 4.9: Confusion Matrix and accuracy metrics of the regularization (Vosges2).
Confusion matrix
Label 1 3 8 13 15 Precision
1 25221 0 0 248 66 98.77
3 806 195007 330 1691 3547 96.83
8 0 231 38520 382 0 98.43
13 8148 1844 3356 266015 13680 90.78
15 11065 35230 19069 136006 859279 81.01
Recall 55.75 83.94 62.86 65.79 98.03
Accuracy metrics
Label 1 3 8 13 15 Overall
IoU 55.37 81.7 62.24 61.67 79.71 68.14
F-score 71.27 89.93 76.73 76.29 88.71 80.59
Accuracy 98.74 97.3 98.56 89.79 86.5 85.45
P0 0.99 0.97 0.99 0.9 0.87 0.85
Pe 0.96 0.77 0.94 0.66 0.51 0.42
κ 0.7068 0.8838 0.7602 0.6999 0.7229 0.7497
TABLE 4.8: Confusion Matrix and accuracy metrics of the classification (Vosges2).
On Vosges3, similar results are observed: the classification is not relevant (mean F-score: 80.59%, IoU: 68.14%). Indeed, confusions are reported for Douglas fir (15) (precision: 81.01%): young stands of Douglas fir (15) generally contain a significant amount of broadleaved trees (such as Deciduous oaks, 1) and have a similar aspect to other conifers (such as Fir or Spruce, 13), which explains the observed confusions. After regularization, the results are again greatly improved (overall accuracy: 98.76%, κ: 0.98, mean F-score: 97.29%, IoU: 94.89%). The precision and recall rates have globally been improved. Only the precision for Deciduous oaks (1) has decreased, because the regularization process has eroded the borders of this class. In order to retrieve this class more precisely, the parameter γ should be decreased. As explained in Section 4.2.4, the borders are less smooth in this case and some new small segments can appear.
TABLE 4.10: Confusion Matrix and accuracy metrics of the classification (Vosges3).
Confusion matrix
Label 3 8 13 15 Precision
3 89744 0 1135 1189 97.48
8 79 234733 376 3432 98.37
13 16287 3192 533305 47949 88.78
15 36733 12171 95749 791577 84.55
Recall 62.83 93.86 84.58 93.77
Accuracy metrics
Label 3 8 13 15 Overall
IoU 61.82 92.42 76.41 80.05 77.68
F-score 76.41 96.06 86.62 88.92 87
Accuracy 97.03 98.97 91.18 89.44 88.31
P0 0.97 0.99 0.91 0.89 0.88
Pe 0.88 0.77 0.56 0.5 0.36
κ 0.749 0.9547 0.8005 0.7889 0.8185
TABLE 4.11: Confusion Matrix and accuracy metrics of the regularization (Vosges3).
TABLE 4.12: Confusion Matrix and accuracy metrics of the classification (Vosges4).
Confusion matrix
Label 3 8 13 15 Precision
3 90435 0 578 1055 98.23
8 0 237335 1075 210 99.46
13 2140 966 587349 10278 97.77
15 11513 2137 21032 901548 96.3
Recall 86.88 98.71 96.28 98.74
Accuracy metrics
Label 3 8 13 15 Overall
IoU 85.54 98.18 94.21 95.12 93.27
F-score 92.21 99.08 97.02 97.5 96.45
Accuracy 99.18 99.77 98.07 97.52 97.27
P0 0.99 1 0.98 0.98 0.97
Pe 0.9 0.78 0.56 0.5 0.37
κ 0.9178 0.9895 0.9559 0.9505 0.9567
TABLE 4.13: Confusion Matrix and accuracy metrics of the regularization (Vosges4).
Confusion matrix
Label 3 4 8 13 15 Precision
3 771298 0 0 2584 0 99.67
4 13611 22887 0 0 0 62.71
8 54417 0 74760 7448 0 54.72
13 103070 1021 0 702113 0 87.09
15 0 224 0 1234 38546 96.36
Recall 81.84 94.84 100 98.42 100
Accuracy metrics
Label 3 4 8 13 15 Overall
IoU 81.62 60.64 54.72 85.89 96.36 75.84
F-score 89.88 75.5 70.73 92.41 98.14 85.33
Accuracy 90.31 99.17 96.55 93.57 99.92 89.76
P0 0.9 0.99 0.97 0.94 1 0.9
Pe 0.5 0.97 0.89 0.51 0.96 0.41
κ 0.8076 0.7509 0.6907 0.8686 0.981 0.8266
TABLE 4.14: Confusion Matrix and accuracy metrics of the classification (Vosges5).
TABLE 4.15: Confusion Matrix and accuracy metrics of the regularization (Vosges5).
TABLE 6.2: Accuracy metrics of the low cost scheme (scheme using only VHR optical images).
Accuracy metrics of the classification
Label 1 4 5 13 Overall
IoU 71.71 38.21 24.81 73.17 51.97
F-score 83.52 55.29 39.76 84.51 65.77
Accuracy 81 98.64 88.17 90.39 79.1
P0 0.81 0.99 0.88 0.9 0.79
Pe 0.51 0.97 0.82 0.57 0.43
κ 0.6162 0.5473 0.3519 0.7754 0.6319
Accuracy metrics of the regularization
Label 1 4 5 13 Overall
IoU 95.16 56.93 68.97 89.53 77.65
F-score 97.52 72.55 81.63 94.48 86.55
Accuracy 96.9 99.62 98.36 96.43 95.66
P0 0.97 1 0.98 0.96 0.96
Pe 0.53 0.99 0.91 0.56 0.5
κ 0.9338 0.7238 0.8078 0.9184 0.9136
Accuracy metrics of the classification
Label 1 4 5 13 Overall
IoU 90.09 81.11 53.24 83.24 76.92
F-score 94.78 89.57 69.49 90.85 86.17
Accuracy 93.46 99.81 96.39 94.43 92.04
P0 0.93 1 0.96 0.94 0.92
Pe 0.53 0.98 0.89 0.58 0.49
κ 0.8602 0.8947 0.6767 0.8685 0.8442
Accuracy metrics of the regularization
Label 1 4 5 13 Overall
IoU 95.64 78.97 69.84 91.28 83.93
F-score 97.77 88.25 82.25 95.44 90.93
Accuracy 97.17 99.79 98.34 97.17 96.24
P0 0.97 1 0.98 0.97 0.96
Pe 0.54 0.98 0.91 0.57 0.5
κ 0.9391 0.8814 0.8138 0.9339 0.9247
TABLE 6.3: Accuracy metrics of the time cost effective fusion scheme.
Accuracy metrics of the classification
Label 1 4 5 13 Overall
IoU 90.63 83.03 58.98 86.52 79.79
F-score 95.08 90.73 74.2 92.77 88.19
Accuracy 93.89 99.83 97.09 95.48 93.14
P0 0.94 1 0.97 0.95 0.93
Pe 0.53 0.98 0.89 0.57 0.49
κ 0.8701 0.9064 0.7271 0.8948 0.8662
Accuracy metrics of the regularization
Label 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
TABLE 6.4: Accuracy metrics of the efficient fusion scheme.
[START_REF] Postadjian | Investigatin the potential of deep neural networks for large-scale classification of very high resolution satellite images[END_REF] have shown that it is possible to automatically derive training samples from noisy LC DBs for large-scale classification. No preprocessing is required and the training procedure is validated at very large scales (> 10,000 km2). Indeed, the model can be refined at a small cost, leading to better results. The main drawback of these methods is their training time and system requirements: even if transfer is effective, the training of such models needs specific architectures and graphics cards.
TABLE C.3: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2168270 2468 22793 28477 97.58
4 684 29235 0 36 97.6
5 86 0 151836 9789 93.89
13 13407 2457 5586 1070900 98.04
Recall 99.35 85.58 84.25 96.55
Accuracy metrics
Label 1 4 5 13 Overall
IoU 96.96 83.82 79.88 94.72 88.84
F-score 98.46 91.2 88.81 97.29 93.94
Accuracy 98.06 99.84 98.91 98.3 97.55
P0 0.98 1 0.99 0.98 0.98
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9585 0.9111 0.8824 0.9604 0.9515
TABLE C.4: Confusion Matrix and accuracy metrics of the regularization.
Watershed.
Confusion matrix
Label 1 4 5 13 Precision
1 1803817 11010 197844 209337 81.18
4 41 27438 175 2301 91.6
5 8018 1115 140922 11656 87.14
13 72314 15007 39069 965960 88.43
Recall 95.73 50.28 37.28 81.22
Accuracy metrics
Label 1 4 5 13 Overall
IoU 78.35 48.06 35.34 73.42 58.79
F-score 87.86 64.92 52.22 84.67 72.42
Accuracy 85.78 99.15 92.64 90.03 83.8
P0 0.86 0.99 0.93 0.9 0.84
Pe 0.51 0.98 0.86 0.56 0.45
κ 0.7098 0.6453 0.4892 0.773 0.7048
TABLE C.5: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2078028 977 110274 32729 93.52
4 317 26414 0 3224 88.18
5 443 0 153631 7637 95
13 42882 1009 3336 1045123 95.68
Recall 97.94 93.01 57.49 96
Accuracy metrics
Label 1 4 5 13 Overall
IoU 91.72 82.7 55.8 92.01 80.56
F-score 95.68 90.53 71.63 95.84 88.42
Accuracy 94.65 99.84 96.53 97.41 94.21
P0 0.95 1 0.97 0.97 0.94
Pe 0.53 0.98 0.88 0.57 0.48
κ 0.8866 0.9045 0.699 0.9396 0.8879
TABLE C.6: Confusion Matrix and accuracy metrics of the regularization.
Hierarchical segmentation.
Confusion matrix
Label 1 4 5 13 Precision
1 1906809 2069 136358 176772 85.81
4 317 26775 141 2722 89.38
5 2540 0 154347 4824 95.45
13 57939 4404 31601 998406 91.4
Recall 96.91 80.53 47.87 84.42
Accuracy metrics
Label 1 4 5 13 Overall
IoU 83.53 73.5 46.8 78.2 70.51
F-score 91.03 84.73 63.76 87.77 81.82
Accuracy 89.28 99.72 95 92.06 88.03
P0 0.89 1 0.95 0.92 0.88
Pe 0.52 0.98 0.87 0.56 0.47
κ 0.7783 0.8459 0.6139 0.8191 0.7762
TABLE C.7: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2164661 1350 27069 28928 97.42
4 1019 27568 0 1368 92.03
5 373 0 159813 1525 98.83
13 35484 612 5944 1050310 96.15
Recall 98.32 93.36 82.88 97.06
Accuracy metrics
Label 1 4 5 13 Overall
IoU 95.83 86.37 82.07 93.43 89.43
F-score 97.87 92.69 90.15 96.6 94.33
Accuracy 97.31 99.88 99 97.89 97.04
P0 0.97 1 0.99 0.98 0.97
Pe 0.53 0.98 0.9 0.57 0.5
κ 0.9423 0.9263 0.8963 0.9508 0.9412
TABLE C.8: Confusion Matrix and accuracy metrics of the regularization.
Confusion matrix
Label 1 4 5 13 Precision
1 2072223 2635 66560 80590 93.26
4 200 29345 119 291 97.96
5 12398 0 146668 2645 90.7
13 51976 2751 20296 1017327 93.13
Recall 96.98 84.49 62.77 92.41
Accuracy metrics
Label 1 4 5 13 Overall
IoU 90.63 83.03 58.98 86.52 79.79
F-score 95.08 90.73 74.2 92.77 88.19
Accuracy 93.89 99.83 97.09 95.48 93.14
P0 0.94 1 0.97 0.95 0.93
Pe 0.53 0.98 0.89 0.57 0.49
κ 0.8701 0.9064 0.7271 0.8948 0.8662
PFF.
TABLE C.9: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2174574 3039 14711 29684 97.87
4 1643 28105 0 207 93.82
5 2407 0 158293 1011 97.89
13 23327 1550 12265 1055208 96.6
Recall 98.76 85.96 85.44 97.15
Accuracy metrics
Label 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
TABLE C.10: Confusion Matrix and accuracy metrics of the regularization.
Quickshift.
Confusion matrix
Label 1 4 5 13 Precision
1 2010751 1970 85980 123307 90.49
4 441 28932 420 162 96.58
5 8280 0 137651 15780 85.12
13 57765 4237 16873 1013475 92.78
Recall 96.8 82.34 57.13 87.92
Accuracy metrics
Label 1 4 5 13 Overall
IoU 87.86 80.01 51.95 82.29 75.53
F-score 93.54 88.89 68.38 90.28 85.27
Accuracy 92.08 99.79 96.37 93.78 91.01
P0 0.92 1 0.96 0.94 0.91
Pe 0.52 0.98 0.89 0.56 0.48
κ 0.8333 0.8879 0.6653 0.8571 0.8267
TABLE C.11: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2172018 1463 19015 29512 97.75
4 536 28364 0 1055 94.69
5 427 0 147858 13426 91.43
13 23755 1499 4103 1062993 97.31
Recall 98.87 90.54 86.48 96.03
Accuracy metrics
Label 1 4 5 13 Overall
IoU 96.67 86.17 80 93.55 89.1
F-score 98.31 92.57 88.89 96.66 94.11
Accuracy 97.87 99.87 98.95 97.91 97.3
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9543 0.925 0.8833 0.9514 0.9462
TABLE C.12: Confusion Matrix and accuracy metrics of the regularization.
Confusion matrix
Label 1 4 5 13 Precision
1 1999280 2337 61150 159241 89.98
4 398 26354 653 2550 87.98
5 4645 0 152316 4750 94.19
13 44996 2776 18514 1026064 93.93
Recall 97.56 83.75 65.47 86.04
Accuracy metrics
Label 1 4 5 13 Overall
IoU 87.99 75.15 62.93 81.51 76.9
F-score 93.61 85.81 77.25 89.81 86.62
Accuracy 92.22 99.75 97.44 93.36 91.39
P0 0.92 1 0.97 0.93 0.91
Pe 0.52 0.98 0.89 0.56 0.48
κ 0.837 0.8569 0.7594 0.849 0.8345
SLIC.
TABLE C.13: Confusion Matrix and accuracy metrics of the classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2179198 2066 15072 25672 98.07
4 33 28139 0 1783 93.94
5 911 0 158212 2588 97.84
13 16802 1696 6974 1066878 97.67
Recall 99.19 88.21 87.77 97.26
Accuracy metrics
Label 1 4 5 13 Overall
IoU 97.3 83.46 86.1 95.05 90.48
F-score 98.63 90.98 92.53 97.46 94.9
Accuracy 98.27 99.84 99.27 98.42 97.9
P0 0.98 1 0.99 0.98 0.98
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9629 0.909 0.9215 0.9631 0.9583
TABLE C.14: Confusion Matrix and accuracy metrics of the regularization.
TABLE C.15: Confusion Matrix and accuracy metrics of the pixel-based classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2072223 2635 66560 80590 93.26
4 200 29345 119 291 97.96
5 12398 0 146668 2645 90.7
13 51976 2751 20296 1017327 93.13
Recall 96.98 84.49 62.77 92.41
Accuracy metrics
Label 1 4 5 13 Overall
IoU 90.63 83.03 58.98 86.52 79.79
F-score 95.08 90.73 74.2 92.77 88.19
Accuracy 93.89 99.83 97.09 95.48 93.14
P0 0.94 1 0.97 0.95 0.93
Pe 0.53 0.98 0.89 0.57 0.49
κ 0.8701 0.9064 0.7271 0.8948 0.8662
TABLE C.16: Confusion Matrix and accuracy metrics of the object-based classification (PFF).
689 2430 1042032 95.39
Recall 97.67 95.87 46.93 92.37
Accuracy metrics
Label 1 4 5 13 Overall
IoU 89.52 60.41 42.16 88.43 70.13
F-score 94.47 75.32 59.31 93.86 80.74
Accuracy 93.21 99.65 94.9 96.11 91.94
P0 0.93 1 0.95 0.96 0.92
Pe 0.53 0.99 0.88 0.57 0.48
κ 0.8571 0.7516 0.5679 0.9102 0.845
TABLE C.17: Confusion Matrix and accuracy metrics of the regularization after a pixel-based classification.
Confusion matrix
Label 1 4 5 13 Precision
1 2174574 3039 14711 29684 97.87
4 1643 28105 0 207 93.82
5 2407 0 158293 1011 97.89
13 23327 1550 12265 1055208 96.6
Recall 98.76 85.96 85.44 97.15
Accuracy metrics
Label 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
TABLE C.18: Confusion Matrix and accuracy metrics of the regularization after an object-based classification (PFF).
Training set design.
Confusion matrix
Label 1 4 5 13 Precision
1 1776715 14410 268660 162223 79.96
4 167 26119 1203 2466 87.19
5 10505 2383 135761 13062 83.95
13 119353 84627 57826 830544 76.03
Recall 93.18 20.48 29.29 82.37
Accuracy metrics
Label 1 4 5 13 Overall
IoU 75.54 19.88 27.74 65.39 47.14
F-score 86.07 33.17 43.43 79.08 60.44
Accuracy 83.59 97 89.91 87.46 78.98
P0 0.84 0.97 0.9 0.87 0.79
Pe 0.51 0.96 0.83 0.58 0.44
κ 0.6639 0.3223 0.3928 0.7015 0.6242
TABLE C.19: Confusion Matrix and accuracy metrics of the classification without training set design (training pixels are randomly selected).
Confusion matrix
Label 1 4 5 13 Precision
1 2118007 1016 57835 45150 95.32
4 1913 25818 0 2224 86.19
5 517 0 152774 8420 94.47
13 63771 1016 4869 1022694 93.62
Recall 96.97 92.7 70.9 94.83
Accuracy metrics
Label 1 4 5 13 Overall
IoU 92.56 80.71 68.08 89.07 82.61
F-score 96.14 89.33 81.01 94.22 90.17
Accuracy 95.15 99.82 97.96 96.42 94.67
P0 0.95 1 0.98 0.96 0.95
Pe 0.53 0.98 0.9 0.57 0.49
κ 0.8961 0.8924 0.7995 0.9163 0.8948
TABLE C.20: Confusion Matrix and accuracy metrics of the regularization without training set design (training pixels are randomly selected).
TABLE C.21: Confusion Matrix and accuracy metrics of the regularization (r = 5).
Confusion matrix
Label 1 4 5 13 Precision
1 2083643 2660 61569 74136 93.77
4 129 29478 86 262 98.41
5 12003 0 147401 2307 91.15
13 47040 2526 19802 1022982 93.65
Recall 97.24 85.04 64.41 93.02
Accuracy metrics
Label 1 4 5 13 Overall
IoU 91.34 83.88 60.62 87.51 80.84
F-score 95.47 91.24 75.48 93.34 88.88
Accuracy 94.37 99.84 97.27 95.83 93.65
P0 0.94 1 0.97 0.96 0.94
Pe 0.53 0.98 0.89 0.57 0.49
κ 0.8802 0.9116 0.7408 0.9031 0.876
Table C.22: Confusion Matrix and accuracy metrics of the regularization (r = 25).
2299 17229 1039879 95.2
Recall 98.02 85.42 71.37 95
Accuracy metrics
Label 1 4 5 13 Overall
IoU 93.71 85.24 67.85 90.65 84.36
F-score 96.75 92.03 80.85 95.1 91.18
Accuracy 95.94 99.85 97.96 96.94 95.35
P0 0.96 1 0.98 0.97 0.95
Pe 0.53 0.98 0.9 0.57 0.49
κ 0.9133 0.9196 0.7979 0.9287 0.9085
Table C.23: Confusion Matrix and accuracy metrics of the regularization (r = 5).
Table C.24: Confusion Matrix and accuracy metrics of the regularization (γ = 5).
Confusion matrix
Labels 1 4 5 13 Precision
1 2174574 3039 14711 29684 97.87
4 1643 28105 0 207 93.82
5 2407 0 158293 1011 97.89
13 23327 1550 12265 1055208 96.6
Recall 98.76 85.96 85.44 97.15
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
Table C.25: Confusion Matrix and accuracy metrics of the regularization (γ = 10).
Confusion matrix
Labels 1 4 5 13 Precision
1 2184507 2975 5458 29068 98.31
4 2514 27213 0 228 90.85
5 22584 0 137459 1668 85
13 25778 1754 10980 1053838 96.47
Recall 97.72 85.2 89.32 97.15
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.11 78.46 77.16 93.82 86.39
F-score 98.02 87.93 87.11 96.81 92.47
Accuracy 97.48 99.79 98.84 98.02 97.06
P0 0.97 1 0.99 0.98 0.97
Pe 0.54 0.98 0.91 0.57 0.5
κ 0.9456 0.8782 0.865 0.9537 0.9409
Table C.26: Confusion Matrix and accuracy metrics of the regularization (γ = 15).
Confusion matrix
Labels 1 4 5 13 Precision
1 2181685 2552 3839 33932 98.19
4 4080 25647 0 228 85.62
5 51623 0 106115 3973 65.62
13 27573 899 9543 1054335 96.52
Recall 96.32 88.14 88.8 96.51
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 94.64 76.77 60.6 93.26 81.32
F-score 97.25 86.86 75.47 96.51 89.02
Accuracy 96.47 99.78 98.03 97.83 96.06
P0 0.96 1 0.98 0.98 0.96
Pe 0.54 0.98 0.92 0.57 0.51
κ 0.9235 0.8675 0.7447 0.9494 0.9198
Table C.27: Confusion Matrix and accuracy metrics of the regularization (γ = 20).
Unary term.
Confusion matrix
Labels 1 4 5 13 Precision
1 2168246 2951 15896 34915 97.58
4 26 29861 0 68 99.69
5 6376 0 154601 734 95.6
13 14023 2380 12605 1063342 97.34
Recall 99.07 84.85 84.43 96.75
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.69 84.63 81.28 94.26 89.21
F-score 98.32 91.67 89.67 97.05 94.18
Accuracy 97.88 99.85 98.98 98.15 97.43
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9547 0.916 0.8914 0.957 0.9491
Table C.28: Confusion Matrix and accuracy metrics of the regularization using the log-inverse unary/data formulation.
Confusion matrix
Labels 1 4 5 13 Precision
1 2174574 3039 14711 29684 97.87
4 1643 28105 0 207 93.82
5 2407 0 158293 1011 97.89
13 23327 1550 12265 1055208 96.6
Recall 98.76 85.96 85.44 97.15
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
Table C.29: Confusion Matrix and accuracy metrics of the regularization using the linear unary/data formulation.
Prior.
Confusion matrix
Labels 1 4 5 13 Precision
1 2172746 3032 14419 31811 97.78
4 1700 27558 0 697 92
5 2673 0 157955 1083 97.68
13 24348 1737 11912 1054353 96.52
Recall 98.7 85.25 85.71 96.91
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.54 79.36 84 93.64 88.38
F-score 98.24 88.49 91.3 96.72 93.69
Accuracy 97.78 99.8 99.14 97.96 97.34
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9522 0.8839 0.9085 0.9523 0.947
Table C.30: Confusion Matrix and accuracy metrics of the regularization using the Potts model.
Confusion matrix
Labels 1 4 5 13 Precision
1 2174110 3124 13860 30914 97.84
4 2033 27472 0 450 91.71
5 2416 0 158306 989 97.89
13 25814 1653 11583 1053300 96.43
Recall 98.63 85.19 86.15 97.02
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.53 79.1 84.59 93.65 88.47
F-score 98.23 88.33 91.65 96.72 93.73
Accuracy 97.77 99.79 99.18 97.96 97.35
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9521 0.8822 0.9122 0.9524 0.9473
Table C.31: Confusion Matrix and accuracy metrics of the regularization using the z-Potts model.
Confusion matrix
Labels 1 4 5 13 Precision
1 2174574 3039 14711 29684 97.87
4 1643 28105 0 207 93.82
5 2407 0 158293 1011 97.89
13 23327 1550 12265 1055208 96.6
Recall 98.76 85.96 85.44 97.15
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.67 81.36 83.89 93.94 88.97
F-score 98.31 89.72 91.24 96.88 94.04
Accuracy 97.87 99.82 99.13 98.06 97.44
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.9542 0.8963 0.9079 0.9547 0.949
Table C.32: Confusion Matrix and accuracy metrics of the regularization using the Exponential-feature model.
Confusion matrix
Labels 1 4 5 13 Precision
1 2171863 2951 14706 32488 97.74
4 2144 27578 0 233 92.06
5 2633 0 157920 1158 97.66
13 23487 1544 11976 1055343 96.61
Recall 98.72 85.99 85.55 96.89
Accuracy metrics
Labels 1 4 5 13 Overall
IoU 96.52 80.05 83.82 93.71 88.52
F-score 98.23 88.92 91.2 96.75 93.77
Accuracy 97.76 99.8 99.13 97.98 97.34
P0 0.98 1 0.99 0.98 0.97
Pe 0.53 0.98 0.91 0.57 0.5
κ 0.952 0.8882 0.9075 0.9528 0.9471
Table C.33: Confusion Matrix and accuracy metrics of the regularization using the Distance-features model.
Table C.34: Confusion Matrix and accuracy metrics of the classification (object-based hierarchical).
C.3.1 VHR optical images
Confusion matrix
Label 1 4 5 13 Precision
1 1869715 0 6022 346265 84.15
4 8142 0 0 21813 0
5 82695 0 44484 34532 27.51
13 108307 0 25382 958661 87.76
Recall 90.37 - 58.62 70.42
Accuracy metrics
Label 1 4 5 13 Overall
IoU 77.22 0 23.03 64.13 41.1
F-score 87.15 - 37.44 78.14 50.68
Accuracy 84.27 99.15 95.76 84.7 81.94
P0 0.84 0.99 0.96 0.85 0.82
Pe 0.52 0.99 0.93 0.54 0.5
κ 0.6695 0 0.3555 0.6659 0.6417
Table C.35: Confusion Matrix and accuracy metrics when adding semantic information to a direct hierarchical segmentation (µ = 15).
Confusion matrix
Label 1 4 5 13 Precision
1 2021033 0 0 200969 90.96
4 0 0 0 29955 0
5 83682 0 0 78029 0
13 589638 0 0 502712 46.02
Recall 75.01 - - 61.94
Accuracy metrics
Label 1 4 5 13 Overall
IoU 69.8 0 0 35.87 26.42
F-score 82.22 - - 52.81 33.76
Accuracy 75.06 99.15 95.39 74.37 71.98
P0 0.75 0.99 0.95 0.74 0.72
Pe 0.57 0.99 0.95 0.6 0.56
κ 0.4176 0 0 0.3573 0.3644
Table C.36: Confusion Matrix and accuracy metrics when adding semantic information to a direct PFF segmentation (σ = 0.8, k = 500 and m = 40000).
C.3.2 nDSM
Confusion matrix
Label 1 4 5 13 Precision
1 1526001 0 0 696001 68.68
4 0 0 0 29955 0
5 109 0 0 161602 0
13 105832 0 0 986518 90.31
Recall 93.51 - - 52.64
Accuracy metrics
Label 1 4 5 13 Overall
IoU 65.55 0 0 49.83 28.84
F-score 79.19 - - 66.51 36.43
Accuracy 77.13 99.15 95.39 71.67 71.66
P0 0.77 0.99 0.95 0.72 0.72
Pe 0.49 0.99 0.95 0.49 0.46
κ 0.5508 0 0 0.4477 0.4737
Table C.37: Confusion Matrix and accuracy metrics when adding semantic information to a direct hierarchical segmentation (µ = 15).
Confusion matrix
Label 1 4 5 13 Precision
1 1788281 0 59707 374014 80.48
4 5687 0 0 24268 0
5 2510 0 44950 114251 27.8
13 244694 0 2 847654 77.6
Recall 87.61 - 42.95 62.32
Accuracy metrics
Label 1 4 5 13 Overall
IoU 72.26 0 20.3 52.82 36.34
F-score 83.89 - 33.75 69.12 46.69
Accuracy 80.42 99.15 94.97 78.4 76.47
P0 0.8 0.99 0.95 0.78 0.76
Pe 0.52 0.99 0.93 0.54 0.49
κ 0.5903 0 0.3126 0.5282 0.5374
Table C.38: Confusion Matrix and accuracy metrics when adding semantic information to a direct PFF segmentation (σ = 0.8, k = 500 and m = 40000).
Thus only a few pixels are available for training/validation (a dilation could be envisaged, but was not carried out in our tests).
Metric Classification Regularization (γ = 2) Regularization (γ = 10)
OA 86.48 88 90.16
κ 0.6887 0.7172 0.7551
F-score 74.57 77 78.66
IoU 64.33 67.6 70.25
Table C.39: Accuracy metrics of the different results of the method on the urban area.
http://inventaire-forestier.ign.fr/spip/?rubrique67
http://www.atmosedu.com/Geol390/Life/CarbonCycleShort.html
http://ec.europa.eu/eurostat/documents/3217494/5733109/
http://inventaire-forestier.ign.fr/spip/IMG/pdf/Int_memento_2013_BD.pdf
This PhD thesis work was carried out in the Laboratoire LaSTIG of the Institut National de l'Information Géographique et Forestière (IGN).
Data enrichment for inventory
Features derived from the lidar data and the final stand segmentation can be jointly used in order to extract additional information. The features can also be used to define a "new" probability map.
On this area, very good results are reported. In the classification process, some confusions are reported (between Evergreen oaks (2) and Beech (3), and also between Beech (3) and Herbaceous formation (18)), even if the results are already good. The classification precisions for all classes are greater than 80%, and most of the recall rates are also satisfactory (> 70%, except for Evergreen oaks (2), Woody heathland (17) and Herbaceous formation (18)). After regularization, the results are greatly improved (precisions greater than 90% for all classes). Regarding the recalls, only the Woody heathland (18) class suffers from limited confusion with Deciduous oaks (1) and Mountain pine or Swiss pine (11). The global results are very satisfactory, with an IoU of 89.45%, showing that stand borders are well retrieved. The κ of 0.94 shows a nearly perfect agreement. Finally, the mean F-score and overall accuracy confirm the relevance of the results. Furthermore, in the mixed areas (i.e., areas not labeled in the Forest LC DB), we retrieve small stands that are visually coherent, showing that the proposed framework allows finer results to be obtained. A straightforward method to retrieve forest stands could be to simply segment one of the two input remote sensing data sources. Such segmentation algorithms do not take into account information about species. However, they could allow stand borders to be retrieved easily. Furthermore, once the segmentation is performed, one can add semantic information using imperfect classification results.
Two algorithms were employed in order to obtain relevant stands through segmentation of the data alone. This provides a baseline comparison with the framework presented in Chapter 3.
The first segmentation algorithm is the one proposed in [START_REF] Guigues | Scale-sets image analysis[END_REF]. It is a multi-scale hierarchical segmentation algorithm that allows the segmentation level to be controlled through a single scale parameter µ.
The second segmentation algorithm employed (called here PFF) [START_REF] Felzenszwalb | Efficient graph-based image segmentation[END_REF] is a method for image segmentation based on pairwise region comparison, which uses the minimum-weight edge between two regions to measure the intensity difference between them.
Three parameters need to be tuned in order to obtain a relevant segmentation (a usage sketch is given after the list below).
• σ is the standard deviation of the Gaussian filter employed to smooth the image as a preprocessing step (we followed the authors' recommendation, employing σ = 0.8).
• k is a second parameter that sets an observation scale (the larger k, the larger the segments).
• m defines the minimum size of a segment.
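As a usage sketch (not the implementation used in this work), the same algorithm is available in scikit-image; the file name below is a placeholder, and k and m are passed through the scale and min_size arguments:

```python
import numpy as np
from skimage import io
from skimage.segmentation import felzenszwalb

# Placeholder input: the VHR optical image of the test area.
image = io.imread("vosges1_vhr_optical.tif")

# Parameters described above: sigma smooths the image, k sets the
# observation scale, m the minimum segment size.
sigma, k, m = 0.8, 500, 40000

segments = felzenszwalb(image, scale=k, sigma=sigma, min_size=m)
print("number of segments:", len(np.unique(segments)))
```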
These experiments were performed only on area Vosges1 (presented in Figure 4.31). Since they did not produce relevant results, no further experiments were carried out on other areas. It is a 1 km² area; the spatial resolution of the VHR optical image is 0.5 m, and the nDSM has been rasterized at the same resolution. From these data, it can be seen that a stand is composed of areas that are not homogeneous in terms of reflectance and/or height. Furthermore, one can also note that the difference between two stands in terms of reflectance and/or height might not be so important (two distinct stands can be similar).
Additional information extraction
The extraction of information from the lidar features is performed for each stand obtained from the framework. For each stand, the number of trees can be counted and statistics on the height of the stand can be derived. Such information is easy to extract and might be useful for forest understanding (see Figure 4.39). The extracted trees can also be used to derive the tree density for each segment, as sketched below.
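A minimal sketch of this per-stand aggregation, assuming the detected trees have already been assigned the label of the stand they fall in (the column names, values and areas are invented for the example):

```python
import pandas as pd

# Hypothetical table of detected trees: one row per tree, with the
# label of its stand and its estimated height in metres.
trees = pd.DataFrame({
    "stand_id": [1, 1, 1, 2, 2, 3],
    "height_m": [17.2, 19.5, 18.1, 24.3, 26.0, 9.8],
})

# Per-stand tree count and height statistics.
stand_stats = trees.groupby("stand_id")["height_m"].agg(
    n_trees="count", mean_height="mean", max_height="max", std_height="std"
)

# Tree density once the stand areas (in hectares) are known.
areas_ha = pd.Series({1: 2.5, 2: 1.8, 3: 0.9})
stand_stats["trees_per_ha"] = stand_stats["n_trees"] / areas_ha

print(stand_stats)
```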
C.1 Accuracy metrics
In this section, the different accuracy metrics presented in the following tables are detailed. Most are standard metrics for classification evaluation.
All are based on the confusion matrix. The metrics can be computed at the global level, or for each class. In this section, the metrics are defined using the confusion matrix presented in Table C.1 (problem with r classes). In order to compute the metrics for a single class, another confusion matrix can be derived; such a confusion matrix is presented in Table C.2, and its elements are obtained directly from those of Table C.1.
C.1.1 Metrics at the class level
Precision or producer's accuracy
For the class i ∈ [|1, r|], the precision (or producer's accuracy) p i is defined as follows:
It is the accuracy from the point of view of the map maker (the producer). This is how often real samples on the ground are correctly shown on the classified map or the probability that a certain land cover of an area on the ground is classified as such.
The Kappa coefficient is computed as follows:
where P0 is the relative observed agreement among raters, and Pe is the hypothetical probability of chance agreement. They are defined as follows:
C.1.2 Metrics at the global level
Intersection over Union
The overall Intersection over Union score (IoU ) is defined as the mean of the local Intersection over Union scores:
F-score
The overall F-score (F1) is defined as the mean of the local F-scores. If an F1,i cannot be computed, it is considered to be zero.
Overall Accuracy
The overall accuracy (OA) is defined as follows:
Kappa coefficient
The Kappa coefficient is computed as follows:
where P0 is the relative observed agreement among raters, and Pe is the hypothetical probability of chance agreement, both computed from the confusion matrix (see the sketch below).
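As a sketch of how these metrics can be computed in practice from a confusion matrix (with rows giving the predicted labels and columns the reference labels, as in the tables of this appendix); the formulas are the standard ones and may differ in notation from Tables C.1 and C.2:

```python
import numpy as np

def metrics_from_confusion(C):
    """Per-class and overall metrics from a confusion matrix C where
    C[i, j] counts pixels predicted as class i whose reference label is j."""
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)
    fp = C.sum(axis=1) - tp      # predicted as i, reference is another class
    fn = C.sum(axis=0) - tp      # reference is i, predicted as another class
    n = C.sum()

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)

    p0 = tp.sum() / n                                    # overall accuracy
    pe = (C.sum(axis=1) * C.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (p0 - pe) / (1 - pe)

    return {"precision": precision, "recall": recall, "F-score": f_score,
            "IoU": iou, "OA": p0, "P0": p0, "Pe": pe, "kappa": kappa}

# Example with one of the 4-class matrices of this appendix (labels 1, 4, 5, 13).
C = np.array([[2174574, 3039, 14711, 29684],
              [1643, 28105, 0, 207],
              [2407, 0, 158293, 1011],
              [23327, 1550, 12265, 1055208]])
for name, value in metrics_from_confusion(C).items():
    print(name, np.round(value, 4))
```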
00176056 | en | [ "shs.eco", "sde.es" ] | 2024/03/05 22:32:13 | 2007 | https://shs.hal.science/halshs-00176056/file/Flachaire_Hollard_Luchini_06.pdf | Emmanuel Flachaire
email: emmanuel.flachaire@univ-paris1.fr
Keywords: Anchoring, Contingent Valuation, Heterogeneity, Framing effects JEL Classification: Q26, C81, D71
Our method appears successful in discriminating between those who anchor and those who do not. An important result is that when controlling for anchoring -and allowing the degree of anchoring to differ between respondent groups -the efficiency of the double-bounded welfare estimate is greater than for the initial dichotomous choice question. This contrasts with earlier research that finds that the potential efficiency gain from the double-bounded questions is lost when anchoring is controlled for and that we are better off not asking follow-up questions.
Résumé
To control for anchoring, we show that taking such heterogeneity into account yields more precise estimates than those obtained using a single bid. This result contrasts with the literature, which finds that the precision gain obtained by adding a second bid is generally lost in the presence of significant anchoring, to the point that it is better not to offer a second bid.
Introduction
Anchoring is a general phenomenon put forward by [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF]: "In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient. That is, different starting points yield different estimates, which are biased toward the initial values. We call that anchoring". This anchoring problem affects, in particular, survey methods, designed to elicit individual willingness to pay (WTP) for a specific good. Among such surveys, by far the most popular one is the contingent valuation (CV) method. Roughly speaking, this method consists of a specific survey that proposes respondents to consider a hypothetical scenario that mimics a market situation. A long discussion has taken place that analyzes the validity of the contingent valuation method in eliciting individual willingness to pay 1 . In the dichotomous choice CV method, the presence of anchoring bias implies that, "confronted with a dollar figure in a situation where he is uncertain about an amenity's value, a respondent may regard the proposed amount as conveying an approximate value of the amenity's true value and anchor his WTP amount on the proposed amount" [START_REF] Mitchell | Using Surveys to Value Public Goods: The contingent Valuation Method[END_REF]. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] propose a model that takes into account the effect of anchoring. It turns out that there is an important loss of efficiency in the presence of substantial anchoring. The purpose of this paper is to address this issue.
To the best of our knowledge, anchoring has always been considered as a phenomenon affecting the population as a whole. Little attention has been paid to the fact that some individuals may anchor their answers while others may not 2 . The assumption of homogeneous anchoring may be hazardous as it may lead to econometric problems. Indeed, it is well known in standard regression analysis that individual heterogeneity can be a dramatic source of misspecification and if it is not taken into account, its results can be seriously misleading. In the context of this paper, the presence of two groups or types of people (those who are subject to anchoring and those who are not), is a type of individual heterogeneity that could affect empirical results in CV surveys.
The major issue is how to conceive a measurement of individual heterogeneity with respect to anchoring. In other words, if we assume that individuals are of two types, then the question is how can we identify these two distinct groups of people in practice?
In this paper, we propose to develop a methodology that borrows tools from social psychology and allows us to identify the two groups of people. Using the dichotomous choice model developed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], we control for anchoring for each group separately. A noticeable empirical result of our methodology is that when we allow the degree of anchoring to differ between those two groups, the efficiency of the double-bounded model improves considerably. This contrasts with previous research that finds that the efficiency gains from the double-bounded model are lost when anchoring is controlled for.
1 See Mitchell and Carson 1989, Hausman 1993, Arrow et al. 1993, Bateman and Willis 1999.
2 Grether (1980) studies decisions under uncertainty and shows that, although the representativeness heuristic explains some individuals' behavior, Bayesian updating is still accurate for other individuals. He suggests that being familiar with the evaluation of a specific event (in his case, acquired through repeated evaluations in the experiment) leads to more firmly held opinions and, consequently, to behavior more in line with standard economic assumptions. This is also what John List suggests when he compares the behavior of experienced subjects (through previous professional trade experience) and inexperienced subjects [START_REF] List | Neoclassical theory versus prospect theory: Evidence from the marketplace[END_REF].
Conformism as a source of heterogeneity
Heterogeneity can be defined in many different ways. In this section, we are interested in a form of heterogeneity linked to the problem of anchoring, that is to say involving the behavior of survey respondents induced by the survey itself. More precisely, we would like to investigate whether there is heterogeneity with respect to the degree of anchoring on the bid in the initial valuation question. Thus, a clear distinction should be made between heterogeneity that leads to different anchoring behaviors and heterogeneity that relates to WTP directly. The latter sort of heterogeneity can be treated, as in standard linear regression model, by the use of regressor variables in specific econometric models and is not related at all to the problem of anchoring. The type of heterogeneity we are interested in here, however, calls for treatment of a different nature.
The economic literature on contingent valuation in particular, and on survey data in general, often mentions a particular source of heterogeneity. This source concerns the fact that some individuals may hold a "steadier point of view" than others. Alternative versions are "more precise beliefs", "higher level of self-confidence", "well defined preferences", etc. A good example of such a notion is "one might expect the strongest anchoring effects when primitive beliefs are weak or absent, and the weakest anchoring effects when primitive beliefs are sharply defined" [START_REF] Green | Referendum contingent valuation, anchoring, and willingness to pay for public goods[END_REF]. It is quite clear that all these statements share a common feature. However, it seems that economic theory lacks a precise definition of this, even if the notions mentioned are very intuitive. Thus, many authors are confronted with a "missing notion", since economic theory does not propose a clear definition of this type of human characteristic.
Psychology proposes a notion of "conformism to the social representation" that could fill this gap. In order to test whether an individual representation is a rather conformist one, we compare it to the so-called "social representation". Individuals whose representation differs from the social representation could be considered as "non-conformists". The basic idea, supported by social psychology, is that individuals who differ from the social representation are less prone to being influenced.3 It leads us naturally to wonder whether individuals who are less prone to being influenced are also less prone to anchoring. Before testing this last hypothesis with an econometric model, we develop a method to isolate "non-conformist" individuals.
Method
Individuals have, for each particular subject, a representation (i.e. a point of view). Representations are defined in a broad sense by social psychologists,4 since an individual representation is defined as a form of knowledge that can serve as a basis for perceiving and interpreting reality, as well as for orienting one's behavior. This representation may either be composed of stereotypes or of more personal views.
The general principle that underlies the above methodology consists of detecting individuals who hold a representation of the object to be evaluated that differs from that of the majority. The methodology allows us to identify an individual who holds a representation which differs from the majority one. We restrict our attention here to a quantitative approach using an open-ended question. This is the usual way to gather quantitative information on an individual representation at low cost [START_REF] Vergès | Approche du noyau central: propriétés quantitatives et structurales[END_REF]. After cleaning the data, we use an aggregation principle in order to establish the majority point of view (which is a proxy for the so called social representation). Then it is possible to compare individual and social representations. Using a simple criterion, we sort individuals into two sub-samples. Those who do not differ from the majority point of view are said to be in conformity with the majority while the others are said to be different from the majority. The methodology consists of four steps summarized in the figure 1 and described in detail in what follows.
Step 1: A representation question At a formal level, an individual representation of a given object is an ordered list of terms that one freely associated with the object. Such a list is obtained through open-ended questions such as "what does this evoke to you?".
Step 2: Classification As mentioned above, an individual representation is captured through an ordered list of words. A general result is that the total number of different words used by the sample of individuals considered is quite high (say 100 to 500 depending on the complexity of the object). This imposes a categorization that puts together words that are close enough. This step is the only one which leaves the researcher with some degrees of freedom.
After the categorization, each individual's answer is transformed into an ordered list of categories. It is then possible to express an individual representation as an ordered list
Those individual representations, namely ordered lists of words, could at a formal level be considered as an ordinal preference over the set X of possible categories. As the question used to elicit individual representations is open-ended, individual lists can be of various lengths. So, preferences could be incomplete. Those individual representations will in turn be aggregated to form the social representation.
Step 3: Aggregating representations Using a majoritarian device,5 it is possible to proceed in an unambiguous manner in order to identify the social representation on the basis of individual ones. A social representation, whenever it exists, will then be a complete and transitive order over the set X.
An important property of the majority principle is that it may lead to non-transitive social preferences, the so-called Condorcet paradox. Indeed, X may be ranked before Y at the social level and Y ranked before another attribute Z, with X not ranked before Z.6 Further results even show that the probability of getting a transitive social preference becomes very small as the number of elements in X grows. We will then consider the use of the majority principle as a test for the existence of a social representation: if a set of data leads to a transitive social representation, the social representation is coherent.
Step 4: Segmentation Thanks to our previous results, it is possible to sort individuals according to the way they build their representations. In order to do so, we consider individuals who do not refer to the Condorcet winner (i.e. the top element of the social representation). Recall that preferences are incomplete, so that a typical individual preference does not display all of the elements of X; otherwise, all individuals would include the Condorcet winner in their preference. In practice, the Condorcet winner refers to elements obviously associated with the object, i.e. among the very first words that come to mind when talking about the object.
We are then left with two categories of individuals. This leads to a breakdown of individuals into two sub-samples: the ones who did mention the Condorcet winner (conformists) and the ones who did not (non-conformists). Finally, one has a dummy variable that sorts individuals into two categories and that identifies individual heterogeneity. It remains for us to test if such a variable can indeed play a role in anchoring bias, based on a specific econometric model. We develop such a model in the next section.
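Before turning to the econometric model, here is a minimal sketch of the aggregation and segmentation steps (Steps 3 and 4), assuming each respondent's representation is stored as an ordered list of category names; the example lists are invented, and the pairwise rule follows the one described above (a cited category beats an uncited one, and respondents citing neither category do not contribute to that pair):

```python
from itertools import combinations

def majority_margins(representations, categories):
    """Pairwise margins: margins[(x, y)] = #(x ranked before y) - #(y ranked before x)."""
    margins = {pair: 0 for pair in combinations(categories, 2)}
    for ranking in representations:
        pos = {cat: i for i, cat in enumerate(ranking)}
        for x, y in margins:
            if x in pos and y in pos:
                margins[(x, y)] += 1 if pos[x] < pos[y] else -1
            elif x in pos:          # x cited, y not: x is considered superior to y
                margins[(x, y)] += 1
            elif y in pos:          # y cited, x not: y is considered superior to x
                margins[(x, y)] -= 1
    return margins

def condorcet_winner(margins, categories):
    """Category beating every other one in the majority pairwise comparisons."""
    def beats(x, y):
        return margins[(x, y)] > 0 if (x, y) in margins else margins[(y, x)] < 0
    for c in categories:
        if all(beats(c, other) for other in categories if other != c):
            return c
    return None   # cycle (Condorcet paradox): no coherent social representation

# Invented toy data with three of the eight categories.
categories = ["Fauna and Flora", "Landscape", "Nature"]
representations = [
    ["Fauna and Flora", "Landscape"],
    ["Landscape", "Fauna and Flora", "Nature"],
    ["Fauna and Flora"],
]
winner = condorcet_winner(majority_margins(representations, categories), categories)

# Step 4: conformity dummy = 1 if the respondent cited the Condorcet winner.
conformist = [int(winner in ranking) for ranking in representations]
print(winner, conformist)
```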
Econometric Models
There exist several ways to elicit individuals' WTPs in CV surveys. The use of discrete choice format in contingent valuation surveys is strongly recommended by the work of the NOAA panel [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. It consists of asking a bid to the respondent with a question like if it costs $x to obtain . . . , would you be willing to pay that amount? Indeed, one advantage of the discrete choice format is that it mimics the decision making task that individuals face in everyday life since the respondent accepts or refuses the bid proposed.
One drawback of this discrete choice format is that it leads to a qualitative dependent variable (the respondent answers yes or no) which reveals little about individuals' WTP. In order to gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF] proposed to add a follow-up discrete choice question to improve the efficiency of discrete choice questionnaires. This mechanism is known as the double bounded model. It basically consists of asking the respondent a second bid, greater than the first bid if the respondent answered yes to the first bid and lower otherwise. The key disadvantage of the double-bounded model is that individuals may anchor their answers to the second bid on the first bid proposed. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] show that, in the presence of anchoring bias, information provided by the second answer is lost, such that the single bounded model can become more efficient than the double bounded model.
In this section, we present these different models proposed in the literature: the single bounded, double bounded models and the [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] anchoring model. Finally, we develop an econometric model of anchoring that depends upon individual heterogeneity.
Single bounded model
Let us first consider W i , the individual i's prior estimate of his willingness to pay, which is defined as follows
$W_i = x_i(\beta) + u_i \quad (1)$
where the unknown parameters $\beta$ and $\sigma^2$ are respectively a $k \times 1$ vector and a scalar, and where $x_i$ is a non-linear function depending on $k$ independent explanatory variables. The number of observations is equal to $n$ and the error terms $u_i$ are normally distributed with mean zero and variance $\sigma^2$. In the single bounded mechanism, the willingness to pay (WTP) of respondent $i$ is not observed, but his answer to the bid $b_i$ is observed. Individual $i$ answers yes to the bid offer if $W_i > b_i$ and no otherwise.
Double bounded model
The double bounded model, proposed by [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF], consists of asking a second bid (follow-up question) to the respondent. If the respondent i answers yes to the first bid, b 1i , the second bid b 2i is higher and lower otherwise. The standard procedure, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF], assumes that respondents' WTPs are independent of the bids and deals with the second response in the same manner as the first discrete choice question,
$W_{1i} = x_i(\beta) + u_i \quad \text{and} \quad W_{2i} = W_{1i} \quad (2)$
Individual $i$ answers yes to the first bid offer if $W_{1i} > b_{1i}$ and no otherwise. He answers yes to the second bid offer if $W_{2i} > b_{2i}$ and no otherwise. [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF] compare the double bounded model with the single bounded model and show that the double bounded model can yield efficiency gains.
Anchoring model
The double bounded model model assumes that the same random utility model generates both responses to the first and the second bid. In fact, introduction of follow-up questioning can generate inconsistency between answers to the second and first bids. To deal with inconsistency of responses, [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]'s approach considers a model in which the follow-up question can modify the willingness to pay. According to them, respondents combine their prior WTP with the value provided by the first bid, this anchoring effect is then defined as follows
$W_{1i} = x_i(\beta) + u_i \quad \text{and} \quad W_{2i} = (1 - \gamma) W_{1i} + \gamma\, b_{1i} \quad (3)$
where the parameter 0 ≤ γ ≤ 1. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] show that, when an anchoring bias exists, efficiency gains provided by the double-bounded model disappear.
Information yielded by the answers to the second bid is diluted by the anchoring bias phenomenon.
Anchoring model with heterogeneity
In the presence of individual heterogeneity, results based on standard regression can be seriously misleading if this heterogeneity is not taken into account. In the preceding anchoring model, [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] consider that all individuals are influenced by the first bid: the anchoring bias parameter γ is the same for all individuals. However, if only some respondents combine their prior WTP with the information provided by the first bid, the others not, it means that individual heterogeneity is present.
Let us consider that we can divide respondents into two distinct groups: one subject to anchoring and another one not subject to anchoring. Then, we can define a new model as follows
$W_{1i} = x_i(\beta) + u_i \quad \text{and} \quad W_{2i} = (1 - I_i \gamma) W_{1i} + I_i \gamma\, b_{1i} \quad (4)$
where $I_i$ is a dummy variable which is equal to 1 when individual $i$ belongs to one group and 0 if he belongs to the other group. Note that if $I_i = 1$ for all respondents, our model becomes the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], and if $I_i = 0$ for all respondents, our model becomes the standard double bounded model. The model can also be defined with heterogeneity based on individual characteristics rather than two groups, replacing $I_i$ by a variable $X_i$ taking any real value.
Estimation
The dependent variable is a dichotomous variable: the willingness-to-pay $W_i$ is unknown and we observe answers only. Thus, estimation methods appropriate to a qualitative dependent variable are required. The single bounded model can be estimated with a standard probit model. Models with follow-up questions can easily be estimated by maximum likelihood using the log-likelihood function
$\ell(y, \beta) = \sum_{i=1}^{n} \big[ r_{1i} r_{2i} \log P(\text{yes, yes}) + r_{1i}(1 - r_{2i}) \log P(\text{yes, no}) + (1 - r_{1i}) r_{2i} \log P(\text{no, yes}) + (1 - r_{1i})(1 - r_{2i}) \log P(\text{no, no}) \big] \quad (5)$
where $r_1$ (resp. $r_2$) is a dummy variable which is equal to 1 if the answer to the first bid (resp. to the second) is yes, and is equal to 0 if the answer is no. For each model, we need to derive the following probabilities:
$P(\text{no, no}) = P(W_i < b_2), \quad P(\text{no, yes}) = P(b_2 < W_i < b_1), \quad (6)$
$P(\text{yes, no}) = P(b_1 < W_i < b_2), \quad P(\text{yes, yes}) = P(W_i > b_2). \quad (7)$
For the anchoring model with heterogeneity, we calculate these probabilities:
P (no, no) = Φ[((b 2i -b 1i I i γ)/(1 -I i γ) -x i (β))/σ] (8) P (yes, no) = Φ[(b 1i -x i (β))/σ] -Φ[((b 2i -b 1i I i γ)/(1 -I i γ) -x i (β))/σ] (9) P (no, yes) = Φ[((b 2i -b 1i I i γ)/(1 -I i γ) -x i (β))/σ] -Φ[(b 1i -x i (β))/σ] (10) P (yes, yes) = 1 -Φ[((b 2i -b 1i I i γ)/(1 -I i γ) -x i (β))/σ] (11)
The anchoring model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] is a special case with $I_i = 1$ for $i = 1, \dots, n$.
The double bounded model is a special case, with γ = 0.
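As an illustration of how this likelihood can be maximized in practice, the sketch below codes the four probabilities of equations (8)-(11) and hands the resulting log-likelihood to a generic optimizer; the simulated data, variable names and starting values are invented for the example and are not those of the original study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, X, b1, b2, r1, r2, I):
    """Negative log-likelihood of the anchoring model with heterogeneity.
    params stacks the k regression coefficients, log(sigma) and logit(gamma)."""
    k = X.shape[1]
    beta = params[:k]
    sigma = np.exp(params[k])                      # enforce sigma > 0
    gamma = 1.0 / (1.0 + np.exp(-params[k + 1]))   # enforce 0 < gamma < 1

    xb = X @ beta
    thr = (b2 - b1 * I * gamma) / (1.0 - I * gamma)   # threshold on W_1i, eq. (4)
    F_b1 = norm.cdf((b1 - xb) / sigma)
    F_thr = norm.cdf((thr - xb) / sigma)

    p_yy, p_yn = 1.0 - F_thr, F_thr - F_b1            # eqs (11) and (9)
    p_ny, p_nn = F_b1 - F_thr, F_thr                  # eqs (10) and (8)
    p = (r1 * r2 * p_yy + r1 * (1 - r2) * p_yn
         + (1 - r1) * r2 * p_ny + (1 - r1) * (1 - r2) * p_nn)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Simulated example: n respondents, k covariates, no anchoring in the data.
rng = np.random.default_rng(0)
n, k = 218, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
wtp = X @ np.array([100.0, 20.0, -10.0]) + rng.normal(0.0, 40.0, n)
b1 = rng.choice([40.0, 80.0, 120.0], size=n)
r1 = (wtp > b1).astype(float)
b2 = np.where(r1 == 1, 2 * b1, b1 / 2)
r2 = (wtp > b2).astype(float)
I = rng.integers(0, 2, size=n).astype(float)      # conformist dummy

start = np.concatenate([np.zeros(k), [np.log(40.0), 0.0]])
fit = minimize(neg_log_likelihood, start, args=(X, b1, b2, r1, r2, I), method="BFGS")
beta_hat = fit.x[:k]
print("estimated mean WTP:", (X @ beta_hat).mean())   # eq. (12)
```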
Application
In order to test our model empirically, this article uses the main results of a contingent valuation survey which was carried out within a research program that the French Ministry in charge of environmental affairs started in 1995. It is based on a contingent valuation survey which involves a sample of users of the natural reserve of Camargue.7 The purpose of the contingent valuation survey was to evaluate how much individuals were willing to pay to preserve the natural reserve using an entrance fee. The survey was administered to 218 recreational visitors during the spring of 1997, using face-to-face interviews. Recreational visitors were selected randomly at seven sites all around the natural reserve. The WTP question used in the questionnaire was a dichotomous choice with follow-up. There was a high response rate (92.6%).8
7 The Camargue is a wetland in the south of France covering 75,000 hectares. The Camargue is a major wetland in France and is host to many fragile ecosystems. The exceptional biological diversity is the result of water and salt in an "amphibious" area inhabited by numerous species. The Camargue is the result of an endless struggle between the river, the sea and man. During the last century, while the construction of dikes and embankments salvaged more land for farming to meet economic needs, it cut off the Camargue region from its environment, depriving it of regular supplies of fresh water and silt previously provided by flooding. Because of this problem and to preserve the wildlife, the water resources are now managed strictly. There are pumping, irrigation and draining stations and a dense network of channels throughout the river delta. However, the costs of such installations are quite large.
Conformists and Non-Conformists
The questionnaire also contains an open-ended question related to individual representations of the Camargue. This open-ended question yields the raw material to divide the respondent population into two groups: conformists and non-conformists. This is done using the methodology presented in section 2, through the following steps:
Step 1: What are the words that come to your mind when you think about the Camargue?
In the questionnaire, respondents were asked to freely associate words with the Camargue. This question was asked before the contingent valuation scenario in order not to influence the respondents' answers. Respondents used more than 300 different words or expressions in total.
Step 2: A categorization into eight categories A basic categorization by frame of reference leads to eight different categories. For instance, the first category is called "Fauna and Flora". It contains all attributes which refer to the animals of the Camargue and the local vegetation (fauna, 62 citations, birds, 44, flora, 44, bulls, 37, horses, 53, flamingos, 36, etc.). The other categories are "Landscape", "Disorientation", "Isolation", "Preservation", "Anthropic" and "Coast". A particular exception is the category "Nature", which only contains the word nature, which can hardly fall into one of the previous categories. There is a ninth category which puts together all attributes that do not refer to any of the categories mentioned above.9
Step 3: Existence of a transitive social representation After consolidating the data in step 2, we were left with 218 incomplete preferences over the set X containing our eight categories. The majoritarian pairwise comparison results are presented in Table 1. The entry for two categories should be read in the following way: the number in line i and column j is the difference between the number of individuals who rank category i before category j and the number of individuals who rank category j before i. For instance, we see that "Fauna and Flora" is preferred by a strong majority to "Isolation" (a net difference of 85 voices for "Fauna and Flora"). After aggregation through the majoritarian principle, the social representation is transitive and thus provides a coherent social representation.
Step 4: Conformists and non-conformists
The top element, namely the Condorcet winner, concerns all aspects relating to biodiversity.10 This is not surprising, since the main interest of the Camargue (as presented in all related commercial publications) is the "Fauna and Flora" category. Talking about the Camargue without mentioning any of those aspects is thus remarkable. Individuals who do not mention them are considered as non-conformists (38 individuals), while individuals who do are considered as conformists (180 individuals). Recall that the survey was administered inside the Camargue after individuals had visited it. Thus, they are fully aware of the importance of fauna and flora in the Camargue. Not referring to those aspects is therefore not a matter of chance.
Econometric results
We consider the dummy variable conformists/non-conformists, obtained with the four steps described above, and estimate the different models described in section 3, using a linear model [START_REF] Mac Fadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. In practice, a value of particular interest is the mean WTP, evaluated by
$\hat{\mu} = n^{-1} \sum_{i=1}^{n} x_i(\hat{\beta}) \quad (12)$
and the estimated dispersion of the WTPs is equal to $\hat{d} = \hat{\sigma}$ [START_REF] Hanemann | The statistical analysis of discrete response CV data[END_REF].
Table 2 presents estimated means of WTP $\hat{\mu}$, as defined in (12), and the dispersions of the WTP distributions $\hat{\sigma}$ for the single bounded, double bounded, anchoring and anchoring with heterogeneity models. From this Table, it is clear that the standard errors, in parentheses, decrease considerably when one uses the usual double-bounded model (column 2) instead of the single bounded model (column 1). This result confirms the expected efficiency gains provided when the second bid is taken into account [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF]. However, estimates of the mean WTP in both models are very different: in the double bounded model the mean WTP would belong to the interval [77; 86] with a confidence level at 95%,11 instead of the confidence interval [104; 123] in the single bounded model. Such inconsistent results lead us to consider that an anchoring effect could be present, as suggested by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]. Then, we estimate a model with anchoring effect, as defined in (3). Results, given in column 3, show that the anchoring parameter, γ = 0.52, is significant (P-value = 0.0124). This test confirms the existence of an anchoring effect in the respondents' answers. When correcting for the anchoring effect, the mean WTP belongs to the confidence interval [118; 136], which intersects the confidence interval of the single bounded model: results are now consistent. However, standard errors increase considerably, so that, even if follow-up questioning increases the precision of parameter estimates (column 2), efficiency gains are lost once the anchoring effect is taken into account (column 3). According to this result, "the single-bounded approach may be preferred when the degree of anchoring is substantial" (Herriges and Shogren, 1996, p.124).
According to the distinction between conformists and non-conformists, we now tackle the assumption of homogeneous anchoring. We first estimate a more general model than (4), with two distinct anchoring parameters for these two groups, respectively conformists and non-conformists. This is done by replacing $W_{2i}$ in (4) with $W_{2i} = [1 - I_i \gamma_1 - (1 - I_i)\gamma_2] W_{1i} + [I_i \gamma_1 + (1 - I_i)\gamma_2] b_{1i}$. It allows us to test whether non-conformists are not subject to anchoring with the null hypothesis $\gamma_2 = 0$. A likelihood ratio test is equal to 1.832 (P-value = 0.1759), so that we cannot reject the null hypothesis and we therefore select the model (4), where anchoring only affects the conformists.
Estimates of the model, where only conformists are subject to anchoring, are given in column 4. The anchoring parameter, γ = 0.36, is clearly significant (P-value = 0.005). In other words, it means that conformists use information provided by the first bid in combining their prior WTP with this new information, but the non-conformists do not. Moreover, it is clear from Table 2 that the standard errors from column 4, in parentheses, are significantly reduced compared to those of column 1. Hence, although the single-bounded model provides better results in terms of efficiency than the model with constant anchoring, our model with anchoring and heterogeneity yields more efficient estimates. In addition, the confidence interval of the mean WTP in the model with anchoring and heterogeneity is equal to [93; 106]. This interval intersects the confidence interval in the single bounded model [104; 123] and so results are consistent. These results show that the estimate of the mean WTP is smaller and more precise in the anchoring model with heterogeneity than in the single bounded model.
Table 3 presents full estimation results. It is worth noting that the introduction of heterogeneity provides a better estimation since many variables are now statistically significant. Indeed, the heterogeneous model exhibits six significant variables. This contrasts with the single-bounded model which exhibits only one significant variable.
Our results therefore suggest that when anchoring is understood as a heterogeneous process, one obtains significant efficiency gains. Furthermore, these gains are so important that the welfare estimates can be calculated by using the anchoring model with heterogeneity rather than the single bounded model. This contradicts the result by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] who use a homogeneous anchoring model and observe substantial efficiency losses.
Conclusion
In this article, we follow a line of argument suggesting that anchoring exists but is not uniformly distributed across the population. To that extent, we present a method that is able to identify respondents who are more likely to anchor, and respondents who are not, on the basis of a single open-ended question with which we want to elicit free associations. Depending on the answers, we discriminate between two groups of individuals, namely the conformists and the non-conformists respectively. While the first group responds in more standard terms, the latter give more individualistic answers. We therefore show that it is possible to control for anchoring bias. The interesting aspect for CV practitioners is that we still experience efficiency gains over single bounded dichotomous choice by exploiting the heterogeneity in anchoring effects. This result stands in contrast to [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] who propose a model with homogeneous anchoring throughout the population and find important losses of efficiency with respect to the single-bounded model.
Finally, how can we explain that non-conformists are less prone to anchoring? More investigation is required to answer this question. Our suggestion is that non-conformists already have a much more elaborate view on the subject, which does not conform to the "stereotypical" representation of the Camargue. They are not citing the most "obvious" reasons why they are visiting the Camargue (fauna, birds, horses, flamingos, etc.), but have a more "constructed" discourse, which reflects their own personal opinion on the Camargue. In that sense, we identify people with more "experience" of their subject, which may give rise to stronger opinions and preferences. Arguably, people with enhanced preferences are more likely to behave according to standard economic rationality. This means that in our setting, non-conformists attach much more importance to their own prior value of the object and are not influenced by the bidding values presented to them in the CV questionnaire. The general line of thought parallels experimental findings, which show that experienced subjects are more likely to conform to standard economic rationality. While one can rely on repetition in an experimental setting [START_REF] Grether | Bayes rule as a descriptive model: The representativeness heuristic[END_REF], or clearly identified experienced subjects [START_REF] List | Neoclassical theory versus prospect theory: Evidence from the marketplace[END_REF], to come up with this conclusion, we associate "repetition" and "experience" with non-conformist representations of the subject under consideration.
Figure 1: Methodology. Step 1: a representation question ("What are the words which come to your mind when…?"), yielding individual lists of words and expressions. Step 2: classification, coding words and expressions by "frame of reference", yielding ordered (incomplete) lists of categories. Step 3: aggregation, with majority voting as the aggregation principle, yielding a test for transitivity of the social representation and the identification of the Condorcet winner. Step 4: segmentation, distinguishing sub-populations (conformists and non-conformists) in the sample. Final result: conformity as a dummy variable.
Table 1: Majoritarian pairwise comparison
Attributes F-F Land. Isol. Preserv. Nat. Anth. Disor. Coast
Fauna-Flora 0 40 85 73 107 147 146 144
Landscape - 0 48 53 86 117 123 126
Isolation - - 0 6 47 56 78 73
Preservation - - - 0 25 51 62 65
Nature - - - - 0 14 11 28
Anthropic - - - - - 0 9 17
Disorientation - - - - - - 0 12
Coast - - - - - - - 0
Table 2: Parameter estimates in French Francs (standard errors in parentheses)
Variables    Single-bounded model    Anchoring model    Anchoring model with heterogeneity
Constant 35.43 (57.27) 83.57 (68.43) 61.16 (44.18)
Distance home-natural site 9.30 (5.30) 7.07 (4.45) ⋆ 4.67 (2.17)
Using a car to arrive -61.71 (41.08) -79.47 (49.04) ⋆ -58.22 (26.81)
Employee ⋆ 95.86 (46.86) 84.27 (49.09) ⋆ 65.36 (27.77)
Middle class 109.96 (63.60) 99.89 (56.95) ⋆ 74.66 (28.96)
Inactive 52.58 (38.44) 57.12 (40.87) 48.80 (27.99)
Working class 97.28 (68.29) 81.27 (81.66) 62.00 (53.27)
White collar 80.33 (42.16) 78.88 (44.24) ⋆ 59.66 (24.65)
Visiting with family 4.71 (29.61) 12.79 (31.36) 13.01 (22.71)
Visiting Alone 61.11 (101.67) 122.37 (95.03) 89.18 (52.97)
Visiting with a group 44.79 (47.90) 3.70 (46.24) 4.22 (32.65)
First visit 51.42 (35.29) 18.56 (23.50) 15.59 (16.31)
New facilities proposed 56.93 (32.12) 57.29 (33.06) ⋆ 41.94 (15.59)
Other financing proposed -32.03 (27.60) -28.19 (21.84) -19.01 (12.87)
South-West -24.18 (33.57) -42.04 (40.61) -28.48 (24.24)
South-East 42.04 (58.26) 52.72 (52.06) 40.73 (32.61)
Questionnaire type -28.19 (23.34) -13.15 (17.82) -10.50 (11.97)
Investigator 1 23.44 (56.29) 6.12 (47.50) 8.26 (32.07)
Investigator 2 -17.12 (57.52) -39.70 (54.49) -29.92 (35.09)
Table 3: Parameter estimates, standard errors in parentheses (⋆: significant at 95%)
Moscovici (1998a, 1998b).
[START_REF] Moscovici | La psychanalyse, son image et son public[END_REF], Moscovici (1998a), [START_REF] Farr | From collective to social representations: Aller et Retour[END_REF], [START_REF] Viaud | A positional and representational analysis of consumption. households when facing debt and credit[END_REF].
The majority principle will then consist of a pairwise comparison of each of the attributes. For each pair (X, Y ), the number of individuals who rank X before Y is compared to the number of individuals who rank Y before X. The individuals who do not cite either X or Y since incomplete individual representation may exist do not contribute to the choice between X and Y . Adding to this, when an individual cites X and not Y , X is considered as superior to Y .
See Laslier (1997) for details.
See Claeys-Mekdade, Geniaux, and Luchini (1999) for a complete description of the contingent valuation survey.
After categorization and deletion of doubles, the average number of attributes evoked by the respondents falls from 5.5 to 4.0.
Full description of the data and more details are available in[START_REF] Hollard | Théorie du choix social et représentations : analyse d'une enquête sur le tourisme vert en camargue[END_REF].
this confidence interval is defined as [81.79 ± 1.96 × 2.41].
Acknowledgments
This research was supported by the French Ministry in charge of environmental affairs. The authors wish to thank Louis-André Gérard-Varet for advice throughout this work. Thanks to Emmanuel Bec, Colin Camerer, Russell Davidson, Alan Kirman, André Lapied, Miriam Teschl and Jean-François Wen for their helpful and constructive comments. We also gratefully acknowledge the participants of the workshop Recent issues on contingent valuation surveys held in Marseilles (June 2003), especially Jason Shogren for his helpful comments and remarks. Errors are the authors' own responsibility.
01760670 | en | [ "sdv" ] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01760670/file/Clin%20Physiol%20Funct%20Imaging.pdf | Alain Boussuges
Pascal Rossi
Laurent Poirette
Heart rate recovery improves after exercise in water when compared with on land
Keywords: cardiac function, cycling, oxygen uptake, recovery, thermoneutrality
Abstract
BACKGROUND:
Water immersion has demonstrated its effectiveness in the recovery process after exercise. This study presents for the first time the impact of water immersion on heart rate recovery after low-intensity cycle exercise.
METHODS:
Sixteen male volunteers were involved in the study. The experiment consisted of two cycling exercises: 1 h in ambient air and 1 h in water (temperature: 32 ± 0.2°C). The exercise intensity was individually prescribed to elicit around 35%-40% of VO2 peak for both conditions.
RESULTS:
Although the two exercises were performed at the same energy expenditure and heart rate, the indexes used to assess the fast and slow decay of heart rate recovery were significantly shortened after exercise in water.
CONCLUSION:
The results of the present study suggest that cycling in thermoneutral water decreases the cardiac work after exercise when compared with cycling on land. |
01760682 | en | [ "shs.geo", "shs.stat" ] | 2024/03/05 22:32:13 | 1996 | https://hal.science/hal-01760682/file/2302-52.pdf | B Mikula
Hélène Mathian
Denise Pumain
Lena Sanders
Dynamic modelling and geographical information systems: for an integration
01760693 | en | [
"spi.nano"
] | 2024/03/05 22:32:13 | 2013 | https://hal.science/tel-01760693/file/thesis_della_marca.pdf | Antonello Scanni
Nando Basile
Olivier Pizzuto
Julien Amouroux
Guillaume Just
Lorin Martin
Olivier Paulet
Lionel Bertorello
Yohan Joly
Luc Baron
Marco Mantelli
Patrick Poire
Marion Carmona
Jean-Sebastian Culoma
Ellen Blanchet helped me with the thesis preparation in satisfactory English :-)
General introduction
Walking down the street, inside an airport or a university, it is impossible not to notice some people speaking or sending messages with their smartphones, others painting a picture on their tablets, and all this happening while we are transferring the data of our research from a smart card to a laptop. The wish to communicate and to keep all the information in our pocket has led to the development of embedded and portable device technology. Suddenly, with the coming of social networks, we need to exchange comments, articles, pictures, movies and all other types of data with the rest of the world, regardless of our position. With a single touch we can access information that always needs to be stored in larger quantities; not one single bit of what belongs to us must be lost, and the devices must be extremely reliable and efficient. In this scenario the microelectronics industry is continuously evolving and never ceases to astonish. As a consequence, over the last decade, the market of semiconductor integrated circuits (IC) for embedded applications has exploded too. The demand of customers drives the market of low energy consumption portable devices.
Particular attention is paid to Flash memories, which currently represent the most important medium to store each type of data. Depending on the application characteristics, different architectures and devices have been developed over the last few years in order to satisfy all the needs of customers. Size scaling, faster access time and lower energy consumption have been the three pillars of scientific research in micro and nano electronic devices over the last few years.
Starting from these philosophical considerations we performed an experimental study on the silicon nanocrystal memory, which represents one of the most attractive solutions to replace the standard Flash floating gate device. The aim of this thesis is to understand the physical mechanisms that govern the silicon nanocrystal cell behavior, to optimize the device architecture and to compare the results with the standard Flash in order to verify the performance improvement.
In the first chapter, we will present the economic context, the evolution and the operation of EEPROM-Flash memories. Then, a detailed description of the technology, the functioning and the scaling limits will be provided. Finally we will present the possible solutions to overcome these problems and the thesis framework.
The second chapter will present the experimental setup and the methods of characterization used to measure the performances of silicon nanocrystal memory cell. Moreover the impact of relevant technological parameters such as: the nature of nanocrystals, silicon nitride presence, channel doping dose and tunnel oxide thickness, will be analyzed. A memory cell stack optimization is also proposed to match the Flash floating gate memory performance.
In the third chapter the impact of main technological parameters on silicon memory cell reliability (endurance and data retention) is studied. The performance of silicon nanocrystal memories for applications functioning within a wide range of temperatures [-40°C; 150°C] is also evaluated reaching for the first time a 1Mcycles endurance with a 4V programming window. Finally the proposed optimized cell is compared to the standard Flash floating gate.
Chapter four describes a new dynamic technique of measurement for the drain current consumption during the hot carrier injection. This enables the cell energy consumption to be evaluated when a programming operation occurs. This method is applied for the first time to the floating gate and silicon nanocrystal memory devices. A study on the programming scheme and the impact of technological parameters is presented in this chapter. In addition the silicon nanocrystal and floating gate cells are compared. Finally we demonstrate that it is possible to reach a sub-nanojoule energy consumption while preserving a 4V programming window.
Finally, in chapter five, the conclusions of this work will be summarized in order to highlight the main experimental results. Moreover, the basis for future work will be presented.
Introduction
The aim of this first chapter is to present the economic context, the role and the evolution of non-volatile memories. In this context we will present the Flash floating gate device and the physical mechanisms used to transfer electric charge from and into the floating gate. Then the limits of this device and the existing solutions to overcome them will be introduced. In particular, we will focus on the silicon nanocrystal memory that represents the object of this thesis.
The industry of semiconductor memories
The market of non-volatile memories
Over the last decade, the market of non volatile memories has been boosted, driven by the increasing number of portable devices (figure 1.1). All the applications require higher and higher performance such as high density, low power consumption, short access time, low cost, and so on [Changhyun '06]. This is why the business of Flash memories has gained market segments at the expense of other types of memory (figure 1.2). Although the market is growing continuously, the price of memory devices is decreasing (figure 1.3).
As the memory market enters the Gigabit and GHz range with consumers demanding ever better performance and more diversified applications, new types of devices are being developed in order to keep up with the scaling requirements for cost reduction. In this scenario, memories play an important role. The "ideal" memory should retain the stored information even when it is not powered (non-volatile), have a high integration density, withstand an infinite number of write/re-write cycles (infinite endurance), offer ultra-fast program/erase/read operations, and consume no energy. Because the "ideal" device does not exist, different types of memories have been studied in order to develop one or more of these properties according to their final application [Masoero '12a] (figure 1.4 [Zajac '10]; "bit count" is the amount of data that can be stored in a given block).
Memory classification
There are various possibilities to classify semiconductor memories; one is to consider their electrical characteristics (figure 1.5). Volatile Memories: these are fast memories that are used for temporary data storage, since they lose the information when the power is turned off. We can divide them into two types:
Static Random Access Memory (SRAM). The information is maintained as long as they are powered. They are made up of flip-flop circuitry (six transistors in a particular configuration). Because of its large number of components SRAM is large in size and cannot compete with the density typical of other kinds of memories.
Dynamic Random Access Memory (DRAM). These memories lose the information in a short time. They are made up of a transistor and a capacity where the charge is stored. They are widely used in processors for the temporary storage of information.
As the capacitor loses the charge, a refresh or recharge operation is needed to maintain the right state.
Non-Volatile Memories: they retain the information even when the power is down. They have been conceived in order to store the information for a long time without any power consumption. This thesis concerns the study of charge-storage non-volatile memories, which are a subgroup of the semiconductor memories. However, it is important to remember that there are other devices where the information can be stored. A very common storage device is the magnetic disk; its main drawbacks are the long access time and the sensitivity to magnetic fields. Another example of non-volatile memory is the CD technology developed in the late 1970s, which uses an optical medium that can be read quickly but requires pre-recorded content. Here we will only describe the memories based on semiconductor technology:
Read Only Memory (ROM). This is the first non-volatile semiconductor memory. It consists of a simple metal/oxide/semiconductor (MOS) transistor. Thus its cell size is potentially the smallest of any type of memory device. The memory is programmed by channel implant during the fabrication process and can never be modified. It is mainly used to distribute programs containing microcode that do not need frequent updates (firmware).
Programmable Read Only Memory (PROM). It is similar to the ROM memory mentioned above, but the programming phase can be done by the user. It was invented in 1956 and can constitute a cheaper alternative to the ROM memory because it does not need a new mask for new programming.
Erasable Programmable Read Only Memory (EPROM). This memory could be erased and programmed by the user, but the erase has to be done by extracting the circuit and putting it under ultraviolet (UV) radiation. The particularity of this device is the presence of a "floating gate" between the control (top) and tunnel (bottom) oxides. In 1967 D. Kahng and S. M. Sze proposed a MOS-based non-volatile memory based on a floating gate in a metal-insulator-metal-insulator-semiconductor structure [Kahng '67]. At the time, however, it was almost impossible to deposit a thin oxide layer (<5nm) without introducing fatal defects. As a consequence a fairly thick oxide layer was adopted, and this type of device was first developed at Intel.
Electrically Erasable Programmable Read Only Memory (EEPROM). In this memory both the write and erase operations can be electrically accomplished, without removing the chip from the motherboard. The EEPROM cell features a select transistor in series with each floating gate cell. The select transistor increases the size of the memories and the complexity of the array organization, but the memory array can be erased bit by bit.
Flash memory is a synthesis between the density of EPROM and the enhanced functionality of EEPROM. It looks like an EEPROM memory but without the select transistor. Historically, the name comes from its fast erasing mechanism. Because of these properties and the new applications (figure 1.6) the flash memory market is growing at a higher average annual rate than DRAM and SRAM, which makes it today the most produced memory (figure 1.2). Depending on their applications, flash memories can be used in two different architectures, which we introduce here and describe in the next section.
Figure 1. 6. Market trend of NAND Flash memories in portable applications [Bez '11].
Flash memory architectures
Flash memories are organized in arrays of rows (word lines or WL) and columns (bit lines or BL). The type of connection determines the array architecture (figure 1.7).
NOR:
The NOR architecture was introduced for the first time by Intel in 1988. The cells are connected in parallel; in particular, the gates are connected together through the wordline, while the drain is shared along the bitline. The fact that the drain of each cell can be selectively addressed enables random access to any cell in the array. Programming is generally done by channel hot electron (CHE) and erasing by Fowler-Nordheim (FN). NOR architectures provide fast reading and relatively slow programming mechanisms. The presence of a drain contact for each cell limits the scaling to 6F², where F is the smallest lithographic feature. Fast read, good reliability and a relatively fast write mechanism make the NOR architecture the most suitable technology for embedded applications requiring the storage of code and parameters and, more generally, for execution-in-place. The memory cells studied in this thesis will be integrated in a NOR architecture for embedded applications.
NAND: Toshiba presented the NAND architecture development in 1987 in order to realize ultra high density EPROM and Flash EEPROM [Masuoka '87]. This architecture was introduced in 1989 with all the cells connected in series, where the gates were connected by a wordline while the drain and the source terminals were not contacted. The absence of contacts means that a cell cannot be selectively addressed and the programming can be done only by Fowler-Nordheim. On the other hand, it is possible to reach an optimal cell size of 4F², thus a 30% higher density than in NOR cells. In the NAND architecture programming is relatively fast but the reading process is quite slow, as the reading of one cell is done by forcing the other cells in the same bit line to the ON state. The high density and the slow reading but fast writing speeds make the NAND architecture suitable for USB keys, digital photo storage, MP3 audio, GPS and many other multimedia applications.
Floating gate cell
The floating gate cell is the basis of the charge trap memory. The understanding of the basic concepts and functionality of this device is fundamental for this thesis. In this part we will describe the flash memory operations. The operation principle is the following (figure 1.8a): when the cell is erased there is no charge in the floating gate and the threshold voltage (Vt) is low (Vte). On the contrary, when the memory is programmed (or written) the injected charge is stored in the floating gate layer and the threshold voltage is high (Vtp). To know the state of the memory (i.e. the amount of trapped charge) it is just necessary to bias the gate with a moderate read voltage (Vg) that is between Vte and Vtp
and then determine whether the current flows through the channel (ON state) or not (OFF state). (Figure 1.8: a) cell behavior for Q=0 and Q≠0; b) schematic cross section of a floating gate transistor; the capacitive model between the floating gate and the other electrodes is also shown [Cappelletti '99].)
The schematic cross section of a generic FG device is shown in figure 1.8b; the upper gate is the control gate (CG) and the lower gate, completely isolated within the gate dielectric, is the floating gate (FG). The simple model shown in figure 1.8b helps to understand the electrical behavior of a FG device. CFC, CS, CB, and CD are the capacitances between the FG and the control gate, source, bulk, and drain regions, respectively. The potentials are described as follows: VFG is the potential on the FG, VCG is the potential on the control gate, and VS, VD and VB are the potentials on source, drain, and bulk, respectively [Pavan '97].
Basic structure: capacitive model
The basic concepts and the functionality of a FG device are easily understood if it is possible to determine the FG potential. Consider the case when no charge is stored in the FG, i.e., Q=0.
0 = CFC(VFG − VCG) + CS(VFG − VS) + CB(VFG − VB) + CD(VFG − VD)    (1)
Where VFG is the potential on the FG, VCG is the potential on the control gate, and VS, VD and VB are potentials on source, drain, and bulk, respectively. We name:
CT = CFC + CS + CD + CB    (2)
The total capacitance of the FG, and
αJ = CJ / CT    (3)
The coupling factor relative to the electrode J, where J can be one of G, D, S and B, the potential on the FG due to capacitive coupling is given by
VFG = αG·VGS + αD·VDS + αS·VS + αB·VB    (4)
It should be pointed out that (4) shows that the FG potential does not depend only on the control gate voltage but also on the source, drain, and bulk potentials [Pavan '97]. When the device is biased into conduction and the source is grounded, VFG can be written approximately as [Wu '92]:
VFG = VtFG + αG(VG − Vt) + αD(VD − VDt)    (6)
Vt = VtFG/αG − (αD/αG)·VDt − Q/CFC    (7)
When ( 7) is substituted into (6), the following well-known expression for VFG is obtained:
VFG = αG·VG + αD·VD + Q/CT    (8)
In particular the Vt shift (ΔVt) due to the programming operation is derived approximately as:
ΔVt = Vt − Vt0 = −Q/(αG·CT) = −Q/CFC    (9)
Where Vt0 is the threshold voltage when Q=0. Equations (8) and (9) reveal the importance of the gate coupling factor (αG): (8) shows that a high αG induces a floating gate potential close to the applied control gate bias; consequently, the gate coupling ratio needs to be high to provide a good programming and erasing efficiency. On the other hand, (9) indicates that a high αG reduces the impact of the stored charge on the programming window (ΔVt). The international roadmap for semiconductors [ITRS '12] indicates that the best trade-off is achieved with an αG between 0.6 and 0.7.
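To make the capacitive model concrete, the short sketch below evaluates equations (2), (3), (8) and (9) for an arbitrary set of capacitance values; the numbers (capacitances in aF, a stored charge of about 1200 electrons) are purely illustrative assumptions and are not taken from a real device.

def coupling_factors(CFC, CS, CD, CB):
    """Coupling factors of eq. (3) from the FG capacitances of eq. (2)."""
    CT = CFC + CS + CD + CB                      # eq. (2)
    return {"G": CFC / CT, "S": CS / CT, "D": CD / CT, "B": CB / CT}, CT

def fg_potential(alpha, CT, VG, VD, Q=0.0):
    """Floating gate potential of eq. (8), with source and bulk grounded."""
    return alpha["G"] * VG + alpha["D"] * VD + Q / CT

def vt_shift(Q, CFC):
    """Threshold voltage shift of eq. (9): dVt = -Q/CFC."""
    return -Q / CFC

# Illustrative capacitances in aF, chosen so that alpha_G = 0.65 (0.6-0.7 range)
alpha, CT = coupling_factors(CFC=65.0, CS=10.0, CD=15.0, CB=10.0)
Q = -1200 * 1.602e-19 / 1e-18          # ~1200 stored electrons, converted to aC
print("alpha_G = %.2f" % alpha["G"])
print("VFG at VG=9V, VD=4.2V: %.2f V" % fg_potential(alpha, CT, 9.0, 4.2, Q))
print("Vt shift: %.2f V" % vt_shift(Q, 65.0))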
Programming mechanisms
We describe in this section the two main methods to program a Flash memory cell: Fowler-Nordheim (FN) [Fowler '28] and the channel hot electron (CHE) [Takeda '83].
Fowler-Nordheim programming
The Fowler-Nordheim programming operation is performed by applying a positive high voltage on the control gate terminal (about 20V) and keeping source, drain and bulk grounded (figure 1.9a). The high electric field generated through the tunnel oxide creates a gate current due to the FN tunneling of charge from the channel to the floating gate [Chang '83]. As a consequence of the charge accumulation, the floating gate potential decreases and hence the electric field through the tunnel oxide decreases. The charge injection continues until the electric field in the tunnel oxide is cancelled, as an increasing part of the applied voltage drops across the interpoly dielectric layer (ONO). This operation is relatively slow (order of milliseconds), but the energy consumption can be considered negligible because no current flows in the channel.
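As an illustration of the field dependence of this mechanism, the sketch below evaluates the classical Fowler-Nordheim current density law J = A·E²·exp(−B/E), with A and B computed from an assumed Si/SiO2 barrier height and oxide effective mass; these parameter values and the chosen fields are textbook-style assumptions, not measurements from this work.

import math

q, h, m0 = 1.602e-19, 6.626e-34, 9.109e-31   # C, J.s, kg

def fn_current_density(E_ox, phi_b_eV=3.2, m_rel=0.5):
    """Fowler-Nordheim current density J = A*E^2*exp(-B/E), in A/m^2.

    E_ox: oxide electric field in V/m; phi_b_eV: assumed Si/SiO2 barrier height;
    m_rel: assumed electron effective mass in the oxide, relative to m0.
    """
    phi = phi_b_eV * q
    m_ox = m_rel * m0
    A = q**3 * m0 / (8.0 * math.pi * h * phi * m_ox)                       # A/V^2
    B = 8.0 * math.pi * math.sqrt(2.0 * m_ox) * phi**1.5 / (3.0 * h * q)   # V/m
    return A * E_ox**2 * math.exp(-B / E_ox)

# Sweep typical oxide fields reached during FN programming
for field_MV_cm in (8.0, 10.0, 12.0):
    E = field_MV_cm * 1e8                       # 1 MV/cm = 1e8 V/m
    J = fn_current_density(E) * 1e-4            # A/m^2 -> A/cm^2
    print("E = %4.1f MV/cm -> J = %.1e A/cm^2" % (field_MV_cm, J))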
Channel Hot Electron programming
This operation is done keeping bulk and source grounded and applying a positive high voltage on gate (order of 10V) and drain (order of 5V) terminals (figure 1.10). The electrons are first strongly accelerated in the pinchoff region by the high lateral electric field induced by the drain/source bias. Then the electrons that have reached a sufficiently high kinetic energy are injected into the floating gate thanks to the vertical electric field induced by the positive voltage applied on the gate electrode [Ning '78] [Takeda '85] [Chenming '85].
Programming by channel hot electron is faster than FN (a few microseconds). However, the CHE injection efficiency is poor (only a few electrons are injected out of the total amount of electrons that flow from source to drain [Simon '84]), and consequently a high power consumption is reached. We recall that this programming mechanism is the main one used in this work to characterize the memory cells.
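As a rough order-of-magnitude illustration of this consumption, the sketch below estimates the energy drawn during one CHE programming pulse as the product of the drain bias, the channel current and the pulse duration; the current value used is an assumption chosen only for illustration.

def che_programming_energy(v_d, i_d, t_prog, v_g=0.0, i_g=0.0):
    """Energy drawn during one CHE programming pulse (drain plus gate contributions)."""
    return (v_d * i_d + v_g * i_g) * t_prog

# Illustrative numbers: ~100 uA of channel current at Vd = 4.2 V for a 5 us pulse;
# the gate injection current is orders of magnitude smaller and is neglected here
print("E = %.1e J" % che_programming_energy(4.2, 100e-6, 5e-6))   # ~2e-9 J (a few nJ)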
Erase mechanisms
There are mainly four ways to erase the Flash floating gate cell; the schematic representations are shown in figure 1.11.
Fowler-Nordheim erase
As for the programming operation, the source, drain and bulk are generally kept grounded while a strong negative voltage (order of -15V) is applied to the gate terminal. In this case, electrons are forced to flow from the floating gate to the semiconductor bulk. This method is slow, but the erasing is uniform on the channel surface (figure 1.11a). This is the preferred mechanism to erase the memory cells in NOR architecture. In the next chapter we will discuss about its effect on studied samples.
Hot Hole Injection (HHI) erase
This mechanism consists in accelerating the holes produced by reverse biasing of drain/bulk junction and by injecting them into the floating gate thanks to the vertical electric field [Takeda '83]. Figure 1.11b shows that this is done by keeping the bulk and source grounded and biasing positively the drain (order of 5V) and negatively the gate (about -10V). HHI erasing method is fast, localized near the drain and could induce the SILC (Stress Induced Leakage Current) phenomenon more easily than the methods listed above.
Source erasing
This forces electrons to flow from the floating gate into the source junction by FN tunneling.
This erasing is done by applying a positive voltage of about 15V on the source and keeping bulk and gate grounded (figure 1.11c). In order to prevent current through the channel, the drain is kept floating. There are three main drawbacks to this method: the erasing is localized near the source, it needs a strong source/gate overlap, and it requires the application of a high voltage on the source terminal.
Mix source-gate erase
This is a mix between the source and the FN erasing. Electrons are erased both through the source and the channel. The principle is to share the high voltage needed in the source erasing between the gate and the source electrodes. As a result a negative bias of about -10V is applied on the gate and a positive bias of about 5V on the source. Again, the drain is kept floating in order to prevent source to drain current (figure 1.11d).
Evolution and limits of Flash memories
We explained at the beginning of the chapter that the new applications have driven the semiconductor market and the research development. Since the invention of the flash memory cell, the progress on device architectures and materials has been huge. The "ideal" memory should have:
• high density solution
• low power consumption
• compatibility with logic circuits and integration
This is the final objective of semiconductor research. As the "ideal" device does not exist, different types of memories have been invented in order to push some specific properties. For these reasons, as shown in figure 1.12, memory technology development did not pursue a single technology solution, but rather it has been oriented in many different directions over time [Hidaka '11] [Baker '12].
Figure 1. 12. Evolution of Flash and embedded-Flash memory technology (left) [Hidaka '11]. Mapping of common eNVM architectures to the NVM byte count and critical characteristics (right) [Baker '12].
Here, a category for non-volatile storage with absolute minimum cost/bit is also shown. In this section we will first introduce the device scaling and the related challenges, and then we will present the flash cell developments. It is worth noting that the solutions found for the flash memory cell can be used in embedded memories. In fact, even if embedded memories have less stringent constraints on the cell dimensions, research has always provided smaller non-volatile memories for embedded applications, which have to face the same flash scaling limits.
Device scaling
During the last 30 years the Flash cell size has shrunk from 1.5µm to 25nm, doubling the memory capacity every year. The ITRS scaling roadmap [ITRS '12] uses the following color code: white cells, manufacturable solutions exist and are being optimized; yellow cells, manufacturable solutions are known; red cells, manufacturable solutions are unknown.
We can see that even if the trend is maintained and the cell is scaled down in the years to come, some technological solutions are still not known. Moreover, scaling beyond the 28nm node will be very difficult if no revolutionary technology is adopted. The main issues that limit device miniaturization are:
Stress Induced Leakage Current (SILC). During each erase/write cycle the stress degrades the tunnel oxide and the cell slowly loses its capacity to store electric charges (figure 1.13).
Figure 1. 13. Experimental cumulative distribution functions of bits vs. threshold voltage, measured at different times after different P/E cycling conditions [Hoefler '02].
This phenomenon increases as the tunnel oxide is thinned. This is due to the defects induced in the oxide by the electrons passing through it during program/erase operations [Pavan '97] [Ling-Chang '06] [Hoefler '02] [Belgal '02] [Kato '94] [Chimenton '01]. Consequently, the retention depends on the number of cycles and on the tunnel oxide thickness, but the physical scaling of the latter is limited to 6-7 nm.
Short Channel Effects (SCE). SCE appear when the gate length dimensions are so short that the gate control of the channel is lowered due to the influence of the source and drain potentials (figure 1.14). This parasitic effect produces the Drain Induced Barrier Lowering (DIBL) phenomenon [Yau '75], which results in threshold voltage decrease and the degradation of the subthreshold slope. Because of DIBL, the "OFF" current (IOFF) increases and the power consumption reaches values incompatible with the advanced technology node requirements [Brews '80] [Fichtner '80] [Yau '74] [Fukuma '77]. Moreover, elevated IOFF currents result in some disturb of the memory cell, especially in the erased state. The insert is the calculated boron profile below the silicon surface in the channel [Fichtner '80].
Disturb. We consider here the main disturb effects due to the programming and reading operations done on unselected cell of a NOR memory array. It is to be remembered that in this thesis work, the electrical characterizations are based on the principle that the memory cells will be integrated in a NOR architecture for embedded applications.
Programming disturb. This impacts the unselected cells in the same bitline and wordline as the selected cell. In the first case a drain stress is produced, while in the second case the unselected cells undergo a gate stress. Moreover, cell scaling reduces the distances between neighboring cells and contacts. This means that parasitic capacitances have to be taken into account for the coupling factor calculation; we will explain our model in chapter 4, section 2.2.
Figure 1. 16. TEM pictures of STMicroelectronics 90nm NOR Flash (left) and Samsung sub-50nm NAND Flash right) [Kim '05].
Parasitic Charge Trapping. In scaled memories the reduction in the number of stored electrons leads to a higher influence of the parasitic charge trapping on the threshold voltage shift [Prall '10]. Figure 1.17 shows the various locations within a NAND cell, programmed and erased by FN, where the parasitic charge can be trapped. The results of a TCAD simulation show that with memory scaling, the number of electrons located outside the floating gate starts to dominate the cell threshold voltage shift. We will see in the following chapters how this parameter impacts the memory cell behavior. (Figure 1.17: the triangle shows the ±3σ percentage divided by the mean [Prall '10].)
Alternative solutions
In this section we will describe some of the envisaged modifications to the classical flash memory cell in order to overcome the scaling limits presented in the previous section.
Tunnel dielectric
In a flash memory the tunnel dielectric has the double role of tunneling medium during programming operations and electrostatic barrier to preserve the stored charge.
Interpoly material
Maintaining a constant coupling ratio at a value of 0.6-0.7 is a great scaling challenge. The use of high-k dielectrics in the interpoly layer is envisaged to reduce the total EOT while maintaining or even increasing the gate coupling ratio. The choice of the high-k material must take into account that, for most of them, the high dielectric constant comes at the expense of a narrower band gap (figure 1.20). This narrowed band gap can cause leakage current during retention operation [Casperson '02]. In particular, an alumina dielectric is employed in the TANOS (TaN/Al2O3/Si3N4/SiO2/Si) memory, proposed for the first time by Samsung in 2005 [Yoocheol '05]. Despite the envisaged advantages, high-k materials are not as well known as silicon oxide and they need further development before they can be integrated in the memory market. One of the main problems is that they inevitably introduce defects that can induce trap-assisted conduction and degrade the memory operations.
Silicon nanocrystal memory: state of the art
The market of nonvolatile Flash memories, for portable systems, requires lower and lower energy and higher reliability solutions. The silicon nanocrystal Flash memory cell appears as one promising candidate for embedded applications. The functioning principle of discrete charge trapping silicon nanocrystal memories (Si-nc) is similar to floating gate devices. In this thesis we consider the integration of Si-nc memories in NOR architecture for embedded applications programmed by channel hot electron and erased by Fowler-Nordheim mechanisms.
There are many advantages to using this technology:
-Robustness against SILC and RILC (Radiation Induced Leakage Current): this enables the tunnel oxide thickness to be scaled down to 5nm, while the ten year data retention constraint is guaranteed. Moreover the operation voltages can be decreased too [Compagnoni '03] [Monzio Compagnoni '04]. Further improvements can be achieved using cells with a high number of nanocrystals [De Salvo '03].
-Full compatibility with standard CMOS fabrication process encouraging industrial manufacturability, reducing the number of masks with respect to the fabrication of floating gate device [Muralidhar '03] [ Baron '04] and ease of integration [Jacob '08].
-Decrease in cell disturb, due to the discrete nature of nanocrystals and their smaller size than a floating gate, the coupling factor between the gate and drain is reduced as well as the disturbs between neighboring cells.
-Multi level applications, the threshold voltage of a silicon nanocrystal transistor depends on the position of stored charge along the channel [Crupi '03] [De Salvo '03].
Despite these peculiarities two main drawbacks characterize the Si-nc memories:
-The weak coupling factor between the control gate and nanocrystals. This implies finding a method to keep the program/erase voltages small and to take advantage of the decrease in tunnel oxide thickness [De Salvo '01].
-The spread in the surface fraction covered with Si-nc limiting this type of cell for high integration density applications [Gerardi '04].
IBM presented the first Si-nc memory at IEDM [Tiwari '95] in order to improve the DRAM (Dynamic Random Access Memory) performance using a device with characteristics similar to EEPROM. The polysilicon floating gate is replaced by silicon nanocrystals grown on tunnel oxide by Low Pressure Chemical Vapor Deposition (LPCVD) two step process. This type of fabrication enables the size and density of nanocrystals to be controlled separately [Mazen '03] [Mazen '04].
Figure 1. 23. Schematic representation of the nucleation and growth two step process [Mazen '03].
Other techniques of fabrication have also been developed: ionic implantation [Hanafi '96],
annealing of deposited SRO (Silicon Rich Oxide) layers [Rosmeulen '02] and aerosol deposition [De Blauwe '00]. Thanks to these research works, Motorola demonstrated the interest of using this device for non-volatile applications by developing a 4Mb memory array [Muralidhar '03]. In addition, STMicroelectronics, in collaboration with CEA-Leti, presented their 1Mb memory array [De Salvo '03]. The three main actors in the industry of silicon nanocrystal memories are STMicroelectronics, Atmel and Freescale. Finally, they processed the silicon nanocrystal memory cell so that it assumes a cylindrical shape, which greatly improves the coupling ratio (figure 1.26a). In addition, they used an optimized ONO control dielectric, enabling the reduction of the parasitic charge trapping during cycling (figure 1.26b); this type of cell was integrated in a 4Mb NOR array [Gerardi '08]. It clearly appears that, by increasing the Si-nc size, the programming window is increased too.
Indeed, this result well agrees with the theoretical model [De Salvo '01] which states that the programming window linearly increases with the floating gate surface portion covered by the Si-NCs. In fact it was demonstrated for the Si-nc cell that the dynamic charging/discharging Si-dot memory corresponds better to a FG memory device operation rather than to a pure capture/emission trap-like behavior [De Salvo '01]. Starting from the capacitive model of Flash floating gate (section 1.3.1), and by considering the discrete nature of nanocrystals, the coefficient αD can be neglected and the equation ( 8) can be rearranged as:
VFG = αG·VG + Q/CT    (10)
In this FG-like approach, we define a parameter that takes into account the surface portion covered by the nanocrystals (Rnc). It corresponds to a weighting factor for the trapped charge in the MOSFET threshold voltage; the Vt shift in this case takes this parameter into account and can be written as:
ΔVt = Vt − Vt0 = −Q·Rnc/CFC    (11)
This approach will be considered as fundamental in the next chapters in order to improve the Si-nc memory cell coupling factor and thus the programming window.
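A small numerical sketch of equation (11) is given below: the covered fraction Rnc is estimated from an assumed dot density and diameter, and the Vt shift is evaluated per unit channel area; all values (dot density, electrons per dot, control dielectric EOT) are illustrative assumptions, not data from a specific sample.

import math

def covered_fraction(density_cm2, diameter_nm):
    """Fraction of the channel area covered by the dots (Rnc)."""
    dot_area_cm2 = math.pi * (diameter_nm * 1e-7 / 2.0) ** 2
    return density_cm2 * dot_area_cm2

def si_nc_vt_shift(n_electrons_per_dot, density_cm2, diameter_nm, CFC_per_cm2):
    """Eq. (11): dVt = -Q*Rnc/CFC, with Q and CFC taken per unit channel area."""
    Rnc = covered_fraction(density_cm2, diameter_nm)
    Q = -n_electrons_per_dot * density_cm2 * 1.602e-19    # stored electrons, C/cm^2
    return -Q * Rnc / CFC_per_cm2, Rnc

# Illustrative case: 9 nm dots, 1e12 dots/cm^2, 2 electrons per dot,
# control dielectric of 14.5 nm EOT -> CFC ~ eps_SiO2 / EOT
eps_SiO2 = 3.9 * 8.854e-14                      # F/cm
CFC = eps_SiO2 / 14.5e-7                        # F/cm^2
dVt, Rnc = si_nc_vt_shift(2, 1e12, 9.0, CFC)
print("covered fraction Rnc = %.2f" % Rnc)
print("Vt shift = %.2f V" % dVt)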
We reported in figure 1.28 the results shown in [Jacob '08] concerning cell reliability using the HTO control oxide and keeping the silicon nanocrystal size constant. Freescale was created by Motorola in 2004, when the studies on the silicon nanocrystal memory cell had already been started [Muralidhar '03]. Freescale did a comparative study on the importance of the control dielectric, using HTO and ONO samples, because the latter, with its silicon nitride layer, represents a barrier against the parasitic oxidation of silicon nanocrystals and decreases the leakage current in the memory stack. As a drawback, parasitic charge trapping is present during the programming operations (figure 1.29: comparison between HTO and ONO [Muralidhar '04] control dielectrics).
This parasitic charge trapping also impacts the data retention (figure 1.30). It is thus important to minimize it to reach the 20 year target. They demonstrated the advantage of the discrete nature of silicon nanocrystals on data retention and read disturb; it enables the tunnel oxide thickness to be decreased and hence the program/erase voltages.
Figure 1. 30. Program state data retention and erased slate READ disturb characteristics for a nanocrystal NVM bitcell with a 5mn tunnel oxide. Exhibited charge loss in cycled case is attributed to detrapping of parasitic charge [Muralidhar '04].
Further studies have been performed concerning the impact of silicon nanocrystal size and density [Rao '05] [Gasquet '06]. Figure 1.31a shows that the covered area impacts the program/erase speed and the saturation level of the programming window [Rao '05]. (Figure 1.31b: 200ºC bake Vt shift for samples with different nanocrystal sizes [Gasquet '06].)
The hot carrier injection speed increases with the covered area, while the Fowler-Nordheim erase operation is more efficient with smaller nanocrystals. This is due to the presence of HTO and the Coulomb blockade effect. Data retention measurements have been also carried out on a 4Mb memory array. The samples had a 5nm tunnel oxide and 10nm HTO (figure 1.31b). Here the data retention loss is shown during a 200ºC bake. The erased state is very close to neutral charge so the Vt shift is small while most of the variation in program state response originates in the first 54 hours of bake and appear uncorrelated to nanocrystal size.
Moreover, Freescale decided to integrate silicon nanocrystals in high scalable Split Gate memories (figure 1.32) [Sung-Taeg '08] [Yater '09], where it is possible to control the current consumption during the hot carrier injection for low energy embedded applications [Masoero '11] [Masoero '12b]. Recent results of endurance and data retention are reported in figure 1.33. The cycling experiments (figure 1.33a) show program and erase Vt distribution width that remain approximately constant throughout extended cycling and a substantial operating window is maintained even after 300Kcycles. Concerning the data retention, due to the inherent benefits of NC-based memories, no extrinsic charge loss was observed on fresh and cycled parts (figure 1.33b). The average loss for 504hrs for uncycled arrays is about 70mV and for 10K and 100K cycled arrays it is 250mV and 400mV, respectively [Yater '11] [Sung-Taeg '12]. Finally all these studies underline the importance of achieving a good coupling factor to improve the programming window and thus cell endurance, paying attention to the tunnel oxide thickness that plays an important role for the data retention and disturbs.
Flash technology for embedded applications
The 1T silicon nanocrystal technology is not the only solution to replace the Flash floating gate. In particular, for the market of embedded applications, the Flash memory array is integrated in microcontroller products with SRAM, ROM and logic circuits, achieving a System on Chip (SoC) solution. This type of integrated circuit enables fabrication cost reduction thanks to the compatibility with the CMOS process, while improving the system performance because the code can be executed directly from the embedded Flash. The most important applications for embedded products are smart cards and automotive, where low energy consumption, fast access time and high reliability are required (figure 1.34). In this scenario each of the main industrial actors seeks the best compromise between cell area, performance and cost. In figure 1.35 we show the mainstream Flash concepts proposed by the top players among SoC manufacturers [Strenz '11].
Although a large variety of different cell concepts can be found on the market, only three main concepts in terms of bitcell structure dominate it, all of them using a NOR array configuration: 1T stacked gate concepts, split-gate concepts, as well as 2-transistor NOR concepts. Due to highly diverging product requirements there is a variety of concepts tailored to specific applications. Looking into the development of new nodes, a clear slowdown of the area shrink potential can be observed for classical bitcell concepts, while reliability requirements are tightened rather than relaxed. This increases the pressure for new, emerging cell concepts with better shrink potential. We used this brief analysis (Robert Strenz, Infineon - Workshop on Innovative Memory Technologies, Grenoble 2012) to highlight that the industry pushes its technology to overcome the problem of scaling cost.
Innovative solutions for non volatile memory
Since the ultimate scaling limitation for charge storage devices is too few electrons, devices that provide memory states without electric charges are promising to scale further. Several non-charge-storage memories have been extensively studied and some commercialized, and each has its own merits and unique challenges. Some of these are uniquely suited for special applications and may follow a scaling path independent of NOR and NAND flash. Some may eventually replace NOR or NAND flash. Logic states that do not depend on charge storage eventually also run into fundamental physical limits. For example, small storage volume may be vulnerable to random thermal noise, such as the case of the superparamagnetism limitation for MRAM. One disadvantage of this category of devices is that the storage element itself cannot also serve as the memory selection (access) device because they are mostly two-terminal devices. Even if the on/off ratio is high, two-terminal devices still lack a separate control (e.g. gate) that can turn the device off in normal state. Therefore, these devices use 1T-1C (FeRAM), 1T-1R (MRAM and PCRAM) or 1D-1R (PCRAM) structures. It is thus challenging to achieve a small (4F²) cell size without an innovative access device. In addition, because of the more complex cell structure that must include a separate access (selection) device, it is more difficult to design 3-D arrays that can be fabricated using just a few additional masks like those proposed for 3-D NAND [ITRS '12] [Jiyoung '09] [Tae-Su '09] [SungJin] (figure 1.36).
Ferroelectric Random Access Memory (FeRAM)
FeRAM devices achieve non-volatility by switching and sensing the polarization state of a ferroelectric capacitor. To read the memory state the hysteresis loop of the ferroelectric capacitor must be traced and the data must be written back after reading. Because of this "destructive read," it is a challenge to find ferroelectric and electrode materials that provide both adequate change in polarization and the necessary stability over extended operating cycles. The ferroelectric materials are foreign to the normal complement of CMOS fabrication materials and can be degraded by conventional CMOS processing conditions.
Thus, the ferroelectric materials, buffer materials and process conditions are still being refined. So far, the most advanced FeRAM [Hong '07] is substantially less dense than NOR and NAND flash. It is fabricated at least one technology generation behind NOR and NAND flash, and not capable of MLC. Thus, the hope for near term replacement of NOR or NAND flash has faded. However, FeRAM is fast, low power and low voltage, which makes it suitable for RFID, smart card, ID card and other embedded applications. In order to achieve density goals with further scaling, the basic geometry of the cell must be modified while maintaining the desired isolation. Recent progress in electrode materials show promise to thin down the ferroelectric capacitor [ITRS '12] and extend the viability of 2-D stacked capacitor through most of the near-term years. Beyond this the need for 3-D capacitors still remains a formidable challenge.
Magnetic Random Access Memory (MRAM)
MRAM devices employ a magnetic tunnel junction (MTJ) as the memory element. An MTJ cell consists of two ferromagnetic materials separated by a thin insulating layer that acts as a tunnel barrier. When the magnetic moment of one layer is switched to align with the other layer (or to oppose the direction of the other layer) the effective resistance to current flow through the MTJ changes. The magnitude of the tunneling current can be read to indicate whether a ONE or a ZERO is stored. Field switching MRAM probably is the closest to an ideal "universal memory", since it is non-volatile and fast and can be cycled indefinitely, thus may be used as NVM as well as SRAM and DRAM. However, producing magnetic field in an IC circuit is both difficult and inefficient. Nevertheless, field switching MTJ MRAM has successfully been done in products. In the near term, the challenge will be the achievement of adequate magnetic intensity fields to accomplish switching in scaled cells, where electromigration limits the current density that can be used. Therefore, it is expected that field switch MTJ MRAM is unlikely to scale beyond 65 nm node. Recent advances in "spin-torque transfer (STT)" approach, where a spin-polarized current transfers its angular momentum to the free magnetic layer and thus reverses its polarity without resorting to an external magnetic field, offer a new potential solution [Miura '07]. During the spin transfer process, substantial current passes through the MTJ tunnel layer and this stress may reduce the writing endurance.
Upon further scaling the stability of the storage element is subject to thermal noise, thus perpendicular magnetization materials are projected to be needed at 32 nm and below [ITRS '12].
Resistive Random Access Memory (RRAM)
RRAM is also a promising candidate for next-generation universal memory because of its short write time, large R-ratio, multilevel capability, and relatively low write power consumption.
However, the switching mechanism of RRAM remains unclear. RRAM based on binary metal oxides has been attracting increasing interest, owing to its easy fabrication, feasibility of 3-D (stacked) arrays, and promising performances. In particular, NiO and HfO based RRAM have shown low voltage and relatively fast programming operations [Russo '09] [Vandelli '11]. RRAM functionality is based on the capability to switch the device resistance by the application of electrical pulses or voltage sweeps. In the case of metal-oxide-based RRAM devices, the switching mechanism has been recognized to be a highly localized phenomenon, where a conductive filament is alternatively formed and destroyed (at least partially) within the dielectric layer. Several physical interpretations for the switching processes have been proposed, including trap charging in the dielectric, space-charge-limited conduction processes, ion conduction and electrodeposition, Mott transition, and Joule heating. Such a large variety of proposed physical mechanisms can be explained in part by the different dielectric and electrode materials and by the different procedures used in the experiments (unipolar or bipolar experiments). This aspect represents a limit today for the understanding of the cell behavior, and a comprehensive physical picture of the programming behavior in RRAM devices is still to be developed. This device turns out to be highly scalable, but it is limited by the size of the select transistor in the cell architecture. Another drawback is the high voltage necessary to create the conductive filament for the first time, in order to switch from the pristine state to a conductive state. This "first programming" operation has to be performed during the manufacturing process, thus increasing the fabrication complexity.
Phase Change Random Access Memory (PCRAM)
PCRAM devices use the resistivity difference between the amorphous and the crystalline states of chalcogenide glass (the most commonly used compound is Ge2Sb2Te5, or GST) to store the logic ONE and logic ZERO levels. The device consists of a top electrode, the chalcogenide phase change layer, and a bottom electrode. The leakage path is cut off by an access (selection) transistor (or diode) in series with the phase change element. The phase change write operation consists of: (1) RESET, for which the chalcogenide glass is momentarily melted by a short electric pulse and then quickly quenched into an amorphous solid with high resistivity, and (2) SET, for which a lower amplitude but longer pulse (usually >100 ns) anneals the amorphous phase into a low resistance crystalline state. The 1T-1R (or 1D-1R) cell is larger or smaller than NOR flash, depending on whether MOSFET or BJT (or diode) is used and the device may be programmed to any final state without erasing the previous state, which provides substantially faster programming throughput. The simple resistor structure and the low voltage operation also make PCRAM attractive for embedded NVM applications [ITRS '12]. The major challenges for PCRAM are the high current required to reset the phase change element and the relatively long set time. Interaction of phase change material with electrodes may pose long-term reliability issues and limit the cycling endurance. This is a major challenge for DRAM-like applications. Because PCRAM does not need to operate in page mode (no need to erase), it is a true random access, bit alterable memory like DRAM. The scalability of the PCRAM device to < 5 nm has been recently demonstrated using carbon nanotubes as electrodes [Feng '10] [Jiale '11] and the reset current followed the extrapolation line from larger devices. In at least one case, cycling endurance of 10 11 was demonstrated [Kim '10].
Conclusion
In this chapter, we have presented the framework of this thesis. In the first part the economic context, the classification and the architectures of semiconductors memory were presented.
Then, the Flash floating gate memory cell was described as well as its capacitive model that characterizes this device. Furthermore, the main program/erase mechanisms implemented in memory arrays are explained highlighting the importance of channel hot electron programming operation and the Fowler-Nordheim erasing for this thesis work. We thus presented the flash scaling limits and the proposed solutions; we explained the advantages of using a charge trapping layer instead of the continuous floating gate and a high-k control dielectric instead of the classical silicon oxide. Finally, we introduced the silicon nanocrystal memory cell that is the central point of this thesis. In particular we reported the state of the art of charge trap silicon nanocrystal memory, listing the various trials performed in the past. We introduced the impact on cell performances and reliability of some technical parameters:
silicon nanocrystal size and density, control oxide, and the cell active shape. The object of this thesis will be to find the best tradeoff between some technological parameters, in order to optimize the programming window, the reliability and the energy consumption of our silicon nanocrystal cell.
Introduction
In this section the results concerning the programming window electrical characterization are presented. The programming window of the silicon nanocrystal memory cell was measured using a defined experimental protocol developed in the STMicroelectronics-Rousset electrical characterization laboratory. With this procedure we evaluated the impact of main technological parameters on the programming window: silicon nanocrystal size and density (covered area), presence of silicon nitride capping layer, channel doping dose and tunnel oxide thickness. The results were compared to the state of the art in order to understand how to improve the cell performance using a CMOS process fully compatible with the existing STMicroelectronics method. The chapter will conclude with the benchmarking of silicon nanocrystal cell versus the standard Flash floating gate.
Experimental details
One of the main limits of silicon nanocrystal memories is the narrow programming window [Gerardi '08] [Monzio Compagnoni '03] [De Salvo '03]. In order to evaluate how to improve the memory cell performance, it is important to develop manual and automatic tests.
Experimental setup
The electrical characterization of the silicon nanocrystal cell was performed using manual and automatic probe stations. The first was used to measure the programming/erase kinetic characteristics, while the second was used to obtain statistical information concerning the dispersion on wafer. In figure 2.1 a picture of the manual test bench is shown.
Manual prober
• Equipped with a thermo chuck
Switch matrix (optional)
• To connect the instruments to the sample
Tester
• Electrical parameter analyser (HP4156)
• LCR meter (HP4284)
• Pulse Generator (HP8110)
• Computer (LABView system) to drive the prober and the tester
The manual probe station is driven by the LABView system in order to command the instruments of the bench with homemade software. In particular the test bench is equipped with an HP4156 electrical parameter analyzer, an HP4284 LCR precision meter and an HP8110 pulse generator. The switch matrix enables the instruments to be connected to the 200mm wafer (sample). Using the LABView program we are able to measure the program/erase cell characteristics under different biasing conditions. The thermo chuck enables measurements in the temperature range from -40°C up to 250°C.
Methods of characterization
In order to characterize the programming window of the silicon nanocrystal cell and compare it with the characterizations obtained on standard Flash floating gate in NOR architecture, we used an appropriate method to program the cell by channel hot electron and to erase it by Fowler-Nordheim mechanism. The measurement protocol was divided into two parts and it was kept unchanged for all samples. The first part was performed using the automatic bench;
by applying only one program/erase cycle we were able to evaluate the programming window dispersion on the whole wafer. The evaluation of programming window dispersion was performed using a fixed 5µs programming pulse, a gate voltage (Vg) of 9V, a drain voltage (Vd) of 4.2V, source and bulk voltages (Vs and Vb respectively) of 0V. Concerning the erase phase, a pulse of 90ms was applied on the gate terminal using Vg=-18V, while the drain, source and bulk terminals were grounded (Vd=Vs=Vb=0V). The second part of the experiments was performed using the manual prober station. Its purpose was to apply program/erase pulses with different durations and amplitudes to get the kinetic evolution of the threshold voltage in channel hot electron and Fowler-Nordheim regime. The two methods are described below. The staircases have been used to emulate the ramps generated in STMicroelectronics products. The programming kinetic was performed by applying 4.2V pulses on drain terminal and a staircase from 3V to 9V with a step of 0.75V, followed by an additional pulse of 1µs on gate terminal. The duration of each pulse was 0.5µs in order to obtain a 1.5V/µs ramp. For the erase kinetic, 10V pulses were applied on drain, source and bulk terminals, while a staircase from -4V to -10V was applied on gate terminal. In this way it was possible to reach the gate-bulk voltage of 20V. This represents the maximum voltage value available in STMicroelectronics products. The step amplitude was 0.25V, while the duration was 50µs in order to emulate a 5kV/s ramp. After each pulse the cell state was read by measuring the gate voltage needed to drive a fixed drain current of 1µA. The programming window (ΔVt) is calculated as the difference between the programmed threshold voltage (Vtp) and the erased threshold voltage (Vte):
ΔVt = Vtp − Vte    (1)
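For clarity, the sketch below reproduces schematically the two gate-voltage staircases described above (1.5 V/µs equivalent programming ramp and 5 kV/s equivalent erase ramp) and the programming-window computation of equation (1); it only builds the waveform description, and the driver calls of the actual LABView bench are not reproduced.

def staircase(v_start, v_stop, v_step, step_duration_s):
    """List of (level, duration) pairs emulating a voltage ramp with a staircase."""
    n_steps = int(round(abs(v_stop - v_start) / v_step)) + 1
    sign = 1.0 if v_stop >= v_start else -1.0
    return [(v_start + sign * i * v_step, step_duration_s) for i in range(n_steps)]

# CHE programming kinetics: Vd = 4.2 V, gate staircase 3 V -> 9 V,
# 0.75 V steps of 0.5 us each, i.e. a 1.5 V/us equivalent ramp
program_gate = staircase(3.0, 9.0, 0.75, 0.5e-6)
program_gate.append((9.0, 1e-6))   # the protocol adds a final 1 us pulse at the last level

# FN erase kinetics: Vd = Vs = Vb = 10 V, gate staircase -4 V -> -10 V,
# 0.25 V steps of 50 us each, i.e. a 5 kV/s equivalent ramp (20 V gate-bulk maximum)
erase_gate = staircase(-4.0, -10.0, 0.25, 50e-6)

print("program steps:", len(program_gate),
      "equivalent ramp: %.1f V/us" % (0.75 / 0.5e-6 / 1e6))
print("erase steps:", len(erase_gate),
      "equivalent ramp: %.1f kV/s" % (0.25 / 50e-6 / 1e3))

def programming_window(vtp, vte):
    """Eq. (1): programming window as programmed minus erased threshold voltage."""
    return vtp - vte

print("dVt = %.1f V" % programming_window(vtp=8.2, vte=3.5))   # target levels used later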
Impact of technological parameters
After the technical details described above, we are going to present the results of electrical characterization in terms of programming window. The aim is to understand the impact of main technological parameters on the programming window and how to merge them to obtain the best results with respect to the Flash floating gate.
Effect of silicon nanocrystal size
The impact of silicon nanocrystal size on the programming window has already been analyzed in depth [De Salvo '01] [Rao '05]. Nevertheless it is important to evaluate its effect in STMicroelectronics products. In [Jacob '08] it appears that the bigger the Si-NCs are, the larger the programming window is. This is due to the increasing surface portion covered by the nanocrystals. In this work, the studied samples have a channel width of W=90nm and a channel length of L=180nm. The cell stack is described in figure 2.4. On a p-type substrate a 5.2nm thick tunnel oxide was grown. The average silicon nanocrystal size and density have been measured in-line using a Critical Dimension Scanning Electron Microscopy (CDSEM) technique [Amouroux '12]. In this case we compare samples with two different diameters (Φ) of 6nm and 9nm. Then, to complete the stack, the ONO (Oxide-Nitride-Oxide) Inter-Poly Dielectric (IPD) layer was deposited to reach 14.5nm of Equivalent Oxide Thickness (EOT). On a silicon nanocrystal cell it is not possible to measure a capacitance to calculate the EOT of the ONO layer, because of the discrete nature of the nanocrystals. The ONO thickness is thus considered to be unchanged with respect to the standard Flash floating gate cell, because the fabrication process and the recipe are unchanged for the two devices. As for the floating gate cell, it was measured at the end of the process with capacitance-voltage characterizations and Transmission Electron Microscopy analysis. Figure 2.5a shows the average values and the dispersion of the program/erase threshold voltages obtained with the statistical measurements (30 samples). The minimum program/erase states needed to target the Flash floating gate are highlighted. According to our extrapolation, it is necessary to increase the nanocrystal diameter up to 14nm, keeping the cell stack unchanged, in order to achieve a good programming window. Using the measured size and density (figure 2.4), we plot the correlation between the programming window and the percentage of covered area in figure 2.5b; by increasing the covered area, and thus the coupling factor, the programming window increases because of the higher number of trapped charges. With this cell structure, 95% of covered area is needed to achieve the minimum programming window of 4V, which is not coherent with the discrete nature of the Si-nc cell, as detailed in section 1.4. Moreover, the programming window increases with the silicon nanocrystal size, as does the dispersion on wafer. Increasing the diameter by 3nm increases the programming window by only 0.5V, which is not sufficient for our application. After these preliminary evaluations we used the staircases, described in paragraph 2.2, to measure the program/erase kinetic characteristics. In figure 2.6a the results concerning the considered samples are compared. As expected, we notice that by increasing the nanocrystal size, the programming window is improved because the covered area increases as well as the cell coupling factor. In the literature it is shown that the FN erase speed can increase when the nanocrystal size decreases [Rao '05]. This is true for specific cell architectures and in particular for small nanocrystal diameters. However, when increasing the nanocrystal size and the covered area, the Coulomb blockade effect and quantum mechanisms can be neglected [De Salvo '01], thus the coupling factor dependence becomes predominant.
It is important to notice that the programming windows of figures 2.5 and 2.6 cannot be compared because two different program/erase schemes are used. With the ramped gate voltage the program/erase efficiency decreases with respect to the box pulses used for the statistical measurements shown in figure 2.5. To conclude, we can confirm that increasing the silicon nanocrystal size, and thus the covered area, improves the programming window; this effect is most evident for the FN erase operation.
Effect of silicon nitride capping layer
In the literature it is shown how to improve the programming window using high-k inter-poly dielectrics [Molas '07]. The final programming window is increased with the Si3N4 capping layer thanks to the increase in charge trapping probability and the improvement in coupling factor (figure 2.9).
In particular, the former improves the channel hot electron programming efficiency, while the increase in covering ratio mainly improves the erase operation. It is worth noting that the programming windows of figures 2.8 and 2.9 are not directly comparable because the program/erase mechanisms differ, owing to the different schemes (box or ramp pulses).
Finally, these results suggest that the silicon nitride capping layer is a solution to increase the programming window while keeping the nanocrystal physical parameters (size and density) constant.
In section 3 we will analyze the impact of the Si3N4 capping layer on the Si-nc cell reliability.
Effect of channel doping dose
The impact of the channel doping dose (CDD) on the threshold voltage of a MOS transistor is well known and thoroughly analyzed in the literature [Brews '78] [Brews '79a]. In this section we show the experimental results concerning the programming window obtained with the silicon nanocrystal cell with 9nm silicon nanocrystals capped by a 2nm Si3N4 trapping layer, where the channel doping dose is varied. More precisely, three CDDs are used: 2.4·10^13 at/cm², 8.5·10^13 at/cm² and 11·10^13 at/cm² (figure 2.10). The aim of this trial is to increase the injection probability thanks to the higher number of carriers at the channel surface. In figure 2.11a the programming window dispersion on wafer is shown. It is important to notice that, in spite of the sample dispersion, the trends of the threshold voltage shift versus the channel doping dose are coherent; Vtp and Vte shift toward higher levels, due to the threshold voltage dependence on the CDD [Brews '79a] [Brews '79b] [Booth '87]. As expected, the average programming window increases with the channel doping dose (figure 2.11b) and tends to saturate for the highest CDD. Since figure 2.11a shows the threshold voltage dependence on the CDD, we performed kinetic measurements forcing the programmed and erased threshold voltages to the levels Vtp=8.2V and Vte=3.5V. These levels represent the targets used in STMicroelectronics products. This Vt adjusting operation enables the different devices to be compared in order to evaluate the impact of the channel doping dose on the programming window only. We notice the improvement in programming efficiency when CDD=11·10^13 at/cm² (figure 2.12a), while at the same time the erase efficiency is decreased (figure 2.12b). The programming window trend is the same as for the statistical measurements made with box pulses (increasing the CDD increases the programming window), but the absolute values are not comparable because, in this case, the Vt adjusting is performed and the program/erase scheme is also different (box versus ramp). In this section we showed that increasing the channel doping dose increases the programming window, but the threshold voltage adjusting is needed to reach good levels of the programmed and erased states. In section 3.1 we demonstrated that the erase operation can be improved by increasing the nanocrystal size and density. We can therefore affirm that it is important to find the best tradeoff between the channel doping dose and the covered channel area in order to optimize the programming window. Finally, we decided to use the highest channel doping dose (CDD=11·10^13 at/cm²) for the cell optimization, as reported at the end of the chapter.
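For reference, the direction of the Vtp/Vte shift with CDD observed above can be read from the classical long-channel threshold voltage expression (a textbook approximation recalled here for illustration, not the exact model of our cell):

Vt = VFB + 2·φF + sqrt(2·q·εSi·NA·2·φF)/Cox

where NA is the effective channel doping concentration set by the CDD; both the bulk-charge (square-root) term and the Fermi potential φF grow with the doping, which is why Vtp and Vte move toward higher values when the CDD is increased.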
Effect of tunnel oxide thickness variation
The last technological parameter we varied in order to improve the Si-nc cell performances was the tunnel oxide thickness (tunox), because of the well-known dependence of Fowler-Nordheim tunneling on tunox [Fowler '28] [Punchaipetch '06]. In the literature, alternative solutions to the SiO2 tunnel oxide integrating high-k materials have been proposed [Fernandes '01] [Maikap '08]. The studied samples are described in figure 2.13a; three values of SiO2 tunnel oxide thickness were considered: tunox=3.7nm, tunox=4.2nm and tunox=5.2nm. It is not possible to measure the tunnel oxide thickness of the silicon nanocrystal cell directly, because the discrete nature of the nanocrystals does not enable them to be contacted. We thus decided to measure the capacitance of the memory stack integrated in large structures, applying -7V on the gate terminal (30 samples), and we verified that the measurement did not introduce parasitic charge trapping in the nanocrystals. From this we extrapolated the EOT of the memory stack and compared it with the theoretical values (figure 2.13b). The difference between the calculated and measured values comes from the fact that the calculated EOT does not take into account the substrate and gate doping in the capacitance stack. However, one can notice that the relative variation of the EOT of the stack corresponds to the variation of the tunnel oxide thickness of the three samples.
Furthermore, Transmission Electron Microscopy (TEM) pictures were taken to measure the physical tunnel oxide thickness at the end of the fabrication process (figure 2.13c). Using a specific image processing based on the light contrast of the TEM pictures, we measured the physical thicknesses of our samples, which correspond to the expected ones. Also in this case we performed experiments to evaluate the programming window dispersion on wafer (30 samples tested); the results are reported in figure 2.14. The dispersion is greater than 1V, due to process variation. This limits the data interpretation, but the trend is clear: increasing the tunnel oxide thickness decreases the programming window. Here the cell reaches a satisfactory erase level using tunox=4.2nm and tunox=3.7nm but, as in the case of the channel doping dose variation, the Vt adjusting is needed because of the impact of the tunnel oxide thickness on the cell threshold voltage [Yu-Pin '82] [Koh '01]. Also in this case, before the kinetic characterizations, the program/erase threshold voltages were fixed (Vte=3V and Vtp=8.2V) in order to evaluate the impact of the tunnel oxide thickness on the programming window only. One can notice that the tunnel oxide thickness has a limited influence on the channel hot electron programming operation. This is due to the dominant role of the horizontal electric field on the hot carrier injection probability [Eitan '81]. In table 2.1 we report the vertical electric field (ξvert) in the tunnel oxide calculated using a 9V gate voltage, and we show that, for the considered tunnel oxide thicknesses, the tunox variation only slightly impacts the vertical electric field [Tam '84]. Instead, during the Fowler-Nordheim erase operation, the gate voltage reaches -20V and only the vertical electric field is present. The FN erase only depends on the applied gate voltage and the tunnel oxide thickness [Fowler '28], the temperature being kept constant. We can thus affirm that the impact of the tunnel oxide thickness is more relevant for the erase operation; in particular a thickness of 4.2nm is sufficient to achieve the expected 4V programming window in less than 1ms. As a consequence of these considerations we performed program/erase characterizations using Fowler-Nordheim box pulses (Vg=±18V). In figure 2.17 the ∆Vt obtained after 100ms of program/erase time is plotted as a function of the tunnel oxide thickness, showing the dependence of the Fowler-Nordheim program/erase operations on the tunnel oxide thickness. The two operations are not symmetrical because applying a positive or a negative voltage on the gate terminal depletes the channel zone or not, which in turn varies the bulk surface potential. Moreover, this technological parameter impacts the reliability characteristics, which will be described in the next chapter. Finally, it is important to find a satisfactory tradeoff between the tunnel oxide thickness and the channel doping dose in order to adjust the program/erase threshold voltages. Moreover, in order to reach a 4V programming window, the maximum tunnel oxide thickness is 4.2nm for this cell architecture.
Figure 2.17. Dependence of programming window, measured using Vg=±18V after 100ms, versus tunnel oxide thickness for the programmed and erased states.
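To illustrate why the Fowler-Nordheim erase reacts so strongly to tunox while the vertical field changes only slightly, the Python sketch below evaluates the textbook FN current density J = A·E²·exp(-B/E) for the three thicknesses, assuming for simplicity a fixed voltage drop across the tunnel oxide; the constants A, B and the assumed voltage are rough textbook values, so only the relative trend between thicknesses is meaningful.

# Relative Fowler-Nordheim current density versus tunnel oxide thickness.
# Constants and the assumed oxide voltage drop are rough textbook values
# (illustrative sketch only, not the calibrated model of the cell).
import math

A = 1.0e-6     # A/V^2, typical FN pre-exponential constant for Si/SiO2
B = 2.4e8      # V/cm, typical FN exponential constant (~3.2 eV barrier)
V_TUN = 6.0    # V, assumed voltage drop across the tunnel oxide during erase

def fn_current_density(tox_nm, v_tun=V_TUN):
    e_field = v_tun / (tox_nm * 1e-7)             # V/cm
    return A * e_field ** 2 * math.exp(-B / e_field)

j_ref = fn_current_density(5.2)
for tox in (3.7, 4.2, 5.2):
    print(f"tox = {tox} nm -> J/J(5.2 nm) = {fn_current_density(tox) / j_ref:.1e}")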
Programming window cell optimization
In the previous paragraphs we evaluated the impact of the main technological parameters on the programming window of the STMicroelectronics silicon nanocrystal memory cell. The aim of this analysis was to define the best way to improve the programming window using the standard program/erase pulses of the Flash floating gate memory cell, while keeping the cell dimensions unchanged. Below we summarize the main conclusions of the previous studies that we have taken into account in order to optimize the silicon nanocrystal cell:
When increasing the silicon nanocrystal size, and thus the covered cell area, the programming window increases and in particular the Fowler-Nordheim erase operation is improved. We noticed that, using the standard memory cell stack, a coverage of 95% would be required to reach a 4V programming window, which is not consistent with the silicon nanocrystal principle of operation. In order to improve the programming window and optimize the stack of the Si-nc cell, we considered it fundamental to increase the coupling factor, as explained for the Flash floating gate by [Wang '79] [Wu '92] [Pavan '97] [Esseni '99] and in chapter 1. Two different recipes have been developed to achieve 9nm and 12nm silicon nanocrystals, reaching respectively 46% and 76% covered area. Furthermore, with the coupling factor optimization it was possible to decrease the ONO layer thickness down to 10.5nm to increase the vertical electric field during the erase operation. This thickness value was chosen in accordance with the recipes available in the STMicroelectronics production line.
The presence of the Si3N4 capping layer on the silicon nanocrystals increases the charge trapping probability and the covered channel area. The coupling factor is increased and the programming window increases accordingly. Observing the CDSEM pictures, we noticed that the Si3N4 capping layer grows around the silicon nanocrystals. Hence, if their size is big enough, the hybrid nanocrystals can come into contact and coalesce. In this case it is not possible to tell whether the programming window improvement is due to the presence of the Si3N4 capping layer or to the increase in covered area. In figure 2.7 we have shown the isolation of the nanocrystals in a tilted CDSEM picture. In figure 2.18 we plot on the same graph the results of figure 2.5 and figure 2.8, showing that the programming window obtained with the Si3N4 layer can be extrapolated from the trend obtained when varying the covered area. In this case we can consider that the improvement in programming window depends mainly on the covered area and only slightly on the increase in charge trapping probability. Even if the presence of the nanocrystal Si3N4 capping layer is helpful to improve the programming window, we decided to avoid this process step in order to minimize the effects of parasitic charge trapping [Steimle '04] [Gerardi '07] [Bharath Kumar '06]. This choice will be explained and described in the next chapter on cell reliability. We have shown that it is possible to improve the programming window by increasing the channel doping dose, paying attention to the shift of the threshold voltages. By increasing the channel doping dose up to 10^14 at/cm², a 20% programming window gain is achieved. In this case the adjusting of the program/erase threshold voltages is needed, and to do this it is important to find the best tradeoff with the variation of the other parameters (nanocrystal size and tunnel oxide thickness). After these considerations we decided to use 10^14 at/cm² as the CDD for the optimized silicon nanocrystal cell in order to reach a higher programming window; the details are given below.
Finally, we studied the impact of the tunnel oxide thickness on the program and erase operations. In particular, we demonstrated that during channel hot electron programming the tunnel oxide thickness only slightly impacts the programming window, because of the dominant dependence on the lateral electric field. On the contrary, this technological parameter strongly impacts the Fowler-Nordheim operations. In particular, we showed the effect on the erase operation using both the ramped gate voltage and box pulses; in the latter case an improvement of 1.5V/nm can be achieved. In order to reach the 4V programming window, a tunnel oxide of 4.2nm or thinner is needed. As for the channel doping dose, the program/erase threshold voltages are shifted by the tunnel oxide variation and a Vt adjusting operation is needed.
The layers stacked in the optimized nanocrystal cell are shown in figure 2.19. The 4.2nm thick SiO2 tunnel oxide was grown on a p-type substrate doped at the surface with a dose of 10^14 at/cm². Two different recipes were developed to grow 9nm and 12nm silicon nanocrystals. The cell stack is completed with an ONO layer with an equivalent oxide thickness of 10.5nm. We showed in chapter 1 that decreasing the ONO thickness increases the capacitance between the control gate and the floating gate and thus decreases the programming window; to compensate for this effect we increased the silicon nanocrystal size up to 12nm. Furthermore, the Fowler-Nordheim erase operation can be improved by decreasing the ONO thickness and thus increasing the vertical electric field on the tunnel oxide.
Using these two nanocrystal fabrication recipes we obtained the following samples:
Sample 1: Φ=9nm; density=7.3·10^11 nc/cm²; covering=46.4%
Sample 2: Φ=12nm; density=6.7·10^11 nc/cm²; covering=75.7%
In figure 2.20 the program/erase kinetic characteristics are plotted for the optimized stacks;
the dispersion on wafer is also highlighted (30 samples tested). The first point to notice is the limited dispersion, which is comparable for the two nanocrystal diameters. Once again we demonstrated that for the silicon nanocrystal cell the covered area only slightly impacts the channel hot electron programming, while the Fowler-Nordheim erase operation is strongly improved. The cell with the higher covered area can be erased in 0.4ms, reaching a programming window of 4.7V, which is greater than the 4V minimum programming window target; the program/erase threshold voltages were fixed at Vte=3V and Vtp=8.2V. In this case the quantum and/or Coulomb blockade effects are negligible because of the large size of the nanocrystals and the thick tunnel oxide [De Salvo '01]. In conclusion, with the optimized cell architecture it is possible to reach the 4V programming window using the standard program/erase ramps described in section 2.2. In the following chapters we will continue the study of the silicon nanocrystal cell by analyzing its reliability and energy consumption. The program/erase pulses are described in section 2.2.
Benchmarking with Flash floating gate
To conclude this paragraph, we compare the results obtained on the optimized silicon nanocrystal memory cell (Φ=12nm; density=6.7·10^11 nc/cm²; covering=75.7%) with the standard Flash floating gate, keeping the cell size constant. To compare these devices the program/erase levels were fixed at Vte=3V and Vtp=8V. In figure 2.21 the program kinetic characteristic is shown for each device. The performances of the optimized Si-nc cell are the same as those of the floating gate cell; the 4V minimum programming window is reached by channel hot electron operation in 3.5µs using the ramped gate voltage. The erasing time to reach the minimum 4V programming window is 0.2ms for the optimized Si-nc cell, a gain of 60% with respect to the Flash floating gate. Considering these last results, the programming window can be increased up to 5V by adjusting the program/erase ramp time. This is important with regard to the programming window degradation after cycling, as explained in the next chapter. In conclusion, all the trials varying the different technological parameters (nanocrystal size and density, Si3N4 capping, channel doping dose and tunnel oxide thickness) have enabled us to optimize the silicon nanocrystal cell programming window in order to make it comparable with the Flash floating gate memory cell. The aim is to substitute the floating gate and thus decrease the wafer costs. In the next chapter we will compare the optimized silicon nanocrystal cell with the Flash floating gate memory according to the reliability results (endurance and data retention).
Chapter 3 - Reliability of silicon nanocrystal memory cell
Introduction
In this section we present the results of our study on the reliability of the silicon nanocrystal memory cell. The data retention, i.e. the charge loss, is evaluated at different temperatures starting from a fixed programmed state [Gerardi '02]. The aim of the endurance experiments is to evaluate the cell functionality after a large number of program/erase cycles, typically 100k. In particular, the results will show the impact of several technological parameters on cell reliability: silicon nanocrystal size and density, silicon nitride capping layer, channel doping dose and tunnel oxide. The understanding of the experimental results will be useful to improve the cell performances, similarly to the previous chapter devoted to the programming window characterization. At the end we will present the results obtained for the optimized STMicroelectronics nanocrystal cell and compare them to the standard Flash floating gate.
Data retention: impact of technological parameters
The data retention experiments have been performed by programming the silicon nanocrystal cell with the manual and automatic test benches described in chapter 2. Let us evaluate the effect of the main technological parameters on data retention and choose the best cell architecture configuration to optimize the performance.
Effect of silicon nitride capping layer
We have seen in chapter 2 that hybrid silicon nanocrystal memory is an attractive solution to improve the cell programming window [Steimle '03] [Colonna '08] [Chen '09] [Tsung-Yu '10]. Moreover, many papers in the literature present the integration of high-k materials as a good option to achieve better cell performances [Lee '03] [Molas '07]. At STMicroelectronics we were not able to integrate high-k materials in the process flow, so we decided to develop a hybrid solution by capping the silicon nanocrystals with silicon nitride (Si3N4) and to compare it with the standard Si-nc cell. The presence of this layer improves the programming window by increasing the covered area and generating a higher number of trapping sites with respect to the simple Si-nc, as explained in chapter 2. The literature also describes the effect of temperature on charge loss when a silicon nitride layer is used to store the charges, as is the case for SONOS and TANOS memories [Tsai '01]. This explains why our hybrid Si-nc+SiN memory loses more charges than the simple Si-nc cell. The difference can be attributed to the portion of charge trapped at the tunox/SiN interface around the Si-nc. These trapped charges are more energetic and can easily be lost when the temperature increases. In this case the presence of the Si3N4 layer does not improve the cell data retention, because it is not sandwiched between the nanocrystal and the bulk to increase the barrier thickness. Rather, it caps the nanocrystals and directly contacts the tunnel oxide, creating parasitic charge trapping at the tunox/SiN interface.
Effect of channel doping dose
In figure 3.3 we plot the data retention results for the samples where the channel doping dose (CDD) is changed. In this case the tunnel oxide thickness is 5.2nm and the nanocrystals are capped with the Si3N4 layer (Φ=9nm+SiN=2nm). The charge loss at 150°C is not impacted by the CDD. The slight difference with the data presented in figure 3.1 can be attributed to the parasitic charge trapping at the tunox/SiN interface caused by the irregular Si3N4 layer deposition on the wafer; nevertheless, this difference is not relevant for understanding the cell behavior.
Effect of tunnel oxide thickness
The most important parameter for evaluating the charge loss during data retention is the tunnel oxide thickness (tunox) because, as shown in figure 3.2b, it defines the thickness of the barrier between the nanocrystals and the substrate. The literature explains the direct dependence of the charge loss on tunox in terms of the applied electric field [Weihua '07] [Ghosh '10].
The charge loss mechanisms can be activated depending on the tunnel oxide thickness and on the type of traps generated by the Si-nc fabrication process. We considered samples with three different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm (description in figure 2.13). Figure 3.4 shows the data retention results at 27°C, 150°C and 250°C. In these samples the silicon nanocrystals are capped by the Si3N4 trapping layer. Ideally, a memory cell must keep the stored charge for 10 years at 27°C. Considering the charge loss, the cell program level has to remain greater than the program verify level; in our case this level is fixed at 5.7V. The data retention specification is reached only using a tunnel oxide thickness of 5.2nm, whereas with tunox=4.2nm the charge loss is very close to the acceptable limit. Moreover, the charge loss mechanism is strongly accelerated when the temperature is increased. This is due to the parasitic charge trapping at the tunox/SiN interface [Chung '07]. As previously explained, the charge loss mechanism in this case is similar to that of SONOS and TANOS memories [Hung-Bin '12]. In table 3.1 we summarize the percentage of charge lost after 186 hours for all the temperatures and tunnel oxide thicknesses considered.
Table 3.1. Percentage of charge loss after 186h.
In figure 3.5 we show the Arrhenius plot of the retention time, defined as the time necessary to reach Vt=6V, which is very near the program verify level. The slopes of the extrapolated trends yield the charge loss activation energy (Ea) for each tunnel oxide thickness embedded in our memory stack. As expected, when the tunnel oxide thickness increases, a higher energy is necessary to activate the charge loss mechanism [Weihua '07] [Ghosh '10]. The extrapolated values are comparable to those found in the literature [Gerardi '08] [Lee '09] [Gay '12]; the differences are due to the cell architectures. More particularly, these activation energies can be impacted by the parasitic charge trapping in the Si3N4 capping layer. We can conclude that it is possible to achieve the data retention specification using a 5.2nm thick tunnel oxide in the temperature range [27°C; 150°C] but, as demonstrated in chapter 2 and in accordance with the literature, this parameter impacts the coupling factor and the Fowler-Nordheim erase efficiency. To improve the performance, the cell architecture was changed, avoiding the Si3N4 capping layer and thus minimizing the parasitic charge trapping (see section 3.4).
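The extraction of figure 3.5 follows the usual Arrhenius treatment, t_ret = t0·exp(Ea/kT). A minimal Python sketch is given below with hypothetical retention times (the measured values are not reproduced here) to show how Ea and the room-temperature extrapolation are obtained.

# Arrhenius extraction of the charge-loss activation energy from retention
# times measured at two temperatures (hypothetical values, illustration only).
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def activation_energy(t1_h, temp1_c, t2_h, temp2_c):
    t1_k, t2_k = temp1_c + 273.15, temp2_c + 273.15
    # t_ret = t0 * exp(Ea / (k*T))  ->  Ea from the ratio of two retention times
    return K_B * math.log(t1_h / t2_h) / (1.0 / t1_k - 1.0 / t2_k)

ea = activation_energy(t1_h=5000.0, temp1_c=150.0, t2_h=200.0, temp2_c=250.0)
t0 = 5000.0 / math.exp(ea / (K_B * (150.0 + 273.15)))          # prefactor in hours
t_27c_years = t0 * math.exp(ea / (K_B * (27.0 + 273.15))) / 8760.0
print(f"Ea = {ea:.2f} eV, extrapolated retention at 27 degC = {t_27c_years:.0f} years")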
Endurance: impact of technological parameters
Another important criterion of reliability is the endurance of the silicon nanocrystal memory cell. We present in this section the memory device degradation, in terms of programming window closure, when the number of program/erase cycles is increased. We investigate the effect of different technological parameters: silicon nanocrystal size and density (covered area), silicon nitride capping layer (Si3N4), channel doping dose and tunnel oxide thickness. The aim is to understand the Si-nc cell behavior and choose the best architecture configuration to optimize the memory performance.
Impact of silicon nanocrystal size
We described in chapter 2 that increasing the nanocrystal size, and thus the coverage of the channel area, increases the programming window. Using the same samples, we report in figure 3.7 the results of 100kcycle endurance experiments. The cells were programmed by channel hot electron using Vg=9V, Vd=4.2V and tp=5µs; the Fowler-Nordheim erase was performed with Vg=-18V, using a ramp of 5kV/s followed by a plateau of 90ms (te). A schematic of the cycling signals is reported in figure 3.6. The initial programming window is 1.9V for the bigger nanocrystals and 1.1V for the 6nm nanocrystals. This result is coherent with the data reported in chapter 2. The erase operation is improved by increasing the channel covered area (9nm Si-nc), because the coupling factor is increased. However, in both cases we notice an important shift of the threshold voltages, in particular for the erased state (2.4V). This can be attributed to parasitic charge trapping in the ONO interpoly dielectric layer [Jacob '07] [Gerardi '07] [Hung-Bin '12]: the electric field applied during the FN erase is not sufficient to evacuate the charges trapped during the programming operation in the SiN layer of the ONO stack. The same phenomenon is present in SONOS and TANOS memories. Despite the improvement due to the increase of the silicon nanocrystal size (or covered area), the cell does not reach the specification of 100kcycles. The cell with the 6nm silicon nanocrystals has, after only 10 cycles, its erase threshold voltage at the same level as the program threshold voltage; hence it is impossible for the reading circuits to discriminate the two states. For the sample with the embedded 9nm Si-nc the endurance limit is 10kcycles. Finally, to improve the programming window and the erase efficiency, and thus the cell endurance, one way is to increase the silicon nanocrystal covered area. Further development of the cell architecture (Si3N4 capping layer, channel doping dose, tunnel oxide thickness) is needed to reach a good programming window; we will study these aspects in the next sections.
Impact of silicon nitride capping layer
We repeated the endurance experiments on the samples with the silicon nanocrystals capped by the Si3N4 layer.
Impact of channel doping dose
We have previously seen in chapter 2 that increasing the channel doping dose enables the programming window to be increased and generates a shift of the program/erase threshold voltages. One can notice the shift of the programming window toward higher voltages at the beginning of the cycling, due to the increase of the CDD as described in chapter 2. Moreover, in this case the parasitic charge trapping in the Si3N4 capping layer is present, as demonstrated by the shift of the threshold voltages during the cycling. The closure of the programming window is evident for the three samples, but the highest CDD presents the most stable programming threshold voltage and only the erased state shifts. In order to summarize the results, we report in table 3.3 the values of the programming windows before and after the cell cycling and the program/erase threshold voltage shifts. For the first time we achieved a 100kcycle endurance characteristic keeping the programmed and erased states separated, using CDD=1.1·10^14 at/cm² and an erase time te=10ms, but the programming window is only 1.1V. This is not enough to achieve good cell functioning; hence other improvements of the memory cell stack are necessary. In the next section the impact of the tunnel oxide thickness will be studied.
Impact of tunnel oxide thickness
The last technological parameter we studied is the tunnel oxide thickness. We varied it between 3.7nm and 5.2nm, using the silicon nanocrystals capped by the silicon nitride layer. One can notice the limited influence of the tunnel oxide thickness on the CHE programming operation, while the FN erase is strongly impacted by this parameter. The endurance experiments confirm the improvement of the erase efficiency when the tunnel oxide thickness is decreased. This is due to the higher vertical electric field applied, as explained in chapter 2.
Silicon nanocrystal cell optimization
In this chapter we evaluated the impact of the main technological parameters on the reliability of the STMicroelectronics silicon nanocrystal memory cell. The results reported here concerning the data retention and endurance experiments are consistent with the literature, in particular the charge loss activation energy and the threshold voltage shift during cycling due to parasitic charge trapping. The aim of this analysis was to define the best way to achieve better results in terms of reliability of the Si-nc memory cell.
In order to optimize the silicon nanocrystal cell we have taken into account the following points:
In the literature it is shown that the data retention is not strongly impacted by the nanocrystal size [Crupi '03] [Weihua '07] [Gasquet '06]. However, this parameter directly impacts the programming window and the erase efficiency, as shown in chapter 2. Moreover, the increase of the covered area leads to good cell functioning after 100k program/erase cycles, with better erase efficiency during the cycling. In any case, we noticed that increasing the covered area, and thus the cell coupling factor, is not sufficient, with the studied cell architecture, to remove the parasitic charge trapped in the ONO control dielectric [Gerardi '07] [Jacob '07]. Hence it is important to increase the nanocrystal size and density, but further improvements of the cell architecture are needed.
The presence of the silicon nitride capping layer on the nanocrystals increases the charge trapping probability and the cell covered area. We described in section 3.2.1 the differences between samples with nanocrystals entirely surrounded by the Si3N4 layer and samples where the Si-nc are grown on the SiO2 tunnel oxide and afterwards capped by Si3N4. Our cell corresponds to the second case. Thus, there is no benefit for the data retention from the Si3N4 presence, because the barrier to consider for the data retention corresponds only to the tunnel oxide thickness.
Moreover, the presence of parasitic charge trapping at the tunox/Si3N4 interface facilitates the charge loss at high temperature. Concerning the cell endurance with the silicon nitride capping layer, the coupling factor is increased and thus the programming window increases too, but the parasitic charge trapping in the Si3N4 does not enable good cell functionality after 100k program/erase cycles with the described memory stack. To avoid the parasitic charge trapping that worsens the Si-nc cell reliability, we decided for the optimization to skip this process step, adjusting the other technological parameters to achieve better results.
The data retention is unchanged when varying the channel doping dose. It is thus possible to achieve a programming window gain by increasing the channel doping dose and adjusting the program/erase levels if needed (chapter 2). Moreover, increasing the CDD decreases the shift of the programmed threshold voltage, because the programmed threshold voltage is close to the saturation level; CDD=10^14 at/cm² is chosen for the optimized silicon nanocrystal memory cell.
Finally, we showed the dependence of the charge loss on the tunnel oxide thickness and we extrapolated the activation energies for the different samples. As for the erase operation, the charge loss increases when the tunnel oxide thickness is reduced. We noticed that a 5.2nm tunnel oxide is needed to achieve the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness strongly impacts the Fowler-Nordheim erase operation; hence, to achieve good cell functioning after 100k program/erase cycles, a tunnel oxide thickness of 3.7nm has to be used. For our study it is now important to evaluate the cell behavior using a 4.2nm tunnel oxide, embedded in a different architecture without the silicon nitride capping layer and with an optimized ONO stack, in order to increase the vertical electric field and improve the program/erase efficiency and the cell reliability.
Data retention optimization
It is clear now that if, on the one hand, the Si3N4 capping layer increases the programming window, on the other hand it strongly degrades the Si-nc cell reliability. We show below the results obtained using the silicon nanocrystal cell with the optimized stack described above. In order to show the gain reached by avoiding the Si3N4 capping layer, in figure 3.12 we plot the data retention characteristics at 150°C of the optimized Si-nc cell (Φ=9nm) compared with the data of the hybrid nanocrystal (Si-nc+SiN) cell; the two samples have the same tunnel oxide thickness (tunox=4.2nm). In this case the optimized Si-nc cell is able to retain the stored charge for up to 10 years at 150°C. This result demonstrates that the data retention is strongly improved by avoiding the silicon nitride capping layer.
Endurance optimization
We complete the optimized cell reliability characterization by showing the data concerning the endurance degradation. The impact of the Si3N4 on endurance was evaluated above. In figure 3.15 we plot the results of the hybrid silicon nanocrystal cell compared with the optimized cell; the cell schematics are also shown. In order to achieve the same programming window (4V) at the beginning of the cycling, we used different program/erase conditions:
Hybrid Si-nc cell (Φ=9nm+SiN=2nm)
o CHE programming: Vg=9V; Vd=4.2V; tp=5µs.
o FN erase: Vg=-18V; te=ramp=5kV/s + 10ms.
Optimized Si-nc cell (Φ=9nm)
o CHE programming: Vg=9V; Vd=4.2V; tp=1µs.
o FN erase: Vg=-18V; te=ramp=5kV/s + 1ms.
Using the optimized memory stack it is possible to decrease the programming and erase times thanks to the higher covered area and the associated coupling factor. Moreover, by avoiding the Si3N4 capping layer, the erase efficiency and the endurance are greatly improved. A very slight shift of the program/erase threshold voltages for the optimized sample results in a 3.6V programming window after 100kcycles. This result was reached without any pre-cycling cell treatment. In fact, using a positive or negative high-voltage stress before cycling helps accelerate the degradation process and improves the endurance performance, with less memory window decrease [Yong '10].
The shifts of the threshold voltages measured for the optimized Si-nc cell are not so marked, thus the pre-cycling treatment is not needed. After reaching these good results with the optimized Si-nc cell, we show in figure 3.16, for the first time to our knowledge, the 1Mcycle endurance characteristics of two optimized samples with different nanocrystal sizes (Φ=9nm and Φ=12nm), achieving a large programming window. The cell with 12nm Si-nc is able to maintain a 4V programming window after 1Mcycles, improving on the results published in [Ng '06]. In table 3.5 the values of the programming window before and after the cycling, as well as the threshold voltage shifts, are reported. Hence, to improve the endurance performance up to 1Mcycles it is important to avoid the Si3N4 capping of the Si-nc, to increase the covered area and to use a thinner ONO layer. Using the program/erase conditions of the experiments reported in figure 3.16, we repeated the cycling varying the temperature (from T=-40°C up to T=150°C). In figure 3.17 the results are shown for the Si-nc cell with the higher covered area (sample with Φ=12nm and density=6.7·10^11 nc/cm²). The programming window after 1Mcycles remains bigger than 4V and its value does not depend on the temperature. One can notice that by increasing the temperature the characteristic shifts toward lower voltages. Both the programming and the erase operations are impacted by the temperature [Della Marca '13]. The programming efficiency decreases when the temperature increases because the channel current decreases, as does the injection probability [Eitan '81] [Emrani '93]. Moreover, at low temperature, an increase in mobility is observed for Si-nc transistors, generating a quasi-linear increase of the threshold voltage [Souifi '03]. In the case of the FN erase, the efficiency increases with the temperature. This is justified assuming that the dominant conduction mechanism is trap-assisted [Zhou '09]. Therefore the programming window is bigger than 4V at the first cycle for all the temperatures, and the shift of the threshold voltages is due to the program/erase conditions being kept unchanged. In table 3.6 the programming window before and after the cycling, as well as the threshold voltage shifts, are reported for the different temperatures: T=-40°C, T=27°C and T=150°C.
Benchmarking with Flash floating gate
To conclude this chapter, we compare the results concerning the optimized silicon nanocrystal memory cell with the standard Flash floating gate, keeping the cell size constant. In figure 3.18 the data retention at 250°C is shown for each device. We have seen previously (figure 3.11) that the optimized cell can maintain the programmed memory state for 10 years up to 150°C. To satisfy our fixed data retention specification and to match the Flash floating gate results, the cell must keep a program threshold voltage greater than 5.75V at 250°C up to 168h. One can notice that the Si-nc cell is just at the limit of this target, and further efforts concerning the tunnel oxide optimization are required to reach the standard floating gate device. The main constraint is the fast initial charge loss due to the charge trapping in the tunnel oxide, in the ONO layer and at the interfaces (substrate/tunox, Si-nc/oxide). One way to improve the data retention of the optimized Si-nc cell is to increase the tunnel oxide thickness, taking into account the tradeoff with the programming window. Moreover, special recipes of tunnel oxide growth can be developed, playing on the process time and temperature, the oxide nitridation and the preparation of the active surface for silicon nanocrystal nucleation.
However, these options will be taken into account in future work. The endurance results are also compared keeping the program/erase conditions unchanged (CHE programming: Vg=9V, Vd=4.2V, tp=1µs and FN erase: Vg=-18V, ramp=5kV/s+te=1ms). We considered the optimized Si-nc cell with the larger programming window (Φ=12nm); the data are plotted in figure 3.19. As expected, using the same program/erase conditions the Flash floating gate presents a larger programming window at the beginning of the cycling (ΔVt=7V), thanks to its superior coupling factor and higher programming efficiency. Its more significant threshold voltage degradation, however, determines a major closure of the programming window after 1Mcycles (ΔVt=2.8V), while the endurance characteristic remains more stable for the Si-nc cell. This is why it is important for the floating gate device to achieve a larger programming window. To put figures on these results, we report in table 3.7 the programming window before and after the cycling as well as the threshold voltage shifts.
Chapter 4 - Consumption of the silicon nanocrystal memory cell
Introduction
In this section we present the results concerning the current and energy consumption of the floating gate and silicon nanocrystal memory cells during the channel hot electron programming operation. The current consumption of a Flash floating gate memory cell is usually evaluated using a current/voltage converter or an indirect technique. In this way it is not possible to capture the dynamic cell behavior or to measure the cell performances in a NOR architecture for a programming pulse of several microseconds. Moreover, the indirect method, which will be explained in this chapter, is not applicable to silicon nanocrystal memories. In this context we developed a new experimental setup in order to measure dynamically the current consumption during a channel hot electron programming operation.
This method helps to understand the dynamic behavior of the two devices. The energy consumption is also evaluated using different bias and doping conditions. The aim was to characterize the impact of the different parameters on the floating gate cell consumption and to find the best tradeoff for the Si-nc cell. Furthermore, the consumption due to the leakage of the unselected cells in the memory array is measured in order to complete this study. In conclusion, the consumption is optimized and compared for both devices, giving new solutions for low power applications.
Methods of Flash floating gate current consumption measurement
Today one of the most important challenges for Flash floating gate memory cells in view of low power applications is to minimize the current consumption during the Channel Hot Electron (CHE) programming operation. A specific consumption characterization technique is presented in the literature, but it requires a complex measurement setup and is limited by circuit time constants [Esseni '99] [Esseni '00b] [Maure '09]. As an alternative, it is possible to calculate the cell consumption using a static drain current measurement on an equivalent transistor.
Standard current consumption measurement
In the literature it is shown how to measure the drain current of a Flash floating gate using a dedicated experimental setup: in [Esseni '99] and [Esseni '00b] the drain current is measured during the programming operation with a setup based on a current/voltage converter.
Indirect current consumption measurement
As an alternative to the direct measurement on the floating gate cell, it is possible to calculate the static current consumption during the programming operation starting from the drain current measured on the equivalent transistor (called dummy cell), where the control gate and the floating gate are shorted and the geometric dimensions (channel length and width) are kept unchanged. This technique enables the consumption to be calculated, regardless of the programming time, using a commercial electrical parameter analyzer, the current being considered constant during the programming. In order to explain this method we consider the following formula:

V_FG = αG·(V_G − Vt) + αD·V_D + Vt_eq    (4.1)

where αG and αD are the control gate and drain coupling factors, V_G and V_D are the voltages applied to the control gate and drain during programming, Vt_eq is the threshold voltage of the equivalent transistor (dummy cell) and Vt is the threshold voltage of the floating gate cell during the programming operation.
Defining the overdrive voltage as:

V_OV = V_G − Vt    (4.2)

We obtain:

V_FG = αG·V_OV + αD·V_D + Vt_eq    (4.3)
The coupling factors have been calculated using the capacitance model shown in figure 4.3 and the cell dimensions; in this simple model the parasitic capacitances are also considered. For the standard Flash floating gate we calculated αG=0.67 and αD=0.07. In the case of αG it is possible to compare the theoretical result with the values measured using different experimental techniques [Wong '92] [Choi '94]. This requires the static measurement of the electrical parameters of both a floating gate cell and a dummy cell. In figure 4.4 we report the box plot of αG calculated as the ratio between the subthreshold slope of the dummy cell and the subthreshold slope of the floating gate cell. The cell dimensions are W=90nm and L=180nm. One can notice the effect of the dispersion on wafer related to the process variations (tunnel oxide integrity, source and drain implantation, channel doping dose, geometrical effects, etc.) and to the coupling factor calculation (36 samples tested). Finally, the overdrive voltage is measured by monitoring the Vt evolution during a programming operation. When the cell is programmed by a ramp, the Vov remains constant, as does the CHE injection [Esseni '99].
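A minimal sketch of this capacitive-divider calculation is given below; the capacitance values are purely illustrative placeholders (chosen to reproduce the quoted coupling factors), not the ones extracted from the layout.

# Coupling factors from the capacitive divider of the floating gate.
# The capacitance values below are illustrative placeholders only.
C_PP = 0.67e-15   # F, control gate / floating gate (ONO) capacitance
C_B  = 0.24e-15   # F, floating gate / channel-bulk (tunnel oxide) capacitance
C_D  = 0.07e-15   # F, floating gate / drain parasitic capacitance
C_S  = 0.02e-15   # F, floating gate / source parasitic capacitance

c_total = C_PP + C_B + C_D + C_S
alpha_g = C_PP / c_total
alpha_d = C_D / c_total
print(f"alpha_G = {alpha_g:.2f}, alpha_D = {alpha_d:.2f}")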
In figure 4.6 the measured Vt values, obtained by applying the 1.5V/µs ramp on the control gate with a drain voltage of 4.2V, are plotted. The ramped control gate voltage is emulated using a series of pulses, as explained in chapter 2. In order to reduce the error in the Vov calculation, the overdrive is calculated as follows:

V_OV = V_G − (Vt_n + Vt_n+1)/2    (4.4)

where Vt_n and Vt_n+1 are two consecutive threshold voltage measurements along the ramp. At this point it is possible to calculate the floating gate potential reached by the cell after the channel hot electron programming operation; its value is VFG=3.3V. In order to measure the static cell consumption, this potential is applied on the gate of the dummy cell while maintaining Vd=4.2V. In figure 4.8 we report the drain current absorption (Id) under programming conditions measured for the equivalent transistor. During the experimental trials we noticed a degradation of the Id level for consecutive measurements on the same device. This is due to the high voltage applied between the gate and drain terminals, which degrades the tunnel oxide.
Thus the measurement has been performed using a sampling time as fast as possible (65µs)
depending on the parameter analyzer speed. Using this indirect method it is possible to evaluate the floating gate cell consumption from static measurements on an equivalent transistor (dummy cell). This procedure introduces a significant error due to the spread of the dummy cell parameters and to the assumption that the drain current absorption remains constant during the programming phase; this means the energy consumption is overestimated with respect to the real conditions. On the other hand, the direct measurement on the floating gate cell, using the IV converter described above (figure 1.2), shows relevant limits when the programming pulse is short (several microseconds), due to the presence of parasitic capacitances in the measurement setup. That is why we were motivated to develop a new measurement technique.
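To summarize the indirect procedure in a compact form, the sketch below chains equation (4.3) with the static energy estimate; the overdrive, the dummy-cell threshold and the static drain current are placeholder values (only αG, αD, Vd and the programming time come from the text), so the numbers only show how the estimate is built.

# Indirect (static) estimate of the programming consumption from the dummy cell.
# Vov, Vt_eq and the static drain current are placeholders, not measured values.
ALPHA_G, ALPHA_D = 0.67, 0.07   # coupling factors calculated above
V_D = 4.2                       # V, drain voltage during programming
T_PROG = 5e-6                   # s, programming time

v_ov = 1.0                      # V, overdrive during the ramp (placeholder)
vt_eq = 2.0                     # V, dummy-cell threshold voltage (placeholder)
v_fg = ALPHA_G * v_ov + ALPHA_D * V_D + vt_eq   # equation (4.3)

i_d_static = 100e-6             # A, dummy-cell drain current at (v_fg, V_D), placeholder
e_static = i_d_static * V_D * T_PROG            # J, assumes a constant current
print(f"V_FG = {v_fg:.2f} V, static energy estimate = {e_static * 1e9:.1f} nJ")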
New method of current consumption measurement
We discussed in paragraph 2.2 the complex measurement setup for dynamic drain current measurements. This method is affected by a high time constant and does not enable measurements over a very short time period. The alternative indirect method of absorbed Id extrapolation that we previously described is equally inaccurate and does not enable the energy consumption calculation. We propose a new measurement technique to measure the drain current during the CHE programming operation using pulses of several microseconds. Our setup is shown in figure 4.9, where we use the Agilent B1500 equipped with two WGFMU (Waveform Generator and Fast Measurement Unit, Agilent B1530A) modules
[Della Marca '11b] [Della Marca '11c]. In this way it is possible to set the sampling time to 10 ns and to measure the current dynamically. Moreover, a power supply source can be connected through a low resistance switch matrix to the FG to complete the device biasing. One can notice that the drain current is not constant during the programming operation when a ramped gate pulse is used. The Id becomes constant when the equilibrium condition is reached [Esseni '99], and its quasi-static value decreases when the gate voltage remains constant. The importance of this characterization technique thus lies in the understanding of the cell behavior and in the energy consumption calculation.
Floating gate consumption characterization
In order to understand the cell behavior during the channel hot electron operation and to optimize its performances, we used the dynamic measurement technique to evaluate the impact of the programming pulse shape, the impact of the drain and bulk biases, and the impact of the technology (channel doping dose and lightly doped drain). The study of the current and energy consumption during the programming operation is not limited to the current absorption of the single cell, but is extended to the bitline leakage current as well. In the memory array, the unselected cells connected to the same bitline as the selected cell contribute to the global consumption with their drain/bulk junction leakage [Della Marca '13]. The principle of the bitline leakage measurement will be explained in paragraph 4.3.2.
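To fix ideas on this array-level contribution, the sketch below simply adds the junction leakage of the unselected cells to the current of the selected cell; the cell count and the per-cell leakage are hypothetical values chosen for illustration.

# Total bitline current during programming: selected cell + unselected-cell leakage.
# The cell count and per-cell leakage are hypothetical values (illustration only).
I_CELL_PROG = 80e-6        # A, drain current of the selected cell (placeholder)
I_LEAK_PER_CELL = 5e-9     # A, drain/bulk junction leakage of one unselected cell
N_CELLS_PER_BITLINE = 512  # number of cells sharing the bitline

i_bitline = I_CELL_PROG + (N_CELLS_PER_BITLINE - 1) * I_LEAK_PER_CELL
leak_share = 100.0 * (i_bitline - I_CELL_PROG) / i_bitline
print(f"I_bitline = {i_bitline * 1e6:.1f} uA, leakage share = {leak_share:.1f} %")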
Cell consumption
Impact of programming pulse shape
We have seen before that in the literature the floating gate behavior is described when the gate pulse is a box or a ramp. We applied our dynamic method to measure the drain current and, consequently, the energy consumption and the programming window. The aim was to find the best tradeoff to improve the cell performances. In figure 4.11a the boxes applied on the control gate are shown; the ramp speed is 45V/µs and the drain voltage is constant at 4.2V. For all box pulses, the measured current peak is constant (figure 4.11b).
When the gate voltage remains constant, the Id current quickly decreases following an exponential law. We have to plot the consumption data in arbitrary units (a.u.) to respect STMicroelectronics data confidentiality. This means that it is possible to reach low energy consumption levels and to program in a very short time. After each programming operation, the threshold voltage is measured and the cell is erased back to the same starting level. In this way, we calculate the programming window (PW) as the difference between the programmed and erased threshold voltages. Then, the energy consumption (Ec) is calculated using the following formula:
Ec = ∫_0^tp Id(t)·Vd·dt    (4.5)
where tp is the programming time. With the same method, we measured the drain current for different ramps applied on the control gate (figure 4.13). It is worth noting that increasing the ramp speed increases the Id current peak. On the contrary, when the ramp is slower, the Id current is smoothed (no peak), but the programming time increases. As explained before, we also calculated the programming window and the energy consumption in this case; the results are plotted in figure 4.14.
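In practice Ec is obtained by numerically integrating the sampled Id(t) waveform delivered by the WGFMU; a minimal sketch with a synthetic waveform (not measured data) is shown below.

# Energy consumption from a sampled drain-current waveform, equation (4.5).
# The waveform below is synthetic; real data come from the WGFMU (10 ns sampling).
import math

V_D = 4.2        # V, constant drain voltage
DT = 10e-9       # s, sampling period
T_PROG = 5e-6    # s, programming time (ramp + plateau)

# Synthetic Id(t): 1 us ramp-up followed by an exponential decay on the plateau.
i_d = [80e-6 * min(i * DT / 1e-6, 1.0) * math.exp(-max(i * DT - 1e-6, 0.0) / 1e-6)
       for i in range(int(T_PROG / DT))]

# Trapezoidal integration of Id(t) * Vd over the programming pulse.
ec = sum(0.5 * (i_d[k] + i_d[k + 1]) * V_D * DT for k in range(len(i_d) - 1))
print(f"Ec = {ec * 1e9:.2f} nJ")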
The programming window and the energy consumption both decrease when the ramp speed increases. It is possible to reach a very low Ec while maintaining a good PW level, although below the minimum specification (figure 4.14). This specification is set by the sense amplifier sensitivity. This study enables the gate pulse shape to be chosen according to the best compromise between the cell performances (PW, Ec, Id peak). The final amplitude of Vg and the programming time are kept constant at 9V and 5µs respectively (charge pump design constraints). In order to optimize the cell performances, we decided to merge a 1.5V/µs ramp with a 1µs plateau in order to avoid the Id peak problem while maintaining a satisfactory programming window. The results are summarized in Table 1, where the gain/loss percentages are normalized with respect to the case of single box programming. Using the new dynamic measurement method, we characterized the device with different ramp and box programming signals. This procedure enables the best programming pulse shape to be chosen with respect to the final embedded low power product application. We have shown one possible optimization with respect to the standard box pulse. The best tradeoff reduces the current consumption by 35%. However, it decreases the programming window by 11% and increases the energy consumption by 10%, if the programming duration is kept constant (5µs). Another improvement of the floating gate cell performances can be obtained using appropriate drain and bulk biases. This study will be presented in the next paragraph.
Impact of drain and bulk biases
Using the optimized pulse (ramp + plateau), we studied the dynamic cell behavior for several drain (Vd) and bulk (Vb) voltages. At the highest drain voltages the drain current ends up increasing again. The current variation in this region is attributed to the effects of the high fields applied between gate and substrate, as well as between drain and source, and to the channel modulation. These effects induce hot carrier generation, thus increasing the drain current.
In this case the channel is completely formed and pinched off close to the drain; the position of the pinch-off point is modulated by the drain voltage [Benfdila '04] [Moon '91] [Wang '79].
In figure 4.17d, it is worth noting that we found an optimal value of the drain current between the low and high injection zones (Vd=3.8V). After each programming pulse, we measured the threshold voltage. We then repeated the experiment with a reverse body bias to benefit from the CHISEL effect [Takeda '83] [Driussi '04] [Esseni '00a]. The results are reported in figure 4.19. By increasing the amplitude of the bulk bias the injection efficiency is increased, reaching a bigger programming window. At the same time, the energy consumption of the drain charge pump decreases due to the current reduction, allowing relaxed design constraints. Here we only considered this contribution, but when the substrate is biased a bulk current is also present.
This current impacts the size of bulk charge pumps.
Impact of channel doping dose
After this study on the programming pulses, where we understood the significant role of the channel surface potential, we decided to modify an important technological parameter which can impact the cell consumption: the Channel Doping Dose (CDD). We tested the device described above using the optimized gate pulse (ramp + plateau) presented in the previous paragraph.
Bitline leakage
Another interesting point to evaluate in a NOR-architecture memory array is the BitLine Leakage (BLL), due to the Gate Induced Drain Leakage (GIDL) current in Band-to-Band Tunneling (BBT) regime during the CHE programming operation [Rideau '04]. As highlighted in previous studies [Mii '92] [Orlowski '89] [Touhami '01], several technological parameters, such as cell LDD doping (dose, tilt, and energy), drain-gate overlap, or STI shape have an impact on electric fields in the drain-bulk junction, responsible for GIDL. In this section, the impact of arsenic LDD implantation energy on bitline leakage measurements is presented through electrical characterizations, performed on a dummy cell structure.
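The sensitivity to the junction field can be read from the band-to-band tunneling expression commonly used to model GIDL (a generic textbook form, not the exact TCAD model used here):

I_GIDL ≈ A·Es·exp(-B/Es)

where Es is the total (lateral plus vertical) electric field at the gated drain surface and A, B are material-dependent constants; the exponential term explains why even a moderate reduction of the lateral field translates into a large reduction of the bitline leakage.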
Impact of lightly doped drain implantation energy
It is traditionally known that the LDD process is used in classic MOSFETs to reduce the lateral electric field by forming a gradual drain junction doping near the channel and, as a consequence, to decrease the hot electron effect at the drain side [Nair '04] [Yimao '11].
TCAD simulations of LDD implantation
To understand the effect of the LDD on the BLL mechanism, we performed a TCAD investigation using the Synopsys commercial tool for both process and electrical simulations.
The process simulator parameters, i.e. doping diffusion and segregation coefficients, have been fine-tuned in order to obtain electrical results in accordance with experimental data. For electrical simulations, the hydrodynamic transport model has been adopted. Figure 4.23a
shows the 2D cell net active doping profiles of the LDD and drain-bulk junction, for several implantation energies. In figure 4.23b, the net doping profile along a horizontal cut below the Si/SiO2 interface is reported and five regions are identified, from left to right: source, LDD, channel, LDD and drain. We observe that the net doping level in the channel-LDD region (also corresponding to the gate-drain overlap region) decreases as the implantation energy increases. It has been shown that doping and surface potential gradients have an impact on GIDL through the lateral electric field [Parke '92] [Rideau '10]. In the present case, a less abrupt net doping profile in the channel-LDD region for the highest implantation energy (figure 4.23b) leads to a lower lateral electric field and a smaller leakage current. Figures 4.24b and 4.24c report the lateral and vertical electric field profiles, respectively. It can be noticed that, on the one hand, no significant variation is seen in the vertical electric field peak (figure 4.24c) while, on the other hand, the lateral electric field peak decreases as the LDD implantation energy increases (figure 4.24b). As previously mentioned, the reduction of the lateral electric field, and thus of the global electric field, decreases the leakage current of the drain-bulk junction due to Band-to-Band Tunneling. Although increasing the cell LDD implantation energy could help decrease the bitline leakage, we also have to take into account its impact on the cell performances during the programming operation. In what follows, we will focus on the impact of the implant energy on the write efficiency. Programming is performed on a standard cell using CHE injection. We bias the control gate and the drain with 9V and 3.8V box pulses respectively, and the bulk with -0.5V. In figure 4.25a the programming window and the bitline leakage are plotted versus the LDD implantation energy. This graph highlights the fact that the programming window is impacted by the LDD energy and decreases as the energy increases, due to the reduction of the lateral electric field contribution. A satisfactory tradeoff can be found, reaching a gain of 49% in terms of BLL reduction while losing only 6% of PW, by increasing the standard LDD implantation energy by +10keV. Further improvements can be made with a +20keV increase, gaining 70% of BLL reduction against less than 10% loss of PW. This study has been performed keeping the channel doping dose unchanged. In order to find the best tradeoff, it is important to take into account that when the CDD is increased, the BLL increases too, because the lateral electric field is enhanced (figure 4.25b). In conclusion, we found a programming scheme optimization for the floating gate cell using the new dynamic measurement method for the drain current consumption. The study enables the best tradeoff to be found depending on the cell application, in terms of dynamic consumption and programming window. In addition, we considered the impact of the drain and bulk biases, highlighting the optimum working point for our technology using Vd=3.8V and Vb=-0.5V.
Finally the impact of channel doping dose and lightly doped drain implantation energy have been studied to improve the consumption due to the unselected cells of bitline.
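The trade-off reported above can be made concrete with a short numerical sketch. The percentages are the ones quoted in the text (49% BLL reduction for 6% PW loss at +10keV, 70% for less than 10% at +20keV); the scoring rule used to rank the splits is only an illustration, not the criterion actually used in this work.

# Illustrative trade-off selection between bitline leakage (BLL) reduction and
# programming-window (PW) loss versus extra LDD implantation energy.
# The data points are the values quoted in the text; the scoring rule is only
# an example of how such a trade-off could be ranked.
candidates = [
    # (delta_energy_keV, bll_reduction_%, pw_loss_%)
    (0, 0.0, 0.0),      # standard LDD implantation energy
    (10, 49.0, 6.0),    # +10 keV split
    (20, 70.0, 10.0),   # +20 keV split (PW loss reported as "less than 10%")
]

def score(bll_reduction, pw_loss, pw_weight=2.0):
    """Rank a split: reward leakage reduction, penalize PW loss.
    pw_weight expresses how much more a % of PW matters than a % of BLL."""
    return bll_reduction - pw_weight * pw_loss

best = max(candidates, key=lambda c: score(c[1], c[2]))
for dE, bll, pw in candidates:
    print(f"+{dE:2d} keV: BLL -{bll:4.1f}%, PW -{pw:4.1f}%, score {score(bll, pw):5.1f}")
print(f"Selected split: +{best[0]} keV")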
Silicon nanocrystal cell consumption characterization
The study of the floating gate cell consumption helped understand the main parameters that impact this aspect during the channel hot electron programming operation. We have seen the importance of developing a new setup for dynamic current consumption measurements. This method becomes compulsory here because the discrete nature of the nanocrystals does not allow a dummy cell to be designed for static measurements, and thus for extrapolating the drain current consumption. In this section we present the impact of the programming scheme and of the tunnel oxide thickness on a hybrid silicon nanocrystal cell (Si-nc+SiN). Finally, we show the consumption results of the optimized Si-nc cell compared with those of the standard floating gate.
Impact of programming pulse shape
After the study on the floating gate cell, we used the same dynamic current measurement setup to evaluate the hybrid Si-nc cell with a 5.2nm tunnel oxide thickness. This device was chosen for its higher programming window due to the SiN presence, which increases the charge trapping probability (see chapters 2 and 3). The programming window and the consumption are evaluated using box and ramp pulses, also considering the optimization used in the case of the floating gate (ramp+plateau). In figure 4.26 we show the applied gate voltage pulses; the drain voltage is constant (Vd=4.2V) and the source and bulk terminals are grounded (Vs=Vb=0V). The programming pulse varies from a box to a ramp of 1.2V/µs; between these two conditions, each ramp is followed by a plateau of different duration to improve the programming window, while the programming time is kept constant (tp=5µs). We always program the cell starting from the same threshold voltage, in order to measure the drain current and the programming window after each pulse.

Using the dynamic measurement setup and considering the Id behavior, we observe that the current follows the gate potential variation over time. The cell behavior is very different from the floating gate cell results observed above. These results, obtained for the Si-nc cell, suggest a transistor-like behavior during the programming operation. In particular, no current peak is present when the box pulse is applied; instead the current remains constant during the programming phase. The design of the memory array control circuits has to take this aspect into account, even though no overshoot occurs. In figure 4.27 we report a simple schematic of the silicon nanocrystal cell in order to explain its behavior. During CHE injection, the charges are trapped only in the silicon nanocrystals and in the SiN capping layer close to the drain area. In this zone the horizontal electric field is stronger and the electrons are accelerated enough to be injected into the trapping layers [Tam '84] [Takeda '85] [Ning '78] [Chenming '85]. During this operation a high potential is applied on the drain terminal (Vd=4.2V), so the Space Charge Region (SCR) at the drain-body junction widens and covers the zone where the charge is localized; the injected charge therefore hardly modulates the channel conductance and the drain current remains constant.

With these measurements and using formula (4.5), we can compare the results obtained for the Si-nc cell with those reported in figure 4.16 for the Flash floating gate. The aim is to evaluate the relation between the shape of the gate pulse, the energy and the programming window of the two devices (figure 4.28). Concerning the programming window, the hybrid Si-nc and the floating gate cells show the same behavior: by increasing the plateau duration (or the ramp speed) the programming window increases too, and the difference between the two devices remains constant (10%). This difference is due to the higher coupling factor of the Flash floating gate.
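Formula (4.5) is not restated here, but with a constant drain bias the consumed energy is naturally Ec = ∫ Vd·Id dt. The sketch below assumes that definition and uses a synthetic, roughly constant Id(t) trace of the kind observed for the Si-nc+SiN cell; the numerical values are placeholders, not the measured data.

import numpy as np

# Minimal sketch: energy consumed during programming from a measured (here
# synthetic) dynamic drain-current trace, assuming Ec = integral of Vd*Id dt.
t = np.linspace(0.0, 5e-6, 501)          # 5 us programming pulse, 10 ns step
vd = np.full_like(t, 4.2)                # constant drain bias (V)
id_meas = np.full_like(t, 120e-6)        # ~constant Id of the Si-nc+SiN cell (A), illustrative

ec = np.trapz(vd * id_meas, t)           # energy in joules
print(f"Consumed energy: {ec * 1e9:.2f} nJ")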
Considering the Id consumption of the Si-nc+SiN cell (figure 4.26b), it is easy to understand the linear dependence of the energy consumption on the plateau duration, while Ec slightly decreases for the Flash floating gate. For each pulse, the hybrid Si-nc cell reaches a smaller programming window while consuming more energy; in particular, to achieve a good programming window (greater than 80%), the Si-nc+SiN cell can use up to 50% more energy. Box pulses are necessary to increase the programming window, but the pulse duration must also be decreased to limit the consumed energy.

The aim is to find the best programming pulse shape to improve the cell performances, as we did for the Flash floating gate cell. In figure 4.30 we notice that the minimum programming window level is already reached using 2µs box pulses, which brings the energy consumption below that of the floating gate programmed with 5µs box pulses. We can therefore establish that, for the Si-nc+SiN cell, the best performances are obtained using box pulses of short duration. Unlike the floating gate cell, where an optimized pulse was defined by merging a ramp followed by a plateau in order to avoid the current peak during programming, for the hybrid Si-nc cell the current level is constant. This leads to fewer design constraints and less disturbance in the logic and analog circuits around the memory array.

To complete the study, we report the experimental data of dynamic drain current absorption, programming window and energy consumption in figures 4.31 and 4.32. We demonstrated that the drain current follows the gate potential (see also figures 4.26 and 4.29), confirming the linear dependence of the energy consumption on the programming time. In figure 4.32 we compare the experimental results obtained for the hybrid silicon nanocrystal cell when it is programmed by box or by ramp pulses. When the programming time increases, the consumed energy as well as the programming window increase. In this case the ramp speed is varied as a function of the programming time (tp); the programming window thus decreases when the ramp speed is increased, because tp decreases too. The difference between ramp and box pulses is constant and independent of the programming time: it is 40% for the programming window and 50% for the energy consumption. The energy increase is linear but the programming window tends to saturate as the programming time increases. This means that a long programming time is necessary to reach a satisfactory programming window level. Today, for low power electronic applications, speed is a fundamental parameter in order to be competitive on the market. This is why we consider the box programming scheme to be the best solution for the silicon nanocrystal cell.
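For reference, the gate-pulse family compared in this section (box, pure ramp, ramp followed by a plateau at fixed programming time) can be generated as below. The 9V target and the plateau value are example choices; the fixed 5µs programming time follows the text, but the resulting ramp rates are illustrative rather than the exact experimental ones.

import numpy as np

def gate_pulse(t, vg_max=9.0, t_prog=5e-6, plateau=None):
    """Vg(t) for a fixed programming time t_prog.
    plateau=None          -> box pulse (immediate step to vg_max)
    plateau=0.0           -> pure ramp reaching vg_max at t_prog
    0 < plateau < t_prog  -> ramp reaching vg_max at (t_prog - plateau), then plateau."""
    if plateau is None:
        v = np.full_like(t, vg_max)
    else:
        t_ramp = t_prog - plateau
        v = np.clip(vg_max * t / t_ramp, 0.0, vg_max)
    return np.where(t <= t_prog, v, 0.0)

t = np.linspace(0.0, 6e-6, 601)
box          = gate_pulse(t)                 # box pulse
ramp         = gate_pulse(t, plateau=0.0)    # pure ramp (vg_max / t_prog = 1.8 V/us here)
ramp_plateau = gate_pulse(t, plateau=2e-6)   # faster ramp followed by a 2 us plateau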
TCAD simulations of current consumption
In order to confirm the experimental results on the silicon nanocrystal and floating gate cells, the behavior of these two devices was simulated using a commercial TCAD simulator. We chose a two-dimensional (2D) approach in order to evaluate the process impact, improve the simulation stability and reduce the simulation time. Process simulations were produced with the same commercial tool, and we used the same calibrated hydrodynamic model set for both devices, except for the Channel Hot Electron injection model. The Spherical Harmonic Expansion (SHE) model presented in [Seonghoon '09] was chosen for the FG cell programming simulations, while the Fiegna model [Fiegna '93] is used for the Si-nc+SiN cell. During the simulations we noticed that the SHE model can reproduce the dynamic current of each device, but not the programming window of the Si-nc+SiN cell, even when the adjustment parameters are tuned. Hence we decided to use the Fiegna model in this case, as it offers the best compromise to simulate the Si-nc+SiN channel hot electron programming operation.

Figure 4.34 shows the concordance between the TCAD simulations and the dynamic drain current measurements: the results of the dynamic current measurements obtained for the floating gate and Si-nc+SiN cells are shown for programming box and ramp pulses, and are compared with the simulations of our TCAD model previously presented. For each case there is a satisfactory quantitative agreement. As described above, we were able to simulate the programming window level after each pulse and to calculate the consumed energy. Figure 4.35 reports the case of the Si-nc+SiN cell, using box pulses of different durations. The experimental data used to fit the simulations are the same as those plotted in figure 4.30. The agreement between data and simulation shows that the model predicts the cell behavior when varying the voltage bias and the pulse shape. In this case we used the simulations to confirm our explanation of the cell functioning: the charge localization maintains the absorbed drain current constant, which increases the cell consumption and suppresses the current peak during the channel hot electron operation.
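The quantitative agreement between measured and simulated Id(t) can be checked with a simple metric such as the relative RMS difference. The sketch below is generic: it works on any pair of traces sampled on the same time base, does not rely on the TCAD tool's output format, and uses synthetic placeholder traces.

import numpy as np

def relative_rms_error(id_meas, id_sim):
    """Relative RMS difference between a measured and a simulated Id(t) trace,
    sampled on the same time base. Returns a fraction (0.05 -> 5%)."""
    id_meas = np.asarray(id_meas, dtype=float)
    id_sim = np.asarray(id_sim, dtype=float)
    return np.sqrt(np.mean((id_sim - id_meas) ** 2)) / np.max(np.abs(id_meas))

# Illustrative check on synthetic traces (values are not the thesis data).
t = np.linspace(0.0, 5e-6, 101)
id_meas = 120e-6 * np.ones_like(t) + 3e-6 * np.random.default_rng(0).standard_normal(t.size)
id_sim = 118e-6 * np.ones_like(t)
print(f"relative RMS error: {100 * relative_rms_error(id_meas, id_sim):.1f} %")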
Hybrid silicon nanocrystal cell programming scheme optimization
Previously, we described the floating gate and hybrid silicon nanocrystal cell behavior using the experimental data obtained with the dynamic current measurements and TCAD simulations. The Si-nc+SiN cell does not present a drain current peak during the channel hot electron programming, but its consumption is higher than that of the standard Flash floating gate. In order to reduce the consumed energy we decided to use short programming pulses while maintaining a satisfactory programming window level. To summarize all the results, we plot in figure 4.36 the energy consumption calculated for the F.G. and Si-nc+SiN cells, while the programming window is kept constant (PW=4V). To reduce the design constraints, it is possible to optimize the programming operation of the Si-nc+SiN cell using a box pulse. To reach a 4V programming window, optimized box-pulse bias conditions are used; these are investigated in the following section.
Impact of gate and drain biases
In the preceding paragraph we studied the effect of the programming pulse shape on the Si-nc+SiN cell, keeping the biasing conditions constant. In order to optimize the cell consumption, we further investigate the effect of the gate and drain biases. In figure 4.37 the programming window and the energy are plotted for a constant programming time (tp=1µs); both are calculated as explained above. It is confirmed that the box pulse is more efficient in terms of programming window than the ramp, at the expense of a higher energy consumption. Moreover, the impact of the drain and gate voltages is shown: increasing the drain voltage by 0.4V leads to the same gain in terms of memory window as increasing Vg by 1.2V. The horizontal electric field is predominant in the channel hot electron operation; a small variation of Vd thus implies a significant increase of the programming window.

Comparing the ramp and box pulses when the gate voltage is varied (figures 4.37a and 4.37b), we notice that the energy consumption is about twice as high when box pulses are used, independently of the programming time. In addition, the difference between the programming windows calculated for both cases increases with the gate voltage, which means that the box pulse achieves the best results regardless of Vg. Increasing Vg increases the vertical electric field during the CHE operation, hence the charge injection probability. In figures 4.37c and 4.37d the impact of the drain voltage on the programming window and the energy consumption is shown. We confirm that the best results in terms of PW are reached using the box pulse, and we notice the exponential increase of the energy for the highest values of Vd.

To better understand the cell performances as a function of programming time and biases, the programming energy is plotted as a function of the programming window (figure 4.38). The goal is to increase the programming window while keeping the energy consumption constant, using the box pulse and optimizing the biasing conditions. The abrupt variation of the gate voltage (high ramp speed) starts the hot carrier generation at the beginning of the programming phase, since hot electron injection starts when Vg≈Vd [Takeda '83] [Takeda '85]. Thus, when using a ramp, the programming efficiency is lower and a longer time is needed to program the cell correctly. In figure 4.38 we notice that the programming window tends to saturate when the programming time is increased, leading to higher consumption. This is due to the injected charge, which decreases the vertical electric field during the programming operation. We demonstrated once again that the box pulse increases the programming window for a given programming time and biasing condition.

Considering these results, we defined the programming efficiency (PE) as the ratio between the programming window (PW) and the energy consumption (Ec). One can notice the linear dependence on the gate voltage (or vertical electric field). On the other hand, an optimum efficiency is measured for Vd=4.2V. This is due to the exponential behavior of the energy consumption versus the drain voltage: when the drain voltage is higher than 4.2V, the programming injection tends to saturate while the drain current increases, which increases the consumption and reduces the programming efficiency [Della Marca '12] [Masoero '12].
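Since PE is defined here as the ratio PW/Ec, the optimum drain bias can be located with a simple sweep, as sketched below. The PW and Ec values are placeholders shaped like the reported trends (PW rising then saturating with Vd, Ec growing roughly exponentially); they are not the measured data.

import numpy as np

# Programming efficiency PE = PW / Ec evaluated on placeholder data that mimics
# the reported trends: PW saturates with Vd while Ec grows ~exponentially.
vd = np.array([3.4, 3.8, 4.2, 4.6])          # drain voltage (V)
pw = np.array([1.0, 2.8, 4.2, 4.5])          # programming window (V), illustrative
ec = np.array([0.5, 0.8, 1.0, 2.2]) * 1e-9   # consumed energy (J), illustrative

pe = pw / ec                                  # programming efficiency (V/J)
best = vd[np.argmax(pe)]
for v, p in zip(vd, pe):
    print(f"Vd = {v:.1f} V -> PE = {p:.2e} V/J")
print(f"Optimum drain bias (for these placeholder values): Vd = {best:.1f} V")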
We can now affirm that, to optimize the programming operation, it is necessary to use the box programming pulse with the highest gate voltage and Vd=4.2V which, in this case, represents the point of highest programming efficiency.
Impact of tunnel oxide thickness
After the study of the programming scheme to improve the consumption of the silicon nanocrystal cell, in this section we show the impact of the tunnel oxide thickness (tunox) on the programming operation (programming window and energy consumption) and on the data retention.
First we performed programming kinetic experiments using cumulative box pulses (0.5µs duration) for the two tunnel oxide thicknesses. We also calculated the vertical electric field during the first 0.5µs, considering zero charge stored in the silicon nanocrystals, for the two tunnel oxide thicknesses. The difference of 0.2 MV/cm is small compared to the maximum Evert=4.2MV/cm; the 1nm variation of tunox is therefore negligible for the channel hot electron operation, because the horizontal electric field is dominant.

In figure 4.42 the energy consumption is plotted as a function of the programming window using the optimized programming conditions identified before (box pulse, Vg=10V and Vd=4.2V). The axis scales are the same as in figure 4.38. Moreover, we highlighted the minimum programming window level acceptable for good cell functionality and the level of sub-nanojoule energy consumption. Once again, the tunnel oxide thickness has a limited influence on the consumed energy and the programming window during CHE programming. In particular, a 1µs programming pulse is sufficient to produce the minimum programming window for the two tunnel oxide thicknesses, with an energy consumption lower than 1nJ. To conclude the study on the tunnel oxide thickness, figure 4.43 shows the programming efficiency calculated for the two tunnel oxides while varying the box pulse duration, demonstrating that this technological parameter has a limited impact on the programming efficiency. The efficiency is higher using short pulses and is independent of the tunnel oxide thickness, which is consistent with the results previously presented.

In this section we studied the impact of the programming scheme on the energy consumption of the silicon nanocrystal memory cell. We propose an optimization of the programming pulse shape, demonstrating the greater efficiency of the box pulse versus the ramp. The linear dependence on the gate voltage is shown, while an optimum operating point is found for Vd=4.2V. The consumption has been reduced below 1nJ while maintaining a satisfactory programming window. Moreover, the best trade-off to improve the cell efficiency is found by using very fast pulses, regardless of the tunnel oxide thickness.
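The weak sensitivity to the tunnel oxide quoted above (about 0.2 MV/cm out of 4.2 MV/cm for a 1nm change) can be cross-checked with a crude series-capacitor estimate, in which the vertical field at a fixed applied voltage scales as the inverse of the total EOT of the stack. The 10.5nm ONO EOT and the 4.2nm/5.2nm tunnel oxide pair below are assumptions taken from stack values quoted earlier; the actual calculation in the text is a TCAD one.

# Crude series-capacitor check of the tunnel-oxide sensitivity: at a fixed
# applied voltage the vertical field scales as 1/EOT_total, so a 1 nm change
# of tunnel oxide changes the field by roughly delta_t / EOT_total.
e_ref = 4.2                     # reference vertical field (MV/cm), from the text
eot_ono = 10.5                  # control dielectric EOT (nm), assumed stack value
tunox = (4.2, 5.2)              # the two tunnel oxide thicknesses compared (nm), assumed

eot_total = [t + eot_ono for t in tunox]
delta_e = e_ref * (1.0 - eot_total[0] / eot_total[1])
print(f"Estimated field reduction for +1 nm of tunnel oxide: {delta_e:.2f} MV/cm")
# ~0.27 MV/cm, the same order of magnitude as the 0.2 MV/cm obtained from TCAD.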
Optimized cell consumption
In the previous sections we showed the characterization results of the floating gate and hybrid silicon nanocrystal cell consumption, and we related them to the programming window in order to find an optimized programming scheme. The best compromise for the floating gate cell was to use a programming ramp followed by a plateau, in order to reduce the drain current peak while maintaining a satisfactory programming window. In the case of the Si-nc+SiN cell we found the best compromise using a very short box pulse on the gate terminal. Here we report the results obtained with the optimized silicon nanocrystal cell, with a 4.2nm tunnel oxide, 12nm silicon nanocrystals and the thinner ONO (EOT=10.5nm).

Using box pulses of different durations, we characterized the drain current absorption; figure 4.44 reports the results using Vg=10V and Vd=4.2V, with the same Y-axis scale as figure 4.29 so as to compare the optimized cell with the Si-nc+SiN cell. We notice that the drain current does not follow the gate potential but decreases during the programming time, showing a behavior similar to that of the floating gate cell, where a current peak is present. We explained previously that the different behavior of the Si-nc+SiN and floating gate cells is due to the localization (or not) of the trapped charges. In the case of the optimized Si-nc cell, we can affirm that the localization effect due to the SiN layer is not present. Moreover, the bigger nanocrystal size produces a charge distribution toward the center of the device channel, modifying the potential of the substrate surface. In figure 4.45 we report the cell schematics and the corresponding drain current measurements in order to compare the behavior of the Si-nc+SiN, optimized Si-nc and floating gate cells. In the nanocrystal devices the charge is stored in discrete nodes that can be charged to different electric potentials [Matsuoka '92]; moreover, nanocrystal coalescence during the growth process can reduce the distances between dots, up to contact, creating percolative paths from the drain side to the source side.

Taking into account the dynamic behavior of the Si-nc cell and the fact that the box pulse is more efficient than the ramp, in particular for short pulse durations, we repeated the measurements varying the drain and gate biases and compared these results with the hybrid Si-nc cell. In figure 4.46 we report the programming window and the energy consumption of the Si-nc and Si-nc+SiN cells as a function of Vg (a-b) and Vd (c-d), using 1µs box programming pulses. We notice that the optimized silicon nanocrystal cell presents a better programming window, due to the higher covered area (coupling factor), and a lower energy consumption, due to the lower drain current.

The dependence on the gate voltage (vertical electric field) is linear, as in the case of the Si-nc+SiN cell. At high drain voltage the programming window starts to saturate for Vd=3.8V, while the consumed energy increases exponentially. These results directly impact the cell efficiency: in figure 4.47a we notice that the programming efficiency of the optimized Si-nc cell decreases with increasing drain voltage, in contrast to the Si-nc+SiN cell trend.
Benchmarking with Flash floating gate
To conclude the report on the cell energy consumption, in this paragraph we compare the main results of the silicon nanocrystal cell study with the Flash floating gate. We have shown above the cell behavior under different biasing conditions and the impact of some technological parameters (tunnel oxide thickness, channel doping dose, ONO thickness). In order to compare these two devices, we plot in figure 4.49 the programming window and the consumed energy using box pulses of different durations. The devices were tested using the drain voltage optimization shown previously: Vd=4.2V for the Si-nc+SiN cell and Vd=3.8V for the F.G. and Si-nc cells; the gate voltage is kept unchanged (Vg=9V). The optimized Si-nc cell is able to reach a programmed level comparable with that of the floating gate cell, but the consumed energy remains higher because of the higher drain current; the worst case is represented by the Si-nc+SiN cell. Using the experimental data we extrapolated the power laws describing the cell behavior, and we show that by using very fast pulses the Si-nc cell consumption can be improved, although it does not go down to the floating gate level. The results obtained with a ramped gate voltage (figure 4.50) show that the optimized Si-nc cell is fully comparable with the floating gate in terms of programming window and energy consumption. Even if the maximum drain current is greater in the case of the Si-nc cell, the different dynamic behavior enables very similar performance to be reached. The dynamic measurements confirm the presence of a current peak in the case of the floating gate; as explained above, this peak can cause disturbances in the analog and digital circuits around the memory array. In the case of the optimized Si-nc cell, the current decrease during the programming operation is not abrupt and can be tolerated depending on the design constraints.
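The power laws mentioned above can be extracted from the (programming time, energy) points by a straight-line fit in log-log coordinates, as sketched below; the data points are placeholders standing in for the measured ones.

import numpy as np

# Fit Ec = a * tp**b by linear regression in log-log space.
# The (tp, Ec) points below are placeholders standing in for the measured data.
tp = np.array([0.5e-6, 1e-6, 2e-6, 5e-6])        # programming time (s)
ec = np.array([0.3e-9, 0.55e-9, 1.1e-9, 2.6e-9]) # consumed energy (J)

b, log_a = np.polyfit(np.log(tp), np.log(ec), 1)
a = np.exp(log_a)
print(f"Ec ≈ {a:.3e} * tp^{b:.2f}")
# The fitted law can then be used to extrapolate the consumption toward very
# short pulses, as done in the benchmarking of figure 4.49.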
To conclude the chapter, we show in figure 4.51 the performance of the Si-nc, hybrid Si-nc and floating gate cells obtained using the optimized drain voltages and keeping the gate voltage unchanged (Vg=9V). The programming time is fixed at 2µs in order to propose a benchmark for low energy and fast applications. We notice that when the box pulse is applied the floating gate consumption is 50% lower than for the optimized Si-nc cell, whereas the programming window can be considered equivalent. In spite of this, when using the programming ramp, the silicon nanocrystal cell shows the best consumption level. The optimized silicon nanocrystal cell can be considered a good alternative to the Flash floating gate in terms of programming speed and energy consumption while maintaining a satisfactory programming window, but further efforts are necessary to outperform the Flash floating gate memory cell.

General conclusion

In the first chapter, the economic context, the evolution and the classification of semiconductor memories were presented. Then the Flash memory operations needed to understand this thesis were reviewed. We presented the Flash memory scaling limits and the proposed solutions. We explained the advantages of using a discrete charge trapping layer instead of the continuous floating gate, and the importance of the control dielectric replacing the classical silicon oxide. Finally, we introduced the silicon nanocrystal memory solution. In particular, we reported the state of the art of the charge trap silicon nanocrystal cell, which is the object of this thesis. This option reduces the mask count in the process fabrication and scales the memory stack thickness, hence decreasing the operating voltages. Moreover, the silicon nanocrystal cell can produce satisfactory data retention results using a thin tunnel oxide.
The second chapter describes the experimental setup used to characterize the programming window of the silicon nanocrystal memory with automatic and manual tests. In chapter three we showed the results of reliability experiments performed on the silicon nanocrystal memory. The variation of technological parameters was also evaluated. In particular, the presence of a silicon nitride capping layer on the nanocrystals increases the charge trapping probability and the cell covered area. In our case the nanocrystals were not entirely surrounded by the Si3N4 layer: they were grown on the SiO2 tunnel oxide and afterwards capped by Si3N4. We demonstrated that the Si3N4 presence brings no benefit in this configuration concerning the data retention, because the physical tunnel barrier that separates the nanocrystals from the substrate corresponds to the tunnel oxide thickness only.
Furthermore, the Si3N4 capping layer enables parasitic charge trapping at the tunox/Si3N4 interface, which facilitates the charge loss at high temperature. Concerning the cell endurance with the silicon nitride capping layer, the coupling factor is increased, and thus the programming window increases too, but the parasitic charge trapping in the Si3N4 prevents satisfactory cell functionality after 100k
program/erase cycles. Another important point is the charge loss dependence on tunnel oxide thickness. We extrapolated the activation energies related to the different samples. Obviously the charge loss increases when the tunnel oxide thickness decreases. We noticed that a 5.2nm tunnel oxide is needed to achieve the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness strongly impacts the erase Fowler-Nordheim operation, hence to obtain satisfactory cell functioning after 100k program/erase cycles, a tunnel oxide thickness of 3.7nm has to be used. Finally we evaluated the cell behavior using a 4.2nm tunnel oxide, embedded in a different architecture without the silicon nitride capping layer and with an optimized ONO stack (EOT=10.5nm) in order to increase the vertical electric field which improves the program/erase efficiency and the cell reliability.
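The activation-energy extraction mentioned here follows the usual Arrhenius treatment of the retention time (see figure 3.5). The sketch below illustrates the procedure on placeholder bake data and extrapolates the retention time at 150°C against the ten-year target; the temperatures, times and resulting Ea are illustrative, not the thesis values.

import numpy as np

KB = 8.617e-5  # Boltzmann constant (eV/K)

# Placeholder retention times (time to reach the Vt criterion) at three bake
# temperatures; real values would come from an Arrhenius plot such as figure 3.5.
temps_c = np.array([175.0, 200.0, 250.0])
t_ret_s = np.array([4.95e7, 8.5e6, 4.0e5])

inv_kt = 1.0 / (KB * (temps_c + 273.15))
ea, log_t0 = np.polyfit(inv_kt, np.log(t_ret_s), 1)   # slope = Ea (eV)
print(f"Extracted activation energy: {ea:.2f} eV")

# Extrapolate the retention time at 150 °C and compare with 10 years.
t_150 = np.exp(log_t0 + ea / (KB * (150.0 + 273.15)))
ten_years = 10 * 365.25 * 24 * 3600
print(f"Extrapolated retention at 150 °C: {t_150 / ten_years:.1f} x the 10-year target")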
We demonstrated for the first time the silicon nanocrystal cell functioning up to 1M
program/erase cycles by maintaining a 4V programming window in a wide range of temperatures from -40°C to 150°C. Moreover, by avoiding the SiN capping, the data retention is also improved for cycled samples. The silicon nanocrystal covered area and the channel doping dose increase improve the programming window, hence the endurance performance. Furthermore we have shown that these two technological parameters do not impact the data retention results. Finally the Si-nc cell was compared with the floating gate.
The endurance experiments have shown better behavior for the Si-nc cell up to 1M
program/erase cycles, while the charge loss is higher at 250°C due to the thinner tunnel oxide.
We presented a new dynamic technique of drain current measurement in chapter four. This innovative method is presented for the first time in the literature and can be used for different cell architectures (floating gate, silicon nanocrystal and split gate). We characterized the consumption of the floating gate and silicon nanocrystal cells under various bias conditions and programming schemes. In particular, we compared ramp and box pulses on the gate terminal during a channel hot electron programming operation. In this way we optimized the programming pulses for the two devices in order to minimize the energy consumption and the drain current peak. For the Flash floating gate we propose to use a ramped gate followed by a plateau, while a box pulse can be used in the case of the silicon nanocrystal memory cell.
Using TCAD simulation we explained the transistor-like behavior of the silicon nanocrystal cell when the SiN capping layer is used. This is due to the discrete nature of the charge trapping layer and thus the localization of trapped charge. The optimized silicon nanocrystal cell was also characterized showing an intermediate behavior between the floating gate and the hybrid Si-nc cell. This decreases the cell consumption by increasing the programming window and thus the cell programming efficiency. Using the optimized gate and drain biasing we demonstrated that it is possible to obtain a 4V programming window with sub-nanojoule energy consumption. Finally we compared the optimized silicon nanocrystal cell with the Flash floating gate. The programming time has been fixed to 2µs in order to propose benchmarking in the case of low energy and fast application. We notice that when the box pulse is applied the floating gate consumption is 50% less than with the Si-nc optimized cell, except for short pulses when the performances of the two devices become more similar. In spite of this, using the ramp programming, the silicon nanocrystal cell has the best consumption level. The optimized silicon nanocrystal cell can be considered as a good alternative to the flash floating gate in terms of programming speed and energy consumption while keeping a satisfactory level of programming window.
Perspectives
In this thesis we focused in particular on the silicon nanocrystal cell current consumption during the channel hot electron programming operation. Hereafter we propose two interesting points to study in a future work.
Dependence of current consumption on silicon nanocrystals size and density
We explained the cell behavior when the SiN capping layer is used. With large nanocrystals it is possible to activate a mechanism of charge diffusion in the charge trapping layer, which explains a behavior similar to the floating gate device, where the charge is distributed over the channel area. Preliminary experimental results suggest that it is possible to drive the drain current consumption by controlling the charge diffusion mechanism. We conclude that the charges stored in the silicon nanocrystals positioned on the drain side (HCI zone) control the programming window, while the charges stored in the Si-ncs close to the source control the consumed current. The charge diffusion changes the cell behavior, emulating a double-transistor functioning where the transistor on the source side acts as an access transistor with a threshold voltage that depends on the quantity of trapped charge. To increase the ability to control the channel current, one way is to increase the charge diffusion toward the source side.

In figure 5.1 we show a schematic of the silicon nanocrystal cell where the mechanism of charge diffusion is enabled. Using an asymmetrical tunnel oxide thickness, it is possible to improve the hot carrier injection by increasing the vertical electric field when the channel is pinched off. Moreover, the consumed current is controlled by the tunnel oxide thickness in the source region. In this way the programming efficiency can be improved. This cell offers the current consumption advantages of a split gate cell within a small area. The real current consumption can be evaluated with our dynamic measurement method. As a drawback, the high electric field generated in the tunnel oxide zone where the thickness varies can stress the device, thus limiting the endurance performance.
Presentation of the thesis
Finally, we demonstrate that it is possible to reach an energy consumption below 1nJ while preserving a 4V programming window. The manuscript ends with a general conclusion summarizing the different results obtained in this thesis work, before proposing some perspectives.
The semiconductor memory market
Over the last ten years, the semiconductor memory market has grown strongly, driven by the huge volume of products such as smartphones and other portable devices.
The advantages of using this technology (the silicon nanocrystal memory) are:
- Robustness against SILC and RILC: this makes it possible to reduce the tunnel oxide thickness below 5nm while still meeting the ten-year data retention constraint. Furthermore, the program and erase voltages can also be reduced.
- Compatibility with the standard CMOS fabrication process, which favors industrial production by reducing the number of masks used compared to the fabrication of the floating gate device.
- Reduction of the memory cell disturb effects: thanks to the discrete nature of the nanocrystals and to their small size, the gate-to-drain coupling factor is reduced, as well as the interference between neighboring cells.
- Multi-level application: the threshold voltage of a silicon nanocrystal transistor depends on the position of the stored charge along the channel.
Despite these features, two important drawbacks characterize the Si-nc memory:
- The low coupling factor between the control gate and the nanocrystals.
- The dispersion of the surface area covered by the nanocrystals, which limits this type of cell for high-density integration applications.
Important studies have been carried out by STMicroelectronics, Atmel and Freescale, demonstrating the possibility of obtaining interesting results in terms of satisfactory programming window, high reliability and integration within specific memory architectures (Split Gate).
Electrical characterization of the nanocrystal memory cell
In this chapter we evaluate the impact of the main technological parameters on the programming window of the nanocrystal memory cell produced at STMicroelectronics. The goal of this analysis was to define the best way to improve the programming window while keeping the standard program and erase pulses used for the Flash floating gate memory cell. We then summarize the main conclusions obtained from the electrical characterization studies of the cell.
- We showed that it is possible to increase the programming window by increasing the channel doping dose up to 10¹⁴ at/cm², while always taking into account the shift of the threshold voltages.
- The data retention remains unchanged when the channel doping dose (CDD) is modified. It is therefore possible to gain programming window by increasing the channel doping dose, but in this case a regulation of the programmed/erased levels must be performed. CDD=10¹⁴ at/cm² was chosen for the optimized silicon nanocrystal memory cell.
- Finally, we showed the dependence of the charge loss on the tunnel oxide thickness.
Cell consumption during a channel hot electron programming operation
In this section we present the results concerning the current and energy consumption of the cell.
General conclusion
In this thesis work we characterized and modeled silicon nanocrystal memory cells. Following a detailed study of the recent use of nanocrystals in memory devices, we optimized the memory stack. We then characterized the programming window by varying the main technological parameters.
Characterization and Modeling of Advanced Charge Trapping Non Volatile Memories
Silicon nanocrystal memories are considered one of the most attractive solutions to replace the floating gate in Flash memories for embedded non-volatile memory applications. These nanocrystals are attractive for their compatibility with CMOS process technologies and for the reduction of fabrication costs. In addition, the nanocrystal size guarantees a weak coupling between cells and robustness against SILC effects. One of the main challenges for embedded memories in mobile and contactless applications is the improvement of the energy consumption in order to reduce the cell design constraints. In this study, we present the state of the art of Flash floating gate and silicon nanocrystal memories. For the latter, an optimization of the main technological parameters has been carried out to obtain a programming window compatible with low power applications. The study then focuses on the optimization of the silicon nanocrystal cell reliability. We present for the first time a cell that remains functional after one million write/erase cycles over a wide temperature range [-40°C; 150°C], and that is able to retain the information for ten years at 150°C. Finally, an analysis of the current and energy consumption during programming shows the suitability of the cell for low power applications. All the experimental data have been compared with the results of a standard floating gate cell to show the improvements obtained.
Keywords: silicon nanocrystal memories; floating gate; energy consumption; programming window; reliability; temperature
Characterization and Modeling of Advanced Charge Trapping Non Volatile Memories
Silicon nanocrystal memories are one of the most attractive solutions to replace the Flash floating gate for embedded non-volatile memory applications, especially for their high compatibility with the CMOS process and their lower manufacturing cost. Moreover, the nanocrystal size guarantees a weak device-to-device coupling in an array configuration and, in addition, the robustness of this technology against SILC has been shown. One of the main challenges for embedded memories in portable and contactless applications is to improve the energy consumption in order to reduce the design constraints.
Today, applications require Flash memories to operate with both low voltage biases and fast programming. In this study, we present the state of the art of the Flash floating gate memory cell and of silicon nanocrystal memories. Concerning the latter device, we studied the effect of the main technological parameters in order to optimize the cell performance. The aim was to achieve a satisfactory programming window for low energy applications. Furthermore, the silicon nanocrystal cell reliability has been investigated. We present for the first time a silicon nanocrystal memory cell functioning correctly after one million write/erase cycles, over a wide temperature range [-40°C; 150°C]. Moreover, ten years data retention at 150°C is extrapolated. Finally, the analysis of the current and energy consumption during the programming operation shows the opportunity to use the silicon nanocrystal cell for low power applications. All the experimental data have been compared with the results achieved on the Flash floating gate memory, to show the performance improvement.
Key words: Silicon nanocrystal memories; floating gate; energy consumption; programming window; reliability; temperature
Chapter 1 - Flash memories: an overview
1.1 Introduction
1.2 The industry of semiconductor memories
1.2.1 The market of non-volatile memories
1.2.2 Memory classification
1.2.3 Flash memory architectures
1.3 Floating gate cell
1.3.1 Basic structure: capacitive model
1.3.2 Programming mechanisms
1.3.3 Erase mechanisms
1.3.4 Evolution and limits of Flash memories
1.3.4.1 Device scaling
1.3.5 Alternative solutions
1.3.5.1 Tunnel dielectric
1.3.5.2 Interpoly material
1.3.5.3 Control Gate
1.3.5.4 Trapping layer
1.4 Silicon nanocrystal memory: state of the art
1.5 Flash technology for embedded applications
1.6 Innovative solutions for non volatile memory
1.6.1 Ferroelectric Random Access Memory (FeRAM)
1.6.2 Magnetic Random Access Memory (MRAM)
1.6.3 Resistive Random Access Memory (RRAM)
1.6.4 Phase Change Random Access Memory (PCRAM)
1.7 Conclusion
Bibliography of chapter 1
Figure 1.1. Evolution and forecast of the portable devices market (source: muniwireless.com and trak.in).
Figure 1.3. DRAM and Flash price outlook [Philip Wong '08].
Figure 1.4. Mapping of typical applications into the NVM space [Zajac '10]. "Bit count" is the amount of data that can be stored in a given block.
Figure 1.5. Left: overview of the non volatile semiconductor memories; right: semiconductor memory classification by different performance criteria.
Figure 1.7. Architectures of NAND (left) and NOR (right) memory array (source: micron.com).
Figure 1.8. a) I-V trans-characteristics of a floating gate device for two different values of charge stored within the floating gate (Q=0 and Q≠0). b) Schematic cross section of a floating gate transistor. The model using the capacitance between the floating gate and the other electrodes is described in [Cappelletti '99].
Figure 1.9. a) FN programming mechanism representation. b) Band diagram of a floating gate memory during the FN programming operation. In this tunnel effect, the electrons flow from the conduction band of the silicon into the floating gate through the triangular energy barrier of the tunnel oxide (figure 1.9b). During the FN programming, the number of trapped electrons in the floating gate increases.
Figure 1.10. Channel Hot Electron (CHE) programming mechanism representation.
Figure 1.11. Flash floating gate schematics of erase mechanisms: a) Fowler-Nordheim, b) Hot Hole Injection (HHI), c) source erasing, d) mixed source-gate erasing.
Figure 1.14. Subthreshold current of a MOS transistor as a function of gate voltage with the channel length as parameter. The insert is the calculated boron profile below the silicon surface in the channel [Fichtner '80].
The programmed cells can lose part of their charge due to FN drain stress on the tunnel oxide causing hot hole injection (see cell A in figure 1.15). The second case, represented in figure 1.15, cell B, concerns a gate stress that can be induced on programmed cells (charge lost due to the stress through the ONO) or on erased cells (charge trapping due to the stress through the tunnel oxide). Read disturb: in this case the selected cell can suffer from parasitic programming at low gate voltage; furthermore the unselected cells are gate stressed too (figure 1.15, cell C).
Figure 1.15. Programming disturb (left) and read disturb (right) conditions in a NOR Flash memory array.
Figure 1.17. Locations of parasitic charge in a NAND cell (left). Number of electrons required in each location to shift the cell VTH by 100mV (right) [Prall '10].
Random Doping Fluctuation. The threshold voltage shift due to random variations in the quantity and position of doping atoms is an increasing problem as device dimensions shrink. In figure 1.18 the mean and 3σ for the number of doping atoms are shown as a function of feature size. As the device size scales down, the total number of doping atoms in the channel decreases, resulting in a larger variation of doping numbers and significantly impacting the threshold voltage. It has been documented [Frank '99] that at the 25nm node, Vt can be expected to vary by about 30% purely due to the random doping fluctuation.
Figure 1.18. Number of Boron atoms per cell (mean: square, -3σ: diamond, +3σ: circle vs. feature size). The triangle shows the ±3σ percentage divided by the mean [Prall '10].
We must avoid the creation of defects during the programming operations that can induce SILC and degrade the retention and cycling performance. This technological challenge can be solved by engineering the tunnel barrier. As shown in figure 1.19, crested barriers can provide both sufficient programming and retention. Several crested barriers have been tested: the most common one consists in an ONO layer [Hang-Ting '05], but other combinations have also been experimented (SiO2/Al2O3/SiO2 [Blomme '09], SiO2/AlN [Molas '10]).
Figure 1.19. Principle of operation of a crested barrier [Buckley '06].
Figure 1.20. Relationship between the dielectric constant and the band gap [Robertson '05].
Figure 1.21. (a) Schematic explaining the electron back tunneling phenomenon. (b) Erase characteristics of a SANOS device with n+ poly-Si gate and (c) TaN/n+ poly-Si gate.
1.3.5.4 Trapping layer
In figure 1.22a we show the schematics of a continuous floating gate cell and of a discrete charge trapping layer. In the first case the trapped charge is free to move along the polysilicon floating gate, which makes the device very sensitive to SILC. A discrete charge trapping layer is the solution envisaged to avoid the charge loss if an electric path is generated in the tunnel oxide.
Figure 1.22. Schematic diagrams representing SILC phenomena for (a) a continuous floating gate cell and (b) a discrete charge trapping layer.
STMicroelectronics, in collaboration with CEA-Leti, presented in 2003 a 1Mb nanocrystal CAST (Cell Array Structure Test, figure 1.24a) where the Si-nc were fabricated with a two-step LPCVD process [De Salvo '03] [Gerardi '04]. This structure is programmed and erased by Fowler-Nordheim tunneling, and the write/erase characteristics are reported in figure 1.24b.
Figure 1.24. a) Schematic of a CAST structure. b) Program/erase characteristics in fully Fowler-Nordheim regime [De Salvo '03].
As a result, STMicroelectronics presented in 2007 a 16Mb Flash NOR memory array divided into 32 sectors of 512kb [Gerardi '07a] [Gerardi '07b]. The silicon nanocrystals were grown on a 5nm thick tunnel oxide, with a diameter between 3nm and 6nm and a density of 5•10¹¹ nc/cm². To complete the stack, an ONO layer was used as control oxide (EOT=12nm). In figure 1.25a the program/erase threshold voltage distributions of the 16Mb memory array are plotted. In this case the cells have been programmed by channel hot electron and erased by Fowler-Nordheim, reaching a 3V programming window for the average of the distributions and 800mV for the worst case. Moreover, Gerardi highlighted the problem of parasitic charge trapping in the ONO layer during cycling (figure 1.25b).
Figure 1.25. a) Program and erase threshold voltage distributions for one sector of 512kb of nanocrystal memory cells. b) Evolution of the program/erase threshold voltages of a Si-nc memory cell showing that the program/erase levels are shifted due to electron trapping in the ONO [Gerardi '07b].
Figure 1.26. a) Cross-section of the cylindrical-shaped structure and corresponding TEM image on the right. b) Endurance characteristic of a Si-nc cell using CHE/FN and FN/FN program/erase operations.
Figure 1.27. a) SEM images of Si-NCs with the same dot nucleation step and different dot growing times. b) Written and erased Vth of bitcells with different Si-NCs [Jacob '08].
Figure 1.28. a) Endurance data for a memory bitcell. b) Threshold voltage distributions of erased and written states of two different sectors, measured before and after 10k write/erase cycles. c) Data retention at 150°C on two different uncycled 512Kb sectors [Jacob '08].
In figure 1.29 we report the endurance characteristics of HTO and ONO samples fabricated by Freescale. For the HTO sample the threshold voltages remain stable up to 1kcycles, and afterwards their increase is explained by the parasitic charge trapping in the oxide. In the case of the ONO sample, the electron trapping in the silicon nitride layer starts immediately with the first program/erase cycles (figure 1.29).
Figure 1.29. Endurance characteristics of silicon nanocrystal cells integrating a) HTO [Steimle '04] and b) ONO [Muralidhar '04] control dielectric.
Figure 1.31. a) HCI program and FN erase speed (with positive gate voltage) for devices with 4.5nm bottom oxide and 12nm top oxide, with different nanocrystal depositions [Rao '05]. b) 200°C bake Vt shift for samples with different nanocrystal sizes [Gasquet '06].
Figure 1.32. Schematic of a Split Gate with memory first (left) or access first (right) configuration [Masoero '12a].
Figure 1.33. a) Erase and program Vt distributions for cycling up to 300K cycles at 25°C. b) Bake retention characteristics at 150°C for fresh, 10K and 100K cycled parts, with a 125°C cycling temperature [Sung-Taeg '12].
Figure 1.34. Automotive microcontroller and smartcard market (source: iSupply Q1 2012 update).
Figure 1.35. Mainstream Flash integration concept [Strenz '11].
Figure 1.36. An overview of 3D array integration of charge trapped Flash NAND: a) [Jiyoung '09], b) [Tae-Su '09], c) [SungJin '10], d) [Eun-Seok '11].
Figure 2.1. Manual test bench.
Figure 2.2 shows the automatic probe bench system. The test program is developed in HP BASIC language and controlled by SIAM (Automatic Identification System of Model Parameters), a dedicated software for parametric tests. It was possible to load up to 25 wafers in the automatic prober to obtain statistical results. With an accurate prober calibration, the station is able to probe the wafers by applying the same test program. The bench was equipped with a HP4142 electrical parameter analyzer, a HP4284 LCR precision meter and a HP8110 pulse generator.
- Automatic prober: wafer handling, wafer alignment.
- Matrix test head, probe card: connect the instruments to the samples.
- Tester: LCR meter (HP4284), pulse generator (HP8110), parameter analyzer (HP4142).
- Computer (SIAM system): drives the prober and the tester.
Figure 2.2. Automatic test bench.
Figures 2.3(a) and 2.3(b) show the signals used during the channel hot electron programming kinetic and the Fowler-Nordheim erase kinetic, in that order.
Figure 2.3. Signals applied during a) the channel hot electron programming operation and b) the Fowler-Nordheim erase operation.
Figure 2.4. Silicon nanocrystal cells used to evaluate the impact of nanocrystal size. (a) and (c) are the schematics of cells that integrate average 6nm and 9nm nanocrystal diameters. (b) and (d) are the corresponding 40° tilted CDSEM pictures used to measure the nanocrystal size and density.
Figure 2.5. a) Statistical results of program/erase threshold voltages measured for samples with different silicon nanocrystal sizes (Φ=6nm and Φ=9nm). b) Programming window as a function of covered area.
Figure 2.6. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with Φ=6nm and Φ=9nm. The program/erase pulses are described in section 2.2.
One alternative solution is represented by the hybrid Si-nc cell. It has been demonstrated that a capping layer on the Si-ncs increases the number of trapping sites; moreover, it protects the nanocrystals from oxidation during the interpoly dielectric deposition [Steimle '03] [Colonna '08] [Chen '09]. In this paragraph we analyze the impact of the silicon nitride (Si3N4) capping layer on the memory cell programming window. The studied samples are shown in figure 2.7.
Figure 2.7. Silicon nanocrystal cells used to evaluate the impact of the Si3N4 capping layer. a) and c) are the schematics of cells that integrate average 9nm nanocrystal diameter with and without Si3N4 capping layer. b) and d) are the corresponding 40° tilted CDSEM pictures used to measure the nanocrystal size and density.
Figure 2.8. Statistical results of program/erase threshold voltages measured for samples with and without Si3N4 capping layer on the silicon nanocrystals.
The results plotted in figure 2.8 show that the programming window is increased by 1V when the nanocrystals are capped by the Si3N4 layer, while keeping the threshold voltage dispersion unchanged, thanks to the increased number of trapping sites [Chen '09]. Nevertheless, the expected program/erase levels are still not reached. The program/erase kinetics, in figure 2.9, are obtained using the ramps previously described.
Figure 2.9. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with and without Si3N4 capping layer. The program/erase pulses are described in section 2.2.
Figure 2.10. Schematics of silicon nanocrystal cells with different channel doping doses (CDD).
Figure 2.11. a) Statistical results of program/erase threshold voltages measured for samples with different channel doping doses (CDD). b) Linear dependence of the programming window as a function of CDD.
The corresponding program/erase kinetics are shown in figure 2.12.
Figure 2.12. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with different channel doping doses (CDD). The program/erase ramps are described in section 2.2.
Figure 2.13. a) Cell schematics of Si-nc+SiN devices with tunox=3.7nm, tunox=4.2nm and tunox=5.2nm. b) Measured and calculated EOT of the memory stack. c) TEM analysis used to measure the tunnel oxide thicknesses.
Figure 2.14. Statistical results of program/erase threshold voltages measured for samples with different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm.
Figure 2.15 shows the results of the CHE program and FN erase kinetic characteristics, using the pulses described in section 2.2.
Figure 2.15. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with different tunnel oxide thicknesses (tunox). The program/erase pulses are described in section 2.2.
We performed Fowler-Nordheim characterizations using box pulses of variable duration and Vg=±18V (figure 2.16), in order to complete the evaluation of the programming window dependence on tunnel oxide thickness. It is worth noting that 100ms are necessary to obtain a 4V programming window by writing with a gate voltage of 18V for the sample with 3.7nm tunnel oxide thickness. The impact of the tunnel oxide thickness on the erase operation is more important, because ∆Vt=6V is reached in 100ms for the same sample.
Figure 2.16. a) Program and b) erase Fowler-Nordheim characteristics of the Si-nc cell using three different tunnel oxide thicknesses (tunox): 3.7nm, 4.2nm and 5.2nm.
Figure 2.18. Programming window as a function of covered area; silicon nanocrystal (Si-nc) and hybrid silicon nanocrystal (Si-nc+SiN) cells are compared.
Figure 2.19. Schematic of the optimized silicon nanocrystal cell; nanocrystals with two different sizes are implemented: a) Φ=9nm, b) Φ=12nm.
Figure 2.20. a) Channel hot electron (CHE) programming kinetic and b) Fowler-Nordheim (FN) erase kinetic measured on samples with the optimized memory stack and two different nanocrystal sizes: Φ=9nm and Φ=12nm. The program/erase pulses are described in section 2.2.
Figure 2.21. Channel hot electron programming kinetic characteristics measured for the optimized silicon nanocrystal memory cell and the Flash floating gate.
Moreover, in figure 2.22 the erase kinetic characteristics are plotted, using the ramped gate voltage. The erase performances are improved with respect to the floating gate memory cell thanks to the thinner tunnel oxide and the increased coupling factor.
Figure 2.22. Fowler-Nordheim erase kinetic characteristics measured for the optimized silicon nanocrystal memory cell and the Flash floating gate.
[Chung '07] [Padovani '10] [Molas]. The hybrid silicon nanocrystal cell has thus demonstrated higher operation speed than a plain SONOS memory, while maintaining a better retention characteristic than a pure Si nanocrystal memory [Steimle '03] [Chen '09] [Hung-Bin '12]. We report in figure 3.1 our results of data retention at 150°C and 250°C of the silicon nanocrystal cell with and without the 2nm Si3N4 capping layer; the tunnel oxide thickness is 5.2nm (see the device description in figure 2.7).
Figure 3.1. Data retention of silicon nanocrystal cell with (circles) and without (diamonds) Si3N4 capping layer at 150°C and 250°C.
Figure 3.2. Band diagrams of the hybrid silicon nanocrystal cell: a) the silicon nanocrystals are embedded in the Si3N4 trapping layer, b) the silicon nanocrystals are grown on the SiO2 tunnel oxide and capped by Si3N4.
Figure 3.3. Data retention of silicon nanocrystal cell with different channel doping doses at 150°C. The tunnel oxide thickness is 5.2nm and the Si-nc are capped by the Si3N4 layer (Φ=9nm+SiN=2nm).
Figure 3.4. Data retention at 27°C, 150°C and 250°C for the hybrid silicon nanocrystal cell (Si-nc+SiN), varying the tunnel oxide thickness: 3.7nm, 4.2nm, 5.2nm.
Figure 3.5. Arrhenius plot of the retention time (defined as the time to reach Vt=6V) for temperatures of 27°C, 150°C and 250°C.
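The Arrhenius analysis behind a plot like figure 3.5 can be reproduced with a few lines of code. The sketch below is illustrative only: the retention times are hypothetical placeholders rather than the measured values of the thesis, and the activation energy Ea is simply obtained from the slope of ln(t_ret) versus 1/(kB·T).

    # Minimal sketch of an Arrhenius extraction for retention times.
    # The retention times below are hypothetical placeholders, not measured data.
    import numpy as np

    k_B = 8.617e-5                               # Boltzmann constant (eV/K)
    T = np.array([27.0, 150.0, 250.0]) + 273.15  # temperatures (K)
    t_ret = np.array([3e8, 1e6, 2e4])            # retention times (s), placeholders

    # Arrhenius law: t_ret = t0 * exp(Ea / (k_B*T))  ->  ln(t_ret) linear in 1/(k_B*T)
    x = 1.0 / (k_B * T)
    slope, intercept = np.polyfit(x, np.log(t_ret), 1)

    Ea = slope                                   # activation energy (eV)
    t0 = np.exp(intercept)                       # pre-exponential factor (s)
    print(f"Ea = {Ea:.2f} eV, t0 = {t0:.2e} s")

    # Extrapolation to another temperature, e.g. 125 C
    T_check = 125.0 + 273.15
    print("t_ret(125C) =", t0 * np.exp(Ea / (k_B * T_check)), "s")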
Figure 3.6. Schematic of the signals used for endurance experiments.
Figure 3.7. Endurance characteristics of silicon nanocrystal cells comparing the samples with different nanocrystal sizes: Φ=6nm and Φ=9nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=90ms).
We then evaluated the impact of capping the nanocrystals by the silicon nitride (Si3N4) layer, keeping the program/erase conditions unchanged (figure 3.6). In figure 3.8 we report the results of cycling experiments, comparing the Si-nc cell (Φ=9nm) with and without the silicon nitride capping layer.
Figure 3.8. Endurance characteristics of silicon nanocrystal cell comparing the samples with and without the Si3N4 capping layer: Φ=9nm and Φ=9nm+SiN=2nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=90ms).
The programming windows at the beginning of the cycling experiments are coherent with the data reported in chapter 2. The Si3N4 capping layer enables an improvement of the programming window, but the shift of the program/erase threshold voltages is increased too. This effect is due to the parasitic charge trapping in the silicon nitride and at the tunox/SiN interface, which generates a 3.7V shift of Vte. The shift of the threshold voltages and the low erase efficiency cause the programming window closure after 30kcycles. In order to summarize the results, we report in table 3.2 the values of the programming window before and after the cell cycling
and the shifts of the program/erase threshold voltages. The results concerning the endurance experiments are shown and discussed below, using the same samples described in figure 2.7c (and figure 2.10) with three different channel doping doses: 2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2. The cells were programmed by channel hot electron and erased by Fowler-Nordheim as shown in figure 3.6, but using te=10ms. The results of the experiments are plotted in figure 3.9.
Figure 3.9. Endurance characteristics of silicon nanocrystal cell comparing the samples with different channel doping doses (CDD): 2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=10ms).
(Φ=9nm+SiN=2nm). The program/erase conditions are the same as in the last experiments where the CDD was varied (figure 3.6 with te=10ms). In figure 3.10 we report the results of the endurance experiments.
Figure 3.10. Endurance characteristics of silicon nanocrystal cell comparing the samples with different tunnel oxide thickness (tunox): 3.7nm, 4.2nm and 5.2nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=5µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=10ms).
The optimized memory stack is the one described in chapter 2 (figure 2.19): tunox=4.2nm, Φ=9nm or 12nm, and the EOT of the ONO layer is 10.5nm. In figure 3.11 we plot the data retention results at 27°C, 150°C and 250°C of the Si-nc cell with smaller nanocrystals (Φ=9nm). The charge loss is accelerated by the temperature, but the 10-year specification is reached up to 150°C, despite the thinner tunnel oxide with respect to the samples with the Si3N4 capping layer of figure 3.4.
Figure 3.11. Data retention at 27°C, 150°C and 250°C for the optimized silicon nanocrystal cell (tunox=4.2nm).
Figure 3.12. Data retention characteristics of the hybrid silicon nanocrystal cell and the optimized Si-nc cell, integrating the same tunnel oxide thickness (4.2nm).
In the optimized memory stack, silicon nanocrystals have been embedded with different size and density: Φ=9nm with density=7.3•10^11 nc/cm^2 and Φ=12nm with density=6.7•10^11 nc/cm^2 (figure 2.18). In figure 3.13 we compare the shift of the programmed and erased threshold voltages versus time at 150°C of the optimized cell integrating the different silicon nanocrystal types. The impact of the silicon nanocrystal size on data retention is limited, and the programming window closure due to the charge loss is unchanged. To complete the study on data retention of the optimized Si-nc memory cell, we plot in figure 3.14 the results before and after cycling, for the first time up to 1Mcycles, concerning the cell with bigger silicon nanocrystals (Φ=12nm). As published in [Monzio Compagnoni '04], quite unexpectedly the stressed cell displays the same Vt drift (i.e. stronger data retention) with respect to the virgin sample. The smaller leakage current is due to the negative charge trapping in the tunnel oxide or electron trapping at deep-trap states at the nanocrystal surface [Monzio Compagnoni '03].
Figure 3.13. Data retention characteristics of programmed and erased states at 150°C. Silicon nanocrystals with different sizes (Φ=9nm and Φ=12nm) are integrated in the optimized memory cell stack.
Figure 3.14. Data retention characteristics of programmed Si-nc cells at 150°C. Stressed and virgin samples are compared.
Figure 3.15. Endurance characteristics of silicon nanocrystal memory comparing the hybrid Si-nc cell and the optimized Si-nc cell (Φ=9nm). The cell schematics are also shown.
Figure 3.16. Endurance characteristics of optimized silicon nanocrystal memory comparing two different nanocrystal sizes: Φ=9nm and Φ=12nm. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms). The cell schematics are also shown.
Figure 3.17. Endurance characteristics of optimized silicon nanocrystal memory (Φ=12nm) at different temperatures: T=-40°C, T=27°C and T=150°C. The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms).
Figure 3.18. Data retention of optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells at 250°C.
Figure 3.19. Endurance characteristics of optimized silicon nanocrystal memory (Si-nc, Φ=9nm) compared with the Flash floating gate (F.G.). The cells are programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms).
An I/V converter (shunt resistance + amplifier) is used to measure the drain current during the programming; two different configurations are shown in figure 4.1.
Figure 4.1. a) Basic experimental setup to measure the programming drain current [Esseni '99]. b) Complex experimental setup used for ramped-gate programming experiments to measure the drain current absorption [Esseni '00b].
The development of the system shown in figure 4.1a introduces some errors on the measured results due to the variable shunt resistance, the I/V conversion and the coupling between the I/V converter and the scope. In order to measure the current absorption using a ramped gate programming, the setup was improved (figure 4.1b), but the complexity of the system limits the current measurement sensitivity and the writing pulse duration. Today the memories for embedded NOR architectures consume a current of the order of 50 microamperes for shorter time periods (several microseconds). Moreover, A. Maure in his PhD thesis evaluated the error due to the I/V conversion (figure 4.2), showing its relevance when the programming time decreases [Maure '09].
Figure 4.2. Error evaluation during a fast current measurement performed by applying a ramped voltage on a resistance. a) A very fast ramp induces a high current, and the parasitic capacitive effect is important. b) The slow ramp induces a smaller current, decreasing the measurement error.
α_G and α_D: the coupling factors relative to the gate and drain terminals; V_G and V_D: control gate and drain voltages during the programming operation; Vt_treq: threshold voltage measured for the equivalent transistor (dummy cell);
parasitic capacitances due to the coupling between the floating gate and the drain/source contacts; C_X: parasitic capacitance due to the coupling between the selected floating gate and the neighbor cells in the same word line; C_Y: parasitic capacitance due to the coupling between the selected floating gate and the neighbor cells in the same bit line.
Figure 4.3. Capacitive model used in order to calculate the Flash floating gate cell coupling factor.
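As a reading aid for the capacitive model of figure 4.3, the textbook form of the gate coupling factor is α_G = C_ONO/C_TOT, with C_TOT the sum of all capacitances seen by the floating gate, and the floating gate potential is V_FG = α_G·V_G + α_D·V_D + Q_FG/C_TOT (source and bulk grounded). The snippet below is a generic illustration with hypothetical capacitance values; it is not the extraction procedure used in the thesis.

    # Illustrative capacitive model of a floating gate cell (textbook form).
    # Capacitance values are hypothetical, not extracted from the studied devices.
    C_ono = 1.0e-15   # control gate / floating gate (ONO) capacitance, F
    C_tun = 0.6e-15   # floating gate / channel (tunnel oxide) capacitance, F
    C_d   = 0.05e-15  # floating gate / drain coupling, F
    C_s   = 0.05e-15  # floating gate / source coupling, F
    C_x   = 0.02e-15  # coupling to neighbor cells on the same word line, F
    C_y   = 0.02e-15  # coupling to neighbor cells on the same bit line, F

    C_tot = C_ono + C_tun + C_d + C_s + C_x + C_y
    alpha_g = C_ono / C_tot      # gate coupling factor
    alpha_d = C_d / C_tot        # drain coupling factor

    def v_fg(v_g, v_d, q_fg=0.0):
        """Floating gate potential for given gate/drain biases and stored charge (C)."""
        return alpha_g * v_g + alpha_d * v_d + q_fg / C_tot

    print(f"alpha_G = {alpha_g:.2f}")
    print("V_FG at Vg=9V, Vd=4.2V, no stored charge:", v_fg(9.0, 4.2), "V")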
Figure 4.4. Floating gate coupling factor (α_G) measured with the subthreshold slope method, with the error bar (36 samples), and simulated with the capacitive model.
Another important parameter to measure in order to calculate the floating gate potential evolution during the programming operation is the dummy cell threshold voltage. The results are shown in figure 4.5. The average value of Vt_treq is 2.96V with a dispersion of 0.6V on wafer.
Figure 4.5. Id-Vg characteristics of 36 dummy cells on wafer. The average value of Vt_treq is 2.96V using a read current of 10µA.
are the threshold voltages of the floating gate cell after the n-th and the (n+1)-th pulse respectively. The calculated overdrive voltage is reported in figure 4.7.
Figure 4.6. Channel hot electron programming kinetic of the floating gate cell. The control gate voltage (Vg) is a 1.5V/µs ramp, Vd=4.2V. Threshold voltage (Vt) read conditions: Id=10µA, Vd=0.7V.
Figure 4.7. Overdrive voltage during the channel hot electron programming operation, calculated using formula (4.4).
Figure 4.8. Floating gate current consumption extrapolation, obtained by measuring the dummy cell drain current.
Figure 4.9. Experimental setup used to perform dynamic drain current (Id) measurements during a channel hot electron programming operation. The device under test (DUT) is a floating gate cell addressed in a memory array of 256 kbit.
In figure 4.10 an example of the dynamic drain current of the floating gate cell is reported, measured using a 1.5V/µs ramp on the gate terminal and a drain voltage of 4.2V. The static measured value corresponds to the value extrapolated using the indirect technique. With the developed setup we are able to generate arbitrary pulses on the gate and drain terminals, in order to obtain specific ramps or boxes. The applied signals are measured together with the dynamic current.
Figure 4.10. a) Gate and drain voltages (Vg, Vd) generated with the setup of figure 4.9. b) Dynamic drain current measured during the channel hot electron programming operation. The static measurement is the same as reported in figure 4.8.
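Once the Vd(t) and Id(t) waveforms of figure 4.10 are acquired, the energy consumed during the pulse is the time integral of the instantaneous drain power, Ec = ∫Vd·Id dt. The snippet below shows this post-processing step on synthetic waveforms; the waveform shapes and amplitudes are placeholders, not the measured data.

    # Post-processing sketch: energy consumed during a programming pulse,
    # computed as Ec = integral of Vd(t)*Id(t) dt from the acquired waveforms.
    # The waveforms below are synthetic placeholders, not measured data.
    import numpy as np

    t = np.linspace(0.0, 5e-6, 5001)        # 5 us time base (s)
    vd = np.full_like(t, 4.2)               # constant drain voltage (V)
    id_meas = 60e-6 * np.exp(-t / 1e-6)     # decaying drain current (A), placeholder

    power = vd * id_meas                    # instantaneous drain power (W)
    ec = np.trapz(power, t)                 # consumed energy (J)

    print(f"Energy consumption Ec = {ec*1e9:.2f} nJ")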
Figure 4.11. (a) Gate box pulses applied during the channel hot electron operation; (b) drain current measured with the dynamic method. The drain voltage is constant (Vd=4.2V).
In figure 4.12 we report the trends of the cell performances for different box pulse durations. The PW and the Ec increase with the box duration; in region I a low energy consumption level can be reached while maintaining a satisfactory programming window. The presence of the Id peak can be a problem for the logic circuits around the memory array. Furthermore, the designed charge pump layout areas, used to supply the drain terminal, depend on the value of the Id current [Munteanu '02].
Figure 4.12. Programming window and energy consumption versus the box duration. In region I a low energy level is reached while maintaining the programming window. The Y scale is normalized with the same factor as figure 4.14.
Figure 4.13. (a) Gate ramp pulses applied during the channel hot electron programming operation; (b) drain current measured with the dynamic method. The drain voltage is constant (Vd=4.2V).
Figure 4.14. Programming window and energy consumption trends versus the ramp speed. Three regions are highlighted: region II, higher energy; region III, best tradeoff; region IV, higher current peak. The Y scale is normalized with the same factor as figure 4.12.
Three particular regions are highlighted in figure 4.14. Region II is where the Id is lowest with the highest energy consumption. On the contrary, in region IV the energy reaches the lowest value, but a higher drain current peak is present. The best performances in terms of consumption are obtained in region III, but the programming window does not reach the minimum level. In order to resolve this conundrum, we decided to merge the two pulse types,
Figure 4.15. (a) Gate pulses with different plateau durations, applied during the channel hot electron operation; (b) drain current measured with the dynamic method; the current peak is smoothed by decreasing the plateau duration, thus the ramp speed.
Figure 4.16. (a) Programming window, (b) energy consumption and (c) drain current peak of the floating gate cell, measured using the pulses shown in figure 4.15a.
Figure 4.17. Id currents measured with the dynamic method: (a) no injection zone, (b) low injection zone, (c) high injection zone. (d) Maximum of drain current versus Vd. The Y scales are normalized with the same factor. (e) Optimized gate ramp pulse (1.5V/µs) + plateau (1µs).
threshold voltage in order to evaluate the cell performances, as shown in figure 4.18, where the PW and the Ec are plotted for different Vd. Clearly, by lowering the drain voltage to 3.8V with respect to Vd=5V, the energy consumption is minimized, with a gain of around 55% versus only a 15% loss on the programming window.
Figure 4.18. Programming window and energy consumption versus drain voltage; the point of minimum energy is highlighted.
Figure 4.19. Programming window and energy consumption versus drain voltage; the point of minimum energy is highlighted.
The limit for the maximum value of Vb depends on the breakdown voltage of the drain/bulk junction. Finally, several drain and bulk biases have been analyzed, optimizing the programming signals in order to reduce the current consumption while still keeping satisfactory performances. By decreasing the drain voltage from 4.2V to 3.8V, a reduction of 15% in terms of energy consumption can be achieved. Moreover, using the CHISEL effect with the reverse body bias, a further drain current reduction is possible.
The cells are programmed using the optimized pulse of figure 4.17e, with Vd=3.8V and Vb=-0.5V. The drain-bulk bias is chosen in order to minimize the stress of the junction and to decrease the bitline leakage of the memory array (see next section). For each measurement we started from the same erased threshold voltage (Vte) in order to evaluate and compare the effect of the CDD. Three different values of doping dose are used; the results are shown in figure 4.20. We can notice that an increase of the CDD leads to an improvement of the programming window and of the energy consumption. When the channel doping dose is increased, the drain/bulk junction doping profile becomes more abrupt, and the cell threshold voltage (Vth) increases. Thus the electrons are brought into a more energetic state and the probability of injecting electrons into the floating gate increases, leading to a better programming efficiency. Moreover, the drain current absorption decreases. Since Vte increases along with the CDD, it is necessary to adjust the erase pulse settings by increasing the erase time. After this study on the single cell, we will analyze in the next section the current consumption due to the unselected memory cells connected to the same bit line as the programmed cell. Finally, the global consumption of an entire bitline of 512 cells will be evaluated.
Figure 4.20. Trends of programming window and energy consumption versus channel doping dose variation.
In our case, we decided to use the LDD implants for the floating gate memory cell, in order to decrease the BLL caused by the drain bias of the unselected cells and to find a tradeoff between programming efficiency degradation and BLL improvement. As illustrated in figure 4.21, the BLL measurement corresponds to the sum of the leakage currents of all unselected cells on the same bitline (511 cells in this case), while a cell is being programmed. In order to measure it, we varied the gate potential of the dummy cell between -3V (to emulate the floating gate potential of the programmed cell) and 1V (erased cell). The biasing conditions are chosen with regard to the results of section 3.1.2: Vd=3.8V, Vb=-0.5V and Vs=0V.
Figure 4.21. Bit-line leakage measurement principle.
Figure 4.22. (a) Effect of LDD implantation energy on the main leakage contributions of the unselected cell: junction current (Ijc) and punchthrough current (Ipt) for different gate voltages. (b) Sum of Ijc and Ipt versus Vg for different LDD implantation energies. (c) Percentages of global bitline current consumption (selected cell + 511 unselected cells).
Figure 4.23. TCAD simulations: (a) LDD cartography and (b) doping profiles at the Si/SiO2 interface, for different implantation energies.
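The global bitline consumption discussed with figure 4.22c is the sum of the programming current of the selected cell and of the leakage of the 511 unselected cells sharing the bitline. A minimal bookkeeping sketch is given below; the current values are hypothetical placeholders, not the measured ones.

    # Bookkeeping sketch of the global bitline current budget during programming:
    # one selected cell plus 511 unselected cells on the same bitline.
    # Current values are hypothetical placeholders.
    i_prog = 50e-6           # drain current of the selected cell (A)
    i_leak_per_cell = 20e-9  # junction + punchthrough leakage of one unselected cell (A)
    n_unselected = 511

    i_bitline = i_prog + n_unselected * i_leak_per_cell
    bll_share = 100.0 * (n_unselected * i_leak_per_cell) / i_bitline

    print(f"Total bitline current: {i_bitline*1e6:.1f} uA "
          f"({bll_share:.1f}% due to bitline leakage)")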
Figure 4.24a shows the distribution of the absolute value of the electric field (E) obtained by TCAD simulations. The electric field E is the highest at the gate edge, but its value at this point becomes smaller as the LDD implantation energy increases. The lateral (Ex) and vertical (Ey) components of the electric field at the Si/SiO2 interface are shown in figures 4.24b and c.
Figure 4.24. TCAD simulations: (a) global electric field distribution; (b) lateral field Ex and (c) vertical field Ey along the channel for different LDD implantation energies.
Figure 4.25. Programming window and bitline leakage versus (a) LDD implantation energy (constant channel doping dose), and (b) channel doping dose (standard LDD implantation energy).
Figure 4.26. (a) Gate pulses with different plateau durations, applied during the CHE operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
reducing the channel length. Thus the stored charges remain trapped in the Si-ncs and the SiN over the drain and SCR zone. While the gate voltage changes, the hybrid Si-ncs in the channel zone are not charged and so their potential remains constant. Consequently, the substrate surface potential depends on the gate voltage only, which dynamically drives the drain current during the channel hot electron operation [Della Marca '11a].
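The qualitative difference between this transistor-like response and the floating gate response can be illustrated with a toy model: if the injected charge stays localized over the drain, the channel threshold seen by the gate is unchanged and Id simply tracks Vg; if the charge couples to the whole channel (floating gate case), the effective threshold rises during the pulse and Id collapses after an initial peak. The script below is a qualitative illustration only, and all parameter values are arbitrary.

    # Toy model contrasting the dynamic drain current of a cell whose trapped charge
    # is localized over the drain (transistor-like: Id tracks Vg) with a floating
    # gate cell (the stored charge raises Vt and chokes Id). All values are arbitrary.
    import numpy as np

    dt, t_end = 1e-8, 5e-6
    t = np.arange(0.0, t_end, dt)
    vg, vt0 = 9.0, 3.0              # box pulse amplitude and initial Vt (V)
    k = 5e-6                        # toy transconductance factor (A/V^2)
    eta, c_fg = 5e-6, 1e-15         # toy injection efficiency and storage capacitance (F)

    id_local = np.full_like(t, k * (vg - vt0) ** 2)  # Vt unchanged: constant Id

    vt = vt0
    id_fg = np.empty_like(t)
    for i in range(t.size):
        id_fg[i] = k * max(vg - vt, 0.0) ** 2
        vt += eta * id_fg[i] * dt / c_fg  # injected charge raises the threshold voltage

    print(f"Id at end of pulse, localized charge: {id_local[-1]*1e6:.0f} uA")
    print(f"Id at end of pulse, floating gate   : {id_fg[-1]*1e6:.0f} uA (Vt = {vt:.1f} V)")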
Figure 4.27. Scheme of the silicon nanocrystal cell behavior during the channel hot electron programming operation.
Figure 4.28. (a) Programming window and (b) energy consumption comparison of Si-nc+SiN and floating gate cells, measured using the pulses shown in figure 4.26a. The F.G. data are also shown in figure 4.16.
programming time in order to reduce the energy consumption. To do this we repeated the experiment on another Si-nc+SiN cell using box pulses to evaluate the dependence of the programming window and energy on the programming time. The results are shown in figure 4.29; the maximum value of Id is slightly different with respect to figure 4.26, which is due to the wafer dispersion; in the two figures the same y scale is used. To program the cell we used Vg=9V and Vd=4.2V. The drain current follows the gate voltage during the programming. Thus the calculated energy is a linear function of the pulse duration. After each pulse the programming window was also measured, and the results are plotted in figure 4.30.
Figure 4.29. (a) Gate box pulses with different durations, applied during the channel hot electron operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
Figure 4.30. (a) Programming window and (b) energy consumption comparison of Si-nc+SiN and floating gate cells, measured using the box pulses shown in figure 4.29a. The Y scale is the same as in figure 4.28.
Figure 4.31. (a) Gate ramp pulses with different durations, applied during the channel hot electron operation; (b) drain current of the Si-nc+SiN cell, measured with the dynamic method; the current follows the gate potential.
Figure 4.32. (a) Programming window and (b) energy consumption of the Si-nc+SiN cell, measured using box and ramped pulses respectively shown in figures 4.29a and 4.31a. The Y scale is the same as in figure 4.30.
We use the parameter set provided by Advanced Calibration and we perform the electrical simulations with Sentaurus Device 2010.3. In figure 4.33 the simulated structures are shown. In the case of the floating gate cell, the 2D approach implies considering the device width dimension as a scaling parameter for the currents and the wings area as an additional coupling capacitance between control gate and floating gate. The programming operation increases the electrostatic potential on the whole floating gate area (figure 4.33a). The Si-nc+SiN cell simulations only need to scale the currents and the nanocrystal size and density. These two parameters are based on process results. After the programming operation the charged nanocrystals are very close to the drain and SCR (figure 4.33b).
Figure 4.33. Designed structures used in the TCAD simulations after the programming operation: a) floating gate cell, and b) Si-nc+SiN cell.
Figure 4.34. Drain current in floating gate (F.G.) and hybrid silicon nanocrystal (Si-nc+SiN) cells using: a) and b) a gate voltage box pulse, or c) and d) a gate voltage ramp pulse; Vg=9V, Vd=4.2V, Vb=Vs=0V. Experimental data and simulations are compared; all the graphs have the same Y axis scale.
Figure 4.35. Dependence of programming window and energy consumption on box pulse duration for the hybrid silicon nanocrystal cell. Experimental data and simulations are compared.
Figure 4.36. Results of programming pulse optimization in terms of energy consumption.
Figure 4.37. Programming window and energy consumption as a function of gate (a-b) and drain (c-d) voltages. Ramp and box gate pulses are used.
Figure 4.38. Energy consumption as a function of programming window for different gate (a) and drain (b) voltages. The Y scale is comparable to that in figure 4.37.
The points of figure 4.38, using a programming time of 1µs, are the same as shown in figure 4.37. In this way it is possible to compare the two figures with the arbitrary units. Below we report the programming efficiency in the case of the box pulse, which has shown the best programming window results, for different biasing conditions.
Figure 4.39. Efficiency (PW/Ec) vs gate (left) and drain (right) voltage, using tp=1μs.
Using the optimized programming scheme (box pulse, Vg=10V and Vd=4.2V), we plotted the programming efficiency in figure 4.40 as a function of the programming time. The greatest efficiency is reached using the shorter programming time; as previously shown, the shorter box pulse represents the best compromise for a satisfactory programming window and energy consumption. The graph shows that silicon nanocrystal memories are suitable for fast programming operations, representing a satisfactory trade-off between programming window and energy consumption.
Figure 4.40. Efficiency as a function of programming time for the Si-nc cell, using box gate pulses (Vg=10V, Vd=4.2V).
Figure 4.41. Channel hot electron threshold voltage (Vt) kinetic characteristics using box and ramp pulses, for tunnel oxide thicknesses of 4.2nm and 5.2nm.
Figure 4.42. Energy consumption as a function of the programming window of cells with 4.2nm and 5.2nm tunnel oxide thicknesses using box pulses (Vg=10V, Vd=4.2V).
Figure 4.43. Efficiency as a function of programming time for the Si-nc cell, using box gate pulses, calculated for tunox=4.2nm and tunox=5.2nm (Vg=10V, Vd=4.2V).
Figure 4.44. (a) Gate box pulses with different durations, applied during the channel hot electron operation; (b) drain current of the optimized Si-nc cell, measured with the dynamic method. The Y scale is the same as in figure 4.29.
Figure 4.45. Dynamic drain current measured for the hybrid silicon nanocrystal cell (Si-nc+SiN), the optimized Si-nc cell and the floating gate (F.G.) cell, using a box programming pulse on the control gate.
Figure 4.45 shows that the Si-nc cell, where large nanocrystals (Φ=12nm) are embedded with a high density, has a response half-way between the Si-nc+SiN and floating gate cells. This means it is possible to control the dynamic current by modifying the nanocrystal size and density, thus the covered area. Previously we explained the transistor-like behavior of the hybrid Si-nc+SiN cell, where the drain current follows the gate potential thanks to the charge localization close to the drain. Instead, in the case of the floating gate device, a peak of current is detected when a box pulse is applied on the gate terminal. During the hot carrier injection the charges flow through the silicon nanocrystals toward the source side, modifying the substrate surface potential and thus the vertical and horizontal electric fields. The charge diffusion can be due to the single electron interactions between neighbor silicon nanocrystals.
Figure 4.46. Programming window and energy consumption as a function of gate (a-b) and drain (c-d) voltages of the optimized Si-nc and Si-nc+SiN cells. The Y axis scale is the same as in figure 4.37.
Figure 4.47. Programming efficiency (PW/Ec) vs gate (a) and drain (b) voltage, using tp=1μs, of the optimized Si-nc and Si-nc+SiN cells.
This is because a low Vg is sufficient to produce a high programming window close to saturation, while the drain current dependence on Vg increases the consumed energy. On the other hand, by varying the drain voltage (figure 4.47b) we note that the programming
Figure 4.48. Energy consumption as a function of the programming window of Si-nc+SiN and optimized Si-nc cells with 4.2nm tunnel oxide thickness using box pulses (Vg=10V, Vd=4.2V). The XY scale axes are the same as in figure 4.42.
Figure 4.49. (a) Programming window and (b) energy consumption comparison of optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells, measured using box pulses.
Figure 4.50. (a) Programming window and (b) energy consumption comparison of optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells, measured using ramp pulses.
Figure 4.51. (a) Programming window and (b) energy consumption comparison of optimized silicon nanocrystal (Si-nc opt), hybrid silicon nanocrystal (Si-nc+SiN) and floating gate (F.G.) cells using box and ramp pulses when the programming time is fixed to 2µs.
Figure 5.1. Schematic of the silicon nanocrystal cell when the charge diffusion mechanism is enabled.
New Cell architecture (ATW-Flash)
Figure 5.2. Schematic of the Asymmetrical Tunnel Window Flash (ATW-Flash).
tablets sold throughout the world (figure 6.1). All these applications demand increasingly high performance, such as low energy consumption, short access times, low cost, etc. This is why the Flash memory business is gaining market share compared to the other memory types. Although the market keeps growing continuously, the price of memory devices is decreasing.
Figure 6.1. Evolution and forecasts of the portable device market (source: muniwireless.com and trak.in).
Figure 6.2. Programming window as a function of the covered area. Silicon nanocrystal (Si-nc) and hybrid (Si-nc+SiN) cells are compared.
The cell stack was completed with an ONO layer of 10.5nm equivalent thickness.
Figure 6.3. Schematic of the optimized silicon nanocrystal cell; nanocrystals with two different sizes are implemented: a) Φ=9nm, b) Φ=12nm.
To conclude this part, we compared the results obtained for the optimized Si-nc cell with the standard floating gate Flash. In figure 6.4 we show the programming kinetic characteristics of the two devices. For the optimized Si-nc cell the performance is the same as for the floating gate cell: the minimum programming window of 4V is obtained using a hot electron programming of 3.5µs duration.
Figure 6.4. Hot carrier programming kinetic characteristics (Vg_ramp=1.5V/µs, Vg=[3V; 9V], Vd=4.2V).
The erase kinetic characteristics are shown in figure 6.5. The erase performance is improved with respect to the floating gate memory cell thanks to a thinner tunnel oxide and a larger coupling factor.
Figure 6.5. Fowler-Nordheim erase kinetic characteristics (Vg_ramp=5kV/s; Vg=[-14V; -18V]).
The erase time needed to obtain the minimum 4V programming window is 0.2ms for the optimized Si-nc cell, giving a 60% gain with respect to the floating gate Flash cell. To conclude, all the experiments carried out by varying the different technological parameters made it possible to optimize the programming window of the Si-nc cell, with the objective of substituting the floating gate cell and thus reducing production costs. In the next paragraph, the results concerning the reliability of the silicon nanocrystal and floating gate memories are compared.
the tunnel oxide, and we extrapolated the activation energies for each sample. As in the case of the erase operation, the charge loss increases when the tunnel oxide thickness is decreased. We found that a 5.2nm tunnel oxide thickness is necessary to reach the data retention specification for temperatures up to 150°C. On the other hand, the tunnel oxide thickness has a considerable impact on the Fowler-Nordheim erase operation, so that to obtain an adequate programming window after 100k cycles a 3.5nm tunnel oxide must be used. For our study it was important to evaluate the behavior of the cell using a 4.2nm tunnel oxide in a memory architecture where the ONO layer has been optimized and the Si3N4 layer is not deposited. To conclude this paragraph, we compared the results concerning the optimized nanocrystal cell with the floating gate Flash cell. In figure 6.6 the data retention at 250°C is shown for each of the two devices. To satisfy the data retention constraint, the cell must maintain a threshold voltage level above 5.75V at 250°C for 168 hours. We can observe that the Si-nc cell is at the limit of this target; further efforts will be necessary to improve the performance and thus reach the results obtained with the floating gate cell. The main limitation is the initial fast charge loss due to electron trapping in the tunnel oxide, in the ONO layer and at the related interfaces.
Figure 6.6. Data retention of the optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells at a temperature of 250°C.
The endurance results were also compared, with unchanged program/erase conditions (programming: Vg=9V, Vd=4.2V, tp=1µs and erase: Vg=-18V, ramp=5kV/s+te=1ms). The experimental results for the optimized silicon nanocrystal cell are shown in figure 6.7. As expected, the floating gate Flash cell presents a larger programming window at the beginning of cycling, thanks to its better coupling factor and better programming efficiency. Its larger threshold voltage degradation leads to a larger programming window closure after 1 million cycles, while the endurance characteristic is more stable for the Si-nc cell.
Figure 6.7. Endurance characteristics of the optimized silicon nanocrystal (Si-nc opt) and floating gate (F.G.) cells.
In conclusion, we demonstrated for the first time, to our knowledge, the operation of a silicon nanocrystal cell up to 1 million program/erase cycles. A 4V programming window is thus preserved over an extended temperature range [-40°C; 150°C] (figure 6.8). In return, the main drawback of the nanocrystal cell is the large charge loss at 250°C.
Figure 6.8. Endurance characteristics of the optimized silicon nanocrystal cell over an extended temperature range [-40°C; 150°C].
energy of the floating gate Flash cell and of the silicon nanocrystal cell during a programming operation performed by hot electron injection. The evaluation of the current consumption of a floating gate Flash cell can be performed with setups using a current/voltage converter or by an indirect technique. In this way it is not possible to understand the dynamic behavior and to measure the performance of a cell implemented in a NOR architecture for programming pulses of a few microseconds. On the other hand, the indirect method of calculating the consumed current does not work for silicon nanocrystal memories. In this context we developed a new experimental method that allows the current consumption to be measured dynamically during a programming operation performed by hot electron injection. This method made it possible to understand the dynamic behavior of the two devices. The consumed energy was also evaluated using different bias conditions. The objective was to characterize the impact of the different parameters on the consumption and to find the best compromise in order to improve the performance of the two memory cells in question. In addition, the consumption due to the leakage of the unselected cells in the memory array was measured to complete the study. We describe below the main results obtained for the silicon nanocrystal cell and the floating gate Flash cell.
6.7 Optimization of the energy consumption
The electrical characterizations performed on the floating gate cell and on the cell with nanocrystals capped by the Si3N4 layer allowed the biases and the shape of the signals used during the programming operation to be optimized. For the floating gate Flash cell it was shown that the best compromise between consumed energy and programming window is the use of a programming ramp followed by a plateau. This way of programming the cell reduces the drain current peak while maintaining an adequate programming window. In the case of the cell with nanocrystals capped by the Si3N4 layer (Si-nc+SiN), we found that the best results are obtained when very short pulses are applied to the gate terminal. We report below the experimental results measured for a nanocrystal cell with a 4.2nm thick tunnel oxide, nanocrystals with an average size of 12nm and a thin ONO layer (10.5nm). In figure 6.9 the drain consumption results are shown when box pulses of different durations are applied to the control gate. We notice that the drain current does not follow the control gate potential, but in this case decreases during the programming time. This type of behavior is similar to that of a floating gate Flash cell, where a current peak is measured. We showed earlier that the difference in behavior between the floating gate cell and the Si-nc+SiN cell is explained by the localization of the trapped charge. In the case of the optimized cell we can state that the localization effect due to the presence of the SiN layer is not present.
Moreover, the large size of the nanocrystals produces a charge distribution toward the center of the channel and modifies the channel surface potential. In figure 6.10 we show the schematics of the various memory stacks and the corresponding drain current measurements, in order to compare the behavior of the three cells: Si-nc, Si-nc+SiN and F.G.
Figure 6.9. a) Pulses of different durations applied to the control gate. b) Results of the drain current consumption.
Figure 6.10. Dynamic drain current measured for the silicon nanocrystal cells with and without the SiN layer, and for the floating gate cell.
Figure 6.10 shows that the optimized Si-nc cell has a response half-way between the Si-nc+SiN and floating gate cells. This means that it is possible to control the dynamic current by varying the size and density of the nanocrystals, and thus the covered area. We explained that the Si-nc+SiN cell has a transistor-like behavior, where the drain current follows the control gate potential because of the charge localization next to the drain zone. On the contrary, for the floating gate device a current peak is detected when a box pulse is applied to the control gate. During hot carrier injection the charges diffuse through the nanocrystals toward the source, varying the substrate surface potential and thus the electric fields. Taking into account the dynamic behavior of the optimized nanocrystal cell and the fact that box pulses are more efficient than programming ramps,
Figure 6.11. a) Programming window and b) consumed energy, using box programming pulses of different durations.
different technological parameters: nanocrystal size and density, presence of the Si3N4 layer, channel doping dose and tunnel oxide thickness. The objective of the experiments was to understand the behavior of the cell in order to improve the coupling factor and to minimize parasitic charge trapping. The results concerning the reliability of the cell showed a satisfactory data retention at 150°C and, for the first time, an endurance of up to 1 million cycles in the temperature range between -40°C and 150°C with a 4V programming window. Finally, we developed an innovative technique for measuring the dynamic current during a programming operation performed by hot electron injection. This technique was applied for the first time to study the floating gate and silicon nanocrystal Flash cells. We described the dynamic behavior and how to improve the energy consumption in order to reach a consumption below 1nJ while keeping a 4V programming window. As perspectives, we propose to further investigate the charge diffusion phenomenon in the nanocrystal trapping layer during the programming operation, with the objective of reducing the energy consumption. Alternatively, the nanocrystals could be integrated in memory architectures different from the standard memory cell.
In table 1.1 we report the International Technology Roadmap for Semiconductors, which forecasts the future trends of semiconductor technology.

ITRS 2012 - Process Integration, Devices, and Structures
                                               2013    2016    2019    2022    2026
NOR flash technology node - F (nm)             45      38      32      28      22?
Cell size - area factor in multiples of F2     12      12      12-14   14-16   14-16
Physical gate length (nm)                      110     100     90      85      85
Interpoly dielectric thickness (nm)            13-15   13-15   11-13   11-13   11-13

Table 1.1: Summary of the technological requirements for Flash NOR memories as stated in the ITRS 2012 roadmap [ITRS].
Table 2.1. Vertical electric fields calculated for samples with different tunnel oxide thicknesses (3.7nm, 4.2nm and 5.2nm), applied during the programming operation.
3.1 Introduction
3.2 Data retention: impact of technological parameters
3.2.1 Effect of silicon nitride capping layer
3.2.2 Effect of channel doping dose
3.2.3 Effect of tunnel oxide thickness
3.3 Endurance: impact of technological parameters
3.3.1 Impact of silicon nanocrystal size
3.3.2 Impact of silicon nitride capping layer
3.3.3 Impact of channel doping dose
3.3.4 Impact of tunnel oxide thickness
3.4 Silicon nanocrystal cell optimization
3.4.1 Data retention optimization
3.4.2 Endurance optimization
3.5 Benchmarking with Flash floating gate
Bibliography of chapter 3
tunox (nm)    Temperature: 27°C    150°C    250°C
5.2           8%                   15%      27%
4.2           21%                  30%      43%
3.7           31%                  40%      47%

Table 3.1. Percentage of charge loss after 186h.
In table 3.2 we summarize the values of the programming window before and after the cell cycling and the program/erase threshold voltage (Vtp and Vte) shifts. To conclude this section, we can affirm that the Si3N4 capping layer enables, in this case, a 30% programming window gain which is maintained after 100k cycles, but the large quantity of parasitic charge accumulated during the experiments causes a large shift of the program/erase threshold voltages, limiting the cell functioning to 30kcycles.

Si-nc             Programming window @1cycle   Programming window @100kcycles   Vtp shift   Vte shift   Endurance limit
Φ=6nm             1.1V                         0.4V                             1.6V        2.4V        10cycles
Φ=9nm             1.4V                         0.7V                             1.1V        2.4V        10kcycles
Φ=9nm+SiN=2nm     2.7V                         0.8V                             1.8V        3.7V        30kcycles

Table 3.2. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shift; the studied samples have different silicon nanocrystal diameters: Φ=6nm, Φ=9nm, and Si3N4 capping layer (Φ=9nm+SiN=2nm).
Table 3.3. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shift; the studied samples have different channel doping doses: 2.4•10^13 at/cm^2, 8.5•10^13 at/cm^2 and 11•10^13 at/cm^2.
chapter 2. The results of the programming windows before and after the cell cycling, and the program/erase threshold voltage shifts, are summarized in table 3.4. Once again the program/erase threshold voltage shift demonstrates the parasitic charge trapping in the silicon nitride layer. This effect depends only slightly on the tunnel oxide thickness because the charges are trapped during the CHE injection (independent of tunox). Concerning the erase operation, we notice the improvement gained using the thinner tunnel oxide (3.7nm), which enables the programmed and the erased states to be kept separate, while the working conditions are very close to the limit of good functioning for the sample with tunox=4.2nm. In conclusion, to improve the programming window and the cell endurance, the experiments suggest minimizing the tunnel oxide thickness, but this contradicts the considerations on data retention, where the cell performance is increased by using a thicker tunnel oxide. It is therefore important to find the best tradeoff between all the technological parameters to satisfy the data retention and endurance specifications.

tunox (nm)   Programming window @1cycle   Programming window @100k cycles   Vtp shift   Vte shift   Endurance limit
3.7          5.5V                         2.7V                              1.5V        4.2V        >100kcycles
4.2          4.5V                         1.9V                              1.4V        4.0V        >100kcycles
5.2          3.5V                         1.2V                              1.7V        4.0V        60kcycles

Table 3.4. Programming window before and after 100k program/erase cycles, and program/erase threshold voltage shift; the studied samples have different tunnel oxide thicknesses: 3.7nm, 4.2nm and 5.2nm.
                   Programming window @1cycle   Programming window @1M cycles   Vtp shift   Vte shift   Endurance limit
Si-nc (Φ=12nm)     5.6V                         4.0V                            0.6V        2.2V        >1Mcycles
F.G.               7.0V                         2.8V                            -2.4V       1.8V        >1Mcycles

Table 3.7. Programming window before and after 1M program/erase cycles, and program/erase threshold voltage shifts; the studied samples are the optimized silicon nanocrystal memory cell (Si-nc Φ=12nm) and the Flash floating gate (F.G.). Cycling conditions: CHE programming (Vg=9V, Vd=4.2V, tp=1µs) and FN erase (Vg=-18V, ramp=5kV/s+te=1ms).
We repeated the endurance experiments on the Flash floating gate varying the program/erase conditions in order to achieve the same initial threshold voltage levels (CHE programming: Vg=8V, Vd=3.7V, tp=1µs and FN erase: Vg=-19.5V, ramp=5kV/s+te=1ms). The comparison with the optimized Si-nc cell is shown in figure 3.20. Quite unexpectedly, the programming window of the Flash floating gate starts to degrade after 100 cycles, even if lower drain and gate voltages are used. This is not due to the tunnel oxide degradation, but it can be due to the lower programming efficiency caused by the lower vertical and horizontal fields during the channel hot electron operation.
Figure 3.20. Endurance characteristics of optimized silicon nanocrystal memory (Si-nc, Φ=12nm) programmed by CHE (Vg=9V, Vd=4.2V, tp=1µs) and erased by FN (Vg=-18V, ramp=5kV/s+te=1ms), compared with the Flash floating gate (F.G.) programmed by CHE (Vg=8V, Vd=3.7V, tp=1µs) and erased by FN (Vg=-19.5V, ramp=5kV/s+te=1ms).
In fact, after the cycling experiments using Vg=9V and Vd=4.2V, we verified that it is possible to reach the program threshold voltage previously achieved (figure 3.19). Also in this case the Si-nc cell presents a more stable endurance characteristic; the data concerning the programming window before and after the cycling and the threshold voltage shifts are reported in table 3.8. Finally, we demonstrated for the first time, to our knowledge, the functioning of a silicon nanocrystal cell up to 1M program/erase cycles. A 4V programming window is preserved over a wide temperature range [-40°C; 150°C]. As a drawback, the silicon nanocrystal cell presents a higher charge loss than the floating gate at 250°C.
             5µs box pulse reference case
             PW (%)   Ec (%)   Id peak (%)
Ramp         -22      +9       -36
Optimized    -11      +10      -35

Table 4.1. Summary of results obtained using a single ramp or ramp + plateau (optimized) programming pulses, with respect to the box (reference case).
(figure 4.41). In one case the amplitude of the gate voltage remains constant (Vg=9V) during the kinetic, while in the other case Vg changes between 3V and 9V in order to emulate the programming ramp of 1.5V/µs; the drain voltage is kept constant at 4.2V. A very small impact of the tunnel oxide thickness on the threshold voltage shift is observed when a box pulse is applied, in accordance with hot carrier injection theory. The Vt kinetics differ significantly when box and ramp are compared. This is due to the different programming speed. The hot electron injection starts when Vg≈Vd; thus in the case of the box it starts after the first 0.5µs pulse, and after 1µs 80% of the global charge is stored. Using the ramp programming, the Vt characteristic is smoother than in the case of the box pulse, because of the gradual hot carrier injection. In table 4.2 we
Table 4.2. Gate voltage = 9V; programming time = 0.5µs.
The impact of different technological parameters on program/erase speed was studied. The results have shown that to improve the programming window it is important to increase the silicon nanocrystal size and density (covered area); further improvement can be made by capping the silicon nanocrystals with the SiN layer. These results are coherent with the literature. Moreover, the channel doping dose can be increased to improve the programming efficiency, but in this case an adjustment of the program/erase threshold voltage is needed. The last technological parameter studied in this chapter was the tunnel oxide thickness. We demonstrated that its variation strongly impacts the Fowler-Nordheim operation, while the channel hot electron programming depends on it only slightly. After these considerations we optimized the silicon nanocrystal memory stack, comparing this device with the Flash floating gate. Concerning the channel hot electron programming operation, the Si-nc cell reached the same performance as the floating gate, which means a 4V programming window in 3.5µs with a ramped gate voltage (1.5V/µs). For the Fowler-Nordheim erase operation an improvement is seen with respect to the floating gate device due to the thinner tunnel oxide. The optimized Si-nc cell is erased with a ramped gate (5kV/s) in 200µs, whereas the floating gate reaches a 4V programming window in 500µs.
Table 5.1. Performance comparison achieved for the optimized silicon nanocrystal and floating gate memory cells.
This thesis work concerns an experimental and modeling study of silicon nanocrystal memories, which represent one of the most interesting solutions to replace the floating gate Flash device. The objective of this work is to understand the physical mechanisms that govern the behavior of the silicon nanocrystal cell, in order to optimize the device architecture and to compare the results with the standard Flash cell.
In the first chapter we present the economic context, the evolution and the operation of Flash-EEPROM memories. Then a detailed description of the technology, the behavior and the scaling limitations is provided. In conclusion we present the possible solutions to overcome these problems.
The second chapter presents the experimental setup and the characterization methods used to measure the performance of the silicon nanocrystal memory cell. In addition, the impact of the main technological parameters, such as the nature of the nanocrystals, the presence of a silicon nitride layer, the channel doping dose and the tunnel oxide thickness, is analyzed. An optimization of the memory cell stack is also proposed in order to compare its performance with that of the floating gate Flash cell.
In the third chapter, the impact of the main technological parameters on reliability (endurance and data retention) is studied. The performance of the silicon nanocrystal memory for applications over an extended temperature range [-40°C; 150°C] is also evaluated, showing for the first time a cell endurance of up to 1 million cycles with a final programming window of 4V. To conclude, the proposed optimized cell is compared with the floating gate Flash cell.
Chapter four describes a new dynamic measurement technique for the drain current consumed during hot electron injection. This procedure allows the energy consumption to be evaluated once a programming operation is completed. This method is applied for the first time to floating gate and silicon nanocrystal memory cells. A study concerning the type of pulses used during programming and the impact of the technological parameters is presented in this chapter.
- When the nanocrystal size, and therefore the covered area of the cell, is increased, the programming window increases and in particular the Fowler-Nordheim erase operation is improved. We found that using the standard memory stack, a 95% coverage would be required to obtain a 4V programming window, a percentage that is not consistent with the operating principle of the silicon nanocrystal cell. In order to improve the programming window and optimize the Si-nc cell stack, we considered the increase of the coupling factor as the key point, as explained in the literature for floating gate Flash memories. Two different recipes were developed to obtain silicon nanocrystals with an average size of 9nm and 12nm, which allow covered areas of 46% and 76% to be reached, respectively. Moreover, with the optimization of the coupling factor it was possible to reduce the ONO layer down to 10.5nm equivalent thickness, which made it possible to increase the vertical electric field during the erase operation. This thickness value was chosen in accordance with the recipes available in the STMicroelectronics production line.
- The presence of the Si3N4 layer capping the silicon nanocrystals increases the charge trapping probability and the covered channel area. The coupling factor is increased and therefore the programming window increases as well. In the CDSEM observations we noted that the Si3N4 layer grows around the silicon nanocrystals. In this case it is not possible to confirm whether the improvements obtained on the programming window are due to the presence of the Si3N4 layer or to the increase of the covered area. In figure 6.2 we show the programming window results obtained using samples with different silicon nanocrystal dimensions and with the Si3N4 layer. In this case we can consider that the improvement obtained depends mainly on the covered area and only weakly on the increase of the charge trapping probability. Even though the presence of the Si3N4 layer is useful to improve the programming window, we decided to avoid this process step in order to minimize parasitic charge trapping effects.
Acknowledgments
First and foremost I want to thank my industrial chef in STMicroelectronics (Rousset) Jean-Luc Ogier. He has taught me (like a second mother). I appreciate all his contributions of time, ideas, and funding to make my Ph.D. experience productive and stimulating. Not less I want to thank my academic advisor Frédéric Lalande and his collaborator Jérémy Postel-Pellerin.
It has been an honor to be their Ph.D. student. They introduced me at the university of Marseille and they gave me the possibility to use innovative equipments for my researches.
I equally thank Gabriel Molas, who supervised me at CEA-Leti (Grenoble) and introduced me to Lia Masoero; together we reached important results in our research in an enjoyable atmosphere.
I am especially grateful for the support of the Silicon Nanocrystal team: Philippe Boivin,
Keywords: polymeric porous media, multilayer coextrusion, lithium-ion battery separator, PAH adsorption, microscale simulation, 3D-PTPO
Due to the combination of the advantages of porous media and polymer materials, polymeric porous media possess controllable porous structures, easily modifiable surface properties, good chemical stability, etc., which make them applicable in a wide range of industrial fields, including adsorption, battery separators, catalyst carriers, filters, and energy storage. Although various preparation methods exist, such as the template technique, emulsion method, phase separation method, foaming process, electrospinning, top-down lithographic techniques, and the breath figure method, the large-scale preparation of polymeric porous media with controllable pore structures and specified functions is still a long-term goal in this field, and it is one of the core objectives of this thesis. Therefore, in the first part of the thesis, polymeric porous media are first designed based on the specific application requirements. The designed polymeric porous media are then prepared by the combination of multilayer coextrusion and traditional preparation methods (template technique, phase separation method). This combined preparation method integrates the advantages of multilayer coextrusion (continuous process, economic pathway for large-scale fabrication, flexibility in the polymer species, and tunable layer structures) and of the template/phase separation methods (simple preparation process and tunable pore structure). Afterwards, the applications of the polymeric porous media in polycyclic aromatic hydrocarbon adsorption and as lithium-ion battery separators have been investigated. More importantly, in the second part of the thesis, numerical simulations of particle transport and deposition in porous media are carried out to explore the mechanisms that form the theoretical basis for the above applications (adsorption, separation, etc.). Transport and deposition of colloidal particles in porous media are also of vital importance in other applications such as aquifer remediation, fouling of surfaces, and therapeutic drug delivery. Therefore, it is worthwhile to have a thorough understanding of these processes as well as of the dominant mechanisms involved. In this part, the microscale simulations of colloidal particle transport and deposition in porous media are achieved by a novel colloidal particle tracking model, called the 3D-PTPO (Three-Dimensional Particle Tracking model by Python® and OpenFOAM®) code. The particles are considered as mass points during transport in the flow and their volume is reconstructed when they are deposited. The main feature of the code is to take into account the modification of the pore structure, and thus of the flow streamlines, due to the deposit. Numerical simulations were first carried out in a capillary tube considered as an element of an idealized porous medium composed of capillaries of circular cross section, to revisit the work of Lopez and co-authors by considering a more realistic 3D geometry and to extract the most relevant quantities by capturing the physics underlying the process. The microscale simulation was then extended to elementary pore structures represented as capillary tubes with converging/diverging geometries (tapered pipe and venturi tube), to explore the influence of the pore geometry and of the particle Péclet number (Pe) on particle deposition. Finally, the coupled effects of surface chemical heterogeneity and hydrodynamics on particle deposition in porous media were investigated in a three-dimensional capillary with periodically repeating chemically heterogeneous surfaces.
This thesis mainly includes the following aspects:
1) Multilayer polypropylene (PP)/polyethylene (PE) lithium-ion battery (LIB) separators were prepared via the combination of multilayer coextrusion (MC) and CaCO 3 template method (CTM).
The as-prepared separators (referred to as MC-CTM PP/PE) exhibit higher porosity, higher ionic conductivity and better battery performance than the commercial trilayer separators (Celgard ® 2325).
Furthermore, the MC-CTM PP/PE not only possesses an effective thermal shutdown function, but also shows strong thermal stability at high temperature (>160 °C). The thermal shutdown of MC-CTM PP/PE can be adjusted widely over the temperature range from 127 °C to 165 °C, which is wider than that of Celgard® 2325. These competitive advantages, brought by the convenient and cost-effective preparation method proposed in this work, make MC-CTM PP/PE a promising alternative to the commercialized trilayer LIB separators.
2) Multilayer PP/PE LIB separators (MC-TIPS PP/PE) with a cellular-like, submicron-grade pore structure are efficiently fabricated by the combination of multilayer coextrusion and thermally induced phase separation (TIPS). In addition to the effective shutdown function, the as-prepared separator also exhibits a wider shutdown temperature window and stronger thermal stability than Celgard® 2325; the dimensional shrinkage is negligible up to 160 °C. Compared to the commercial separator, the MC-TIPS PP/PE has a higher porosity (54.6%) and electrolyte uptake (157%), leading to a higher ionic conductivity (1.46 mS cm-1) and better battery performance. These significant characteristics make the MC-TIPS PP/PE a promising candidate for high-performance LIB separators.
3) Porous polystyrene (PS) membranes were prepared via the combination of multilayer coextrusion and the CaCO3 template method. The effects of etching time, CaCO3 content, and membrane thickness on the porous structure are investigated and can be used to regulate and control it. To demonstrate the adsorption performance of porous PS membranes towards PAHs, pyrene is used as the model polycyclic aromatic hydrocarbon. Compared with solid PS membranes, porous PS membranes exhibit much higher adsorption performance on trace pyrene. The adsorption kinetics and isotherm of the porous PS membranes are well fitted by the pseudo-second-order kinetic model and the Freundlich isotherm model, respectively.
4) The transport and deposition of colloidal particles at the pore scale are simulated by 3D-PTPO code, using a Lagrangian method. This consists in the three-dimensional numerical modeling of the process of transport and deposition of colloidal particles in capillaries of a circular cross section. The velocity field obtained by solving the Stokes and continuity equations is superimposed to particles diffusion and particles are let to adsorb when they closely approach the solid wall. The particles are considered as a mass point during transport in the flow and their volumes are reconstructed when they are deposited, subsequently the flow velocity field is updated before a new particle is injected.
The results show that both the adsorption probability and the surface coverage are decreasing functions of the particle Péclet number (Pe; a standard definition is recalled just after this list). At low Péclet number values, when diffusion is dominant, the surface coverage approaches the Random Sequential Adsorption (RSA) value, while at high Péclet number values it drops drastically.
5) The microscale simulation of colloidal particle deposition in capillaries with converging/diverging geometries (tapered pipe and venturi tube) is approached by the improved 3D-PTPO code. The influence of the particle Péclet number (Pe) and of the pore shape on particle deposition was investigated. The results show that both the deposition probability and the surface coverage feature a plateau in the diffusion-dominant regime (low Pe) and are decreasing functions of Pe in the convection-dominant regime (high Pe). The results of the spatial density distribution of deposited particles show that, in the diffusion-dominant regime, the particle distribution is piston-like, while the distribution is more uniform in the convection-dominant regime. In addition, for the venturi tube with steep corners, the density of deposited particles is relatively low in the vicinity of the pore-throat entrance and exit due to streamline modification. The maximum dimensionless surface coverage Γ_final/Γ_RSA is studied as a function of Pe. The declining trend observed at high Pe is in good agreement with experimental and simulation results found in the literature.
6) The coupled effects of surface chemical heterogeneity and hydrodynamics on particle deposition in capillaries are investigated by microscale simulation using the improved 3D-PTPO code. The porous medium is idealized as a bundle of capillaries and, as an element of the bundle, a three-dimensional capillary with a periodically repeating chemically heterogeneous surface (crosswise-strip patterned and chessboard patterned) is considered. The dependence of the deposition probability and of the dimensionless surface coverage (Γ/Γ_RSA) on the frequency of the pitches (λ), on Pe and on the reactive area fraction (θ), as well as the spatial density distribution of deposited particles, were studied. The results indicate that particles tend to deposit at the leading and trailing edges of the favorable strips, and that the deposition is more uniform along the patterned capillary compared to the homogeneous one. In addition, similar to the homogeneous capillary, the maximum dimensionless surface coverage Γ_final/Γ_RSA is a function of Pe. Besides, for the same θ, the deposition probability is positively correlated with λ. Moreover, the overall deposition probability increases with θ, which agrees well with the patchwise heterogeneity model.
In the last chapter, the general conclusions and perspectives of this thesis are discussed.
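For reference, the particle Péclet number used throughout the summaries above compares advection to diffusion at the pore scale. One common convention is given below; the characteristic length actually used in the thesis (capillary radius or diameter) may differ, so this should be read as a generic definition rather than the thesis's exact one:

\mathrm{Pe} = \frac{U\,\ell}{D}, \qquad D = \frac{k_{\mathrm{B}}T}{6\pi\mu a_{\mathrm{p}}} \;\;\text{(Stokes--Einstein)}

where U is the mean flow velocity, ℓ a characteristic pore dimension, D the particle diffusion coefficient, k_B T the thermal energy, μ the fluid viscosity and a_p the particle radius.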
Materials can be divided into dense materials and porous materials according to their density; Figure 1.1 shows a summary of this classification. The study of porous materials is an important branch of materials science and plays a significant role in scientific research and industrial production. Each porous medium is typically composed of a solid skeleton and a void space, which is usually filled with at least one type of fluid (liquid or gas) [Bear; Coutelieris; 3]. There are many examples of natural porous materials (hollow bamboo, honeycomb, and the alveoli in the lungs) and man-made porous materials (macroporous polymers, porous aluminum, and porous silica), as shown in Figure 1.2 [4,5].
Polymeric porous medium is one of the most important components of organic porous materials, which is the main object of this thesis. Polymeric porous medium (porous polymer) has the combined advantages of porous media and polymer materials. It possesses high porosity, abundant microporous structure and low density. The various preparation methods, controllable pore structure and easily modified surface properties make the polymeric porous media promising materials in a wide range of application fields including adsorption, battery separator, filter, energy storage, catalyst carrier, and biomedical science [5] . Therefore, it is quite interesting and worthy to study the new function of polymeric porous media and to develop a novel preparation method for this widely-used material.
Figure 1.1 The summary of the classification of materials [3]
Figure 1.2 Examples of natural and man-made porous materials, e.g. macroporous polymer, porous aluminum, and porous silica [5]
Prior to the preparation and realization, the functions and the structures of polymeric porous media should be designed. The foremost aspect to consider is the main functions that we would like to realize in the media, which lead to specific application properties. Secondly, we need to determine the key factors that are directly related to the desired function, such as the pore geometry, pore size and matrix porosity of the materials. Thirdly, based on the above considerations, the experimental scheme needs to be designed to prepare the polymeric porous media. Although various preparation methods exist, such as the template technique, emulsion method, phase separation method, foaming process, electrospinning, top-down lithographic techniques, and the breath figure method, the large-scale preparation of polymeric porous media with controllable pore structures and specified functions is still a long-term goal in this field and one of the core objectives of this thesis. A new approach, forced-assembly multilayer coextrusion, has been used to economically and efficiently produce multilayer polymers with individual layer thicknesses varying from the micron scale to the nanoscale. This advanced polymer processing technique has many advantages, including a continuous process, an economic pathway for large-scale fabrication, flexibility in the polymer species, and the capability to produce tunable layer structures. Therefore, in Part I of this thesis, polymeric porous media are designed based on the specific application requirements and prepared by the combination of multilayer coextrusion and traditional preparation methods (template technique, phase separation method). This approach combines the advantages of multilayer coextrusion and of the template/phase separation methods (simple preparation process and tunable pore structure).
Afterwards, the applications of the polymeric porous media in PAHs adsorption and lithium-ion battery separator have been investigated.
More importantly, the basic mechanism of the above application processes (PAHs adsorption process or lithium-ion transport through separator) is based on the particle transport and deposition in porous media. Thus it is essential to have a thorough understanding of particle transport and deposition processes in porous media as well as the dominant mechanisms involved. In addition, transport and deposition of colloidal particles in porous media is of vital importance to other engineering and industrial applications [START_REF] Scozzari | Water security in the Mediterranean region: an international evaluation of management, control, and governance approaches[END_REF] , such as particle-facilitated contaminants transport [START_REF] Corapcioglu | Colloid-facilitated groundwater contaminant transport[END_REF] ,
water purification [START_REF] Herzig | Flow of suspensions through porous media-application to deep filtration[END_REF] , wastewater treatment [START_REF] Stevik | Retention and removal of pathogenic bacteria in wastewater percolating through porous media: a review[END_REF] , or artificial recharge of the aquifers [START_REF] Behnke | Clogging in surface spreading operations for artificial ground-water recharge[END_REF] . In order to comprehend colloidal particle transport mechanisms in porous media, computational fluid dynamics (CFD) numerical simulations have been carried out to visualize and analyze the flow field [START_REF] Kanov | Particle tracking in open simulation laboratories [C]. High performance computing, networking, storage and analysis[END_REF] .
Taking water filtration as an example, numerical techniques can be used to model the transport and dispersion of contaminants within a fluid [Scott]. Basically, there are two fundamental theoretical frameworks for studying the transport of colloidal particles in porous media, classified as the Eulerian and Lagrangian methods [Yao; Rajagopalan]. The Eulerian method describes what happens at a fixed point in space, while the Lagrangian method implies a coordinate system moving along with the flow [Salama]. Pore geometries are often simplified into one or two dimensions to reduce the computational cost of the numerical simulations. However, this simplification restricts the particle motion and reduces the accuracy of the results. Therefore, it is necessary to improve the simulation model to enable three-dimensional simulations in more realistic geometries [Bianco]. Moreover, particle transport and deposition in heterogeneous porous media have recently been an area of intense investigation. So far, there are few reported works on the combined effects of hydrodynamics and surface heterogeneity on colloidal particle deposition in porous media. Hence, it is necessary to better understand the mechanisms responsible for these phenomena, which is one of the objectives of the present work.
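To make the two descriptions concrete, their standard forms for a dilute suspension of Brownian particles are recalled below; these are generic textbook expressions, not equations quoted from the references above:

\text{Eulerian:}\quad \frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c = D\,\nabla^{2} c,
\qquad
\text{Lagrangian:}\quad \mathbf{x}(t+\Delta t) = \mathbf{x}(t) + \mathbf{u}\big(\mathbf{x}(t)\big)\,\Delta t + \sqrt{2D\,\Delta t}\;\boldsymbol{\xi},

where c is the particle concentration, u the fluid velocity field, D the particle diffusion coefficient, and ξ a vector of independent standard normal random numbers drawn at each time step.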
The objective of Part II of this thesis is the three-dimensional microscale simulation of colloidal particle transport and deposition in both homogeneous and heterogeneous porous media by means of CFD tools using the Lagrangian method, in order to extract the most relevant quantities by capturing the physics underlying the process. To perform the simulations, a novel colloidal particle tracking model, namely the 3D-PTPO (Three-dimensional Particle Tracking model by Python® and OpenFOAM®) code based on the Lagrangian method, is developed in the present study. The main content of Part II can be summarized as follows: firstly, the particle deposition behavior is investigated in a homogeneous cylinder in order to revisit the previous work of Lopez et al. [Lopez] by considering a more realistic 3D geometry; their work was indeed restricted to a slot-like geometry unlikely to be encountered in real porous media. In addition, it is necessary to validate the fundamental transport properties during the simulation, since the initial validation of the model (involving deposition onto homogeneous collectors) is important for the subsequent simulations of particle transport and deposition in more complex pore geometries. Secondly, the three-dimensional numerical modeling of particle transport and deposition in more complex pore geometries of circular cross section (tapered pipe or venturi tube) is carried out to explore the influence of the pore geometry on the flow field as well as on the particle transport and deposition properties. Thirdly, the 3D-PTPO model is improved by incorporating surface chemical heterogeneity, and the combined effects of surface heterogeneity and hydrodynamics on the particle deposition behavior are investigated.
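As an illustration of the Lagrangian approach described above, the following is a minimal, self-contained sketch of an advection-diffusion-deposition loop in a single cylindrical capillary with an analytical Poiseuille profile. It is not the 3D-PTPO code itself (which obtains the velocity field from OpenFOAM®, reconstructs the volume of deposited particles and updates the flow field), and all numerical values are hypothetical, chosen only so that the example runs.

import numpy as np

# Hedged sketch of one Lagrangian particle-tracking loop in a cylindrical
# capillary (radius R, length L) with an analytical Poiseuille velocity
# profile. All parameter values below are hypothetical.
R, L = 5e-6, 50e-6           # capillary radius and length (m)
a_p = 1e-7                   # particle radius (m)
U_mean = 1e-4                # mean axial velocity (m/s)
D = 2e-12                    # particle diffusion coefficient (m^2/s)
dt = 1e-4                    # time step (s)
rng = np.random.default_rng(0)

def velocity(pos):
    """Axial Poiseuille velocity at the particle position (no-slip wall)."""
    r2 = pos[0]**2 + pos[1]**2
    return np.array([0.0, 0.0, 2.0 * U_mean * (1.0 - r2 / R**2)])

def track_one_particle(max_steps=100_000):
    # Inject the particle uniformly over the open inlet cross-section.
    r = (R - a_p) * np.sqrt(rng.random())
    theta = 2.0 * np.pi * rng.random()
    pos = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    for _ in range(max_steps):
        # Advection by the local fluid velocity plus a Brownian step.
        pos = (pos + velocity(pos) * dt
               + np.sqrt(2.0 * D * dt) * rng.standard_normal(3))
        if np.hypot(pos[0], pos[1]) >= R - a_p:
            return "deposited"            # particle reaches the pore wall
        if pos[2] >= L:
            return "escaped"              # particle leaves through the outlet
        pos[2] = max(pos[2], 0.0)         # reflect at the inlet plane
    return "still in the pore"

outcomes = [track_one_particle() for _ in range(100)]
print("deposition probability ~", outcomes.count("deposited") / len(outcomes))

In the actual model, the analytical profile would be replaced by the velocity field interpolated from the OpenFOAM® solution, and each deposition event would trigger an update of the pore geometry and of that field before the next particle is injected.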
Organization of the thesis
The content of this thesis has been subdivided into the following nine chapters:
In Chapter 1, the general overview of the current research topic, the objectives of the thesis, as well as the outline have been explained.
In Chapter 2, firstly, a thorough review of existing work on polymeric porous media has been presented, including the evolution, the detailed preparation methods, as well as the applications.
Secondly, particle transport and deposition in porous media are reviewed. The transport and deposition mechanisms, the research methods, as well as the previous work are all illustrated in detail. In particular, reviews of particle deposition onto homogeneous and heterogeneous substrates are summarized respectively.
In Chapter 3, a facile and continuous method to prepare porous multilayer polypropylene (PP)/polyethylene (PE) membranes via multilayer coextrusion and CaCO 3 template method is proposed. The physical and electrochemical properties of the separators for lithium-ion batteries (LIBs) have been investigated and compared with the commercial separators.
In Chapter 4, multilayer PP/PE LIB separators with cellular-like submicron grade pore structure are efficiently fabricated by the combination of multilayer coextrusion and thermal induced phase separation (TIPS). The physical and electrochemical properties of the separators have been investigated and compared with the commercial separators.
In Chapter 5, a convenient and continuous method to prepare porous polystyrene (PS) membranes via multilayer coextrusion and template method is proposed. To investigate the adsorption performance of porous PS membranes on PAHs, pyrene is used as the model compound for polycyclic aromatic hydrocarbon.
In Chapter 6, the particle transport and deposition onto a homogenous porous medium composed of a bundle of capillaries of a circular cross section has been carried out to validate the fundamental transport properties, which is important for the following simulations of particle transport and deposition in complex porous media.
In Chapter 7, particle transport and deposition in homogeneous porous media with converging/diverging geometries (tapered pipe and venturi tube) are investigated by three-dimensional microscale simulation. The influence of the pore geometry and of the particle Péclet number (Pe) on the particle deposition is explored.
In Chapter 8, particle transport and deposition onto capillaries with periodically repeating chemically heterogeneous surfaces are investigated, focusing on the coupled effects of surface chemical heterogeneity and hydrodynamics on particle deposition.
The presence of pores also provides porous media with thermal/acoustic insulation capabilities [6].
Figure 2.1 Schematic illustration of different morphology of pores [START_REF] Ishizaki | Porous materials process technology and applications[END_REF] There are several significant structural characteristics of porous media including pore size, pore geometry, pore surface functionality, and framework structure such as topology, composition, and functionality [5] , as is shown in Figure 2.2. Surface area is a very important parameter that is employed to evaluate the pore structure; generally, pores with a smaller size contribute predominantly to the generation of materials with high surface area. Pore geometry includes tubular, spherical, and network-type morphologies that can be either assembled into ordered arrays or disorders. In addition to the physical structures, the functionalities of the pore surface and the framework are also important [START_REF] Doonan | Exceptional ammonia uptake by a covalent organic framework[END_REF] , which can be engineered by post-modification processes or through the use of functional monomers [START_REF] Rzayev | Nanochannel array plastics with tailored surface chemistry[END_REF] .
Figure 2.2 Illustration of pore surface, pore size, pore geometry, and framework structure of porous media [5] Porous media can be classified by the above structural characteristics such as pore size, pore geometries and framework materials:
According to the International Union of Pure and Applied Chemistry (IUPAC) recommendation, the classification of porous media by pore size is shown in Figure 2.3 [START_REF] Schaefer | Engineered porous materials [J][END_REF] . Macroporous media are defined as media with pore size larger than 50 nm in diameter, mesoporous media with pore size in the range of 2-50 nm, and microporous media with pore size smaller than 2 nm [START_REF] Ishizu | Ordered microporous surface films formed by core-shell-type nanospheres[END_REF] .
Figure 2.3 Classification of porous media according to pore size [START_REF] Schaefer | Engineered porous materials [J][END_REF] Porous media can also be classified by the pore geometries. Porous media consisting of porous particles have the configuration of both small pore networks and large pore networks (Figure 2.4g) [START_REF] Ishizaki | Porous materials process technology and applications[END_REF] . (f) large pores connected by small pores; (g) both small pore networks and large pore networks [START_REF] Ishizaki | Porous materials process technology and applications[END_REF] More importantly, classification based on the framework materials is of vital important in the application of porous media. Depending on the desired properties such as mechanical strength, chemical stability and high-temperature resistance, materials are selected for porous solid including paper, polymer, metal, glass, and ceramic. Polymeric porous media especially have received an increased level of research interest due to their potential to merge the properties of both porous materials and polymers [START_REF] Sun | Porous polymer catalysts with hierarchical structures[END_REF] . Firstly, porous polymers have the advantages of high surface area and well-defined porosity [START_REF] El-Kaderi | Designed synthesis of 3D covalent organic frameworks[END_REF] . Secondly, the porous polymers possess good processability, which generates obvious advantages in many applications fields [START_REF] Kim | Functional nanomaterials based on block copolymer self-assembly [J][END_REF] . Thirdly, the diversity of preparation routes facilitates the construction and design of numerous porous polymers [START_REF] Jiang | Microporous poly(tri(4-ethynylphenyl)amine) networks: synthesis, properties, and atomistic simulation[END_REF] . Finally, the polymeric frameworks are composed of light elements due to their organic nature, which provides a weight advantage in many applications [START_REF] Cote | Porous, crystalline, covalent organic frameworks[END_REF] .
Polymeric porous media have been synthesized [START_REF] Kaur | Porous organic polymers in catalysis: opportunities and challenges[END_REF] by incorporating monomers into well-known step growth and chain-growth polymerization processes to provide cross-links between propagating polymer chains since the early 1960s, resulting in three-dimensional network materials. In the late 1980s, the copolymerization strategy and the use of discrete molecular porogens were combined to create molecularly imprinted polymers. These materials have been extensively investigated in sensing and catalysis applications [START_REF] Cooper | Polymer synthesis and characterization in liquid/supercritical carbon dioxide[END_REF] . Besides, it is often too fast and difficult to control the polymerization/cross-linking kinetics, yielding the macro-pores rather than the micro-pores. In the late 1990s, attempts were made [START_REF] Cooper | Polymer synthesis and characterization in liquid/supercritical carbon dioxide[END_REF] to synthesize microporous organic polymer materials from monomers. Slower bond-forming reactions were adapted promote the formation of pores that closely match the potential guest molecules dimension [START_REF] Kaur | Porous organic polymers in catalysis: opportunities and challenges[END_REF][START_REF] Cooper | Polymer synthesis and characterization in liquid/supercritical carbon dioxide[END_REF] . The ability to control the structure of pores and incorporate desired functionalities into the material has benefited from the great strides being made in the preparation of polymeric porous media by various preparation methods, as summarized in the following chapter.
Preparation methods
The past several years have witnessed an expansion of various methods directed at preparing polymeric porous media, including direct templating, block copolymer self-assembly, and direct synthesis methodologies. These techniques have been developed mainly based on the properties of the raw materials and the targeted applications. Polymeric porous media prepared by different techniques possess different characteristics, such as average pore size, porosity, and mechanical properties [START_REF] Sa-Nguanruksa | Porous polyethylene membranes by template-leaching technique: preparation and characterization[END_REF] . Hereafter, the preparation methods will be introduced in details.
Direct templating method
Direct templating method is a simple and versatile approach for the direct replication of the inverse structure of the preformed templates with stable morphology [5] . A large number of polymeric porous media have been successfully prepared by direct templating method, including individual spherical porous polymers from solid spherical nanoparticle templates (Figure 2.5a), tubular porous polymers from tubular porous templates (Figure 2.5b), and ordered macroporous polymers from colloidal crystal templates (Figure 2.5c) [5] .
Figure 2.5 Schematic illustration of fabrication of (a) spherical porous polymers, (b) tubular porous polymers, (c) ordered macroporous polymers [5] There are several requirements for the successful preparation of polymeric porous media by direct templating method. Firstly, in order to realize a faithful replication of the template, the surface properties of the templates should be compatible with the raw materials selected for the polymeric framework. Secondly, the templates should have well-defined structures. In this way, one can tune the predetermined porous structures of polymer replicas by a rational choice of templates. Thirdly, after templating the templates should be easily removed. Figure 2.6 outlines the template removal conditions employed for various templates [5] . Finally, the polymeric walls should provide the ability to incorporate designable functionalities for targeted applications [5] .
Figure 2.6 Summary of removal conditions of various templates [5] Many researches have been carried out by this method. For example, Johnson et al. [START_REF] Johnson | Ordered mesoporous polymers of tunable pore size from colloidal silica templates[END_REF] successfully prepared the ordered mesoporous polymers by replication of a matrix made from nanosized silica spheres by monomers such as divinylbenzene and ethyleneglycol dimethacrylate. 3D ordered porous cross-linked PDVB-b-PEDMA materials with tunable pore size in the range of 15-35 nm were obtained after copolymerization and dissolution of the silica template. Colvin et al. [START_REF] Jiang | Template-directed preparation of macroporous polymers with oriented and crystalline arrays of voids [J][END_REF] used monodisperse silica particles as a templates to prepared ordered macroporous PS, polyurethane, and poly(methyl methacrylate) membranes. Nguyen et al. [START_REF] Chakraborty | Hierarchically porous organic polymers: highly enhanced gas uptake and transport through templated synthesis[END_REF] demonstrated that hierarchically porous organic polymers containing both micro-and meso-pores can be realized by cobalt-catalyzed trimerization of 1,4-diethynylbenzene inside a mesoporous silica aerogel template. Caruso et al. [START_REF] Wang | Fabrication of polyaniline inverse opals via templating ordered colloidal assemblies[END_REF] prepared 3D ordered porous polyaniline lattice with excellently electrical properties, magnetic properties and optical properties.
Phase separation method
Phase separation method, also known as phase inversion or solution precipitation technique, is one of the most promising strategies for the production of polymeric porous media [START_REF] Pavia | Morphology and thermal properties of foams prepared via thermally induced phase separation based on polylactic acid blends [J][END_REF] . Open or closed pore morphologies with pore sizes between 0.1 nm and 100 μm can be produced depending on the process conditions [START_REF] Aram | A review on the micro-and nanoporous polymeric foams: Preparation and properties [J][END_REF][START_REF] Zhao | Preparation of microporous silicone rubber membrane with tunable pore size via solvent evaporation-induced phase separation[END_REF] . In this method, a polymer dissolved solvent is induced to separate from the solvent in a controlled manner by chemical or thermal cooling processes, the original solvent accumulated in isolated zones forming either two distinct phases or two bicontinuous phases.
Finally, the removal of the solvent phase left the final porous structure [START_REF] Aram | A review on the micro-and nanoporous polymeric foams: Preparation and properties [J][END_REF] . The phase separation process is determined by the kinetic and thermodynamic parameters, such as the chemical potentials and diffusivities of the individual components. The key to understand the formation mechanism of pore structure is the identification and description of the phase separation process [START_REF] Strathmann | Production of microporous media by phase inversion processes[END_REF] . Normally, phase separation methods can be classified into four main methods: precipitation by cooling, called thermally induced phase separation (TIPS) [START_REF] Yave | Syndiotactic polypropylene as potential material for the preparation of porous membranes via thermally induced phase separation (TIPS) process[END_REF] ; precipitation in a non-solvent (typically water), called nonsolvent-induced phase separation (NIPS) [START_REF] Stropnik | Polymeric membrane formation by wet-phase separation; turbidity and shrinkage phenomena as evidence for the elementary processes[END_REF] ; precipitation by absorption of non-solvent (water) from the vapor phase, called vapor induced phase separation (VIPS) [START_REF] Bikel | Micropatterned polymer films by vapor-induced phase separation using permeable molds[END_REF] ; and solvent evaporation-induced phase separation (EIPS) [START_REF] Kim | Preparation of a unique microporous structure via two step phase separation in the course of drying a ternary polymer solution[END_REF] .
Thermally induced phase separation (TIPS) technique is currently receiving much attention in industrial applications for the production of polymeric porous media. The TIPS process is applicable to a wide range of polymers [START_REF] Lloyd | Microporous membrane formation via thermally induced phase separation. I. Solid-liquid phase separation[END_REF] , allow greater flexibility, higher reproducibility, and effective control of the final pore size of porous polymer [START_REF] Ramaswamy | Fabrication of poly (ECTFE) membranes via thermally induced phase separation[END_REF] . In principle, TIPS is based on a rule that a polymer is miscible with a diluent at high temperature, but demixes at low temperature. A typical TIPS process begins by dissolving a polymer in a diluent to form a homogeneous solution at an elevated temperature, which is cast or extruded into a desired shape. Then, a cooling bath is employed to induce a phase separation (e.g. liquid-liquid, solid-liquid, liquid-solid, or solid-solid demixing) [START_REF] Matsuyama | Structure control of anisotropic and asymmetric polypropylene membrane prepared by thermally induced phase separation[END_REF] based on the changes in thermal energy, the de-mixing of a homogeneous polymer solution was induced into a multi-phase system [START_REF] Martí Nez-Pé Rez C A | Scaffolds for tissue engineering via thermally induced phase separation[END_REF] . Figure 2.7 presented a typical temperature composition phase diagram for a binary polymer-solvent system with an upper critical solution temperature. When the temperature of a solution is above the binodal curve, the polymer solution is homogeneous. A polymer-rich phase and a solvent-reach phase coexist in a solution in the L-L demixing region [START_REF] Vandeweerdt | Temperature-concentration behavior of solutions of polydisperse, atactic poly (methyl methacrylate) and its influence on the formation of amorphous, microporous membranes[END_REF] .
The maximum point is the critical point of the system, at which both the binodal and the spinodal curves merge [START_REF] Martí Nez-Pé Rez C A | Scaffolds for tissue engineering via thermally induced phase separation[END_REF] . The porous morphology of the resulting porous polymer can be controlled by the balance of various parameters [START_REF] Lloyd | Microporous membrane formation via thermally induced phase separation. I. Solid-liquid phase separation[END_REF] : the cooling depth, cooling rate, polymer type, polymer concentration, diluent composition and the presence of additives [START_REF] Zhang | Preparation of high density polyethylene / polyethyleneblock -poly (ethylene glycol) copolymer blend porous membranes via thermally induced phase separation process and their properties[END_REF] .
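As background to this phase diagram (a standard Flory-Huggins description quoted from textbook thermodynamics rather than from the references above), the binodal and spinodal curves of Figure 2.7 follow from the free energy of mixing of a polymer solution:

\frac{\Delta G_{\mathrm{mix}}}{RT} \;=\; \frac{\phi}{N}\ln\phi \;+\; (1-\phi)\ln(1-\phi) \;+\; \chi\,\phi(1-\phi),
\qquad
\frac{\partial^{2}}{\partial\phi^{2}}\!\left(\frac{\Delta G_{\mathrm{mix}}}{RT}\right) = 0 \;\;\text{(spinodal)},

where φ is the polymer volume fraction, N the degree of polymerization and χ the temperature-dependent polymer-solvent interaction parameter; cooling a homogeneous solution below the binodal drives the liquid-liquid demixing exploited by TIPS.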
Figure 2.7 Schematic representation of a binary phase diagram of a polymer solution [Martínez-Pérez]
Currently, the TIPS process has been applied to several polymers, including polypropylene (PP) [Yang], polystyrene (PS) [Matsuyama], and poly(vinylidene fluoride) (PVDF) [Cha]. Recently, hydrophilic materials such as cellulose acetate (CA) [Matsuyama], cellulose acetate butyrate (CAB), and polyacrylonitrile (PAN) [Fu] have been used for the fabrication of polymeric porous media [Jang].
Many researches have been carried out by this method. For example, Cheng et al. [START_REF] Cheng | Preparation and performance of polymer electrolyte based on poly(vinylidene fluoride)/polysulfone blend membrane via thermally induced phase separation process for lithium ion battery[END_REF] employed TIPS to fabricate PVDF/polysulfone blend separators that showed the maximum electrolyte uptake of 129.76%. It is promising to develop new kinds of porous separators via TIPS for LIBs.
Matsuyama et al. [START_REF] Matsuyama | Porous cellulose acetate membrane prepared by thermally induced phase separationa [J][END_REF] achieved the first hydrophilic CA hollow fiber membrane using TIPS by liquid-liquid phase separation, and demonstrated that the membrane possess isotropic pore structure without the formation of macrovoids. Fu et al. [START_REF] Fu | Effect of membrane preparation method on the outer surface roughness of cellulose acetate butyrate hollow fiber membrane[END_REF] studied the effect of preparation conditions on the outer surface roughness of CAB hollow fiber membranes prepared via NIPS and TIPS. [START_REF] Jang | Preparation of dual-layer acetylated methyl cellulose hollow fiber membranes via co-extrusion using thermally induced phase separation and non-solvent induced phase separation methods[END_REF] Cui et al. [START_REF] Cui | Preparation of PVDF/PEO-PPO-PEO blend microporous membranes for lithium ion batteries via thermally induced phase separation process[END_REF] prepared porous PVDF via TIPS, the ionic conductivity of corresponding polymer electrolyte reached the standard of practical application for polymer electrolyte, which suggests that microporous PVDF prepared by the TIPS can be used as matrix of polymer electrolyte.
High internal phase emulsion polymerization
High internal phase emulsion (HIPE) polymerization approaches have been applied to the preparation of polymeric porous media. When the volume fraction of the internal phase in a conventional emulsion (Figure 2.8a) is above 74%, which is the maximum packing fraction of uniform spherical droplets (Figure 2.8b), the droplets deform to create polyhedra, and the dispersed phase is surrounded by a thin film of the continuous phase, this conformation is called a HIPE (Figure 2.8c). Polymerization of the continuous phase containing monomers, such as styrene, and cross-linkers, such as DVB, will lock in the HIPE structure, leading to the formation of a porous polymer, called polyHIPE [5] .
Figure 2.8 Schematic representation of the change from a conventional emulsion, through an emulsion with the maximum packing fraction (74 vol%), to a HIPE when increasing the volume fraction of the internal phase [5]
2.1.2.4 Extrusion-stretching method
The extrusion-stretching method is usually utilized to prepare porous polymeric membranes/fibers from either filled or unfilled semi-crystalline polymers. This process comprises two consecutive steps: first, an oriented film is produced by a melt-extrusion process. After solidifying, the film is stretched in a direction either parallel or perpendicular to the original orientation of the polymer crystallites. For filled systems, the second stretching results in partial removal of the solid fillers, yielding a porous structure [Sa-Nguanruksa]. For unfilled systems, the second stretching deforms the crystalline structure of the film and produces slit-like pores. Generally, porous membranes/fibers prepared by this technique have relatively poor tear strength along the orientation direction.
Block copolymer self-assembly method
Generally, the self-assembly process can occur either in a pure block copolymer (BCP) or in a composite of BCP. There are two distinguishable roles for BCPs in the preparation of polymeric porous media. One is that BCPs serve as the source of the framework for the porous polymers. The other is that BCPs serve as the pore template followed by removal of the BCP to generate the pores.
In this regard, there are several diverse mechanisms for pore formation. The pores can be derived from (1) removal of additional components from a self-assembled composite containing a BCP; (2) selective etching of constituent block from a pure self-assembled BCP; (3) physical reconstruction of the morphology of the self-assembled BCPs; and (4) selective cross-linking of dynamic self-assembled BCP vesicles in solution to obtain hollow structured polymers [5,[START_REF] Zhou | Mesoporous membrane templated by a polymeric bicontinuous microemulsion[END_REF] .
Breath figures method
Breath figures method (BFs) is commonly used to prepare honeycomb patterned porous polymeric media by casting a polymer solution from a volatile solvent under high humidity.
Compared to other methods, BFs does not require high-tech equipment such as mask aligners and post-removal treatment. As a consequence, BFs has drawn increased attention for use as dust-free coatings, sensors, biomaterials, and separation membranes [5] . Many researches have been carried out by this method. For example, Barrow et al. [START_REF] Barrow | Physical characterisation of microporous and nanoporous polymer films by atomic force microscopy, scanning electron microscopy and high speed video microphotography[END_REF] observed the process of droplets formation on the surface of polymer solution via high speed microphotographic apparatus. Size and number of water droplets on the solution surface increased with increasing exposure time to a humid atmosphere. Park et al. [START_REF] Park | Hierarchically ordered polymer films by templated organization of aqueous droplets[END_REF] reported the fabrication of PS film with hierarchically ordered porous structure by breath figures. The hierarchical ordering of aqueous droplets on polymer solution is realized by the imposition of physical confinement via various shaped gratings, ordered structure can be tuned by dissolving a bit of surfactant in the polymer solution. Pitois and Francois [START_REF] Pitois | Crystallization of condensation droplets on a liquid surface[END_REF] observed the formation process of water droplets by lightscattering experiments, and they found that the evolution with time of the mean droplet radius by a power law with an exponent of 1/3.
In summary, the preparation of polymeric porous media has already become, and will continue to be, a thriving area of research. Although significant progress has been achieved in the preparation of polymeric porous media, each method has advantages and limitations. The common problem is that the production processes of the above methods are relatively cumbersome, which clearly decreases the production efficiency and increases the cost. Thus, a long-term goal for the preparation of porous polymers remains the development of simple and scalable procedures for their construction [5]. It is essential to find new methods that can provide excellent porous structures without sacrificing high efficiency and low cost. Multilayer coextrusion is an advanced polymer processing technique capable of economically and efficiently producing multilayer polymer materials with individual layer thicknesses varying from the micron scale to the nanoscale. Multilayer coextrusion has many advantages, including a continuous process, an economic pathway for large-scale fabrication, flexibility in the polymer species, and tunable layer structures [Armstrong]. In the present study, a novel strategy is proposed to prepare polymeric porous media with tunable porous structures via multilayer coextrusion combined with the template method and the TIPS method, which is a highly efficient pathway for their large-scale fabrication. Moreover, this method is in principle applicable to any melt-processable polymer.
Applications
Porous polymers can be used in a wide range of application fields such as adsorption materials, filtration/separation materials, gas storage and separation materials, battery separators, encapsulation agents for controlled release of drugs, catalysts, sensors, and electrode materials for energy storage. [5] In this thesis, we mainly focus on the application as adsorption materials and Li-ion battery separators.
Lithium-ion batteries separators
Lithium-ion batteries (LIBs) are the preferred power source for most portable electronics due to their higher energy density, longer cycle life, higher operational voltage and no memory effect as compared to NiMH and NiCd systems [START_REF] Arora | Battery separators[END_REF] . A typical LIB consists of a positive electrode (composed of a thin layer of powdered metal oxide mounted on aluminum foil), a negative electrode (formed from a thin layer of powdered graphite or certain other carbons mounted on a copper foil), a porous membrane soaked in LiPF 6 dissolved in a mixture of organic solvents [START_REF] Arora | Battery separators[END_REF] . The porous membrane is often called as the separator, which is a crucial component for the LIBs. The essential function of LIB separator is to prevent electronic contact, while enabling ionic transport between the negative and positive electrodes. In addition, separator should improve the performance and ensure the safety of LIB application. An ideal separator used in LIBs should own the following features: (1) Electronic insulator to prevent an electric short circuit; (2) Excellent wettability to liquid electrolytes to obtain high lithium ion conductivity;(3) High thermal stability at increased temperature; (4) Mechanical and chemical stability; (5) Other appropriate properties, like thickness, resistance, etc [START_REF] Lu | Porous membranes in secondary battery technologies[END_REF] .
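Quantitatively, separator properties of the kind reported in the preceding chapters (porosity, electrolyte uptake, ionic conductivity) are usually characterized through a few standard relations; the definitions below are the commonly used ones and are recalled as a generic reference, since the exact measurement protocols may differ in this thesis:

\sigma = \frac{d}{R_b\,S}, \qquad
\text{Porosity}(\%) = \frac{W_{\mathrm{wet}}-W_{\mathrm{dry}}}{\rho_{\mathrm{liq}}\,V_{\mathrm{membrane}}}\times 100, \qquad
\text{Electrolyte uptake}(\%) = \frac{W-W_{0}}{W_{0}}\times 100,

where d and S are the separator thickness and area, R_b the bulk resistance measured by impedance spectroscopy, W_wet and W_dry the membrane weights with and without the wetting liquid of density ρ_liq, V_membrane the apparent membrane volume, and W, W_0 the weights after and before soaking in the electrolyte.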
Many methods have been developed to improve separator's mechanical strength, thermal stability, porosity and electrochemical performance. For example, Ye et al. [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] prepared three-dimensional PANI/PI composites via electrospinning and in situ polymerization. Chen et al. [START_REF] Chen | Improved performance of PVdF-HFP/PI nanofiber membrane for lithium ion battery separator prepared by a bicomponent cross-electrospinning method[END_REF] fabricated (PVDF-HFP)/PI double-components nanofiber membrane via electrospinning, followed by thermal calendaring. The process enhances the mechanical property via fusing the PVDF-HFP component which has lower melting temperature [START_REF] Chen | The research progress of Li-ion battery separators with inorganic oxide nanoparticles by electrospinning: A mini review[END_REF] .
With the coming out and development of all kinds of electronics, the application range of LIBs expands gradually. The improvements in safety of LIBs are more important than any time, especially in the newly growing application fields such as electric vehicles and aerospace systems [START_REF] Li | Poly (ether ether ketone) (PEEK) porous membranes with super high thermal stability and high rate capability for lithium-ion batteries[END_REF] . Separator 'shutdown' function is a useful strategy for safety protection of LIBs by preventing thermal runaway reactions. Compared to the single layer separators, polypropylene (PP) /polyethylene (PE) multilayer separators are expected to provide wider shutdown window by combining the lower melting temperature of PE with the high melting temperature strength of PP [START_REF] Venugopal | Characterization of microporous separators for lithium-ion batteries[END_REF] . The traditional method of preparing such multilayer separators is bonding the pre-stretched microporous monolayer membranes into the multilayer membranes by calendaring, adhesion or welding, and then stretched to obtain the required thickness and porosity [START_REF] Deimede | Separators for lithium-ion batteries: A review on the production processes and recent developments[END_REF] , which will enhance the mechanical strength but decrease the production efficiency. Moreover, the separators will suffer significant shrinkage at high temperature due to the residual stresses induced during the stretching process, hereby a potential internal shorting of the cell could occur [START_REF] Baginska | Autonomic shutdown of lithium-ion batteries using thermoresponsive microspheres[END_REF] . During the past decades, various modifications have been devoted to improve the dimensional thermos-stability of separator including the surface dip-coating of organic polymers or inorganic oxides [START_REF] Jeong | Closely packed SiO 2 nanoparticles/poly(vinylidene fluoridehexafluoropropylene) layers-coated polyethylene separators for lithium-ion batteries[END_REF] , and chemically surface grafting [START_REF] Lee | Separator grafted with siloxane by electron beam irradiation for lithium secondary batteries[END_REF] .
However, the coated layers were easily to fall off when the separator is bent or scratched during the battery assembly process. Besides, most of the above approaches focused on modifying or reinforcing the existing separators [START_REF] Zhang | The Effect of Silica Addition on the Microstructure and Properties of Polyethylene Separators Prepared by Thermally Induced Phase Separation[END_REF] , which makes the manufacturing process more complicated and the separators more expensive. Thus it is essential to propose new solutions that can optimize the thermal stability, shutdown property and electrochemical performance without sacrificing the convenient and cost-effective preparation process. The multilayer coextrusion (MC) represents an advanced polymer processing technique that capable of economically and continuously producing multilayer polymers [START_REF] Armstrong | Co-extruded polymeric films for gas separation membranes[END_REF] . Template method and thermal induced phase separation (TIPS) are widely used manufacturing processes for polymeric porous media with well-controlled and uniform pore structure, high porosity, and good modifiability [Jun-li Shi_2013]. To the best of our knowledge, no studies have been reported on the combination of the above methods with multilayer coextrusion to prepare multilayer porous separators. Thus in the present study, a novel strategy is proposed to prepare the multilayer LIBs separators comprising alternated layers of microporous PP and PE layers via the above combination.
Adsorption materials
Nowadays, environmental problems have become a global concern because of their impact on public health [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF]. Various toxic chemicals, such as dyes, oils, heavy metals and polycyclic aromatic hydrocarbons (PAHs), are continuously discharged into the environment as industrial waste, polluting water and soil. These chemicals are recalcitrant and persistent in nature and have low solubility in water but are highly lipophilic [START_REF] Osagie | Adsorption of naphthalene on clay and sandy soil from aqueous solution[END_REF].
Dyes are widely used as coloring agents in many industries, such as the textile, cosmetics, paper, leather, plastics and coating industries. They occur in wastewater in substantial quantities and cause serious environmental problems due to their resistance to degradation. The removal of dyes from the aqueous environment has been widely studied, and numerous methods such as membrane filtration, adsorption, coagulation, chemical oxidation and electrochemical treatment have been developed.
Among these methods, the adsorption technique is especially attractive because of its high efficiency, simplicity of design, and ease of operation [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF]. Oil spills are a risk whenever oil is explored, transported, stored and used, and they cause significant and serious environmental damage.
The common cleanup methods include in situ burning, oil booms, bioremediation, oil dispersants, and oil sorbents. Among these methods, the application of oil sorbents has proven to be an effective and economical means of solving the problem. Researchers have developed a wide range of materials (inorganic mineral products, organic natural products, and synthetic organic products) as adsorbents to concentrate, transfer, and adsorb spilled oils [START_REF] Wang | Open-cell polypropylene/polyolefin elastomer blend foams fabricated for reusable oil-sorption materials[END_REF].
Polycyclic aromatic hydrocarbons (PAHs) belong to a class of chemicals that contain two or more fused benzene rings and are carcinogenic, teratogenic, mutagenic and difficult to biodegrade. They are formed during the incomplete combustion of coal, oil, gas, wood, garbage, or other organic substances, such as tobacco [START_REF] Awoyemi | Understanding the adsorption of polycyclic aromatic hydrocarbons from aqueous phase onto activated carbon[END_REF]. In recognition of their toxicity and high mobility, the World Health Organization (WHO) has recommended a limit for PAHs in drinking water. The European Environmental Agency (EEA) has also included these compounds in its list of priority pollutants to be monitored in industrial effluents [START_REF] Osagie | Adsorption of naphthalene on clay and sandy soil from aqueous solution[END_REF]. In recent years, various techniques have been employed for the removal of PAHs from wastewaters, including biological methods, advanced oxidation processes and adsorption. Since PAHs have toxic effects on microorganisms and the period of biological treatment is relatively long, the application of biological methods is limited. Although advanced oxidation processes are fast, they are also limited by the easy formation of more toxic products [START_REF] Hu | Efficient adsorption of phenanthrene by simply synthesized hydrophobic MCM-41 molecular sieves[END_REF].
Adsorption is a physical separation process in which certain compounds of a fluid phase are transferred to the surface of a solid adsorbent as a result of the influence of Van der Waals forces [START_REF] Osagie | Adsorption of naphthalene on clay and sandy soil from aqueous solution[END_REF] .
The adsorption method is nowadays considered effective for removing persistent organic pollutants and is regarded as superior to other techniques due to its low cost, simplicity of design, high efficiency, ease of operation and ability to treat PAHs over a wide range of concentrations. Moreover, it removes the complete PAH molecule, unlike certain methods which destroy the molecule and leave harmful residues.
Extensive research has been conducted using various adsorbents to adsorb PAHs from contaminated water. Zhang et al. [START_REF] Séquaris | Pyrene and phenanthrene sorption to model and natural geosorbents in single-and binary-solute systems[END_REF] explored the sorption of pyrene and phenanthrene to model and natural geosorbents in single- and binary-solute systems. An et al. [START_REF] An | Stepwise adsorption of phenanthrene at the fly ash-water interface as affected by solution chemistry: experimental and modeling studies[END_REF] investigated the adsorption of phenanthrene onto fly ash, which showed a stepwise pattern; solution chemistry, such as pH and organic matter, played an important role in the distribution of phenanthrene in the fly ash-water system.
Tang et al. [START_REF] Tang | Sorption of polycyclic aromatic hydrocarbons from aqueous solution by hexadecyltrimethylammonium bromide modified fibric peat [J][END_REF] employed hexadecyltrimethylammonium bromide-modified fibric peat for the adsorption of PAHs such as naphthalene, phenanthrene and pyrene. The hydrophobic fibric peat exhibited improved adsorption rate and adsorption capacity for PAHs [START_REF] Hu | Efficient adsorption of phenanthrene by simply synthesized hydrophobic MCM-41 molecular sieves[END_REF].
The adsorption method is particularly appealing when the adsorbent is low-priced and can be mass produced [START_REF] Hall | Removing polycyclic aromatic hydrocarbons from water by adsorption on silicagel [J][END_REF]. According to the principle of 'like dissolves like', adsorbents bearing aromatic rings are comparatively suitable materials for PAH adsorption. In our previous work, it was found that porous PS bulk materials prepared via high internal phase emulsion polymerization are good candidates to deal with PAH contamination in water [START_REF] Pu | A porous styrenic material for the adsorption of polycyclic aromatic hydrocarbons[END_REF]. Among porous adsorption media, porous membranes are preferable over bulk and powder materials since they possess a higher contact area with water and are much easier to separate from wastewaters. Currently, porous membranes can be fabricated through numerous methods, including the foaming process [START_REF] Simkevitz | Fabrication and analysis of porous shape memory polymer and nanocomposites[END_REF], the phase separation method [START_REF] Luo | Preparation of porous crosslinked polymers with different surface morphologies via chemically induced phase separation[END_REF],
electrospinning [START_REF] Li | Hydrophobic fibrous membranes with tunable porous structure for equilibrium of breathable and waterproof performance[END_REF], top-down lithographic techniques [START_REF] Singamaneni | Instabilities and pattern transformation in periodic, porous elastoplastic solid coatings[END_REF], the breath figure method [START_REF] Srinivasarao | Three-dimensionally ordered array of air bubbles in a polymer film[END_REF], the template technique, and the extrusion spinning process for hollow fiber membranes [START_REF] Bonyadi | Highly porous and macrovoid-free PVDF hollow fiber membranes for membrane distillation by a solvent-dope solution co-extrusion approach[END_REF]. Among these methods, the template technique has attracted much attention due to its relatively simple preparation process and tunable pore structure. Combined with the template method, extrusion-blow molding appears to be a highly efficient route for the large-scale fabrication of porous membranes. However, extrusion-blow molding is not suitable for the preparation of brittle PS or particle-embedded polymer membranes. In the present study, a novel strategy is therefore proposed to prepare porous PS membranes with tunable porous structure via multilayer coextrusion combined with the template method, which is a highly efficient pathway for the large-scale fabrication of porous PS membranes. The potential application of the porous PS membranes in adsorbing PAHs is explored preliminarily.
Pyrene, a representative PAH with medium molecular weight and moderate solubility in water, is selected as the model compound to explore the adsorption performance of the porous PS membranes towards PAHs. The related adsorption kinetics and isotherms of the porous PS are also discussed [START_REF] Awoyemi | Understanding the adsorption of polycyclic aromatic hydrocarbons from aqueous phase onto activated carbon[END_REF].
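Adsorption kinetics and isotherm data of this kind are commonly analyzed with pseudo-second-order and Langmuir models. The minimal sketch below, written with numpy/scipy, illustrates such a fitting procedure; the data arrays and initial guesses are hypothetical placeholders and are not results from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order kinetics: q(t) = k2*qe^2*t / (1 + k2*qe*t)
def pso(t, qe, k2):
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Langmuir isotherm: qe(Ce) = qm*KL*Ce / (1 + KL*Ce)
def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)

# Hypothetical example data (t in min, q in mg/g; Ce in mg/L, qe in mg/g)
t = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
q = np.array([2.1, 3.5, 5.2, 6.8, 7.9, 8.4, 8.6])
ce = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.2])
qe = np.array([1.9, 3.2, 5.1, 6.9, 8.2, 8.8])

(qe_fit, k2_fit), _ = curve_fit(pso, t, q, p0=[q.max(), 0.01])
(qm_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[qe.max(), 1.0])

print(f"PSO fit:      qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg.min)")
print(f"Langmuir fit: qm = {qm_fit:.2f} mg/g, KL = {kl_fit:.2f} L/mg")
```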
Transport of colloidal particles in porous media
Background and overview
The basic mechanisms of the above application processes (PAH adsorption, Li-ion transport) rely on particle transport and deposition in porous media. Thus it is essential to have a thorough understanding of these processes as well as the dominant mechanisms involved. Moreover, colloidal particle transport and deposition processes in porous media have been of great technological and industrial interest for over half a century, since they are critical to many other applications ranging from drinking water treatment to drug delivery [3]. Accordingly, significant research efforts have been focused on the understanding of particle transport and deposition phenomena, as well as the related theories and mechanisms [START_REF] Jin | Concurrent modeling of hydrodynamics and interaction forces improves particle deposition predictions[END_REF]. This chapter aims to summarize the extensive relevant work on particle transport and deposition in porous media, including both experimental studies [START_REF] Kretzschmar | Experimental determination of colloid deposition rates and collision efficiencies in natural porous media[END_REF] [93] [START_REF] Yoon | Visualization of particle behavior within a porous medium: Mechanisms for particle filtration and retardation during downward transport[END_REF] and numerical studies [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF][START_REF] Long | Pore-scale study of the collector efficiency of nanoparticles in packings of nonspherical collectors[END_REF][START_REF] Long | A Correlation for the collector efficiency of brownian particles in clean-bed filtration in sphere packings by a Lattice-Boltzmann method[END_REF]. Furthermore, previous work on particle transport and deposition onto homogeneous and heterogeneous substrates will also be summarized.
In the past decades, numerous experiments on particle transport and deposition in porous media have been carried out to investigate the key factors that influence these processes, including particle surface properties, flow field profiles, particle dispersion concentrations, and the geometry of the porous media. These studies contributed to a better understanding of particle transport in porous media under flow [START_REF] Scozzari | Water security in the Mediterranean region: an international evaluation of management, control, and governance approaches[END_REF], and they are helpful for the construction of mathematical models [START_REF] Zamani | Flow of dispersed particles through porous media-Deep bed filtration [J][END_REF] for numerical simulation. For example, Scozzari et al. [START_REF] Scozzari | Water security in the Mediterranean region: an international evaluation of management, control, and governance approaches[END_REF] studied the deposition kinetics of suspended particles in two saturated granular porous media under Darcy flow conditions by performing pulse [START_REF] Kretzschmar | Experimental determination of colloid deposition rates and collision efficiencies in natural porous media[END_REF] and step-input [START_REF] Elimelech | Particle deposition and aggregation. measurement, modelling and simulation[END_REF] injections and establishing breakthrough curves (BTC). The results of the two injection methods were compared to analyze their effects on particle deposition rates in laboratory columns. Yoon et al. [START_REF] Yoon | Visualization of particle behavior within a porous medium: Mechanisms for particle filtration and retardation during downward transport[END_REF] used laser-induced fluorescence for particle tracking in a translucent porous medium to examine the behavior of a dilute suspension of negatively charged, micron-sized particles. The fate of moving particles as a function of pore fluid velocity and bead surface roughness was observed at both the macroscopic and microscopic levels.
In order to directly visualize particle movement in a complex pore space, Ghidaglia et al. [START_REF] Ghidaglia | Nonexistence of travelling wave solutions to nonelliptic nonlinear schriidinger equations[END_REF] created an optically transparent medium to study particle transport. Wan et al. [START_REF] Wan | Improved glass micromodel methods for studies of flow and transport in fractured porous media[END_REF] used photochemically etched glass plates to simulate a porous medium. Similar work can also be found in the literature [START_REF] Wan | Improved glass micromodel methods for studies of flow and transport in fractured porous media[END_REF][START_REF] Lanning | Glass micromodel study of bacterial dispersion in spatially periodic porous netwrks[END_REF].
Impinging jet flow or parallel-plate cells are commonly used to experimentally investigate colloid deposition mechanisms [START_REF] Adamczyk | Deposition of latex particles at heterogeneous surfaces[END_REF][START_REF] Areepitak | Model simulations of particle aggregation effect on colloid exchange between streams and streambeds[END_REF][START_REF] Unni | Brownian dynamics simulation and experimental study of colloidal particle deposition in a microchannel flow[END_REF]. 1D column experiments using polystyrene latex particles are the most commonly performed in porous media, owing to their simplicity, with output data in the form of breakthrough curves (BTC) [START_REF] Canseco | Deposition and re-entrainment of model colloids in saturated consolidated porous media: Experimental study[END_REF]. An Eulerian approach may be adopted to interpret the experimental data [START_REF] Sasidharan | Coupled effects of hydrodynamic and solution chemistry on long-term nanoparticle transport and deposition in saturated porous media[END_REF]. Risbud and Drazer [START_REF] Risbud | Trajectory and distribution of suspended non-Brownian particles moving past a fixed spherical or cylindrical obstacle[END_REF] explored the transport of non-Brownian particles around a spherical or a cylindrical collector in the Stokes regime by focusing on the distribution of particles. Unni and Yang [START_REF] Unni | Brownian dynamics simulation and experimental study of colloidal particle deposition in a microchannel flow[END_REF] experimentally investigated colloid deposition in a parallel-plate flow cell by means of direct video-microscopic observation. In filtration, the porous medium is usually assumed to be composed of unit bed elements containing cylindrical cells. For example, Chang et al. [START_REF] Chang | Prediction of Brownian particle deposition in porous media using the constricted tube model[END_REF] investigated the deposition of Brownian particles in model parabolic-constricted, hyperbolic-constricted and sinusoidal-constricted tubes.
Generally speaking, experimental studies can supply macroscopic results but with little exhaustive information, such as the particle trajectories and the particle deposition and distribution over the whole domain, due to the limited number of measurement points. As an alternative, numerical simulations with detailed mathematical models, based on computational fluid dynamics (CFD) analysis, have aroused a great deal of interest among researchers, as they can precisely track particles at all locations and observe the particle transport and deposition process in full-scale. According to the scale hierarchy, numerical simulations can be divided into macro-scale simulations and micro-scale (pore-scale) simulations. Macro-scale simulations can be obtained from a homogenization procedure [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] in order to describe large spatial domains, and are also adopted to avoid resolving the complicated micro-porous structures. However, macroscopic simulations can hardly be used to explore the mechanism of the adhesion process [START_REF] Zamani | Flow of dispersed particles through porous media-Deep bed filtration [J][END_REF]. Besides, the macroscopic behaviors of flow and transport are controlled by microscopic mechanisms [START_REF] Xiao | Pore-scale simulation frameworks for flow and transport in complex porous media[END_REF]. Micro-scale simulations, with the advent of advanced algorithms and parallel computing, have become attractive tools to explore the microscopic mechanisms and uncover the averaged macroscopic behavior by solving simultaneously the Stokes or Navier-Stokes equations and the advection-diffusion equation, which are the underlying governing equations for flow and transport in porous media, respectively. Micro-scale models can simulate a variety of situations, such as bacteria in biofilms. Moreover, modification of fluid properties and boundary conditions is much easier in computer simulations than in experiments [111]. Extensive work has been done in the area of particle transport and deposition in porous media by means of micro-scale simulation.
For example, Boccardo et al. [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] numerically investigated the deposition of colloidal particles in 2D porous media composed of grains of regular and irregular shapes. They found that the Brownian attachment efficiency (to be defined below) deviates appreciably from the single-collector case. Coutelieris et al. [START_REF] Coutelieris | Low Peclet mass transport in assemblages of spherical particles for two different adsorption mechanisms[END_REF] considered flow and deposition in a stochastically constructed spherical grain assemblage, focusing on the capture efficiency, and found that the well-known sphere-in-cell model remains applicable provided that the right porous medium properties are taken into account. Sefrioui et al. [START_REF] Sefrioui | Numerical simulation of retention and release of colloids in porous media at the pore scale[END_REF] studied the transport of solid colloidal particles in the presence of surface roughness and particle/pore physicochemical interactions by a "one fluid" approach. Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] investigated the deposition kinetics of spherical colloidal particles in the dilute regime through a pore consisting of two parallel plates. Sirivithayapakorn et al. [START_REF] Sirivithayapakorn | Transport of colloids in saturated porous media: A pore-scale observation of the size exclusion effect and colloid acceleration[END_REF] used micromodels to explore the mechanisms of transport of colloid particles of various sizes and investigated the factors that influence colloid transport in porous media. Bradford et al. [START_REF] Bradford | Determining parameters and mechanisms of colloid retention and release in porous media[END_REF] proposed a modeling framework to study the controlling mechanisms of colloid particle retention and release in porous media. Their work provided valuable and profound insight into the understanding of particle tracking processes and mechanisms in porous media.
Particle transport and deposition mechanisms
Knowledge of the mechanisms governing colloidal particle transport and deposition in porous media is of great importance in several natural and engineered processes. Thus it is significant to investigate the dominant mechanisms and to develop a comprehensive understanding of the deposition processes [START_REF] Reeshav | Transport and deposition of particles onto homogeneous and chemically heterogeneous porous media geometries[END_REF]. Generally, the process of particle deposition can be divided into the following two steps: transport and attachment. During the first step, the particles are transported from the bulk of the fluid to the collector surface by diffusive and/or convective transport; the corresponding collection efficiency is denoted by η₀. During the second step, the particles are adsorbed on the collector surface through the sorption process between the collector wall and the particles; the attachment efficiency is denoted by α. The product of these two efficiencies gives the total collection efficiency, η = αη₀ (see below), which includes both the transport and the attachment of the particles [START_REF] Elimelech | Kinetics of deposition of colloidal particles in porous media[END_REF].
Diffusive transport and Brownian state
Diffusive transport occurs due to the Brownian motion of colloidal particles arising from random collisions with the molecules of the surrounding fluid [START_REF] Tan | Numerical simulation of Nanoparticle delivery in microcirculation[END_REF]. The trajectory of a colloidal particle can be obtained by tracking its movements at the usual experimental timescale intervals; the results show that the trajectory is not a mathematically smooth curve, and that the apparent velocity of a Brownian particle derived from the trajectory does not reflect the real velocity of the particle. Therefore, the mean-square displacement is generally used to describe the motion of Brownian particles [START_REF] Elimelech | Kinetics of deposition of colloidal particles in porous media[END_REF].
The bulk diffusion coefficient for an isolated sphere in an infinite medium is given by the Stokes-Einstein expression as:

$$D_\infty = \frac{k_B T}{6 \pi \mu a_p} \qquad (2\text{-}1)$$
where k_B is the Boltzmann constant, T is the absolute temperature, a_p is the particle radius and μ is the dynamic viscosity. The thermal energy term k_B T represents the extent of molecular collisions, and the particle size and viscosity are the two main factors that affect the diffusion process. Diffusion is inversely proportional to the particle size and to the dynamic viscosity of the suspending fluid [START_REF] Elimelech | Particle deposition and aggregation. measurement, modelling and simulation[END_REF][START_REF] Reeshav | Transport and deposition of particles onto homogeneous and chemically heterogeneous porous media geometries[END_REF].
As Equation (2-1) suggests, diffusion decreases as particle size becomes larger for a given viscosity, but it is the dominant mode of transport for "point-like" particles. It is important to note that hydrodynamic interactions affect diffusive particle transport near the collector surface just as they do for convection [START_REF] Johnson P R | Dynamics of colloid deposition in porous media: Blocking based on random sequential adsorption[END_REF][START_REF] Kemps | Particle tracking model for colloid transport near planar surfaces covered with spherical asperities[END_REF][START_REF] Nazemifard | Particle deposition onto micropatterned charge heterogeneous substrates: trajectory analysis[END_REF]. Hence, a particle experiences increased resistance near the wall as it tries to squeeze the fluid molecules out of the thin gap during its course of vibration. One can also obtain an approximate estimate of the system-size correction by using the Stokes-Einstein relation for the diffusion coefficient with either stick or slip boundary conditions [START_REF] Yeh | System-size dependence of diffusion coefficients and viscosities from molecular dynamics simulations with periodic boundary conditions[END_REF]:
$$D = \frac{k_B T}{6 \pi \mu R} \ \text{(stick)}, \qquad D = \frac{k_B T}{4 \pi \mu R} \ \text{(slip)} \qquad (2\text{-}2)$$
where R is the hydrodynamic radius of the particle.
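As a quick numerical illustration of Eqs. (2-1) and (2-2), the short sketch below evaluates the bulk diffusion coefficient of a sub-micron particle in water; the radius, temperature and viscosity values are illustrative assumptions only.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(radius_m, temperature_k=298.15, viscosity_pa_s=1.0e-3, slip=False):
    """Bulk diffusion coefficient of an isolated sphere (Eqs. 2-1 and 2-2).

    Uses the stick (6*pi) or slip (4*pi) boundary condition.
    """
    factor = 4.0 if slip else 6.0
    return K_B * temperature_k / (factor * math.pi * viscosity_pa_s * radius_m)

# Illustrative example: 0.5 um radius latex particle in water at 25 degrees C
a_p = 0.5e-6  # m
print(f"D (stick) = {stokes_einstein(a_p):.3e} m^2/s")
print(f"D (slip)  = {stokes_einstein(a_p, slip=True):.3e} m^2/s")
```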
Convective transport and hydrodynamic interactions
Hydrodynamic factors such as flow field profiles and the distributions of hydraulic gradient and hydraulic conductivity contribute to particle convective transport. Apart from their Brownian motion, colloidal particles suspended in a fluid are usually considered to flow along the streamlines with the same velocity as the fluid. In most colloidal deposition systems, the particles are usually assumed to be much smaller than the collector, so the flow field is only disturbed by the particles when they are deposited or very close to the surface wall [START_REF] Reeshav | Transport and deposition of particles onto homogeneous and chemically heterogeneous porous media geometries[END_REF][START_REF] Probstein | Physicochemical hydrodynamics: An introduction[END_REF][START_REF] Liu | Experimental and numerical investigations into fundamental mechanisms controlling particle transport in saturated porous media[END_REF].
Based on the finite element method, a new mathematical model has been developed by Jin et al. [START_REF] Jin | Concurrent modeling of hydrodynamics and interaction forces improves particle deposition predictions[END_REF] to study the overall influence of hydrodynamics on particle deposition on spherical collectors.
Their work indicates that the particle deposition is highly influenced by the hydrodynamic effects.
Early models, such as those presented in the works of Levich [START_REF] Levich | Physicochemical hydrodynamics[END_REF] and of Happel [START_REF] Tang | Influence of silicone surface roughness and hydrophobicity on adhesion and colonization of Staphylococcus epidermidis [J][END_REF], were developed for a single collector. However, the flow field around each collector can be influenced by the neighboring collectors and, as a result, the particle transport and deposition process will also be affected.
Boccardo et al. [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] studied the effect of the irregularity of the collectors and the presence of multiple collectors. Consideration of hydrodynamic contributions to particle deposition may help to explain discrepancies between model-based expectations and experimental outcomes.
Risbud et al. [START_REF] Risbud | Trajectory and distribution of suspended non-Brownian particles moving past a fixed spherical or cylindrical obstacle[END_REF] investigated the trajectory of a suspended particle moving past a sphere or a cylinder, driven by a constant external force or a uniform velocity field, in the limit of infinite Péclet number and zero Reynolds number. They derived an expression for the minimum particle-obstacle separation attained during the motion as a function of the incoming impact parameter, as shown in Figure 2.9.
The minimum distance between the particle and obstacle surfaces dictates the relevance of short-range non-hydrodynamic interactions such as van der Waals forces, surface roughness and so on. The scaling relation derived for small b_in shows that, during the motion of a particle past a distribution of periodic or random obstacles, extremely small surface-to-surface separations are common. This highlights the impact that short-range non-hydrodynamic interactions can have on the effective motion of suspended particles.
The frequency at which particles in the aqueous phase come into contact with the solid phase (the "collector") is quantified by the collection efficiency η. One of the most commonly used formulas for η is derived from Rajagopalan and Tien's Lagrangian trajectory analysis [START_REF] Rajagopalan | Trajectory analysis of deep-bed filtration with the sphere-in-cell porous media model[END_REF] within Happel's sphere-in-cell porous media model [START_REF] Happel | Viscous flow in multiparticle systems slow motion of fluids relative to beds of spherica[END_REF], and is written as [START_REF] Nelson | Colloid filtration theory and the Happel sphere-in-cell model revisited with direct numerical simulation of colloids[END_REF]:
$$\eta = \frac{I}{\pi a_c^2 U C_0} \qquad (2\text{-}3)$$
where I is the overall rate at which particles strike the collector, a_c is the radius of the collector, U is the approach velocity of the fluid, and C_0 is the number concentration of particles in the fluid approaching the collector. The Péclet number (Pe) is a dimensionless number that is relevant for the study of transport phenomena in colloidal dispersions. Here, it is defined as the ratio of the rate of advection to the rate of particle diffusion:
$$Pe = \frac{\bar{u}\, a_p}{D_\infty} \qquad (2\text{-}4)$$
where $\bar{u}$ is the average convection velocity along the mean flow axis and a_p is the radius of the particles. In the case of multiple neighboring collectors, the collection efficiency is defined as [START_REF] Cushing R S | Depth filtration: fundamental investigation through three-dimensional trajectory analysis[END_REF]:
$$\eta_D = 4 A_s^{1/3} Pe^{-2/3} \qquad (2\text{-}5)$$
where η_D is the diffusional collection efficiency (collection by Brownian motion), Pe is the Péclet number, T is the absolute temperature, and A_s is Happel's flow parameter [START_REF] Cushing R S | Depth filtration: fundamental investigation through three-dimensional trajectory analysis[END_REF]:
$$A_s = \frac{2(1 - p^5)}{w}, \qquad w = 2 - 3p + 3p^5 - 2p^6, \qquad p = (1 - \varepsilon)^{1/3} \qquad (2\text{-}6)$$
where ε is the porosity.

Figure 2.9 Schematic representation of the problem [START_REF] Risbud | Trajectory and distribution of suspended non-Brownian particles moving past a fixed spherical or cylindrical obstacle[END_REF]. (a) The small circle of radius a represents the moving sphere with a corresponding incoming impact parameter b_in. The circle of radius b represents the fixed obstacle (a sphere or a cylinder). The empty circle represents the position of the suspended particle as it crosses the symmetry plane normal to the x-axis. The surface-to-surface separation ξ and its minimum value ξ_min are also shown. (b)
Representation of the conservation argument used to calculate the minimum separation. The unit vector in the radial direction d and the velocity components in the plane of motion, both far upstream (S_∞) and at the plane of symmetry (S_0), are shown.
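To make Eqs. (2-4) to (2-6) concrete, the sketch below chains them together with the Stokes-Einstein diffusivity of Eq. (2-1) to estimate the diffusional single-collector efficiency; all numerical inputs (porosity, velocity, particle radius, temperature, viscosity) are illustrative assumptions, not values taken from a specific experiment.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def happel_parameter(porosity):
    """Happel's flow parameter A_s (Eq. 2-6)."""
    p = (1.0 - porosity) ** (1.0 / 3.0)
    w = 2.0 - 3.0 * p + 3.0 * p**5 - 2.0 * p**6
    return 2.0 * (1.0 - p**5) / w

def peclet(u_mean, a_p, temperature=298.15, viscosity=1.0e-3):
    """Peclet number (Eq. 2-4) using the Stokes-Einstein bulk diffusivity (Eq. 2-1)."""
    d_inf = K_B * temperature / (6.0 * math.pi * viscosity * a_p)
    return u_mean * a_p / d_inf

def eta_diffusion(porosity, u_mean, a_p):
    """Diffusional collection efficiency eta_D = 4 * A_s^(1/3) * Pe^(-2/3) (Eq. 2-5)."""
    return 4.0 * happel_parameter(porosity) ** (1.0 / 3.0) * peclet(u_mean, a_p) ** (-2.0 / 3.0)

# Illustrative values: porosity 0.4, mean velocity 1e-4 m/s, particle radius 0.5 um
print(f"eta_D = {eta_diffusion(0.4, 1.0e-4, 0.5e-6):.3e}")
```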
Extended DLVO theory
The process of particle deposition can be divided into the following two steps: convection to the collector and adsorption (the capability of a solid substance to attract to its surface molecules of a gas or solution with which it is in contact) due to the particle/collector interaction. The interaction potential, whose origin is physico-chemical, is usually calculated from DLVO theory, which includes electrical double layer (EDL), van der Waals (VDW), and sometimes short-range Born interactions [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF].
According to their range of influence, the VDW and EDL forces are called long-range forces (acting up to 100 nm from the surface) and are the dominant forces controlling the attachment and detachment of particles [START_REF] Sasidharan | Coupled effects of hydrodynamic and solution chemistry on long-term nanoparticle transport and deposition in saturated porous media[END_REF],
while the Born repulsion is called a short-range force (dominant only within 5 nm of the surface), which drops very sharply with distance. In general, the interaction profile contains two minima and one energy barrier. When particle and collector are of opposite charges, or when the salt concentration is very high, the potential is purely attractive, which is usually called favorable deposition. The particle deposition rate is one of the most significant parameters in studies of particle deposition onto collector surfaces, and it is necessary to express this rate in a dimensionless form [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF]. For the isolated collector, the collector efficiency (η) is commonly evaluated as the product of the attachment efficiency (α) and the dimensionless transport rate (η₀) [START_REF] Elimelech | Effect of particle size on collision efficiency in the deposition of Brownian particles with electrostatic energy barriers[END_REF]. The definition of the collection efficiency (η) is given in Equation (2-3) [START_REF] Nelson | Colloid filtration theory and the Happel sphere-in-cell model revisited with direct numerical simulation of colloids[END_REF]. Therefore, in the case of favorable deposition the attachment efficiency approaches unity and η reduces to η₀. The mathematical modeling of particle transport and deposition in homogeneous porous media usually comprises the following three parts: (1) transport in the liquid phase due to hydrodynamic dispersion and convection, (2) transfer between the liquid phase and the solid phase (due to attachment and detachment), and (3) inactivation, grazing or death [START_REF] Liu | Experimental and numerical investigations into fundamental mechanisms controlling particle transport in saturated porous media[END_REF]. Particle transport in saturated porous media is usually described by the advection-dispersion-sorption (ADS) equations [START_REF] Tan | Transport of bacteria in an aquifer sand: experiments and model simulations[END_REF]. There are various forms of ADS equations; for example, a simplified form for one-dimensional particle transport in a homogeneous porous medium is:
$$\frac{\partial c}{\partial t} + \frac{\rho_b}{\varepsilon}\frac{\partial s}{\partial t} = D \frac{\partial^2 c}{\partial x^2} - \frac{v_D}{\varepsilon}\frac{\partial c}{\partial x} - R_w c - \frac{\rho_b}{\varepsilon} R_s s \qquad (2\text{-}7)$$
where s is the adsorbed mass per unit mass of the solid phase, c is the free particle mass concentration at a distance x and time t, D is the hydrodynamic dispersion coefficient, v_D is the Darcy velocity, ε is the porosity, ρ_b is the dry bulk density of the porous medium, and R_w and R_s are the decay or inactivation rates for particles in the liquid and solid phases, respectively [START_REF] Liu | Experimental and numerical investigations into fundamental mechanisms controlling particle transport in saturated porous media[END_REF]. The equation above is usually completed by the kinetic equation stating that the variation of s arises from adsorption and desorption:
$$\frac{\rho_b}{\varepsilon}\frac{\partial s}{\partial t} = k_a c - \frac{\rho_b}{\varepsilon}\, k_d\, s \qquad (2\text{-}8)$$
where k_a and k_d are the first-order kinetic attachment and detachment rates, respectively. It is important to note that although the particle filtration processes occur at a microscopic level, the attachment and detachment rates are determined at the macroscopic scale [START_REF] Liu | Experimental and numerical investigations into fundamental mechanisms controlling particle transport in saturated porous media[END_REF].
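As an illustration of how Eqs. (2-7) and (2-8) can be integrated numerically, the sketch below advances a minimal explicit finite-difference scheme for the case without decay (R_w = R_s = 0) and a step-input inlet; the grid, time step and parameter values are arbitrary assumptions chosen only to keep the explicit scheme stable, not values taken from this work.

```python
import numpy as np

# Illustrative parameters (assumed, not from a specific experiment)
L, nx = 0.5, 200                    # column length (m), number of cells
dx = L / nx
D, v_d, eps = 1e-6, 1e-4, 0.4       # dispersion (m^2/s), Darcy velocity (m/s), porosity
rho_b = 1600.0                      # dry bulk density (kg/m^3)
k_a, k_d = 1e-4, 1e-5               # attachment / detachment rates (1/s)
dt = 0.4 * min(dx**2 / (2.0 * D), dx * eps / v_d)  # explicit stability limit

c = np.zeros(nx)   # aqueous concentration
s = np.zeros(nx)   # adsorbed mass per unit mass of solid

def step(c, s):
    """One explicit time step of the 1D ADS system (Eqs. 2-7 and 2-8, no decay)."""
    dcdx = np.gradient(c, dx)
    d2cdx2 = np.gradient(dcdx, dx)
    dsdt = (eps / rho_b) * (k_a * c - (rho_b / eps) * k_d * s)     # from Eq. (2-8)
    dcdt = D * d2cdx2 - (v_d / eps) * dcdx - (rho_b / eps) * dsdt  # from Eq. (2-7)
    return c + dt * dcdt, s + dt * dsdt

for _ in range(5000):
    c[0] = 1.0          # constant-concentration (step-input) inlet boundary
    c, s = step(c, s)

print(f"relative breakthrough concentration at the outlet: {c[-1]:.3f}")
```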
Classical Filtration Theory (CFT), corresponding to the case k_d = 0 in Eq. (2-8), was the first theory developed to predict the particle attachment coefficient; the CFT attachment coefficient is based on the surface properties of both the porous medium and the particles. The particle sorption (the process in which one substance takes up or holds another) or "filtration" process in porous media may be governed by three primary mechanisms, as shown in Figure 2.10:
(1) surface filtration (the sizes of particles are larger than the pore sizes of the porous media), (2) straining filtration (the sizes of particles are not much smaller than the pore sizes), and (3) physico-chemical filtration (the sizes of particles are smaller than the pore sizes by several orders of magnitude) [START_REF] Mcdowell-Boyer | Particle transport through porous media[END_REF] .
Figure 2.10 Three filtration mechanisms of particle transport in porous media [START_REF] Mcdowell-Boyer | Particle transport through porous media[END_REF]

The way to obtain the "collector efficiency" was first proposed by Yao et al. [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF] This efficiency describes how particles interact with an isolated grain "collector" within a porous medium. In classical CFT, the deposition efficiency is represented by the deposition efficiency of a unit collector (an isolated solid grain), since the porous medium is assumed to be represented by an assemblage of perfect spherical solid grains (collectors). Generally, the colloid deposition efficiency is governed by the superposition of three mechanisms: interception, sedimentation (gravitational forces), and diffusion. The introduction of CFT sparked several decades of research at the pore, macroscopic, and field scales to develop predictive capabilities and a comprehensive understanding of colloidal particle transport and deposition in porous media [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF]. Nevertheless, many studies have reported that classical CFT can only describe particle transport under very limited conditions [START_REF] Tan | Transport of bacteria in an aquifer sand: experiments and model simulations[END_REF]. For example, this theory is suitable for clean-bed filtration with no substantial energy barrier between the particles and the targeted surfaces [START_REF] Jin | Concurrent modeling of hydrodynamics and interaction forces improves particle deposition predictions[END_REF].
When there are deposited particles on the surfaces, the hydrodynamic shadowing effect cannot be ignored. The possibility of inducing structure in layers of colloid particles by using the hydrodynamic blocking effect has been investigated both experimentally and by Monte Carlo simulations. The results indicate that the shadow length scales with Pe with an exponent less than one, due to attractive particle-surface interactions. The reduced particle density behind deposited particles in the flow direction offers the possibility to selectively deposit particles in these locations after reducing the hydrodynamic screening of the surface. Simulations show that it is indeed possible to self-assemble particle pairs by performing consecutive depositions under different flow conditions on the same surface. The surface exclusion behind deposited particles increases with Pe. To model the data, a simple expression of the shadow length L_s as a function of Pe is used, given by [133]

$$L_s = A_h Pe^{n} + L_{s0} \qquad (2\text{-}9)$$

where L_{s0} = 9 corresponds to the experimentally determined distance for the pair correlation function to reach unity in the direction perpendicular to the flow. A best fit was achieved for an exponent n = 0.87 ± 0.1 and A_h = 0.7.
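For illustration, the shadow length of Eq. (2-9) can be evaluated directly from the fitted parameters quoted above; the Pe values in the loop are arbitrary examples.

```python
def shadow_length(pe, a_h=0.7, n=0.87, l_s0=9.0):
    """Hydrodynamic shadow length L_s = A_h * Pe^n + L_s0 (Eq. 2-9)."""
    return a_h * pe**n + l_s0

for pe in (1, 10, 100, 1000):
    print(f"Pe = {pe:5d} -> L_s = {shadow_length(pe):.1f}")
```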
Review of previous work and latest progress
Based on the above mathematical models, theoretical frameworks and classical studies, a great number of works have been carried out over the past two decades to further investigate colloidal particle transport and deposition in homogeneous porous media. According to their research objectives, these works can be divided into the following three parts:
1. Investigation or improvement of the modelling of deposition mechanisms
During the past decades, many researchers have focused on the investigation or improvement of the modelling of deposition mechanisms. Boccardo et al. [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] improved the current understanding of particle transport and deposition in porous media by means of detailed CFD simulations. The results show that the interactions between close collectors result in behaviors that are different from the theory developed by Happel and co-workers [START_REF] Happel | Viscous flow in multiparticle systems: slow motion of fluids relative to beds of spherical particles[END_REF]. Hasan Khan et al. [START_REF] Khan | Comparative study of Formation damage due to straining and surface deposition in porous media[END_REF] investigated porous media damage and particle retention mechanisms under different conditions (particle size/pore size) by means of experimental and simulation methods. The results indicate that the retention mechanism for small particles is surface-interaction dependent, while for large particles straining is dominant. Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] carried out micro-scale numerical simulations of colloidal particle deposition onto the surface of a simple pore geometry consisting of two parallel planar surfaces. They investigated the deposition kinetics of colloidal spheres from dilute dispersions flowing through the pore. The results show that both the surface coverage Γ and the equivalent hydrodynamic thickness of the deposit, δ_h, feature a definite plateau at low Pe (a dimensionless number defined as the ratio of the rate of advection to the rate of particle diffusion), and both are decreasing functions at high Pe. In addition, the dimensionless maximum surface coverage was shown to follow a power law given by Pe^(-1/3), where the Péclet number was calculated on the basis of the flow velocity at a distance of 3 particle radii from the pore wall.
In the presence of an energy barrier, Johnson et al. [START_REF] Li | Colloid retention in porous media: mechanistic confirmation of wedging and retention in zones of flow stagnation[END_REF] proposed a mechanistic description of colloid retention in porous media by means of a particle trajectory model. The results indicate that colloid retention in porous media with an energy barrier may occur via two mechanisms: wedging within grain-to-grain contacts and retention in flow stagnation zones. Both mechanisms were sensitive to colloid surface interaction forces such as the energy barrier height and the secondary energy minimum depth. Civan et al. [START_REF] Civan | Modified formulations of particle deposition and removal kinetics in saturated porous media[END_REF] presented a systematic formulation of the appropriate rate equations for pore-surface and pore-throat particulate deposition and removal processes during flow through porous media. Their exploration offered many meaningful revisions for generalized applications. Although there are several transport models with respect to particle retention (the act of retaining or something retained), as shown in Table 2.1, neither Langmuir adsorption/desorption nor colloid filtration models can fully capture particle retention mechanisms under the various types of flowing conditions. Zhang et al. [START_REF] Zhang | Mechanistic model for nanoparticle retention in porous media[END_REF] proposed an independent two-site model for nanoparticle deposition, each site having a given energy barrier. Numerous critical features of widely different sets of experimental data are captured by this model, suggesting that its assumptions address important phenomena. Moreover, the finite capacities in the two-site model provide some insight into the maximum nanoparticle retention concentration in porous media, which is significant for nanoparticle applications in the oilfield.
Table 2.1 Existing models for particle deposition in porous media [START_REF] Zhang | Mechanistic model for nanoparticle retention in porous media[END_REF]

2. Exploration of the key factors influencing particle deposition

Numerous works have been carried out to explore the key factors that influence particle transport and deposition in homogeneous porous media. For instance, Ahfir et al. [START_REF] Ahfir | Porous media grain size distribution and hydrodynamic forces effects on transport and deposition of suspended particles[END_REF] investigated the effects of the porous media grain size distribution (GSD) on the transport and deposition of polydisperse suspended particles. The results indicate that both the deposition kinetics and the longitudinal hydrodynamic dispersion coefficients are influenced by the porous media grain size distribution. Moreover, the filtration efficiency increases with the uniformity coefficient of the porous medium grain size distribution. A large grain size distribution leads to narrow pores, which also enhances the deposition of particles by straining. Lin et al. [START_REF] Lin | Examining asphaltene solubility on deposition in model porous media[END_REF] studied the dynamics of asphaltene deposition in porous media using microfluidic devices. Koullapis et al. [START_REF] Koullapis | Particle deposition in a realistic geometry of the human conducting airways: Effects of inlet velocity profile, inhalation flowrate and electrostatic charge[END_REF] investigated the deposition of inhaled aerosol particles and assessed the effects of inlet flow conditions, particle size, electrostatic charge, and flowrate. The conclusions can be summarized as follows: the effect of the inlet velocity profile dissipates quickly as a result of the complex geometry (a realistic human upper airway geometry), and the deposition fraction of particles increases with flow rate due to greater inertial impaction, especially for bigger particles (>2.5 μm). Moreover, the effect of charge is more significant for smaller particles. Becker et al. [START_REF] Becker | In situ measurement and simulation of nano-magnetite mobility in porous media subject to transient salinity[END_REF] quantified the coupled effects of physical and chemical factors on the release behavior of polymer-coated magnetite nanoparticles in water-saturated quartz sand. The multi-dimensional multispecies transport simulator (SEAWAT) combined with experimental observations was employed. The results conclusively demonstrated that the introduction of de-ionized water into the flow leads to lower pore-water ionic strength and lower attractive forces, which contribute to particle detachment. Park et al. [START_REF] Park | Environmental behavior of engineered nanomaterials in porous media: a review [J][END_REF] investigated the release of Engineered Nano Materials (ENMs) and their environmental behavior in aqueous porous media. Influencing factors including physicochemical properties, solution chemistry, soil hydraulic properties, and soil matrices have also been studied. The results indicate that ENMs are transported readily under 'unfavorable' conditions: surface functionalization, surface physical modifications, high pH, low ionic strength, and high flow velocity.
Wang et al. [START_REF] Wang | Review of key factors controlling engineered nanoparticle transport in porous media [J][END_REF] provided an introductory review of the key factors controlling engineered nanoparticle transport in porous media (Figure 2.11).

Figure 2.11 Key factors controlling engineered nanoparticle transport in porous media [START_REF] Wang | Review of key factors controlling engineered nanoparticle transport in porous media [J][END_REF]

Li et al. [START_REF] Li | Pore-scale observation of microsphere deposition at grain-to-grain contacts over assemblage-scale porous media domains using X-ray microtomography[END_REF] directly observed the colloid deposition environments in porous media (PM) in the absence of an energy barrier by means of light microscopy, Magnetic Resonance Imaging (MRI), and X-ray MicroTomography (XMT). The results demonstrated that the deposited concentrations decreased log-linearly with increasing transport distance. Buret et al. [START_REF] Buret | Water quality and well injectivity do residual oil-in-water emulsions matter[END_REF] carried out a laboratory study of oil-in-water emulsion flow in porous media to investigate the mechanisms of oil-droplet retention and its effect on permeability. The results demonstrate that at the Random Sequential Adsorption (RSA) limit, the induced permeability loss is significant even at a high pore-size/droplet-size ratio. In the considered Pe range, the permeability loss decreases according to a power law similar to that relating the maximum surface coverage to Pe, with an exponent of -1/3. Li et al. [START_REF] Li | Role of grain-to-grain contacts on profiles of retained colloids in porous media in the presence of an energy barrier to deposition[END_REF] also directly observed the colloid deposition environments in porous media in the presence of an energy barrier by means of the XMT technique. The results indicate that the deposited concentrations decreased non-monotonically with increasing transport distance and that, in the presence of an energy barrier, colloid deposition at grain-to-grain contacts is dominant. Sefrioui et al. [START_REF] Sefrioui | Numerical simulation of retention and release of colloids in porous media at the pore scale[END_REF] simulated the transport of a solid colloidal particle in the presence of surface roughness and particle/pore physicochemical interactions; the simulation was carried out at the pore scale by adopting a "one fluid" approach.
They found that the existence of surface roughness is a necessary but not sufficient condition for particle retention, and that the residence time increases with the ionic strength, since the particle/pore surface interaction becomes less repulsive at higher ionic strength. Ahmadi et al. [START_REF] Ahmadi-Sé Nichault | Displacement of colloidal dispersions in Porous Media experimental & numerical approaches [C]. Diffusion foundations[END_REF] investigated colloid deposition and re-entrainment in the presence of a rough surface, as well as the influence of physicochemical and hydrodynamic conditions. They first present experiments on the retention and release of colloids in a porous medium and reach the following conclusions: for given hydrodynamic conditions and suspension chemistry, the colloid deposition process is "piston-like" and the deposited layer at the end of the process is a monolayer. After the deposition step, when the porous medium is flushed by pure water at increasing Pe, the cumulative fraction of removed colloids increases until all previously deposited particles are recovered.
3. Development of novel models or codes to solve specific and realistic problems
Since most real conditions are more complex and diverse, it is necessary to develop specific models or codes to solve specific problems. Taghavy et al. [START_REF] Taghavy | Modeling coupled nanoparticle aggregation and transport in porous media: a Lagrangian approach[END_REF] developed and implemented the one-dimensional hybrid Eulerian-Lagrangian particle (HELP-1D) transport simulator to explore coupled particle aggregation and transport processes. The results indicate that when environmental conditions promote particle-particle interactions, neglecting aggregation effects can lead to under- or over-estimation of nanoparticle mobility. Tosco et al. [START_REF] Tosco | MNM1D: a numerical code for colloid transport in porous media: implementation and validation[END_REF] developed a novel MNM1D code (Micro- and Nanoparticle transport Model in porous Media) to simulate colloid deposition and release in saturated porous media during transients in ionic strength. The good agreement between MNM1D
and other well-established models (namely the Hydrus-1D and Stanmod software) demonstrates that this novel code can be used for the simulation of colloidal transport in groundwater under transient hydrochemical conditions. Later, Bianco et al. [START_REF] Bianco | 3-dimensional micro-and nanoparticle transport and filtration model (MNM3D) applied to the migration of carbon-based nanomaterials in porous media [J][END_REF] proposed an improved modeling approach, MNM3D (Micro- and Nanoparticle transport Model in 3D geometries), to simulate engineered nanoparticle injection and transport in three-dimensional (3D) porous space. The simulation is based on modified advection-dispersion-deposition equations accounting for the coupled influence of flow velocity and ionic strength on particle transport. Furthermore, MNM3D can also be applied to more realistic cases, with complex geometries and boundary conditions. Katzourakis et al. [START_REF] Katzourakis | Mathematical modeling of colloid and virus cotransport in porous media: Application to experimental data[END_REF] developed a conceptual mathematical model to describe the cotransport of viruses and colloids in 3D water-saturated, homogeneous porous media with uniform flow. The results illustrate the good agreement between the experimental data (collected by Syngouna and Chrysikopoulos) [START_REF] Syngouna | Cotransport of clay colloids and viruses in water saturated porous media[END_REF] and the results from this model. This novel model captures most of the physicochemical processes occurring during virus and colloid cotransport in porous media. Moreover, the results indicate that the interactions between suspended virus and colloid particles can significantly affect virus transport in porous media.
Hoepfner et al. [START_REF] Hoepfner | A fundamental study of asphaltene deposition[END_REF] presented a new investigative tool to better understand asphaltene behavior. The results indicate that the governing factor controlling the magnitude of asphaltene deposition is the concentration of insoluble asphaltenes present in the crude oil-precipitant mixture. Moreover, the deposit is significantly thicker at the inlet than at the outlet. Su et al. [START_REF] Su | Discrete element simulation of particle flow in arbitrarily complex geometries[END_REF] developed an algorithm named RIGID (spherical particle and geometry interaction detection) to detect particle interactions with an arbitrarily complex wall in the framework of the soft-sphere model and applied it to simulations of dense gas-particle flows in complex geometries. Three test cases were carried out (single particle collision with a complex geometry, a bubble fluidized bed, and an immersed-tube fluidized bed), and the results indicate that RIGID can be used to detect particle-wall contact in an arbitrarily complex geometry, since particle contacts with any geometry are in fact a combination of the three basic types of contact tested in their work (plane, convex edge, and convex vertex contact with a single particle).
Apart from the above three main aspects, the investigation of the spatial distribution of deposited particles has also attracted considerable attention. Indeed, understanding the regularities of the particle distribution could help to obtain a desired distribution of deposited particles, which can be exploited in many practical applications, including the optimization of inhalation therapeutic strategies to target drugs to desired lung regions [START_REF] Brand | Intrapulmonary distribution of deposited particles[END_REF]. Relatively few studies have been carried out to clarify the particle deposition distribution in porous media. For example, Kusaka et al. [START_REF] Kusaka | Morphology and breaking of latex particle deposits at a clindrical collector in a microfluidic chamber[END_REF] explored latex particle deposits on a cylindrical collector in a microfluidic chamber by means of an in-situ observation method. The results demonstrated that the morphology of the deposits generated at a single collector surface in a microfluidic chamber is highly dependent on particle size and flow rate. For lower Péclet numbers (Pe), the particle deposit is mainly uniform except at the rear, where particles do not attach. For higher Pe, deposits adopt a columnar morphology at the collector stagnation point. For smaller particles, the distribution is uniform at lower Pe, while it is anisotropic at higher Pe. For bigger particles, the distribution is anisotropic in all cases with different Pe, and the particles deposit at the rear. Ahfir et al. [START_REF] Ahfir | Porous media grain size distribution and hydrodynamic forces effects on transport and deposition of suspended particles[END_REF] explored the coupled effects of the porous medium's grain size distribution and hydrodynamic forces on the transport and deposition of suspended particles by performing transport experiments in a 62 cm long column using the step-input injection technique. The results show that the retention is non-uniformly distributed in the porous media; the deposition is more pronounced at the entrance and decreases with depth. Moreover, the retention decreases with increasing flow velocity. Li et al. [START_REF] Li | Fine particle dispersion and deposition in three different square pore flow structures[END_REF] reported two-dimensional simulations of fine particle dispersion and deposition in three different square pore flow structures.
Limitations and inspirations for the current work
The above research works have provided valuable and profound insight into the understanding of particle transport and deposition processes in homogeneous porous media and have improved and enriched the theoretical frameworks in this field, which is of great importance for the current work. However, most of the above investigations are valid only for clean-bed filtration (before any significant deposition of particles on the collector occurs). Particle deposition and filtration are intrinsically transient processes, since deposited material changes both the geometry of the interstitial spaces of the filter (porous medium) and the nature of the collector surfaces, thereby affecting the deposition of subsequent colloidal particles. Nevertheless, due to the complexity of the topic, deposition is usually modelled under steady-state conditions, assuming that the deposited particles suddenly "disappear" after contacting the filter medium (clean-bed filtration theory). This assumption is valid at the initial stage of the phenomenon, but when the number of deposited particles increases beyond a certain value, the influence of the deposited particles can no longer be neglected. Until now, few works have considered the volume of the deposited particles. Thus, in the present study, in order to take the volume occupied by the deposited particles into account, the geometry of the pores is reconstructed and the flow field is updated once particles are deposited [START_REF] Civan | Modified formulations of particle deposition and removal kinetics in saturated porous media[END_REF]. In addition, in most of the above-mentioned studies, simulations were simplified to one or two dimensions to reduce the calculation time. However, this simplification restricts the particle motion and reduces the accuracy of the results, since most real porous media are three-dimensional.
Therefore, it is necessary to improve the simulation model by enabling three-dimensional simulation [START_REF] Bianco | 3-dimensional micro-and nanoparticle transport and filtration model (MNM3D) applied to the migration of carbon-based nanomaterials in porous media [J][END_REF] . To the author's knowledge, few simulation tools are available in the literature to explore the three-dimensional spatial distribution of deposited particles in a microchannel while taking the influence of already-deposited particles into account. Consequently, in the present study, a novel 3D-PTPO (three-dimensional particle tracking model by Python® and OpenFOAM®) code developed in the I2M laboratory is proposed to investigate the relationship between particle distribution profiles and the Péclet number, as well as the geometry of the porous medium. In addition, the transport and deposition behavior of colloidal particles in porous media and the variations of several key parameters (surface coverage, porosity, permeability, etc.) versus Pe are also investigated. The validity of the numerical code has been demonstrated by comparing results with other studies and theories. Furthermore, modifications could easily be made to adapt the code to further applications, e.g. to simulate multi-layer particle deposition by introducing attractive particle-particle interactions, or to functionalize a certain region of the porous medium.
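To make the particle-tracking logic concrete, the following minimal Python sketch advects point particles through an idealized cylindrical pore with a parabolic (Poiseuille) velocity profile, superimposes Brownian displacements, and flags a particle as deposited when it reaches the pore wall. It is an illustration of the general approach only, not the 3D-PTPO implementation itself; the pore dimensions, mean velocity, diffusion coefficient and time step are placeholder values.

import numpy as np

def track_particle(radius_pore, length, u_mean, D, dt, n_steps, rng):
    """Advect one point particle through a cylindrical pore (Poiseuille flow)
    with Brownian motion; return ('deposited'/'exited'/'suspended', position)."""
    pos = np.array([0.0, 0.0, 0.0])  # start on the axis at the pore inlet (z = flow direction)
    for _ in range(n_steps):
        r = np.hypot(pos[0], pos[1])
        uz = 2.0 * u_mean * (1.0 - (r / radius_pore) ** 2)   # parabolic axial velocity
        drift = np.array([0.0, 0.0, uz]) * dt
        brownian = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=3)
        pos = pos + drift + brownian
        if np.hypot(pos[0], pos[1]) >= radius_pore:
            return "deposited", pos   # particle touched the wall: counted as irreversibly deposited
        if pos[2] >= length:
            return "exited", pos      # particle left the pore without depositing
    return "suspended", pos

# Example: 500 particles in a 10-micron-long, 1-micron-radius pore (illustrative values only)
rng = np.random.default_rng(0)
fates = [track_particle(1e-6, 1e-5, 1e-4, 5e-13, 1e-4, 5000, rng)[0] for _ in range(500)]
print({f: fates.count(f) for f in set(fates)})

In the full 3D-PTPO workflow, each deposited particle would additionally modify the local wall geometry and the Stokes flow field would be recomputed with OpenFOAM® before the next particle is injected.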
2.2.4 A review of colloidal particle transport and deposition in heterogeneous porous media
2.2.4.1 Research background and mathematical models
The discussions in the previous sections of this chapter focus on particle transport and deposition onto homogeneous surfaces. However, almost all natural porous media exhibit both physical and chemical heterogeneities. Artificial porous media are sometimes geometrically and chemically simpler, yet heterogeneity still exists on most of their wall surfaces [START_REF] Duffadar | The impact of nanoscale chemical features on micron-scale adhesion: crossover from heterogeneity-dominated to mean-field behavior [J][END_REF] . Particle transport and deposition in heterogeneous porous media have therefore been an area of intense investigation. This is because when particles under flow approach such heterogeneously patterned surfaces, they exhibit various adhesion characteristics and dynamic signatures according to their physico-chemical properties, including charge, shape and size. A thorough understanding of the above processes and their dominant mechanisms is significant for the development of separation, adsorption and emerging sensing technologies at the micro- and nano-scales [START_REF] Reeshav | Transport and deposition of particles onto homogeneous and chemically heterogeneous porous media geometries[END_REF][START_REF] Bendersky | Statistically-based DLVO approach to the dynamic interaction of colloidal microparticles with topographically and chemically heterogeneous collectors[END_REF] .
For chemical surface heterogeneities (patches) much larger than the colloidal particles, the patchwise heterogeneity model provides an accurate description of particle deposition rates on heterogeneous surfaces. For example, the two-patch charge model has been successfully applied to describe colloidal transport and deposition in chemically heterogeneous porous media. These studies demonstrated that the colloid deposition rate is directly proportional to the degree of chemical heterogeneity of the porous medium [START_REF] Chen | Role of spatial distribution of porous medium surface charge heterogeneity in colloid transport[END_REF] . For surface heterogeneities much smaller than the colloidal particles, however, the patchwise heterogeneity models may not be well suited, as some deposition is always predicted if any region of the surface is favorable [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF] .
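As a hedged formal statement of this patchwise averaging idea (generic notation, not tied to a specific reference): if a fraction λ of the collector surface is favorable to deposition, the effective deposition rate coefficient is commonly written as the area-weighted mean

k_d = \lambda \, k_{d,\mathrm{fav}} + (1-\lambda)\, k_{d,\mathrm{unfav}} \;\approx\; \lambda \, k_{d,\mathrm{fav}} \quad (k_{d,\mathrm{unfav}} \approx 0),

which recovers the linear dependence of the deposition rate on the degree of chemical heterogeneity reported in the column studies cited above.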
In order to develop more realistic models for particle deposition onto heterogeneous porous media, and to reach a systematic understanding of the above processes and mechanisms, extensive research has been conducted during the past two decades in both experimental and theoretical studies, dealing with surface chemical/physical heterogeneity at macroscopic or microscopic scales, as described below [START_REF] Duffadar | Interaction of micrometer-scale particles with nanotextured surfaces in shear flow[END_REF] . Here, it should be noted that, according to its source, surface heterogeneity can be divided into two categories: physical heterogeneity and chemical heterogeneity. The former is usually due to surface roughness, and the latter is usually attributed to the non-uniform distribution of charged species [START_REF] Reeshav | Transport and deposition of particles onto homogeneous and chemically heterogeneous porous media geometries[END_REF] . Besides, surface heterogeneity can be classified in terms of scale as either macroscopic or microscopic.
2.2.4.2 Physical and chemical heterogeneity
1. Physical heterogeneity
Physical heterogeneity has far-reaching effects on particle deposition, and often explains the disagreement between interaction-force measurements made by atomic force microscopy or electrophoretic mobility at small Debye lengths (a measure of a charge carrier's net electrostatic effect in solution and how far this electrostatic effect persists) and the predictions of the classical DLVO theory [START_REF] Bendersky | Statistically-based DLVO approach to the dynamic interaction of colloidal microparticles with topographically and chemically heterogeneous collectors[END_REF] . In the presence of physical heterogeneity (surface roughness), the closest distance between two surfaces is usually shifted, the interaction potential is more difficult to calculate than between smooth surfaces, and the adhesion of small particles is enhanced through an increase in the contact area (for particles of appropriate size) [START_REF] Duffadar | The impact of nanoscale chemical features on micron-scale adhesion: crossover from heterogeneity-dominated to mean-field behavior [J][END_REF] .
During the past decades, numerous works have targeted the impact of physical heterogeneities on interaction energies and particle deposition. For example, Suresh et al. [START_REF] Suresh | Effect of surface roughness on the interaction energy between a colloidal sphere and a flat plate [J][END_REF] investigated the reduced energy barrier to particle deposition in the presence of surface heterogeneity compared to a smooth surface. Later on, Zhao et al. [START_REF] Zhao | Suppressing and enhancing depletion attractions between surfaces roughened by asperities[END_REF] explored the depletion interaction potential for both ordered and disordered physical heterogeneities, and modelled the self-assembly of rough platelets. Wang et al. [START_REF] Wang | Molecular dynamics simulation study on controlling the adsorption behavior of polyethylene by fine tuning the surface nanodecoration of graphite[END_REF] explored the adsorption of polyethylene (PE) with different chain lengths on graphite surfaces patterned with nanoscale protrusions by means of molecular dynamics simulations. The results indicate that the order parameter, the adsorption energy, and the normalized surface-chain contacting pair number all decrease with the size of the protrusion. Bradford et al. [START_REF] Bradford | Determining parameters and mechanisms of colloid retention and release in porous media[END_REF] explored the determination of fundamental parameters and controlling mechanisms of colloid retention and release on surfaces of porous media by modelling at the representative elementary volume (REV) scale. The results indicate that nanoscale roughness produces localized primary-minimum interactions that control long-term retention. Microscopic roughness plays a dominant role in colloid retention under low ionic strength and high hydrodynamic conditions, especially for larger colloids. Phenrat et al. [START_REF] Phenrat | Transport and deposition of polymer-modified FeO nanoparticles in 2-D heterogeneous porous media: Effects of particle concentration, FeO content, and coatings[END_REF] evaluated the effect of porous media heterogeneity on the dispersion properties in heterogeneous porous media within a 2D cell. The results indicate that polymer-modified nanoscale zero-valent iron (NZVI) particles are mostly deposited in regions where the fluid shear is insufficient to prevent NZVI agglomeration and deposition.
2. Chemical heterogeneity
Chemically heterogeneous surfaces are ubiquitous in natural and engineered systems and are considered an important influencing factor in colloid transport through porous media. Chemical heterogeneity may occur on natural colloids or solid surfaces as a result of constituent minerals, chemical imperfections, coatings, and the adsorption of different ions, organics, and clay particles [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF] .
This chemical variability results in heterogeneous surface charges that are randomly distributed with various length scales and arbitrary geometrical shapes [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF] .
During the past two decades, particle deposition onto chemically heterogeneous surfaces has been extensively investigated [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF][START_REF] Bendersky | DLVO interaction of colloidal particles with topographically and chemically heterogeneous surfaces[END_REF] . For example, Elimelech et al. [START_REF] Elimelech | Particle deposition onto solid surfaces with micropatterned charge heterogeneity: The "hydrodynamic bump" effect[END_REF] investigated the influence of microscopic charge heterogeneity on colloid deposition behavior under dynamic flow conditions; they compared the deposition rate onto micropatterned surfaces to the deposition rate predicted by the patch model. The results show that the colloid deposition kinetics is well fitted by the patch model for low Pe and moderate-to-high ionic strength (IS) conditions. However, deviations are observed for high Pe and/or low IS. These results are attributed to the interplay between hydrodynamic and electrostatic double-layer interactions and are explained by a phenomenon called the "hydrodynamic bump". Bradford et al. [START_REF] Bradford | Colloid interaction energies for physically and chemically heterogeneous porous media[END_REF] developed an improved approach to predict the resisting torque (T A ) opposing the removal of adsorbed particles in chemically heterogeneous porous media and used this approach to determine meaningful estimates of the fraction of the solid surface area that contributes to colloid immobilization (S f *) under saturated conditions at the representative elementary area (REA) scale.
The results indicate that colloid attachment depends on the solution ionic strength, the colloid radius, the Young's modulus (E), the amount of chemical heterogeneity (P + ), and the Darcy velocity (v D ).
Duffadar et al. [START_REF] Duffadar | Dynamic adhesion behavior of micrometer-scale particles flowing over patchy surfaces with nanoscale electrostatic heterogeneity[END_REF] developed a computational model to predict the dynamic adhesion behavior of micrometer-scale particles in a low-Re flow over planar surfaces with nanoscale electrostatic heterogeneity (randomly distributed "patches"). The results indicate that the ionic strength of the flowing solution determines the extent of the electrostatic interactions and can be used to selectively tune the dynamic adhesion behavior. Chatterjee et al. [START_REF] Chatterjee | Particle deposition onto Janus and Patchy spherical collectors[END_REF] analyzed particle deposition on Janus and patchy collectors numerically. The results indicate that particles tend to deposit at the edges of the favorable strips, and this preferential accumulation varies along the tangential position due to the non-uniform nature of the collector. Zhang et al. [START_REF] Zhang | Do goethite surfaces really control the transport and retention of multi-walled carbon nanotubes in chemically heterogeneous porous media?[END_REF] investigated and simulated the transport and retention of multi-walled carbon nanotubes (MWCNTs) in chemically heterogeneous porous media with different mass ratios of quartz sand (QS) and goethite-coated quartz sand (GQS) by means of the HYDRUS-1D code. The results indicate that MWCNT retention in chemically heterogeneous porous media was controlled mainly by roughness; the mass fraction and surface area of chemical heterogeneity also played an important secondary role. Rizwan et al. [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF] experimentally created well-defined charge-heterogeneous surfaces by employing soft lithographic techniques. They also studied the deposition of model colloidal particles onto such substrates under a no-flow (quiescent) condition, focusing mainly on the surface coverage and deposit morphologies obtained after a long time. Moreover, they presented a simple mathematical description of particle deposition on the created rectangular (striped) surface features by means of a Monte-Carlo-type simulation technique, as shown in Figure 2.12, and compared the results with the experiments.
The results indicate that particles tend to deposit preferentially at the edges of the favorable strips, and that the extent of this bias can be controlled by varying the distance between consecutive favorable strips as well as the particle size relative to the strip width. In addition, a simple binary-probability Monte Carlo RSA deposition model adequately predicts the deposit structure, particularly the periodicity of the underlying patterns on the substrate. Lin et al. [START_REF] Lin | Deposition of silver nanoparticles in geochemically heterogeneous porous media: predicting affinity from surface composition analysis[END_REF] studied the transport of uncoated AgNPs in geochemically heterogeneous porous media composed of silica glass beads (GB) modified with a partial iron oxide coverage, and compared it to transport in porous media composed of unmodified GB. The results demonstrated a linear correlation between the average nanoparticle affinity for the collector and the iron oxide content.
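The following Python sketch illustrates the kind of Monte-Carlo random sequential adsorption (RSA) on a striped surface described above. The stripe pitch, favorable width, particle diameter and number of attempts are illustrative assumptions, and the rejection rules are deliberately simplified, so it does not reproduce the exact model of the cited work.

import numpy as np

def rsa_on_stripes(box=100.0, pitch=10.0, fav_width=5.0, diam=2.0,
                   attempts=20000, seed=1):
    """RSA of disks on a square surface with favorable stripes:
    a trial is accepted only if the disk center falls on a favorable stripe
    and the disk does not overlap previously deposited disks."""
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(attempts):
        x, y = rng.uniform(0.0, box, size=2)
        if (x % pitch) >= fav_width:          # center on an unfavorable stripe: rejected
            continue
        if any((x - cx) ** 2 + (y - cy) ** 2 < diam ** 2 for cx, cy in centers):
            continue                           # hard-sphere overlap with an earlier deposit
        centers.append((x, y))
    coverage = len(centers) * np.pi * (diam / 2) ** 2 / box ** 2
    return np.array(centers), coverage

centers, coverage = rsa_on_stripes()
print(f"{len(centers)} particles deposited, surface coverage = {coverage:.3f}")

This minimal version only captures the confinement of particle centers to the favorable stripes and the hard-sphere blocking; reproducing the edge preference reported above would additionally require the electrostatic particle-stripe interaction and the finite particle size relative to the stripe width.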
Figure 2.12 Schematic representation of the modeled surface charge heterogeneity. The square collector of height L consists of rectangular strips with alternate regions that are favorable (gray) and unfavorable (white) to deposition, of widths w and b, respectively. The total width of a favorable and an unfavorable strip gives the pitch, p. The deposited spherical particles of diameter d = 2a p have their centers constrained to lie within the favorable strips [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF]
2.2.4.3 Macroscale and microscale heterogeneity
1. Macroscale heterogeneity
For macroscopic-scale heterogeneous patches, the isolated patches are assumed to be significantly larger than the particles, and the interactions at patch boundaries can be neglected. Several direct and indirect observations have successfully demonstrated the presence of such macroscopic heterogeneities on solid surfaces [START_REF] Chen | Role of spatial distribution of porous medium surface charge heterogeneity in colloid transport[END_REF] . For these systems, the attachment rate is directly related to the favorable surface fraction of the collectors, which can be predicted by the patchwise model developed by Johnson et al. [START_REF] Johnson | Colloid transport in geochemically heterogeneous porous media: Modeling and measurements[END_REF] . In this model, two types of surface charge are defined on the collector surface, and the surface area fraction occupied by one type of surface charge is evaluated using a two-site averaging process.
Pham et al. [START_REF] Pham | Nanoparticle transport in heterogeneous porous media with particle tracking numerical methods[END_REF] numerically explored the transport and kinetics of nanoparticles migrating through the pore space between unconsolidated packed spheres and through consolidated Berea sandstone by means of the lattice Boltzmann method coupled with Lagrangian particle tracking. The surface charge heterogeneity was distributed according to different patterns, designated as Entry, Exit, Strips, and Mixture patterns for the beds packed with spheres. The results indicate that, under the effect of surface blocking, the particle breakthrough curves do not reach a plateau before complete saturation; surface saturation also improves particle propagation through the same blocking mechanism. Recently, Pham et al. [START_REF] Pham | Effect of spatial distribution of porous matrix surface charge heterogeneity on nanoparticle attachment in a packed bed[END_REF] investigated the effect of the spatial distribution of the porous matrix surface heterogeneity on particle deposition. It was found that the pattern effect is controlled by the attachment rate constant, the particle size, and the fraction of the surface that is favorable to deposition. Chen et al. [START_REF] Chen | Role of spatial distribution of porous medium surface charge heterogeneity in colloid transport[END_REF] investigated the role of the spatial distribution of patchwise chemical heterogeneity of the porous medium in colloid transport through heterogeneous packed columns. Although the spatial distribution of the modified and unmodified sands was varied, the total extent of mean chemical heterogeneity in the entire column was fixed at a 10% favorable surface fraction. The results indicate that the particle deposition rate and transport behavior are independent of the spatial distribution of the chemical heterogeneity of the porous medium and depend only on its mean value. Erickson et al. [START_REF] Erickson | Three-dimensional structure of electroosmotic flow over heterogeneous surfaces[END_REF] examined the electroosmotically driven flow through a microchannel exhibiting a periodically repeating patchwise heterogeneous surface pattern using a microfluidics-based finite element code.
In that study, the electroosmotically driven flow through a slit microchannel (i.e., a channel formed between two parallel plates) exhibiting one of three periodically repeating heterogeneous surface patterns (Figure 2.13; in each case the heterogeneous surface coverage is 50% [START_REF] Erickson | Three-dimensional structure of electroosmotic flow over heterogeneous surfaces[END_REF] ) was considered. The results indicate the existence of distinct three-dimensional flow structures that, depending on the degree of heterogeneity, vary from a weak circulation perpendicular to the applied electric field to a fully circulatory flow system.
2. Microscale heterogeneity
In the case of microscopic heterogeneity, the size of the surface patches is much smaller than the particles. For these systems, the size and number of microscopic heterogeneities greatly influence parameters such as the interaction energy, the zeta potential, and the attachment efficiency [START_REF] Liu | Role of collector alternating charged patches on transport of Cryptosporidium parvum oocysts in a patchwise charged heterogeneous micromodel[END_REF] . A variety of theoretical studies and surface integration techniques have been developed and used to investigate the effects of microscopic physical and/or chemical heterogeneity on colloid attachment and transport [START_REF] Bradford | Colloid adhesive parameters for chemically heterogeneous porous media[END_REF] . For example, Duffadar and Davis [START_REF] Duffadar | Interaction of micrometer-scale particles with nanotextured surfaces in shear flow[END_REF] developed grid-surface integration to calculate the DLVO interaction energy between a colloid and a chemically heterogeneous surface [START_REF] Shen | Influence of surface chemical heterogeneity on attachment and detachment of microparticles[END_REF] . The grid surface integration (GSI) technique uses known solutions for the interaction between two parallel plates to compute the interaction between a particle and a heterogeneously patterned surface; the total interaction energy on the particle is obtained by summing the contribution of each discretized surface element over the entire particle surface [START_REF] Bradford | Colloid adhesive parameters for chemically heterogeneous porous media[END_REF] . Duffadar et al. [START_REF] Duffadar | The impact of nanoscale chemical features on micron-scale adhesion: crossover from heterogeneity-dominated to mean-field behavior [J][END_REF] explored the impact of random chemical heterogeneity on pairwise interactions and dynamic adhesion, exemplified by the capture of particles on heterogeneous planar surfaces, by means of the GSI. The results indicate that for heterogeneity-dominated cases, high ionic strength pairwise interactions are more attractive than predicted by DLVO theory, and the adhesion threshold depends on the Debye length. Furthermore, Duffadar and Davis [START_REF] Duffadar | Interaction of micrometer-scale particles with nanotextured surfaces in shear flow[END_REF] also explored the interaction of a micrometer-scale spherical silica particle with a nanotextured, heterogeneously charged surface. The results indicate that the spatial fluctuations in the local surface density of the deposited patches are responsible for the dynamic adhesion phenomena observed experimentally, including particle capture on a net-repulsive surface. Bradford et al. [START_REF] Bradford | Colloid adhesive parameters for chemically heterogeneous porous media[END_REF] developed a simplified model to account for the influence of microscopic chemical heterogeneity on colloid adhesive parameters for continuum models. The results indicate that the probability density functions (PDFs) of colloid adhesive parameters at the representative elementary area (REA) scale are sensitive to the size of the colloid and the heterogeneity, the charge and number of grid cells, and the ionic strength. Furthermore, Shen et al.
[START_REF] Shen | Influence of surface chemical heterogeneity on attachment and detachment of microparticles[END_REF] examined the attachment/detachment of negatively charged microparticles onto/from a negative planar surface carrying positively charged square patches of different sizes by means of the surface element integration technique. Their results indicate that a critical patch size is required to attach a particle at a given ionic strength.
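To illustrate the surface-integration idea numerically, the Python sketch below sums simplified per-element interaction energies (a screened electrostatic term whose sign depends on the local patch type, plus a plate-plate van der Waals term) over a discretized heterogeneous plane facing a spherical particle. The energy expressions, patch statistics and parameter values are assumptions chosen for illustration and are not the published GSI implementation.

import numpy as np

def gsi_energy(h0, a_p, patch_map, cell, kappa, A_H, w_el_fav, w_el_unfav):
    """Total sphere-plate interaction energy (J) by grid-surface integration.
    h0: closest sphere-plate separation (m); a_p: particle radius (m);
    patch_map: 2D bool array, True where the surface element is cationic (favorable);
    cell: element edge length (m); kappa: inverse Debye length (1/m);
    A_H: Hamaker constant (J); w_el_*: electrostatic energy per area at contact (J/m^2)."""
    n = patch_map.shape[0]
    xs = (np.arange(n) - n / 2 + 0.5) * cell
    X, Y = np.meshgrid(xs, xs)
    r2 = X ** 2 + Y ** 2
    under = r2 <= a_p ** 2                      # elements lying under the projected sphere
    # local gap between the sphere surface and each plane element
    h = h0 + a_p - np.sqrt(np.maximum(a_p ** 2 - r2, 0.0))
    w_el = np.where(patch_map, w_el_fav, w_el_unfav) * np.exp(-kappa * h)   # screened EDL term
    w_vdw = -A_H / (12.0 * np.pi * h ** 2)                                   # plate-plate vdW term
    return np.sum((w_el + w_vdw)[under]) * cell ** 2

# Example: 1-micron-diameter particle over a 20% cationic-patch surface (illustrative values)
rng = np.random.default_rng(2)
patches = rng.random((200, 200)) < 0.2
E = gsi_energy(h0=2e-9, a_p=5e-7, patch_map=patches, cell=1e-8,
               kappa=1.0 / 3e-9, A_H=1e-20, w_el_fav=-1e-3, w_el_unfav=+1e-3)
print(f"total interaction energy ~ {E:.2e} J")

Scanning such a calculation over the separation h0 and over many random patch realizations is what yields the statistical adhesion thresholds discussed above.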
Limitations and inspirations for the current work
As discussed above, several experimental and simulation studies dealing with colloidal particle deposition onto planar heterogeneous surfaces have been reported. However, there is still a paucity of rigorous models of colloidal particle deposition onto heterogeneous substrates in other geometries, such as microchannels and spherical collectors [START_REF] Kemps | Particle tracking model for colloid transport near planar surfaces covered with spherical asperities[END_REF] . In particular, the transport of particles suspended in a carrier fluid through micropores is central to numerous microfluidic and nanofluidic systems, including lab-on-chip (LOC) systems, porous media flows, and membrane separations. Adamczyk et al. [START_REF] Adamczyk | Deposition of particles under external forces in Laminar flow through parallel-plate and cylindrical channels[END_REF] investigated particle transport and deposition in narrow homogeneous cylindrical channels almost four decades ago. Since then, fundamental concepts of particle transport have been used in various microfluidic applications. Waghmare et al. [START_REF] Waghmare | Finite reservoir effect on capillary flow of microbead suspension in rectangular microchannels[END_REF] discussed the transport of microbead suspensions in rectangular capillaries. Fridjonsson et al. [START_REF] Fridjonsson | Dynamic NMR microscopy measurement of the dynamics and flow partitioning of colloidal particles in a bifurcation[END_REF] investigated the transport of fluids containing colloidal suspensions in a bifurcated capillary system by NMR microscopy and CFD simulations. Waghmare et al. [START_REF] Waghmare | Mechanism of cell transport in a microchannel with binding between cell surface and immobilized biomolecules [C[END_REF] also explored the transport of cells through buffer solution in microchannels for immunoassay-based sensing devices. Saadatmand et al. [START_REF] Saadatmand | Fluid particle diffusion through high-hematocrit blood flow within a capillary tube [J][END_REF] focused on blood transport in a capillary tube to investigate mixing in biomedical microdevices and microcirculation. Several other similar applications of particle transport in micro- and nano-channels can be found in the literature [START_REF] Chein | Effect of charged membrane on the particle motion through a nanopore[END_REF][START_REF] Gudipaty | Cluster formation and growth in microchannel flow of dilute particle suspensions[END_REF] .
Although numerous investigations have been performed to analyze transport in these microfluidic and nanofluidic systems, there is no comprehensive theoretical model that predicts particle transport in microchannels while accounting for the effects of surface heterogeneities on particle capture.
Hence, in order to facilitate tractable evaluation of particle transport and deposition under the influence of such surfaces, a technique must be devised to systematically define the heterogeneity.
Chatterjee et al. [START_REF] Chatterjee | Particle transport in patterned cylindrical microchannels[END_REF] investigated particle transport in patchy heterogeneous cylindrical microchannels. A schematic representation of a patchy microchannel with a micropatterned surface charge distribution is shown in Figure 2.14. The surface heterogeneity is modeled as alternate bands of attractive and repulsive regions on the channel wall to facilitate a systematic continuum-type evaluation. The results indicate that particles tend to collect at the leading edge of the favorable sections, and that the extent of this deposition can be controlled by changing the Péclet number: higher Pe results in a more uniform deposition along the length of the channel because the particles are convected with the fluid. This study provides a comprehensive theoretical analysis of how the transport of suspended particles in such microchannels is affected by the heterogeneities on the channel walls. However, their model considered neutrally buoyant particles to ensure 2D symmetry, which is not the case in all practical situations.
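A minimal way to encode this band-type heterogeneity in a particle-tracking code is a wall-condition function of the axial coordinate, as in the Python sketch below; the band length, favorable fraction and capture distance are placeholder assumptions.

def wall_is_favorable(z, band_length=20e-6, duty=0.5, offset=0.0):
    """Return True if the channel wall at axial position z (m) belongs to an
    attractive (favorable) band; bands of length band_length alternate with
    repulsive bands, 'duty' being the favorable fraction of one period."""
    period = band_length / duty
    return ((z - offset) % period) < band_length

def try_deposit(z, r, channel_radius, capture_distance=50e-9):
    """Deposit the particle only if it is within the capture distance of the wall
    and the wall patch at that axial position is favorable."""
    near_wall = (channel_radius - r) <= capture_distance
    return near_wall and wall_is_favorable(z)

# Example: a particle 30 nm from the wall at z = 35 microns in a 10-micron-radius channel
print(try_deposit(z=35e-6, r=10e-6 - 30e-9, channel_radius=10e-6))

Coupled to an advection-diffusion tracker, such a check allows the Pe-dependence of the deposition profile described above to be explored numerically.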
Figure 2.14 Schematic of a patterned microchannel geometry with Poiseuille flow profile [START_REF] Chatterjee | Particle transport in patterned cylindrical microchannels[END_REF]
Hence, 3D simulations must be performed once again to develop a complete picture of particle transport in a microchannel. This is also one of the objectives of the present work.
Introduction
Lithium-ion batteries (LIBs) have been predominantly used in portable consumer electronics due to their high specific energy density, long cycle life, and lack of memory effect [START_REF] Tarascon | Issues and challenges facing rechargeable lithium batteries[END_REF] . Furthermore, LIBs are also regarded as one of the most promising power sources for electric vehicles and aerospace systems [START_REF] Etacheri | Challenges in the development of advanced Li-ion batteries: a review[END_REF] . However, improvements in safety are still urgently required for the full acceptance of LIBs in these newly growing application fields. The presence of a combustible electrolyte, as well as an oxidizing agent (lithium oxide cathodes), makes LIBs susceptible to fires and explosions [START_REF] Baginska | Autonomic shutdown of lithium-ion batteries using thermoresponsive microspheres[END_REF] . Once LIBs are subjected to extreme conditions such as short-circuiting, overcharging, or high-temperature thermal impact, exothermic chemical reactions are initiated between the electrodes and the electrolyte, which raise the internal pressure and temperature of the battery [START_REF] Oho | Safety aspects of graphite negative electrode materials for lithium-ion batteries [J][END_REF] . The increased temperature accelerates these reactions and releases heat rapidly through a dangerous positive feedback mechanism, which can lead to thermal runaway, cell cracking, fire or even explosion [START_REF] Wang | Thermal runaway caused fire and explosion of lithium ion battery[END_REF] . During the past two decades, there were more than 100 battery-related air incidents involving fire, extreme heat, or explosion according to the Federal Aviation Administration (FAA) [START_REF] Baginska | Autonomic shutdown of lithium-ion batteries using thermoresponsive microspheres[END_REF] . To prevent catastrophic thermal failure in LIBs, many strategies, including temperature-sensitive electrode materials, positive temperature-coefficient electrodes, and thermal shutdown electrodes [START_REF] Baginska | Enhanced autonomic shutdown of Li-ion batteries by polydopamine coated polyethylene microspheres[END_REF][START_REF] Ji | Temperature-responsive microspheres-coated separator for thermal shutdown protection of lithium ion batteries[END_REF] , have been proposed as self-activating protection mechanisms against thermal runaway. However, these methods often involve either difficult material synthesis or complicated electrode processing, which makes them inconvenient for battery application. Besides, the thick coating layers on the electrodes would decrease the energy density of the batteries and hinder their practical use. From the viewpoint of industrial application, the thermal shutdown separator is therefore the most attractive means of safety protection for LIBs, since it overcomes the above drawbacks [START_REF] Ji | Temperature-responsive microspheres-coated separator for thermal shutdown protection of lithium ion batteries[END_REF] .
Shutdown separators rely on a phase-change mechanism to limit ionic transport via the formation of an ion-impermeable layer between the electrodes [START_REF] Arora | Battery separators[END_REF] . Shutdown separators typically consist of a polypropylene (PP)/polyethylene (PE) bilayer or a PP/PE/PP trilayer structure. In such laminated separators, PE, with the lower melting point, serves as the shutdown agent, while PP, with the higher melting point, serves as the mechanical support [START_REF] Shi | Functional separator consisted of polyimide nonwoven fabrics and polyethylene coating layer for lithium-ion batteries[END_REF] . Once the internal temperature rises to the melting point of PE, the PE layer softens and melts to close off the inner pores, thereby preventing ionic conduction and terminating the electrochemical reaction [START_REF] Baginska | Autonomic shutdown of lithium-ion batteries using thermoresponsive microspheres[END_REF] . The majority of commercial bilayer or trilayer separators are made by laminating different functional layers together by calendaring, adhesion, or welding. The traditional method of making such bilayer or trilayer separators comprises producing the microporous reinforcing layer and the shutdown layer by a stretching process and bonding these microporous layers alternately into bilayer or trilayer membranes. Afterwards, the bilayer or trilayer membranes are stretched into battery separators with the required thickness and porosity [START_REF] Deimede | Separators for lithium-ion batteries: A review on the production processes and recent developments[END_REF] . In addition, other methods to fabricate multilayer separators have been reported in the patent literature. For example, Tabatabaei et al. investigated the fabrication of microporous polypropylene/high-density polyethylene/polypropylene trilayer membranes, as well as monolayer films, using cast extrusion followed by stretching. The tensile properties and puncture resistance were evaluated, and the trilayer membranes showed a lower permeability than the monolayer membranes due to the presence of the interfaces [START_REF] Tabatabaei | Microporous membranes obtained from PP/HDPE multilayer films by stretching [J][END_REF] . Callahan et al. described a multilayer separator fabrication process including the steps of preparing a film precursor by blown film extrusion, annealing the film precursor, and stretching the annealed precursor to form the microporous membrane. The above methods can produce multilayer separators with a thermal shutdown function, and most of them include a stretching process that clearly enhances the mechanical strength of the separators. Nevertheless, the rather cumbersome production process decreases the production efficiency. Furthermore, the stretching process places high demands on the equipment, which increases the production cost. More importantly, these separators suffer significant shrinkage over a rather limited temperature range, with an onset of shrinkage around 100 °C, because of the residual stresses induced during the stretching process and the difference in density between the crystalline and amorphous phases of the separator materials, whereby a potential internal short circuit of the cell could occur [START_REF] Zhang | A review on the separators of liquid electrolyte Li-ion batteries[END_REF] .
To improve the thermal stability of current polyolefin separators, various surface modification approaches have been devised to minimize the shrinkage of separators. For example, the dip-coating of organic polymers or inorganic oxides with excellent thermal stability onto the surface of polyolefin separators has been extensively studied [START_REF] Park | Close-packed poly(methyl methacrylate) nanoparticle arrays-coated polyethylene separators for high-power lithium-ion polymer batteries[END_REF] . Although a layer-coated separator shows significantly improved thermal stability, the coated layer easily falls off when the separator is bent or scratched during the battery assembly process [START_REF] Huang | Lithium ion battery separators: Development and performance characterization of a composite membrane[END_REF] . Moreover, the dip-coating method also has inevitable negative effects, such as a seriously blocked porous structure, unmodified inner pores of the separators, and a significantly increased thickness [START_REF] Zhang | A review on the separators of liquid electrolyte Li-ion batteries[END_REF] , which are unfavorable for the electrochemical performance of the cell. Besides, constructing a heat-resistant skeleton can also solve the thermal shrinkage issue, as in cellulose-based composite nonwoven separators [START_REF] Zhang | Renewable and superior thermal-resistant cellulose-based composite nonwoven as lithium-ion battery separator[END_REF] , porous polyether imide separators [START_REF] Shi | Porous membrane with high curvature, three-dimensional heat-resistance skeleton: a new and practical separator candidate for high safety lithium ion battery[END_REF] , and polymethyl methacrylate colloidal particle-embedded poly(ethylene terephthalate) composite nonwoven separators [START_REF] Wang | Facile fabrication of safe and robust polyimide fibrous membrane based on triethylene glycol diacetate-2-propenoic acid butyl ester gel electrolytes for lithium-ion batteries [J][END_REF] . Even though the above-mentioned separators exhibit improved thermal stability, the complicated manufacturing process makes them more expensive [START_REF] Li | Poly (ether ether ketone) (PEEK) porous membranes with super high thermal stability and high rate capability for lithium-ion batteries[END_REF] . It is therefore essential to find new methods that can optimize the thermal stability, shutdown property and electrochemical performance of polyolefin separators without sacrificing their excellent microporous structure and low cost.
In this chapter, a novel strategy is proposed to prepare multilayer LIB separators comprising alternating microporous PP and PE layers via multilayer coextrusion combined with a CaCO 3 template method. This approach combines the advantages of multilayer coextrusion (continuous process, economic pathway for large-scale fabrication, and tunable layer structures) and of the template method (simple preparation process and tunable pore structure). A key improvement of this approach is that the porous structure is formed by the template method instead of a stretching process, which is beneficial for the thermal stability. Compared to commercial bilayer or trilayer separators, the dimensional shrinkage of these multilayer membranes may therefore be reduced. More significantly, this approach is quite convenient and efficient, avoiding the cumbersome traditional bilayer or trilayer production processes. Moreover, this method is applicable to any melt-processable polymer in addition to PP and PE. The schematic representation of the lab self-designed multilayer coextrusion equipment has been reported in our previous studies [START_REF] Cheng | A processing method with high efficiency for low density polyethylene nanofibers reinforced by aligned carbon nanotubes via nanolayer coextrusion[END_REF] . The preparation principle of porous multilayer membranes through the combination of multilayer coextrusion and the CaCO 3 template method is shown in Figure 3.1. In the layer multiplier, the layered composites were sliced vertically, spread horizontally and recombined sequentially; an assembly of n layer multipliers thus produces a tape with 2^(n+1) layers [START_REF] Obata | Preparation of porous poly(pyrrole) utilizing agar particles as soft template and evaluation of its actuation property[END_REF] . In this study, n was set to 1, so the produced membranes have 4 layers. Afterwards, the multilayer membranes were soaked in diluted hydrochloric acid solution (15 wt%) for 6 h, then washed to remove the excess acid and dried.
The thickness of the separators was measured using an electronic thickness gauge; eight measurements were taken to obtain the average value. The porosity (ε) of the porous membranes was determined by the n-butanol soaking method, in which the weight of the membranes was measured before and after soaking in n-butanol for 2 h at room temperature, and calculated using the following equation,
\varepsilon(\%) = \frac{W - W_0}{\rho_L V_0} \times 100\% \qquad (3\text{-}1)
where W and W_0 are the weights of the n-butanol-soaked membrane and the dry membrane, respectively, ρ_L is the density of n-butanol, and V_0 is the geometric volume of the membrane [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] . The electrolyte uptake (EU) was determined from the weight change of the membrane before and after absorbing the liquid electrolyte, and calculated by the following equation,
EU(\%) = \frac{W_a - W_b}{W_b} \times 100\% \qquad (3\text{-}2)
where W_b and W_a are the weights of the membrane before and after soaking in the electrolyte, respectively [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] . The soaked membrane was then kept in an airtight container for 48 h, and the same calculation as for EU was used to determine its electrolyte retention (ER) after 48 h. The porosity, electrolyte uptake and electrolyte retention of each separator were obtained as the average of five measurements.
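For clarity, the gravimetric quantities defined by equations (3-1) and (3-2) translate directly into the short Python helpers below; the input weights and volume are placeholder numbers, not measured data.

RHO_BUTANOL = 0.81  # g/cm^3, density of n-butanol at room temperature

def porosity(w_wet_g, w_dry_g, volume_cm3, rho_liquid=RHO_BUTANOL):
    """Porosity (%) from the n-butanol soaking method, equation (3-1)."""
    return (w_wet_g - w_dry_g) / (rho_liquid * volume_cm3) * 100.0

def uptake(w_after_g, w_before_g):
    """Electrolyte uptake or retention (%), equation (3-2)."""
    return (w_after_g - w_before_g) / w_before_g * 100.0

# Illustrative membrane: 3 cm x 3 cm x 25 um -> 0.0225 cm^3 (placeholder weights)
print(f"porosity = {porosity(0.0320, 0.0235, 0.0225):.1f} %")
print(f"EU       = {uptake(0.0580, 0.0235):.0f} %")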
The thermal dimensional stability of the membranes at elevated temperature was investigated by treating them at different temperatures for 0.5 h; the thermal shrinkage was determined by calculating the dimensional change (area based, 3×3 cm² squares) before and after the heat treatment using the following equation,
\mathrm{Shrinkage}(\%) = \frac{S_0 - S}{S_0} \times 100\% \qquad (3\text{-}3)
where S_0 and S are the areas of the membrane before and after thermal treatment, respectively [START_REF] Shi | Functional separator consisted of polyimide nonwoven fabrics and polyethylene coating layer for lithium-ion batteries[END_REF] . The electrochemical properties of the separators were determined with a CHI 604B electrochemical workstation. The ionic conductivity (σ) of the electrolyte-soaked separators was calculated as σ = d/(R_b × A) (3-4), where R_b is the bulk impedance, and d and A are the thickness and the contact area between the separator and the electrodes, respectively [START_REF] Zhu | A trilayer poly(vinylidene fluoride)/polyborate/poly(vinylidene fluoride) gel polymer electrolyte with good performance for lithium ion batteries[END_REF] . The electrochemical stability was determined by linear sweep voltammetry (LSV) at a scanning rate of 5 mV s⁻¹ over a voltage range from 2 to 7 V, using stainless steel and lithium metal as the working and counter electrodes, respectively [START_REF] Liao | Novel cellulose aerogel coated on polypropylene separators as gel polymer electrolyte with high ionic conductivity for lithium-ion batteries [J][END_REF] . The lithium-ion transference number was calculated from the chronoamperometry profile with a step potential of 10 mV, sandwiching the separator between two lithium metal electrodes. The battery performance of the membranes was evaluated by assembling unit cells in which the separator, wetted by the liquid electrolyte, was sandwiched between a LiFePO 4 cathode (LiFePO 4 /acetylene black/PVDF = 80/10/10, w/w/w) and a lithium metal anode [START_REF] Liu | High performance hybrid Al 2 O 3 /poly(vinyl alcohol-co-ethylene) nanofibrous membrane for lithium-ion battery separator[END_REF] . The discharge capacity, cycle performance, and rate performance of the cells were measured on a multichannel battery tester (LANHE, LAND 2001A, Wuhan, China) in the voltage window of 2.0-4.2 V at ambient conditions. The charge-discharge cycling test was performed at a current density of 0.2 C for 50 cycles, and the C-rate capability measurements were performed at current rates from 0.2 to 2 C.
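Likewise, equation (3-3) and the conductivity relation σ = d/(R_b A) indicated by the variable definitions above can be written as the following Python helpers (placeholder inputs only):

def thermal_shrinkage(area_before_cm2, area_after_cm2):
    """Area-based thermal shrinkage (%), equation (3-3)."""
    return (area_before_cm2 - area_after_cm2) / area_before_cm2 * 100.0

def ionic_conductivity(thickness_cm, bulk_resistance_ohm, contact_area_cm2):
    """Ionic conductivity (S/cm) from the EIS bulk resistance, equation (3-4)."""
    return thickness_cm / (bulk_resistance_ohm * contact_area_cm2)

print(f"shrinkage    = {thermal_shrinkage(9.0, 6.5):.1f} %")            # 3x3 cm sample after heating
print(f"conductivity = {ionic_conductivity(25e-4, 2.0, 2.0):.2e} S/cm")  # placeholder R_b and area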
Results and discussion
The surface and cross-sectional morphologies of the MC-CTM PP/PE are shown in Figure 3.2a and 3.2b. According to the SEM images, the membrane exhibits an abundant three-dimensional spherical porous structure. The typical pore size is sub-micrometer with a certain uniformity. The abundant porous structure is essential for absorbing more electrolyte, which allows the transport of lithium ions and results in a high ionic conductivity and lithium-ion transference number. Moreover, the sub-micron pore size makes the membrane promising as a critical component for avoiding self-discharge and internal short circuits and thus improving safety [START_REF] Wen | Sustainable and superior heat-resistant alginate nonwoven separator of LiNi 0.5 Mn 1.5 O 4 /Li batteries operated at 55 degrees C [J[END_REF] . In addition, the basic properties of the MC-CTM PP/PE and the commercial Celgard ® separator are summarized in Table 3.1. Normally, separators should be relatively thin to leave sufficient space for active materials, to increase the battery efficiency and to provide a lower resistance [START_REF] Prasanna | Physical, thermal, and electrochemical characterization of stretched polyethylene separators for application in lithium-ion batteries[END_REF] . The thickness of both separators is around 25 µm, which meets the requirements of commercial LIBs. Besides, the MC-CTM PP/PE exhibits a porosity of 46.8%, higher than that of the Celgard ® separator (36.7%). The higher porosity is closely related to the abundant interconnected porous structure in Figure 3.2, which provides a better reservoir for the liquid electrolyte. The electrolyte uptake of MC-CTM PP/PE is 148% (Table 3.1), 30% higher than that of the Celgard ® separator. Moreover, the electrolyte retention of MC-CTM PP/PE is 50% higher than that of the Celgard ® separator. This increment is even larger than that of the electrolyte uptake, which can be attributed to the unique porous structure. Celgard ® separators are produced by uniaxial stretching technology, which gives needle-like elliptic pores uniformly distributed along the dry-stretching direction, and the relatively lower effective porosity and pore structure restrict the electrolyte storage capacity [START_REF] Liu | High performance hybrid Al 2 O 3 /poly(vinyl alcohol-co-ethylene) nanofibrous membrane for lithium-ion battery separator[END_REF] . In contrast, the separator prepared in this work possesses an abundant three-dimensional interconnected spherical pore structure, which clearly enhances the electrolyte uptake and retention. The room-temperature ionic conductivities (Table 3.1) of both separators are of the same order of magnitude, but the value for MC-CTM PP/PE is 30% higher owing to the improved electrolyte uptake [START_REF] An | Multilayered separator based on porous polyethylene layer, Al2O3 layer, and electro-spun PVdF nanofiber layer for lithium batteries[END_REF] . The mechanical strength is another important property of separators. The tensile strength of MC-CTM PP/PE is displayed in Figure 3.3. It can be seen from the stress-strain curve that the tensile strength of the separator is 13 MPa. Although this value is lower than the tensile strength along the stretching direction of commercial Celgard ® 2325 separators, it is still sufficient for use as a separator membrane in LIBs [START_REF] Li | A dense cellulose-based membrane as a renewable host for gel polymer electrolyte of lithium ion batteries [J][END_REF] .
The temperature dependence of the ionic conductivity of the electrolyte-soaked separators was further investigated, as shown in Figure 3.4. The ionic conduction of both separators exhibits typical Arrhenius behavior, i.e., the ionic conductivity gradually increases with temperature, indicating more charge carriers and/or higher ionic mobility. This behavior suggests that the mechanism of ionic conduction is the same for the two separators [START_REF] Blake | 3D printable ceramic-polymer electrolytes for flexible high-performance li-ion batteries with enhanced thermal stability[END_REF] . According to the Arrhenius equation [START_REF] Zhang | Porous poly(vinylidene fluoride-co-hexafluoropropylene) polymer membrane with sandwich-like architecture for highly safe lithium ion batteries[END_REF] ,
\sigma = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right) \qquad (3\text{-}5)
where σ_0 is the pre-exponential factor, E_a is the activation energy, k_B is the Boltzmann constant, and T is the absolute temperature. E_a can be calculated from the slope of the lines in Figure 3.4. The calculations yielded E_a values of 6.5 kJ mol⁻¹ and 7.4 kJ mol⁻¹ for MC-CTM PP/PE and the Celgard ® separator, respectively. The E_a value is indicative of the overall movement of anions and cations [START_REF] Liao | Novel cellulose aerogel coated on polypropylene separators as gel polymer electrolyte with high ionic conductivity for lithium-ion batteries [J][END_REF] . The results indicate that the transport of ions in the electrolyte-soaked MC-CTM PP/PE is slightly easier than in the Celgard ® separator.
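The quoted activation energies follow from the slope of ln σ versus 1/T; a small Python sketch of that fit, using made-up conductivity data of a realistic order of magnitude, is given below.

import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro number, 1/mol

def activation_energy_kj_per_mol(T_kelvin, sigma):
    """Fit ln(sigma) = ln(sigma0) - Ea/(kB*T) and return Ea in kJ/mol."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(np.asarray(sigma)), 1)
    return -slope * K_B * N_A / 1000.0

# Made-up temperature series (K) and conductivities (S/cm) for illustration only
T = [298, 313, 328, 343, 358]
sigma = [6.2e-4, 7.1e-4, 8.0e-4, 8.9e-4, 9.8e-4]
print(f"Ea ~ {activation_energy_kj_per_mol(T, sigma):.1f} kJ/mol")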
The essential function of a separator within a battery is to prevent direct contact between the positive and negative electrodes while enabling ion transport between them. Thus, separators should be mechanically and chemically stable inside the battery in the charged and discharged states and at elevated temperatures; otherwise, a short circuit would generate considerable heat and cause potential thermal runaway, or even combustion or explosion. Therefore, the dimensional thermo-stability of the separator, one of the most important factors for high-power batteries [START_REF] Li | Poly (ether ether ketone) (PEEK) porous membranes with super high thermal stability and high rate capability for lithium-ion batteries[END_REF] , was studied by treating the separators at a given temperature for 0.5 h and then measuring the dimensional change before and after the heat treatment, as shown in Figure 3.5. It can be seen in Figure 3.5a that Celgard ® 2325 easily loses its dimensional stability after exposure to temperatures above 100 °C, and exhibits a remarkable dimensional shrinkage of 28% after storage at 170 °C for 0.5 h. As shown in Figure 3.5b, the shrinkage of the Celgard ® separator occurs mainly in the stretching direction, with little shrinkage in the transverse direction. The large shrinkage can be explained by the shape-recovery behavior resulting from the multiple stretching steps used to induce adequate porosity during manufacture, which makes such separators prone to losing dimensional stability after exposure to temperatures above 100 °C [START_REF] Blake | 3D printable ceramic-polymer electrolytes for flexible high-performance li-ion batteries with enhanced thermal stability[END_REF] . In contrast, during the same test, MC-CTM PP/PE displayed much better thermal stability, with no obvious dimensional shrinkage until 160 °C. This improvement in thermo-stability can be attributed to the fabrication method, which avoids the stretching process [START_REF] Zheng | Hybrid silica membranes with a polymer nanofiber skeleton and their application as lithium-ion battery separators[END_REF] . Besides, the residual CaCO 3 nanoparticles can act as a rigid skeleton to prevent thermal shrinkage of the membranes. The variation of the impedance with temperature is plotted in Figure 3.6a. The dotted line corresponds to a cell containing a commercial PE separator. The shutdown in this cell occurs at around 130 °C, as indicated by the sharp rise in impedance at this temperature, which is caused by the melted PE blocking off the pores of the separator. This substantially slows down the ionic conduction between the electrodes and eventually cuts off the electrode reactions at elevated temperature [START_REF] Ji | Temperature-responsive microspheres-coated separator for thermal shutdown protection of lithium ion batteries[END_REF] .
After that, the impedance of the PE separator exhibits a sudden drop at about 140 °C, which indicates that the melted PE shrinks and loses its mechanical integrity and can therefore no longer keep the cathode and anode apart; an electrical short circuit is then unavoidable. At shutdown, the impedance increases to roughly three orders of magnitude above its room-temperature value, which is large enough for complete shutdown, i.e., the separator can effectively stop ionic transport between the electrodes, and the cell cannot continue to be overcharged even in the absence of external heat sources. Therefore, in this work, 1000 ohm is selected as a general criterion to define the shutdown window. For the cell containing a commercial PP/PE/PP separator (dashed line), the shutdown temperature is similar to that of the PE separator, while the meltdown temperature increases to 153 °C due to the presence of the PP layers [START_REF] Venugopal | Characterization of microporous separators for lithium-ion batteries[END_REF] . This meltdown temperature is nevertheless lower than the melting point of PP, which is attributed to the severe shrinkage resulting from the residual stresses induced during the stretching process. The PP/PE multilayer separators prepared in this work (solid line) provide a wider temperature window, from 127 to 165 °C. In this system, the low-melting PE layer acts as a thermal fuse and blocks the pathway of ions when the temperature approaches its melting temperature [START_REF] Zhang | Progress in polymeric separators for lithium ion batteries[END_REF] . At the same time, the higher-melting PP layer retains the dimensional structure and mechanical strength, and thus prevents short circuiting between the two electrodes [START_REF] Zhang | Progress in polymeric separators for lithium ion batteries[END_REF] . The temperature window is wider than that of the commercial PP/PE/PP separators, which can be explained as follows. On the one hand, the dimensional thermo-stability of MC-CTM PP/PE is much better than that of the commercial separator (Figure 3.5), which leads to a higher rupture temperature. This result indicates that, compared to the stretching method, the CaCO 3 template method helps prevent the distortion of the separators at high temperature to some extent. On the other hand, the stretching process of commercial separators promotes the crystallization of the polymers, and the higher crystallinity will increase the melting point.
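Using the 1000 Ω criterion adopted above, the shutdown and meltdown temperatures can be read off an impedance-temperature curve programmatically; the Python sketch below uses a synthetic heating curve that only mimics the qualitative shape of Figure 3.6a.

def shutdown_window(temps_c, impedance_ohm, threshold=1000.0):
    """Return (shutdown T, meltdown T): first crossing above the threshold on heating,
    and the temperature at which the impedance falls back below it (membrane rupture)."""
    t_shut = t_melt = None
    for t, z in zip(temps_c, impedance_ohm):
        if t_shut is None and z >= threshold:
            t_shut = t
        elif t_shut is not None and z < threshold:
            t_melt = t
            break
    return t_shut, t_melt

# Synthetic heating curve (degC, ohm) roughly mimicking a PP/PE multilayer cell
temps = [100, 110, 120, 127, 135, 145, 155, 165, 170]
imped = [40, 45, 60, 1500, 4000, 5000, 4500, 800, 90]
print(shutdown_window(temps, imped))   # -> (127, 165)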
The transport of lithium cations is essential for battery operation at high current densities. The major drawbacks of dual-ion conduction in lithium-ion batteries are the low Li+ transference number and the polarization of the electrodes caused by the moving anions. Moreover, the mobile anions also take part in undesirable side reactions at the electrodes, which can directly affect the performance of the batteries. It is therefore necessary to improve the lithium-ion transference number to obtain good battery performance. The mobility of Li+ ions is estimated by chronoamperometry through the lithium-ion transference number (t+), obtained as the ratio of the final to the initial current values after and before chronoamperometry, as shown in Figure 3.7. The higher t+ indicates a larger real ionic conductivity of Li+ ions for MC-CTM PP/PE than for the commercial separator, which is attributed to the larger porosity and the liquid electrolyte retained in the pores [START_REF] Liao | Novel cellulose aerogel coated on polypropylene separators as gel polymer electrolyte with high ionic conductivity for lithium-ion batteries [J][END_REF] . The electrochemical stability of separators is one of the most important features for their application in battery systems, and a large electrochemical window is an important aspect of high battery performance [START_REF] Fu | Flexible, solid-state, ion-conducting membrane with 3D garnet nanofiber networks for lithium batteries[END_REF] . Figure 3.8 shows the LSV profiles of MC-CTM PP/PE and Celgard ® 2325, using lithium metal as the counter and reference electrode and stainless steel as the working electrode. The voltage corresponding to the onset of a steady increase in the observed current density indicates the limit of electrochemical stability of the electrolyte-soaked separators, at which oxidative decomposition of the electrolyte-soaked separator occurs [START_REF] Li | Poly (ether ether ketone) (PEEK) porous membranes with super high thermal stability and high rate capability for lithium-ion batteries[END_REF] . According to the results
shown in Figure 3.8, the electrolyte-soaked Celgard ® separator exhibits an anodic stability up to 4.22
V versus Li+/Li, while the anodic stability of MC-CTM PP/PE increases to 5.1 V. This enhancement may be attributed to the improved electrolyte affinity of MC-CTM PP/PE, since the free solvent molecules in the liquid electrolyte tend to decompose on the cathode of a lithium-ion battery; the excellent electrolyte uptake of the separator can reduce this decomposition [START_REF] Fu | Flexible, solid-state, ion-conducting membrane with 3D garnet nanofiber networks for lithium batteries[END_REF] . This result demonstrates that MC-CTM PP/PE has a wider electrochemical window than the Celgard ® separator, indicating better electrochemical stability and safety performance. The electrochemical performances were investigated using cells consisting of the electrolyte-soaked MC-CTM PP/PE or Celgard ® 2325 sandwiched between two electrodes, with LiFePO 4 as the working electrode and lithium metal as the counter and reference electrode. The discharge behavior of lithium-ion batteries mainly depends on the electrode material, the electrolyte, and the separator; in this work, the only varying factor is the separator [START_REF] Tang | Study of the effect of a novel high-performance gel polymer electrolyte based on thermoplastic polyurethane/poly(vinylidene fluoride)/polystyrene and formed using an electrospinning technique[END_REF] . Figure 3.9a depicts the 1st, 10th and 50th discharge profiles of cells assembled with MC-CTM PP/PE and the Celgard ® separator, at a current density of 0.2 C and voltages ranging from 4.2 to 2 V. The discharge curves are similar to those generally observed for lithium-ion batteries, which implies a good contact between the electrodes and the separators. The typical flat voltage plateau around 3.4 V is consistent with the reported two-phase coexistence reaction of the LiFePO 4 cathode [START_REF] Yamada | Room-temperature miscibility gap in Li x FePO 4 [J][END_REF] , and reflects the reversible charge-discharge cycling behavior of the LiFePO 4 cathode material. Furthermore, the specific discharge capacity with MC-CTM PP/PE (135 mAh g⁻¹) is slightly higher than that with the Celgard ® separator (131 mAh g⁻¹), which indicates faster and easier lithium-ion transport between the electrodes and a lower interfacial impedance [START_REF] Nunes-Pereira | Optimization of filler type within poly(vinylidene fluoride-co-trifluoroethylene) composite separator membranes for improved lithium-ion battery performance[END_REF] .
Cycling performance is used to evaluate the performance of a battery in the long run. Figure 3.9b shows the variation of the capacity retention with cycle number (up to 50 cycles) for cells with different separators at a current density of 0.2 C. The cell with MC-CTM PP/PE shows cycling behavior similar to the one with the Celgard ® separator. Both of them are relatively stable throughout 50 cycles with small performance degradation, indicating stable cycle performance. In addition, compared to the battery with the PE membrane, the battery with MC-CTM PP/PE achieves a slightly higher discharge capacity over the 50 charge-discharge cycles, and the gap between the two curves becomes larger with increasing cycle number. The difference is probably due to the higher affinity of MC-CTM PP/PE for the liquid electrolyte. Normally, it is the electrode materials that play the major role in both the energy density and capacity retention of the cell [START_REF] Agubra | ForceSpinning of polyacrylonitrile formassproduction of lithiumionbattery separators[END_REF] . However, the electrolyte uptake and ionic conductivity of the separator can also influence the capacity of the cell, since the separator acts both as the transport medium for the ions taking part in the electrochemical reaction and as a barrier separating the two electrodes. A large liquid electrolyte uptake can wet the electrode materials more thoroughly and provide high ionic conductivity [START_REF] Zhang | Porous poly(vinylidene fluoride-co-hexafluoropropylene) polymer membrane with sandwich-like architecture for highly safe lithium ion batteries[END_REF] , which favors intercalation and deintercalation of lithium ions at the cathode and thus results in a higher discharge capacity. Furthermore, the unique spherical pore structure of MC-CTM PP/PE can help retain the electrolyte for a longer time, resulting in less capacity loss during long-term discharge and charge [START_REF] Zhang | Porous poly(vinylidene fluoride-co-hexafluoropropylene) polymer membrane with sandwich-like architecture for highly safe lithium ion batteries[END_REF] . This result suggests that the cell with MC-CTM PP/PE exhibits better reversibility.
The rate performance is also of great significance for batteries. Figure 3.9c compares the rate behavior of both separators with C-rates increasing from 0.2 to 2.0 C every five cycles. It is noteworthy that MC-CTM PP/PE presents larger discharge capacities over the various discharge current densities from 0.2 to 2 C, reflecting higher cathode utilization and better discharge C-rate capability. The difference in the discharge capacities between the two separators becomes larger at higher current densities, where the influence of ionic transport on the ohmic polarization is more significant [START_REF] Jeong | A novel poly(vinylidene fluoride-hexafluoropropylene) / poly(ethylene terephthalate) composite nonwoven separator with phase inversion-controlled microporous structure for a lithium-ion battery[END_REF] .
For both separators, the discharge capacities of the cells gradually decrease with increasing discharge current density, which reflects the higher energy loss resulting from fast ion motion and higher polarization at higher current densities [START_REF] Chi | Excellent rate capability and cycle life of Li metal batteries with ZrO 2 /POSS multilayer-assembled PE separators[END_REF] . The main factors by which separators affect the C-rate capacity of cells are the ionic conductivity and the Li-ion transfer through the separator/electrode interface [START_REF] Wang | Self-assembly of PEI/SiO 2 on polyethylene separators for Li-ion batteries with enhanced rate capability[END_REF] . As discussed above, the combination of increased electrolyte conductivity and higher Li-ion transference number of MC-CTM PP/PE allows faster mobility of Li-ions inside the separator and reduces the polarization caused by the counter anions. Furthermore, the smaller interfacial impedance of MC-CTM PP/PE with the lithium electrode means better contact between the separator surface and the electrode, facilitating Li-ion diffusion through the separator/electrode interface. These advantageous characteristics contribute to the higher discharge capacity of the cells with the PP/PE multilayer separator [START_REF] Zhang | Renewable and superior thermal-resistant cellulose-based composite nonwoven as lithium-ion battery separator[END_REF] . Such ideal cycle and rate performances make MC-CTM PP/PE promising for use in lithium ion batteries with fast and enhanced performance. Improvements in safety are still urgently required for the wider acceptance of lithium-ion batteries (LIBs), especially in newly growing application fields such as electric vehicles and aerospace systems [START_REF] Liu | Enhancement on the thermostability and wettability of lithium-ion batteries separator via surface chemical modification[END_REF] . The shutdown function of separators is a useful strategy for the safety protection of LIBs by preventing thermal runaway reactions. Compared to single-layer separators, polypropylene (PP)/polyethylene (PE) bilayer or trilayer separators are expected to provide a wider shutdown window by combining the lower melting temperature of PE with the high melting temperature and high strength of PP [START_REF] Venugopal | Characterization of microporous separators for lithium-ion batteries[END_REF] . The traditional method to prepare such bilayer or trilayer separators is to bond pre-stretched microporous monolayer membranes into bilayer or trilayer membranes by calendering, adhesion or welding, and then stretch them to obtain the required thickness and porosity [START_REF] Deimede | Separators for lithium-ion batteries: A review on the production processes and recent developments[END_REF] , which enhances the mechanical strength but decreases the production efficiency.
Moreover, such separators suffer significant shrinkage at high temperature due to the residual stresses induced during the stretching process, whereby a potential internal shorting of the cell could occur [START_REF] Zhang | A review on the separators of liquid electrolyte Li-ion batteries[END_REF] . During the past decades, various modifications have been devoted to improving the dimensional thermo-stability of separators, including surface dip-coating of organic polymers or inorganic oxides [START_REF] Jeong | Closely packed SiO 2 nanoparticles/poly(vinylidene fluoridehexafluoropropylene) layers-coated polyethylene separators for lithium-ion batteries[END_REF] , and chemical surface grafting. However, the coated layers easily fall off when the separators are bent or scratched during the battery assembly process [START_REF] Huang | Lithium ion battery separators: Development and performance characterization of a composite membrane[END_REF] . Besides, most of the above approaches focus on modifying or reinforcing existing separators [START_REF] Zhang | The Effect of Silica Addition on the Microstructure and Properties of Polyethylene Separators Prepared by Thermally Induced Phase Separation[END_REF] , which makes the manufacturing process more complicated and the separators more expensive. It is thus essential to find new solutions that can optimize the thermal stability, shutdown property, and electrochemical performance without sacrificing a convenient and cost-effective preparation process.
Multilayer coextrusion (MC) represents an advanced polymer processing technique which is capable of economically and continuously producing multilayer materials [START_REF] Cheng | A processing method with high efficiency for low density polyethylene nanofibers reinforced by aligned carbon nanotubes via nanolayer coextrusion[END_REF] . Thermally induced phase separation (TIPS) is a widely used manufacturing process for commercial LIB separators [START_REF] Shi | Improving the properties of HDPE based separators for lithium ion batteries by blending block with copolymer PE-b-PEG[J][END_REF] ,
which is based on a rule that a polymer is miscible with a diluent at high temperature, but demixes at low temperature. The separators prepared by TIPS process show well-controlled and uniform pore structure, high porosity, and good modifiability [START_REF] Shi | Improving the properties of HDPE based separators for lithium ion batteries by blending block with copolymer PE-b-PEG[J][END_REF] . To the best of our knowledge, no studies have been reported on the combination of the above two methods to prepare multilayer porous separators.
Thus, in the present chapter, a novel strategy is proposed to prepare multilayer LIB separators comprising alternating microporous PP and PE layers via the combination of multilayer coextrusion and the TIPS method, aiming to combine the advantages of both methods. Besides, in this work the TIPS system is a binary one, comprising one polymer (PP or PE) and one diluent (paraffin), so that only one extractant (petroleum ether) is needed. Hence, the extractant is recyclable and offers higher reproducibility, which is of critical importance for cost reduction and environmental protection [START_REF] Kim | Microporous PVDF membranes via thermally induced phase separation (TIPS) and stretching methods[END_REF] . Another key benefit of this method is that the one-step route provides a more efficient way for large-scale fabrication of multilayer separators with high porosity. More significantly, the porous structure is formed without the traditional stretching process, which is in favor of the dimensional thermo-stability. The thermal shutdown property and thermal stability of the resultant separators are expected to be obviously improved compared to the commercial bilayer or trilayer separators. The as-prepared separators are also expected to exhibit a cellular-like and submicron-grade porous structure, sufficient electrolyte uptake, high ionic conductivity, and good battery performance. These advantages make such multilayer membranes a promising alternative to the commercialized bilayer or trilayer LIB separators at elevated temperatures.
Preparation of porous PP/PE multilayer membranes
Prior to the multilayer coextrusion, paraffin wax as diluent and PP (or PE) resin were mixed at a mass ratio of 55:45 and fed into the twin-screw extruder to prepare the PP/paraffin and PE/paraffin masterbatches. The schematic illustration of the preparation process through the combination of MC and TIPS is shown in Figure 4.1. The mechanism of the multilayer coextrusion has been reported in our previous work [START_REF] Cheng | A processing method with high efficiency for low density polyethylene nanofibers reinforced by aligned carbon nanotubes via nanolayer coextrusion[END_REF] . In this study, the layer number is chosen to be 4. After the multilayer coextrusion, the extruded multilayer membranes were immediately immersed in water at 20 o C to conduct the thermally induced phase separation. The principle of TIPS is that a homogeneous polymer/diluent system becomes thermodynamically unstable when the solution is rapidly cooled below the binodal solubility temperature. First, the initial phase-separated structure is formed through the nucleation and growth process; then the phase-separated droplets come together during the spinodal decomposition, proceeding to minimize the interfacial free energy [START_REF] Akbarzadeh | Effects of processing parameters in thermally induced phase separation technique on porous architecture of scaffolds for bone tissue engineering[END_REF] . This coarsening process is induced by a differential interfacial tension between the polymer-lean and polymer-rich phases due to the reduction in surface energy associated with the interfacial area [START_REF] Barroca | Tailoring the morphology of high molecular weight PLLA scaffolds through bioglass addition[END_REF] . The two-phase structures formed by the phase separation are the prototype of the pore structures. After the multilayer coextrusion and the thermally induced phase separation, the membrane was immersed in petroleum ether (acting as the extractant) for 6 h to extract the diluent and subsequently dried at 60 o C for 12 h; the obtained PP/PE multilayer separator is referred to as MC-TIPS PP/PE. The surface and the fractured surface of the porous membranes were observed with a field emission scanning electron microscope (FESEM, QUANTA 250FEG, FEI Co.). The pore size distribution was determined from the randomly measured sizes of the pores in the FESEM images using dimensional analysis software (Nano Measure V.1.2.5). The multilayer configuration was observed with an optical microscope (Lake Success, NY). The thickness of the separator was measured using an electronic thickness gauge, with eight measurements averaged. The tensile strength of the separators was measured on a WDT30 universal material testing machine (Kailiqiang Machinery Co.) according to ASTM D882-09, with a stretching rate of 50 mm min -1 . Differential scanning calorimetry (DSC) was performed on an STA 449 F3 Jupiter (Netzsch) at a heating rate of 5 o C min -1 . The porosity (ε) of the separators was determined by the n-butanol soaking method: the weight of the separators was measured before and after soaking in n-butanol for 2 h at room temperature, and the porosity was calculated by the following equation,
$\varepsilon(\%) = \dfrac{W - W_0}{\rho_L V_0} \times 100\%$    (4-1)
where W and W 0 are the weights of the n-butanol-soaked and dry separators, respectively, ρ L is the density of n-butanol, and V 0 is the geometric volume of the separators [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] . The electrolyte uptake (EU) was calculated using the following equation,
$EU(\%) = \dfrac{W_a - W_b}{W_b} \times 100\%$    (4-2)
where W b and W a are the weights of the separator before and after soaking in the electrolyte, respectively [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] . Afterwards the soaked separators were kept in an airtight container for 48 h, and the same method was adopted to calculate the electrolyte retention (ER) after 48 h. The porosity, electrolyte uptake and electrolyte retention of the separators were obtained as the average of five measurements. The thermal dimensional stability of the separators at elevated temperature was investigated by treating them at different temperatures for 0.5 h; the thermal shrinkage was determined by calculating the dimensional change (area based, 3 cm square) before and after the heat treatment using the following equation,
$\mathrm{Shrinkage}(\%) = \dfrac{S_0 - S}{S_0} \times 100\%$    (4-3)
where S o and S are the area of the separators before and after thermal treatment, respectively [START_REF] Li | Poly (ether ether ketone) (PEEK) porous membranes with super high thermal stability and high rate capability for lithium-ion batteries[END_REF] .
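For illustration, the three quantities defined by equations (4-1)-(4-3) can be computed directly from the weighed masses and measured dimensions. The short sketch below is only a minimal example of these formulas; all numerical inputs are hypothetical placeholders rather than measured values from this work.

```python
# Minimal sketch of equations (4-1)-(4-3); all sample values are hypothetical.

def porosity_percent(w_wet_g, w_dry_g, rho_butanol_g_cm3, v_membrane_cm3):
    """Porosity (%) from n-butanol uptake, eq. (4-1)."""
    return (w_wet_g - w_dry_g) / (rho_butanol_g_cm3 * v_membrane_cm3) * 100.0

def electrolyte_uptake_percent(w_before_g, w_after_g):
    """Electrolyte uptake (%) relative to the dry separator weight, eq. (4-2)."""
    return (w_after_g - w_before_g) / w_before_g * 100.0

def thermal_shrinkage_percent(area_before_cm2, area_after_cm2):
    """Area-based thermal shrinkage (%), eq. (4-3)."""
    return (area_before_cm2 - area_after_cm2) / area_before_cm2 * 100.0

# Hypothetical example: 3 cm x 3 cm sample, 25 um thick, rho(n-butanol) ~ 0.81 g/cm3
print(porosity_percent(0.0412, 0.0305, 0.81, 3 * 3 * 25e-4))
print(electrolyte_uptake_percent(0.0305, 0.0558))
print(thermal_shrinkage_percent(9.0, 8.6))
```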
The electrochemical properties of the separators were determined on a CHI 604C electrochemical workstation. The ionic conductivity (σ) was calculated from the measured bulk impedance as $\sigma = \dfrac{d}{R_b A}$    (4-4), where d is the thickness of the separator, A is the contact area between the separator and the electrodes, and R b is the bulk impedance [START_REF] Zhu | A trilayer poly(vinylidene fluoride)/polyborate/poly(vinylidene fluoride) gel polymer electrolyte with good performance for lithium ion batteries[END_REF] . The lithium ion transference number (t + ) was calculated from the chronoamperometry profile with a step potential of 10 mV by sandwiching the separator between two lithium metal (LM) electrodes [START_REF] Zhu | A trilayer poly(vinylidene fluoride)/polyborate/poly(vinylidene fluoride) gel polymer electrolyte with good performance for lithium ion batteries[END_REF] . To measure the electrochemical stability of the separators, a SS/separator/LM coin cell was assembled and examined by linear sweep voltammetry (LSV) at a scanning rate of 5 mV s -1 over the voltage range from 2 to 6 V. The battery performance of the separators was measured by assembling coin cells with the separator sandwiched between a LiFePO 4 cathode (LiFePO 4 /acetylene black/PVDF = 80/10/10, w/w/w) and a LM anode [START_REF] Liu | High performance hybrid Al 2 O 3 /poly(vinyl alcohol-co-ethylene) nanofibrous membrane for lithium-ion battery separator[END_REF] . The tests were carried out on a multichannel battery tester (LANHE, LAND 2001A, Wuhan, China) in the voltage window of 2.0-4.2 V. The charge and discharge cycling test was performed for 50 cycles at a current density of 0.2 C, and the C-rate capability measurements were performed at current rates of 0.2 to 2 C.
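As a complementary illustration of equation (4-4), the sketch below converts a measured bulk resistance into an ionic conductivity; the separator thickness, electrode diameter and resistance used here are hypothetical values chosen only to show the unit handling.

```python
import math

def ionic_conductivity_S_per_cm(thickness_cm, bulk_resistance_ohm, electrode_diameter_cm):
    """sigma = d / (R_b * A), eq. (4-4), with A the separator/electrode contact area."""
    area_cm2 = math.pi * (electrode_diameter_cm / 2.0) ** 2
    return thickness_cm / (bulk_resistance_ohm * area_cm2)

# Hypothetical example: 25 um thick separator, R_b = 2.1 ohm, 16 mm diameter electrode
sigma = ionic_conductivity_S_per_cm(25e-4, 2.1, 1.6)
print(f"sigma = {sigma * 1e3:.3f} mS cm-1")
```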
Results and discussion
The porous structure is a key characteristic for LIB separators. Figure 4.2 shows the surface and cross-sectional morphology of MC-TIPS PP/PE. According to the FESEM images, the separator possesses a cellular-like pore structure, which is the typical structure formed by the liquid-liquid TIPS process [START_REF] Liao | Novel cellulose aerogel coated on polypropylene separators as gel polymer electrolyte with high ionic conductivity for lithium-ion batteries [J][END_REF] . In addition, the separator also exhibits several elongated pores, which are attributed to the coalescence and deformation of a number of diluent droplets [START_REF] Laxminarayan | Effect of initial composition, phase separation temperature and polymer crystallization on the formation of microcellular structures via thermally induced phase separation[END_REF] . Figure 4.2 also displays the pore size distributions of the surface and cross-section of MC-TIPS PP/PE; the average pore size is around 500 nm in both cases. The relatively small pore size and uniform pore size distribution are attributed to the fast quenching in the water bath, so that the liquid droplets do not have enough time to grow bigger before the solidification of the matrix [START_REF] Ji | PVDF porous matrix with controlled microstructure prepared by TIPS process as polymer electrolyte for lithium ion battery[END_REF] . The pore size has a major impact on preventing the penetration of particles from the cathode/anode through the separator and on the inhibition of lithium dendrite growth. The submicron-grade pore size of MC-TIPS PP/PE is suitable for LIBs, since it balances ionic conductivity and electrical insulation well. Besides, the uniform pore size distribution can suppress the growth of lithium dendrites on the anode and ensure a uniform current distribution throughout the separator, which avoids performance losses arising from non-uniform current densities [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] . The cross-sectional multilayer configuration of MC-TIPS PP/PE can be observed in the corresponding optical microscope image. Such a porous, multilayered structure will definitely provide a better reservoir for the liquid electrolyte and thus enhance the ionic conductivity [START_REF] Wu | PVDF/PAN blend separators via thermally induced phase separation for lithium ion batteries[END_REF] . As shown in Table 4.1, the electrolyte uptake, electrolyte retention, and ionic conductivity of MC-TIPS PP/PE at room temperature are all higher than those of the Celgard ® separator.
Figure 4.3 shows the stress-strain curve of MC-TIPS PP/PE. It can be seen that the tensile strength is 11.3 MPa; although this value is much lower than that of the commercial separators, it is still sufficient for LIB separators [START_REF] Li | A dense cellulose-based membrane as a renewable host for gel polymer electrolyte of lithium ion batteries [J][END_REF] . The temperature dependence of the ionic conductivity of the two electrolyte-soaked separators is plotted in Figure 4.4; the similar linear behavior suggests that the mechanism of ionic conduction is the same for these two separators [START_REF] Blake | 3D printable ceramic-polymer electrolytes for flexible high-performance li-ion batteries with enhanced thermal stability[END_REF] . According to the Arrhenius equation,
$\sigma = \sigma_0 \exp\!\left(-E_a / k_B T\right)$    (4-5)
where σ 0 is the pre-exponential factor, E a is the activation energy related to ionic mobility, k B is the Boltzmann constant, and T is the absolute temperature. E a is indicative of the total movement of anions and cations and can be calculated from the slope of the lines in Figure 4.4. It is worth noticing that the E a value of MC-TIPS PP/PE (6.14 kJ mol -1 ) is lower than that of the Celgard ® separator (7.4 kJ mol -1 ). The result confirms that the transport of ions through electrolyte-soaked MC-TIPS PP/PE is a little easier than that in the Celgard ® separator [START_REF] Chai | A high-voltage poly(methylethyl a-cyanoacrylate) composite polymer electrolyte for 5 V lithium batteries[END_REF] . Preventing physical contact of the positive and negative electrodes is one of the most important functions of the separator. Thus the separator should be mechanically and chemically stable inside the battery in the charged and discharged states and at elevated temperatures. Otherwise, a short circuit would occur and generate a large amount of heat, causing potential thermal runaway and even combustion or explosion. In order to investigate the thermal-resistant characteristics of the separator, the thermal shrinkage behavior was observed by measuring the dimensional change (area-based) after storing the separator at a series of temperatures from 30 to 170 o C for 0.5 h [START_REF] Shi | Functional separator consisted of polyimide nonwoven fabrics and polyethylene coating layer for lithium-ion batteries[END_REF] . The results are shown in Figure 4.5: MC-TIPS PP/PE exhibits much smaller shrinkage than the commercial separator at elevated temperatures. This improvement in thermo-stability can be attributed to the fabrication method, which involves no stretching process; the superior thermal stability of MC-TIPS PP/PE would improve the safety characteristics of LIBs at elevated temperature [START_REF] Jeong | A novel poly(vinylidene fluoride-hexafluoropropylene) / poly(ethylene terephthalate) composite nonwoven separator with phase inversion-controlled microporous structure for a lithium-ion battery[END_REF] . The shutdown behavior was evaluated by monitoring the cell impedance during heating: when the temperature reaches the melting point of the PE layers, the pores close and the impedance rises sharply, cutting off the electrode reactions [START_REF] Ji | Temperature-responsive microspheres-coated separator for thermal shutdown protection of lithium ion batteries[END_REF] . With a further increase of the temperature, the impedance of the separators exhibits a rapid decline, which indicates that the separator shrinks or loses mechanical integrity and can no longer separate the electrodes. An impedance increase of approximately three orders of magnitude is large enough for a 'complete' shutdown; therefore 1000 ohm is selected as a general standard to define the efficacious shutdown window herein [START_REF] Venugopal | Characterization of microporous separators for lithium-ion batteries[END_REF] . The wider shutdown window of MC-TIPS PP/PE (129 o C-165 o C) compared with the commercial separators can be explained by its improved dimensional thermo-stability, as proved in Figure 4.5. This result indicates that, compared to the stretching method, separators prepared by this novel method avoid severe shrinkage and distortion at high temperature. The ionic mobility of the Li + ion was confirmed by the lithium ion transference number (t + ) estimated by chronoamperometry, as shown in Figure 4.7a. The t + was calculated as the ratio of the final (steady-state) to the initial current values after and before chronoamperometry.
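For reference, the simple current-ratio estimate of t + described above can be written as a one-line function. The chronoamperometry currents in the example are hypothetical, and interfacial-resistance corrections (as in the full Bruce-Vincent treatment) are deliberately omitted because the text uses the plain current ratio.

```python
def transference_number(i_initial_uA, i_steady_uA):
    """Li+ transference number estimated as the steady-state / initial current ratio
    (interfacial-resistance corrections neglected, as described in the text)."""
    return i_steady_uA / i_initial_uA

# Hypothetical chronoamperometry currents (uA) measured under a 10 mV step
print(transference_number(40.0, 19.2))   # -> 0.48
```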
The t + of MC-TIPS PP/PE (0.481) is larger than that of the Celgard ® separator (0.287), which indicates a larger effective ionic conductivity of Li + ions for MC-TIPS PP/PE than for the commercial separator and is attributed to the larger porosity and the liquid electrolyte retained in the pores [START_REF] Zhu | A trilayer poly(vinylidene fluoride)/polyborate/poly(vinylidene fluoride) gel polymer electrolyte with good performance for lithium ion batteries[END_REF] . The electrochemical stability of the separators, one of the most important features for their application in LIBs, was measured by the LSV method; MC-TIPS PP/PE shows a higher anodic stability limit versus Li + /Li than the Celgard ® separator. This enhanced electrochemical stability is attributed to the improved electrolyte affinity of MC-TIPS PP/PE, which can reduce the decomposition of solvent molecules on the cathode of the lithium ion battery [START_REF] Fu | Flexible, solid-state, ion-conducting membrane with 3D garnet nanofiber networks for lithium batteries[END_REF] . This result demonstrates that MC-TIPS PP/PE has a wider electrochemical window than the Celgard ® separator and is capable of serving as a high-voltage LIB separator. The cycling performance of cells assembled with the two separators was compared at a current density of 0.2 C (Figure 4.8a): the cell with MC-TIPS PP/PE delivers a slightly higher discharge capacity, and the gap between the two curves becomes larger with the cycle number. The difference can be explained by the higher liquid electrolyte affinity and higher ionic conductivity of MC-TIPS PP/PE, which wet the electrode materials more thoroughly [START_REF] Zhang | Porous poly(vinylidene fluoride-co-hexafluoropropylene) polymer membrane with sandwich-like architecture for highly safe lithium ion batteries[END_REF] , accordingly favoring intercalation and deintercalation of lithium ions at the cathode and resulting in a higher discharge capacity. Furthermore, the cellular-like and submicron-grade pore structure can help seal the electrolyte for a longer time, resulting in better reversibility.
The rate performance is also of great significance for LIBs. Figure 4.8b compares the rate behavior of both separators with C-rates increasing from 0.2 to 2.0 C every five cycles. It can be observed that the cell with MC-TIPS PP/PE presents larger discharge capacities over the various discharge current densities, suggesting higher cathode utilization and better discharge C-rate capability.
This improved rate capacity is attributed to the higher porosity and electrolyte uptake, which favor facile Li + transport and good electrolyte retention during cycling. The difference between the two separators becomes larger at higher current densities, where the influence of ionic transport on the ohmic polarization is more significant [START_REF] Jeong | A novel poly(vinylidene fluoride-hexafluoropropylene) / poly(ethylene terephthalate) composite nonwoven separator with phase inversion-controlled microporous structure for a lithium-ion battery[END_REF] . For both separators, the discharge capacities of the cells gradually decrease with increasing discharge current density, which reflects the higher energy loss resulting from fast ion motion and higher polarization at higher current densities [START_REF] Chi | Excellent rate capability and cycle life of Li metal batteries with ZrO 2 /POSS multilayer-assembled PE separators[END_REF] . When the current density is decreased back to 0.2 C, the capacities of both cells almost recover to their original values, which meets the requirements of LIBs. Such ideal cycle and rate performances make MC-TIPS PP/PE a promising candidate for use in LIBs with stable and enhanced performance. Moreover, the as-prepared separator exhibits higher ionic conductivity, a larger lithium ion transference number, and better battery performance than the commercial separator. Considering the aforementioned attractive features and the easily scaled-up preparation process, this separator is deemed to have great promise for application in high-safety lithium ion batteries.
Chapter 5 Preparation of Porous Polystyrene Membranes via Multilayer Coextrusion and Adsorption Performance of Polycyclic Aromatic Hydrocarbons
Introduction
As a ubiquitous class of organic compounds consisting of two or more condensed benzene rings and/or pentacyclic molecules, polycyclic aromatic hydrocarbons (PAHs) have been identified in a variety of waters and wastewaters [START_REF] Rubio-Clemente | Removal of polycyclic aromatic hydrocarbons in aqueous environment by chemical treatments: a review [J][END_REF] . The potential toxic, mutagenic, and carcinogenic properties of PAHs, together with their ability to bioaccumulate in aquatic organisms, make it urgent to reduce the level of PAHs in the aqueous environment [START_REF] Awoyemi | Understanding the adsorption of polycyclic aromatic hydrocarbons from aqueous phase onto activated carbon[END_REF] . A number of effective remediation techniques for aqueous environments containing PAHs have been investigated, including physical, biological, and phytoremediation processes [START_REF] Peng | Phytoremediation of phenanthrene by transgenic plants transformed with a naphthalene dioxygenase system from Pseudomonas[END_REF] , whereas most of these methods possess disadvantages such as high upfront investment costs, complicated operating procedures with high maintenance costs, and the release of harmful byproducts [START_REF] Shih | Chloramine mutagenesis in bacillus subtilis[END_REF] . Among the various methods, adsorption is found to be a promising technique due to its simplicity of design and operation, high efficiency, low investment and maintenance costs, and the absence of undesirable secondary products. In the past decades, many materials have been reported to be feasible for adsorbing PAHs in the aqueous environment, including zeolites [START_REF] Lemić | Competitive adsorption of polycyclic aromatic hydrocarbons on organo-zeolites [J][END_REF] , clays [START_REF] Osagie | Adsorption of naphthalene on clay and sandy soil from aqueous solution[END_REF] , plant residue materials [START_REF] Xi | Removal of polycyclic aromatic hydrocarbons from aqueous solution by raw and modified plant residue materials as biosorbents [J][END_REF] , and activated carbon [START_REF] Ge | Adsorption of naphthalene from aqueous solution on coal-based activated carbon modified by microwave induction: Microwave power effects[END_REF] , while most of these materials are less effective for trace PAHs in water. Some micro- or nano-scale adsorbents have been proved to capture trace water pollutants efficiently [START_REF] Wan | Can nonspecific host-guest interaction lead to highly specific encapsulation by a supramolecular nanocapsule?[END_REF] ,
however, their sizes are too small to allow separation from water. Thus it is essential to develop new materials that can adsorb trace PAHs and be easily separated from water; moreover, the adsorption method is particularly appealing when the adsorbent is low-priced and can be mass produced [START_REF] Pu | A porous styrenic material for the adsorption of polycyclic aromatic hydrocarbons[END_REF] .
According to the similar compatible principle (i.e., "like dissolves like"), adsorbents bearing aromatic rings are comparatively suitable materials for PAH adsorption. In our previous work, it was found that porous polystyrene (PS) bulk materials prepared via high internal phase emulsion polymerization are good candidates to deal with PAH contamination in water [START_REF] Simkevitz | Fabrication and analysis of porous shape memory polymer and nanocomposites[END_REF] . Among porous adsorption materials, porous membranes are preferable over bulk and powder materials since they possess a higher contact area with water and are much easier to separate from wastewaters.
Currently, the porous membranes can be fabricated through numerous methods, including foaming process , phase separation method [START_REF] Luo | Preparation of porous crosslinked polymers with different surface morphologies via chemically induced phase separation[END_REF] , electrospinning [START_REF] Li | Hydrophobic fibrous membranes with tunable porous structure for equilibrium of breathable and waterproof performance[END_REF] , top-down lithographic techniques [START_REF] Singamaneni | Instabilities and pattern transformation in periodic, porous elastoplastic solid coatings[END_REF] , breath figure method [START_REF] Srinivasarao | Three-dimensionally ordered array of air bubbles in a polymer film[END_REF] , template technique [START_REF] Zhou | Mango core inner shell membrane template-directed synthesis of porous ZnO films and their application for enzymatic glucose biosensor[END_REF] , and extrusion spinning process for hollow fiber membranes [START_REF] Zhao | Highly porous PVDF hollow fiber membranes for VMD application by applying a simultaneous co-extrusion spinning process[END_REF] . Among the above methods, the template technique has attracted much attention due to its relatively simple preparation process and tunable pore structure. With the help of the template method, the extrusion-blown molding seems a highly efficient way for the large-scale fabrication of porous membranes. However, the extrusion-blown molding is not suitable for the preparation of brittle PS or particle-embedded polymer membranes. The multilayer coextrusion represents an advanced polymer processing technique, which is capable of economically and efficiently producing films of multilayers with individual layer thickness varying from micron to nanoscale [START_REF] Armstrong | Co-extruded multilayer shape memory materials: Nano-scale phenomena[END_REF] .
In the present chapter, a novel strategy is proposed to prepare porous PS membranes with tunable porous structure via multilayer coextrusion combined with the template method, which is a highly efficient pathway for large-scale fabrication of porous PS membranes. In principle, this method is applicable to any melt-processable polymers in addition to PS. The potential application of the porous PS membranes in adsorbing PAHs is explored preliminarily. Pyrene, a representative PAH with medium molecular weight and moderate solubility in water, is selected as the model compound to explore the adsorption performance of the porous PS membranes on PAHs. The related adsorption kinetics and isotherms of the porous PS are also discussed.
Experimental section
Preparation of PS(CaCO 3 ) masterbatches
Prior to the multilayer coextrusion, CaCO 3 particles were pre-dispersed in PS via melt blending, with TMC 101 as the dispersing agent. CaCO 3 particles and PS with the ratio of M: (100-M) were mixed and put into the twin-screw extruder, then the PS(CaCO 3 ) masterbatches were obtained, with CaCO 3 content of M wt%.
Preparation of PS(CaCO 3 ) membranes
As shown in Figure 5.1a, LDPE and PS(CaCO 3 ) masterbatches were extruded from extruders A and B, respectively. These two melt streams were combined as two parallel layers and then flowed through a series of laminating-multiplying elements (LMEs), each of which doubled the number of layers, as shown in Figure 5.1b. An assembly of n LMEs thus produces a tape with 2^(n+1) layers.
In this study, the value of n was taken as 3, 4, and 5, and the total thickness of the multilayer film was fixed at 320 µm. As a result, the single layer thickness of PS(CaCO 3 ) membrane is about 20 µm, 10 µm, and 5 µm theoretically.
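The layer-doubling arithmetic quoted above can be verified with a few lines of code (a trivial check, not part of the experimental procedure):

```python
# Layer doubling in multilayer coextrusion: n LMEs -> 2**(n + 1) layers.
total_thickness_um = 320
for n in (3, 4, 5):
    layers = 2 ** (n + 1)
    print(n, layers, total_thickness_um / layers)   # 16 -> 20 um, 32 -> 10 um, 64 -> 5 um
```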
Preparation of porous PS membranes via acid etching method
The single-layer PS(CaCO 3 ) membranes can be separated mechanically from the multilayer structure of the LDPE/PS(CaCO 3 ) coextruded films. The PS(CaCO 3 ) membranes were then soaked in dilute hydrochloric acid solution (15 wt%) for a given etching time (denoted i h). The membranes were subsequently washed to remove the excess acid and dried for 24 h at 60 o C.
Characterization
The fractured surfaces of the porous membranes were observed with a field emission scanning electron microscope (FESEM, Philips XL30FEG). An optical microscope (Nanjing Kell Instrument Co., KEL-XMT-3100) was used to observe the layer-by-layer structure of the LDPE/PS(CaCO 3 ) coextruded films and the distribution of CaCO 3 in the single-layer PS(CaCO 3 ) membranes. Thermogravimetric analysis (TGA) was performed to determine the weight percentage of each material, using a Netzsch STA 449 C thermogravimetric analyzer at a heating rate of 20 o C min -1 from ambient temperature to 1000 o C under air flow. Fourier transform infrared spectroscopy (FTIR) (EQUINOX 55, Bruker Co.) was utilized to check whether the CaCO 3 templates could be etched by HCl. The porosity (ε) of the PS membranes was determined by the n-butanol soaking method, in which the weight of the membrane was measured before and after soaking in n-butanol for 4 h at room temperature, and calculated using the following equation,
$\varepsilon(\%) = \dfrac{W - W_0}{\rho_L V_0} \times 100\%$    (5-1)
where W and W 0 are the weights of the n-butanol-soaked and dry membranes, respectively; ρ L is the density of n-butanol, and V 0 is the geometric volume of the membranes [START_REF] Ye | Hierarchical three-dimensional micro/nano-architecture of polyaniline nanowires wrapped-on polyimide nanofibers for high performance lithium-ion battery separators[END_REF] .
The adsorption performance tests were carried out using pyrene as the model PAH compound. Typically, a large stock of pyrene-contaminated solution (130 ppb) was prepared in phosphate buffer (0.01 M, pH = 7.4). A given amount of PS membrane was added to the pyrene solution under gentle shaking for 20 h. After removal of the solid adsorbent by centrifugation or filtration, the solution was subjected to fluorescence detection on a Thermo Scientific Lumina fluorescence spectrometer (Hitachi F-2700). The λ ex was set at 335 nm and the emission within 350-550 nm was recorded. The residual concentration of pyrene was determined from the fluorescence intensity-concentration calibration curve.
Results and discussion
When the membrane thickness is reduced to less than 5 µm, the membrane becomes fragile and is easily broken; thus the optimal thickness of the membrane is 5 µm. The above results demonstrate the influence of the membrane thickness on the pore structure. With decreasing membrane thickness, the ratio of the diameter of the CaCO 3 template to the thickness of the membrane increases. The proportion of particles near the membrane surface also increases, which is beneficial for the acid etching and helps the acid penetrate into the internal region of the membranes.
However, when the thickness is reduced to less than 5 µm, the mechanical properties of the membrane deteriorate. The CaCO 3 content also influences the pore structure: the CaCO 3 particles are more likely to be in contact with each other at higher CaCO 3 content, which is beneficial for the hydrochloric acid to penetrate into the membrane, conduct the double decomposition reaction with CaCO 3 , and form the porous structure. However, when the CaCO 3 content increases to 60 wt%, the CaCO 3 particles tend to agglomerate and act as structural defects in the PS membranes, which leads to stress concentration and reduces mechanical properties such as strength and tenacity. The porosity of the PS membranes plays an important role in their adsorption of PAHs.
The porosity of membranes prepared with different etching times, CaCO 3 contents, and membrane thicknesses was measured, as shown in Table 5.1. The porosity is positively correlated with the etching time and the initial CaCO 3 content, and negatively correlated with the thickness of the membranes, which is consistent with the analysis of the SEM images, the through-hole structure test and the TGA curves. It can be concluded that the porous membranes prepared with a CaCO 3 content of 40 wt% and a thickness of 5 µm, etched for 48 h, possess the optimal through-hole structure.
Adsorption of porous PS membranes on pyrene
By comparing the fluorescence intensity of the pyrene solution before and after adsorption on the porous membranes (6 # ) for 12 h, it can be found that the fluorescence intensity decreases with increasing amount of membrane, as shown in Figure 5.9a. The adsorption reaches equilibrium when the amount of porous PS membrane is larger than 0.8 g L -1 . The residual concentration of the pyrene solution after adsorption can be calculated from the fluorescence intensities according to the standard curve, and the adsorption kinetic curve at the wavelength of 370 nm is plotted in Figure 5.9b. From the comparison of the solution concentrations, the minimum residual concentration of pyrene in the aqueous solution is only 9.5 ppb, which is much lower than the initial value (130 ppb). The saturated adsorption capacity of the porous PS membrane (6 # ) can also be calculated from Figure 5.9b, with a value of 0.584 mg g -1 , which is considerably higher than that of many other adsorption materials. For example, the saturated adsorption capacity of a dendritic-amphiphile-mediated porous monolith for pyrene is reported to be 0.2 mg g -1 [START_REF] Ye | Dendritic amphiphile mediated porous monolith for eliminating organic micropollutants from water [J][END_REF] , which is much lower than the value in the current work. In the following part, the ratio of the membrane mass to the volume of the aqueous solution was chosen to be 0.8 g L -1 , and the initial concentration of pyrene was 0.13 mg L -1 (130 ppb). For comparison, the adsorption properties of the membranes prepared with different membrane thicknesses, etching times, and CaCO 3 contents (from 1 # to 6 # ) were all examined. From the declining trend of the fluorescence intensity with increasing time plotted in Figures 5.10a to 5.10f, it can be clearly observed that the declining rate of 6 # is the highest, followed by 5 # , 4 # , 3 # , 2 # , and 1 # . Based on the above fluorescence curves, the variation of the residual concentration of pyrene versus the adsorption time (at the wavelength of 370 nm) was calculated according to the standard curve and plotted in Figure 5.10g. It can be observed that the concentration of pyrene adsorbed separately by the six groups of membranes descends rapidly during the first hour, while the gap between the curves becomes larger as time increases. The adsorption equilibrium time is similar for all membranes, about 8 h, while the final residual concentrations of pyrene are quite different. The order of the adsorption performance of the membranes is 6 # > 5 # > 4 # > 3 # > 2 # > 1 # , which is in line with the order of the porosity. This illustrates that the porous membranes have better adsorption capability owing to their higher porosity. The possible reason is that the abundant pore structure and surface area increase the contact probability between the pyrene molecules and the inner walls of the porous membranes. As a result, more pyrene molecules can be adsorbed by the inner walls of the porous membranes with higher porosity according to the collision mechanism. Besides, according to the similar compatible principle, both pyrene and PS have aromatic ring structures, which is beneficial for the pyrene adsorption of the PS membranes. Theoretically, it can be concluded that such porous membranes can also be used to adsorb other PAHs with aromatic rings.
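A minimal sketch of the data reduction described above (converting fluorescence intensity to residual concentration through a linear calibration curve and then to an adsorbed amount) is given below. The calibration points, intensities and dosage are hypothetical placeholders and are not intended to reproduce the values reported in this work.

```python
import numpy as np

# Hypothetical linear calibration: intensity at 370 nm versus pyrene concentration (ppb)
cal_conc_ppb = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 130.0])
cal_intensity = np.array([3.0, 128.0, 255.0, 377.0, 505.0, 652.0])
slope, intercept = np.polyfit(cal_conc_ppb, cal_intensity, 1)

def residual_conc_ppb(intensity):
    """Invert the linear calibration curve."""
    return (intensity - intercept) / slope

def adsorbed_amount_mg_per_g(c0_ppb, ce_ppb, dose_g_per_L):
    """Q = (C0 - Ce) / dose, with 1 ppb = 1 ug/L, i.e. 1e-3 mg/L."""
    return (c0_ppb - ce_ppb) * 1e-3 / dose_g_per_L

ce = residual_conc_ppb(55.0)                     # hypothetical post-adsorption intensity
print(ce, adsorbed_amount_mg_per_g(130.0, ce, 0.8))
```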
To further understand the adsorption process, the adsorption kinetics is investigated according to two models, the pseudo-first- and pseudo-second-order equations. In this part, membrane 1 # represents the solid PS membranes and membrane 6 # stands for the porous PS membranes. The pseudo-first-order equation can be expressed as,
$\dfrac{dQ_t}{dt} = k_1 \left(Q_{eq} - Q_t\right)$    (5-2)
where Q eq and Q t (mg g -1 ) stand for the amounts of pyrene adsorbed at the equilibrium state and at time t, respectively; k 1 is the pseudo-first-order rate constant [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF] . After definite integration, equation (5-2) turns into,
$\ln\!\left(Q_{eq} - Q_t\right) = \ln Q_{eq} - k_1 t$    (5-3)
Theoretically, the plot of ln(Q eq - Q t ) versus t should give a straight line of slope -k 1 . The correlation coefficients for membrane 1 # (solid) and 6 # (porous) are 0.94 and 0.95, respectively, as shown in Figure 5.11a. Although these correlation coefficients are high, the quality of the linear fit does not reach the expected level. Thus, the pseudo-second-order equation is introduced as below,
$\dfrac{dQ_t}{dt} = k_2 \left(Q_{eq} - Q_t\right)^2$    (5-4)
Integrating the above equation by applying initial conditions, equation (5-4) becomes,
$\dfrac{t}{Q_t} = \dfrac{1}{k_2 Q_{eq}^2} + \dfrac{t}{Q_{eq}}$    (5-5)
As shown in Figure 5.11b, the two linear plots of t/Q t against t give higher correlation coefficients, 0.96 and 0.99 for membrane 1 # (solid) and 6 # (porous), respectively. Especially for the porous PS membranes, the linear fit agrees well with the pseudo-second-order model.
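The two linearized kinetic fits of equations (5-3) and (5-5) can be reproduced with a short script such as the sketch below; the uptake data are hypothetical and serve only to show how k 1 , k 2 and Q eq are extracted from the linear regressions.

```python
import numpy as np

# Hypothetical uptake data Q_t (mg g-1) at times t (h)
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
q_t = np.array([0.21, 0.32, 0.42, 0.50, 0.55, 0.57, 0.58, 0.58])
q_eq = q_t[-1]

# Pseudo-first-order, eq. (5-3): ln(Qeq - Qt) = ln(Qeq) - k1 * t
mask = q_t < q_eq                                   # keep points with Qeq - Qt > 0
slope1, _ = np.polyfit(t[mask], np.log(q_eq - q_t[mask]), 1)
k1 = -slope1

# Pseudo-second-order, eq. (5-5): t/Qt = 1/(k2 * Qeq**2) + t/Qeq
slope2, intercept2 = np.polyfit(t, t / q_t, 1)
q_eq_fit = 1.0 / slope2
k2 = 1.0 / (intercept2 * q_eq_fit ** 2)

print(k1, k2, q_eq_fit)
```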
Adsorption isotherms
The way adsorbates interact with adsorbents is generally described by the adsorption isotherm, which is of vital importance to understand the mechanism of adsorption and to design an adsorption system meeting the necessary requirements. Several isotherm models can be found in the literature [START_REF] Srinivasan | Decolorization of dye wastewaters by biosorbents: a review [J][END_REF] . This study used Freundlich and Langmuir isotherm models, which have been widely adopted in the adsorption isotherm studies, to fit the experimental data for pyrene adsorbed by porous PS membranes (6 # ). Figure 5.12a shows the adsorption isotherm for pyrene at equilibrium.
The linearized form of the Freundlich model can be described as follows [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF] ,
$\ln Q_{eq} = \ln K_F + b_F \ln C_{eq}$    (5-6)
where K F is the Freundlich constant; C eq (mg L -1 ) is the equilibrium concentration of pyrene in the solution; b F is a constant depicting the adsorption intensity. The plot of lnQ eq versus lnC eq is shown in Figure 5.12b; the good linear fit indicates that the adsorption of pyrene on the porous PS membranes closely obeys the Freundlich model. In contrast, the assumption of the Langmuir theory is that the adsorption takes place equivalently at specific homogeneous sites within the adsorbent [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF] and that no interaction among adsorbate molecules exists. The Langmuir isotherm is expressed as below,
$\dfrac{C_{eq}}{Q_{eq}} = \dfrac{1}{K_L Q_{max}} + \dfrac{C_{eq}}{Q_{max}}$    (5-7)
where Q max (mg g -1 ) is the maximum capacity of the adsorbent and K L (L mg -1 ) is the Langmuir adsorption constant [START_REF] Zhou | Magnetic dendritic materials for highly efficient adsorption of dyes and drugs [J][END_REF] . The experimental values of C eq /Q eq against C eq are shown in Figure 5.12c.
As can be found, the correlation coefficient of the Langmuir fit for pyrene adsorption is lower than that of the Freundlich fit, indicating that the adsorption on the porous PS membranes is better described by the Freundlich isotherm.
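A minimal fitting sketch for the linearized Freundlich (5-6) and Langmuir (5-7) models is given below; the equilibrium data are hypothetical, and the script simply returns the fitted constants and correlation coefficients used for the comparison discussed above.

```python
import numpy as np

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical equilibrium data: Ce (mg L-1) and Qe (mg g-1)
c_eq = np.array([0.005, 0.010, 0.020, 0.040, 0.080])
q_eq = np.array([0.12, 0.20, 0.31, 0.46, 0.66])

# Freundlich, eq. (5-6): ln Qe = ln KF + bF * ln Ce
b_f, ln_kf = np.polyfit(np.log(c_eq), np.log(q_eq), 1)
r2_freundlich = r_squared(np.log(q_eq), b_f * np.log(c_eq) + ln_kf)

# Langmuir, eq. (5-7): Ce/Qe = 1/(KL * Qmax) + Ce/Qmax
slope, intercept = np.polyfit(c_eq, c_eq / q_eq, 1)
q_max, k_l = 1.0 / slope, slope / intercept
r2_langmuir = r_squared(c_eq / q_eq, slope * c_eq + intercept)

print(np.exp(ln_kf), b_f, r2_freundlich)
print(q_max, k_l, r2_langmuir)
```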
The above results show that the porous PS membranes with higher porosity exhibit much higher adsorption performance for pyrene in dilute aqueous solution than the PS membranes with lower porosity. This is attributed to the abundant pore structure of the membrane and to the similar compatible principle between the membrane surface and the adsorbates. The adsorption kinetics and isotherm of the porous PS membranes were found to follow pseudo-second-order kinetics and the Freundlich isotherm model, respectively. Considering the continuous fabrication process and the good adsorption properties of the porous PS membranes, it is believed that this fascinating adsorbent can be a promising candidate for the adsorption of PAHs from wastewater and industrial effluents. Porous media are of scientific and technological interest because of the wide spectrum of applications they have attained during the past decades [START_REF] Coutelieris | Transport processes in porous media [M[END_REF] . Various methods have been used for the design of porous media, such as the foaming process, template technique, sol-gel, hydrothermal synthesis, precipitation, chemical etching methods and photolithographic techniques [START_REF] Zawadzki | Hydrothermal synthesis of nanoporous zinc aluminate with high surface area[END_REF] . Several types of porous media, including porous polymers, organic-inorganic hybrid porous materials, and porous carbon aerogels, have been successfully synthesized in our previous works [START_REF] Li | Nitric acid activated carbon aerogels for supercapacitors[END_REF] . These porous materials are used for many purposes, namely for water filtration and wastewater treatment, which are the focus of this study.
Moreover, particle transport and deposition processes in porous media are of great technological and industrial interest since they are useful in many engineering applications and fundamentals including contaminant dissemination, filtration, chromatographic separation and remediation processes [START_REF] Scozzari | Water security in the Mediterranean region: an international evaluation of management, control, and governance approaches[END_REF][START_REF] Sefrioui | Numerical simulation of retention and release of colloids in porous media at the pore scale[END_REF] . To characterize these processes, numerical simulations have become increasingly attractive due to growing computer capacity and calculation facilities offering an interesting alternative, especially to complex in situ experiments [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF][START_REF] Long | Pore-scale study of the collector efficiency of nanoparticles in packings of nonspherical collectors[END_REF][START_REF] Long | A Correlation for the collector efficiency of brownian particles in clean-bed filtration in sphere packings by a Lattice-Boltzmann method[END_REF] . Basically, there are two types of simulation methods, namely macro-scale simulations and micro-scale (pore scale) simulations.
Macro-scale simulations describe the overall behavior of the transport and deposition process by solving a set of differential equations that gives spatial and temporal variation of particles concentration in the porous sample without providing any information regarding the nature or mechanism of the retention process [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] . Micro-scale (pore scale) numerical simulations directly solve the Navier-Stokes or Stokes equation to compute the flow and model particle diffusion processes by random walk for example [START_REF] Xiao | Pore-scale simulation frameworks for flow and transport in complex porous media[END_REF] . Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] carried out micro-scale numerical simulations of colloidal particles deposition onto the surface of a simple pore geometry consisting of two parallel planar surfaces. Messina et al. [START_REF] Messina | Microscale simulation of nanoparticles transport in porous media for groundwater remediation[END_REF] used micro-scale simulations to estimate the particle attachment efficiency.
The objective of the current work is to simulate the transport and deposition of particles in porous media at the micro-scale by means of CFD simulations in a simple way, in order to obtain the most relevant quantities while capturing the physics underlying the process. The main idea here is to revisit the work of Lopez and co-authors [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] by considering a more realistic 3D geometry, namely a pipe. Indeed, their work was restricted to a slot-like geometry unlikely to be encountered in real porous media.
6.2 Methodology and tools
Methodology
The present work focuses on three-dimensional numerical modeling of the transport of particles in a pipe, since porous media are usually represented as a bundle of capillaries. This is done by coupling two available software packages. First, the velocity field is obtained by means of the popular OpenFOAM ® (Open Field Operation and Manipulation) shareware, and secondly, particle tracking is performed with software written in Python ® .
The particle-pore wall interaction is considered purely attractive while particle-particle interaction is purely repulsive. Particles are injected sequentially at a random initial position at the inlet of the pipe and their center of mass is tracked until either they reach the outlet of the domain or are deposited onto the pipe surface. The particles' mean diameter and flow conditions are chosen in a way that the particle's Reynolds number is sufficiently small so that the particles can be treated as a mass point. However, once a particle is deposited, an equivalent volume surrounding the deposition location is set to be solid on the pore surface. Then the new velocity field is recalculated to take into account the influence of the presence of the deposited particle on the flow and a new particle is then injected. The injection process is repeated until particle deposition probability vanishes.
Determining the flow field in OpenFOAM shareware
The hydrodynamic model is solved with OpenFOAM ® package, which is a versatile equations' solver and can be used to solve different kinds of differential equations. The velocity field is obtained by solving the Stokes and continuity equations. No-slip boundary condition is applied on the pore wall, and the pressure at the inlet and outlet are set to fixed values. It should be noticed that the hydrodynamic model is used to generate input flow data for the particle transport model.
Lagrangian particle tracking in Python ®
Injected particles are tracked using a Lagrangian method. Three situations may occur: (i) the particle is adsorbed onto the solid wall when it approaches it closely, if a free surface is available for deposition; (ii) the particle leaves the domain and never comes close to the wall surface; (iii) the particle comes close to the solid surface but the deposition site is occupied by another particle, which repels it back into the bulk flow. The velocity at every node is the vector sum of the interpolated convection velocity V int (obtained from OpenFOAM) and the Brownian diffusion velocity V diff :
$V = V_{int} + V_{diff}$    (6-1)
where V int is interpolated from the computed flow field at the particle position, and the Brownian diffusion coefficient D is given by the Stokes-Einstein relation, $D = \dfrac{k_B T}{6 \pi \mu a_p}$    (6-2), where k B is the Boltzmann constant, T the absolute temperature, µ the dynamic viscosity of the suspending fluid and a p the particle radius. The diffusion velocity V diff is related to the diffusion coefficient D through the following relationship:
$V_{diff} = \sqrt{D / t_r}$    (6-3)
where t r is the referential time [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] ,
$t_r = \dfrac{\zeta}{2 u_{max}}$    (6-4)
In equation (6-4), ζ stands for the characteristic mesh size and u max is the maximum velocity along the mean flow axis. New positions of the moving particle are obtained by adding to the old position vector the sum of the diffusion velocity and the convection velocity multiplied by the referential time [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] :
$X_{new} = X_{old} + V t_r$    (6-5)
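A schematic single-particle update following equations (6-1)-(6-5) might look like the sketch below. It is a simplified stand-in, not the code used in this work: the interpolated OpenFOAM velocity is replaced by a fixed placeholder vector, the diffusion velocity magnitude follows equation (6-3) as reconstructed above, and the random isotropic direction of the Brownian step is an assumption, since the text does not detail how the direction is drawn.

```python
import numpy as np

K_B = 1.380649e-23                                  # Boltzmann constant, J/K

def stokes_einstein_D(T_kelvin, mu_pa_s, a_p_m):
    """Brownian diffusion coefficient, eq. (6-2)."""
    return K_B * T_kelvin / (6.0 * np.pi * mu_pa_s * a_p_m)

def tracking_step(x, v_int, D, zeta, u_max, rng):
    """One Lagrangian step: x_new = x_old + (V_int + V_diff) * t_r, eqs (6-1), (6-3)-(6-5)."""
    t_r = zeta / (2.0 * u_max)                      # referential time, eq. (6-4)
    v_diff_mag = np.sqrt(D / t_r)                   # diffusion velocity scale, eq. (6-3) as reconstructed
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)          # assumed isotropic random direction
    return x + (v_int + v_diff_mag * direction) * t_r

rng = np.random.default_rng(0)
a_p = 0.5e-6                                        # particle radius (m), illustrative
D = stokes_einstein_D(293.0, 1.0e-3, a_p)
x = np.zeros(3)
v_int = np.array([0.0, 0.0, 1.0e-4])                # placeholder for the interpolated velocity (m/s)
for _ in range(5):
    x = tracking_step(x, v_int, D, zeta=1.0e-6, u_max=2.0e-4, rng=rng)
print(x, "Pe =", 1.0e-4 * a_p / D)                  # Peclet number, eq. (6-6)
```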
Simulation parameters
The parameters used in the simulations are summarized in Table 6.1. Following a sensitivity analysis, a mesh of 50 × 50 × 40 cells is chosen for all simulations, with mesh refinement in the vicinity of the solid walls.
The Péclet number (Pe) is a dimensionless number that is relevant for the study of transport phenomena in colloidal dispersions. Here, it is defined as the ratio of the rate of advection to the rate of particle diffusion:
$Pe = \dfrac{\langle u \rangle a_p}{D}$    (6-6)
where ⟨u⟩ is the average convection velocity along the mean flow axis.
Results and discussion
For the diffusion-dominant regime at low Péclet number, the deposition probability is high for a small number of injected particles and decreases slowly as more particles are injected. This is due to the fact that, for a low number of injected particles, the wall surface is free and the injected particles easily find a place to deposit. When the number of adsorbed particles reaches a certain value, the deposition probability drops sharply to reach lower values for higher numbers of injected particles. When this sharp transition is approached, the wall surface is already covered by many adsorbed particles and further injected particles have a much lower probability to deposit. For high Péclet numbers, where the transport is dominated by convection, the available surface for adsorption is potentially lower than for the diffusion-dominant regime. Indeed, there is a larger exclusion surface around already deposited particles due to the hydrodynamic shadowing effect. This leads to lower values of the deposition probability [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] . The surface coverage (Γ) is defined as the ratio of the total projected area of the deposited particles to the pipe surface area. Moreover, it is well known that for the pure diffusion regime on a flat surface, using the Random Sequential Adsorption (RSA) model, the maximum surface coverage Γ RSA is found to be close to 0.546 [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Talbot | From car parking to protein adsorption an overview of sequential adsorption processes[END_REF] . The surface coverage Γ is therefore made dimensionless using this value.
The dependence of Γ/Γ RSA on the number of injected particles for different Péclet numbers is shown in Figure 6.2. As expected, the surface coverage increases with the number of injected particles and decreases with the Péclet number. For the most diffusive case considered here, Pe = 0.0015, the surface coverage reaches values close to 0.95 Γ RSA , while for the largest Péclet number, Pe = 150, corresponding to a fully convection-dominant regime, it is as low as 0.22 Γ RSA . We must recall that the surface coverage is an important parameter because it is linked to the performance of the porous medium when it is used for water filtration, i.e. for the removal of solid particles or micro-organisms. For these applications, a high value of the surface coverage is therefore sought.
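For orientation, the surface coverage defined above can be evaluated as in the short sketch below; the pipe dimensions, particle radius and number of deposited particles are illustrative values, not those of Table 6.1.

```python
import math

GAMMA_RSA = 0.546                                   # RSA jamming limit on a flat surface

def surface_coverage(n_deposited, a_p_m, pipe_radius_m, pipe_length_m):
    """Gamma: projected area of deposited spheres over the inner pipe surface area."""
    return n_deposited * math.pi * a_p_m ** 2 / (2.0 * math.pi * pipe_radius_m * pipe_length_m)

gamma = surface_coverage(2400, 0.5e-6, 20e-6, 200e-6)   # illustrative values
print(gamma, gamma / GAMMA_RSA)
```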
In Figure 6.3, the geometry and the adsorbed particles are presented after the injection of 3000 particles for the two extreme Pe numbers (Pe = 0.0015 and Pe = 150). As can be seen, for the higher Pe the number of deposited particles is much smaller than that obtained for the lower Pe, which is a visual illustration of the analysis made above. In this study, the permeability is calculated by:
K = ε μ ū L / Δp   (6-7)
where ū is the average convection velocity along the mean flow axis, μ is the dynamic viscosity of the injected fluid, L is the length of the pipe, ε is the porosity of the pipe, and Δp is the pressure drop between the inlet and the outlet face. The permeability reduction factor R_k is defined as the ratio of K to K_Γ, where K and K_Γ stand for the permeability before and after deposition, respectively. K is constant for all Pe and equal to 1.51×10⁻¹² m². The variation of R_k versus the number of injected particles is plotted in Figure 6.5a for each value of Pe. The permeability decreases with the number of injected particles, and the trend is more pronounced for diffusion-dominant regimes (smaller Péclet numbers).
Knowing the R k values, one can estimate the hydrodynamic thickness of deposited layer using Poiseuille's law:
δ = R (1 - R_k^(-1/4))   (6-8)
where R is the initial pore radius. In this study, the ratio δ/(2a_p) of the final hydrodynamic thickness to the particle diameter is plotted versus Pe in Figure 6.5b for comparison with experimental work and with previous simulation results. For Pe < 1, the curve shows a relatively stable plateau in our work as well as in the simulation results of Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] In particular, for the diffusion-dominant regime (Pe = 0.0015), the surface coverage is very close to the RSA limit and δ/(2a_p) reaches an upper limit of 0.87, a value quite close to the outcome obtained by S. Buret and co-authors, who used a simple modified Poiseuille's law with the average hydrodynamic thickness of the deposition layer equal to 0.9 times the droplet diameter. Remember that a large value of δ/(2a_p) corresponds to a high number of deposited particles. For Pe > 1, the experimental and simulation data show that δ/(2a_p) is a decreasing function of flow strength. The horizontal shifts among the three groups of data may be explained by the differences in the geometries of the porous media as well as in the calculation methods for the parameters [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Chauveteau | Physics and modeling of permeability damage induced by particle deposition [C]. SPE formation damage control conference[END_REF][START_REF] Nabzar | A new model for formation damage by particle retention[END_REF] .
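As an illustration of equations (6-7) and (6-8), the following minimal Python sketch computes the permeability, the permeability reduction factor R_k and the resulting hydrodynamic thickness of the deposit; the input values are purely illustrative and the function names are ours, not those of the simulation code.

def permeability(u_mean, mu, length, porosity, dp):
    """Permeability K = eps * mu * u_mean * L / dp, equation (6-7)."""
    return porosity * mu * u_mean * length / dp

def hydrodynamic_thickness(radius, r_k):
    """Deposit thickness from Poiseuille's law, equation (6-8): delta = R (1 - R_k**(-1/4))."""
    return radius * (1.0 - r_k ** (-0.25))

# Illustrative values (not actual simulation outputs)
k_clean = permeability(u_mean=1.0e-3, mu=1.0e-3, length=1.5e-5, porosity=1.0, dp=10.0)
k_fouled = 0.8 * k_clean              # assume a 20 % permeability loss after deposition
r_k = k_clean / k_fouled              # permeability reduction factor R_k = K / K_Gamma
delta = hydrodynamic_thickness(radius=2.0e-6, r_k=r_k)
print(r_k, delta / (2 * 0.2e-6))      # thickness scaled by the particle diameter 2 a_p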
Figure 6.5 Variation of the permeability reduction factor versus N at various Pe (a); hydrodynamic thickness δ/(2a_p) of the deposited layer versus Pe (b) [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF]

6.4 Conclusions

Numerical simulations of the deposition of colloidal particles onto porous media of simple geometry under flow have been carried out by coupling two pieces of software: the open-source OpenFOAM® package is used to obtain the velocity field by solving the Stokes and continuity equations, and a code developed in this work in the Python® programming language is used for the particle tracking process. Important quantities such as the deposition probability, surface coverage, porosity and permeability were calculated during the simulations. The variations of these quantities versus the number of injected particles for different Péclet numbers were examined. Preliminary results were analyzed and some conclusions may be drawn. The deposition probability decreases with the number of injected particles and with the Péclet number. At low Péclet numbers, the surface coverage Γ is shown to closely approach the RSA value, and it drops noticeably for high Pe values.
Both the porosity and the permeability decrease with the number of deposited particles. At lower Pe numbers, the final hydrodynamic thickness of the deposit layer is lower than the particle diameter
showing the formation of a loose monolayer deposit and it decreases for higher Pe. The above results, though they need to be consolidated, are consistent with theoretical predictions, demonstrating that the numerical method used is relevant to describe deposition of colloids in porous media from dilute dispersions. Future developments of this work will include a larger range of the related parameters on the one hand and more realistic pore geometries on the other hand.
consisting in solving the advection-dispersion equation containing one or two source terms representing adsorption and desorption isotherms [START_REF] Sasidharan | Coupled effects of hydrodynamic and solution chemistry on long-term nanoparticle transport and deposition in saturated porous media[END_REF] .
Risbud and Drazer [START_REF] Risbud | Trajectory and distribution of suspended non-Brownian particles moving past a fixed spherical or cylindrical obstacle[END_REF] have considered the case of non-Brownian particles moving past a spherical or a cylindrical collector in Stokes regime by focusing on the distribution of particles around the obstacle and the minimum particle-obstacle distance attained during particle motion.
They show that very small surface-to-surface separation distances would be common during the motion highlighting that short-range non-hydrodynamic interactions may have a great impact during particle motion.
Unni and Yang [START_REF] Unni | Brownian dynamics simulation and experimental study of colloidal particle deposition in a microchannel flow[END_REF] have experimentally investigated colloid deposition in a parallel-plate flow cell by means of direct videomicroscopic observation. They focused on the influence of the flow's
Reynolds number, physico-chemical conditions and particle size on surface coverage. To simulate deposition therein, the Langevin equation, particle-particle and particle-wall hydrodynamic interactions together with the DLVO theory were used. Agreement between experimental and simulation results was observed for flow Reynolds numbers between 20 and 60.
In filtration processes, the porous medium is usually assumed to be composed of unit bed elements, each containing a given number of unit cells whose shape is cylindrical with a constant or varying cross section. Chang et al. [START_REF] Chang | Prediction of Brownian particle deposition in porous media using the constricted tube model[END_REF] have used Brownian dynamics simulation to investigate the deposition of Brownian particles in model parabolic, hyperbolic and sinusoidal constricted tubes. Here again, the Langevin equation with corrected hydrodynamic particle/wall interactions and DLVO interactions was solved to obtain the particle trajectories. The single-collector efficiency, which describes the initial deposition rate, was then evaluated for each geometry at various Reynolds numbers.
A more realistic porous medium geometry for the study of transport and deposition is a bed of packed collectors of a given shape. Boccardo et al. [START_REF] Boccardo | Microscale simulation of particle deposition in porous media[END_REF] have numerically investigated the deposition of colloidal particles under favorable conditions in 2D porous media composed of grains of regular and irregular shapes. For that purpose they solved the Navier-Stokes equations together with the advection-dispersion equation. Even though particle trajectories cannot be specified with this approach, it was possible to determine how neighboring grains mutually influence their collection rates. They showed that the Brownian attachment efficiency deviates appreciably from the single-collector case.
Similarly, Coutelieris et al. [START_REF] Coutelieris | Low Peclet mass transport in assemblages of spherical particles for two different adsorption mechanisms[END_REF] have considered flow and deposition in a stochastically constructed 3D spherical grain assemblage, focusing on the dependence of the capture efficiency at low to moderate Péclet numbers, and found that the well-known sphere-in-cell model remains applicable provided that the right porous medium properties are taken into account. Nevertheless, the scanned porosities were too close to unity to be representative of actual porous media. In other works, Lagrangian approaches are used to track particle displacement in 3D packed beds, allowing the study of microscale transport and deposition of colloidal particles. For that purpose, Lattice-Boltzmann techniques are used to calculate the hydrodynamic and Brownian forces acting on moving particles with a local evaluation of the physico-chemical interaction potential, showing that hydrodynamic retardation reduces the kinetics of deposition in the secondary minimum under unfavorable conditions [START_REF] Coutelieris | Low Peclet mass transport in assemblages of spherical particles for two different adsorption mechanisms[END_REF] and evidencing particle retention in flow vortices [START_REF] Gao | Three-dimensional microscale flow simulation and colloid transport modeling in saturated soil porous media[END_REF] .
In most of these simulation approaches, the characteristic size of the flow domain is many orders of magnitude greater than the particle size, so that the jamming ratio (the ratio of the characteristic size of the flow domain to the particle size) is high enough to consider that the initial flow domain remains unaffected by particle deposition. For many systems, however, the jamming ratio may be low and, even though the straining phenomenon is negligible, colloid deposition should greatly impact the flow structure and strength and therefore the particle deposition process. In the present work we focus on such an impact and simulate colloid deposition in porous media under favorable deposition conditions by adopting the unit bed approach, where the unit cell is a constricted tube with two converging-diverging forms, i.e. a tapered pipe and a venturi-like tube. To balance the inherent rise of the simulation cost, we restrict this study to dilute colloidal suspensions, where hydrodynamic interactions between flowing particles are negligible, and adopt a simple approach that is detailed hereafter together with a novel 3D-PTPO (Three-Dimensional Particle Tracking model by Python® and OpenFOAM®) code developed in our laboratory. We mainly focus on the deposition probability, the spatial distribution of deposited particles and the surface coverage as functions of the flow strength through the particle Péclet number.
Numerical simulations
Porous media are often considered as a bundle of capillaries; therefore, once particle transport and deposition have been simulated in one capillary, the process in the whole porous medium can be predicted with suitable boundary conditions imposed between capillaries [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] . The simplest model consists in representing the porous medium as a series of parallel capillaries of circular cross section whose mean radius is deduced from the porous medium permeability k and porosity ε. As this is a crude representation of the pore geometry, the present work deals with three-dimensional numerical modeling of the transport and deposition of particles in capillaries with converging/diverging geometries (Figure 7.1), which are believed to be more realistic pore shapes. The two pore geometries considered have a length of 15 μm, a volume of 753 μm³, and the pore-body radius (R_B) to pore-throat radius (R_T) ratio is chosen to be 1.5 [START_REF] Malvault | Numerical simulation of yield stress fluids flow in capillaries bundle: Influence of the form and the axial variation of the cross-section[END_REF] . The transported particles have a radius a_p of 0.2 µm.
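To make the two pore shapes concrete, the short Python sketch below builds illustrative axial radius profiles R(z) for a tapered pipe and a venturi-like tube with R_B/R_T = 1.5; the throat radius, its position and the exact profile are assumptions chosen for illustration and are not necessarily those of the meshes actually used in the simulations.

import numpy as np

L = 15e-6           # pore length (m)
R_T = 2.0e-6        # pore-throat radius (m), illustrative value
R_B = 1.5 * R_T     # pore-body radius, so that R_B / R_T = 1.5

def tapered_radius(z):
    """Linearly converging pipe: radius R_B at the inlet, R_T at the outlet."""
    return R_B + (R_T - R_B) * z / L

def venturi_radius(z, z1=0.4 * L, z2=0.6 * L):
    """Body radius R_B everywhere except a straight throat of radius R_T between z1 and z2."""
    return np.where((z >= z1) & (z <= z2), R_T, R_B)

z = np.linspace(0.0, L, 16)
print(tapered_radius(z))
print(venturi_radius(z))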
A sensitivity analysis was undertaken to explore the effect of the mesh number on the accuracy of our results by varying the number of grid blocks used for the numerical simulations. The analysis was first based on the flow field for both types of capillary tubes before any particle deposition. Moreover, the deposition probability and the distribution of deposited particles were compared for various mesh numbers going from 80000 to 160000. The results indicate that 80000 grid blocks are sufficient for our computations.
Hypothesis of this problem
For the numerical simulations in this chapter, the following assumptions are adopted [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Dí | Steady state numerical simulation of the particle collection efficiency of a new urban sustainable gravity settler using design of experiments by FVM[END_REF] :
1. The fluid is Newtonian and incompressible and the flow is creeping.
2. The particle Reynolds number (based on the fluid density and dynamic viscosity, the particle radius a_p and the mean velocity at the pore throat under clean-bed conditions) is small enough so that the particles can be treated as mass points during their transport.
3. The particle-pore wall physico-chemical interaction is considered purely attractive and the particle-particle interaction purely repulsive.
4. Deposition is irreversible and both hydrodynamic and physico-chemical removal of deposited particles is prohibited.
Governing equations and boundary conditions
The governing equations for the creeping flow of an incompressible Newtonian fluid are the Stokes equations given by [START_REF] Wirner | Flow and transport of colloidal suspensions in porous media[END_REF] ,
0 = -∇p + μ∇²v   (7-1)
∇·v = 0   (7-2)
where p is the pressure and v stands for the flow velocity.
The no-slip boundary condition is applied on the pore wall and on the interface between the fluid and a deposited particle. At the inlet, the pressure is set to a fixed value, while at the outlet it is set to zero. In order to fulfill the requirement of creeping flow and to investigate a large range of Pé clet numbers, the pressure at the inlet will be varied between 10 -5 and 10 Pa. The Pé clet number is defined as:
Pe = ū a_p / D   (7-3)
where D is the bulk diffusion coefficient of the particles in the fluid.
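As a numerical illustration, assuming that the bulk diffusivity D follows the Stokes-Einstein relation (consistent with equation (7-4) below and with the parameters of Table 7.1), the Péclet number of equation (7-3) can be evaluated as in the following sketch; the mean velocity is an arbitrary illustrative input.

import math

def stokes_einstein_diffusivity(a_p, mu=1.0e-3, T=293.15, k_B=1.38e-23):
    """Bulk diffusion coefficient D = k_B T / (6 pi mu a_p)."""
    return k_B * T / (6.0 * math.pi * mu * a_p)

def peclet(u_mean, a_p, D):
    """Péclet number Pe = u_mean * a_p / D, equation (7-3)."""
    return u_mean * a_p / D

a_p = 2.0e-7                                   # particle radius (m)
D = stokes_einstein_diffusivity(a_p)
print(D, peclet(u_mean=1.0e-5, a_p=a_p, D=D))  # D is about 1.1e-12 m^2/s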
Methodology and tools
Since the incoming suspension is considered to be dilute, particles are injected individually, randomly and sequentially at the inlet of the geometry and [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] a Lagrangian method is used to track the trajectories of the colloidal particles. Once the injected particle is deposited onto the surface wall or leaves the domain, another particle is injected and the whole process will be repeated until the pre-defined cut-off value of deposition probability (2%) is reached. The deposition probability is defined as the ratio of the number of deposited particles over the number of injected particles.
Simulations are carried out by the 3D-PTPO code, coupling OpenFOAM ® (Open Field Operation and Manipulation) and Python ® . Firstly, the flow field is computed using OpenFOAM ® software. Secondly, the injected particles are tracked using a code developed using Python ® , which is an open source programming language used for both standalone programs and scripting applications in a wide variety of domains.
The detailed steps of the 3D-PTPO code are as follows: the calculated flow field (in CSV format) is obtained after solving the equations of motion; then a particle is injected at the entrance plane (z = 0) with initial coordinates (x, y) generated by two independent pseudo-random series [START_REF] Dieter | Pseudo-random numbers: The exact distribution of Pairs[END_REF] .
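A minimal sketch of this injection step is given below; mapping the two pseudo-random numbers to a uniform position on the circular inlet section (here by rejection sampling inside a disc of radius R) is our own illustrative choice, not necessarily the exact scheme of the 3D-PTPO code.

import random

def inject_particle(R, seed=None):
    """Draw a random (x, y, z=0) starting position on the inlet disc of radius R."""
    rng = random.Random(seed)
    while True:
        x = (2.0 * rng.random() - 1.0) * R   # first pseudo-random series
        y = (2.0 * rng.random() - 1.0) * R   # second, independent series
        if x * x + y * y <= R * R:           # keep (x, y) inside the pore section
            return (x, y, 0.0)

print(inject_particle(R=2.0e-6, seed=42))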
Afterwards, a loop is carried out to track the movement of the particle in the flow domain. For that purpose, the particle velocity V at every position within the domain is calculated by the vector summation of the advection velocity V_conv and the Brownian diffusion velocity V_diff. V_conv is obtained from OpenFOAM® by interpolating the velocity of the nearest eight mesh nodes surrounding the particle. V_diff represents the random velocity of the particle due to Brownian motion at every time step, given by:

V_diff = |V_diff| (α i + β j + γ k),  with α = a/√(a²+b²+c²), β = b/√(a²+b²+c²), γ = c/√(a²+b²+c²), |V_diff| = √(2D/t_r) and D = k_B T/(6πμ a_p)   (7-4)
where a, b and c are random numbers between -1 and 1; α, β and γ are determined by the normalization of the three random numbers, thus giving a unit vector with a random direction; k_B is the Boltzmann constant and T is the absolute temperature. The parameters used in the simulation are summarized in Table 7.1. Here we neglect the particle mobility reduction near the wall that decreases the particle's diffusivity. Indeed, the explicit evaluation of this reduction is of no practical interest in this work since we do not calculate the hydrodynamic forces acting on a physical particle moving near the wall, although one could have advocated a phenomenological correction of the diffusivity coefficient as particles approach the wall. This has not been done in this work, where D is considered constant. In this study, the reference time t_r is defined as [START_REF] Djehiche | Effet de la force ionique et hydrodynamique sur le dé pôt de particules colloï dales dans un milieu poreux consolidé[END_REF] :

t_r = ζ/(2 u_max)   (7-5)

where ζ stands for the characteristic mesh size and u_max is the maximum value of the advection velocity along the z axis in the absence of deposited particles. This means that at most the particle can travel through a distance equivalent to one half of a block size during the reference time.
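The random diffusion velocity of equation (7-4) can be sketched as follows: three random numbers drawn in [-1, 1] are normalized into a unit vector and scaled by a diffusion speed built from D and t_r. This is a sketch of the principle only, with our own function name, and it assumes the |V_diff| = √(2D/t_r) magnitude reconstructed above.

import math
import random

def diffusion_velocity(D, t_r, rng=random):
    """Random-direction Brownian velocity with magnitude sqrt(2 D / t_r), cf. equation (7-4)."""
    a, b, c = (rng.uniform(-1.0, 1.0) for _ in range(3))
    norm = math.sqrt(a * a + b * b + c * c) or 1.0   # guard against a zero vector
    speed = math.sqrt(2.0 * D / t_r)
    return (speed * a / norm, speed * b / norm, speed * c / norm)

D = 1.07e-12    # m^2/s, order of magnitude for a 0.2 um particle at 293 K
t_r = 1.0e-4    # s, illustrative reference time t_r = zeta / (2 u_max)
print(diffusion_velocity(D, t_r))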
The position of a moving particle is obtained by summing the old position vector X old and the updated velocity multiplied by the reference time t r :
X_new = X_old + V t_r   (7-6)
Table 7.1 Parameters used in the simulations

Parameter                                Value
Particle radius, a_p (m)                 2×10⁻⁷
Length of the geometries, L (m)          1.5×10⁻⁵
Boltzmann constant, k_B (J/K)            1.38×10⁻²³
Temperature, T (K)                       293.15
Dynamic viscosity, µ (Pa·s)              10⁻³
During the particle tracking process, three situations may occur: (1) the particle leaves the domain without deposition (Figure 7.2a); (2) if the center-to-center distance between the moving particle and any already deposited particle is less than a predefined value, the transported particle bounces back to the bulk flow and the tracking process continues (Figure 7.2b); (3) the particle approaches the pore wall and is deposited if enough free surface is available for deposition (Figure 7.2c). In that case, the meshes containing the reconstructed deposited particle are considered as solid in order to take the particle's volume into account, and the flow field is then recalculated.
As soon as the loop for one particle finishes, another particle is injected. The injection process is repeated until the particle deposition probability defined as the ratio of the number of deposited particles over the number of injected particles, reaches a minimum value of 2%.
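The overall injection loop and its 2 % stopping criterion can be sketched as below; the tracking and flow-update routines are simple stand-ins for the corresponding 3D-PTPO steps, and the per-group evaluation of the deposition probability follows the grouping of 200 particles used later in the results.

import random

def run_injection(track_one_particle, recompute_flow, group_size=200, cutoff=0.02):
    """Inject particles one by one until the deposition probability of the latest
    group of injected particles falls below the cut-off value (2 % here)."""
    injected = deposited = deposited_in_group = 0
    while True:
        if track_one_particle():          # returns True if the particle deposited
            deposited += 1
            deposited_in_group += 1
            recompute_flow()              # the deposited volume modifies the flow field
        injected += 1
        if injected % group_size == 0:    # evaluate the probability per group
            if deposited_in_group / group_size < cutoff:
                break
            deposited_in_group = 0
    return injected, deposited

# Toy usage: a random "tracker" with a 1 % deposition chance stands in for the real tracking
print(run_injection(lambda: random.random() < 0.01, lambda: None))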
It must be noted that, in order to keep the numerical computations feasible in terms of meshing and therefore of computation time, the volume of the reconstructed deposited particle is not a sphere but a cylinder with a circular base.
Results and discussion
For each pore geometry, seven simulation runs at different Péclet numbers ranging typically from 10⁻³ to 10³ were carried out to investigate the influence of the flow regime on the particle deposition probability, the spatial density distribution of the deposit and the surface coverage. For low Pe (Pe << 1), the particle movement is dominated by the diffusion mechanism, while for high Pe (Pe >> 1), the particle transport is governed by advection.
Deposition probability
The deposition probability is defined as the ratio of the number of deposited particles over the number of injected particles and is calculated over groups of 200 particles for each simulation run.
This was done for each pore geometry and, since the obtained results are very similar, only those corresponding to the tapered pipe are presented for clarity. The evolution of the deposition probability versus the number of injected particles (N) at different Pe is plotted in Figure 7.3 in the case of the tapered pipe. For the advection-dominant regime corresponding to high Péclet numbers, the value of the deposition probability is relatively small. This is due, on the one hand, to the low residence time of the particles in the domain, as the advection velocity is high, and on the other hand to the hydrodynamic shadowing effect, which leads to larger exclusion surfaces compared to the diffusion-dominant regime so that smaller areas are available for deposition around the already deposited particles.
These exclusion zones are more extended downstream of deposited particles and increase both in size and shape complexity as Pe increases. For low Pé clet numbers, the deposition probability is relatively high and exhibits a plateau at the early stages of the injection process. This is due to the fact that compared to the high Pé clet regime, the exclusion area at low Pe (4πa p 2 , for deposition of non-interacting spheres on a flat surface in purely diffusive regime) is smaller and the pore-wall surface is available to a large extent for the injected particles to deposit on. However and strictly speaking, in evaluating the extent of the exclusion area due to blocking effect one should in general take into account not only steric interaction but also the extra contribution of DLVO origin [START_REF] Johnson P R | Dynamics of colloid deposition in porous media: Blocking based on random sequential adsorption[END_REF][START_REF] Adamczyk | Flow-induced surface blocking effects in adsorption of colloid particles[END_REF][START_REF] Ko | Coupled influence of colloidal and hydrodynamic Interactions on the RSA dynamic blocking function for particle deposition onto packed spherical collectors[END_REF] . In this work only steric contribution has been considered. Moreover due to shortness of the pore length and still low residence time of flowing particles in the pore space, the maximum deposition probability is only 44% even for Pe=0.0019. For a sufficient length of the pipe, the maximum value of the deposition probability is expected to approach unity as Lopez et al. have previously shown using an analogous approach in case of a longer domain for the parallel plate configuration [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] .
When the number of the deposited particles reaches a critical value, the deposition probability drops sharply to rather low values indicating that any newly injected particles will have much less chance to deposit. This phenomenon is similar to the Random sequential Adsorption (RSA) process where the deposition kinetics becomes slow as the jamming limit is approached [START_REF] Talbot | From car parking to protein adsorption an overview of sequential adsorption processes[END_REF] . Furthermore, it is noteworthy that for all Pé clet numbers, the overall deposition probability tends to decrease with N, and under our conditions, when N exceeds 15000 all values of the deposition probability are less than 2% with only minor variations afterwards, indicating that the deposition process is almost over.
Therefore, in this work, 2% was selected as the cut-off value for the injection process. Since particles are considered to be volumeless until they adsorb, the density distribution of the deposited particles, expressed as the number of deposited particles per unit area, is obviously isotropic in any x-y plane perpendicular to the pore symmetry axis. To show the influence of Pe on the axial (z-axis) variation of such a density, we will consider successively the tapered pipe geometry and the venturi-like geometry to highlight the difference between them. For the former, two Pé clet numbers (0.0019 and 190) that are representative of diffusion dominant and advection dominant regimes were selected. For each Pe, the pore is divided into 15 slices along the z axis (the mean flow direction) and the density profile of the deposited particles is plotted for various numbers of injected particles, N (Figure 7.4a and 7.5a).
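These axial density profiles can be obtained by binning the z-coordinates of the deposited particles into the 15 slices, as in the sketch below; the deposit positions and the slice surface area are illustrative inputs, and in practice the area of each slice would follow from the local pore radius.

import numpy as np

def axial_density_profile(z_deposited, length, slice_area, n_slices=15):
    """Number of deposited particles per unit area in each axial slice."""
    counts, edges = np.histogram(z_deposited, bins=n_slices, range=(0.0, length))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / slice_area

# Illustrative deposit biased towards the pore inlet, as in the diffusion-dominant regime
rng = np.random.default_rng(0)
z = rng.beta(1.0, 3.0, size=2000) * 15e-6
centers, density = axial_density_profile(z, length=15e-6, slice_area=1.0e-11)
print(density)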
For Pe equal to 0.0019, the deposition process is shown to be nearly piston-like. When N is small, the deposition distribution exhibits an apparent plateau near the inlet of the pipe, while the density remains small near the exit (Figure 7.4b). By further increasing N, the density increase is more remarkable near the exit, where a large surface is still available for deposition, while it undergoes only a moderate change at the inlet zone. At later stages of deposition, a uniform deposit along the pipe is expected. Similar behavior was reported in the literature for column experiments when Polystyrene latex colloidal particles were injected into a synthetic consolidated porous medium and where deposition density was determined through local measurement of the attenuation of an incident gamma ray due to particles deposition [START_REF] Gharbi | Use of a gamma ray attenuation technique to study colloid deposition in porous media[END_REF][START_REF] Djehiche | Étude expé rimentale du dé pôt de particules colloï dales en milieu poreux : Influence de l'hydrodynamique et de la salinité[END_REF] . Indeed, for low Pe the particle movement is mainly dominated by diffusion so that the velocity in the x-y plane may be higher than the velocity component in the mean flow direction. As a consequence, particles approach the surface wall and deposit first close to the inlet of the pipe. For high number of injected particles (over 2000)
deposition at the inlet is almost over and the plateau value increases only slightly there approaching the jamming limit. Then, any further increase of injected particles mainly contributes to increase density in the rear part of the tube where free surface is still available for particles deposition leading to a uniform and dense deposit at the process end (Figure 7.4a and 7.4c). This piston-like deposition was experimentally observed in column experiments [START_REF] Djehiche | Étude expé rimentale du dé pôt de particules colloï dales en milieu poreux : Influence de l'hydrodynamique et de la salinité[END_REF] and the covering front displacement would be similar if the simulation domain was longer.
For Pe=190, particle transport is dominated by advection, the spatial density distribution curves are nearly uniform along the pipe whatever the number of injected particles (Figure 7.5a, 7.5b and 7.5c) leading to a scanty final deposit. This is a consequence of the high value of V conv that greatly modify the excluded zone both in magnitude and shape. In this advection dominant regime, the restricted area in the rear of deposited particles is increased as flow velocity (or particle velocity here) increases resulting in a great impact of the hydrodynamic shadowing effect on pore surface covering.
This phenomenon that is sometimes expressed in terms of the blocking factor was already experimentally evidenced [START_REF] Van Loenhout | Hydrodynamic flow induced anisotropy in colloid adsorption[END_REF][START_REF] Gharbi | Use of a gamma ray attenuation technique to study colloid deposition in porous media[END_REF][START_REF] Ko | The "shadow effect" in colloid transport and deposition dynamics in granular porous media-Measurements and mechanisms[END_REF][START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF] and was modelled [START_REF] Van Loenhout | Hydrodynamic flow induced anisotropy in colloid adsorption[END_REF][START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF] and numerically assessed [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Areepitak | Model simulations of particle aggregation effect on colloid exchange between streams and streambeds[END_REF] .
Figure 7.6a and 7.6b offer a clear view for Pe=1.9 on how flow streamlines are modified in the vicinity of the wall due to particle deposition. It is then clearly seen that the original streamlines that were straight and parallel to the wall become highly deformed and peeled from the wall. Moreover and as the jamming ratio (ratio of characteristic size of the pore to that of the particle) is not too large, the streamlines become also squeezed decreasing the local Pé clet number. Consequently, the capture efficiency decreases in a manner similar to that of an isolated spherical collector for which the capture efficiency of Brownian particles is predicted to vary as Pe -2/3 [START_REF] Levich | Physicochemical hydrodynamics[END_REF] .
Similarly, in the case of the venturi geometry, we studied the variation of the spatial distribution of the deposit density with the number of injected particles along the pore axis and for a Pe interval that covers the diffusion-dominant and advection-dominant regimes. Overall, the observed behavior is similar to that observed for the tapered pipe, with the same features. However, the existence of corners in this geometry may be seen to locally impact the density distribution. This is obvious in Figure 7.7, corresponding to Pe = 0.0014, which shows a more or less deep minimum at each corner location corresponding to lower deposition. An explanation of such behavior may come from the flow structure in these zones, since the flow streamlines behave differently in this geometry. In Figure 7.8 an enlarged view of the flow at the corners is displayed, showing a net detachment of the streamlines at the corners that impedes particle deposition, as locally flowing particles will hardly cross the critical distance for deposition.
Moreover, such a detachment is more marked at corners D and C evidencing the higher energy loss for the diverging part of the pore where deposition is less likely to occur and the minimum there is consistently deeper than the first one corresponding to corners A & B (Figure 7.8). Analogous
behavior was observed at various Pe numbers even if the evaluation of the density of deposited particles is rather difficult at high Pe values due to less deposition as we emphasized before.
However, one can expect additional hydrodynamic retention of Brownian particles in these zones when the Reynolds number is high due to flow recirculation. Such a retention mechanism was proven to be important in real porous media at grain-grain edges [START_REF] Coutelieris | Low Peclet mass transport in assemblages of spherical particles for two different adsorption mechanisms[END_REF][START_REF] Klauth | Fluorescence macrophotography as a tool to visualise and quantify spatial distribution of deposited colloid tracers in porous media[END_REF] and in cracks for rough pore surfaces [START_REF] Sefrioui | Numerical simulation of retention and release of colloids in porous media at the pore scale[END_REF] .

Surface coverage (Γ) is defined as the ratio of the total projection area of deposited particles to the total initial surface area of the pore surface before deposition. In this work, results are presented in terms of the dimensionless surface coverage Γ/Γ_RSA, where Γ_RSA was taken to be 0.546 [START_REF] Talbot | From car parking to protein adsorption an overview of sequential adsorption processes[END_REF] which corresponds to the value for a pure diffusion regime and a flat surface using the RSA model. The use of such a value is justified since the ratio of particle radius to pore surface curvature is low enough.
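A sketch of this surface-coverage computation is given below; approximating the pore-wall area by that of a straight cylinder of radius R and length L is our simplification for the converging/diverging geometries, and the input values are illustrative.

import math

GAMMA_RSA = 0.546   # jamming limit of the random sequential adsorption model

def surface_coverage(n_deposited, a_p, pore_radius, pore_length):
    """Gamma = total projected area of deposited particles / initial pore-wall area."""
    projected_area = n_deposited * math.pi * a_p ** 2
    wall_area = 2.0 * math.pi * pore_radius * pore_length
    return projected_area / wall_area

gamma = surface_coverage(n_deposited=600, a_p=2.0e-7, pore_radius=2.0e-6, pore_length=1.5e-5)
print(gamma, gamma / GAMMA_RSA)   # dimensionless coverage Gamma / Gamma_RSA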
The variation of Γ/Γ_RSA versus the number of injected particles (N) at Péclet numbers spanning from very low to very high values is plotted in Figure 7.9 for the tapered pipe. It can be seen that, for all Pe values, Γ/Γ_RSA increases sharply with N in the early deposition stages and tends to a plateau value, Γ_final/Γ_RSA, that is of course reached for a smaller value of N at higher Pe. In the diffusion-dominant regime (for example, at Pe = 0.0019), Γ_final/Γ_RSA is found to be close to that obtained in the same conditions for a straight capillary tube [START_REF] Li | Colloidal particle deposition in porous media under flow: A numerical approach[END_REF] and in parallel plates as well [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] . For Pe = 1900, where particle transport is mostly due to advection, Γ_final/Γ_RSA is significantly reduced and is only 0.08. It should be noted that even at these extreme Pe values the flow Reynolds number is low enough (<0.1) and the flow may still be considered as creeping. Similar behavior was also observed in the case of the venturi geometry despite the existence of the corners with reduced deposition density therein, which affects only slightly the value of Γ_final/Γ_RSA (data not shown). In Figure 7.10, Γ_final/Γ_RSA is plotted as a function of Pe for the two considered geometries, for the tube geometry already investigated under the same conditions in a previous work [START_REF] Li | Colloidal particle deposition in porous media under flow: A numerical approach[END_REF] , as well as for the data obtained by Lopez and co-workers [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] from simulations performed in a parallel-plates geometry under comparable conditions. As we can see, as long as diffusion dominates (Pe << 1), the surface coverage is close to Γ_RSA and is almost constant due to the high deposition probability in this regime, and the precise form of the pore has a weak impact on the attained value of Γ_final. It is obvious that the upper limit of this regime varies from one geometry to another due to the values of ū used for the Pe calculation.
Furthermore, for the advection-dominant regime (Pe>>1), Γ final /Γ RSA decreases significantly with Pe.
This is because the hydrodynamic shadowing effect comes into play and, as mentioned before, particles are transported by the fluid over a distance increasing with the advection velocity until they may deposit downstream, away from an already deposited particle, resulting in a lower surface coverage [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Van Loenhout | Hydrodynamic flow induced anisotropy in colloid adsorption[END_REF][START_REF] Gharbi | Use of a gamma ray attenuation technique to study colloid deposition in porous media[END_REF][START_REF] Ko | The "shadow effect" in colloid transport and deposition dynamics in granular porous media-Measurements and mechanisms[END_REF][START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF] . This finding is in qualitative agreement with the experimental data obtained by Veerapen and co-workers when sub-micrometric latex particles are injected through a non-consolidated silicon carbide porous medium [START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF] . Salehi [START_REF] Salehi | Mecanismes de retention hydrodynamique de suspension colloidales en milieux poreux modeles[END_REF] has developed a simple model that describes the deposition of colloidal particles in the advection-dominant regime and predicts the surface coverage for a flat surface to follow a power law, Γ_final ≈ Pe^(-1/3), which was shown to fit well the experimental data of Veerapen et al. [START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF] . This power law is also drawn in Figure 7.10, showing that, although the surface coverage in the venturi pipe fits such a trend satisfactorily, the agreement is less obvious when the other geometries are considered, and more in-silico experiments at higher Péclet numbers are needed to draw any definite conclusion. Unni and Yang [START_REF] Adamczyk | Deposition of colloid particles at heterogeneous and patterned surfaces[END_REF] have investigated colloid deposition in a parallel channel by means of Brownian dynamics simulation, focusing on the variation of the surface coverage with flow strength. They showed that Γ is a decreasing function of the flow Reynolds number in a much more pronounced way, but the considered range, from 20 to 60, was many orders of magnitude greater than in the present work.

Figure 7.10 Γ_final/Γ_RSA versus Pe for the different geometries, together with the simulation results of Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] and the experimental results of Veerapen et al. [START_REF] Veerapen | In-depth permeability damage by particle deposition at high flow rates[END_REF]

7.4 Conclusions
The present study proposes the 3D-PTPO code (Three-Dimensional Particle Tracking model by Python ® and OpenFOAM ® ) using a Lagrangian method to carry out microscale simulations of colloidal particle transport and deposition in converging/diverging capillaries (tapered pipe and venturi tube). The idea is to approach the behavior in porous media idealized as a bundle of capillaries with variable cross sections. The main originality of the tool is to take into account the modification of the pore-space and therefore the flow field as particles are deposited on the pore-wall.
The variations of the key parameters including the deposition probability and the dimensionless surface coverage Γ/Γ RSA , as well as the detailed spatial distribution of the density of the deposited particles were investigated. The main conclusions drawn from this work are as follows:
(i) The probability for a particle to be deposited on the pore-wall surface is much higher when the transport is dominated by diffusion, for the two geometries considered in this work. In this regime, at the early stages of injection, the deposition probability is high and nearly constant, and it decreases sharply close to the jamming limit as the pore wall becomes covered by particles. For the advection-dominant regime, the probability of deposition is rather low at all stages of injection due to the high advection velocity relative to diffusion (high Pe) and to the hydrodynamic shadowing effect, which is predominant in this case.
(ii) The deposited particle distribution along the pipes is piston-like for the diffusion-dominant regime, while the distribution is more uniform for the advection-dominant regime. Especially, for the venturi tube with steep corners, the density of deposited particles is relatively low in the vicinity of the pore throat entrance and exit due to streamlines modification.
(iii) For all values of the Pé clet number considered in this work ranging between 0.0019 and 1900, the dimensionless surface coverage Γ/Γ RSA as a function of the number of injected particles (N) features a sharp increase in the early deposition stages and tends to a plateau value for higher N. The final surface coverage is reached for much lower N values for higher values of Pe. The behavior of the final plateau corresponding to the maximum surface coverage Γ final /Γ RSA as a function of Pe has been analyzed. For low Pe, a plateau could be observed for both geometries, the plateau value and the deposition kinetics are consistent with the random sequential adsorption (RSA) theory. For high Pe, the declining trends for Γ final /Γ RSA versus Pe are in good agreement with experimental and simulation results found in the literature. The matching of the trend with the power law Pe -1/3 observed in some studies in the literature is not general and needs further investigation.
(iv) The introduction of divergence/convergence (tapered pipe) or of a cross-section constriction (venturi pipe) can lead to a modification of the deposition phenomenon. However, the small angle chosen for the divergence or convergence of the tapered pipe in this work leads to very weak differences in comparison with the capillary tube of constant cross section studied in a previous work [START_REF] Li | Colloidal particle deposition in porous media under flow: A numerical approach[END_REF] . A thorough study of the influence of the convergence/divergence angle is beyond the scope of this paper, but could be of interest. For the venturi pipe, there is an obvious decrease of the density of deposited particles at the entrance and exit of the pore throat, especially for the diffusion-dominant regime. Moreover, the decrease at the exit is much more pronounced and corresponds to a zone where particles are less likely to reach the wall. In the same manner, it can be noticed that Γ_final/Γ_RSA versus Pe for the tapered pipe is rather close to that of the straight tube, while the dependence on Pe for the venturi tube is rather different. This may be partly attributed to the particular geometry in this case, although, as mentioned before, the definition of the Péclet number based on the average pore-throat velocity can also be considered as a possible explanation for this shift.
Finally, the above results demonstrate that the numerical method used seems able to capture the physics of transport and deposition of colloidal particles in pores of simple geometry and could be used as a basis for further developments namely transport in more complex geometries (unit cell of a sphere packing), multi-layer particle deposition, chemically patterned surfaces, etc.
Particle Transport and Deposition in Chemically Heterogeneous Porous Media
Introduction
Colloidal particle transport and deposition (irreversible adsorption) processes in porous media are of great environmental and industrial interest since they are critical to numerous applications ranging from drug delivery to drinking water treatment [START_REF] Tosco | Transport of ferrihydrite nanoparticles in saturated porous media: role of ionic strength and flow rate[END_REF][START_REF] Asgharian | Prediction of particle deposition in the human lung using realistic models of lung ventilation [J][END_REF] . Accordingly, particle deposition in homogeneous porous media has been extensively studied both experimentally and theoretically. For example, Buret et al. [START_REF] Buret | Water quality and well injectivity do residual oil-in-water emulsions matter[END_REF] carried out a laboratory study of oil-in-water emulsion flow in porous media to investigate the mechanisms of oil-droplet retention and its effect on permeability. The results demonstrate that the induced permeability loss is significant even at a high pore-size/droplet-size ratio. Experimental investigations can help to understand the particle transport and deposition kinetics to a large extent. To further characterize colloidal particle transport and deposition in porous media at the microscale level, one can carry out numerical simulations by idealizing the porous medium as a bundle of capillaries with various kinds of geometries. For instance, Lopez et al. [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] carried out micro-scale numerical simulations of colloidal particle deposition onto the surface of a simple pore geometry consisting of two parallel planar surfaces, and investigated the deposition kinetics of stable colloidal spheres from dilute dispersions. The results show that both the surface coverage Γ and the equivalent hydrodynamic thickness of the deposit, δ_h, exhibit a definite plateau at small Pe, and both are decreasing functions of flow strength at high Pe. Li et al. [START_REF] Li | Colloidal particle deposition in porous media under flow: A numerical approach[END_REF] simulated the transport and deposition of colloidal particles at the pore scale in a porous medium idealized as a bundle of 3D capillaries. The results indicate that the adsorption probability and the surface coverage are decreasing functions of the particles' Péclet number.
The above investigations provided valuable insight into particle transport and deposition processes on smooth and homogeneous pore surfaces, which obviously simplify the simulation process. However, from a practical point of view, the problem of particle deposition in porous media featuring heterogeneity at the pore scale is more relevant, since most natural or artificial porous media are physically and/or chemically heterogeneous [START_REF] Chatterjee | Particle transport in patterned cylindrical microchannels[END_REF] . When particles under flow approach such heterogeneously patterned surfaces, they exhibit various deposition behaviors depending on the nature, magnitude and form of such heterogeneities [START_REF] Bendersky | Statistically-based DLVO approach to the dynamic interaction of colloidal microparticles with topographically and chemically heterogeneous collectors[END_REF][START_REF] Duffadar | Interaction of micrometer-scale particles with nanotextured surfaces in shear flow[END_REF] . Thus, particle transport and deposition in heterogeneous porous media have been an active area of investigation, and a significant amount of relevant research has been performed [START_REF] Duffadar | Dynamic adhesion behavior of micrometer-scale particles flowing over patchy surfaces with nanoscale electrostatic heterogeneity[END_REF][START_REF] Bradford | Colloid adhesive parameters for chemically heterogeneous porous media[END_REF] . For example, in the presence of surface roughness and particle/pore physicochemical attractive interactions, the simulation of solid colloidal particle transport in porous media at the pore scale was carried out by Sefrioui et al. [START_REF] Sefrioui | Numerical simulation of retention and release of colloids in porous media at the pore scale[END_REF] .
They found that the existence of surface roughness is a necessary but not sufficient condition for particle retention. In the current work, instead of physical heterogeneity, the impact of chemical heterogeneity is investigated. During the past two decades, particle deposition on chemically heterogeneous surfaces has been extensively investigated [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF][START_REF] Bendersky | DLVO interaction of colloidal particles with topographically and chemically heterogeneous surfaces[END_REF] . Chatterjee et al. [START_REF] Chatterjee | Particle deposition onto Janus and Patchy spherical collectors[END_REF] analyzed particle deposition on Janus and patchy collectors. The results indicate that particles tend to deposit at the edges of the favorable strips. Besides, this preferential accumulation varies along the tangential position, owing to the nonuniform nature of the collector. Recently, Pham et al. [START_REF] Pham | Effect of spatial distribution of porous matrix surface charge heterogeneity on nanoparticle attachment in a packed bed[END_REF] investigated the effect of the spatial distribution of the porous matrix surface heterogeneity on particle deposition.
They found that the attachment rate constant is affected by the heterogeneity pattern, the particle size, and the fraction of the surface that is favorable to deposition. Adamczyk et al. [START_REF] Adamczyk | Deposition of latex particles at heterogeneous surfaces[END_REF]278,[START_REF] Adamczyk | Colloid particle adsorption at random site (heterogeneous) surfaces[END_REF] investigated the irreversible adsorption of colloid particles and globular proteins at heterogeneous surfaces theoretically and experimentally. It was revealed that the initial attachment flux increased significantly with the dimensionless site coverage Γ/Γ_RSA and with the λ_r parameter, where λ_r is the particle size ratio (the averaged size of the latex versus the size of the particle).
This behavior was quantitatively interpreted in terms of the scaled particle theory. It also was demonstrated that particle adsorption kinetics and the jamming coverage increased significantly, at fixed site coverage, when the r parameter increased. Rizwan et al. [START_REF] Rizwan | Particle deposition onto charge-heterogeneous substrates[END_REF] experimentally created charge-heterogeneous surfaces by employing soft lithographic techniques, and presented a simple mathematical description of particle deposition on the created rectangular surface features. The results demonstrate that particles tend to preferentially deposit at the edges of the favorable strips, while the extent of this bias can be controlled by the proximity of consecutive favorable strips and the ratio of particle size to the strip width. Liu et al. [START_REF] Liu | Role of collector alternating charged patches on transport of Cryptosporidium parvum oocysts in a patchwise charged heterogeneous micromodel[END_REF] studied the role of collector surface charge heterogeneity on transport of oocyst and carboxylate microsphere in 2D micromodels. The results
show that under higher pH, particles tend to deposit on the patch. Among the various geometries of porous media, the transport of particles in capillaries/microchannels is central to numerous microfluidic and nanofluidic systems [START_REF] Waghmare | Finite reservoir effect on capillary flow of microbead suspension in rectangular microchannels[END_REF] . Moreover, the porous media are usually considered as a bundle of capillaries/microchannels and when particle transport and deposition has been simulated in one capillary/microchannel, the process in the whole porous media could be predicted with suitable imposed boundary between capillaries/microchannel [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF] . To the best of our knowledge, Chatterjee et al. [START_REF] Chatterjee | Particle transport in patterned cylindrical microchannels[END_REF] have investigated the particle transport in patchy heterogeneous cylindrical microchannels.
The surface heterogeneity is modeled as alternate bands of attractive and repulsive regions on the channel wall to facilitate systematic continuum type evaluation. This study provides a comprehensive theoretical analysis of how the transport of such suspended particles is affected in these microchannels due to the heterogeneities on the microchannel walls. Hence, 3D simulations must be performed once again to develop a complete picture of particle transport in a capillary. This is one of the most significant objectives of the present work.
Moreover, it is important to investigate the coupled effect of hydrodynamics and chemical heterogeneity on particle deposition. Nazemifard et al. [START_REF] Nazemifard | Particle deposition onto micropatterned charge heterogeneous substrates: trajectory analysis[END_REF] used particle trajectory analysis to investigate the effect of microscopic surface charge heterogeneity on particle trajectories and deposition efficiency near a two-dimensional patterned charged substrate. They demonstrated that, as a result of the coupled effects of hydrodynamic and colloidal forces, there exists a zone at the leading edge of each favorable strip that is inaccessible for particle deposition and acts as an unfavorable region from the deposition point of view. This observation also suggests that deliberately micropatterning alternating charge-heterogeneous strips on a substrate can significantly modify the particle deposition behavior in the presence of tangential flow. There are so far few reported works where attention is paid to the combined effects of surface heterogeneity and hydrodynamics on colloidal particle deposition in three-dimensional capillaries/microchannels, which is also one of the objectives of the present work.
In this chapter, the 3D-PTPO (Three-dimensional particle tracking model by Python® and OpenFOAM®) code developed in the I2M laboratory was used to investigate the influence of surface heterogeneities and hydrodynamics on particle deposition. A three-dimensional axisymmetric capillary with a periodically repeating chemically heterogeneous surface (crosswise-strip patterned and chessboard patterned) of positive and negative surface charges (or alternating attractive and repulsive regions) is employed as the heterogeneity model. Finally, the dependence of the deposition probability and of the dimensionless surface coverage on the frequency of the pitches, on Pe and on the favorable surface coverage, as well as the spatial density distribution of deposited particles, are all investigated.
Numerical simulations
The present work focuses on three-dimensional numerical modeling of the process of transport and deposition of particles in a pore throat of a porous medium idealized as a capillary with periodically repeating heterogeneous surface pattern (crosswise strips patterned and chess board patterned) with Poiseuille flow profile, as is depicted in Figure 8.1. The frequency of the pitches (λ) is calculated by the ratio of L to p, where L is the length of the capillary and p is the pitch length (summation of one negative and one positive band (patch) widths). The favorable area fraction is denoted as θ=A f /A t , where A f is the area of the surface wall favoring deposition and A t is the total surface area of the capillary wall. The geometries considered have a length of 15 μm and a radius of
Hypothesis of this problem
In the present work the following assumptions are adopted [START_REF] Lopez | Simulation of surface deposition of colloidal spheres under flow[END_REF][START_REF] Dí | Steady state numerical simulation of the particle collection efficiency of a new urban sustainable gravity settler using design of experiments by FVM[END_REF] :
1. The fluid is Newtonian and incompressible and the flow is creeping.
2. The particle Reynolds number (based on the fluid density and dynamic viscosity, the particle radius a_p and the mean velocity at the pore throat under clean-bed conditions) is small enough so that the particles can be treated as mass points during their transport.
3. The particle-particle interaction is purely repulsive, therefore particle deposition is monolayer.
4. Deposition is irreversible and both hydrodynamic and physico-chemical removal of deposited particles is prohibited.
Governing equations and boundary conditions
The governing equations for the creeping flow of an incompressible Newtonian fluid are the Stokes equations given by [START_REF] Wirner | Flow and transport of colloidal suspensions in porous media[END_REF] :
0 = -∇p + μ∇²v   (8-1)
∇·v = 0   (8-2)
where p is the pressure and v stands for the flow velocity.
The no-slip boundary condition is applied on the pore wall and on the interface between the fluid and a deposited particle. At the inlet, the pressure is set to a fixed value, while at the outlet it is set to zero. In order to fulfill the requirement of creeping flow and to investigate a large range of Pé clet numbers, the pressure at the inlet will be varied between 10 -5 and 10 Pa. The Pé clet number is defined as:
Pe = ū a_p / D   (8-3)
where D is the bulk diffusion coefficient of the particles in the fluid and ū is the average velocity along the mean flow axis.
Methodology and tools
Since the incoming suspension is considered to be dilute, particles are injected individually, randomly and sequentially at the inlet of the geometry [23]. A Lagrangian method is used to track the trajectories of the colloidal particles. Once the injected particle is deposited onto the surface wall or leaves the domain, another particle is injected and the whole process will be repeated until the pre-defined cut-off value of deposition probability (2%) is reached. The deposition probability is defined as the ratio of the number of deposited particles over the number of injected particles.
Simulations are carried out by the 3D-PTPO code. The detailed steps of the 3D-PTPO code could be found in Chapter 6 and Chapter 7. The parameters used for the simulation are summarized in Table 8.1.

Table 8.1 Parameters used in the simulations

Parameter                                Value
Length of the geometries, L (m)          1.5×10⁻⁵
Boltzmann constant, k_B (J/K)            1.38×10⁻²³
Temperature, T (K)                       293.15
Dynamic viscosity, µ (Pa·s)              10⁻³
During the particle tracking process, four situations may occur: (1) the particle leaves the domain without deposition (Figure 8.2a); (2) the center-to-center distance between the moving particle and any other particle already deposited is less than a predefined value, the transported particle will bounce back to the bulk flow, and the tracking process will continue (Figure 8.2b); (3) the distance between the particle center and the pore wall is less than a certain value (0.5 a p ), while the local surface wall is repulsive (positively charged), the transported particle will also bounce back to the bulk flow, and the tracking process will continue (Figure 8.2c); (4) the distance from the particle center to the pore wall is less than a certain value (0.5 a p ), and the local surface wall is attractive (negatively charged), the particle will be deposited if enough free surface is available for deposition (Figure 8.2d). In that case, the meshes containing the reconstructed deposited particle are considered as solid to take the particle's volume into account, and the flow field is then recalculated.
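The check of whether the local wall element is attractive (favorable) or repulsive can be sketched as follows for the two patterns, using the pitch length p = L/λ and the favorable fraction θ; the exact patch layout used in the simulations may differ, so this is an illustrative assumption with our own function names.

import math

def is_favorable_strips(z, pitch, theta):
    """Crosswise-strip pattern: the first theta fraction of each pitch is attractive."""
    return (z % pitch) / pitch < theta

def is_favorable_chessboard(z, phi, pitch, theta, n_sectors=8):
    """Chessboard pattern: squares alternating along z and along the azimuthal angle phi."""
    band = int(z // (pitch * theta))                                  # axial index of the square
    sector = int((phi % (2 * math.pi)) // (2 * math.pi / n_sectors))  # azimuthal index
    return (band + sector) % 2 == 0

print(is_favorable_strips(z=4.0e-6, pitch=3.0e-6, theta=0.5))
print(is_favorable_chessboard(z=4.0e-6, phi=1.2, pitch=3.0e-6, theta=0.5))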
As soon as the loop for one particle finishes, another particle is injected. The injection process is repeated until the particle deposition probability, defined as the ratio of the number of deposited particles to the number of injected particles, reaches a minimum value of 2%.

Spatial distribution of the deposited particles

For the chemically heterogeneous capillary, it is important to first investigate the local deposition behavior of the colloidal particles along the length of the geometry. Since gravity is ignored in this work, the spatial distribution of the deposited particles within any cross section is uniform, and the distribution along the capillary is expressed in terms of a spatial density, i.e., the number of deposited particles per unit area. For that purpose, the geometry is divided into 15 segments along the z axis (the mean flow direction). The resulting density profile of the deposited particles is plotted in Figure 8.3a: the solid boxes represent the concentration profile for the heterogeneous capillary (here the case with λ=5 and θ=0.5 is taken as an example), while the hollow stars represent the concentration profile for the homogeneous capillary, plotted for comparison.
For both cases, N is 2000 and Pe is 0.0015. Three major features could be observed from Figure 8.3a.
Firstly, the concentration profile of the heterogeneous capillary follows the oscillatory surface potential profile along the channel length [Chatterjee]; the spatial distribution of deposition closely emulates the periodic nature of the heterogeneous pattern. Secondly, concentration peaks tend to form at the leading and trailing edges of the favorable strips. These results are similar to those found in the literature [Adamczyk] and can be explained by the coupled effects of hydrodynamic and colloidal interactions [Chatterjee]. Figure 8.3b schematically depicts these effects on the particle behavior near the boundaries between the favorable and unfavorable strips. When the particles are relatively far from the collector surface, they tend to accumulate over the unfavorable strips since they cannot get closer to the surface due to the energy barrier (left case) [Nazemifard]. Consequently, under the combined action of convection and diffusion, the particles are transferred toward the nearest favorable strips, causing an enhanced concentration at the leading edges (right case). Similarly, the sharp concentration peaks at the trailing edges can be explained by the combined action of the attraction by the favorable strip, the repulsive colloidal interactions of the next unfavorable strip, and the transverse Brownian diffusion velocity [Nazemifard]. Thirdly, the deposition is much more uniform along the length of a patterned capillary wall (favorable regions) than along the homogeneous one, in which all the particles deposit within a very short distance from the inlet; this means that particles tend to travel further along the heterogeneous capillary than along the homogeneous one, a behavior also reported by Chatterjee et al. and Nazemifard et al. To make this more intuitive, 3D sectional views of the heterogeneous and homogeneous capillaries with deposited particles are shown in Figure 8.3c and 8.3d, respectively (the small red geometries represent the deposited particles; the flow occurs from left to right).
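The axial density profile discussed above can be obtained by binning the z coordinates of the deposited-particle centres into 15 segments and normalising by the wall area of each segment. A minimal sketch (the example input list and the radius value are assumptions; in practice the coordinates come from the tracking code) is:

import math

def axial_density_profile(deposited, L, R, n_bins=15):
    """Number of deposited particles per unit wall area in each z-segment."""
    counts = [0] * n_bins
    for (_x, _y, z) in deposited:
        k = min(int(z / (L / n_bins)), n_bins - 1)
        counts[k] += 1
    segment_area = 2.0 * math.pi * R * (L / n_bins)  # lateral wall area of one segment
    return [c / segment_area for c in counts]

# example input (replace with the coordinates produced by the tracking loop above)
deposited = [(0.0, 2.0e-6, (i % 15) * 1.0e-6) for i in range(60)]
for k, rho in enumerate(axial_density_profile(deposited, L=1.5e-5, R=2.0e-6)):
    print(f"segment {k:2d}: {rho:.3e} particles per m^2")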
Dependence of the deposition behavior on the frequency of the pitches and the Péclet number
The probability of deposition, defined as the ratio of the number of deposited particles to the number of injected particles, is calculated over successive groups of 200 injected particles for each simulation [Li].
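A sketch of this running estimate, computed over successive groups of 200 injected particles from a per-particle outcome record (True = deposited, False = left the domain), could look as follows; the synthetic outcome list is an assumed stand-in for actual simulation output.

import random

def deposition_probability_by_group(outcomes, group_size=200):
    """Deposition probability over successive groups of injected particles."""
    return [sum(outcomes[i:i + group_size]) / group_size
            for i in range(0, len(outcomes) - group_size + 1, group_size)]

# synthetic outcome record (True = deposited); replace with the simulation output
outcomes = [random.random() < 0.6 * 0.999 ** i for i in range(10000)]
probs = deposition_probability_by_group(outcomes)
cutoff_group = next((k for k, p in enumerate(probs) if p < 0.02), None)
print("first groups:", [round(p, 3) for p in probs[:5]])
print("2% cut-off reached at group index:", cutoff_group)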
The evolution of the deposition probability versus the number of injected particles (N) for different λ is plotted in Figure 8.4. For comparison, the variation of the deposition probability in a homogeneous capillary is also plotted in Figure 8.4. The Péclet number for both the heterogeneous and homogeneous capillaries is 0.0015, and in this part θ was set to 0.5 for the heterogeneous cases.
In all cases, the deposition probability exhibits a plateau at the early stage of the injection process, because the exclusion area at low Pe is small and the pore-wall surface is largely available for the injected particles to deposit on. When the number of deposited particles reaches a critical value, the deposition probability drops sharply to rather low values, indicating that any newly injected particle has much less chance to deposit [Lopez]. This phenomenon is similar to the RSA process, in which the deposition kinetics becomes slow as the jamming limit is approached [Talbot]. Furthermore, for all λ, the overall deposition probability tends to decrease with N, and when N exceeds 10000, all values of the deposition probability are below 2% with only minor variations afterwards, which suggests that the deposition process is almost over. Therefore, 2% was selected as the cut-off value for the injection process in this work.
More importantly, an obvious observation from Figure 8.4 is that the deposition probability tends to increase with λ. In this work, a larger λ means a higher frequency of the pitches, i.e., a smaller pitch width and therefore a larger number of favorable strips over the given length of the capillary. This result demonstrates that, although the fraction of the surface area favorable for deposition is the same (θ=0.5), a higher frequency of the pitches is conducive to deposition, as a particle can access a larger number of favorable sections along a given length of the capillary. As a limiting case, the capillary with λ=1 is half favorable and half unfavorable along the entire pipe length. In such a situation, only one half of the capillary allows particles to deposit on the wall, and there is a greater probability that a particle is transported over the favorable section without depositing. In contrast, if the frequency of the pitches is higher, there is a greater possibility of the particle being captured anywhere (on any one of the favorable strips) along the length of the capillary [Chatterjee]. Consequently, the deposition probability depends on the frequency of the pitches, and therefore on the distribution of the surface charge.

Generally, for a homogeneous capillary, the surface coverage (Γ) is defined as the ratio of the total projection area of the deposited particles to the total initial surface area of the pore before deposition. The dimensionless surface coverage Γ/Γ_RSA is normally used, where Γ_RSA is taken as 0.546 [Talbot], the value corresponding to the pure diffusion regime on a flat surface in the Random Sequential Adsorption (RSA) model. For the chemically heterogeneous capillary, the dimensionless surface coverage therefore needs to be defined in a specific manner. Here, two methods to calculate Γ/Γ_RSA are defined, as depicted in Figure 8.5a, taking the case of λ=5 and θ=0.5 as an example and comparing it with the homogeneous capillary at the same Péclet number of 0.0015. For Method 1, Γ/Γ_RSA is calculated by:

Γ/Γ_RSA = (n · s_particle) / (s_attractive · Γ_RSA)    (8-4)
where n is the number of deposited particles, s_particle is the projection area of one deposited particle (colored blue in Figure 8.5a), and s_attractive is the total surface area of the attractive regions (colored dark green in Figure 8.5a); for θ=0.5, s_attractive is equal to half of the total surface area of the capillary.
In addition, deposited particles are counted according to the z-axis coordinate of their centers, and the average density of deposited particles at the leading and trailing edges of the strips is relatively higher, leading to "extended zones" around the attractive regions (colored light green in Figure 8.5a). For Method 2, Γ/Γ_RSA is therefore calculated by:

Γ/Γ_RSA = (n · s_particle) / [(s_attractive + s_extended) · Γ_RSA]    (8-5)

where s_extended is the extended surface area of each attractive region; the length of the extended band is equal to the radius of the particle. The Γ/Γ_RSA versus N curves calculated by Methods 1 and 2 are plotted in Figure 8.5a, and the result for the homogeneous capillary is also plotted as the dotted line in Figure 8.5a.
It can be observed that the curve calculated by Method 1 is close to the homogeneous one, which indicates that this definition is more suitable; hence, the dimensionless surface coverage is calculated by Method 1 in the remainder of this chapter.
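A minimal sketch of the Method 1 calculation of equation (8-4) is given below; the capillary dimensions and the number of deposited particles are assumed, illustrative values.

import math

GAMMA_RSA = 0.546  # RSA jamming limit for a flat surface under pure diffusion

def coverage_method1(n_deposited, a_p, R, L, theta):
    """Dimensionless surface coverage, equation (8-4): projected area of the
    deposited particles divided by the attractive wall area, normalized by the RSA limit."""
    s_particle   = math.pi * a_p ** 2              # projection area of one deposited particle
    s_attractive = theta * 2.0 * math.pi * R * L   # favorable fraction of the capillary wall
    return n_deposited * s_particle / (s_attractive * GAMMA_RSA)

# illustrative (assumed) values: 300 deposited particles in a capillary with theta = 0.5
print(coverage_method1(n_deposited=300, a_p=0.2e-6, R=2.0e-6, L=1.5e-5, theta=0.5))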
In order to explore the influence of λ on the dimensionless surface coverage, the variation of Γ/Γ_RSA versus the number of injected particles (N) at different λ is plotted in Figure 8.5b.

Furthermore, the dependence of Γ/Γ_RSA on Pe is important for understanding the influence of the flow field on the particle deposition behavior in the heterogeneous capillary. For this purpose, seven simulations with different Pe (ranging from 0.0015 to 1500) were carried out to investigate the influence of Pe on the surface coverage. For lower Pe (Pe<<1), the particle movement is mainly dominated by diffusion, while for higher Pe (Pe>>1), the particle transport is mainly governed by convection. In this part, λ=5 and θ=0.5; the variation of Γ/Γ_RSA versus N at various Pe is plotted in Figure 8.6a. It can be seen that Γ/Γ_RSA increases with N, while it decreases with Pe. At Pe = 0.0015 (diffusion-dominant regime), Γ/Γ_RSA reaches a value close to 0.92, while at Pe = 1500 (convection-dominant regime), Γ/Γ_RSA approaches a value of 0.09. This trend also coincides with the result obtained for the homogeneous case [Li]. On the other hand, Γ_final/Γ_RSA decreases with the Péclet number in a more pronounced way at high Pe. This behavior is more obvious in Figure 8.6b, where Γ_final/Γ_RSA is plotted as a function of Pe. Both the heterogeneous capillary (solid boxes) and the homogeneous capillary (hollow stars) exhibit similar characteristics. For the diffusion-dominant regime (Pe<1), the surface coverage is close to Γ_RSA and is almost constant, owing to the high deposition probability in this regime. For the convection-dominant regime (Pe>1), Γ_final/Γ_RSA decreases significantly with Pe, since the hydrodynamic shadowing effect is more pronounced at higher Pe.
Indeed, as already mentioned, particles are transported by the fluid over a distance increasing with the convection velocity and are deposited downstream, further away from an already deposited particle, resulting in a lower surface coverage [Veerapen]. In addition, the derived power-law dependence of the surface coverage on the Péclet number, Γ_final/Γ_RSA ∝ Pe^(-1/3), obtained for the convection-dominant regime is depicted in Figure 8.6b and shows good agreement with our simulation data [Lopez; Veerapen].

Influence of the heterogeneity pattern

As shown in Figure 8.7b, the surface coverage of the chess board patterned capillary is always higher than that of the crosswise strips patterned one. As discussed in section 8.3.1, the average density of deposited particles at the leading and trailing edges of the favorable regions is relatively higher. The number of deposited particles therefore increases with the total edge length of the favorable regions, that is, with the area of the "extended zones" around the attractive regions. Under the same simulation conditions, the chess board patterned capillary possesses a larger edge length and thus a larger area of "extended zones", which leads to a larger number of deposited particles, even though the additional edges are parallel to the flow direction. Moreover, Pham et al. [Pham] observed a similar phenomenon when investigating packed beds with four different patterns of surface charge heterogeneity, in which surfaces favorable for particle attachment are placed at different locations.
Their results indicated that the attachment coefficient of the random pattern is higher than that of the strips pattern. This is because, for the chess board patterned capillary with a more uniformly distributed active surface area, the particles have more time and a higher opportunity to move towards and collide with the active collector surface as they propagate through the porous domain.

Influence of the favorable area fraction

In order to investigate the influence of the favorable surface ratio, θ, on particle deposition, nine groups of numerical simulations with θ ranging from 20% to 100% were carried out; 3D sectional views of the heterogeneous capillary with deposited particles are provided in Figure 8.8a.
According to the simulation results, the number of deposited particles (n) versus N for different θ was recorded and plotted as nine curves, as illustrated in Figure 8.8b. It can be clearly observed that n increases with N and θ. This result is in line with the patchwise heterogeneity model, which expresses the overall deposition efficiency as a linear combination of the favorable and unfavorable deposition efficiencies. For a chemically heterogeneous porous medium containing a random distribution of favorable and unfavorable regions, the two-site patch model gives the effective collector efficiency as [Nazemifard; Chen]:
η_effective = θ η_f + (1 − θ) η_u    (8-6)
where η_effective is the overall deposition efficiency, and η_f and η_u are the deposition efficiencies for the favorable and unfavorable regions, respectively. Since the particle can only adsorb if the projection of its center lies within the favorable bands, and no particles can deposit on the unfavorable bands (η_u = 0), equation (8-6) simplifies to [Nazemifard]:
η_effective = θ η_f    (8-7)
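As a quick numerical illustration of equation (8-7), the sketch below compares simulated efficiency ratios with the patch-model prediction η_effective/η_f = θ; the efficiency values used here are assumed for illustration and are not results from this work.

def patch_model_check(theta, eta_effective, eta_f):
    """Compare the simulated ratio eta_effective/eta_f with the patch-model
    prediction of equation (8-7), which is simply theta."""
    return eta_effective / eta_f, theta

# assumed illustrative data: favorable-surface efficiency and overall efficiencies at N = 10000
eta_f = 0.25
simulated = [(0.2, 0.052), (0.5, 0.124), (0.8, 0.198), (1.0, 0.250)]
for theta, eta_eff in simulated:
    ratio, predicted = patch_model_check(theta, eta_eff, eta_f)
    print(f"theta = {theta:.1f}   simulated eta_eff/eta_f = {ratio:.2f}   patch model = {predicted:.2f}")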
For comparison, the values of η_effective/η_f predicted by the patch model (the dashed diagonal line) and those obtained from our simulations (the solid boxes) are plotted in Figure 8.8c. η_effective is calculated as the overall deposition efficiency for N=10000, and η_f is the deposition efficiency for a completely favorable surface [Nazemifard; Chatterjee]. The patch model provides a remarkably accurate prediction of the deposition efficiency for surfaces with macroscopic charge heterogeneity [Kemps; Buret]. In order to characterize the degree of correlation between the simulation results and the patch model, the simulation data were fitted with a straight line, shown as the red line in Figure 8.8c; the correlation coefficient of the data points reaches 0.997, in close agreement with the dashed diagonal line calculated by the patchwise heterogeneity model. Overall, the collector deposition efficiency agrees well with the predictions of the patch model, even when the heterogeneous bands are comparable to the particle dimensions.

Chapter 9 Conclusions and Perspectives

Polymeric porous media with an original porous structure, high porosity, low density, and good chemical stability have proved to be promising materials in a wide range of application fields.
Despite significant progress in the preparation of polymeric porous media, no single method is clearly better than the others, and each has its advantages and limitations. Currently, the main problem is that the production processes are relatively cumbersome, which clearly decreases the production efficiency and increases the cost. A long-term goal for the preparation of porous polymer materials therefore remains the development of simple and scalable procedures. It is essential to find new methods that can optimize the porous structure without sacrificing the low cost; this is one of the main objectives of this thesis. Multilayer coextrusion is an advanced polymer processing technique capable of economically and efficiently producing multilayer films. In this thesis, a novel strategy was therefore proposed to prepare polymeric porous media with a tunable porous structure via multilayer coextrusion combined with the template method/TIPS, which is a highly efficient pathway for the large-scale fabrication of porous polymer materials. This approach combines the advantages of multilayer coextrusion (continuous process, economic pathway for large-scale fabrication, and tunable layer structures) and of the template/phase separation methods (simple preparation process and tunable pore structure). Applications of the resulting polymeric porous media in PAH adsorption and as lithium-ion battery separators were then investigated. The results indicate that the multilayer PP/PE separators prepared by this new combination exhibit higher porosity and higher electrolyte uptake and retention than commercial separators, which increases the ionic conductivity and consequently improves the battery performance. More importantly, the PP/PE multilayer separator shows an effective thermal shutdown function and a thermal stability up to 160 °C, a wider safety window than that of commercial separators. The porous polystyrene obtained via this new combination possesses an abundant and uniform porous structure and exhibits a much higher adsorption performance for traces of pyrene than the solid membrane. The adsorption kinetics and isotherm of the porous PS membranes are well fitted by a pseudo-second-order kinetic model and the Freundlich isotherm model, respectively. In principle, this novel method is applicable to any melt-processable polymer in addition to PP, PE and PS. Future investigations could therefore address the preparation and application of other kinds of polymeric porous media, such as porous polyvinylidene fluoride, polyethylene oxide or polymethyl methacrylate.
More importantly, the basic mechanism underlying the above applications (PAH adsorption and Li-ion transport through the separator) is particle transport and deposition in porous media.
Beyond these cases, the transport and deposition of colloidal particles in porous media is a frequently encountered phenomenon in a wide range of applications, including aquifer remediation, fouling of surfaces, and therapeutic drug delivery. It is thus essential to have a thorough understanding of such processes and of their dominant mechanisms. Therefore, in the second part of the thesis, microscale simulations of particle transport and deposition in porous media were carried out with a novel colloidal particle tracking code, 3D-PTPO (Three-Dimensional Particle Tracking model by Python® and OpenFOAM®).
Firstly, particle deposition was investigated in a homogeneous capillary in order to revisit the work of Lopez and co-authors [17] with a more realistic 3D geometry, and to validate the basic theoretical laws of particle transport and deposition using the simulation method developed here. From the preliminary results, the following conclusions may be drawn:
The deposition probability decreases with the number of injected particles and with the Péclet number.
At low Péclet numbers, the ultimate surface coverage Γ closely approaches the RSA value, and it drops noticeably at high Pe values.
Both porosity and permeability decrease with the number of deposited particles.
At low Pe, the equivalent hydrodynamic thickness of the deposited layer is lower than the particle diameter, showing the formation of a loose monolayer deposit; this thickness decreases further at higher Pe.
The above results demonstrate that the numerical method used is relevant to describe deposition of colloids in porous media from dilute dispersions.
Secondly, three-dimensional numerical modeling of particle transport and deposition in a porous medium with a more complex geometry was carried out by representing the elementary pore structure as a capillary tube with converging/diverging geometries (tapered pipe and venturi tube). The main conclusions are as follows:
The probability for a particle to be deposited on the pore-wall surface is much higher for the diffusion-dominant regime, and the deposited particle distribution along the pipes is piston-like as for the simple tube model.
The distribution is more uniform for the advection-dominant regime. In particular, for the venturi tube, the density of deposited particles is relatively low in the vicinity of the pore throat entrance and exit due to the particular flow structure therein.
The behavior of the dimensionless final plateau of the maximum surface coverage Γ_final/Γ_RSA as a function of Pe has been analyzed. For low Pe, a plateau is observed for both geometries, and the plateau value and the deposition kinetics are consistent with the RSA theory. For high Pe, the declining trend of Γ_final/Γ_RSA versus Pe is in good agreement with experimental and simulation results found in the literature.
Thirdly, the coupled effect of surface chemical heterogeneity and hydrodynamics on particle transport and deposition in a pore throat, idealized as a three-dimensional capillary with a periodically repeating chemically heterogeneous surface (crosswise strips patterned and chess board patterned), was investigated. The main conclusions are as follows: the coupled effect of heterogeneity and the velocity field can give rise to a complex concentration distribution of deposited particles on the wall; for example, a higher density of deposited particles is observed at the leading and trailing edges of each favorable strip. Besides, the deposition probability increases with the frequency of the pitches.
In addition, the surface coverage remains close to Γ_RSA and features a relatively stable plateau at lower Pe. The declining trend of Γ_final/Γ_RSA versus Pe is again in good agreement with the derived power-law dependence of the surface coverage on Pe at higher Pe. Moreover, the overall deposition probability increases with the favorable area fraction.
The above results are consistent with theoretical predictions, demonstrating that the numerical method used is relevant to describe deposition of colloids in porous media from dilute dispersions.
Future developments of this work will include a larger range of the related parameters on the one hand and more realistic pore geometries on the other hand.
6789 d'adsorption des membranes PS poreuses sur les PAH. La ciné tique d'adsorption associé e et les isothermes du PS poreux sont é galement discuté es. Une sé rie de tests de contrôle ont é té conç us pour explorer les facteurs clé s qui peuvent ré guler et contrôler les structures poreuses. Les ré sultats indiquent que les membranes pré paré es avec la teneur en CaCO 3 de 40% en poids, une é paisseur de 5 um, et gravé es pendant 48 h possè dent la structure optimale des pores ouverts. En outre, l'espace confiné cré é par les LME dans la coextrusion multicouche joue un rôle important dans la dispersion des particules de CaCO 3 et, par consé quent, est bé né fique pour la formation de membranes poreuses homogè nes par attaque acide. Les membranes PS poreuses avec une porosité plus é levé e pré sentent des performances d'adsorption beaucoup plus é levé es sur le pyrè ne dans une solution aqueuse dilué e, par rapport à celle adsorbé e par des membranes PS avec une porosité infé rieure. Ceci est attribué à la structure abondante des pores de la membrane et au principe similaire compatible entre la surface de la membrane et les adsorbats. La ciné tique d'adsorption et l'isotherme des membranes PS poreuses se sont ré vé lé es suivre une ciné tique de pseudo-deuxiè me ordre et un modè le isotherme de Freundlich, respectivement. Le dé pôt de particules colloï dales dans un milieu poreux idé alisé comme un faisceau de capillaires Les processus de transport et de dé pôt de particules dans des milieux poreux pré sentent un grand inté rê t technologique et industriel, car ils sont utiles dans de nombreuses applications d'ingé nierie, notamment la dissé mination des contaminants, la filtration, la sé paration chromatographique et les processus de remé diation. Pour caracté riser ces processus, les simulations numé riques sont devenues de plus en plus attrayantes en raison de la croissance de la capacité informatique et des moyens de calcul offrant une alternative inté ressante, notamment aux expé riences complexes in situ.Fondamentalement, il existe deux types de mé thodes de simulation, à savoir les simulations à l'é chelle macro et les simulations à l'é chelle microscopique (à l'é chelle des pores). Les simulations à l'é chelle macro dé crivent le comportement global du processus de transport et de dé pôt en ré solvant un ensemble d'é quations diffé rentielles qui donne une variation spatiale et temporelle de la concentration des particules dans l'é chantillon poreux sans fournir d'informations sur la nature ou le mé canisme du processus de ré tention. Des simulations numé riques à l'é chelle de pore ré solvent directement l'é quation de Navier-Stokes ou de Stokes pour calculer le flux et les processus de diffusion de particules par un modè le de marche alé atoire par exemple(Lopez et al. (2004) ont ré alisé des simulations numé riques à micro-é chelle de dé pôt de particules colloï dales sur la surface d'une gé omé trie de pore simple constitué e de deux surfaces planes parallè les.Messina et al. (2012) ont utilisé des simulations à micro-é chelle pour estimer l'efficacité de l'attachement des particules. L'objectif de ce travail est de simuler le processus de transport et de dé pôt de particules dans des milieux poreux à l'é chelle microscopique au moyen de simulations CFD afin d'obtenir les quantité s les plus pertinentes en capturant la physique sous-jacente au processus. 
Une fois qu'une particule est adsorbé e, le champ de vitesse d'é coulement est mis à jour avant l'injection d'une nouvelle particule.L'idé e principale, ici, est de revisiter le travail deLopez et al. (2004) en considé rant une gé omé trie 3D plus ré aliste telle qu'un tube capillaire. En effet, leur travail é tait limité à une gé omé trie particuliè re restreinte à une gé omé trie en forme de fente peu susceptible d'ê tre rencontré e dans des milieux poreux ré els.Dans ce chapitre, des simulations numé riques du dé pôt de particules colloï dales sur des pores de gé omé trie simple ont é té ré alisé es en couplant deux logiciels disponibles : OpenFOAM ® permet d'obtenir le champ de vitesse en ré solvant les é quations de Stokes et de continuité et un logiciel dé veloppé dans ce travail utilisant le langage de programmation Python ® et utilisé pour le processus de suivi des particules. Des quantité s importantes telles que la probabilité de dé pôt, le taux de couverture de surface, la porosité et la permé abilité ont é té calculé es pendant les simulations. Les variations de ces quantité s par rapport au nombre de particules injecté es pour diffé rents nombres de Pé clet ont é té examiné es. Les ré sultats pré liminaires ont é té analysé s et les conclusions suivantes peuvent ê tre tiré es. La probabilité de dé pôt diminue avec le nombre de particules injecté es et avec le nombre de Pé clet. Pour les valeurs de nombre de Pé clet faibles, le taux de couverture finale de surface Γ est proche de la valeur RSA, RSA , et diminue sensiblement pour les valeurs Pe é levé es.La porosité et la permé abilité diminuent toutes deux avec le nombre de particules dé posé es. À des nombres de Pe infé rieurs, l'é paisseur hydrodynamique finale de la couche de dé pôt est infé rieure au diamè tre de particule montrant la formation d'un dé pôt monocouche lâ che et elle diminue pour Pe supé rieur. Les ré sultats ci-dessus, bien qu'ils doivent ê tre consolidé s, sont en accord avec les pré dictions thé oriques, dé montrant que la mé thode numé rique utilisé e est pertinente pour dé crire le dé pôt de colloï des dans des milieux poreux à partir de dispersions dilué es. Les dé veloppements futurs de ce travail comprendront d'une part une plus grande gamme de paramè tres associé s et d'autre part des gé omé tries de pores plus ré alistes. Simulation micro-é chelle tridimensionnelle du transport et du dé pôt de particules colloï dales dans des milieux poreux modè les avec des gé omé tries convergentes / divergentes La repré sentation la plus ré aliste d'un milieu poreux est une collection de grains solides, chacun é tant considé ré comme un collecteur. Le processus de dé pôt de particules est donc divisé en deux é tapes : le transport vers le collecteur et le dé pôt dû aux interactions physico-chimiques particules / collecteurs à courte distance. De telles interactions sont repré senté es par une fonction potentielle gé né ralement obtenue à partir de la thé orie DLVO qui comprend les forces de ré pulsion é lectrostatique, de van der Waals et de courte porté e. Si des particules et des collecteurs sont identiquement chargé s et que la concentration en sel est faible, le potentiel d'interaction contient deux minima et une barriè re d'é nergie rendant la condition de dé pôt dé favorable. Lorsque la concentration en sel est trè s é levé e ou lorsque les particules et les collecteurs sont de charges opposé es, le potentiel est purement attractif, conduisant à un dé pôt favorable. 
Dans les procé dé s de filtration, on suppose habituellement que le milieu poreux est composé d'é lé ments unitaires contenant chacun un nombre donné de cellules unitaires dont la forme est cylindrique avec des sections transversales constantes ou variables. Chang et al. (2003) ont utilisé la simulation dynamique brownienne pour é tudier le dé pôt de particules browniennes dans des modè les de tubes avec des constrictions paraboliques, hyperboliques et sinusoï daux. Là encore, l'é quation de Langevin avec des interactions hydrodynamiques particules/parois corrigé es et l'interaction DLVO ont é té ré solues pour obtenir des trajectoires de particules. Par consé quent, l'efficacité du collecteur unique qui dé crit le taux de dé pôt initial a é té é valué e pour chaque gé omé trie à divers nombres de Reynolds. Une gé omé trie de milieux poreux plus ré aliste pour l'é tude du transport et du dé pôt est un lit de collecteurs tassé s d'une forme donné e. Boccardo et al. (2014) ont é tudié numé riquement le dé pôt de particules colloï dales dans des conditions favorables dans des milieux poreux 2D composé s de grains de formes ré guliè res et irré guliè res. À cette fin, ils ont ré solu les é quations de Navier-Stokes avec l'é quation d'advection-dispersion. Ensuite, et mê me si les trajectoires de particules ne peuvent ê tre spé cifié es, il a é té possible de dé terminer comment les grains voisins influencent mutuellement leurs taux de collecte. Ils montrent que l'efficacité de l'attachement brownien s'é carte sensiblement du cas du collecteur simple. De mê me Coutelieris et al. (2003) ont é tudié le flux et le dé pôt dans un assemblage de grains sphé riques 3D construit stochastiquement en se concentrant sur la dé pendance de l'efficacité de capture aux nombres de Pé clet faibles à modé ré s et ont trouvé que le modè le bien connu de sphè re-cellule reste applicable à condition que les bonnes proprié té s du milieux poreux soient prises en compte. Né anmoins, les porosité s scanné es é taient trop proches de l'unité pour ê tre repré sentatives des milieux poreux ré els. Dans leur travail, les approches lagrangiennes sont utilisé es pour suivre le dé placement des particules dans des couches compacté es en 3D, ce qui permet d'é tudier le transport à l'é chelle microscopique et le dé pôt de particules colloï dales. Les techniques de Lattice Boltzmann sont utilisé es pour calculer les forces hydrodynamiques et browniennes agissant sur des particules mobiles avec é valuation locale du potentiel d'interaction physico-chimique montrant le retard hydrodynamique pour ré duire la ciné tique de dé pôt au minimum secondaire dans des conditions dé favorables et la ré tention des particules dans les tourbillons . Dans la plupart de ces approches de simulation, la taille caracté ristique du domaine d'é coulement est de plusieurs ordres de grandeur supé rieure à la taille des particules, de sorte que le seuil de blocage (rapport entre la taille caracté ristique du domaine d'é coulement et la taille des particules) est suffisamment é levé pour que le domaine d'écoulement reste non affecté par le dé pôt de particules. Pour de nombreux systè mes, cependant, le seuil de blocage peut ê tre faible et le dé pôt de colloï des devrait avoir un impact important sur la structure et la force du flux et donc sur le processus de dé pôt des particules. 
Dans ce chapitre, nous nous concentrerons sur un tel impact et simulerons le dé pôt de colloï des dans des milieux poreux dans des conditions de dé pôt favorables en adoptant l'approche par é lé ments unitaires où la cellule unitaire est un tube avec deux formes convergentes-divergentes : tube conique et tube venturi. Pour é quilibrer la hausse inhé rente du coût de la simulation, nous restreindrons cette é tude à des cas de suspensions colloï dales dilué es où les interactions hydrodynamiques entre particules transporté es sont né gligeables et adopterons une approche simple dé taillé e ci-aprè s et un nouveau code 3D-PTPO dé veloppé dans notre laboratoire. L'idé e est d'approcher le comportement dans des milieux poreux idé alisé s comme un faisceau de capillaires avec des sections transversales variables. L'originalité principale de l'outil est de prendre en compte la modification de l'espace interstitiel et donc le champ d'é coulement lorsque les particules sont dé posé es sur la paroi des pores. Les variations des paramè tres clé s, y compris la probabilité de dé pôt et le taux de couverture de surface adimensionnel Γ/Γ RSA , ainsi que la distribution spatiale dé taillé e de la densité des particules dé posé es ont é té é tudié es. Les principales conclusions tiré es de ce chapitre sont les suivantes : La probabilité qu'une particule se dé pose sur la surface de la paroi des pores est beaucoup plus é levé e lorsque le transport est dominé par la diffusion pour les deux gé omé tries considé ré es dans ce travail. La distribution des particules dé posé es le long des tuyaux est semblable à celle d'un piston pour le ré gime où la diffusion domine, tandis que la distribution est plus uniforme pour le ré gime dominant par advection. De plus, pour toutes les valeurs du nombre de Pé clet considé ré es dans ce travail comprises entre 0,0019 et 1900, le taux de couverture de surface normalisé Γ/Γ RSA en fonction du nombre de particules injecté es (N) pré sente une forte augmentation aux stades pré coces de dé pôt et tend à une valeur de plateau pour des valeurs de N é levé es. Le comportement du plateau final correspondant au taux de couverture de surface normalisé maximal Γ final /Γ RSA en fonction de Pe a é té analysé . Pour un Pe bas, un plateau peut ê tre observé pour les deux gé omé tries, la valeur du plateau et la ciné tique de dé pôt sont en accord avec la thé orie de l'adsorption sé quentielle alé atoire (RSA). Tandis qu'à Pe é levé , les tendances de variation de Γ final /Γ RSA en fonction de Pe sont en bon accord avec les ré sultats expé rimentaux et numé riques obtenus par d'autres é tudes. Enfin, les ré sultats ci-dessus dé montrent que le modè le numé rique pourrait capturer la physique du transport et du dé pôt de particules colloï dales dans des pores de gé omé trie simple et pourrait ê tre utilisé dans des dé veloppements ulté rieurs à savoir le transport dans des gé omé tries plus complexes (cellule unitaire de couche de dé pôt de particules, surfaces à motifs chimiques, etc. Simulation micro-é chelle tridimensionnelle du transport et du dé pôt de particules colloï dales dans des milieux poreux chimiquement hé té rogè nes Les processus de transport et de dé pôt de particules colloï dales (adsorption irré versible) dans des milieux poreux pré sentent un grand inté rê t environnemental et industriel, car ils sont essentiels à de nombreuses applications allant de l'administration de mé dicaments au traitement de l'eau potable. 
Par consé quent, le dé pôt de particules sur des milieux poreux homogè nes a fait l'objet de nombreuses é tudes expé rimentales et thé oriques. Cependant, d'un point de vue pratique, le problè me du dé pôt de particules dans des milieux poreux pré sentant une hé té rogé né ité à l'é chelle des pores est plus pertinent puisque la plupart des milieux poreux naturels ou artificiels sont physiquement et/ou chimiquement hé té rogè nes. Lorsque les particules en é coulement approchent de telles surfaces à motifs hé té rogè nes, elles pré sentent divers comportements de dé pôt en fonction de la nature, de l'ampleur et de la forme de ces hé té rogé né ité s. Ainsi, le transport et le dé pôt de particules dans des milieux poreux hé té rogè nes ont fait l'objet d'é tudes et une quantité significative de recherches pertinentes ont é té effectué es. Parmi les diffé rentes gé omé tries de milieux poreux, le transport de particules dans les capillaires/microcanaux est au coeur de nombreux systèmes microfluidiques et nanofluidiques. De plus, les milieux poreux sont gé né ralement considé ré s comme un faisceau de capillaires/microcanaux et lorsque le transport et le dé pôt de particules ont é té simulé s dans un capillaire/microcanal, le processus dans le milieu poreux entier peut ê tre pré dit en imposant des conditions aux limites approprié es entre capillaires/microcanaux. À notre connaissance, Chatterjee et al. (2011, 2012) ont é tudié le transport de particules dans des microcanaux cylindriques hé té rogè nes. L'hé té rogé né ité de surface est modé lisé e comme des bandes alterné es de ré gions attractives et ré pulsives sur la paroi du canal pour faciliter l'é valuation systé matique de type continuum. Cette é tude fournit une analyse thé orique complè te de la faç on dont le transport de ces particules en suspension est affecté dans ces microcanaux en raison des hé té rogé né ité s sur leurs parois. Cependant, cette simulation é tait bidimensionnelle et pouvait donc difficilement reflé ter les vrais milieux poreux.Par consé quent, des simulations 3D doivent ê tre effectué es à nouveau pour dé velopper une image complè te du transport de particules dans un capillaire. De plus, il est important d'é tudier l'effet couplé de l'hydrodynamique et de l'hé té rogé né ité chimique sur le dé pôt de particules. Par ailleurs, il y a peu de travaux rapporté s où l'on s'inté resse aux effets combiné s de l'hé té rogé né ité de surface et de l'hydrodynamique sur les dé pôts de particules colloï dales dans les capillaires/microcanaux tridimensionnels, ce qui est é galement l'un des objectifs de ce chapitre. Le code 3D-PTPO a é té utilisé pour é tudier l'influence des hé té rogé né ité s de surface et de l'hydrodynamique sur le dé pôt de particules. Le capillaire axisymé trique tridimensionnel avec une surface avec des hé té rogé né ité s chimiques à ré pé tition pé riodique (à motifs de type bandes transversales et é chiquier) de charges de surface positives et né gatives (autrement dit ré gions attractives et ré pulsives) est utilisé pour le modè le d'hé té rogé né ité . 
Les principales conclusions peuvent ê tre tiré es comme suit : Premiè rement, l'effet couplé de l'hé té rogé né ité de charge et du champ de vitesse tridimensionnel peut faire ressortir une distribution de concentration complexe de particules dé posé es sur la paroi, conduisant à une densité plus é levé e de particules dé posé es sur les parois des pores aux bords de chaque bande favorable, et le dé pôt est plus uniforme le long du capillaire à motifs par rapport à capillaire homogè ne. Deuxiè mement, la probabilité de dé pôt est en ligne avec la pé riode des hé té rogé né ité s.Avec le même rapport de surface favorable, θ, une longueur de période d'hétérogénéité plus petite se traduira par une probabilité de dé pôt plus é levé e et par consé quent un taux de couverture de surface normalisé supé rieur. De plus, pour le ré gime où la diffusion domine à Pe faible, le taux de couverturede surface est proche de Γ RSA et pré sente un plateau relativement stable. Pour le ré gime à convection dominante à Pe é levé , la dé croissance de Γ/Γ RSA par rapport à Pe a une tendance qui est en bon accord avec la dé pendance en loi de puissance observé e par ailleurs. Enfin, la probabilité globale de dé pôt augmente avec la fraction de surface favorable. Cette é tude donne un aperç u de la conception de milieux poreux artificiellement hé té rogè nes pour la capture de particules dans diverses applications d'ingé nierie et biomé dicales, y compris la pharmacothé rapie ciblé e. De plus, le modè le peut ê tre encore amé lioré en incorporant des profils d'é coulement de fluide plus ré alistes et des motifs hé té rogè nes plus alé atoires. Conclusions et perspectives Les milieux poreux polymé riques avec une structure poreuse originale, une porosité é levé e, une faible densité et une bonne stabilité chimique sont des maté riaux prometteurs dans une large gamme de domaines d'application. Malgré des progrè s significatifs ré alisé s dans la pré paration des milieux poreux polymé riques, aucune mé thode particuliè re n'est meilleure que d'autres et chaque mé thode a ses avantages et ses limites. Actuellement, le principal problè me est que les procé dé s de production de ces mé thodes sont relativement encombrants et diminuent nettement l'efficacité de la production et augmentent le coût. Ainsi, un objectif à long terme pour la pré paration de maté riaux polymè res poreux est toujours le dé veloppement de procé dures simples et é volutives. Il est donc essentiel de trouver de nouvelles mé thodes permettant d'optimiser la structure poreuse sans sacrifier le faible coût. C'est l'un des principaux objectifs de cette thè se. La coextrusion multicouche repré sente une technique de traitement de polymè re avancé e, qui est capable de produire é conomiquement et efficacement des films de multicouches. Ainsi, dans cette thè se, une nouvelle straté gie est proposé e pour pré parer des milieux poreux polymè res avec une structure poreuse accordable par coextrusion multicouche combiné e avec la mé thode de modè le/TIPS, qui est une voie hautement efficace pour la fabrication à grande é chelle de maté riaux polymè res poreux. Cette approche combine les avantages de la coextrusion multicouche (processus continu, voie é conomique pour la fabrication à grande é chelle et structures de couche accordables) et la mé thode modè le / mé thode de sé paration de phase (processus de pré paration simple et structure de pore accordable). 
Par la suite, des applications des milieux poreux polymè res dans l'adsorption des PAH et le sé parateur de batteries au lithium-ion ont é té é tudié es. Les ré sultats obtenus indiquent que les sé parateurs PP/PE multicouches pré paré s par cette nouvelle combinaison pré sentent une porosité plus é levé e, une absorption et une ré tention d'é lectrolyte plus é levé es que les sé parateurs commerciaux. Cela va certainement augmenter la conductivité ionique, et par consé quent amé liorer les performances de la batterie. Plus important encore, le sé parateur multicouche PP/PE pré sente une fonction d'arrê t thermique efficace et une stabilité thermique jusqu'à 160°C , plus large que les sé parateurs commerciaux. Le polystyrè ne poreux obtenu par cette nouvelle combinaison possè de une structure poreuse abondante et uniforme qui pré sente des performances d'adsorption beaucoup plus é levé es de traces de pyrè ne que le milieu solide. La ciné tique d'adsorption et l'isotherme des membranes PS poreuses peuvent ê tre bien ajusté es par une ciné tique de pseudo-deuxiè me ordre et un modè le isotherme de Freundlich. En principe, cette nouvelle mé thode est applicable à tout polymè re pouvant ê tre traité à l'é tat fondu en plus de PP, PE et PS. Ainsi, à l'avenir, des recherches pourraient ê tre mené es sur la pré paration et l'application d'autres types de milieux poreux polymè res tels que le polyfluorure de vinylidè ne poreux, l'oxyde de polyé thylè ne, le polymé thacrylate de mé thyle, etc.Plus important encore, le mé canisme de base des procé dé s d'application ci-dessus (adsorption de PAH ou transport de Li-ion à travers un sé parateur) est basé sur le transport et le dé pôt de particules dans des milieux poreux. En outre, le transport et le dé pôt de particules colloï dales dans des milieux poreux est un phé nomè ne fré quemment rencontré dans un large é ventail d'applications en plus des cas ci-dessus, y compris l'assainissement des aquifè res, l'encrassement des surfaces et l'administration de mé dicaments. Il est donc essentiel de bien comprendre ces processus ainsi que les mé canismes dominants. Par consé quent, dans la seconde partie de la thè se, la simulation microscopique du transport et du dé pôt de particules en milieux poreux a é té ré alisé e par un nouveau modè le de suivi des particules colloï dales, appelé 3D-PTPO (Modè le tridimensionnel de suivi des particules par Python ® et OpenFOAM ® ). Tout d'abord, le dé pôt de particules est é tudié sur un capillaire homogè ne afin de revisiter le travail de Lopez et ses co-auteurs (Lopez et al., 2004) en considé rant une gé omé trie 3D plus ré aliste, ainsi que de valider les lois de base thé oriques du transport et du dé pôt de particules utilisé es dans la mé thode dé veloppé e ici. Les ré sultats pré liminaires ont é té analysé s et quelques conclusions peuvent ê tre tiré es : La probabilité de dé pôt diminue avec le nombre de particules injecté es et avec le nombre de Pé clet. Pour les valeurs de nombre de Pé clet faibles, le taux de couverture ultime de la surface Γ est proche de la valeur RSA et diminue sensiblement pour les valeurs de Pe é levé es. La porosité et la permé abilité diminuent avec le nombre de particules dé posé es. À de faibles nombres de Pe, l'é paisseur hydrodynamique é quivalente de la couche de dé pôt est infé rieure au diamè tre de particule montrant la formation d'un dé pôt monocouche lâ che avant de diminuer pour des valeurs plus é levé es de Pe . 
Les ré sultats ci-dessus dé montrent que la mé thode numé rique utilisé e est pertinente pour dé crire le dé pôt de colloï des dans des milieux poreux à partir de dispersions dilué es. D'autre part, la modé lisation numé rique tridimensionnelle du procé dé de transport de particules et le dé pôt sur un support poreux avec une gé omé trie plus complexe est ré alisé e en repré sentant la structure é lé mentaire de pores sous forme de tube capillaire avec convergente/divergente ou ré tré cissement/é largissement de section (tube conique et le tube venturi). Les principales conclusions peuvent ê tre tiré es de la maniè re suivante: La probabilité pour une particule à ê tre dé posé e sur la surface de la paroi des pores est beaucoup plus é levé e pour le ré gime de diffusion dominante, et la distribution des particules dé posé es le long des tubes est en forme de piston uniquement pour le modè le simple de capillaire à section constante. La distribution est plus uniforme pour le ré gime dominé par advection. En particulier, pour le tube de venturi, la densité des particules dé posé es est relativement faible au voisinage de l'entré e et de la sortie de la gorge du pore en raison de la structure particuliè re de l'é coulement à l'inté rieur des tubes. Le comportement du plateau final du taux de couverture de surface normalisé maximal Γ final /Γ RSA en fonction de Pe a é té analysé . Pour un Pe bas, un plateau peut ê tre observé pour les deux gé omé tries considé ré es et la valeur du plateau et la ciné tique de dé pôt est cohé rente avec la thé orie RSA. Pour un Pe é levé , les tendances de la variation de Γ final /Γ RSA versus Pe sont en bon accord avec les ré sultats expé rimentaux et ceux de simulations trouvé s dans la litté rature. Troisiè mement, l'effet couplé de l'hé té rogé né ité chimique de surface et de l'hydrodynamique sur le transport et le dé pôt de particules dans un seuil de pore idé alisé en tant que capillaire tridimensionnel avec une surface chimiquement hé té rogè ne à ré pé tition pé riodique (bandes transversales et é chiquier) a é té é tudié . Les principales conclusions peuvent ê tre tiré es comme suit : l'effet couplé de l'hé té rogé né ité et du champ de vitesse peut faire ressortir une distribution de concentration complexe des particules dé posé es sur la paroi. Par exemple, une densité plus é levé e de particules dé posé es est observé e aux bords avant et arriè re de chaque bande favorable. De plus, la probabilité de dé pôt est en ligne avec la pé riode des hé té rogé né ité s.. En outre, le taux de couverture de surface est toujours proche de la Γ RSA et pré sente un plateau relativement stable à Pe infé rieur. La tendance à la baisse de Γ final / Γ RSA par rapport à Pe est de nouveau en bon accord avec la dé pendance en loi de puissance obtenue à des valeurs de Pe forte. De plus, la probabilité globale de dé pôt augmente avec la fraction de surface favorable. Les ré sultats ci-dessus sont en accord avec les pré dictions thé oriques, dé montrant que la mé thode numé rique utilisé e est pertinente pour dé crire le dé pôt de colloï des dans des milieux poreux à partir de dispersions dilué es. Les dé veloppements futurs de ce travail comprendront d'une part une plus grande gamme de paramè tres associé s et d'autre part des gé omé tries de pores plus ré alistes. 
Conception, Fabrication et Application de Milieux Poreux Polymériques

RÉSUMÉ: Microlayer coextrusion is a method for producing, efficiently and continuously, polymers with alternating layer structures, with the advantages of high throughput and low cost. Accordingly, starting from the structural requirements imposed on the porous media (PM) by the targeted application, this thesis designs PM with a specific structure and creatively combines microlayer coextrusion with traditional PM preparation methods (template method, phase separation method). By combining the advantages of both routes, PM with an ideal pore structure can be prepared in large quantities, and their application in lithium-ion battery separators and in the adsorption of polycyclic aromatic hydrocarbons is explored. Most importantly, in the second part of this thesis, microscale numerical simulation is used to study particle transport and deposition in porous media, in order to explore the mechanisms underlying the use of porous materials in the fields of adsorption and battery separators. The 3D-PTPO code (a three-dimensional particle tracking model combining Python® and OpenFOAM®) is used to study the transport and deposition of colloidal particles in porous media; three pore models (straight column, venturi and tapered tube) are adopted to represent different shapes of porous materials. Particles are treated as material points during transport; once a particle deposits, its volume is reconstructed and becomes part of the porous-medium surface. The main feature of this code is that it accounts for the influence of the volume of the deposited particles on the pore structure, the streamlines and the deposition of subsequent particles. Numerical simulations are first carried out in simple capillaries. Subsequently, simulations are performed in converging-diverging capillaries to study the effect of the pore structure and of the Péclet number on particle deposition. Finally, the coupled effect of surface heterogeneity and hydrodynamics on particle deposition behavior is investigated.

Mots clés: milieux poreux polymériques, coextrusion multicouche, séparateur de batterie au lithium-ion, adsorption de PAH, simulation à l'échelle microscopique, 3D-PTPO (modèle tridimensionnel de suivi de particules par Python® et OpenFOAM®), transport et dépôt de particule

Design, Fabrication and Application of Polymeric Porous Media

ABSTRACT: In the first part of the thesis, polymeric porous media are first designed based on the specific application requirements. The designed polymeric porous media are then prepared by combining multilayer coextrusion with traditional preparation methods (template technique, phase separation method). This combined preparation route integrates the advantages of multilayer coextrusion and of the template/phase separation methods. Afterwards, the applications of the polymeric porous media in polycyclic aromatic hydrocarbon adsorption and as lithium-ion battery separators are investigated.
More importantly, in the second part of the thesis, numerical simulations of particle transport and deposition in porous media are carried out to explore the mechanisms that form the theoretical basis for the above applications (adsorption, separation, etc.). In this part, the microscale simulations of colloidal particle transport and deposition in porous media are achieved by a novel colloidal particle tracking model, called 3D-PTPO (Three-Dimensional Particle Tracking model by Python® and OpenFOAM®) code. Numerical simulations were firstly carried out in a capillary tube considered as an element of an idealized porous medium composed of capillaries of circular cross sections. Then microscale simulation is approached by representing the elementary pore structure as a capillary tube with converging/diverging geometries (tapered pipe and venturi tube) to explore the influence of the pore geometry and the particle Péclet number (Pe) on particle deposition. Finally, the coupled effects of surface chemical heterogeneity and hydrodynamics on particle deposition in porous media were investigated in a three-dimensional capillary with periodically repeating chemically heterogeneous surfaces.

Keywords: polymeric porous media, multilayer coextrusion, lithium-ion battery separator, PAH adsorption, microscale simulation, 3D-PTPO (three-dimensional particle tracking model by Python® and OpenFOAM®), particle transport and deposition
3.2 Experimental section

3.2.1 Materials

Polypropylene (PP, V30G) and polyethylene (PE, Q210) resins (injection and extrusion grade) were purchased from Sinopec Shanghai Petrochemical Co. CaCO3 particles (diameter 0.1 μm) were purchased from Changxing Huayang Sujia Co. Hydrochloric acid (AR, 36.0-38.0%) was purchased from Sinopharm Chem. Reagent Co. Isopropyl dioleic (dioctylphosphate) titanate (TMC 101) was obtained from Tianchang Green Chem. Additive Factory. Cationic dyeing agent (turquoise blue X-GB) was purchased from Guangzhou Rongqing Chem. Products Co. The commercialized PP/PE/PP separator (Celgard® 2325) and commercialized PE separator (SK Energy) were selected as the control samples. Prior to processing, PP, PE, and CaCO3 particles were dried separately at 70 °C for 48 h in a vacuum oven to remove any humidity that may have been adsorbed during storage.

3.2.2 Preparation of multilayer porous PP/PE membranes

The multilayer porous PP/PE membranes were prepared by combining multilayer coextrusion with the CaCO3 template method. The preparation steps are as follows. (1) Preparation of PP(CaCO3) and PE(CaCO3) masterbatches. Prior to multilayer coextrusion, CaCO3 particles were pre-dispersed in the PP or PE resin with TMC 101 as the dispersing agent. CaCO3 particles and PP (or PE) resins with a mass ratio of 66:34 were mixed and fed into the twin-screw extruder to prepare the PP(CaCO3) or PE(CaCO3) masterbatches. (2) Preparation of porous multilayer PP/PE membranes.
Table 3.1 Physical properties of MC-CTM PP/PE and Celgard® 2325

Sample           Composition    Thickness [μm]   Porosity a) [%]   EU b) [%]   ER c) [%]   σ d) [mS cm⁻¹]
MC-CTM PP/PE     PP/PE/PP/PE    25               46.8              148         137         1.35
Celgard® 2325    PP/PE/PP       25               36.7              114         90          1.04

a) Porosity: calculated as the volume of absorbed n-butanol over the volume of the dry membrane; b) Electrolyte uptake; c) Electrolyte retention after 48 h; d) Ionic conductivity at room temperature.

Figure 3.3 Stress-strain curve of MC-CTM PP/PE at room temperature
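The porosity and electrolyte uptake values in Table 3.1 follow from simple gravimetric definitions (porosity from n-butanol uptake, uptake from electrolyte soaking), using the membrane weights W, W0, W_b, W_a and the geometric volume V0 defined in the Nomenclature. The sketch below only illustrates these definitions; the membrane dimensions and weights are hypothetical, not measured values from this work.

```python
# Gravimetric definitions used for membrane characterization (illustrative values).
# Porosity (%) = (W - W0) / (rho_butanol * V0) * 100, with W and W0 the weights of the
# n-butanol-soaked and dry membrane, rho_butanol the density of n-butanol and V0 the
# geometric volume of the membrane.
# Electrolyte uptake (%) = (W_a - W_b) / W_b * 100, with W_b and W_a the weights
# before and after soaking in the liquid electrolyte.

def porosity(W, W0, rho_butanol, V0):
    return (W - W0) / (rho_butanol * V0) * 100.0

def electrolyte_uptake(W_b, W_a):
    return (W_a - W_b) / W_b * 100.0

# Hypothetical example: a 2 cm x 2 cm x 25 um membrane (V0 in cm^3), weights in g.
V0 = 2.0 * 2.0 * 25e-4          # cm^3
rho_butanol = 0.81              # g cm^-3, density of n-butanol
print(porosity(W=0.0124, W0=0.0091, rho_butanol=rho_butanol, V0=V0))   # porosity in %
print(electrolyte_uptake(W_b=0.0091, W_a=0.0225))                      # uptake in %
```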
Figure 3.4 Arrhenius plots of ionic conductivity (log σ, S cm⁻¹, versus 1000/T, K⁻¹) of MC-CTM PP/PE and Celgard® separator at elevated temperatures (from 30 to 80 °C)
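Temperature-dependent conductivities of this kind are commonly fitted to the Arrhenius law σ = σ0·exp(−Ea/RT), with σ0 the pre-exponential factor and Ea the activation energy (both symbols appear in the Nomenclature). The sketch below shows such a fit on made-up conductivity data; it illustrates the fitting procedure only and is not the data behind Figure 3.4.

```python
import numpy as np

# Arrhenius fit sigma = sigma0 * exp(-Ea / (R * T)): a linear regression of
# ln(sigma) against 1/T yields Ea from the slope and sigma0 from the intercept.
R = 8.314  # J mol^-1 K^-1

T = np.array([303.15, 313.15, 323.15, 333.15, 343.15, 353.15])            # K (30-80 C)
sigma = np.array([1.05e-3, 1.18e-3, 1.31e-3, 1.45e-3, 1.59e-3, 1.74e-3])  # S/cm, made-up

slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * R / 1000.0          # kJ mol^-1
sigma0 = np.exp(intercept)        # S/cm
print(f"Ea = {Ea:.1f} kJ/mol, sigma0 = {sigma0:.2e} S/cm")
```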
4.2 Experimental section

4.2.1 Materials

Polypropylene (V30G), polyethylene (Q210), and paraffin wax (66#) were purchased from Sinopec Shanghai Petrochemical Co. Petroleum ether (AR) was purchased from Sinopharm Chem. Reagent Co. The commercialized PP/PE/PP trilayer separator (Celgard® 2325) and commercialized PE separator (SK Energy) were provided by Shenzhen Yuanchenghui Electronic Co. Prior to the preparation, PP, PE, and paraffin wax were dried at 50 °C for 24 h in a vacuum oven to remove any humidity that may have been adsorbed during storage.
Table 4.1 Physical properties of MC-TIPS PP/PE and Celgard® 2325

Sample           Layer configuration   Thickness [μm]   ε a) [%]   EU b) [%]   ER c) [%]   σ d) [mS cm⁻¹]
Celgard® 2325    PP/PE/PP              25               36.7       115         90          1.05
MC-TIPS PP/PE    PP/PE/PP/PE           25               54.6       157         141         1.46

a) Porosity; b) Electrolyte uptake; c) Electrolyte retention after 48 h; d) Ionic conductivity at room temperature.
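The room-temperature ionic conductivities in Tables 3.1 and 4.1 are typically obtained from electrochemical impedance spectroscopy (EIS) through σ = D/(R_b·A), where D is the separator thickness, R_b the bulk impedance and A the contact area between the separator and the electrodes (symbols as in the Nomenclature). The values below are hypothetical and only illustrate the conversion.

```python
# Ionic conductivity from an EIS measurement: sigma = D / (R_b * A).
# D: separator thickness (cm), R_b: bulk impedance (ohm), A: contact area (cm^2).
def ionic_conductivity_mS_per_cm(D_cm, R_b_ohm, A_cm2):
    return D_cm / (R_b_ohm * A_cm2) * 1000.0   # convert S/cm to mS/cm

# Hypothetical example: 25 um thick membrane, 0.85 ohm bulk impedance, 2.0 cm^2 cell.
print(ionic_conductivity_mS_per_cm(D_cm=25e-4, R_b_ohm=0.85, A_cm2=2.0))
```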
Table 5.1 Regulation and controlling of the porosity of PS membranes

Sample                   1#     2#     3#     4#     5#     6#
Etching time (h)         0      24     48     48     48     48
CaCO3 content (wt%)      20     20     20     40     40     40
Thickness (µm)           20     20     20     20     10     5
Porosity (%)             0.0    4.3    7.2    10.5   12.1   19.4
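The pyrene adsorption behavior of these porous PS membranes is described in the thesis by pseudo-second-order kinetics and a Freundlich isotherm (Q_eq = K_F·C_eq^(1/n)); Q_t, Q_eq, C_eq and K_F are defined in the Nomenclature. The sketch below fits both linearized models to invented data points; the rate constant k2, the data and the units are illustrative assumptions, not results of this work.

```python
import numpy as np

# Pseudo-second-order kinetics: t/Q_t = 1/(k2*Q_eq**2) + t/Q_eq
# (linear in t, so slope = 1/Q_eq and intercept = 1/(k2*Q_eq**2)).
t = np.array([10.0, 30.0, 60.0, 120.0, 240.0, 480.0])          # min, made-up
Q_t = np.array([0.31, 0.62, 0.88, 1.10, 1.25, 1.33])           # mg/g, made-up

slope, intercept = np.polyfit(t, t / Q_t, 1)
Q_eq = 1.0 / slope
k2 = 1.0 / (intercept * Q_eq**2)
print(f"Q_eq = {Q_eq:.2f} mg/g, k2 = {k2:.3e} g mg^-1 min^-1")

# Freundlich isotherm: Q_eq = K_F * C_eq**(1/n), linearized as
# ln(Q_eq) = ln(K_F) + (1/n)*ln(C_eq).
C_eq = np.array([0.02, 0.05, 0.10, 0.20, 0.40])                # mg/L, made-up
Q_iso = np.array([0.45, 0.70, 0.95, 1.30, 1.75])               # mg/g, made-up

inv_n, lnKF = np.polyfit(np.log(C_eq), np.log(Q_iso), 1)
print(f"K_F = {np.exp(lnKF):.2f}, 1/n = {inv_n:.2f}")
```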
Table 6.1 Detailed parameters of the simulation

Parameters                           Values
Particle diameter, a_p (m)           4×10⁻⁷
Pipe length, L (m)                   1.5×10⁻⁵
Pipe radius, R (m)                   4×10⁻⁶
Pressure drop, p (Pa)                from 10⁻⁵ to 1
Péclet number, Pe                    from 0.0015 to 150
Boltzmann constant, k_B (J/K)        1.38×10⁻²³
Absolute temperature, T (K)          293.15
Diffusion coefficient, D (m²/s)      5×10⁻¹¹
Dynamic viscosity, µ (Pa·s)          10⁻³
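A highly simplified, illustrative sketch of the kind of Lagrangian tracking described for the 3D-PTPO approach is given below: a point particle is advected by a Poiseuille velocity profile in a straight capillary, given a Brownian random walk, and counted as deposited at its first wall contact (favorable conditions). It borrows the geometry and diffusion coefficient of Table 6.1, but the centerline velocity and time step are assumed values, and it is not the 3D-PTPO code itself; in particular it ignores the flow-field update after each deposition.

```python
import numpy as np

# Minimal illustration of Lagrangian particle tracking in a straight capillary:
# axial advection by a Poiseuille profile + 3D Brownian random walk, with
# deposition on first wall contact (favorable conditions). Deposited particles
# do not modify the geometry or the flow field in this didactic sketch.
rng = np.random.default_rng(0)

R, L = 4e-6, 1.5e-5          # pipe radius and length (m), Table 6.1
a_p = 2e-7                   # particle radius (m)
D = 5e-11                    # diffusion coefficient (m^2/s), Table 6.1
u_max = 1e-5                 # assumed centerline velocity (m/s), illustrative
dt = 1e-4                    # time step (s), illustrative

def track_one_particle():
    # Inject on the inlet cross-section, uniformly over the accessible area.
    r = (R - a_p) * np.sqrt(rng.random())
    theta = 2 * np.pi * rng.random()
    x, y, z = r * np.cos(theta), r * np.sin(theta), 0.0
    while z < L:
        u_z = u_max * (1.0 - (x**2 + y**2) / R**2)        # Poiseuille profile
        dx, dy, dz = rng.normal(0.0, np.sqrt(2 * D * dt), 3)
        x, y, z = x + dx, y + dy, z + u_z * dt + dz
        if np.hypot(x, y) >= R - a_p:                     # wall contact -> deposition
            return True
    return False                                          # particle left the pore

n_inject = 500
n_deposited = sum(track_one_particle() for _ in range(n_inject))
print(f"deposition probability ~ {n_deposited / n_inject:.2f}")
```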
Table 8.1 Parameters used for simulations
Time flies! It has been almost five years since my first enrollment in the spring of 2013, and every detail flashes back vividly in my mind. I would like to extend my sincere thanks to Prof. Hongting Pu, my Chinese supervisor. Prof. Pu is a talented, frank, and upright professor, and his merits have had a profound impact on me. He is also diligent and rigorous in teaching; he taught me how to select a topic, how to do research, how to write an article, and how to submit it. It has been one of my greatest pleasures to work with him during the past years. I would like to express my heartfelt gratitude to my French supervisor, Prof. Azita Ahmadi, for her international vision, innovative thinking, and meticulous academic style. I miss and appreciate the inspiring seminars, her motivating words and sincere suggestions so much. She is the goddess in my heart; how I wish I could become an elegant and successful lady like her in the future! I would like to give my special thanks to Prof. Aziz Omari, who has brought me a broader view of my research subject. He is intelligent, knowledgeable and warm-hearted. I am also impressed by his cheerful personality, open mind, hard work and sense of humor. He provided so much helpful advice and carefully corrected almost every mistake for me. His guidance has been instrumental in making my PhD experience a fruitful one. Although I studied in France for only 18 months, Prof. Ahmadi and Prof. Omari still invested great effort in helping me with my work. They were always by my side when I encountered troubles, and encouraged me to move on and become a better researcher. I feel so grateful for all the things they have done for me. I will also give my thanks to Otar and Yibiao, who helped me a lot during the simulations. Many thanks to the teachers and students at I2M and ENSAM, who are quite nice, funny and easygoing. Many thanks to Cyril, Anne, Ali, Valerie, Muriel, Henri, Florence, Tingting, Yingying, and Jeremy; the experience and memories here will be the most precious things in my life and I will treasure them forever. Thanks to all the teachers of the FPI at Tongji University for their help and care during my studies: Prof. Wan, Prof. Du, Prof. Jin, Prof. Chang, and Prof. Pan. Thanks to my classmates and dear friends: Feng, Haicun, Junfeng, Min, Danyun, Zhu, Hong, Yue, Yonglian, Daobin, and Haicheng. Finally yet importantly, I sincerely thank my parents for their full support and understanding; family is my strongest backing. I hope that one day I can be as perfect as you are.

December, 2017
Nomenclature

Abbreviation / Symbol    Definition                                                              Units
BFs                breath figures method
ENP                engineered nano particles
L                  density of n-butanol                                                          g cm⁻³
PAHs               polycyclic aromatic hydrocarbons
WHO                World Health Organization
EEA                European Environmental Agency
BTC                breakthrough curves
PM                 porous media
HELP-1D            one-dimensional hybrid Eulerian-Lagrangian particle
V_0                the geometric volume of the membranes                                         cm³
MNM1D              micro- and nanoparticle transport model in porous media
S_0                area of the membrane before thermal treatment                                 cm²
MNM3D              micro- and nanoparticle transport model in 3D geometries
S                  area of the membrane after thermal treatment                                  cm²
3D-PTPO            three-dimensional particle tracking model by Python® and OpenFOAM® code
η                  collection efficiency
RIGID              spherical particle and geometry interaction detection
EIS                electrochemical impedance spectroscopy
D                  diffusion coefficient                                                         m² s⁻¹
LBM                lattice Boltzmann method
SS                 stainless steel electrodes
Pe                 Péclet number
k_B                Boltzmann constant                                                            J K⁻¹
REV                representative elementary volume
σ                  ionic conductivity                                                            mS cm⁻¹
PP                 polypropylene
T                  absolute temperature                                                          K
GB                 glass beads
R_b                bulk impedance                                                                ohm
PE                 polyethylene
a_p                particle radius                                                               m
IS                 ionic strength
D                  thickness of separators                                                       cm
LIB                lithium-ion battery
μ                  dynamic viscosity                                                             Pa·s
E                  Young's modulus
A                  the contact area between the separator and electrodes                         cm²
MC                 multilayer coextrusion
VDW                van der Waals attraction
REA                representative elementary area
LSV                linear sweep voltammetry
CTM                CaCO3 template method
EDL                electrical double-layer interaction
T_A                resisting torque
σ_0                pre-exponential factor
TIPS               thermal induced phase separation
ADS                advection dispersion-sorption
S_f*               colloid immobilization
E_a                activation energy                                                             kJ mol⁻¹
MC-CTM PP/PE       PP/PE separators prepared by multilayer coextrusion and CaCO3 template method
ξ                  the surface-to-surface separation
P+                 the amount of chemical heterogeneity
t+                 lithium ion transference number
d                  the unit vector in the radial direction
MWCNTs             multi-walled carbon nano tubes
LM                 lithium metal
MC-TIPS PP/PE      PP/PE separators prepared by multilayer coextrusion and thermal induced phase separation
I                  the overall rate at which particles strike the collector
QS                 quartz sand
LDPE               low density polyethylene
a_c                the radius of the collector
GQS                goethite coated quartz sand
LME                laminating-multiplying elements
PS                 polystyrene
U                  the approach velocity of the fluid
NP                 nano particles
Q_eq               the amount of pyrene adsorbed at equilibrium state                            mg g⁻¹
RSA                random sequential adsorption
C_0                the number concentration of particles in the fluid approaching the collector
GSI                grid surface integration
Q_t                the amount of pyrene adsorbed at time t                                       mg g⁻¹
Γ/Γ_RSA            dimensionless surface coverage
PDFs               probability density functions
k_1                pseudo-first-order rate constant
λ                  the frequency of the pitches
A_s                Happel's flow parameter
LOC                Lab-On-Chip
K_F                Freundlich constant
θ                  the reactive area fraction                                                    %
s                  adsorbed mass per unit mass of the solid phase
FAA                Federal Aviation Administration
C_eq               the equilibrium concentration of pyrene in the solution                       mg L⁻¹
CFD                computational fluid dynamics
c                  free particle mass concentration
TGA                thermal gravimetric analysis
b_F                constant depicting the adsorption intensity
IUPAC              the International Union of Pure and Applied Chemistry
v_D                Darcy velocity                                                                m s⁻¹
FESEM              field emission scanning electron microscopy
Q_max              the maximum capacity of the adsorbent                                         mg g⁻¹
NIPS               nonsolvent-induced phase separation
ε                  porosity                                                                      %
DSC                differential scanning calorimetry
K_L                the Langmuir adsorption constant                                              L mg⁻¹
VIPS               vapor induced phase separation
ρ_b                dry bulk density of the porous medium                                         kg m⁻³
FTIR               Fourier Transform infrared spectroscopy
EIPS               solvent evaporation-induced phase separation
R_w                the decay or inactivation rates for particles in the liquid phase
EU                 electrolyte uptake                                                            %
PVDF               poly(vinylidene fluoride)
R_s                the decay or inactivation rates for particles in the solid phase
ER                 electrolyte retention                                                         %
CA                 cellulose acetate
k_a                the first order kinetic attachment rates
W                  the weight of n-butanol soaked membrane                                       g
CAB                cellulose acetate butyrate
k_d                the first order kinetic detachment rates
W_0                the weight of dry membrane                                                    g
PAN                polyacrylonitrile
L_s                the hydrodynamic shadowing length
W_b                the weight of membranes before soaking in the electrolyte                     g
HIPE               high internal phase emulsion
CFT                classical filtration theory
W_a                the weight of membranes after soaking in the electrolyte                      g
BCP                block copolymer
GSD                grain size distribution
Le polymè re poreux (PM) associe les avantages des maté riaux poreux et des polymè res, ayant une structure unique de pore, de grande porosité et de densité faible, ce qui lui confè re un potentiel d'application important dans les domaines de l'adsorption, les supports de catalyseur, le sé parateur de batterie, la filtration, etc. Actuellement, il existe plusieurs faç ons de pré parer le PM, comme la mé thode de gabarit, la mé thode de sé paration de phase, la mé thode d'imagerie respiratoire, etc.Chacune des mé thodes ci-dessus a ses propres avantages, mais la pré paration à grande é chelle de PM à structure de pore contrôlable et aux fonctions spé cifiques est toujours un objectif à long terme sur le domaine et constitue l'un des principaux objectifs de ce mé moire. La co-extrusion de microcouches est une mé thode pour produire de faç on efficace et ré pé té e des polymè res avec des structures de couches alterné es, ayant les avantages de haute efficacité et faible coût. Par consé quent, vues les exigences structurelles de PM de l'application spécifique dans cette é tude, le PM est conç u PTPO (un modè le tridimensionnel de suivi des particules combinant Python ® et OpenFOAM ® ) est utilisé pour é tudier le transport et le dé pôt de particules colloï dales dans des milieux poreux. Trois modè les de pores (colonne, venturi et tube conique) pour repré senter diffé rentes formes de maté riaux poreux ont é té considé ré s. Les particules sont considé ré es comme des points maté riels pendant le transport, le volume des particules sera reconstitué dè s que la particule est dé posé e à la surface du maté riau poreux, la caracté ristique principale de ce code est de considé rer l'influence du volume des particules dé posé es sur la structure des pores, les lignes d'é coulement et le processus du dé pôt des autres particules. Les simulations numé riques sont d'abord conduites dans des capillaires simples, le travail deLopez et al. (2004) est ré examiné en é tablissant un modè le gé omé trique tridimensionnel plus ré aliste et il explore les mé canismes gouvernant le transport et le dé pôt. Par la suite, des simulations numé riques sont effectué es dans des capillaires convergents-divergents pour é tudier la structure des pores et l'effet de nombre Peclet sur le dépôt de particules. Enfin, l'on étudie l'effet double de l'hé té rogé né ité de surface et de l'hydrodynamique sur le comportement de dé pôt de particules.Le travail de recherche principal de cette thè se est le suivant : Par la mé thode de MC combiné e à une sé paration de phase induite thermiquement (TIPS), le sé parateur multicouche de batterie au lithium-ion PP / PE (MC-TIPS PP / PE) avec une structure de pores submicroniques de type cellulaire est fabriqué de maniè re efficace et en continue. Le ré sultat montre que cette membrane possè de non seulement une fonction de limitation thermique efficace et une fenê tre de sé curité plus large, mais aussi une stabilité thermique supé rieure à celle des membranes commerciales. À mesure que la tempé rature s'é lè ve à 160°C, le retrait des pores sur le dé pôt des particules. Le ré sultat montre que dans la zone dominé e par la convection, la probabilité de dé pôt et le taux de couverture de surface pré sentent un plateau faible et la distribution des particules dé posé es est uniforme. 
Dans la zone dominé e par la diffusion, la probabilité de dé pôt et le taux de couverture sont fonctions du nombre de Pe, et la distribution des particules sé dimentaires montre une forme de piston à trè s bas Pe. De plus, pour les tubes de venturi, la densité des particules de dé pôt à proximité de l'entré e et de la sortie de la gorge est infé rieure en raison du changement de lignes de courant. Lorsque Pe est é levé , le taux de couverture de surface anormalisé maximal Γ final /Γ RSA est en accord avec les ré sultats numé riques et expé rimentaux de la litté rature avec la tendance à la baisse en Pe -1/3 . 6) On simule dans cette partie le transport et le dé pôt de particules colloï dales dans des milieux poreux de surface hé té rogè ne avec le code 3D-PTPO amé lioré et é tudie l'effet conjoint de Introduction gé né rale et revue de la litté rature Les maté riaux peuvent ê tre divisé s en maté riaux denses et en maté riaux poreux en fonction de leur densité . L'é tude des maté riaux poreux est une branche importante de la science des maté riaux, qui joue un rôle important dans notre recherche scientifique et notre production industrielle. Chaque milieu poreux est typiquement composé d'un squelette solide et d'un espace vide, gé né ralement rempli d'au moins un type de fluide (liquide ou gazeux). Il existe de nombreux exemples de maté riaux naturels (bambou creux, nid d'abeilles et alvé oles dans les poumons) et poreux artificiels (polymè re macroporeux, aluminium poreux et silice poreuse). Le milieu poreux polymé rique est l'un des composants les plus importants des maté riaux poreux organiques, ce qui est l'objet principal de cette thè se. Le milieu poreux polymé rique (polymè re poreux) pré sente les avantages combiné s des maté riaux poreux et des maté riaux polymè res. Il possè de une porosité é levé e, une structure microporeuse abondante et une faible densité . Les diverses mé thodes de pré paration, la structure des pores contrôlable et les proprié té s de surface facilement modifié es font du support poreux polymé rique des maté riaux prometteurs dans un large é ventail de domaines d'application : adsorption, sé parateur de batterie, filtre, stockage d'é nergie, catalyseur et science biomé dicale. Par consé quent, il est trè s inté ressant d'é tudier la nouvelle fonction des milieux poreux polymè res et de dé velopper une nouvelle mé thode de pré paration pour ce maté riau largement utilisé . Bien que diverses mé thodes de pré paration existent, telles que la technique de gabarit, la mé thode d'é mulsion, la mé thode de sé paration de phase, le procé dé de moussage, l'é lectrofilage, les techniques lithographiques descendantes, la mé thode de respiration, etc., la pré paration à grande é chelle de milieux poreux polymé riques à fonctions spé cifié es est toujours un objectif à long terme dans ce domaine, qui est l'un des objectifs fondamentaux de cette thè se. Une nouvelle approche, la coextrusion multicouche à assemblage forcé , a é té utilisé e pour produire é conomiquement et efficacement des polymè res de multicouches avec une é paisseur de couche individuelle variant d'un micron au nanomè tre. Cette technique de traitement de polymè re avancé e pré sente de nombreux avantages, y compris un procé dé continu, une voie é conomique pour la fabrication à grande é chelle, la flexibilité des espè ces de polymè res et la capacité à produire des structures de couches compatibles. 
Consequently, in Part I of this thesis, porous polymer media are designed on the basis of the specific application requirements and prepared by combining multilayer coextrusion with traditional preparation methods (template technique, phase-separation method). This approach combines the advantages of multilayer coextrusion with those of the template/phase-separation methods (simple preparation process and suitable pore structure). The applications of the porous polymer media in PAH adsorption and as lithium-ion battery separators are then investigated. More importantly, the basic mechanism underlying both applications (PAH adsorption or lithium-ion transport through a separator) rests on the transport and deposition of particles in porous media. It is therefore essential to understand the particle transport and deposition processes in porous media, as well as the dominant mechanisms involved; the transport and deposition of colloidal particles in porous media are also of great importance for many other technical and industrial processes.

Fundamentally, there are two theoretical frameworks for studying the transport of colloidal particles in porous media, classified as Eulerian and Lagrangian methods. The Eulerian method describes what happens at a fixed point in space, whereas the Lagrangian method uses a coordinate system that moves with the flow. Pore geometries are often simplified to one or two dimensions to reduce the computation time of numerical simulations. However, this simplification restricts the possible particle motion and reduces the accuracy of the results. It is therefore necessary to improve the simulation model to allow three-dimensional simulations in more realistic geometries. In addition, the transport and deposition of particles in heterogeneous porous media have recently been the subject of intense research. So far, few studies have addressed the combined effects of hydrodynamics and surface heterogeneity on colloidal particle deposition in porous media. A better understanding of the mechanisms responsible for these phenomena is therefore needed, which is one of the objectives of the present work. The objective of Part II of this thesis is the three-dimensional, microscale simulation of colloidal particle transport and deposition in homogeneous and heterogeneous porous media by means of CFD tools using the Lagrangian method, in order to obtain the most relevant quantities while capturing the physics underlying the process. To carry out the simulation, a new colloidal particle tracking model, namely the 3D-PTPO model (three-dimensional particle tracking model by Python® and OpenFOAM®) based on the Lagrangian method, is developed in the present study. The main content of Part II of this thesis can be summarized as follows: first, the particle deposition behavior is studied in a homogeneous capillary; second, the influence of the pore geometry on the flow field and on the particle transport and deposition properties is explored; third, the 3D-PTPO model is improved by incorporating surface chemical heterogeneity, and the combined effects of surface heterogeneity and hydrodynamics on the particle deposition behavior are studied.

Lithium-ion batteries (LIBs) have been used mainly in portable consumer electronics owing to their high specific energy density, long cycle life and absence of memory effect. In addition, LIBs are regarded as one of the most promising power sources for electric vehicles and aerospace systems. However, safety improvements are still urgently needed for the full acceptance of LIBs in these new application fields. The presence of a combustible electrolyte together with an oxidant makes LIBs prone to fire and explosion. Once LIBs are subjected to extreme conditions such as short circuit, overcharge or high-temperature thermal shock, exothermic chemical reactions are triggered between the electrodes and the electrolyte, which raises the internal pressure and temperature of the battery. The temperature rise accelerates these reactions and releases heat rapidly through a dangerous positive-feedback mechanism that leads to thermal runaway, cell cracking, fire or even explosion. Once the internal temperature reaches the melting point of PE, the PE layer softens and melts, closing the internal pores and thereby interrupting ionic conduction and terminating the electrochemical reaction. Most commercial bilayer or trilayer separators are fabricated by laminating different functional layers together by calendering, adhesion or welding. The traditional process for manufacturing such bilayer or trilayer separators involves producing the microporous reinforcing layer and the shutdown layer by a stretching process, then alternately bonding these microporous layers into bilayer or trilayer membranes, which are finally stretched into bilayer or trilayer battery separators with the required thickness and porosity. This route can produce multilayer separators with a thermal shutdown function, and the stretching step clearly improves the mechanical strength of the separators. Nevertheless, the rather cumbersome production process markedly reduces production efficiency. In this chapter, a new strategy is proposed to prepare multilayer LIB separators comprising alternating microporous PP layers and PE layers by multilayer coextrusion combined with the CaCO3 template method. This approach combines the advantages of multilayer coextrusion (continuous process, economical route for large-scale manufacturing and tunable layer structures) and of the template method (simple preparation process and adjustable pore structure). A key improvement of this approach is that the porous structure is formed by the template method instead of the stretching process, which is beneficial for thermal stability. Compared with commercial bilayer or trilayer separators, the dimensional shrinkage of these multilayer membranes can be reduced.

The multilayer separators retain their integrity and low thermal shrinkage at high temperature. The PP/PE multilayer separators not only provide an effective thermal shutdown function but also offer the significant advantage of high thermal stability up to 160 °C. The thermal shutdown function of the PP/PE multilayer membranes can be widely tuned within the temperature range of 127 °C to 165 °C, broader than that of commercial separators (Celgard® 2325). More importantly, this approach is very practical and efficient and avoids the cumbersome traditional bilayer or trilayer production processes. Moreover, this method is applicable to any melt-processable polymer besides PP and PE.

Preparation of multilayer porous lithium-ion battery separators by combining multilayer coextrusion and thermally induced phase separation. Multilayer coextrusion (MC) is an advanced polymer processing technique capable of producing multilayer materials economically and continuously. In addition, in this chapter the phase-separation formulation consists of a binary system comprising a polymer (PP or PE) and a diluent (paraffin), and therefore a single extraction agent (petroleum ether). Consequently, the extraction agent is recyclable and offers higher reproducibility, which is of critical importance for cost reduction and environmental protection. Another key advantage of this method is that the one-step route provides a more efficient way of manufacturing multilayer separators with high porosity on a large scale. More significantly, the porous structure is formed without the traditional stretching process, which favors dimensional thermostability. The resulting separator possesses a cellular, submicron porous structure with a porosity of 54.6%. Most importantly, this separator offers a substantial improvement in thermal stability compared with the commercial separator, provides a wider shutdown temperature window (129-165 °C) and negligible dimensional shrinkage (up to 160 °C). In addition, the as-prepared separator exhibits higher ionic conductivity, a larger lithium-ion transference number and better battery performance. Considering these attractive features and the scalable preparation process, this separator is regarded as very promising for application in high-safety lithium-ion batteries.

Adsorption makes it possible to capture trace PAHs and to separate them easily from water. Moreover, the adsorption approach is particularly attractive when the adsorbent is cheap and can be mass-produced. According to the like-dissolves-like principle, aromatic-ring absorbents are comparatively suitable materials for PAH adsorption. It is reported in the literature that porous polystyrene (PS) bulk materials obtained by high internal phase emulsion polymerization are good candidates for treating PAH contamination in water. Among porous adsorption materials, porous membranes are preferable to bulk and powder materials, since they provide a larger contact area with water and are much easier to separate from wastewater. In this chapter, a new strategy is proposed to prepare porous PS membranes with a tunable porous structure by multilayer coextrusion combined with the template method, which is a very efficient route for the large-scale manufacturing of porous PS membranes. Pyrene, a representative PAH with medium molecular weight and moderate water solubility, is chosen as the model compound to explore the adsorption performance.
Chapter 4

Nomenclature
A_t: total surface area of the capillary wall (m²)
n: number of deposited particles
S_particle: projection area of one deposited particle (m²)
S_attractive: total surface area of the attractive regions (m²)
S_extend: extended surface area of each attractive region (m²)
η_effective: overall deposition efficiency (%)
η_f: deposition efficiency for the completely favorable regions (%)
η_u: deposition efficiency for the completely unfavorable regions (%)
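To make the role of the quantities listed above more concrete, a minimal numerical sketch is given below. It assumes the standard patchwise-average form of the heterogeneity model, in which the overall deposition efficiency is the area-weighted combination of the efficiencies of favorable and unfavorable patches, with θ = S_attractive / A_t; all numerical values are illustrative placeholders and are not results from this thesis.

# Minimal sketch of the patchwise heterogeneity model (illustrative values only).

def favorable_area_fraction(s_attractive, a_t):
    """theta = total attractive area / total capillary wall area."""
    return s_attractive / a_t

def overall_deposition_efficiency(theta, eta_f, eta_u):
    """Area-weighted (patchwise) average of favorable/unfavorable efficiencies."""
    return theta * eta_f + (1.0 - theta) * eta_u

if __name__ == "__main__":
    a_t = 1.0e-6              # total capillary wall area (m^2), placeholder
    s_attractive = 2.5e-7     # total attractive area (m^2), placeholder
    eta_f, eta_u = 0.90, 0.05 # efficiencies expressed as fractions, placeholders
    theta = favorable_area_fraction(s_attractive, a_t)
    eta_eff = overall_deposition_efficiency(theta, eta_f, eta_u)
    print(f"theta = {theta:.2f}, eta_effective = {eta_eff:.3f}")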
Summary

In the first part of this thesis, porous polymer media (PM) with a specific structure are prepared by innovatively combining microlayer coextrusion with traditional PM preparation methods (template method, phase-separation method). By combining the advantages of the two methods, PM with an ideal pore structure can be prepared in large quantities, and their application in lithium-ion battery separators and in the adsorption of polycyclic aromatic hydrocarbons can also be explored.

In the second part of this thesis, numerical simulation is used to study the transport and deposition of micro-particles in porous media, in order to explore the mechanisms underlying the use of porous materials in adsorption and battery-separator applications. The 3D-PTPO code (three-dimensional particle tracking by Python® and OpenFOAM®) is developed for this purpose. The main results can be summarized as follows:

1) By the microlayer coextrusion method (MC) combined with the template method (CTM), a polypropylene (PP)/polyethylene (PE) thin film (MC-CTM PP/PE) with a multilayer structure is prepared efficiently and continuously for application as a lithium-ion battery separator. The results show that the separator has a rich, spherical, submicron porous structure. Compared with the commercial multilayer separator Celgard® 2325, the MC-CTM PP/PE exhibits higher porosity, liquid uptake and retention, which improves the ionic conductivity and the battery performance. In addition, this membrane possesses an effective thermal shutdown behavior with a safety window extending from 127 °C to 165 °C, wider than that of the commercial separator. It still maintains membrane integrity when the temperature is raised to 160 °C, which overcomes the severe thermal shrinkage caused by the stretching of commercial membranes during their production and thus provides excellent thermal stability and superior safety performance.

2) The combined effects of surface heterogeneity and hydrodynamics on particle transport and deposition are studied. The porous media are represented by three-dimensional capillaries with periodic structural heterogeneity (stripe and checkerboard patterns). The relationships between the deposition probability, Γ/Γ_RSA, and the period of the heterogeneities (λ), the Péclet number (Pe) and the ratio of the attractive area to the total surface area (θ) are explored. The results show that colloidal particles tend to deposit at the edges of the attractive strips, and that the particle deposition distribution is more uniform than in the surface-homogeneous case. Moreover, for the same θ, the deposition probability is positively correlated with λ. The overall deposition probability increases with θ, which is consistent with the patchwise heterogeneity model.

The detailed introduction and research work are organized as follows.

Chapter 1 & Chapter 2. Before preparation, the functions and structures of the porous polymer media must be designed. The most important aspect to consider is the set of main functions that the media should fulfil, which lead to specific application properties. Second, the key factors directly related to the desired function must be identified, such as the pore geometry, the pore size and the porosity of the material matrix. Third, on the basis of these considerations, the experimental scheme must be designed to prepare the porous polymer media.

The transport and deposition of colloidal particles in porous media are also of great importance for other technical and industrial applications, such as particle-facilitated contaminant transport, water purification, wastewater treatment and artificial aquifer recharge. In order to understand the mechanisms of colloidal particle transport in porous media, computational fluid dynamics (CFD) simulations were carried out to visualize and analyze the flow field; taking water filtration as an example, such numerical techniques can be used to model the transport and dispersion of contaminants in a fluid. First, the particle deposition behavior is studied in a homogeneous capillary in order to revisit the previous work of Lopez et al. (2004) by considering a more realistic 3D geometry. Indeed, their work was limited to a particular slit-shaped geometry that is unlikely to be encountered in real porous media. Furthermore, it is necessary to validate the fundamental transport properties during the simulation, since the initial validation step of the model development (involving deposition on homogeneous collectors) was important for the subsequent simulations of particle transport and deposition in more complex pore geometries. Second, three-dimensional numerical modeling of the particle transport and deposition process in a more complex pore geometry (a venturi tube of circular cross-section) is carried out to explore the influence of the pore geometry on the flow field and on the particle transport and deposition properties.

Chapter 3. Preparation of multilayer porous polypropylene/polyethylene lithium-ion battery separators by combining a multilayer coextrusion method and a template. To prevent catastrophic thermal failures in LIBs, many strategies, including thermosensitive electrode materials, positive-temperature-coefficient electrodes and thermal shutdown electrodes, have been proposed as self-activating protection mechanisms to keep LIBs from running away thermally. However, these approaches often involve either difficult material synthesis or complicated electrode processing, which makes them impractical for battery applications. In addition, thick electrode coating layers would decrease the energy density of the batteries and hinder their practical use. From the standpoint of industrial application, the thermal shutdown separator is therefore the most attractive means of safety protection for LIBs, because it can overcome the above drawbacks. Shutdown separators rely on a phase-change mechanism to limit ionic transport through the formation of an ion-impermeable layer between the electrodes. Shutdown separators generally consist of a polypropylene (PP)/polyethylene (PE) bilayer or of a PP/PE/PP trilayer structure. In such laminated separators, PE, which has the lowest melting point, serves as the shutdown agent, while PP, which has the highest melting point, provides the mechanical support. These separators, however, undergo significant shrinkage over a rather narrow temperature range, with the onset of shrinkage around 100 °C owing to the residual stresses induced during the stretching process. It is therefore essential to find new methods that can optimize the thermal stability, the shutdown property and the electrochemical performance of polyolefin separators without sacrificing their excellent microporous structure and low cost. Thermally induced phase separation (TIPS) is a widely used manufacturing process for commercial LIB separators, which relies on the rule that a polymer is miscible with a diluent at high temperature but demixes at low temperature. Separators prepared by the TIPS process show a well-controlled and uniform pore structure, high porosity and good tunability. To our knowledge, no study has been reported on the combination of the two above methods to prepare multilayer porous separators. Thus, in the present chapter, a new strategy is proposed to prepare multilayer LIB separators comprising alternating microporous PP and PE layers via the combination of multilayer coextrusion and the TIPS method, aiming to combine the advantages of both methods.
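As an illustration of the Lagrangian particle-tracking idea summarized above (advection by the local fluid velocity plus Brownian motion, with deposition when a particle reaches the capillary wall), a deliberately simplified Python sketch is given below. It is not the 3D-PTPO code itself: the analytical Poiseuille velocity profile, the time step and all numerical values are assumptions chosen purely for illustration.

import numpy as np

# Simplified Lagrangian tracking of point-like colloids in a cylindrical capillary.
# Advection uses an assumed Poiseuille profile; diffusion is an explicit Brownian step.
kB, T = 1.380649e-23, 298.0           # Boltzmann constant (J/K), temperature (K)
mu, a = 1.0e-3, 1.0e-7                # water viscosity (Pa s), particle radius (m)
D = kB * T / (6.0 * np.pi * mu * a)   # Stokes-Einstein diffusion coefficient
R, L, u_max = 5.0e-6, 2.0e-4, 1.0e-4  # capillary radius, length, centerline velocity
dt, n_particles = 1.0e-4, 2000
rng = np.random.default_rng(0)

# Random initial positions over the inlet cross-section.
r0 = R * np.sqrt(rng.random(n_particles))
phi = 2.0 * np.pi * rng.random(n_particles)
pos = np.column_stack((r0 * np.cos(phi), r0 * np.sin(phi), np.zeros(n_particles)))
deposited = np.zeros(n_particles, dtype=bool)
active = np.ones(n_particles, dtype=bool)

for _ in range(20000):
    idx = np.where(active)[0]
    if idx.size == 0:
        break
    r = np.linalg.norm(pos[idx, :2], axis=1)
    uz = u_max * (1.0 - (r / R) ** 2)                             # assumed Poiseuille advection
    step = rng.normal(0.0, np.sqrt(2.0 * D * dt), (idx.size, 3))  # Brownian displacement
    pos[idx, 2] += uz * dt + step[:, 2]
    pos[idx, :2] += step[:, :2]
    r_new = np.linalg.norm(pos[idx, :2], axis=1)
    hit_wall = r_new + a >= R      # particle touches the wall -> deposited (favorable case)
    left = pos[idx, 2] >= L        # particle leaves through the outlet
    deposited[idx[hit_wall]] = True
    active[idx[hit_wall | left]] = False

print("fraction of particles deposited in the simulated time window:", deposited.mean())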
Transport and deposition of colloidal particles in porous media under flow are encountered in a wide range of environmental and industrial applications, including aquifer remediation [START_REF] Tosco | Transport of ferrihydrite nanoparticles in saturated porous media: role of ionic strength and flow rate[END_REF], fouling of surfaces [START_REF] Ngene | A microfluidic membrane chip for in situ fouling characterization[END_REF], therapeutic drug delivery [START_REF] Asgharian | Prediction of particle deposition in the human lung using realistic models of lung ventilation [J][END_REF], catalytic processes carried out through filter beds [START_REF] Bensaid | Numerical simulation of soot filtration and combustion within diesel particulate filters[END_REF] and drinking water treatment [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF]. The topic has therefore received considerable attention during the last decades, with studies investigating the colloid transport and deposition process in porous media as a whole as well as focusing on particular mechanisms among all those generally involved in such a process [254][255]. The most realistic representation of a porous medium is as a collection of solid grains, each being considered as a collector. The process of particle deposition is therefore divided into two steps: transport to the collector, and deposition due to short-range particle/collector physico-chemical interactions. Such interactions are represented through a potential function usually obtained from the DLVO theory that includes electrostatic, van der Waals, and short-range Born repulsion forces [START_REF] Yao | Water and waste water filtration. Concepts and applications[END_REF]. If particles and collectors are similarly charged and the salt concentration is low, the interaction potential contains two minima and one energy barrier, making the deposition conditions unfavorable.
When the salt concentration is very high or when particles and collectors are of opposite charges, the potential is purely attractive, leading to favorable deposition [START_REF] Elimelech | Kinetics of deposition of colloidal particles in porous media[END_REF][104] [START_REF] Feder | Random sequential adsorption[END_REF] .
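To illustrate how such a DLVO interaction profile can be evaluated in practice, the short sketch below computes a sphere-plate interaction energy as the sum of a non-retarded van der Waals term and a linear-superposition electrical double-layer term; the Hamaker constant, surface potentials and ionic strength are generic assumed values, not parameters taken from the works cited above.

import numpy as np

# Sketch of a sphere-plate DLVO energy profile (assumed, generic parameter values).
e, kB, NA = 1.602e-19, 1.381e-23, 6.022e23
eps = 78.5 * 8.854e-12            # permittivity of water (F/m)
T, z = 298.0, 1                   # temperature (K), ion valence
a = 1.0e-7                        # particle radius (m)
A_H = 1.0e-20                     # Hamaker constant (J), assumed
psi_p, psi_c = -0.030, -0.030     # particle / collector surface potentials (V), assumed
I = 10.0e-3 * 1.0e3               # 10 mM ionic strength converted to mol/m^3

kappa = np.sqrt(2.0 * NA * I * (z * e) ** 2 / (eps * kB * T))  # inverse Debye length (1/m)
h = np.linspace(0.2e-9, 50e-9, 500)                            # surface-to-surface distance (m)

gamma = lambda psi: np.tanh(z * e * psi / (4.0 * kB * T))
V_edl = 64.0 * np.pi * eps * a * (kB * T / (z * e)) ** 2 * gamma(psi_p) * gamma(psi_c) * np.exp(-kappa * h)
V_vdw = -A_H * a / (6.0 * h)                                   # non-retarded van der Waals term
V_total_kT = (V_edl + V_vdw) / (kB * T)

print("Debye length (nm):", 1e9 / kappa)
print("energy barrier height (kT):", V_total_kT.max())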
To investigate colloid deposition mechanisms experimentally, impinging-jet or parallel-plate flow cells are commonly used because of the simplicity of their flow structure, and they are coupled to various direct observation techniques [START_REF] Adamczyk | Deposition of latex particles at heterogeneous surfaces[END_REF][START_REF] Areepitak | Model simulations of particle aggregation effect on colloid exchange between streams and streambeds[END_REF][START_REF] Unni | Brownian dynamics simulation and experimental study of colloidal particle deposition in a microchannel flow[END_REF][START_REF] Adamczyk | Deposition of colloid particles at heterogeneous and patterned surfaces[END_REF]. In porous media, column experiments using native or fluorescent polystyrene latex particles are the most commonly performed owing to their simplicity, giving output data in the form of breakthrough curves (BTCs) [START_REF] Canseco | Deposition and re-entrainment of model colloids in saturated consolidated porous media: Experimental study[END_REF]. These are sometimes coupled to other techniques such as gamma-ray attenuation [START_REF] Canseco | Deposition and re-entrainment of model colloids in saturated consolidated porous media: Experimental study[END_REF][START_REF] Djehiche | Effet de la force ionique et hydrodynamique sur le dé pôt de particules colloï dales dans un milieu poreux consolidé[END_REF][START_REF] Gharbi | Use of a gamma ray attenuation technique to study colloid deposition in porous media[END_REF], magnetic resonance imaging [START_REF] Baumann | Visualization of colloid transport through heterogeneous porous media using magnetic resonance imaging[END_REF], laser scanning cytometry [START_REF] May | The effects of particle size on the deposition of fluorescent nanoparticles in porous media: Direct observation using laser scanning cytometry[END_REF], and microscopy with image processing [262, 263].
In column experiments, the BTCs encompass all of the particle-particle and particle-collector interactions involved. To interpret the experimental data, an Eulerian approach may be adopted.
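As a sketch of what such an Eulerian interpretation typically looks like, the following code integrates a one-dimensional advection-dispersion equation with an assumed first-order, irreversible deposition term and produces a breakthrough curve for a step injection; the grid, velocity, dispersion coefficient and rate constant are arbitrary illustrative values rather than parameters from any of the cited column experiments.

import numpy as np

# 1D Eulerian advection-dispersion-deposition model for a step-input breakthrough curve.
L, nx = 0.1, 200                  # column length (m), number of grid cells
v, D, k = 1.0e-4, 1.0e-8, 1.0e-3  # pore velocity (m/s), dispersion (m^2/s), deposition rate (1/s)
dx = L / nx
dt = 0.25 * min(dx / v, dx**2 / (2.0 * D))   # stable explicit time step
C = np.zeros(nx)
t, t_end = 0.0, 3.0 * L / v
btc = []

while t < t_end:
    C_in = 1.0                                # normalized step injection at the inlet
    Cm = np.concatenate(([C_in], C))          # ghost cell for the upwind scheme
    adv = -v * (Cm[1:] - Cm[:-1]) / dx        # first-order upwind advection
    lap = np.zeros(nx)
    lap[1:-1] = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2
    C = C + dt * (adv + D * lap - k * C)      # explicit update with first-order deposition
    t += dt
    btc.append(C[-1])                         # outlet concentration C/C0

print(f"after {t / (L / v):.1f} pore volumes, outlet C/C0 = {btc[-1]:.3f}")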
Conclusions
In this work, the proposed 3D-PTPO code is shown to be a useful and flexible tool for the microscale simulation of colloidal particle transport and deposition in a 3D chemically heterogeneous capillary. The main conclusions can be drawn as follows. Firstly, the coupled effect of the charge heterogeneity and the three-dimensional velocity field can bring out a complex concentration distribution of deposited particles on the wall, leading to a higher density of deposited particles at the leading and trailing edges of each favorable strip, and the deposition is more uniform along the patterned capillary compared to the homogeneous one. Secondly, the deposition probability is in line with the frequency of the pitches: under the same favorable surface ratio θ, a smaller pitch length results in a higher deposition probability and accordingly a higher dimensionless surface coverage. Moreover, for the diffusion-dominant regime at lower Pe, the surface coverage is close to Γ_RSA and features a relatively stable plateau, whereas for the convection-dominant regime at high Pe, the declining trend of Γ/Γ_RSA versus Pe is in good agreement with the derived power-law dependence of surface coverage on Pe. Finally, the overall deposition probability increases with the favorable area fraction; the correlation coefficient of the data points is up to 0.997, in good agreement with the patchwise heterogeneity model. This study provides insight into the design of artificially heterogeneous porous media for particle capture in various engineering and biomedical applications, including targeted drug therapy. Furthermore, the model can be further improved by incorporating more realistic fluid flow profiles and more random heterogeneous patterns.
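For completeness, a small sketch shows how the dimensionless surface coverage discussed above can be evaluated from deposit counts and how a power-law exponent of Γ/Γ_RSA versus Pe can be extracted by a log-log fit; the particle counts and Péclet numbers below are fabricated placeholders used only to demonstrate the procedure, and the two-dimensional random-sequential-adsorption jamming limit is taken as Γ_RSA ≈ 0.547.

import numpy as np

# Dimensionless surface coverage from deposit counts, and a power-law fit of
# Gamma/Gamma_RSA versus Pe (illustrative placeholder data, not thesis results).
GAMMA_RSA = 0.547                      # 2D RSA jamming limit for hard disks

def surface_coverage(n_deposited, particle_radius, wall_area):
    """Gamma = n * S_particle / A_t, with S_particle the projected disk area."""
    s_particle = np.pi * particle_radius ** 2
    return n_deposited * s_particle / wall_area

wall_area = 2.0 * np.pi * 5.0e-6 * 2.0e-4               # lateral area of a model capillary (m^2)
radius = 1.0e-7                                         # particle radius (m)
pe = np.array([5.0, 10.0, 20.0, 50.0, 100.0])           # placeholder Peclet numbers
counts = np.array([88000, 70000, 56000, 40000, 30000])  # placeholder deposit counts

gamma_ratio = surface_coverage(counts, radius, wall_area) / GAMMA_RSA

# Fit Gamma/Gamma_RSA ~ Pe**alpha on a log-log scale.
alpha, log_prefactor = np.polyfit(np.log(pe), np.log(gamma_ratio), 1)
print("fitted power-law exponent alpha =", round(alpha, 3))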
01760725 | en | [
"sdv.bid.evo",
"sdv.bid.spt"
] | 2024/03/05 22:32:13 | 2018 | https://theses.hal.science/tel-01760725/file/75172_PONCE_TOLEDO_2018_archivage.pdf | Blanca Y Rubén
Raquel Laura
Pierre Capy
Line Philippe Lopez
Le Gall
Marc-André Selosse
Rafael I Ponce-Toledo
Philippe Deschamps
email: philippe.deschamps@u-psud.fr
Purificación López- Garcı
Yvan Zivanovic
Karim Benzerara
David Moreira
email: david.moreira@u-psud.fr
Purificacio ´n Lo ´pez-Garcı ´a
Rafael I Ponce-Toledo¹
David Moreira¹
Purificación López-García¹
Philippe Deschamps¹
Keywords: Chlorarachniophyta, Euglenida, endosymbiotic gene transfer, phylogenomics
Thanks to my brothers
INTRODUCTION
In this introductory part, I provide the theoretical background and up-to-date knowledge on the origin and evolution of photosynthetic eukaryotes as well as some elements of the current debate that I addressed during my PhD research project. First, I describe the history of the endosymbiotic theory of plastids (section 1.1): how it started, the precursors and the problems it faced before it was widely accepted and became one of the most important contributions to evolutionary thinking of the twentieth century. Then, in section 1.2 and section 1.3, I describe the diversity of primary and secondary photosynthetic eukaryotes, respectively, as well as the evolutionary models aiming at understanding the origin and evolution of these lineages.
Symbiosis: the origin of an idea
The history of symbiosis (from the Greek for "living together") is tightly connected to the study of lichens. It is well known that lichens are symbiotic associations between a fungus (usually an ascomycete) and a photosynthesizer (green algae or cyanobacteria), although recent findings have unveiled a third partner, a basidiomycete yeast, that went undetected for over 150 years [START_REF] Spribille | Basidiomycete yeasts in the cortex of ascomycete macrolichens[END_REF].
The first to propose a composite origin of lichens was the Swiss botanist Simon Schwendener during the annual meeting of the Swiss Natural History Society in 1867 (cited by Honegger, 2000). Schwendener considered lichens to have a dual antagonistic nature, where an enslaved photosynthesizer was exploited by a fungal master [START_REF] Perru | Aux origines des recherches sur la symbiose vers 1868-1883[END_REF]. However, his ideas did not have a good reception and were firmly rejected by most of his contemporary lichenologists, in part because of the slave-master analogy, which was fiercely ridiculed, but also due to the belief of the "individuality of species" that prevailed during most part of the nineteenth century (Honegger, 2000;[START_REF] Sapp | Evolution by Association: A History of Symbiosis[END_REF]. At the time when Schwendener exposed his ideas, lichens were thought to be an intermediate group between algae and fungi [START_REF] Perru | Aux origines des recherches sur la symbiose vers 1868-1883[END_REF].
Based on the works of Schwendener on lichens, the German biologist Anton de Bary published in 1879 the article "Die Erscheinung der Symbiose" where he defined symbiosis as "a phenomenon in which dissimilar organisms live together" (de Bary, 1879; [START_REF] Oulhen | English translation of Heinrich Anton de Bary's 1878 speech, "Die Erscheinung der Symbiose[END_REF], giving to this term the modern biological meaning to describe the continuum of ecological relationships, from parasitism to mutualism (cited by [START_REF] Oulhen | English translation of Heinrich Anton de Bary's 1878 speech, "Die Erscheinung der Symbiose[END_REF]. As a consequence, the role of symbiosis in evolution started to be discussed at the end of the nineteenth century.
Andrei Famintsyn, a Russian plant physiologist, studied the adaptive role of symbiosis using as a model the symbiotic relationship between the photosynthetic algae of the genus Zoochlorella and their invertebrate partners [START_REF] Famintsyn | Nochmals die Zoochlorellen[END_REF][START_REF] Famintsyn | Beitrag zur Symbiose von Algen und Thieren[END_REF]. His research led him to suggest that if symbiotic zoochlorellae can grow independently from their animal hosts, it could be also the case for chloroplasts cultured outside the plant cell. [START_REF] Sapp | Symbiogenesis: the hidden face of Constantin Merezhkowsky[END_REF]. However, when he tried to cultivate chloroplasts from the alga Vaucheria, his attempts were unsuccessful [START_REF] Famintsyn | Zapiski Imperatorskoi akademii nauk, fiz.-mat. otd[END_REF][START_REF] Provorov | Mereschkowsky and the origin of the eukaryotic cell: 111 years of symbiogenesis theory[END_REF]. Famintsyn believed that complex organisms can appear through the symbiotic unification of simpler forms via the transformation of their aggregation into a living entity of higher order [START_REF] Khakhina | Concepts of symbiogenesis: A historical and critical study of the research of Russian botanists[END_REF]. During a large part of his scientific career, Famintsyn attempted to frame the symbiotic interactions in the context of Darwinian evolution; to him, symbiosis and symbiotic unification of organisms could have impacted the origin of life on Earth and its subsequent diversification [START_REF] Carrapiço | Can We Understand Evolution Without Symbiogenesis? In Reticulate Evolution[END_REF][START_REF] Khakhina | Concepts of symbiogenesis: A historical and critical study of the research of Russian botanists[END_REF].
Constantin Mereschkowsky's symbiogenetic theory
When Lynn Margulis published her milestone paper "On the origin of mitosing cells" [START_REF] Sagan | On the origin of mitosing cells[END_REF], she brought into the scientific debate the controversial claims of the symbiotic origin of several eukaryotic organelles (i.e. mitochondria, chloroplasts and flagella were once free-living bacteria). She acknowledged the Russian botanist Constantin Mereschkowsky as the first to suggest in the early twentieth century that chloroplasts had originated from photosynthetic microorganisms (Cyanophyceae).
Although disputable, the first proposal of the symbiotic nature of the chloroplast was probably made by Andreas Schimper, a German biologist whose paper "Über die Entwicklung der Chlorophyllkörner und Farbkörper" [START_REF] Schimper | Über die Entwicklung der Chlorophylkörner und Farbkörner[END_REF] showed that chloroplasts (a term coined by himself) do not appear de novo but replicate autonomously within the plant cell. This led him to suggest a symbiotic origin of plastids, though vaguely and without further development than a short footnote: "If it can be conclusively confirmed that plastids do not arise de novo in egg cells, the relationship between plastids and the organisms within which they are contained would be somewhat reminiscent of a symbiosis. Green plants may in fact owe their origin to the unification of a colourless organism with one uniformly tinged with chlorophyll" (William [START_REF] Martin | Annotated English translation of Mereschkowsky's 1905 paper "Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF]).
Mereschkowsky was aware of Andreas Schimper's work on chloroplast division; he used Schimper's discoveries and his own research on chloroplasts to challenge the autogenous hypothesis of the origin of the chromatophore (as he called the chloroplast), which was the privileged model at the time and posited that plastids emerged through the "differentiation of the protoplasmic substance" [START_REF] Martin | Annotated English translation of Mereschkowsky's 1905 paper "Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF][START_REF] Wilson | The cell in development and inheritance[END_REF].
Instead, Mereschkowsky proposed that chloroplasts were once free-living Cyanophyceae that became permanent symbionts within nucleated cells giving rise to new organisms [START_REF] Mereschkowsky | Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF]).
Mereschkowsky's theory on the symbiogenetic origin of chloroplasts was heavily inspired by the dual nature of lichens. Lichens were a recurrent example on his publications, showing how simpler organisms living together can associate resulting in a sort of more complex entity [START_REF] Mereschkowsky | Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF][START_REF] Mereschkowsky | Theorie der zwei Plasmaarten als Grundlage der Symbiogenesis, einer neuen Lehre von der Entstehung der Organismen[END_REF][START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. He thought rightly that lichens were polyphyletic, suggesting that symbiotic associations between algae and fungi had happened independently at least ten times [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. The reasoning behind his theory was based on a comparative study of the physiological similarities of chloroplasts and cyanobacteria. In his paper "Über Natur und Ursprung der Chromatophoren im Pflanzenreiche" (On the nature and origin of chromatophores in the plant kingdom) [START_REF] Martin | Annotated English translation of Mereschkowsky's 1905 paper "Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF] he enunciated his five arguments to consider chromatophores as symbionts:
1. The continuity of chromatophores. Plastids proliferate through division of pre-existing plastids, therefore, it is necessary to postulate that the first chromatophore came from outside to integrate into a non-photosynthetic organism.
2. Chromatophores are highly independent of the nucleus. Mereschkowsky remarked the high independence of the plastid in respect to the nucleus control machinery as expected for a symbiont. He also noticed the differences in lipid composition between the chromatophore and the cytoplasm.
3. The complete analogy of chromatophores and zoochlorellae. The case of the photosynthetic zoochlorellae is an example of how an independent organism can establish a close symbiotic relationship with another one. Then a similar event could have originated the chromatophore.
4. There are organisms that we can regard as free-living chromatophores. Mereschkowsky suggested that the morphological and physiological resemblances of cyanophytes to chromatophores point out that the latter originated from the former.
5. Cyanophytes actually live as symbionts in cell protoplasm. The symbiosis of cyanophytes seems to be common in nature.
Mereschkowsky thought that the emergence of new organisms through symbiosis was a common phenomenon in the history of life on Earth. Given the great diversity of algae and plants, he suggested that the plant kingdom was polyphyletic: different "animal cells" would have been invaded by cyanophytes independently in multiple symbiogenetic events [START_REF] Martin | Annotated English translation of Mereschkowsky's 1905 paper "Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF][START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. Although only two primary cyanobacterial endosymbioses are recognized so far to have originated photosynthetic organisms [START_REF] Mcfadden | Origin and evolution of plastids and photosynthesis in eukaryotes[END_REF], a number much lower than the 15 events initially proposed by Mereschkowsky [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. It is worth highlighting the importance of the endosymbiosis in the evolution of secondary photosynthetic eukaryotes (Archibald, 2015).
The origin of the nucleus
Mereschkowsky is recognized mainly for his contribution to the origin of the chloroplast; however, he also proposed a theory for the origin of the first microorganisms, an event that would have been followed by the symbiogenesis of nucleated cells [START_REF] Mereschkowsky | Über Natur und Ursprung der Chromatophoren im Pflanzenreiche[END_REF][START_REF] Mereschkowsky | Theorie der zwei Plasmaarten als Grundlage der Symbiogenesis, einer neuen Lehre von der Entstehung der Organismen[END_REF]. Nevertheless, the symbiotic origin of the nucleus had already been suggested in 1893 by the Japanese zoologist Shosaburo Watase, who speculated that small organisms living together gave rise to the cytoplasm and the nucleus [START_REF] Sapp | The New Foundations of Evolution: On the Tree of Life[END_REF]. Likewise, in the early twentieth century, the German biologist Theodor Boveri, better known for his research on cancer, briefly enunciated a possible symbiogenetic origin of nucleated cells: "It might be possible that what we call a cell, and for which our mind demands simpler preliminary stages, originated from a symbiosis of two kinds of simple plasmatic structures, such that a number of smaller ones, the chromosomes, settled within a larger one, which we now call the cell body" [START_REF] Maderspacher | Theodor Boveri and the natural experiment[END_REF].
According to Mereschkowsky, life on Earth probably appeared twice, giving rise to organisms with different plasma characteristics [START_REF] Mereschkowsky | Theorie der zwei Plasmaarten als Grundlage der Symbiogenesis, einer neuen Lehre von der Entstehung der Organismen[END_REF]. First, small microorganisms resembling extant micrococci would have emerged during the Earth's early hot period; cyanobacteria may have originated from this type of organism. Then, when the temperature on Earth descended below 50 ºC and with a greater availability of organic material, the second appearance of organisms took place, a sort of amoeboid protoplasm. Nucleated cells would be the result of the merging of these two types of cells: the amoeboplasm was invaded by a colony of bacteria that ultimately formed a membrane and gave rise to the nucleus [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. This process would have occurred multiple times, producing independent origins of nucleated organisms (Fig. 1). Likewise, Mereschkowsky thought that plants originated from multiple secondary symbiogenetic events via the acquisition of cyanophycean symbionts (Fig. 1). He imagined the phylogeny of plants not like a tree but like a grove (Fig. 2) [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. However, the idea of a polyphyletic origin of the nucleus in eukaryotes and of the plastids in plants proved to be wrong (López-García & Moreira, 2015; McFadden & Van Dooren, 2004).
Mitochondria
Mereschkowsky was opposed to the hypothesis that the chloroplast and mitochondrion shared a same origin, where the former would derive from differentiation of the latter [START_REF] Guilliermond | Sur l'origine mitochondriale des plastides[END_REF][START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]. The ability of chloroplasts to synthesize organic matter from inorganic compounds represented a clear physiological distinction between both organelles that argued for independent origins. Interestingly, although he was uncertain about the origin of mitochondria, he rejected the idea that mitochondria were symbionts, considering it to be incompatible with the symbiogenetic origin of chloroplasts [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF].
It was Paul Portier who in his book "Les symbiotes" published in 1918 analyzed the similarities between bacteria and mitochondria and proposed that mitochondria were in fact "cellular symbionts" derived from free-living bacteria [START_REF] Perru | Aux origines des recherches sur la symbiose vers 1868-1883[END_REF][START_REF] Portier | Les symbiotes[END_REF]. He went a step further claiming he was able to culture mitochondria outside the cell, confirming the bacterial origin of mitochondria; however, his results were strongly contested and were considered, rightfully, contaminations (as cited in [START_REF] Perru | Aux origines des recherches sur la symbiose vers 1868-1883[END_REF].
One decade later, the American Ivan Wallin, aware of the work and criticism of Portier's work, continued the efforts to cultivate mitochondria. However, similar to Paul Portier, his work and proposal of the symbiotic nature of mitochondria were rejected by the scientific community of his time. His book "Symbionticism and the origin of species" [START_REF] Wallin | Symbionticism and the origin of species[END_REF] was one of the latest publications about the endosymbiotic origin of organelles before the theory ceased to be widely discussed for over 40 years, until it was revitalized in the mid-sixties by Lynn Margulis.
Molecular era
1.1.2.1 Lynn Margulis and the renaissance of the symbiogenetic theory
The period between the 1930s and the early 1950s produced great advances in the knowledge and description of the cell, in part due to technological innovations such as the electron microscope, invented by Max Knoll and Ernst Ruska in 1931 (Haguenau et al., 2003). The electron microscope made it possible to reach levels of resolution physically unattainable with light microscopes, thus achieving an unprecedentedly detailed description of the cellular world.
The description of the ultrastructure of algal and plant cells showed the similarities between the chloroplasts and blue-green algae (cyanobacteria) (e.g. presence of sac-like thylakoids) [START_REF] Gibbs | The ultrastructure of the chloroplasts of algae[END_REF][START_REF] Menke | Das allgemeine Bauprinzip des Lamellarsystems der Chloroplasten[END_REF]; additionally, ultrastructural observations combined with improved histological techniques revealed the presence of both protein synthesis machinery and packed DNA in chloroplasts and mitochondria [START_REF] Nass | Intramitochondial fibers with DNA characteristics. I. Fixation and electron staining reactions[END_REF][START_REF] Ris | Ultrastructure of DNA-containing areas in the chloroplast of Chlamydomonas[END_REF][START_REF] Schatz | Deoxyribonucleic acid associated with yeast mitochondria[END_REF], reopening the debate on the origin of the organelles.
Lynn Margulis combined the accumulated cytological and molecular knowledge about mitochondria and chloroplasts with Mereschkowsky's symbiogenetic ideas and published her updated theory of the origin of eukaryotic cells in the now-famous paper "On the origin of mitosing cells" [START_REF] Sagan | On the origin of mitosing cells[END_REF]; although initially rejected by about fifteen journals [START_REF] Knoll | Lynn Margulis, 1938-2011[END_REF]. She proposed that mitochondria, chloroplasts and eukaryotic flagella derived from free-living bacterial cells. Similar to the response obtained by earlier proponents of the endosymbiotic origin of organelles, her ideas were received with much hesitation by her contemporaries. The evolutionary model proposed by Margulis offered a sequence of serial endosymbiotic events to explain the origin of organelles within the eukaryotic cell, in clear contrast with the explanations provided by the autogenous models that were popular back then [START_REF] Raff | The non symbiotic origin of mitochondria[END_REF][START_REF] Uzzell | Mitochondria and plastids as endosymbionts: a revival of special creation[END_REF]. The symbiogenetic model was also criticized due to the prevailing vision of graduality in evolution, where endosymbiosis was thought to be an abrupt event incompatible with the continuity of slow evolutionary change [START_REF] O'malley | Endosymbiosis and its implications for evolutionary theory[END_REF].
According to Lynn Margulis, the origin of mitochondria was connected to a change in environmental conditions: the increase in oxygen concentration observed during the Mesoproterozoic [START_REF] Holland | The Oxygenation of the aAtmosphere and Oceans[END_REF] favored the integration of an aerobic prokaryote (future mitochondrion) into the cytoplasm of a heterotrophic anaerobe to cope with the new oxygen-containing atmosphere giving rise to an obligate symbiosis [START_REF] Sagan | On the origin of mitosing cells[END_REF].
Margulis thought that the selective force behind the symbiogenesis of mitochondria was oxygen utilization, therefore, she argued against a common ancestry between mitochondria and mitochondria-like anaerobic organelles (e.g hydrogenosomes and mitosomes) which according to her, must have independent origins [START_REF] Margulis | The last eukaryotic common ancestor (LECA): acquisition of cytoskeletal motility from aerotolerant spirochetes in the Proterozoic Eon[END_REF].
Within the framework of serial endosymbioses proposed by Margulis, after the acquisition of mitochondria, a second symbiogenetic event with a spirochete-like symbiont originated the flagellum [START_REF] Bermudes | Prokaryotic Origin of Undulipodia: Application of the Panda Principle to the Centriole Enigma[END_REF][START_REF] Margulis | Symbiosis as a mechanism of evolution: status of cell symbiosis theory[END_REF][START_REF] Sagan | On the origin of mitosing cells[END_REF].
Interestingly, an alternative symbiotic model posits that the eukaryotic flagellum derived from an epixenosome-like bacterium from the phylum Verrucromicrobia that started as defensive ectosymbiont of the eukaryotic cell [START_REF] Li | New symbiotic hypothesis on the origin of eukaryotic flagella[END_REF]. However, there is no clear evidence for the prokaryotic past of the eukaryotic flagella (e.g. no associated genome) and the autogenous origin is favored [START_REF] Moran | Eukaryotic flagella: Variations in form, function, and composition during evolution[END_REF].
When Linus Pauling and Emile Zuckerkandl published their paper "Molecules as Documents of Evolutionary History" where they proposed that nucleic acids and proteins can be used to infer the evolutionary history of organisms [START_REF] Zuckerkandl | Molecules as documents of evolutionary history[END_REF], a new revolution in evolutionary studies started. However, they were not the first to propose this idea. In 1892, Ernst Haeckel hypothesized that to know the phylogenetic relationships among the Kingdom Monera, it was necessary to study the atomic composition of their albumen (proteins), clearly suggesting the informative potential of molecules [START_REF] Sapp | The New Foundations of Evolution: On the Tree of Life[END_REF]. Nonetheless, it was Pauling and Zuckerkandl who created a theoretical framework for molecular phylogenetics.
Although the symbiogenetic model of organelle evolution encountered strong opposition, early gene sequencing and the development of molecular phylogeny offered the opportunity to test the model at the molecular level. In 1974, Robert M. Schwartz and Margaret O. Dayhoff offered the first phylogenetic evidence of the prokaryotic ancestry of mitochondria and chloroplasts (based on ferredoxin, c-type cytochromes and 5S ribosomal RNA sequences) [START_REF] Schwartz | Origins of prokaryotes, eukaryotes, mitochondria, and chloroplasts[END_REF]) (Fig. 3). They demonstrated for the first time at the molecular level the prokaryotic ancestry of these organelles. As predicted by Margulis, the chloroplast emerged from a symbiotic event with blue-green algae. Further molecular evidence of the endosymbiotic origin of these organelles will accumulate throughout the 1980s [START_REF] Gray | Has the endosymbiont hypothesis been proven?[END_REF].
The description of the third membrane of the chloroplast of unicellular algae of the genus Euglena, based on electron microscope observations by Gibbs [START_REF] Gibbs | May Have Evolved From Symbiotic Green Algae[END_REF], led her to propose that this chloroplast evolved from an endosymbiotic event with a green alga. This was a significant change in the endosymbiotic theory and opened the possibility that some algae could have obtained their plastids through eukaryote-into-eukaryote endosymbiosis. It added a layer of complexity to the evolution of photosynthetic eukaryotes, where simple parsimonious models may not reflect the degree of reticulation and recurrence of endosymbiotic events. For instance, the parsimonious Cabozoa hypothesis proposed by Cavalier-Smith (1999) posited a common secondary origin of the green plastids in euglenids and chlorarachniophytes. However, phylogenetic analyses helped to elucidate the origin of the chloroplasts in both green lineages, and it is now clear that they were acquired independently (Rogers et al., 2007). This shows how the endosymbiotic theory continues to change, assimilating new information, new methods and new techniques, but always working with the experience and knowledge that have been accumulated (Fig. 4).
Primary photosynthetic eukaryotes
1.2.1 Origin of the chloroplast: from cyanobacterium to organelle
Oxygenic photosynthesis appeared in Cyanobacteria possibly after their divergence from their closest non-photosynthetic relatives, the recently described Melainabacteria, approximately 2.6 bya [START_REF] Shih | Crown group Oxyphotobacteria postdate the rise of oxygen[END_REF][START_REF] Soo | On the origins of oxygenic photosynthesis and aerobic respiration in Cyanobacteria[END_REF]. This was a major event in the history of Earth that started the rise of oxygen and drastically modified the biogeochemical cycles and whole ecosystems. Then, around 1.6 bya a cyanobacterium spread the ability to photosynthesize through the establishment of an endosymbiotic relationship with a protist, giving rise to Archaeplastida, a monophyletic supergroup of photosynthetic eukaryotes that comprises glaucophytes, red algae, green algae and land plants (Rodríguez-Ezpeleta et al., 2005; Yoon et al., 2004).

Figure 3. Composite evolutionary tree based on ferredoxin, c-type cytochromes and 5S ribosomal RNA sequences, depicting the prokaryotic ancestry of mitochondria and chloroplasts (adapted from [START_REF] Schwartz | Origins of prokaryotes, eukaryotes, mitochondria, and chloroplasts[END_REF]).

Figure 4. Timeline of landmark events and publications in the history of the endosymbiotic theory.
Endosymbiotic Gene Transfer
Chloroplast genomes commonly encode 60-150 proteins whereas free-living cyanobacteria harbor between 2,500-7,000 proteins, with the genomes of N2-fixing cyanobacteria from the order Stigonematales reaching more than 12,000 encoded proteins (Dagan et al., 2013;[START_REF] Shi | Genome evolution in cyanobacteria: The stable core and the variable shell[END_REF]. This dramatic reduction in the coding capacity of chloroplast genomes compared with cyanobacteria is associated to massive gene loss but also to the relocation of cyanobacterial genes into the nuclear genome of the host, a process termed Endosymbiotic Gene Transfer (EGT) [START_REF] Timmis | Endosymbiotic gene transfer: organelle genomes forge eukaryotic chromosomes[END_REF]. Mitochondria, the other bacterial-derived organelles in eukaryotic cells, carry also extremely reduced genomes. It seems to be an adaptation to life within the host cell, also observed in intracellular parasites [START_REF] Khachane | Dynamics of reductive genome evolution in mitochondria and obligate intracellular microbes[END_REF][START_REF] Sakharkar | Genome reduction in prokaryotic obligatory intracellular parasites of humans: A comparative analysis[END_REF]. Estimations of the contribution of cyanobacterial EGTs to the nuclear genome of primary plastid-bearing eukaryotes can considerably differ among lineages but usually represent between 5 to 18 % of the protein-coding genes of the nucleus, with land plants having the largest set of these genes (Dagan et al., 2013;[START_REF] Makai | A census of nuclear cyanobacterial recruits in the plant kingdom[END_REF]Martin et al., 2002;Reyes-Prieto et al., 2006).
EGTs are considered to have played a major role in the establishment of plastids.
Initially, the host who fed on cyanobacteria would have integrated random pieces of DNA after digestion (Fig. 5). This continuous supply of cyanobacterial genes increased the host gene repertoire providing a new set of genes that could be functionally active after the addition of promoter sequences [START_REF] Stegemann | Experimental Reconstruction of Functional Gene Transfer from the Tobacco Plastid Genome to the Nucleus[END_REF]. After EGTs are efficiently transcribed and imported into the plastid, their homologs in the plastid genome can undergo pseudogenization followed by complete gene loss (Fig. 5).
The genes transferred to the host nucleus created a dependence of the symbiont that, together with host-derived proteins targeted to the plastid, increased the control of the host over the symbiont's autonomy, for instance in plastid division [START_REF] Miyagishima | Mechanism of Plastid Division: From a Bacterium to an Organelle[END_REF]. To import proteins into the plastid, it was necessary to set up an import system, which derived from recycled cyanobacterial transporters and host genes (Shi & Theg, 2013) (see below).

Figure 5. Acquisition of genes by the host via EGT and HGT during the plastid establishment.
Although most of the proteins of cyanobacterial origin are targeted back to the plastid (Reyes-Prieto et al., 2006), some EGTs have been neofunctionalized to play new roles in the cytosol expanding the metabolic capacities of the host [START_REF] Makai | A census of nuclear cyanobacterial recruits in the plant kingdom[END_REF].
It is not completely clear why some genes are transferred while others continue to be encoded in the plastid (or mitochondrial) genome. Protein hydrophobicity has been proposed as a possible explanation of whether a protein is likely to be transferred or not; according to this view, hydrophobic proteins tend to remain encoded in the organelles [START_REF] Von Heijne | Why mitochondria need a genome[END_REF]. Another hypothesis suggests that organelle-encoded proteins continue to be encoded in situ because they require tight and continuous redox regulation to respond to changes in the physical environment [START_REF] Allen | Why chloroplasts and mitochondria retain their own genomes and genetic systems: Colocation for redox regulation of gene expression[END_REF].
Bacterial genes in primary photosynthetic eukaryotes
Although primary plastids derived from a cyanobacterial endosymbiont, this was not the only prokaryotic contributor to the plastid proteome. For instance, [START_REF] Suzuki | Eukaryotic and Eubacterial Contributions to the Establishment of Plastid Proteome Estimated by Large-Scale Phylogenetic Analyses[END_REF] showed that more than 20% of plastid-targeted proteins in the thermoacidophilic red alga Cyanidioschyzon merolae derived from bacteria other than cyanobacteria, likely acquired via horizontal gene transfer (HGT). Likewise, 7% of the proteins in the cyanelle of Cyanophora paradoxa had bacterial origins [START_REF] Price | Cyanophora paradoxa Genome Elucidates Origin of Photosynthesis in Algae and Plants[END_REF][START_REF] Qiu | Assessing the bacterial contribution to the plastid proteome[END_REF]. Interestingly, it has been proposed that genes horizontally transferred to freshwater algae may have facilitated the colonization of land [START_REF] Yue | Widespread impact of horizontal gene transfer on plant colonization of land[END_REF].
Differences in proportion of proteins of bacterial origin are probably due to the combination of differential loss of ancient HGTs acquired by the host before or during the plastid establishment (Fig 5 .) coupled with lineage-specific transfers after diversification of the Archaeplastida lineages. For instance, the plastid-encoded ribulose-1,5-bisphosphate carboxylase/oxygenase of red algae was transferred to the rhodophyte ancestor from a proteobacterial source [START_REF] Delwiche | Rampant horizontal transfer and duplication of rubisco genes in eubacteria and plastids[END_REF].
Several bacterial genes were likely transferred to the ancestor of Archaeplastida before the loss of phagotrophy [START_REF] Doolittle | You are what you eat: A gene transfer ratchet could account for bacterial genes in eukaryotic nuclear genomes[END_REF] . These genes could have been retained to counter the massive gene loss (more than 95%) occurred in the endosymbiont genome during primary endosymbiosis. It is possible, however, that some HGT genes were present in the genome of the cyanobacterial endosymbiont and subsequently passed as EGT into the nuclear genome of the host (Fig. 5). Interestingly, phylogenetic analyses of cyanobacterial genomes have shown high frequency of horizontal gene transfer resulting in chimeric genomes [START_REF] Gross | Evidence of a chimeric genome in the cyanobacterial ancestor of plastids[END_REF][START_REF] Zhaxybayeva | Phylogenetic analyses of cyanobacterial genomes: Quantification of horizontal gene transfer events[END_REF]. Although the HGT donors belong to multiple bacterial phyla, Proteobacteria and Chlamydiae appeared to be the main contributors after cyanobacteria [START_REF] Qiu | Assessing the bacterial contribution to the plastid proteome[END_REF]. Some authors have explained the apparent overrepresentation of chlamydial-like proteins in Archaeplastida with a tripartite symbiotic model of plastid acquisition where a chlamydial symbiont helped in the early steps of the plastid establishment by protecting the cyanobiont from host defenses in addition to supply the transporters needed to export the photosynthate as well as enzymes to the host to metabolize it [START_REF] Brinkman | Evidence that plant-like genes in Chlamydia species reflect an ancestral relationship between chlamydiaceae, cyanobacteria, and the chloroplast[END_REF][START_REF] Facchinelli | Chlamydia, cyanobiont, or host: Who was on top in the ménage à trois?[END_REF] (see below).
Host-derived genes in the chloroplast proteome
Most of the cyanobacterial proteins that are encoded in the nucleus of primary photosynthetic eukaryotes are targeted to the plastid (Reyes-Prieto et al., 2006). However, they only represent about 60% of the plastid proteome; the other 40 % derive from preexisting host genes or were acquired from non-cyanobacterial prokaryotes, probably via HGT [START_REF] Suzuki | Eukaryotic and Eubacterial Contributions to the Establishment of Plastid Proteome Estimated by Large-Scale Phylogenetic Analyses[END_REF].
It has been suggested that host genes could have helped the host-symbiont integration by providing a set of transporters that facilitated the exchange of metabolites and the import of proteins [START_REF] Karkar | Metabolic connectivity as a driver of host and endosymbiont integration[END_REF](Shi & Theg, 2013)[START_REF] Suzuki | Eukaryotic and Eubacterial Contributions to the Establishment of Plastid Proteome Estimated by Large-Scale Phylogenetic Analyses[END_REF]. Interestingly, host-derived proteins account for more than half of the metabolite transporters in the chloroplast membranes [START_REF] Tyra | Host origin of plastid solute transporters in the first photosynthetic eukaryotes[END_REF]. However, the host contribution to the plastid proteome is not restricted to transporter-related proteins; it also includes proteins involved in carbohydrate metabolism, RNA processing, heme biosynthesis and protein translation in the chloroplast. Thus, these proteins, together with bacterial ones, have created mosaic plastid proteomes [START_REF] Karkar | Metabolic connectivity as a driver of host and endosymbiont integration[END_REF][START_REF] Oborník | Mosaic origin of the heme biosynthesis pathway in photosynthetic eukaryotes[END_REF].
Endosymbiotic gene replacement with host-derived genes, either by duplication of host genes or dual targeting of nucleus-encoded proteins (e.g aminoacyl t-RNA synthetases) [START_REF] Duchene | Dual targeting is the rule for organellar aminoacyl-tRNA synthetases in Arabidopsis thaliana[END_REF] increased the dependency of the symbiont hindering the shift to a free-living lifestyle. The mosaicism of plastid functions together with the contribution to the endosymbiont division machinery [START_REF] Pyke | Plastid division[END_REF] may have facilitated that the enslaved cyanobiont became a permanent organelle.
TIC/TOC complex
The vast majority of plastid-located proteins in Archaeplastida lineages are encoded in the nuclear genome and need to be targeted and imported into the plastid [START_REF] Suzuki | Eukaryotic and Eubacterial Contributions to the Establishment of Plastid Proteome Estimated by Large-Scale Phylogenetic Analyses[END_REF][START_REF] Zybailov | Sorting signals, N-terminal modifications and abundance of the chloroplast proteome[END_REF]. To assure that requirement, primary plastids evolved a translocation machinery at the outer and inner membranes, the TOC and TIC complexes, respectively. These two multiprotein complexes have a hybrid nature: some transporters and receptors derive from the endosymbiont (e.g.Toc75, Tic22 and Tic55) while other proteins have eukaryotic origin (e.g. Tic110, Toc159 and Toc34) (Shi & Theg, 2013). Detection of core proteins of the TIC-TOC complex in all algae and plants suggests that the import system was already established in last common ancestor of Archaeplastida (McFadden & Van Dooren, 2004).
To target nucleus-encoded proteins to the plastid, these proteins carry an N-terminal transit peptide (TP) enriched in hydroxylated serine residues [START_REF] Bhushan | The role of the Nterminal domain of chloroplast targeting peptides in organellar protein import and miss-sorting[END_REF] that is recognized at the outer chloroplast membrane by the TOC machinery. Then, the protein is translocated across the inner membrane via the TIC complex in an ATP-dependent manner. Once inside the stroma, the TP is cleaved off by a stromal processing peptidase [START_REF] Kovács-Bogdán | Protein import into chloroplasts: The Tic complex and its regulation[END_REF][START_REF] Paila | New insights into the mechanism of chloroplast protein import and its integration with protein quality control, organelle biogenesis and development[END_REF].
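The serine/threonine bias of transit peptides lends itself to a simple illustration. The Python sketch below scores the N-terminal region of a protein for Ser/Thr enrichment; it is only a toy heuristic (real targeting prediction relies on trained models such as TargetP), and the example sequence and the 50-residue window are arbitrary assumptions introduced here for illustration.

```python
# Toy illustration (not a real targeting predictor): score the N-terminal region
# of a protein for serine/threonine enrichment, the compositional bias described
# above for chloroplast transit peptides. Sequence and window size are arbitrary.

def hydroxylated_fraction(protein_seq: str, window: int = 50) -> float:
    """Fraction of Ser/Thr residues in the first `window` residues."""
    n_term = protein_seq[:window].upper()
    if not n_term:
        return 0.0
    return sum(aa in "ST" for aa in n_term) / len(n_term)

if __name__ == "__main__":
    candidate = "MASSSLSTTSVSSSRLAPKQSVSFNGLKSTAGLPMSRRSGSVRCEAAAKPETVEK"  # hypothetical sequence
    frac = hydroxylated_fraction(candidate)
    print(f"Ser/Thr fraction in N-terminus: {frac:.2f}")
    # A high fraction is only suggestive; genuine plastid targeting must be
    # confirmed experimentally or with dedicated predictors.
```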
The ménage-à-trois hypothesis
Primary photosynthetic eukaryotes have chimeric nuclear genomes with genes derived from diverse sources (Deusch et al., 2008;Martin et al., 2002). Most of the bacterial genes come from alpha-proteobacteria and cyanobacteria, the clades that gave rise to mitochondria and primary plastids, respectively (Deusch et al., 2008;[START_REF] Ku | Endosymbiotic origin and differential loss of eukaryotic genes[END_REF]). Surprisingly, Chlamydiae seem to have transferred between 20 and 60 genes to the nuclear genome of the ancestor of Archaeplastida [START_REF] Ball | Metabolic Effectors Secreted by Bacterial Pathogens: Essential Facilitators of Plastid Endosymbiosis?[END_REF][START_REF] Becker | Chlamydial genes shed light on the evolution of photoautotrophic eukaryotes[END_REF][START_REF] Brinkman | Evidence that plant-like genes in Chlamydia species reflect an ancestral relationship between chlamydiaceae, cyanobacteria, and the chloroplast[END_REF][START_REF] Huang | Did an ancient chlamydial endosymbiosis facilitate the establishment of primary plastids?[END_REF]. These genes participate in a broad spectrum of functions such as fatty acid biosynthesis, membrane transport, carbohydrate metabolism, rRNA processing and amino acid biosynthesis, but it was the enzymes involved in carbohydrate (glycogen) metabolism that led [START_REF] Ball | Metabolic Effectors Secreted by Bacterial Pathogens: Essential Facilitators of Plastid Endosymbiosis?[END_REF] to propose a major role for Chlamydiae during the early establishment of the cyanobiont. This model is called the ménage-à-trois hypothesis and suggests that the cyanobiont might have been phagocytized by a host infected with a chlamydia-like parasite. The latter would then have supplied the enzymes to transform the photosynthate produced by the cyanobiont (ADP-glucose) into a form available to the host. In this way, the chlamydial symbiont would have functioned as a mediator between the host and the cyanobiont, facilitating its integration. It is noteworthy that Chlamydiales are common parasites and can infect a wide range of eukaryotic hosts, from animals to protists, but they are not known to parasitize Archaeplastida [START_REF] Horn | Chlamydiae as Symbionts in Eukaryotes[END_REF].
According to the initial proposal of the ménage-à-trois hypothesis [START_REF] Ball | Metabolic Effectors Secreted by Bacterial Pathogens: Essential Facilitators of Plastid Endosymbiosis?[END_REF] (Fig. 6A), early during endosymbiosis the cyanobiont recruited a nucleotide-sugar transporter (NST) from the host endomembrane system to the inner membrane of the future plastid. This transporter would have helped to translocate the ADP-glucose produced by the cyanobiont into the cytoplasm of the host. However, eukaryotes use UDP-glucose as the precursor for glycogen synthesis and would not have been able to use the exported ADP-glucose. Thus, the chlamydial glycogen synthase GlgA would have catalyzed the incorporation of ADP-glucose into the glycogen reservoir of the host, working as a bridge between the carbohydrate metabolism of the cyanobiont and that of the host. Then, additional effectors such as the glycogen phosphorylase GlgP and the glycogen debranching enzyme GlgX would catabolize the host glycogen, releasing glucose-1-P and maltotetraose, respectively; while maltotetraose would be imported into the chlamydial symbiont, glucose-1-P would be metabolized by the host.
However, although the plastid-targeted NST transporter is monophyletic in green and red algae [START_REF] Weber | Single, ancient origin of a plastid metabolite translocator family in Plantae from an endomembrane-derived ancestor[END_REF], none of the NST proteins in Cyanophora paradoxa seems to be located at the plastid membrane, suggesting that glaucophytes do not use the same transporter system as the other two lineages to export the photosynthate [START_REF] Price | Cyanophora paradoxa Genome Elucidates Origin of Photosynthesis in Algae and Plants[END_REF]; therefore, it seems that the ADP-glucose export system was not established in the common ancestor of Archaeplastida but possibly after the divergence of glaucophytes.
Cyanophora paradoxa exports the photosynthate in the form of glucose-6-phosphate through a UhpC-like transporter of apparent chlamydial origin [START_REF] Price | Cyanophora paradoxa Genome Elucidates Origin of Photosynthesis in Algae and Plants[END_REF]. The putative ancestral state of the cyanelle, together with the discovery of UhpC homologs in the nuclear genomes of red and green algae [START_REF] Price | Cyanophora paradoxa Genome Elucidates Origin of Photosynthesis in Algae and Plants[END_REF], led to an updated ménage-à-trois model [START_REF] Facchinelli | Chlamydia, cyanobiont, or host: Who was on top in the ménage à trois?[END_REF] (Fig. 6B) suggesting that UhpC was the transporter present in the ancestor of Archaeplastida, whereas the NST transporter was acquired later in the common ancestor of Rhodophyta and Viridiplantae.
In this updated ménage-à-trois model, the cyanobiont inhabited the same phagosomal vesicle as the Chlamydia-like bacterium, forming a composite organelle called the "chlamydioplast" [START_REF] Facchinelli | Chlamydia, cyanobiont, or host: Who was on top in the ménage à trois?[END_REF]. Here, the cyanobiont would have exported glucose-6-phosphate via the chlamydial-derived UhpC transporter, to be converted to ADP-glucose and exported into the host cytoplasm through the host-derived NST transporter (Fig. 6B). Once outside the chlamydioplast, the ADP-glucose would have participated in the glycogen biosynthesis of the host with the help of chlamydial effectors, in a similar manner as proposed in the initial model [START_REF] Ball | Metabolic Effectors Secreted by Bacterial Pathogens: Essential Facilitators of Plastid Endosymbiosis?[END_REF]. During this period of cyanobiont-Chlamydiae cohabitation, several genes are hypothesized to have been horizontally transferred from the chlamydial partner to the cyanobiont and the eukaryotic host (e.g. carbohydrate metabolism-related genes), thereby breaking the tripartite interdependency, which ended with the complete loss of the chlamydial symbiont.
Then, while the early-branching Glaucophyta kept the peptidoglycan wall inherited from the cyanobiont, this wall was lost in the common ancestor of Rhodophyta and Viridiplantae, opening the possibility of evolving or relocating other membrane transporters such as the NST transporter, considered to have been initially located in the chlamydioplast membrane [START_REF] Facchinelli | Chlamydia, cyanobiont, or host: Who was on top in the ménage à trois?[END_REF].
Was there a chlamydioplast?
The proposal of the ménage-à-trois hypothesis created an interesting metabolism-based model to explain the genes of apparent chlamydial origin detected in the nuclear genomes of Archaeplastida [START_REF] Ball | Metabolic Effectors Secreted by Bacterial Pathogens: Essential Facilitators of Plastid Endosymbiosis?[END_REF][START_REF] Becker | Chlamydial genes shed light on the evolution of photoautotrophic eukaryotes[END_REF][START_REF] Brinkman | Evidence that plant-like genes in Chlamydia species reflect an ancestral relationship between chlamydiaceae, cyanobacteria, and the chloroplast[END_REF][START_REF] Huang | Did an ancient chlamydial endosymbiosis facilitate the establishment of primary plastids?[END_REF]. However, the role of Chlamydiae in shaping the nuclear genome of primary photosynthetic algae could have been overestimated (Dagan et al., 2013;[START_REF] Deschamps | Primary endosymbiosis: Have cyanobacteria and Chlamydiae ever been roommates?[END_REF][START_REF] Domman | Plastid establishment did not require a chlamydial partner[END_REF]). For instance, bacterial phyla other than Chlamydiae (i.e. Gammaproteobacteria, Actinobacteria, Deltaproteobacteria, Bacilli, Bacteroidetes, Betaproteobacteria) outnumber Chlamydiae in the number of genes transferred to the nuclear genomes of Archaeplastida (Dagan et al., 2013).
Recent reanalyses of putative chlamydial genes have cast doubt on the contribution of this bacterial group, weakening the major role proposed for a chlamydial symbiont in the onset of primary plastids. The contribution of Chlamydiae seems to have been greatly overestimated due to a combination of factors, such as the reconstruction of phylogenetic trees with insufficient taxon sampling and the use of poorly fitting models of protein evolution [START_REF] Deschamps | Primary endosymbiosis: Have cyanobacteria and Chlamydiae ever been roommates?[END_REF][START_REF] Domman | Plastid establishment did not require a chlamydial partner[END_REF]. For instance, according to the ménage-à-trois hypothesis most chlamydial-derived genes would have been transferred during the existence of the chlamydioplast, before the diversification of Archaeplastida; however, [START_REF] Deschamps | Primary endosymbiosis: Have cyanobacteria and Chlamydiae ever been roommates?[END_REF] showed that less than 30% of these genes strictly support a HGT from Chlamydiae to the ancestor of Archaeplastida. Therefore, some chlamydial-derived genes in Archaeplastida could be lineage-specific acquisitions via punctual HGTs.
Moreover, the genes involved in carbohydrate metabolism, which are thought to be at the origin of the proposed metabolic association with a chlamydial symbiont, seem to have a mixed origin, with genes derived from the cyanobiont, the eukaryotic host and HGT from bacteria, but not necessarily Chlamydiae [START_REF] Domman | Plastid establishment did not require a chlamydial partner[END_REF]. Taken together, there seems to be no compelling evidence that the establishment of primary plastids was mediated by a chlamydial symbiont.
Early-or late-branching cyanobacterium?
The monophyly of Archaeplastida is supported by phylogenomic analyses of nuclear, plastid and mitochondrial genes [START_REF] Jackson | The mitochondrial genomes of the glaucophytes gloeochaete wittrockiana and cyanoptyche gloeocystis: Multilocus phylogenetics suggests amonophyletic archaeplastida[END_REF] (Rodríguez-Ezpeleta et al., 2005). However, the precise identity of the host and of the cyanobacterial endosymbiont remains uncertain. Because of the difficulties in resolving deep phylogenetic relationships within the eukaryotic tree, the description of the host is limited. Regarding the cyanobacterial endosymbiont, early sequencing of plastid and cyanobacterial genes produced the first studies on the origin of the plastid [START_REF] Nelissen | An early origin of plastids within the cyanobacterial divergence is suggested by evolutionary trees based on complete 16S rRNA sequences[END_REF] (Turner et al., 1999). Then, the increase of genomic data improved and expanded the phylogeny of cyanobacteria, opening the possibility of identifying the closest cyanobacterial relative of the plastid ancestor. Publications that addressed this question can be divided into two types depending on the phylogenetic position they infer for the plastid ancestor, that is, whether the endosymbiont was an early- or a late-branching cyanobacterium (Table 1). Early-branching models suggest that the endosymbiont that entered into a symbiotic association with the eukaryotic ancestor of Archaeplastida belonged to a cyanobacterial clade near the base of the cyanobacterial tree, whereas late-branching models support that it emerged later during the diversification of cyanobacteria.
It has been suggested that the N2-fixation capability common in some late-branching cyanobacterial clades could have been the initial driver of endosymbiosis, providing the host with an element that was scarce in the oceans during the Proterozoic [START_REF] Anbar | Proterozoic Ocean Chemistry and Evolution: A Bioinorganic Bridge?[END_REF]. Thus, diverse late-branching N2-fixing cyanobacterial lineages (unicellular or filamentous) have been proposed as putative sister groups of the plastid ancestor (Table 1). Interestingly, nitrogen-dependent symbioses are common between cyanobacteria and different groups of eukaryotes such as plants, fungi and protists [START_REF] Kneip | Nitrogen fixation in eukaryotes-New models for symbiosis[END_REF][START_REF] Lesser | Discovery of Symbiotic Nitrogen-Fixing Cyanobacteria in Corals[END_REF].
Table 1. Characteristics of studies that have proposed either an early or a late branching of the plastid lineage within the phylogeny of Cyanobacteria. NA: not applicable.
Archaeplastida lineages
1.2.2.1 Viridiplantae
Viridiplantae is a monophyletic group composed of two lineages: Chlorophyta and Streptophyta. Chlorophyta includes a broad diversity of freshwater, terrestrial and marine green algae, comprising the early-diverging prasinophytes and the core chlorophytes [START_REF] Leliaert | Phylogeny and Molecular Evolution of the Green Algae[END_REF]. Streptophyta comprises the charophytes, a group of freshwater green algae, and the embryophytes or land plants (Fig. 7). The divergence between Streptophyta and Chlorophyta is estimated to have occurred more than 1 billion years ago, followed by the adaptive radiation of both lineages [START_REF] Leliaert | Into the deep: New discoveries at the base of the green plant phylogeny[END_REF][START_REF] Moczydłowska | The early Cambrian phytoplankton radiation: Acritarch evidence from the Lukati Formation, Estonia[END_REF]. Chlorophytes diversified in the oceans whereas streptophytes evolved in freshwater habitats [START_REF] Becker | Streptophyte algae and the origin of embryophytes[END_REF][START_REF] Fang | Evolution of the Chlorophyta: Insights from chloroplast phylogenomic analyses[END_REF]. The prasinophytes are a paraphyletic group within Chlorophyta that inhabit mostly marine environments, with some freshwater clades [START_REF] Leliaert | Phylogeny and Molecular Evolution of the Green Algae[END_REF]. Phylogenetic analyses of nuclear and plastid multigene data have struggled to resolve the relationships among the different clades of prasinophytes because of the rapid early radiation of the group and possibly multiple extinctions of ancient lineages [START_REF] Cocquyt | Evolution and cytological diversification of the green seaweeds (Ulvophyceae)[END_REF][START_REF] Fang | Evolution of the Chlorophyta: Insights from chloroplast phylogenomic analyses[END_REF].
The core chlorophytes include ecologically and morphologically diverse green algae. This assemblage contains three species-rich classes, Ulvophyceae, Trebouxiophyceae and Chlorophyceae (UTC), and two smaller lineages, Pedinophyceae and Chlorodendrophyceae [START_REF] Leliaert | Phylogeny and Molecular Evolution of the Green Algae[END_REF]. The class Ulvophyceae includes mostly multicellular coastal seaweeds (e.g. Ulva) but also unicellular organisms such as the macroscopic single-celled Acetabularia. The Trebouxiophyceae is a group of green algae with a broad range of morphologies (unicellular and multicellular) and ecological niches (marine, freshwater and terrestrial). Several species are well known to establish symbiotic relationships with diverse groups of eukaryotes. For instance, some trebouxiophycean algae participate in symbiotic relationships with fungi to form lichens [START_REF] Friedl | Origin and Evolution of Green Lichen Algae BT -Symbiosis: Mechanisms and Model Systems[END_REF]. Other species are photosynthetic symbionts of ciliates [START_REF] Summerer | Ciliate-symbiont specificity of freshwater endosymbiotic chlorella (trebouxiophyceae, chlorophyta) 1[END_REF] and marine invertebrates [START_REF] Letsch | Elliptochloris marina sp. nov. (trebouxiophyceae, chlorophyta), symbiotic green alga of the temperate pacific sea anemones anthopleura xanthogrammica and a. elegantissima[END_REF].
Streptophyta is divided into two groups: the charophytes (a paraphyletic group of freshwater green algae) and the land plants [START_REF] Leliaert | Phylogeny and Molecular Evolution of the Green Algae[END_REF]. Recent phylogenomic analyses of chloroplast genomes point to the charophyte class Zygnematophyceae as the closest living relatives of embryophytes (land plants) [START_REF] Zhong | The Origin of Land Plants: A Phylogenomic Perspective[END_REF]. The colonization of terrestrial habitats was a gradual transition from a freshwater lifestyle through moist habitats and eventually to dry land [START_REF] Becker | Streptophyte algae and the origin of embryophytes[END_REF]. The evolution of land plants had important ecological consequences: it boosted global primary production, increased the atmospheric levels of oxygen, changed the biogeochemical cycles and reshaped terrestrial ecosystems [START_REF] Bateman | Early evolution of land plants: phylogeny, physiology, and ecology of the primary terrestrial radiation[END_REF][START_REF] Lenton | The role of land plants, phosphorus weathering and fire in the rise and regulation of atmospheric oxygen[END_REF][START_REF] Lenton | Earliest land plants created modern levels of atmospheric oxygen[END_REF].
Chloroplasts of green algae and land plants contain genomes usually in the range of 120-200 kbp, with some exceptions such as the non-photosynthetic plastid of the trebouxiophycean parasite Helicosporidium sp. (37.4 kbp) and that of the non-photosynthetic orchid Epipogium roseum (19 kbp) [START_REF] Schelkunov | Exploring the Limits for Reduction of Plastid Genomes: A Case Study of the Mycoheterotrophic Orchids Epipogium aphyllum and Epipogium roseum[END_REF]. At the upper limit, the largest green algal chloroplast genome sequenced so far is that of Volvox carteri (525 kbp) [START_REF] Smith | Low Nucleotide Diversity for the Expanded Organelle and Nuclear Genomes of Volvox carteri Supports the Mutational-Hazard Hypothesis[END_REF]. However, the macroscopic Acetabularia is thought to have a chloroplast genome of up to 2000 kbp, although it has never been completely sequenced [START_REF] Palmer | Comparative Organization of Chloroplast Genomes[END_REF].
Rhodophyta
Rhodophyta (red algae) is a major lineage of primary plastid-bearing eukaryotes with ~7,100 identified species [START_REF] Guiry | World-wide electronic publication[END_REF]. Red algae are overwhelmingly represented in marine, coastal and brackish habitats, with more than 95% of species living in these environments. The phylum Rhodophyta is divided into three subphyla: the early-branching subphylum Cyanidiophytina, composed of freshwater unicellular algae living in high-temperature environments (e.g. Galdieria sulphuraria); the recently proposed subphylum Proteorhodophytina, which comprises mesophilic non-seaweed algae (e.g. Bulboplastis apyrenoidosa); and the subphylum Eurhodophytina, with mesophilic seaweeds (e.g. Pyropia perforata) (Muñoz-Gómez et al., 2017) (Fig. 7).
Within Rhodophyta, cyanidiophycean algae diverged first, prior to the secondary endosymbiotic event that gave rise to secondary red plastids [START_REF] Kim | Evolutionary dynamics of cryptophyte plastid genomes[END_REF][START_REF] Ševčíková | Updating algal evolutionary relationships through plastid genome sequencing: did alveolate plastids emerge through endosymbiosis of an ochrophyte[END_REF]. These organisms thrive in high-temperature and acidic environments near volcanic areas such as hot springs [START_REF] Toplin | Biogeographic and phylogenetic diversity of thermoacidophilic cyanidiales in Yellowstone National Park, Japan, and New Zealand[END_REF]. Interestingly, the common ancestor of cyanidiophytes is thought to have undergone an important genome reduction, probably as an adaptation to the extremophilic lifestyle [START_REF] Qiu | Evidence of ancient genome reduction in red algae (Rhodophyta)[END_REF].
The subphylum Eurhodophytina is a monophyletic assemblage composed of two classes: Bangiophyceae and Florideophyceae. These two lineages harbor the vast majority of rhodophyte diversity. Approximately 99 % of the red algal species that have hitherto been described are placed within these classes (mainly in Florideophyceae) [START_REF] Guiry | World-wide electronic publication[END_REF]. Interestingly, the oldest eukaryotic fossil (widely accepted as such) is a bangiophycean species, Bangiomorpha pubescens, estimated to be 1.2 billion years old [START_REF] Butterfield | Bangiomorpha pubescens n. gen., n. sp.: implications for the evolution of sex, multicellularity, and the Mesoproterozoic/Neoproterozoic radiation of eukaryotes[END_REF][START_REF] Yang | Divergence time estimates and the evolution of major lineages in the florideophyte red algae[END_REF].
The subphylum Proteorhodophytina comprises unicellular and filamentous mesophilic algae divided into four classes: Porphyridiophyceae, Compsopogonophyceae, Rhodellophyceae and Stylonematophyceae. Species of this subphylum show an impressive variability in the architecture and size of their plastid genomes (Muñoz-Gómez et al., 2017). For instance, Corynoplastis japonica (Rhodellophyceae) harbors the largest completely sequenced plastid genome known to date (1,127 kbp). This unusual size is due to a high proportion of intronic sequences, which account for more than 60% of the total plastid DNA (Muñoz-Gómez et al., 2017).
Glaucophyta
Glaucophytes are a relatively rare group of unicellular freshwater algae with primary plastids called cyanelles. This group of algae is species-poor with only 22 described species grouped within one class (Glaucophyceae) and 4 genera (Cyanophora, Cyanoptyche, Glaucocystis and Gloeochaete); a very small number in comparison to more than 7000 rhodophyte species and almost 11,000 species of green algae and land plants [START_REF] Guiry | World-wide electronic publication[END_REF]. However, there might be cryptic species diversity difficult to identify because of culture biases [START_REF] Jackson | The Glaucophyta: The blue-green plants in a nutshell[END_REF].
To harvest light, cyanelles use phycobilisomes, complexes of phycobiliproteins (e.g. C-phycocyanin and allophycocyanin) attached to the thylakoids. Together with chlorophyll a, these pigments give glaucophyte cyanelles their characteristic blue color; phycobilisomes occur only in cyanobacteria, glaucophytes and red algae [START_REF] Green | Light-harvesting antennas in photosynthesis[END_REF].
Cyanelles retained the peptidoglycan wall and carboxysomes of the cyanobacterial ancestor, in contrast to Rhodophyta and Viridiplantae, where these components were lost [START_REF] Fathinejad | A carboxysomal carbon-concentrating mechanism in the cyanelles of the "coelacanth" of the algal world, Cyanophora paradoxa?[END_REF][START_REF] Pfanzagl | Primary structure of cyanelle peptidoglycan of Cyanophora paradoxa: A prokaryotic cell wall as part of an organelle envelope[END_REF]. These ancestral-looking characters of the cyanelle, in addition to some phylogenetic reconstructions of nuclear and plastid genes, suggest that Glaucophyta was the first lineage to diverge within Archaeplastida [START_REF] Burki | Phylogenomics reveals a new "megagroup" including most photosynthetic eukaryotes[END_REF][START_REF] Reyes-Prieto | Phylogeny of nuclear-encoded plastidtargeted proteins supports an early divergence of glaucophytes within Plantae[END_REF] (Rodríguez-Ezpeleta et al., 2005). Nonetheless, phylogenetic and phylogenomic studies supporting "Rhodophyta-first" or "Viridiplantae-first" models are also common in the scientific literature. Thus, the branching order of Archaeplastida lineages remains an unresolved question (Deschamps & Moreira, 2009; [START_REF] Hagopian | Comparative analysis of the complete plastid genome sequence of the red alga Gracilaria tenuistipitata var. liui provides insights into the evolution of rhodoplasts and their relationship to other plastids[END_REF] Mackiewicz & Gagat, 2014).
Paulinella chromatophora, a second primary plastid-bearing lineage
There are only two recognized primary endosymbioses: the first gave rise to the plastids of Archaeplastida, whereas the second took place in the heterotrophic ancestor of the filose amoeba Paulinella chromatophora (Marin et al., 2005). In the case of Paulinella, the chromatophore (chloroplast) derives from an endosymbiosis with a cyanobacterium closely related to the marine Synechococcus/Prochlorococcus (Syn/Pro) clade (Marin et al., 2005). Interestingly, Syn/Pro members are known to be common cyanobionts of sponges and can be transmitted vertically [START_REF] Burgsdorf | Lifestyle evolution in cyanobacterial symbionts of sponges[END_REF][START_REF] Steindler | 16S rRNA phylogeny of spongeassociated cyanobacteria[END_REF].
Similar to the cyanelles of glaucophytes, chromatophores are still surrounded by a peptidoglycan cell wall and harbor carboxysomes, two plesiomorphic characters attesting to their cyanobacterial ancestry [START_REF] Kies | Elektronenmikroskopische Untersuchungen anPaulinella chromatophora Lauterborn, einer Thekam{ö}be mit blau-gr{ü}nen Endosymbionten (Cyanellen)[END_REF]. Most photosynthetic Paulinella species live in freshwater environments; however, a recently described species, Paulinella longichromatophora, reflects a transition from freshwater to marine habitats [START_REF] Kim | Paulinella longichromatophora sp. nov., a New Marine Photosynthetic Testate Amoeba Containing a Chromatophore[END_REF].
According to molecular clock estimations, photosynthetic Paulinella appeared within the Rhizaria lineage between 60 and 140 million years ago [START_REF] Delaye | How Really Ancient Is Paulinella Chromatophora ?[END_REF][START_REF] Nowack | Chromatophore Genome Sequence of Paulinella Sheds Light on Acquisition of Photosynthesis by Eukaryotes[END_REF]. Nevertheless, other recent studies suggest that the chromatophore is more ancient and appeared approximately 500 Mya [START_REF] Sánchez-Baracaldo | Early photosynthetic eukaryotes inhabited low-salinity habitats[END_REF]. Regardless of the age estimation, it is clear that the primary endosymbiosis in Paulinella is a much more recent event than that of Archaeplastida, which occurred about 1.6 bya (Yoon et al., 2004). The younger age of the Paulinella chromatophore correlates well with the larger size of its genome and the larger number of encoded proteins compared to the much more reduced plastids of Archaeplastida. While the chromatophore genome of Paulinella is approximately 1 Mb in size (with about 860 protein-coding genes) [START_REF] Lhee | Diversity of the Photosynthetic Paulinella Species, with the Description of Paulinella micropora sp. nov. and the Chromatophore Genome Sequence for strain KR01[END_REF], green and red algae commonly harbor chloroplast genomes in the range of 100-250 kbp (~80-220 proteins) [START_REF] Lee | Parallel evolution of highly conserved plastid genome architecture in red seaweeds and seed plants[END_REF]. Nonetheless, the chromatophore genome is also heavily reduced: it encodes only about 25% of the proteins found in free-living cyanobacteria of the Syn/Pro clade. Moreover, genome reduction (through pseudogenization and gene loss) seems to be still in progress (Nowack et al., 2008).
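As a quick consistency check of the figures quoted above, the implied proteome size of a free-living Syn/Pro relative and the contrast with Archaeplastida plastids follow from simple arithmetic; the short Python sketch below does exactly that and is purely illustrative (the numbers are the rounded values from the text).

```python
# Back-of-the-envelope arithmetic with the figures quoted in the text:
# ~860 protein-coding genes in the chromatophore, said to represent ~25% of the
# coding capacity of a free-living Syn/Pro cyanobacterium, versus ~80-220 proteins
# encoded by typical Archaeplastida plastid genomes.

chromatophore_proteins = 860
retained_fraction = 0.25            # ~25% of the free-living gene complement
plastid_protein_range = (80, 220)   # typical Archaeplastida plastid genomes

implied_free_living = chromatophore_proteins / retained_fraction
print(f"Implied free-living Syn/Pro proteome: ~{implied_free_living:.0f} proteins")

ratio_vs_plastids = chromatophore_proteins / plastid_protein_range[1]
print(f"Chromatophore encodes ~{ratio_vs_plastids:.1f}x more proteins than the "
      f"upper end of typical Archaeplastida plastid genomes")
```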
Gene content in chromatophores has also been affected by the relocation of chromatophore genes to the host nucleus by EGT. More than 50 putative EGTs have been detected in the nuclear genome of Paulinella chromatophora; their products, after transcription and synthesis in the cytoplasm, are targeted back to the chromatophore. Most of these proteins participate in photosynthesis and photoacclimation-related functions (e.g. the photosystem I subunits PsaE and PsaK) (Nowack et al., 2011[START_REF] Nowack | Gene transfers from diverse bacteria compensate for reductive genome evolution in the chromatophore of Paulinella chromatophora[END_REF][START_REF] Nowack | Trafficking of protein into the recently established photosynthetic organelles of Paulinella chromatophora[END_REF].
Interestingly, many genes horizontally transferred from non-cyanobacterial prokaryotes also function in the chromatophore, probably compensating for the loss of their ancestral chromatophore homologs [START_REF] Nowack | Gene transfers from diverse bacteria compensate for reductive genome evolution in the chromatophore of Paulinella chromatophora[END_REF]. However, the largest contributor of chromatophore-targeted proteins in Paulinella chromatophora is neither the cyanobacterial endosymbiont (EGT) nor other bacteria (HGT) but the host itself [START_REF] Singer | Massive Protein Import into the Early-Evolutionary-Stage Photosynthetic Organelle of the Amoeba Paulinella chromatophora[END_REF].
Origin and evolution of eukaryotic lineages with complex plastids
After primary endosymbiosis, other eukaryotic cells acquired the ability to photosynthesize via the uptake of a red or green algal endosymbiont, a symbiogenetic event known as secondary endosymbiosis. Plastids in chlorarachniophytes and euglenids originated from two independent secondary endosymbioses with green algae, whereas cryptophytes, alveolates, stramenopiles and haptophytes (the CASH lineages) acquired a red algal-derived plastid. Secondary endosymbiotic events have had an important impact on the distribution of photosynthetic lineages across the eukaryotic tree (Fig. 8).
Chlorarachniophytes
Chlorarachniophytes are marine photosynthetic amoeboflagellates with plastids that originated from a secondary endosymbiosis with a green alga [START_REF] Ludwig | Evidence that the nucleomorph of Chlorarachnion reptans (Chlorarachniophyceae) are vestigial nuclei: morphology, division and DNA-DAPI fluorescence[END_REF].
They belong to the phylum Cercozoa within the supergroup Rhizaria (Burki & Keeling, 2014) which is widely present in marine plankton with important roles in biogeochemical cycles [START_REF] Biard | In situ imaging reveals the biomass of giant protists in the global ocean[END_REF].
Similar to cryptophytes, the four-membrane-bound plastids of chlorarachniophytes carry a nucleomorph within the periplastid compartment (i.e. the space between the second and third membranes of the plastid); these are the remnants of the nucleus and the cytosol of the green algal endosymbiont, respectively (Keeling, 2010). During secondary endosymbiosis, a filose amoeba engulfed an ulvophycean green alga (Suzuki et al., 2016); although the phagotrophic capability was retained in some mixotrophic species (e.g. Chlorarachnion reptans), other members have secondarily lost this feeding behavior (e.g. Lotharella globosa) [START_REF] Ishida | Diversification of a Chimaeric Algal Group, the Chlorarachniophytes: Phylogeny of Nuclear and Nucleomorph Small-Subunit rRNA Genes[END_REF].
Nucleomorph
As expected for an organelle that has been completely lost in many secondary plastid-harboring algae, the nucleomorph is heavily reduced. Nucleomorph genomes consist of three linear chromosomes ranging from 330 to 1000 kbp [START_REF] Hirakawa | Chlorarachniophytes With Complex Secondary Plastids of Green Algal Origin[END_REF]. The nucleomorph genome of Bigelowiella natans was the first to be completely sequenced. It showed a high density of genes, with extremely small introns, probably the result of the selective pressures that drove genome reduction [START_REF] Gilson | Complete nucleotide sequence of the chlorarachniophyte nucleomorph: Nature's smallest nucleus[END_REF]. Interestingly, a similar genome architecture has been found in all nucleomorph genomes sequenced to date, suggesting that the massive genome reduction took place before the diversification of extant species [START_REF] Suzuki | Nucleomorph genome sequences of two chlorarachniophytes, Amorphochlora amoebiformis and Lotharella vacuolata[END_REF][START_REF] Tanifuji | Nucleomorph and plastid genome sequences of the chlorarachniophyte Lotharella oceanica: convergent reductive evolution and frequent recombination in nucleomorph-bearing algae[END_REF].
Nucleomorph genomes mostly encode housekeeping genes necessary for the maintenance of the nucleomorph itself, plus 17 proteins targeted to the plastid. This set of plastid-targeted proteins is shared among all nucleomorph genomes and is thought to be the reason why chlorarachniophytes still harbor a nucleomorph [START_REF] Hirakawa | Chlorarachniophytes With Complex Secondary Plastids of Green Algal Origin[END_REF][START_REF] Suzuki | Nucleomorph genome sequences of two chlorarachniophytes, Amorphochlora amoebiformis and Lotharella vacuolata[END_REF]. Moreover, nucleomorph-encoded genes show a high rate of evolutionary change [START_REF] Hirakawa | Polyploidy of Endosymbiotically Derived Genomes in Complex Algae[END_REF] (Patron et al., 2006a), probably due to the loss of genes involved in DNA replication and repair (Curtis et al., 2012).
Although the plastid in chlorarachniophytes and cryptophytes originated from two independent secondary endosymbioses (with a red alga in the case of cryptophytes), both nucleomorphs share important features such as the number of chromosomes, similarities in gene content [START_REF] Tanifuji | Nucleomorph and plastid genome sequences of the chlorarachniophyte Lotharella oceanica: convergent reductive evolution and frequent recombination in nucleomorph-bearing algae[END_REF], conserved telomeric regions [START_REF] Hirakawa | Chlorarachniophytes With Complex Secondary Plastids of Green Algal Origin[END_REF], accelerated evolution of nucleomorph-encoded genes (Patron et al., 2006a) and polyploidization [START_REF] Hirakawa | Polyploidy of Endosymbiotically Derived Genomes in Complex Algae[END_REF]. Convergent evolution in the genomic architecture of the nucleomorphs of chlorarachniophytes and cryptophytes shows that both organelles have apparently experienced similar evolutionary pressures during secondary endosymbiosis.
Plastid
Plastids in chlorarachniophytes are surrounded by four membranes (Fig. 9A). The first membrane (from outside to inside) corresponds to the phagosomal membrane that encapsulated the algal endosymbiont; the second corresponds to the plasma membrane of the green algal endosymbiont, whereas the two innermost membranes correspond to the chloroplast membranes, where photosynthesis takes place. Similar to chlorophytes, chlorarachniophyte plastids use chlorophylls a and b (Hibberd & Norris, 1984).
Chlorarachniophyte plastids have circular genomes that encode approximately 60 proteins (Suzuki et al., 2016b). Phylogenetic analyses of 38 concatenated plastid proteins showed that chlorarachniophytes and euglenids (both with secondary green plastids) acquired the ability to photosynthesize independently (Rogers et al., 2007). The green endosymbiont of chlorarachniophytes derived from a member of the UTC clade (Ulvophyceae-Trebouxiophyceae-Chlorophyceae). A recent analysis with an enlarged taxon sampling and a larger number of concatenated markers (55 plastid-encoded proteins) refined the placement of the endosymbiont within the UTC clade and showed that chlorarachniophyte plastids derived from a green alga of the order Bryopsidales (Ulvophyceae) (Suzuki et al., 2016b).
During secondary endosymbiosis, genes from the plastid and nuclear genomes of the green endosymbiont were relocated to the host nucleus. As a result, most genes encoding plastid proteins in chlorarachniophytes now reside in the nucleus. For instance, the plastid genome of Bigelowiella natans encodes only 57 proteins (mostly involved in photosynthesis and protein biosynthesis) (Rogers et al., 2007), whereas around 780 plastid-targeted proteins are encoded in the nucleus (Suzuki et al., 2016a). Interestingly, phylogenetic analyses have revealed an important contribution of genes from algae with red algal-derived plastids to the Bigelowiella plastid proteome (Curtis et al., 2012).
Euglenophytes
The phylum Euglenozoa (Excavata) comprises kinetoplastids, diplonemids and euglenids [START_REF] Adl | The revised classification of eukaryotes[END_REF]. The latter includes photosynthetic and colorless protists.
Photosynthetic euglenids such as the model species Euglena gracilis belong to the class Euglenophyceae, a monophyletic assemblage that also includes members that secondarily lost the ability to photosynthesize [START_REF] Bicudo | Phylogeny and Classification of Euglenophyceae: A Brief Review[END_REF]. The plastid in euglenophytes derived from an endosymbiotic event with a Pyramimonas-like alga (Prasinophyceae) approximately 100 Mya, possibly in a marine environment (Hrdá et al., 2012;Parfrey et al., 2011).
Euglenophytes are divided into two orders: Eutreptiales, a paraphyletic assemblage composed mostly of marine species; and Euglenales, a monophyletic group widely present in freshwater habitats [START_REF] Vanclová | Chapter Nine -Secondary Plastids of Euglenophytes[END_REF]. Rapaza viridis, the sister lineage to all other euglenophytes, is an obligatory mixotroph that feeds on green algae of the genus Tetraselmis. Interestingly, Rapaza is thought to represent an intermediate stage in plastid acquisition within euglenophycean algae [START_REF] Yamaguchi | Morphostasis in a novel eukaryote illuminates the evolutionary transition from phagotrophy to phototrophy: description of Rapaza viridis n. gen. et sp. (Euglenozoa, Euglenida)[END_REF].
Plastid
Once thought to have originated from the same primary endosymbiosis as Viridiplantae (Cavalier-Smith, 1982), or from a secondary endosymbiosis shared with chlorarachniophytes (Cavalier-Smith, 1999), it is known today that euglenophytes possess a secondary plastid acquired independently, in a single endosymbiotic event with a green alga related to Pyramimonas (Hrdá et al., 2012). The plastid has three membranes (Fig. 9B): two derive from the ancestral chloroplast, whereas the origin of the third is still unclear; it has been suggested to be the vestige of the plasma membrane of the green endosymbiont [START_REF] Gibbs | The Chloroplasts of some algal groups may have evolved from endosymbiotic eukaryotic algae[END_REF] or of the food vacuole membrane that encapsulated the alga (Cavalier-Smith, 1999).
Euglenid plastid genomes show a surprising similarity in gene content, with approximately 90 genes mostly involved in photosynthesis and housekeeping functions [START_REF] Vanclová | Chapter Nine -Secondary Plastids of Euglenophytes[END_REF]. An exception is the non-photosynthetic plastid of Euglena longa, which encodes only 57 genes [START_REF] Gockel | Complete Gene Map of the Plastid Genome of the Nonphotosynthetic Euglenoid Flagellate Astasia longa[END_REF]. E. longa is an osmotrophic species phylogenetically close to E. gracilis; although its plastid is no longer photosynthetic, it is essential for the survival of the protist [START_REF] Hadariová | An intact plastid genome is essential for the survival of colorless Euglena longa but not Euglena gracilis[END_REF]. The plastid has lost all photosynthetic genes except rbcL (encoding the large subunit of RuBisCO). The rbcL protein is extremely divergent and, although it is thought to be the reason why Euglena longa still retains its non-photosynthetic plastid, its function remains unknown [START_REF] Záhonová | RuBisCO in nonphotosynthetic alga Euglena longa: Divergent features, transcriptomic analysis and regulation of complex formation[END_REF].
Eukaryotic lineages with red algal-derived plastids
1.3.2.1 The Chromalveolate hypothesis
The description of algal cells, boosted by improvements in microscope resolution, revealed the hidden complexity of plastid morphology and structure. Although early phylogenetic analyses showed that all plastids derive from cyanobacteria [START_REF] Delwiche | Phylogenetic Analysis of tufA sequences indicates a Cyanobacterial origin of all plastids[END_REF][START_REF] Nelissen | An early origin of plastids within the cyanobacterial divergence is suggested by evolutionary trees based on complete 16S rRNA sequences[END_REF], a single primary endosymbiotic event was not sufficient to explain the structural diversity of plastids, making it necessary to propose that some photosynthetic lineages evolved from eukaryote-into-eukaryote endosymbioses (Cavalier-Smith, 1982;[START_REF] Gibbs | The Chloroplasts of some algal groups may have evolved from endosymbiotic eukaryotic algae[END_REF]).
The Chromalveolate hypothesis posits that photosynthetic lineages with chlorophyll c-containing plastids (i.e. cryptophytes, haptophytes, stramenopiles and dinoflagellates) evolved from a single endosymbiotic event with a red alga deep in the evolutionary history of eukaryotes (Cavalier-Smith, 1999) (Fig. 10).
These chromalveolates include heterotrophic lineages that, according to the hypothesis, lost their plastids (e.g. ciliates) or the ability to photosynthesize (e.g. apicomplexans). Interestingly, the discovery of non-photosynthetic plastids in Toxoplasma gondii and Plasmodium falciparum, two alveolate parasites, strongly suggested a photosynthetic past [START_REF] Mcfadden | Plastid in human parasites[END_REF]. Within the chromalveolates there are also lineages that have replaced their original red plastid, such as dinoflagellates of the genus Lepidodinium, which exchanged their peridinin-containing plastid for a green-colored one [START_REF] Matsumoto | Green-colored Plastids in the Dinoflagellate Genus Lepidodinium are of Core Chlorophyte Origin[END_REF].
Attesting to its secondary origin, the ancestral chromalveolate plastid is thought to have been surrounded by four membranes. From the outside to the inside of the plastid: 1) the phagosomal membrane that encapsulated the red alga; 2) the plasma membrane of the red endosymbiont; 3) and 4), which are homologous to the outer and inner membranes of the chloroplast, respectively (Cavalier-Smith, 1999). The four-membrane-bound plastid is still observed in ochrophytes, cryptophytes, haptophytes, chromerids and apicomplexans, whereas in peridinin-containing dinoflagellates the plastid has only three membranes. Although the extra membrane was once considered to be the residual phagosomal membrane that engulfed the cyanobacterium during primary endosymbiosis (Cavalier-Smith, 1982), today the secondary nature of dinoflagellate plastids is widely accepted [START_REF] Janouskovec | A common red algal origin of the apicomplexan, dinoflagellate, and heterokont plastids[END_REF]. According to the chromalveolate hypothesis, the third, outermost membrane of dinoflagellate plastids corresponds to the food vacuole membrane, which was retained, whereas the periplastid membrane was lost in the common ancestor of all dinoflagellates (Cavalier-Smith, 1999).
Another peculiarity of secondary red algae, besides the number of plastid membranes, is that in cryptophytes, haptophytes and ochrophytes the outermost membrane (membrane 1) is fused with the nuclear envelope. It was initially suggested that this fusion occurred in the common ancestor of stramenopiles and Hacrobia (cryptophytes plus haptophytes), but phylogenomic analyses did not support the monophyly of this group (Burki et al., 2016).
Thus, if a single red algal enslavement took place, there were at least two independent membrane fusions, one in the ancestor of Hacrobia and the other in stramenopiles (Cavalier-Smith, 2017).
Perhaps the most consequential assumption of the chromalveolate hypothesis is the large number of independent plastid losses, at least 18 (Cavalier-Smith, 2002), necessary to explain the diversity of heterotrophic lineages and their distribution within the eukaryotic tree. A photosynthetic ancestor of chromalveolates implies that plastid-less protist lineages such as Ciliophora (Alveolata), the human gut inhabitant Blastocystis (Heterokonta) [START_REF] Beghini | Large-scale comparative metagenomics of Blastocystis, a common member of the human gut microbiome[END_REF] or the phyla Telonemia and Centroheliozoa, which are closely related to Hacrobia [START_REF] Burki | Large-Scale Phylogenomic Analyses Reveal That Two Enigmatic Protist Lineages, Telonemia and Centroheliozoa, Are Related to Photosynthetic Chromalveolates[END_REF], must have lost their plastids at some point in the course of evolution. Interestingly, some nuclear genes of ciliates seem to have a red algal origin and have been suggested to be residual EGTs from the red endosymbiont of the common ancestor of all alveolates [START_REF] Reyes-Prieto | Multiple Genes of Apparent Algal Origin Suggest Ciliates May Once Have Been Photosynthetic[END_REF]. On the other hand, there is no evidence that the ancestor of stramenopiles harbored a plastid; rather, the plastid seems to have been acquired after the divergence of stramenopiles, in the ancestor of ochrophytes [START_REF] Leyland | Are Thraustochytrids algae?[END_REF][START_REF] Wang | Re-analyses of "Algal" Genes Suggest a Complex Evolutionary History of Oomycetes[END_REF].
Common red algal endosymbiont
Although not sufficient, the monophyly of secondary red plastids is a necessary condition to support a single origin of all chromalveolates. Early phylogenies of ribosomal RNA sequences suggested that the red plastids in cryptophytes, haptophytes and stramenopiles did not share a common ancestry [START_REF] Müller | Ribosomal DNA phylogeny of the Bangiophycidae (Rhodophyta) and the origin of secondary plastids[END_REF][START_REF] Oliveira | Phylogeny of the Bangiophycidae (Rhodophyta) and the secondary endosymbiotic origin of algal plastids[END_REF]. However, these results were probably due to phylogenetic artifacts.
Improved taxon sampling, coupled with phylogenetic analyses of concatenated plastid genes, has consistently recovered the monophyly of chlorophyll c-containing plastids [START_REF] Janouskovec | A common red algal origin of the apicomplexan, dinoflagellate, and heterokont plastids[END_REF][START_REF] Sanchez-Puerta | Sorting wheat from chaff in multi-gene analyses of chlorophyll c-containing plastids[END_REF] (Yoon et al., 2002).
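For readers unfamiliar with how such multigene analyses are assembled, the sketch below shows, in plain Python, the basic "supermatrix" step of joining per-gene plastid alignments taxon by taxon before tree inference. It is an illustrative, minimal version: the file names and taxon labels are hypothetical, and real studies add alignment trimming, partitioning, model selection and tree inference with dedicated software (e.g. IQ-TREE or RAxML), none of which is shown here.

```python
# Minimal sketch of building a concatenated "supermatrix" from per-gene alignments,
# the kind of input used in multigene plastid phylogenies. File names are hypothetical.

def read_fasta(path):
    """Return {taxon: aligned_sequence} for one gene alignment in FASTA format."""
    seqs, name = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = ""
            elif name:
                seqs[name] += line
    return seqs

def concatenate(alignment_files):
    """Join gene alignments taxon by taxon, padding missing taxa with gaps."""
    supermatrix, total_len = {}, 0
    for path in alignment_files:
        gene = read_fasta(path)
        if not gene:
            continue
        gene_len = len(next(iter(gene.values())))
        for taxon in set(supermatrix) | set(gene):
            previous = supermatrix.get(taxon, "-" * total_len)
            supermatrix[taxon] = previous + gene.get(taxon, "-" * gene_len)
        total_len += gene_len
    return supermatrix

if __name__ == "__main__":
    # Hypothetical per-gene plastid alignments (one FASTA file per gene)
    matrix = concatenate(["psbA.aln.fasta", "rbcL.aln.fasta", "atpB.aln.fasta"])
    for taxon, sequence in matrix.items():
        print(taxon, len(sequence))
```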
Within the phylogeny of red algal-derived plastids, cryptophytes appear to be the first lineage to diverge, branching either as sister to all other red lineages [START_REF] Lida | Assessing the monophyly of chlorophyll-c containing plastids by multi-gene phylogenies under the unlinked model conditions[END_REF][START_REF] Shalchian-Tabrizi | Heterotachy processes in rhodophytederived secondhand plastid genes: Implications for addressing the origin and evolution of dinoflagellate plastids[END_REF] or as sister to haptophytes (with cryptophytes plus haptophytes as the sister group of alveolates and stramenopiles) [START_REF] Kim | Evolutionary dynamics of cryptophyte plastid genomes[END_REF][START_REF] Sanchez-Puerta | Sorting wheat from chaff in multi-gene analyses of chlorophyll c-containing plastids[END_REF].
The close phylogenetic relationship between cryptophyte and haptophyte plastids is also supported by a bacterial rpl36 gene shared by their plastid genomes, likely transferred to their common ancestor [START_REF] Rice | An exceptional horizontal gene transfer in plastids: gene replacement by a distant bacterial paralog and evidence that haptophyte and cryptophyte plastids are sisters[END_REF].
Protein import system in red algal-derived plastids
Secondary plastids in cryptophytes, haptophytes, stramenopiles and apicomplexans are surrounded by four membranes, whereas the peridinin-containing plastids of dinoflagellates have only three, probably due to the loss of the periplastid membrane of the red algal endosymbiont (Cavalier-Smith, 1999). Another important difference among red algal-derived plastids is that, while in haptophytes, cryptophytes and stramenopiles the outermost membrane of the plastid is fused with the endoplasmic reticulum (ER) of the host, this is not the case for the plastids of alveolates (e.g. apicomplexans and dinoflagellates).
To import proteins across the first membrane of four-membrane-bound plastids, the Sec61 complex of the host ER translocates co-translationally proteins carrying a signal peptide (SP). In the case of chromerids and apicomplexans, whose plastids are not fused with the host ER, proteins targeted to the plastid enter the secretory pathway of the host and are transported within vesicles thought to fuse with the outermost membrane of the plastid [START_REF] Bouchut | Vesicles bearing toxoplasma apicoplast membrane proteins persist following loss of the relict plastid or golgi body disruption[END_REF][START_REF] Waller | Protein trafficking to the plastid of Plasmodium falciparum is via the secretory pathway[END_REF].
Then, to cross the second outermost membrane, the ERAD (Endoplasmic Reticulum-Associated protein Degradation) machinery of the red algal endosymbiont is thought to have been recycled and relocated to this membrane (Sommer et al., 2007) (Fig. 11). Plastid-targeted and nucleomorph-encoded ERAD-like proteins have been identified in all red lineages carrying plastids surrounded by four membranes (Felsner et al., 2011). This symbiont-specific ERAD-derived system has been named SELMA, which stands for Symbiont-specific ERAD-Like MAchinery (Hempel et al., 2009). Interestingly, phylogenetic analyses have shown some SELMA components to be monophyletic and derived from the red algal endosymbiont. SELMA is therefore considered to provide strong evidence for a single secondary endosymbiosis with a red alga at the origin of all complex red plastids (Gould et al., 2015).
The two innermost membranes of complex red plastids correspond to the membranes of primary plastids. Similar to protein import in Archaeplastida lineages, complex red plastids use proteins recycled from the TIC/TOC complex of the red endosymbiont (Stork et al., 2013) (Fig. 11).
One host or multiple hosts?
The monophyly of secondary red plastids is widely accepted. However, whether it represents a single symbiogenetic event followed by vertical descent or a single acquisition followed by horizontal transfer via serial endosymbiosis is still unclear. The symbiogenetic model of a single red algal secondary endosymbiosis at the origin of all chlorophyll c-containing plastids assumes that establishing a functional organelle is much more difficult than losing it, because of all the steps required to fully integrate a symbiont (e.g. transfer of genes to the host nucleus, development of plastid targeting system for nucleus-encoded proteins and evolution of a protein import system) (Cavalier- Smith, 1999). According to this parsimonious model, a single event is at the origin of the emergence of all red plastid-harboring algae.
Phylogenomic analyses of newly sequenced protist lineages, together with the use of complex models of sequence evolution, have improved, although not completely resolved, the relationships among the major eukaryotic supergroups [START_REF] Burki | Large-Scale Phylogenomic Analyses Reveal That Two Enigmatic Protist Lineages, Telonemia and Centroheliozoa, Are Related to Photosynthetic Chromalveolates[END_REF][START_REF] Yabuki | Palpitomonas bilix represents a basal cryptist lineage: insight into the character evolution in Cryptista[END_REF]. For instance, analyses of host-derived genes revealed that alveolates and stramenopiles, both comprising heterotrophic and photosynthetic lineages, are closely related to rhizarian protists, an ecologically and phylogenetically diverse supergroup of single-celled eukaryotes (Burki & Keeling, 2014). The assemblage of Stramenopiles, Alveolates and Rhizaria forms a well-supported group named the SAR clade [START_REF] Burki | Large-Scale Phylogenomic Analyses Reveal That Two Enigmatic Protist Lineages, Telonemia and Centroheliozoa, Are Related to Photosynthetic Chromalveolates[END_REF]. Therefore, maintaining a single red secondary endosymbiosis in the new context of the global eukaryotic tree requires invoking the loss of the red plastid at some point in the evolution of Rhizaria and a new secondary endosymbiosis with a green alga in an ancestor of the chlorarachniophytes. Almost all recent phylogenomic analyses have failed to recover the monophyly of chromalveolates (Baurain et al., 2010;[START_REF] Brown | Phylogenomics demonstrates that breviate flagellates are related to opisthokonts and apusomonads[END_REF][START_REF] Burki | Phylogenomics reveals a new "megagroup" including most photosynthetic eukaryotes[END_REF]Burki et al., 2012). For instance, the putative common origin of the host lineages of Cryptophyta and Haptophyta, once thought to form the monophyletic supergroup Hacrobia [START_REF] Okamoto | Molecular phylogeny and description of the novel Katablepharid Roombia truncata gen. et sp. nov., and establishment of the Hacrobia taxon nov[END_REF], is strongly questioned by recent phylogenomic analyses; instead, the nuclear genome of cryptophytes appears to have an unexpected proximity to Archaeplastida lineages (Burki et al., 2016). Models of serial transfer of red algal-derived plastids have been proposed as workarounds to reconcile the evolutionary histories of the nuclear and plastid genomes of secondary red algae.
Serial endosymbioses
Models of serial endosymbioses propose that a heterotrophic ancestor of cryptophytes enslaved a red alga that became a secondary plastid. Then, the cryptophyte plastid was spread across the eukaryotic tree through a series of tertiary, quaternary or even quinary endosymbioses (Fig. 12).
As a consequence, serial endosymbioses would have created a sort of "Russian doll" cell in eukaryotic lineages with red algal-derived plastids (Petersen et al., 2014).
Besides the fact that the chromalveolates are likely not a monophyletic assemblage (Baurain et al., 2010), the proposal of serial endosymbioses accounts for the observation that many heterotrophic lineages within the chromalveolates seem never to have been photosynthetic. For instance, although several genes with algal affiliation in the genome of the plant pathogen Phytophthora sojae were originally interpreted as remnants of a photosynthetic past [START_REF] Tyler | Phytophthora Genome Sequences Uncover Evolutionary Origins and Mechanisms of Pathogenesis[END_REF], recent re-analyses showed that many of these genes were misinterpreted as algal-derived due to poor taxon sampling, thereby weakening the support for a common photosynthetic ancestry of all stramenopiles [START_REF] Wang | Re-analyses of "Algal" Genes Suggest a Complex Evolutionary History of Oomycetes[END_REF].
Additional support for serial endosymbiosis models comes from recent analyses of plastid-derived genes in groups thought to have inherited the plastid vertically, such as the chromerids, peridinin-containing dinoflagellates (PCD) and apicomplexans. These alveolate lineages carry a red algal-derived plastid, and early phylogenetic analyses of 23S rRNA and psbA genes suggested that the plastid of PCD and the apicoplast share a common ancestor [START_REF] Zhang | Phylogeny of Ultra-Rapidly Evolving Dinoflagellate Chloroplast Genes: A Possible Common Origin for Sporozoan and Dinoflagellate Plastids[END_REF]. Likewise, Chromera velia, a photosynthetic symbiont of corals closely related to apicomplexans [START_REF] Moore | A photosynthetic alveolate closely related to apicomplexan parasites[END_REF], shares with dinoflagellates the same type II RuBisCO, likely transferred horizontally from Proteobacteria, suggesting that the HGT took place in the common ancestor of both lineages [START_REF] Janouskovec | A common red algal origin of the apicomplexan, dinoflagellate, and heterokont plastids[END_REF]. However, Petersen et al. (2014) performed phylogenetic analyses of Calvin cycle markers known to have a common origin in red algal-derived plastids, in addition to a handful of proteins necessary for protein import into complex plastids. They showed that none of the plastid-located proteins they investigated supports a common ancestry between the PCD plastid and the apicoplast. Instead, they inferred two independent acquisitions of secondary red plastids and proposed the "rhodoplex hypothesis", in which red algal-derived plastids in the CASH (Cryptophytes-Alveolates-Stramenopiles-Haptophytes) lineages have a reticulate history resulting from independent eukaryote-eukaryote endosymbioses (secondary, tertiary or quaternary), although the path of plastid donors and acceptors remains unclear. Nonetheless, other publications have suggested possible scenarios of serial plastid acquisitions (Bodył, 2017; Bodył et al., 2009; [START_REF] Kim | Evolutionary dynamics of cryptophyte plastid genomes[END_REF][START_REF] Ševčíková | Updating algal evolutionary relationships through plastid genome sequencing: did alveolate plastids emerge through endosymbiosis of an ochrophyte[END_REF]) (Fig. 12).
Using regression analyses, Stiller et al., (2014) proposed that a red alga became the cryptophyte plastid, followed by the enslavement of a cryptophyte cell by the phagotrophic ancestor of ochrophytes via tertiary endosymbiosis. Finally, a quaternary uptake of an ochrophyte gave rise to the haptophyte plastid, possibly after haptophytes diverged from centrohelids (Fig. 12).
An alternative sequence of endosymbioses was proposed by Bodył et al., (2009). In their model, after the symbiogenesis of cryptophytes via the enslavement of a red alga, two independent tertiary endosymbioses took place: the first one in the ancestor of ochrophytes by the uptake of a cryptophyte plastid before the insertion of a bacterial rpl36 gene in the plastid genome of cryptophytes; the second tertiary endosymbiosis would have occurred between a member of the cryptophyte lineage that laterally acquired the bacterial rpl36 gene and the ancestor of haptophytes, which explained the presence of the same bacterial HGT in both lineages [START_REF] Rice | An exceptional horizontal gene transfer in plastids: gene replacement by a distant bacterial paralog and evidence that haptophyte and cryptophyte plastids are sisters[END_REF]. Within the alveolates, the chromerids and apicomplexans would have acquired their plastids from an ochrophyte in a quaternary endosymbiotic event, whereas the peridinin-containing plastid in dinoflagellates would have originated from a haptophyte (Fig. 12). Models of serial endosymbioses generally assume that whole cells are encapsulated by phagotrophic cells, followed by a drastic reduction of the plastid genome and the loss of cytosolic compartments of the symbiont, such as mitochondria and the nucleus (except in chlorarachniophytes and cryptophytes, which still carry a highly reduced nucleomorph). Nonetheless, Bodył, (2017) proposed that some protist lineages with red algal-derived plastids could have acquired their photosynthetic organelles not only from whole cells but also by kleptoplasty. He proposed that the extant three-membrane-bound plastid in peridinin-containing dinoflagellates could have been obtained by myzocytosis from a haptophyte after the loss of the former ochrophyte-derived plastid gained by the ancestor of dinoflagellates and apicomplexans (Fig. 12). The suggestion that the three-membrane-bound plastids of photosynthetic eukaryotes such as euglenophytes or dinoflagellates were acquired from isolated chloroplasts is, however, not new [START_REF] Whatley | Chloroplast Evolution[END_REF].
Tertiary endosymbiosis in dinoflagellates
Models of serial endosymbiosis consider that the chlorophyll c-containing plastids originated from a single secondary endosymbiosis with a red alga that was horizontally spread to other eukaryotic groups via tertiary and higher order endosymbioses (Petersen et al., 2014;Stiller et al., 2014). However, the presence of tertiary plastids has hitherto been proved only in dinoflagellates [START_REF] Janouškovec | Major transitions in dinoflagellate evolution unveiled by phylotranscriptomics[END_REF].
Dinoflagellates have a complex and poorly understood evolutionary history. They have undergone major changes such as the complete loss of plastids in the crustacean parasite Hematodinium [START_REF] Gornik | Endosymbiosis undone by stepwise elimination of the plastid in a parasitic dinoflagellate[END_REF] or replacement of the former red plastid in Lepidodinium chlorophorum with a green one through another secondary endosymbiotic event. [START_REF] Kamikawa | Plastid genome-based phylogeny pinpointed the origin of the green-colored plastid in the dinoflagellate Lepidodinium chlorophorum[END_REF]. Dinoflagellates possess some genomic particularities that differentiate them from other protists possibly due to particular selective pressures. For instance, their nuclear genomes are extremely large, usually in the range of 3,000 -215,000 Mb [START_REF] Hackett | Dinoflagellates: A remarkable evolutionary experiment[END_REF]; for comparison, the genome of Chromera velia (Alveolata), a coral-associated alga, is approximately 194 Mb in size [START_REF] Woo | Chromerid genomes reveal the evolutionary path from photosynthetic algae to obligate intracellular parasites[END_REF].
Moreover, the nuclear DNA of dinoflagellates is heavily modified: between 12% and 70% of its thymine is replaced by 5-hydroxymethyluracil [START_REF] Lin | Genomic understanding of dinoflagellates[END_REF].
Besides the intricate evolution of the nucleus, the plastid of dinoflagellates shows a similarly complex history. Dinoflagellates harbor different types of tertiary plastids. Molecular phylogenies of plastid-derived proteins, as well as the presence of accessory pigments typical of haptophytes (e.g. 19'-hexanoyloxyfucoxanthin) [START_REF] Zapata | Pigment-based chloroplast types in dinoflagellates[END_REF] in members of the Gymnodiniales lineage (e.g. Karlodinium veneficum and Karenia brevis), strongly support the replacement of the former peridinin-containing plastid by a haptophyte-derived one via a tertiary endosymbiosis. As in other complex red plastids, the cytosol and the nucleus of the haptophyte endosymbiont were lost (Burki et al., 2014;[START_REF] Hwan | Tertiary endosymbiosis driven genome evolution in dinoflagellate algae[END_REF]Patron et al., 2006b).
Another example of tertiary endosymbiosis in dinoflagellates is the diatom-derived plastid found in the so-called dinotoms from the order Peridiniales (e.g. Durinskia baltica) [START_REF] Janouškovec | Major transitions in dinoflagellate evolution unveiled by phylotranscriptomics[END_REF]. The plastid is surrounded by five membranes and retains cellular components of the diatom endosymbiont such as the nucleus, mitochondria and endoplasmic reticulum [START_REF] Dodge | A dinoflagellate with both a mesocaryotic and a eucaryotic nucleus I. Fine structure of the nuclei[END_REF][START_REF] Imanian | The dinoflagellates Durinskia baltica and Kryptoperidinium foliaceum retain functionally overlapping mitochondria from two evolutionarily distinct lineages[END_REF][START_REF] Waller | Plastid Complexity in Dinoflagellates: A Picture of Gains, Losses, Replacements and Revisions[END_REF]. Interestingly, plastids in dinotoms are not monophyletic: these dinoflagellates have taken up different diatom endosymbionts through independent endosymbiotic events [START_REF] Horiguchi | Serial replacement of a diatom endosymbiont in the marine dinoflagellate Peridinium quinquecorne (Peridiniales, Dinophyceae)[END_REF][START_REF] Yamada | Identification of highly divergent diatomderived chloroplasts in dinoflagellates, including a description of Durinskia kwazulunatalensis sp. nov. (Peridiniales, Dinophyceae)[END_REF].
It has been suggested that dinoflagellates belonging to the genus Dinophysis have tertiary plastids of cryptomonad origin, possibly closely related to Teleaulax amphioxeia [START_REF] Garcia-Cuetos | The toxic dinoflagellate Dinophysis acuminata harbors permanent chloroplasts of cryptomonad origin, not kleptochloroplasts[END_REF]. Nonetheless, whether the cryptophyte-derived plastid is a permanent organelle or a kleptoplast (i.e. a "stolen" chloroplast obtained from food that remains photosynthetically active but is eventually eliminated) is still debated [START_REF] Feinstein | Effects of light on photosynthesis, grazing, and population dynamics of the heterotrophic dinoflagellate Pfiesteria piscicida (Dinophyceae)[END_REF][START_REF] Garcia-Cuetos | The toxic dinoflagellate Dinophysis acuminata harbors permanent chloroplasts of cryptomonad origin, not kleptochloroplasts[END_REF][START_REF] Waller | Plastid Complexity in Dinoflagellates: A Picture of Gains, Losses, Replacements and Revisions[END_REF].
Kleptoplasty: an early step in endosymbiosis?
Photosymbiosis is widespread across the eukaryotic tree, from unicellular protists to metazoans, with a variable degree of symbiont integration ranging from temporarily acquired phototrophy to permanent plastids [START_REF] Dorrell | What makes a chloroplast? Reconstructing the establishment of photosynthetic symbioses[END_REF][START_REF] Stoecker | Acquired phototrophy in aquatic protists[END_REF] (Fig. 13).
To create a permanent endosymbiont, at least the following are required: 1) an export system for the photosynthate, which must be made available to the host metabolic machinery; 2) the transfer of plastid-encoded genes into the nucleus of the host, together with a retargeting system to the plastid;
3) the development of an import system for nuclear-encoded proteins; and 4) host control over plastid division. Fulfilling these requirements demands major evolutionary innovations. Therefore, fully integrated endosymbionts are rather scarce in the natural world. For instance, only two primary plastid endosymbiotic events have been identified so far (Keeling, 2010).
Kleptoplasts are the preserved chloroplasts of algal prey retained by some eukaryotic lineages. Although the host needs to continue feeding on algae, the lifespan of functional kleptoplasts can range from hours to months [START_REF] Stoecker | Acquired phototrophy in aquatic protists[END_REF]. Interestingly, it has been suggested that kleptoplasty might represent an intermediate stage of endosymbiosis [START_REF] Leliaert | Phylogeny and Molecular Evolution of the Green Algae[END_REF].
Mesodinium rubrum, a red tide-forming ciliate, preys on cryptophytes and can retain their plastids (kleptoplasts) and nuclei, which remain transcriptionally active for up to 30 days [START_REF] Johnson | Retention of transcriptionally active cryptophyte nuclei by the ciliate Myrionecta rubra[END_REF][START_REF] Qiu | Cryptophyte farming by symbiotic ciliate host detected in situ[END_REF]. M. rubrum is an obligate mixotroph that requires regular uptake of cryptophytes from the Teleaulax or Geminigera genera to survive [START_REF] Myung | Population growth and plastid type of Myrionecta rubra depend on the kinds of available cryptomonad prey[END_REF][START_REF] Nishitani | High-level congruence of Myrionecta rubra prey and Dinophysis species plastid identities as revealed by genetic analyses of isolates from Japanese coastal waters[END_REF]. There are important differences in plastid retention and in the preference for cryptophyte prey among species of the genus Mesodinium. For instance, Mesodinium chamaeleon has a higher plastid retention capacity and a less strict prey preference than Mesodinium rubrum, suggesting a different pattern of host-symbiont evolution within the genus [START_REF] Moeller | Preferential Plastid Retention by the Acquired Phototroph Mesodinium chamaeleon[END_REF].
Figure 13. Distribution of permanent plastids and kleptoplasts in the eukaryotic tree (modified from [START_REF] Stoecker | Acquired phototrophy in aquatic protists[END_REF]).
The long term retention of photosynthetic chloroplasts by the sarcoglossan sea slug Elysia represents an example of kleptoplasty in metazoans. The stolen chloroplasts reside within the cells of their digestive gland giving them a green-colored pigmentation (de Vries et al., 2014;[START_REF] Händeler | Functional chloroplasts in metazoan cells -a unique evolutionary strategy in animal life[END_REF]. Although the kleptoplast can perform photosynthesis, it is not strictly necessary for the survival of the slug [START_REF] Christa | Plastid-bearing sea slugs fix CO2 in the light but do not require photosynthesis to survive[END_REF]. Interestingly, it is thought that Elysia chlorotica horizontally acquired several genes from its stramenopile symbiont Vaucheria litorea. These genes encode plastid-located proteins that participate in the Calvin cycle, chlorophyll biosynthesis, light-harvesting complex and assemblage of the photosystem II [START_REF] Pierce | Chlorophyll a synthesis by an animal using transferred algal nuclear genes[END_REF][START_REF] Knoll | Lynn Margulis, 1938-2011[END_REF][START_REF] Rumpho | Horizontal gene transfer of the algal nuclear gene psbO to the photosynthetic sea slug Elysia chlorotica[END_REF], 2009). Nonetheless, recent transcriptomic and genomic analyses have heavily reduced the estimation of the contribution of HGT of algal genes to the nuclear genome of the host, suggesting that if any member of the polyphyletic photosynthetic sarcoglossans is on the track of establishing a permanent symbiont, it is still in an early stage [START_REF] Bhattacharya | Genome analysis of Elysia chlorotica egg DNA provides no evidence for horizontal gene transfer into the germ line of this kleptoplastic mollusc[END_REF][START_REF] Wägele | Transcriptomic evidence that longevity of acquired plastids in the photosynthetic slugs Elysia timida and Plakobranchus ocellatus does not entail lateral transfer of algal nuclear genes[END_REF]. Moreover, photosynthetic Elysia cannot inherit vertically their symbionts.
On the other hand, Hatena arenicola (Katablepharida) can pass the green endosymbiont Nephroselmis (Prasinophyceae) to one of the two daughter cells during division, which seemingly represents an ongoing plastid acquisition [START_REF] Okamoto | Hatena arenicola gen. et sp. nov., a Katablepharid Undergoing Probable Plastid Acquisition[END_REF]. The colorless Hatena develops a feeding apparatus de novo during a predatory phase that allows the uptake of a new Nephroselmis cell, which subsequently induces the loss of the feeding apparatus, thereby returning the cell to a phototrophic lifestyle [START_REF] Okamoto | Hatena arenicola gen. et sp. nov., a Katablepharid Undergoing Probable Plastid Acquisition[END_REF]. As a consequence of continuous algal replacement, the same strains of Hatena arenicola can harbor different Nephroselmis species [START_REF] Yamaguchi | Molecular diversity of endosymbiotic Nephroselmis (Nephroselmidophyceae) in Hatena arenicola (Katablepharidophycota)[END_REF].
As mentioned above, HGT of plastid genes to the host nucleus is necessary to establish a permanent chloroplast but there has not been a systematic search of algal HGTs in Hatena arenicola to estimate the degree of integration of the endosymbiont.
Nevertheless, the green alga is always inherited by the right-hand daughter cell during division (when the parent is observed from the dorsal side), which suggests a tight control of endosymbiont inheritance [START_REF] Okamoto | Hatena arenicola gen. et sp. nov., a Katablepharid Undergoing Probable Plastid Acquisition[END_REF][START_REF] Yamaguchi | Molecular diversity of endosymbiotic Nephroselmis (Nephroselmidophyceae) in Hatena arenicola (Katablepharidophycota)[END_REF].
OBJECTIVES
"Chance, one might say, produced a vast number of individuals; a small proportion of these happened to be organized in such a way that their organs could satisfy their needs. The far greater number showed neither adaptation nor organization: these have all perished. Thus, the species we see today are but a small fraction of all those that blind destiny has produced."
Pierre Louis Moreau de Maupertuis. Essai de cosmologie (1750).
OBJECTIVES
As I presented in the Introduction (section 1.1), the endosymbiotic theory of the origin of plastids (and mitochondria) had a long and contentious history of rejection that ended only when molecular phylogenetic analyses unambiguously demonstrated the prokaryotic nature of these organelles. Today, symbiogenesis is widely accepted and considered to be a major evolutionary process that reshaped the tree of eukaryotes. However, there are still unresolved questions regarding the evolution of photosynthetic eukaryotes, including the phylogenetic identity of symbionts and the number and type of endosymbiotic events.
In order to better understand the evolution and spread of photosynthesis among eukaryotes, we established three main objectives for my PhD work:
1. Identification of the living cyanobacterial lineage closest to primary plastids.
At the start of this work, the cyanobacterial lineage that gave rise to primary plastids remained unknown.
Previous studies aiming to identify the plastid ancestor can be split into two groups, depending on whether they support an early-branching or a late-branching cyanobacterium as the symbiont that established the endosymbiotic relationship with a phagotrophic host, thereby originating the Archaeplastida lineage, the first group of photosynthetic eukaryotes. Thus, the first objective of my PhD was to study the origin of primary plastids taking advantage of the increased sampling of cyanobacterial and plastid genome sequences. These recent sequencing efforts allowed us to use phylogenomic approaches to pinpoint the closest extant lineage to the plastid ancestor. For this purpose, we analyzed both plastid- and nuclear-encoded genes of plastid origin (endosymbiotic gene transfers -EGTs-).
2. Study of the evolution of secondary plastid-harboring algae through the analysis of nuclear-encoded genes.
Photosynthetic eukaryotes harbor composite nuclear genomes with genes derived from diverse sources, including vertically inherited eukaryotic genes and EGTs acquired from their plastids. It is assumed that secondary photosynthetic eukaryotes contain a majority of "red" or "green" genes depending on the type of secondary plastid they harbor (red or green plastid, respectively) as a result of endosymbiotic gene transfers. In order to study the evolution of secondary photosynthetic eukaryotes, we measured the contribution of red and green algal genes to their nuclear genomes. These analyses may help to identify possible cryptic endosymbioses, as it has been proposed for some secondary plastid-harboring lineages.
3. Phylogenetic analysis of the SELMA translocon.
SELMA (Symbiont-specific ERAD-like machinery) is an assemblage of proteins considered to be responsible for protein import through the second outermost membrane of secondary red plastids with four membranes (found in cryptophytes, haptophytes, alveolates and stramenopiles). It is widely accepted that SELMA has a unique origin, so its monophyly is often used as a major argument to support the hypothesis of a single origin of all secondary red plastids. Nonetheless, there has not been a systematic phylogenetic study of its components. The last part of my PhD focused on the phylogenetic analyses of SELMA proteins to shed light on the evolution of the protein import machinery of red algal-derived plastids.
MATERIALS AND METHODS
We studied the origin and evolution of photosynthetic eukaryotes using phylogenomic approaches. Although every part of my PhD project had its particularities depending on the aim of every subproject, we followed a similar methodological approach in all our studies. For a detailed description of the methodology, see the corresponding chapter.
Selection of markers
The initial step in all our studies was the selection of suitable query proteins for BLASTP searches against our local database, from which the phylogenetic markers were then selected.
To study the origin of primary plastids (chapter 4) (Fig. 14), we used the 149 plastid-encoded proteins of the glaucophyte Cyanophora paradoxa. In chapter 5 (Fig. 15), we used the whole proteomes of Guillardia theta and Bigelowiella natans, two photosynthetic eukaryotes with red and green algal-derived secondary plastids, respectively, to evaluate the contribution of red and green algal genes to the mosaicism observed in secondary plastid-harboring lineages. Finally, to study the evolution of the SELMA translocon (chapter 6) (Fig. 16), the set of protein queries used in this study was retrieved from the largest collection of putative SELMA components reported so far (Stork et al., 2012).
The number of genomes and transcriptomes in our local database varied from one study to the other. Before the start of a new study, we incorporated recently published sequences when relevant to our analyses, in order to maintain a comprehensive taxon sampling. The complete list of species used in each study is given in the materials and methods section of the corresponding article manuscript. We performed reciprocal BLASTP searches against our local database using a homemade Python script based on the BLASTP algorithm (Altschul et al., 1997) to increase the number of retrieved proteins. Multiple alignments of protein sequences were performed using MAFFT [START_REF] Katoh | MAFFT multiple sequence alignment software version 7: Improvements in performance and usability[END_REF] with default parameters. Poorly aligned regions of multiple sequence alignments were removed using BMGE (Criscuolo & Gribaldo, 2010) to keep only phylogenetically informative regions for subsequent phylogenetic analyses.
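For illustration, a minimal sketch of such a retrieval-alignment-trimming pipeline is given below. It is not the exact script used in this work: file names, directory layout, the E-value cut-off and the hit limit are placeholders, and it assumes local installations of the BLAST+ suite (with a database built using -parse_seqids), MAFFT and BMGE.

```python
#!/usr/bin/env python3
"""Illustrative marker-retrieval pipeline: BLASTP -> blastdbcmd -> MAFFT -> BMGE.
Paths, thresholds and file names are placeholders, not the project settings."""
import subprocess
from pathlib import Path

QUERIES = Path("queries")   # one FASTA file per query protein (assumed layout)
DB = "local_proteomes"      # BLAST protein database built with makeblastdb -parse_seqids
OUT = Path("markers")
OUT.mkdir(exist_ok=True)

for query in sorted(QUERIES.glob("*.fasta")):
    name = query.stem

    # 1) similarity search against the local database (E-value cut-off, top hits)
    tab = OUT / f"{name}.blastp.tsv"
    subprocess.run(["blastp", "-query", str(query), "-db", DB,
                    "-evalue", "1e-10", "-max_target_seqs", "300",
                    "-outfmt", "6 sseqid evalue", "-out", str(tab)], check=True)

    # 2) pull the hit sequences out of the database
    ids = OUT / f"{name}.ids.txt"
    ids.write_text("\n".join(dict.fromkeys(
        line.split("\t")[0] for line in tab.read_text().splitlines() if line)))
    hits = OUT / f"{name}.hits.fasta"
    subprocess.run(["blastdbcmd", "-db", DB, "-entry_batch", str(ids),
                    "-out", str(hits)], check=True)

    # 3) align with MAFFT (default parameters, as in the text)
    aln = OUT / f"{name}.mafft.fasta"
    with open(aln, "w") as fh:
        subprocess.run(["mafft", "--auto", str(hits)], stdout=fh, check=True)

    # 4) remove poorly aligned regions with BMGE (amino-acid mode, FASTA output)
    trimmed = OUT / f"{name}.trimmed.fasta"
    subprocess.run(["java", "-jar", "BMGE.jar", "-i", str(aln),
                    "-t", "AA", "-of", str(trimmed)], check=True)
```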
Phylogenetic analyses
Two cycles of phylogenetic tree reconstruction were performed on trimmed sequence alignments. First, preliminary phylogenetic trees were constructed using an approximate maximum likelihood (ML) approach implemented in the program FastTree, which is considerably faster than full ML estimation (Price et al., 2010). In all our phylogenetic analyses, trees were visually inspected to discard potentially paralogous proteins and to identify protein markers fulfilling the previously established selection criteria (see the corresponding chapter).
After selection of markers, maximum likelihood trees were initially constructed using PhyML [START_REF] Guindon | Estimating maximum likelihood phylogenies with PhyML[END_REF] (chapter 4), but IQ-TREE, a more recent program for maximum likelihood-based inference of phylogenetic trees, has been shown to outperform PhyML [START_REF] Nguyen | IQ-TREE: A fast and effective stochastic algorithm for estimating maximum-likelihood phylogenies[END_REF] and was therefore preferred in subsequent studies (chapters 5 and 6). In both programs, we used the LG substitution matrix [START_REF] Le | An improved general amino acid replacement matrix[END_REF] and a four-category gamma distribution to reconstruct the phylogenetic trees. Bayesian inference of phylogenetic trees was conducted using PhyloBayes-MPI (Lartillot et al., 2013) with the CAT-GTR model [START_REF] Lartillot | A Bayesian mixture model for across-site heterogeneities in the amino-acid replacement process[END_REF] and a gamma distribution with four rate categories.
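The kind of command lines wrapped by our scripts is sketched below for reference. The options shown correspond to the models described above (LG with a four-category gamma distribution for ML, CAT-GTR for Bayesian inference), but they are indicative only: exact option names and sensible chain lengths should be checked against the documentation of each program, and this is not the exact driver used in this work.

```python
#!/usr/bin/env python3
"""Indicative driver for the two-step tree inference described above.
Command-line options are illustrative and version-dependent."""
import subprocess
from pathlib import Path

def fast_screen(aln: str) -> str:
    """Quick approximate-ML tree with FastTree (LG model, gamma rates)."""
    tree = aln + ".fasttree.nwk"
    with open(tree, "w") as fh:
        subprocess.run(["FastTree", "-lg", "-gamma", aln], stdout=fh, check=True)
    return tree  # trees are then inspected by eye to discard paralogs / HGT cases

def ml_tree(aln: str) -> None:
    """Maximum-likelihood tree: LG + 4-category gamma, ultrafast bootstrap."""
    subprocess.run(["iqtree", "-s", aln, "-m", "LG+G4", "-bb", "1000"], check=True)

def bayesian_chains(phylip: str, n_cpu: int = 8) -> None:
    """Two PhyloBayes-MPI chains under CAT-GTR (run in parallel in practice)."""
    for chain in ("chain1", "chain2"):
        # cycle numbers are arbitrary here; chains are monitored until convergence
        subprocess.run(["mpirun", "-np", str(n_cpu), "pb_mpi", "-d", phylip,
                        "-cat", "-gtr", "-dgam", "4", "-x", "10", "30000", chain],
                       check=True)
    # convergence check: maxdiff reported by bpcomp should drop below 0.1
    subprocess.run(["bpcomp", "-x", "1000", "10", "chain1", "chain2"], check=True)

if __name__ == "__main__":
    for aln in sorted(Path("markers").glob("*.trimmed.fasta")):
        fast_screen(str(aln))
    # after visual inspection and concatenation of the retained markers:
    # ml_tree("supermatrix.fasta"); bayesian_chains("supermatrix.phy")
```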
ORIGIN AND EVOLUTION OF PRIMARY PLASTIDS
"It is a rather startling proposal that bacteria, the organisms which are popularly associated with disease, may represent the fundamental causative factor in the origin of species. Evidence of the constructive activities of bacteria has been at hand for many years, but popular conceptions of bacteria have been colored chiefly by their destructive activities as represented in disease. This destructive conception has become so fixed in the popular mind that the average person considers bacteria and disease as synonymous. Bacteria occupy a fundamental position in the world as it is constituted today. It is impossible to conceive of this world as existing without bacteria"
Ivan E. Wallin. Symbionticism and the origin of species (1927).
ORIGIN AND EVOLUTION OF PRIMARY PLASTIDS
Context and objectives
As described in the Introduction (section 1.2.1.6), the present-day cyanobacterial lineage closest to primary plastids remains unidentified. Although previous studies have supported either an early- or a late-branching origin of primary plastids within the phylogeny of cyanobacteria, none of them has successfully identified the cyanobacterial sister lineage of plastids. This was probably because key cyanobacterial clades were overlooked or insufficiently sampled.
Remarkably, our team and other groups have recently undertaken sequencing projects of cyanobacterial genomes that have increased the coverage of cyanobacterial species from a broad range of lifestyles, morphologies and phylogenetic affiliations. For instance, Shih et al., (2013) sequenced 54 cyanobacterial genomes, resulting in a twofold increase in the number of sequenced genomes within the phylum Cyanobacteria.
The availability of new genomic data allowed us to readdress the longstanding question on the origin of primary plastids.
To study the origin of primary plastids, we established the following objectives:
1. To perform robust phylogenomic analyses using heterogeneous models of protein evolution on concatenated plastid markers.
2. To create a dataset of strictly selected nuclear-encoded genes of cyanobacterial origin (EGT) and perform phylogenomic analyses. As shown in section 1.2, plastid genomes are extremely reduced and encode only around 10% of the genes found in free-living cyanobacteria. Therefore, EGTs are another source of information to study the origin of primary plastids. Although some studies have highlighted the potential of EGTs and have identified some of them (Criscuolo & Gribaldo, 2011;Dagan et al., 2013), there has not been a systematic search and phylogenomic analysis of these genes.
3. To study whether the plastid ancestor was capable of nitrogen fixation, to test the hypothesis that the establishment of primary plastids was based on the nitrogen dependency of the host as has been proposed by some authors (Deusch et al., 2008;[START_REF] Falcón | Dating the cyanobacterial ancestor of the chloroplast[END_REF].
Results
Our phylogenomic analyses of concatenated plastid genes of Archaeplastida (see Figure 17) as well as the concatenated dataset of nuclear-encoded genes of cyanobacterial origin (EGTs), strongly support that the closest present-day relative of plastids is Gloeomargarita lithophora, an early-branching cyanobacterium that was recently sequenced by our team and was initially found in an alkaline crater lake in Mexico (Couradeau et al., 2012). This cyanobacterium belongs to a large cyanobacterial clade that has a cosmopolitan distribution as shown by environmental 16S rDNA sequencing studies (Ragon et al., 2014). However, it seems to be restricted to terrestrial habitats. Our results show for the first time a living cyanobacterial clade that is sister to primary plastids in phylogenomic analyses.
SUMMARY
Photosynthesis evolved in eukaryotes by the endosymbiosis of a cyanobacterium, the future plastid, within a heterotrophic host. This primary endosymbiosis occurred in the ancestor of Archaeplastida, a eukaryotic supergroup that includes glaucophytes, red algae, green algae, and land plants [START_REF] Moreira | The origin of red algae and the evolution of chloroplasts[END_REF][START_REF] Rodrı ´guez-Ezpeleta | Monophyly of primary photosynthetic eukaryotes: green plants, red algae, and glaucophytes[END_REF][START_REF] Archibald | The puzzle of plastid evolution[END_REF][START_REF] Keeling | The number, speed, and impact of plastid endosymbioses in eukaryotic evolution[END_REF]. However, although the endosymbiotic origin of plastids from a single cyanobacterial ancestor is firmly established, the nature of that ancestor remains controversial: plastids have been proposed to derive from either early-or latebranching cyanobacterial lineages [START_REF] Turner | Investigating deep phylogenetic relationships among cyanobacteria and plastids by small subunit rRNA sequence analysis[END_REF][START_REF] Criscuolo | Large-scale phylogenomic analyses indicate a deep origin of primary plastids within cyanobacteria[END_REF][START_REF] Shih | Improving the coverage of the cyanobacterial phylum using diversity-driven genome sequencing[END_REF][START_REF] Li | Compositional biases among synonymous substitutions cause conflict between gene and protein trees for plastid origins[END_REF][9][START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF][START_REF] Ochoa De Alda | The plastid ancestor originated among one of the major cyanobacterial lineages[END_REF]. To solve this issue, we carried out phylogenomic and supernetwork analyses of the most comprehensive dataset analyzed so far including plastid-encoded proteins and nucleus-encoded proteins of plastid origin resulting from endosymbiotic gene transfer (EGT) of primary photosynthetic eukaryotes, as well as wide-ranging genome data from cyanobacteria, including novel lineages. Our analyses strongly support that plastids evolved from deepbranching cyanobacteria and that the present-day closest cultured relative of primary plastids is Gloeomargarita lithophora. This species belongs to a recently discovered cyanobacterial lineage widespread in freshwater microbialites and microbial mats [START_REF] Couradeau | An early-branching microbialite cyanobacterium forms intracellular carbonates[END_REF][START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF]. The ecological distribution of this lineage sheds new light on the environmental conditions where the emergence of photosynthetic eukaryotes occurred, most likely in a terrestrialfreshwater setting. The fact that glaucophytes, the first archaeplastid lineage to diverge, are exclusively found in freshwater ecosystems reinforces this hypothesis. Therefore, not only did plastids emerge early within cyanobacteria, but the first photosynthetic eukaryotes most likely evolved in terrestrial-freshwater settings, not in oceans as commonly thought.
RESULTS AND DISCUSSION
Phylogenomic Evidence for the Early Origin of Plastids among Cyanobacteria
To address the question of whether plastids derive from an early-or late-branching cyanobacterium, we have carried out phylogenomic analyses upon a comprehensive dataset of conserved plastid-encoded proteins and the richest sampling of cyanobacterial genome sequences used to date. Plastids have highly reduced genomes (they encode only between 80 and 230 proteins) compared with free-living cyanobacterial genomes, which encode between 1,800 and 12,000 proteins [START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF]. Most genes remaining in these organelle's genomes encode essential plastid functions (e.g., protein translation and photosystem structure), and their sequences are highly conserved. Therefore, despite the fact that plastid-encoded proteins are relatively few, they are good phylogenetic markers because they (1) are direct remnants of the cyanobacterial endosymbiont and (2) exhibit remarkable sequence and functional conservation.
To have a broad and balanced representation of primary plastids and cyanobacterial groups in our analyses, we mined a large sequence database containing all ribosomal RNAs (rRNAs) and proteins encoded in 19 plastids of Archaeplastida (one glaucophyte, eight green algae and plants, and nine red algae) and 122 cyanobacterial genomes, including recently sequenced members [START_REF] Shih | Improving the coverage of the cyanobacterial phylum using diversity-driven genome sequencing[END_REF][START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF][START_REF] Benzerara | Intracellular Ca-carbonate biomineralization is widespread in cyanobacteria[END_REF] and the genome of Gloeomargarita lithophora, sequenced for this work (see the Experimental Procedures). Sequence similarity searches in this genome dataset allowed the identification of 97 widespread conserved proteins, after exclusion of those only present in a restricted number of plastids and/or cyanobacteria and those showing evidence of horizontal gene transfer (HGT) among cyanobacterial species (see the Supplemental Experimental Procedures and Data S1 for more information). Preliminary phylogenetic analysis of the dataset of 97 conserved proteins by a supermatrix approach (21,942 concatenated amino acid sites) using probabilistic phylogenetic methods with a site-heterogeneous substitution model (CAT-GTR) retrieved a tree with a deep-branching position of the plastid sequences (Figure S1A). Also noticeable in this tree was the very long branch of the Synechococcus-Prochlorococcus (SynPro) cyanobacterial clade, which reflected its very high evolutionary rate. Since the SynPro cyanobacteria have never been found to be related to plastids, as confirmed by our own analysis, we decided to exclude them from subsequent analyses because of their accelerated evolutionary rate and because their sequences have a strong compositional bias known to induce errors in phylogenomic studies [START_REF] Li | Compositional biases among synonymous substitutions cause conflict between gene and protein trees for plastid origins[END_REF].
After removing the fast-evolving SynPro clade, we analyzed the dataset of 97 proteins as well as an rRNA dataset containing plastid and cyanobacterial 16S+23S rRNA concatenated sequences using maximum likelihood (ML) and Bayesian phylogenetic inference. Phylogenetic trees reconstructed for both datasets provide full support for the early divergence of plastids among the cyanobacterial species (Figures 1 andS1B, respectively). Moreover, our trees show that the closest present-day relative of plastids is the recently described deepbranching cyanobacterium Gloeomargarita lithophora [START_REF] Couradeau | An early-branching microbialite cyanobacterium forms intracellular carbonates[END_REF][START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF]. This biofilm-forming benthic cyanobacterium has attracted attention by its unusual capacity to accumulate intracellular amorphous calcium-magnesium-strontium-barium carbonates. Phylogenetic analysis based on environmental 16S rRNA sequences has shown that G. lithophora belongs to a diverse early-branching cyanobacterial lineage, the Gloeomargaritales [START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF], for which it is the only species isolated so far. Although it was initially found in an alkaline crater lake in Mexico, environmental studies have revealed that G. lithophora and related species have a widespread terrestrial distribution ranging from freshwater alkaline lakes to thermophilic microbial mats [START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF]. Interestingly, it has never been observed in marine samples [START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF].
During the long endosymbiotic history of plastids, many genes necessary for plastid function were transferred from the cyanobacterial endosymbiont into the host nuclear genome (endosymbiotic gene transfer, EGT [START_REF] Martin | Gene transfer to the nucleus and the evolution of chloroplasts[END_REF]). In a previous work, we carried out an exhaustive search of EGT-derived proteins in Archaeplastida genomes with two strict criteria: [START_REF] Moreira | The origin of red algae and the evolution of chloroplasts[END_REF] the proteins had to be present in more than one group of Archaeplastida and ( 2) they had to be widespread in cyanobacteria and with no evidence of HGT among cyanobacterial lineages [START_REF] Deschamps | Signal conflicts in the phylogeny of the primary photosynthetic eukaryotes[END_REF]. We have updated our EGT dataset with the new genome sequences available and retained 72 highly conserved proteins (see Data S1) that we analyzed applying similar approaches to those used for the plastid-encoded proteins. ML and Bayesian inference phylogenetic analyses of the resulting 28,102-amino-acid-long concatenation of these EGT proteins yield similar phylogenetic trees to those based on the plastid-encoded proteins: G. lithophora always branches as sister group of the archaeplastid sequences with maximal statistical support (Figure S1C). Therefore, three different datasets (plastid encoded proteins, plastid rRNA genes, and EGT proteins encoded in the nucleus) converge to support the same result.
Robustness of the Evolutionary Relationship between
Gloeomargarita and Plastids
To test the robustness of the phylogenetic relation between Gloeomargarita and plastids, we investigated possible biases that might lead to an artifactual placement of plastids in our trees. We specifically focused on the dataset of plastid-encoded proteins, as these proteins are better conserved than the EGT proteins. Changes induced by the adaptation of endosymbiotically transferred genes to the new genetic environment (the eukaryotic nuclear genome) most likely impacted the evolutionary rate of EGT proteins (notice the much longer distance between Gloeomargarita and archaeplastid sequences for the EGT protein dataset than for the plastid-encoded protein dataset, 20% longer on average; see Figures S1C andS1E). We first recoded the amino acid sequences by grouping amino acids of similar physicochemical characteristics into four families, a procedure that is known to alleviate possible compositional biases [START_REF] Rodrı ´guez-Ezpeleta | Detecting and overcoming systematic errors in genome-scale phylogenies[END_REF]. Phylogenetic trees based on the recoded alignment still retrieve the Gloeomargarita-plastids sister relation (Figure S1D). It has been shown that fast-evolving sites in plastids have a very poor fit to evolutionary models [START_REF] Sun | Chloroplast phylogenomic inference of green algae relationships[END_REF]. Therefore, we tested whether the Gloeomargarita-plastids relation could be due to the accumulation of fast-evolving sites leading to sequence evolution model violation and long-branch attraction (LBA) artifacts. For that, we calculated the evolutionary rate for each of the 21,942 sites of the 97-protein concatenation and divided them into ten categories, from the slowest-to the fastest-evolving ones. We then reconstructed phylogenetic trees with a progressive inclusion of fast-evolving sites (the so-called slow-fast method, aimed at increasing the signal/ noise ratio of sequence datasets [START_REF] Philippe | Early-branching or fastevolving eukaryotes? An answer based on slowly evolving positions[END_REF]). All of the trees show the Gloeomargarita-plastids sister relation with full statistical support (Figures S1E-S1M). Interestingly, the trees based on the slowest-evolving positions exhibit a remarkable reduction of the branch length of the plastid sequences (especially those of green algae and land plants; Figures S1K-S1M), which become increasingly longer with the addition of fast-evolving positions. This reflects the well-known acceleration of evolutionary rate that plastids have experienced, in particular in Viridiplantae [START_REF] Deschamps | Signal conflicts in the phylogeny of the primary photosynthetic eukaryotes[END_REF]. These results argue against the possibility that the Gloeomargarita-plastids relation arises from an LBA artifact due to the accumulation of noise in fast-evolving sites. Finally, we tested whether the supermatrix approach might have generated artifactual results due to the concatenation of markers with potentially incompatible evolutionary histories (because of HGT, hidden paralogy, etc.) that might have escaped our attention. We addressed this issue through the application of a phylogenetic network approach, which can cope with those contradictory histories [START_REF] Huson | Application of phylogenetic networks in evolutionary studies[END_REF], to analyze the set of 97 phylogenetic trees reconstructed with the individual proteins. The supernetwork based on those trees again confirms G. 
lithophora as the closest cyanobacterial relative of plastids (Figure 2), in agreement with the phylogenomic results (Figure 1).
A deep origin of plastids within the cyanobacterial phylogeny was inferred in past studies based on 16S rRNA gene sequences, plastid proteins, and nucleus-encoded proteins of plastid origin [START_REF] Turner | Investigating deep phylogenetic relationships among cyanobacteria and plastids by small subunit rRNA sequence analysis[END_REF][START_REF] Criscuolo | Large-scale phylogenomic analyses indicate a deep origin of primary plastids within cyanobacteria[END_REF][START_REF] Shih | Improving the coverage of the cyanobacterial phylum using diversity-driven genome sequencing[END_REF][START_REF] Li | Compositional biases among synonymous substitutions cause conflict between gene and protein trees for plastid origins[END_REF]. However, those studies did not retrieve any close relationship between plastids and any extant cyanobacterial lineage, a result that could be attributed either to lack of phylogenetic resolution or to incomplete taxonomic sampling [START_REF] Criscuolo | Large-scale phylogenomic analyses indicate a deep origin of primary plastids within cyanobacteria[END_REF]. Our results, after inclusion of the new species G. lithophora, clearly advocate for the latter and stress the importance of exploration of undersampled environments such as many freshwater and terrestrial ones. Nevertheless, some studies have alternatively proposed that plastids emerged from the apical part of the cyanobacterial tree, being closely related to late-branching filamentous (Nostocales and Stigonematales) [9][START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF][START_REF] Ochoa De Alda | The plastid ancestor originated among one of the major cyanobacterial lineages[END_REF] or unicellular (Chroococcales) [START_REF] Falcón | Dating the cyanobacterial ancestor of the chloroplast[END_REF] N 2 -fixing cyanobacteria. However, in agreement with our results, it has been shown that those phylogenetic results can be explained by similarities in G+C content due to convergent nucleotide composition between plastids and late-branching cyanobacteria and that the use of codon recoding techniques suppresses the compositional bias and recovers a deep origin of plastids [START_REF] Li | Compositional biases among synonymous substitutions cause conflict between gene and protein trees for plastid origins[END_REF].
Studies supporting a late plastid origin did not rely on phylogenomic analyses alone, but also used other methods, such as quantifying the number and sequence similarity of proteins shared between Archaeplastida and different cyanobacterial species. They showed that the heterocyst-forming filamentous cyanobacterial genera Nostoc and Anabaena appear to possess the largest and most similar set of proteins possibly present in the plastid ancestor [9,[START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF]. However, these approaches have two important limitations: [START_REF] Moreira | The origin of red algae and the evolution of chloroplasts[END_REF] the number of proteins is highly dependent on the genome size of the different cyanobacteria, which can vary by more than one order of magnitude [START_REF] Shih | Improving the coverage of the cyanobacterial phylum using diversity-driven genome sequencing[END_REF], and (2) it is well known that sequence similarity is a poor proxy for evolutionary relationships [START_REF] Koski | The closest BLAST hit is often not the nearest neighbor[END_REF]. Indeed, if we apply a similar procedure on our set of sequences but include several plastid representatives in addition to cyanobacteria, we observe that individual plastids can have sequences apparently more similar to certain cyanobacterial homologs than to those of other plastids (Figure S2). If sequence similarity were a good indicator of evolutionary relationship, it would be difficult to deduce from those data that plastids are monophyletic, as they do not always resemble other plastids more than cyanobacteria. Actually, plastid proteins, especially those that have been transferred to the nucleus [START_REF] Deschamps | Signal conflicts in the phylogeny of the primary photosynthetic eukaryotes[END_REF], have evolved much faster than their cyanobacterial counterparts, such that sequences of one particular primary photosynthetic eukaryote can be more similar to shortbranching cyanobacterial ones than to those of other distantly related long-branching photosynthetic eukaryotes. Thus, similarity-based approaches have clear limitations and can lead to artifactual results. Phylogenomic analysis is therefore more suitable than crude sequence similarity or single-gene phylogenies for studying the cyanobacterial origin of plastids.
Metabolic Interactions and the Environmental Setting at the Origin of Plastids
Proponents of the recent origin of primary plastids from N 2 -fixing filamentous cyanobacteria suggest that the dearth of biologically available nitrogen during most of the Proterozoic and the ability of these organisms to fix nitrogen played a key role in the early establishment of the endosymbiosis [9,[START_REF] Dagan | Genomes of Stigonematalean cyanobacteria (subsection V) and the evolution of oxygenic photosynthesis from prokaryotes to plastids[END_REF]. However, there is no trace of such past ability to fix nitrogen in modern plastids. Furthermore, this metabolic ability is also widespread in many cyanobacterial lineages, including basal-branching clades, which has led to propose that the cyanobacterial ancestor was able to fix nitrogen [START_REF] Latysheva | The evolution of nitrogen fixation in cyanobacteria[END_REF]. Therefore, if N 2 fixation did actually play a role in the establishment of the plastid, it cannot discriminate in favor of a late-versus early-branching cyanobacterial endosymbiont. The closest modern relative of plastids, G. lithophora, lacks the genes necessary for N 2 fixation, which further argues against the implication of this metabolism in the origin of plastids. N 2 fixation is also absent in the cyanobacterial endosymbiont of Paulinella chromatophora, which is considered to be a recent second, independent case of primary plastid acquisition in eukaryotes [START_REF] Archibald | The puzzle of plastid evolution[END_REF][START_REF] Keeling | The number, speed, and impact of plastid endosymbioses in eukaryotic evolution[END_REF][START_REF] Nowack | Chromatophore genome sequence of Paulinella sheds light on acquisition of photosynthesis by eukaryotes[END_REF]. Other hypotheses have proposed a symbiotic interaction between the cyanobacterial ancestor of plastids and its eukaryotic host based on the metabolism of storage polysaccharides. In that scenario, the cyanobacterium would have exported ADP-glucose in exchange for the import of reduced nitrogen from the host [START_REF] Deschamps | Metabolic symbiosis and the birth of the plant kingdom[END_REF].
Although the nature of the metabolic exchanges between the two partners at the origin of primary photosynthetic eukaryotes remains to be elucidated, the exclusive distribution of the Gloeomargarita lineage in freshwater and terrestrial habitats [START_REF] Ragon | 16S rDNA-based analysis reveals cosmopolitan occurrence but limited diversity of two cyanobacterial lineages with contrasted patterns of intracellular carbonate mineralization[END_REF] provides important clues about the type of ecosystem where the endosymbiosis of a Gloeomargarita-like cyanobacterium within a heterotrophic host took place. Like G. lithophora, and similar to most basal-branching cyanobacterial lineages, the first cyanobacteria most likely thrived in terrestrial or freshwater hab-itats [START_REF] Battistuzzi | A major clade of prokaryotes with ancient adaptations to life on land[END_REF][START_REF] Sa ´nchez-Baracaldo | Origin of marine planktonic cyanobacteria[END_REF]. Consistent with this observation, the colonization of open oceans and the diversification of marine planktonic cyanobacteria, including the SynPro clade, occurred later on in evolutionary history, mainly during the Neoproterozoic (1000-541 mya), with consequential effects on ocean and atmosphere oxygenation [START_REF] Sa ´nchez-Baracaldo | Origin of marine planktonic cyanobacteria[END_REF]. Notwithstanding their large error intervals, molecular-clock analyses infer the origin of Archaeplastida during the mid-Proterozoic [START_REF] Parfrey | Estimating the timing of early eukaryotic diversification with multigene molecular clocks[END_REF][START_REF] Eme | On the age of eukaryotes: evaluating evidence from fossils and molecular clocks[END_REF], well before the estimated Neoproterozoic cyanobacterial colonization of oceans. Interestingly, Glaucophyta, the first lineage to diverge within the Archaeplastida (Figures 1 andS1), has been exclusively found in freshwater habitats [START_REF] Kies | Phylum Glaucocystophyta[END_REF], which is also the case for the most basal clade of red algae, the Cyanidiales, commonly associated to terrestrial thermophilic mats [START_REF] Yoon | Defining the major lineages of red algae (Rhodophyta)[END_REF]. In contrast with the classical idea of a marine origin of eukaryotic alga, these data strongly support that plastids, and hence the first photosynthetic eukaryotes, arose in a freshwater or terrestrial environment [START_REF] Blank | Origin and early evolution of photosynthetic eukaryotes in freshwater environments: reinterpreting proterozoic paleobiology and biogeochemical processes in light of trait evolution[END_REF]. This most likely happened on a Proterozoic Earth endowed with low atmospheric and oceanic oxygen concentrations [START_REF] Lyons | The rise of oxygen in Earth's early ocean and atmosphere[END_REF].
Conclusions
We provide strong phylogenomic evidence from plastid-and nucleus-encoded genes of cyanobacterial ancestry for a deep origin of plastids within the phylogenetic tree of Cyanobacteria and find that the Gloeomargarita lineage represents the closest extant relative of the plastid ancestor. The ecological distribution of both this cyanobacterial lineage and extant early-branching eukaryotic algae suggests that the first photosynthetic eukaryotes evolved in a terrestrial environment, probably in freshwater biofilms or microbial mats. Microbial mats are complex communities in which metabolic symbioses between different microbial types, including cyanobacteria and heterotrophic eukaryotes, are common. They imply physical and genetic interactions between mutualistic partners that may have well facilitated the plastid endosymbiosis. In that sense, it will be especially interesting to study the interactions that the cyanobacterial species of the Gloeomargarita lineage establish with other microorganisms, looking for potential symbioses with heterotrophic protists. Our work highlights the importance of environmental exploration to characterize new organisms that can, in turn, be crucial to resolve unsettled evolutionary questions.
EXPERIMENTAL PROCEDURES
Selection of Phylogenetic Markers
We created a local genome database with 692 completely sequenced prokaryotic and eukaryotic genomes, complemented with expressed sequence tags (ESTs) from eight photosynthetic eukaryotes, all downloaded from GenBank. In particular, the database included 122 cyanobacteria and 20 plastids of primary photosynthetic eukaryotes (primary plastids, found in Archaeplastida). Among the cyanobacteria, we added the genome sequence of Gloeomargarita lithophora strain CCAP 1437/1 (GenBank: NZ_CP017675; see the Supplemental Experimental Procedures). To retrieve plastid-encoded proteins and their cyanobacterial orthologs, we used the 149 proteins encoded in the gene-rich plastid of the glaucophyte Cyanophora paradoxa as queries in BLASTP [START_REF] Altschul | Gapped BLAST and PSI-BLAST: a new generation of protein database search programs[END_REF] searches against this local database. Plastid and cyanobacterial protein sequences recovered among the top 300 hits with a cut-off E value < 10^-10 were retrieved and aligned to reconstruct preliminary ML single-protein phylogenetic trees (details can be found in the Supplemental Experimental Procedures). These trees were visually inspected to identify potentially paralogous proteins or HGT among cyanobacteria (see the Supplemental Experimental Procedures). The problematic proteins detected were discarded. Among the remaining proteins, we selected those present in at least two of the three Archaeplastida lineages (glaucophytes, red algae, and green algae and plants; if absent in one lineage, they were replaced by missing data). Thereby, our final dataset comprised 97 plastid-encoded proteins (see Data S1). In addition, we updated the dataset of EGT proteins previously published by Deschamps and Moreira [START_REF] Deschamps | Signal conflicts in the phylogeny of the primary photosynthetic eukaryotes[END_REF], including sequences from newly available genomes. As in the previous case, ML single-protein phylogenetic trees were reconstructed and inspected to discard proteins with paralogs and HGT cases. All datasets are available at http://www.ese.u-psud.fr/article909.html?lang=en.
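As an illustration of how retained markers can be assembled into a supermatrix (with taxa lacking a given marker coded as missing data), a minimal, self-contained sketch is given below. It assumes one trimmed FASTA alignment per marker with consistent species labels in the headers; this layout and the file names are assumptions for the example, not a description of the actual scripts used.

```python
#!/usr/bin/env python3
"""Sketch: concatenate per-marker FASTA alignments into a supermatrix.
Assumes one alignment per marker with identical species labels in the headers;
taxa missing from a marker are filled with gaps. File names are placeholders."""
from pathlib import Path

def read_fasta(path):
    seqs, name = {}, None
    for line in Path(path).read_text().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]
            seqs[name] = []
        elif name:
            seqs[name].append(line.strip())
    return {k: "".join(v) for k, v in seqs.items()}

alignments = [read_fasta(f) for f in sorted(Path("selected_markers").glob("*.fasta"))]
taxa = sorted({t for aln in alignments for t in aln})

supermatrix = {t: [] for t in taxa}
for aln in alignments:
    length = len(next(iter(aln.values())))               # length of this marker
    for t in taxa:
        supermatrix[t].append(aln.get(t, "-" * length))  # missing taxon -> gaps

with open("supermatrix.fasta", "w") as out:
    for t in taxa:
        out.write(f">{t}\n{''.join(supermatrix[t])}\n")

total = sum(len(next(iter(a.values()))) for a in alignments)
print(f"Supermatrix written: {len(taxa)} taxa, {total} aligned positions")
```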
Phylogenetic Analyses
ML trees of individual markers were reconstructed using PhyML [START_REF] Guindon | A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood[END_REF] with the GTR model for rRNA gene sequences and the LG model for protein sequences, in both cases with a four-substitution-rate-category gamma distribution and estimation of invariable sites from the alignments. Branch lengths, tree topology, and substitution rate parameters were optimized by PhyML. Bootstrap values were calculated with 100 pseudoreplicates. Individual ML trees were used to reconstruct a supernetwork [START_REF] Huson | Application of phylogenetic networks in evolutionary studies[END_REF] with the program Splitstree [START_REF] Huson | SplitsTree: analyzing and visualizing evolutionary data[END_REF] with default parameters.
Tree reconstruction of concatenated protein datasets under Bayesian inference was carried out using two independent chains in PhyloBayes-MPI [START_REF] Lartillot | PhyloBayes MPI: phylogenetic reconstruction with infinite mixtures of profiles in a parallel environment[END_REF] with the CAT-GTR model and a four-category gamma distribution. Sequences were also recoded into four amino acid families [START_REF] Rodrı ´guez-Ezpeleta | Detecting and overcoming systematic errors in genome-scale phylogenies[END_REF] using a Python script and analyzed with PhyloBayes-MPI in the same way. Chain convergence was monitored by the maxdiff parameter (chains were run until maxdiff < 0.1).
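The recoding of amino acids into four families can be done with a short script along the following lines. The grouping shown here is one commonly used four-state scheme pooling physicochemically similar residues; it is given purely for illustration, and the exact families used in our analyses follow the reference cited above and may differ in detail.

```python
#!/usr/bin/env python3
"""Sketch: recode a protein alignment into four amino-acid families.
The grouping below is one common four-state scheme (SR4-like); the exact
families used in the study may differ. Gaps/ambiguities are written as '-'."""

FAMILIES = {
    "A": "AGNPST",   # small / polar
    "C": "CHWY",     # aromatic plus C and H
    "D": "DEKQR",    # charged
    "F": "FILMV",    # hydrophobic
}
RECODE = {aa: code for code, members in FAMILIES.items() for aa in members}

def recode_fasta(infile: str, outfile: str) -> None:
    with open(infile) as fin, open(outfile, "w") as fout:
        for line in fin:
            if line.startswith(">"):
                fout.write(line)
            else:
                fout.write("".join(RECODE.get(aa, "-")
                                   for aa in line.strip().upper()) + "\n")

if __name__ == "__main__":
    recode_fasta("supermatrix.fasta", "supermatrix_recoded4.fasta")
```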
The slow-fast method was applied using the program SlowFaster [START_REF] Kostka | SlowFaster, a user-friendly program for slow-fast analysis and its application on phylogeny of Blastocystis[END_REF] to produce ten alignments with an increasing proportion of fast-evolving sites included. These alignments were used for phylogenetic reconstruction using PhyloBayes-MPI as with the complete concatenation.
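The logic of the slow-fast procedure can be illustrated with the simplified sketch below: sites are ranked by an evolutionary-rate proxy and nested alignments are written with a growing share of fast-evolving sites. The proxy used here (number of distinct residues per column) is a deliberate simplification of the criterion implemented in SlowFaster, which counts substitutions within predefined monophyletic groups; the sketch only conveys the idea.

```python
#!/usr/bin/env python3
"""Sketch of the slow-fast idea: rank alignment columns with a crude rate proxy
(number of distinct residues per column, a simplification of the real criterion)
and write ten nested alignments with an increasing share of fast-evolving sites."""
from pathlib import Path

def read_fasta(path):
    seqs, name = {}, None
    for line in Path(path).read_text().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]
            seqs[name] = []
        elif name:
            seqs[name].append(line.strip())
    return {k: "".join(v) for k, v in seqs.items()}

aln = read_fasta("supermatrix.fasta")
taxa = list(aln)
ncol = len(aln[taxa[0]])

# crude per-site rate proxy: number of distinct amino acids (gaps/X ignored)
rates = [len({aln[t][i] for t in taxa} - {"-", "X"}) for i in range(ncol)]
order = sorted(range(ncol), key=lambda i: rates[i])       # slowest sites first

for step in range(1, 11):                                 # ten nested datasets
    keep = sorted(order[: int(ncol * step / 10)])         # add faster sites each step
    with open(f"slowfast_{step:02d}.fasta", "w") as out:
        for t in taxa:
            out.write(f">{t}\n" + "".join(aln[t][i] for i in keep) + "\n")
```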
Estimation of Protein Sequence Distances and Heatmap Construction
Distances among protein sequences were calculated using two different approaches: (1) measuring patristic distances between taxa in phylogenetic trees and (2) from pairwise comparisons of aligned protein sequences. Patristic distances are the sum of the lengths of the branches that connect two taxa in a phylogenetic tree and can account for multiple substitution events to a certain level. We used the glaucophyte C. paradoxa as reference, as it has the shortest branch among the plastid sequences, and measured the patristic distances between this taxon and the other taxa (cyanobacterial species and plastids of red algae and Viridiplantae) on the ML trees of the 97 individual plastid-encoded proteins reconstructed as previously described. Protdist [START_REF] Felsenstein | PHYLIP -phylogeny inference package[END_REF] was used to estimate pairwise distances between the plastid proteins of C. paradoxa and the proteins of the rest of the taxa. Patristic distances and sequence similarities were displayed as heatmaps using the library Matplotlib [START_REF] Hunter | Matplotlib: a 2d graphics environment[END_REF].
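For illustration, patristic distances from a reference taxon can be extracted from a set of gene trees and displayed as a heatmap with a few lines of Python, as sketched below using Biopython and Matplotlib. Tree file names and the taxon label are placeholders, and the published analyses relied on Protdist and custom scripts rather than on this exact code.

```python
#!/usr/bin/env python3
"""Sketch: patristic distances from a reference taxon across a set of gene trees,
displayed as a heatmap. Tree files and the reference label are placeholders."""
from pathlib import Path
from Bio import Phylo
import numpy as np
import matplotlib.pyplot as plt

REFERENCE = "Cyanophora_paradoxa"     # shortest-branch plastid, used as reference
tree_files = sorted(Path("gene_trees").glob("*.nwk"))
trees = [Phylo.read(str(f), "newick") for f in tree_files]

# taxa shared by all trees (rows of the heatmap); the reference must be present
common = set.intersection(*[{leaf.name for leaf in t.get_terminals()} for t in trees])
taxa = sorted(common - {REFERENCE})

# patristic distance = sum of branch lengths on the path between two leaves
matrix = np.array([[t.distance(REFERENCE, taxon) for t in trees] for taxon in taxa])

fig, ax = plt.subplots(figsize=(10, 8))
im = ax.imshow(matrix, aspect="auto", cmap="viridis")
ax.set_yticks(range(len(taxa)))
ax.set_yticklabels(taxa, fontsize=6)
ax.set_xlabel("individual gene trees")
fig.colorbar(im, ax=ax, label="patristic distance to reference")
fig.tight_layout()
fig.savefig("patristic_heatmap.png", dpi=200)
```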
ACCESSION NUMBERS
The accession number for the genome sequence of the Gloeomargarita lithophora strain CCAP 1437/1 reported in this paper is GenBank: NZ_ CP017675.
EVOLUTION OF SECONDARY PLASTID-HARBORING LINEAGES

"Grey, dear friend, is all theory, and green the golden tree of life"
Johann Wolfgang von Goethe. Faust (1804).
EVOLUTION OF SECONDARY PLASTID-HARBORING LINEAGES
Context and objectives
Having addressed the origin of primary plastids in the previous chapter, we turned to the study of secondary plastids. Chlorarachniophytes and euglenids both have secondary plastids that originated from two independent endosymbioses with green algae (Rogers et al., 2007). Nonetheless, multiple nuclear-encoded genes that participate in plastid-located biosynthetic pathways seem to have been transferred from red algal sources in both secondary green lineages (Markunas & Triemer, 2016; Yang et al., 2014).
Similarly, the finding that chromalveolates seem to encode thousands of green algal-derived genes led Moustafa et al. (2009) to suggest a cryptic endosymbiosis with a green alga in diatoms. However, the reanalysis of these genes by Deschamps & Moreira (2012) reduced the number of green genes to less than 3% of the initial claim. Hence, it seems that secondary algae have mosaic genomes; the extent of red- or green-algal-derived genes remains unclear, and it can substantially modify the evolutionary hypotheses proposed to explain them.
Considering that EGTs were very helpful in the study of primary plastids (previous chapter), we used secondary EGTs to evaluate the contribution of red and green algal-derived genes in secondary plastid-bearing algae. We worked on a specific fraction of secondary EGTs: those that were transferred from cyanobacteria to Archaeplastida during primary endosymbiosis, and then from a red or green alga to secondary photosynthetic eukaryotes. These secondary EGTs are important for multiple reasons: 1. they are easy to spot in phylogenetic trees, which allows genes likely acquired via EGT or HGT to be distinguished from vertically inherited genes; 2. the corresponding proteins are targeted to the plastid, where they play important functions; 3. secondary EGTs may help to identify cryptic endosymbioses. Thus, to study the evolution of photosynthetic eukaryotes with secondary plastids we established the following objectives:
1. Identification of genes acquired from red and green algae in secondary plastid-harboring lineages that were compatible with a secondary EGT scenario. This was achieved by using robust phylogenetic analyses with a broad taxon sampling as well as strict marker selection criteria.
2. Functional annotation of red and green algal genes to identify whether there is a correlation between the gene function and its origin that may explain a bias in gene acquisition.
Results
Our results show that approximately half of the putative secondary EGTs of chlorarachniophytes and euglenids identified in our analyses seem to have been transferred from red algal sources (see Figure 18). These results are surprising given that both photosynthetic lineages harbor secondary plastids derived from green algae. By contrast, the vast majority of secondary EGTs in eukaryotic lineages with secondary red plastids derive, as expected, from red algae. The quantification of secondary EGTs allowed us to propose different plausible explanations to account for the differences observed in the origin of these genes among secondary algae.
Figure 18. Genes transferred from red or green algae to secondary photosynthetic eukaryotes. A) Number of red or green algal-like genes for each lineage among the 85 genes analyzed, gray blocks correspond to genes for which the origin could not be unambiguously determined in the corresponding phylogenetic trees. B) Gene functions annotated using EggNOG 4.5 (Huerta-Cepas et al., 2016) for "red" and "green" genes detected in transcriptomes and nuclear genomes of euglenids and chlorarachniophytes.
translocation apparatus [START_REF] Gutensohn | structure and function of protein transport machineries in chloroplasts[END_REF]. Other eukaryotes obtained plastids through secondary endosymbiosis (symbiosis of green or red algae within other eukaryotic cells) or even through tertiary endosymbiosis (symbiosis of secondary photosynthetic eukaryotes within other eukaryotes) [START_REF] Delwiche | Tracing the thread of plastid diversity through the tapestry of life[END_REF]Archibald 2009;Keeling 2013). Euglenida (Excavata) and Chlorarachniophyta (Rhizaria) have green secondary plastids acquired through two independent endosymbioses involving Prasinophyceae and Ulvophyceae green algae, respectively [START_REF] Rogers | The complete chloroplast genome of the chlorarachniophyte Bigelowiella natans: Evidence for independent origins of chlorarachniophyte and euglenid secondary endosymbionts[END_REF][START_REF] Hrdá | The plastid genome of eutreptiella provides a window into the process of secondary endosymbiosis of plastid in euglenids[END_REF]Suzuki, et al. 2016). Photosynthetic species in the Cryptophyta, Alveolata, Stramenopiles and Haptophyta (CASH) lineages have plastids derived from red algae but their evolutionary history remains controversial (Lane and Archibald 2008;Archibald 2009;Keeling 2013).
Whereas phylogenomic analyses of plastid-encoded genes support CASH monophyly, arguing for a single red algal secondary endosymbiosis [START_REF] Yoon | The single, ancient origin of chromist plastids[END_REF][START_REF] Muñoz-Gómez | The New Red Algal Subphylum Proteorhodophytina Comprises the Largest and Most Divergent Plastid Genomes Known[END_REF], most host nuclear gene phylogenies do not support it [START_REF] Baurain | Phylogenomic evidence for separate acquisition of plastids in cryptophytes, haptophytes, and stramenopiles[END_REF][START_REF] Burki | Untangling the early diversification of eukaryotes: a phylogenomic study of the evolutionary origins of Centrohelida, Haptophyta and Cryptista[END_REF]. Some authors tried to reconcile these incongruent results by proposing that a unique phylum (that became extinct or evolved into one of the extant CASH phyla) acquired a red alga through secondary endosymbiosis and, subsequently, transmitted it via serial tertiary endosymbioses to other CASH phyla [START_REF] Larkum | Shopping for plastids[END_REF][START_REF] Sanchez-Puerta | A hypothesis for plastid evolution In chromalveolates[END_REF][START_REF] Bodył | Chromalveolate plastids: direct descent or multiple endosymbioses?[END_REF][START_REF] Baurain | Phylogenomic evidence for separate acquisition of plastids in cryptophytes, haptophytes, and stramenopiles[END_REF][START_REF] Petersen | Chromera velia, endosymbioses and the rhodoplex hypothesis -Plastid evolution in cryptophytes, alveolates, stramenopiles, and haptophytes (CASH lineages)[END_REF].
As in the primary endosymbiosis, secondary or tertiary endosymbioses entail numerous EGTs from the endosymbiotic red or green algal nucleus to the host nucleus. Thus, secondary photosynthetic eukaryotes possess two types of genes that can inform about the phylogenetic identity of their plastids: plastid-encoded genes and nucleus-encoded genes acquired via EGT. Genes of primary plastid genomes and EGTs in Archaeplastida genomes are related to cyanobacteria and have helped to identify the cyanobacterial lineage at the origin of the first plastid (Ponce-Toledo, et al. 2017). Similarly, plastid-encoded genes and EGTs in nuclear genomes of secondary photosynthetic eukaryotes are expected to reveal the red or green algal origin of their plastids. Compared to plastid-encoded genes, EGTs have the additional advantage of informing about possible past plastids secondarily lost or replaced (cryptic plastids). However, while EGTs are valuable for tracking both contemporary and cryptic endosymbioses, their detection within nuclear genome sequences remains complex [START_REF] Stiller | Experimental design and statistical rigor in phylogenomics of horizontal and endosymbiotic gene transfer[END_REF]. EGT detection is rather straightforward in Archaeplastida because cyanobacterial-like genes are easily distinguishable from typical eukaryotic genes. It is more difficult in secondary endosymbioses because the poor resolution of many single-gene phylogenies may hamper distinguishing EGTs from vertically inherited nuclear genes, especially considering the short phylogenetic distance between donors (red and green algae) and acceptors (secondary algae). Two recent studies reporting a high number of genes apparently related to green algal homologs in red-plastid-bearing algae, chromerids (Alveolata) and diatoms (Stramenopiles), are illustrative. Whereas in chromerids the green signal was attributed to phylogenetic artifacts and insufficient sampling of red algal genome sequences [START_REF] Woehle | Red and problematic green phylogenetic signals among thousands of nuclear genes from the photosynthetic and apicomplexa-related Chromera velia[END_REF], in diatoms it was interpreted as evidence for a cryptic green algal endosymbiont [START_REF] Moustafa | Genomic Footprints of a Cryptic Plastid Endosymbiosis in Diatoms[END_REF]. However, reanalyses of the same genes using improved taxonomic sampling and phylogenetic methods largely erased the evidence of potential cryptic green endosymbiosis [START_REF] Burki | The evolutionary history of haptophytes and cryptophytes: phylogenomic evidence for separate origins[END_REF]Deschamps and Moreira 2012;[START_REF] Moreira | What was the real contribution of endosymbionts to the eukaryotic nucleus? Insights from photosynthetic eukaryotes[END_REF].
The impact of horizontal gene transfer (HGT) on eukaryotic evolution remains controversial. HGTs may help to infer the history of genomes and lineages [START_REF] Abby | Lateral gene transfer as a support for the tree of life[END_REF] but they can also introduce phylogenetic noise, notably in the study of EGTs [START_REF] Stiller | Experimental design and statistical rigor in phylogenomics of horizontal and endosymbiotic gene transfer[END_REF]. Through time, secondary photosynthetic eukaryotes may have accumulated HGTs from various sources, including non-endosymbiotic red and green algae. Such HGTs may display phylogenies comparable to those of real EGTs, making them difficult to tell apart. Thus, anomalous phylogenetic signal in certain secondary photosynthetic groups has been interpreted as HGT rather than EGT from cryptic endosymbionts. For example, the analysis of the nuclear genome of the green-plastid-containing chlorarachniophyte Bigelowiella natans indicated that 22% of the genes potentially acquired via HGT had a red algal origin [START_REF] Curtis | Algal genomes reveal evolutionary mosaicism and the fate of nucleomorphs[END_REF]. Because of the phagotrophic ability of chlorarachniophytes, these genes were considered as the result of progressive accumulation of HGTs from red algae or from red-plastid-containing CASH lineages [START_REF] Archibald | Lateral gene transfer and the evolution of plastid-targeted proteins in the secondary plastidcontaining alga Bigelowiella natans[END_REF][START_REF] Yang | An extended phylogenetic analysis reveals ancient origin of "non-green" phosphoribulokinase genes from two lineages of "green" secondary photosynthetic eukaryotes: Euglenophyta and Chlorarachniophyta[END_REF][START_REF] Yang | Phylogenomic analysis of "red" genes from two divergent species of the "green" secondary phototrophs, the chlorarachniophytes, suggests multiple horizontal gene transfers from the red lineage before the divergence of extant chlorarachniophytes[END_REF]. Likewise, studies on euglenids suggested a similar trend for several genes involved in central metabolic pathways (Maruyama, et al. 2011;[START_REF] Yang | An extended phylogenetic analysis reveals ancient origin of "non-green" phosphoribulokinase genes from two lineages of "green" secondary photosynthetic eukaryotes: Euglenophyta and Chlorarachniophyta[END_REF]Markunas and Triemer 2016). Although the unexpected presence of these 'red' genes in chlorarachniophytes and euglenids was first considered as evidence of multiple HGTs, the increasing number of reported cases prompted some authors to propose putative cryptic red algal endosymbioses (Maruyama, et al. 2011;Markunas and Triemer 2016). Nevertheless, a systematic investigation of HGT/EGT is still missing in euglenids and chlorarachniophytes and, as mentioned above, it can be difficult in the context of secondary endosymbioses to distinguish among HGT, EGT, and simply unresolved trees on the basis of single-gene phylogenies (Deschamps and Moreira 2012).
Here, we have focused on a particular group of genes to reduce uncertainty: genes transferred from the original cyanobacterial plastid endosymbiont into the nuclear genome of Archaeplastida and, subsequently, from Archaeplastida into the genomes of complex secondary algae. In Archaeplastida, these genes are often involved in essential plastid functions and tend to be highly conserved [START_REF] Reyes-Prieto | Cyanobacterial Contribution to Algal Nuclear Genomes Is Primarily Limited to Plastid Functions[END_REF]Deschamps and Moreira 2009); we thus expected them to provide strong phylogenetic signal. To identify them, we queried by BLAST the whole predicted proteomes of Guillardia theta and algae. Interestingly, in 7 of these trees, all CASH were monophyletic, arguing for a common origin of the corresponding genes. Almost all of the 85 genes identified here encode plastid-targeted proteins involved in essential plastid functions (fig. 1B and supplementary table S4, Supplementary Material online). In both chlorarachniophytes and euglenids, these nuclear-encoded 'red' genes participate in plastid genome expression (e.g., elongation factors, aminoacyl-tRNA ligases, ribosomal proteins), light harvesting, chlorophyll biosynthesis, and photosystem II assembly. Keeping these important genes implies a plastid-related selective pressure, which makes it improbable that they could have accumulated in the heterotrophic ancestors of green secondary photosynthetic eukaryotes before plastid acquisition.
The disproportion of unexpected gene sources in green versus red secondary photosynthetic lineages (~50% and 10%, respectively) is intriguing and may be interpreted in different ways. First, the green algal ancestors of chlorarachniophyte and euglenid plastids may have accumulated many red algal genes by HGT. However, such rampant HGT involving essential plastid genes has not been reported so far in any green alga. Second, 'red' genes may have accumulated in chlorarachniophyte and euglenid nuclear genomes by numerous HGTs, for example from food sources. This would imply a long-lasting feeding preference towards red prey in both lineages and also that, for unknown reasons, HGT is much more frequent in secondary green lineages than in red ones.
FIG 1. Genes of red and green algal ancestry in secondary photosynthetic eukaryotes. (A) Number of red or green algal-like genes for each lineage among the 85 genes analyzed; gray blocks correspond to genes for which the origin could not be unambiguously determined in the corresponding phylogenetic trees. (B) Gene functions annotated using EggNOG 4.5 (Huerta-Cepas, et al. 2016) for 'red' and 'green' genes detected in transcriptomes and nuclear genomes of euglenids and chlorarachniophytes.
Moreover, the 'red' genes are often shared by all species of the relatively rich taxon sampling available for chlorarachniophytes (fig. 2B), indicating that their acquisition predated the chlorarachniophyte diversification but stopped afterwards (no tree supported recent HGTs involving only chlorarachniophyte subgroups). Despite a poorer taxonomic sampling, a similar trend was observed in euglenids. In addition, chlorarachniophytes and euglenids often branched deep among the red lineages, arguing for an ancient timing of 'red' gene acquisitions comparable to that of the CASH phyla themselves. These observations allow a third interpretation: 'red' genes could have been acquired from ancient cryptic red plastids in chlorarachniophytes and euglenids. The 'red' genes found in chlorarachniophytes are also shared with plastid-endowed stramenopiles and alveolates.
Would it be possible that they were transferred from a single red algal endosymbiont ancestral to the whole SAR supergroup (Stramenopiles, Alveolata, and Rhizaria)? This original red plastid would have been lost in many phyla and replaced by a green alga in chlorarachniophytes. However, this scenario poses several problems. First, traces of past red algal plastids, in the form of EGTs, in most non-photosynthetic SAR lineages are scarce and controversial [START_REF] Elias | Sizing up the genomic footprint of endosymbiosis[END_REF]Stiller, et al. 2009;[START_REF] Stiller | Experimental design and statistical rigor in phylogenomics of horizontal and endosymbiotic gene transfer[END_REF].
Second, chlorarachniophytes constitute a relatively late-emerging SAR branch (Sierra, et al. 2016); if their present-day green plastid replaced a former red one, this red plastid would have had to be present until recently and been lost in all other rhizarian lineages, which may seem unparsimonious. The case of euglenids is even more difficult to interpret as these excavates are not closely related to any other photosynthetic lineage. In addition, massive sequence data remain much more limited for euglenids than for chlorarachniophytes (only a few transcriptomes available, see supplementary table S1, Supplementary Material online), making it difficult to infer the relative age of possible gene transfers. Nonetheless, 'red' genes were often shared by several euglenids, suggesting a similar pattern of ancient acquisition as in chlorarachniophytes (supplementary figs. S1-S85, Supplementary Material online).
Our results support the presence of an unexpectedly high number of genes of red algal affinity in secondary green algae, euglenids and chlorarachniophytes, which is significantly higher than the frequency of 'green' genes in algae with secondary red plastids, the CASH lineages. Since we have focused on a subset of genes selected because of their strong phylogenetic signal, it is uncertain whether this conclusion can be generalized to the rest of HGTs potentially present in the genomes of these algae. However, we did not identify any particular bias in our gene selection process that could have artificially enriched the 'red' gene frequency observed in euglenids and chlorarachniophytes. Despite the methodological problems inherent to global genome analyses, including the highly unbalanced representation of red and green algal genomes in sequence databases (Deschamps and Moreira 2012), the study of the chlorarachniophyte B. natans genome already pointed in that direction, with 22% of EGT genes of apparent red algal ancestry [START_REF] Curtis | Algal genomes reveal evolutionary mosaicism and the fate of nucleomorphs[END_REF]. The origin of 'red' genes in euglenids and chlorarachniophytes, either by cumulative HGT or by EGT from cryptic red algal plastids, remains mysterious, but our work indicates that they were acquired early in both groups and that they fulfill functions critical for plastid activity and maintenance. Interestingly, indisputable evidence supports that in the dinoflagellate genus Lepidodinium, a third group of complex algae with green plastids, a former red plastid was replaced by the current green one, leading to a mosaic plastid proteome encoded by a mix of red and green algal genes [START_REF] Minge | A phylogenetic mosaic plastid proteome and unusual plastidtargeting signals in the green-colored dinoflagellate Lepidodinium chlorophorum[END_REF] reminiscent of those that we found in euglenids and chlorarachniophytes. It has been proposed that, since they retain more gene-rich genomes than green ones, red plastids have an increased capacity for autonomous metabolism that could explain why they are more widespread as secondary plastids across the diversity of eukaryotes (the "portable plastid" hypothesis [START_REF] Grzebyk | The mesozoic radiation of eukaryotic algae: the portable plastid hypothesis[END_REF]). It is thus tempting to speculate that euglenids and chlorarachniophytes represent cases similar to Lepidodinium, with green plastids that replaced initial red ones. Even if this hypothesis turns out to be wrong and these cryptic endosymbioses never existed, the ancient acquisition by an unknown mechanism of a significant number of red algal genes in both groups before their diversification and, especially, their maintenance in contemporary species through millions of years of evolution, suggest that the 'red' genes were instrumental in the establishment of the secondary green plastids. Sequencing and analysis of additional genomes of euglenids, chlorarachniophytes, and their non-photosynthetic relatives will help to refine the inventory of 'red' genes and learn more about their timing and, eventually, mechanism of acquisition.
ORIGIN OF THE SELMA TRANSLOCON IN SECONDARY PLASTIDS OF RED ALGAL ORIGIN
"About 30 years ago there was much talk that geologists ought only to observe and not theorise; and I well remember some one saying that at this rate a man might as well go into a gravel-pit and count the pebbles and describe the colours. How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service!"
Charles Darwin. Letter to Henry Fawcett (September 18, 1861)
ORIGIN OF THE SELMA TRANSLOCON IN SECONDARY PLASTIDS OF RED ALGAL ORIGIN
Context and objectives
As we have seen in the previous chapter, although secondary plastid-harboring lineages have been widely studied, there are still many questions that remain unresolved.
In regard to the evolution of secondary red plastids, the major debate concerns the number of endosymbiotic events at the origin of all chlorophyll c-containing algae (section 1.3.2). The monophyly of the SELMA translocon is considered to provide strong evidence of the single origin of secondary red plastids. This protein import system is thought to be shared by all lineages with secondary red plastids surrounded by four membranes.
However, the monophyly of SELMA has been inferred from phylogenetic analyses of very few proteins (Felsner et al., 2011). The claim for a SELMA system shared by all secondary red algae has mostly been based on the identification, using similarity searches, of SELMA-like homologs containing transit peptide (TP) sequences (Stork et al., 2012).
However, TP sequences usually have lineage-specific characteristics that make their identification difficult with oversimplified TP prediction models. Although several programs have been developed to account for the variability of TP sequences among secondary plastid-harboring lineages [START_REF] Cilingir | ApicoAP: The first computational model for identifying apicoplast-targeted proteins in multiple species of apicomplexa[END_REF][START_REF] Gruber | Plastid proteome prediction for diatoms and other algae with secondary plastids of the red lineage[END_REF][START_REF] Gschloessl | HECTAR: A method to predict subcellular targeting in heterokonts[END_REF], prediction programs may fail (false positives or false negatives) and some proteins may appear to lack TP sequences simply because they were not completely sequenced. Above all, it is important to consider that similarity searches and TP detection do not give any information about the evolutionary origin of proteins; phylogenetic analyses must be performed in order to determine the monophyly of the different components.
To continue with our study of the evolution of secondary photosynthetic eukaryotes, we conducted a comprehensive phylogenetic study of the SELMA translocon aiming to provide key insights into the acquisition of secondary red plastids and their evolution: do all SELMA components derive from the red algal endosymbiont as proposed by the standard model? Are all the SELMA proteins monophyletic?
In this last part of my PhD research project, we performed phylogenetic analyses on the largest collection of putative SELMA components reported so far (Stork et al., 2012) and analyzed the best-fit scenario for the evolution of this translocon under the chromalveolate hypothesis or a model of serial endosymbioses.
Results
Our phylogenetic analyses of putative SELMA components show that most of these proteins do not have a monophyletic origin (Table 2). The composition and origin of the SELMA translocon in cryptophytes are markedly different from those of the import machinery established in stramenopiles, haptophytes and alveolates. In some cases, these red lineages seem to have recruited a different set of paralogous proteins from the ERAD system of the red algal endosymbiont. We show that the evolutionary history of SELMA is not as straightforward as commonly assumed in the standard models proposed to explain the origin and evolution of secondary red algae. The next section presents a preliminary draft of the article manuscript with a detailed description of our results.
INTRODUCTION
Eukaryotes acquired photosynthesis through the engulfment of a cyanobacterial symbiont more than 1.5 billion years ago (Yoon et al. 2004). Three extant lineages are known to have diversified after this primary endosymbiosis: Glaucophytes, Viridiplantae and Rhodophytes, composing the Archaeplastida supergroup [START_REF] Adl | The new higher level classification of eukaryotes with emphasis on the taxonomy of protists[END_REF]. All photosynthetic Archaeplastida carry a plastid surrounded by two membranes and containing a reduced genome that derived from the cyanobacterial endosymbiont.
Depending on the lineage, these small circular genomes encode 60 to 250 genes involved in their own maintenance and expression, as well as in the building of the photosystem [START_REF] Barbrook | Why are plastid genomes retained in nonphotosynthetic organisms?[END_REF].
All other proteins acting in the plastid are encoded in the nuclear genome, transcribed and translated via the canonical machinery and relocated to the plastid compartment using a specialized addressing pathway. This pathway requires an N-terminal tag sequence called the transit peptide. Unfolded proteins are handled by a cytosolic guidance complex that delivers them to the outer plastid membrane. Two multi-protein integral translocators of the inner and outer chloroplast membranes, known as Tic and Toc, then recognize the transit peptide and allow proteins to enter the lumen of the plastid, where they can either stay soluble or be integrated into the chloroplast or thylakoid membranes [START_REF] Sjuts | Import of Soluble Proteins into Chloroplasts and Potential Regulatory Mechanisms[END_REF].
A survey of the genomes of Archaeplastida has shown that many of the genes they encode are not phylogenetically related to eukaryotes but to cyanobacteria (Martin et al. 2002). These genes were acquired from the endosymbiont soon after primary endosymbiosis via a process called Endosymbiotic Gene Transfer (EGT) (Martin et al. 1998). Many of these genes are addressed to the plastid in extant Archaeplastida (Reyes-Prieto et al. 2006), but they could also have found a role in other compartments of the cell, making primary endosymbiosis a remarkable source of metabolic reshuffling and innovation. The Tic/Toc machinery is one example, as it is composed of proteins from both the host and the symbiont of primary endosymbiosis (Shi and Theg 2013). Moreover, because the composition of Tic/Toc is identical in all Archaeplastida species, it has been considered, together with several lines of phylogenetic evidence (Moreira et al. 2000;Rodríguez-Ezpeleta et al. 2005), a strong indication that all Archaeplastida share a common ancestor that derived from a single primary endosymbiotic event (McFadden and van Dooren 2004;[START_REF] Steiner | Homologous protein import machineries in chloroplasts and cyanelles[END_REF]).
Although the nature of the common eukaryotic ancestor of Archaeplastida is undefined, a recent study traces the ancestor of the primary plastid to an extinct cyanobacterium related to the "early diverging" freshwater Gloeomargaritales (Ponce-Toledo et al. 2017). There is only one other evolutionary event comparable to primary endosymbiosis, involving a marine cyanobacterium related to the Prochlorococcus/Synechococcus clade and a phagotrophic cercozoan, which gave rise to the poorly diversified thecate amoebas of the genus Paulinella (Marin et al. 2005;[START_REF] Yoon | A single origin of the photosynthetic organelle in different Paulinella lineages[END_REF]). These species show a lower degree of integration of the organelle, with a less reduced genome, a moderate number of EGTs and few identified cases of protein addressing (Nowack et al. 2011;[START_REF] Gagat | Protein translocons in photosynthetic organelles of Paulinella chromatophora[END_REF]).
Photosynthesis in eukaryotes is not restricted to Archaeplastida and Paulinella. Many other distant phyla also carry plastids that derive from so-called 'complex' endosymbioses. Chlorarachniophytes (Rhizaria) and Euglenids (Excavata) both carry plastids with pigments and photosystems similar to green algae and surrounded by three and four membranes, respectively. Four phyla contain species with plastids similar to red algae, containing chlorophyll c and surrounded by four membranes: Cryptophytes, Alveolates, Stramenopiles and Haptophytes, with the exception of the Dinoflagellates (Alveolates) that only have three membranes (Keeling 2010). The presence of additional membranes and the similarities with plastid features of Archaeplastida early led to the hypothesis that all these complex lineages evolved by secondary endosymbioses involving heterotrophic eukaryote hosts and green or red algae [START_REF] Taylor | Implications and Extensions of the Serial Endosymbiosis Theory of the Origin of Eukaryotes[END_REF]. Phylogenetic analyses of plastid-encoded genes repeatedly proved this point and even determined that the plastids of Euglenids and Chlorarachniophytes derive from two independent secondary endosymbioses involving a Prasinophyceae and an Ulvophyceae, respectively (Rogers et al. 2007;Hrdá et al. 2012;Suzuki et al. 2016), while the plastids of the four red lineages appear to trace back to a single red algal ancestor (Petersen et al. 2014). However, efforts aiming at reconstructing the phylogeny of eukaryotic nuclei have failed to retrieve the monophyly of the CASH super-group (Baurain et al. 2010). Moreover, all CASH lineages comprise a large proportion of non-photosynthetic species, including early diverging branches, with no detectable traces of plastids, nor leftover genes related to Archaeplastida, leaving open the possibility of several independent secondary endosymbioses, in each CASH lineage, with very similar red algae.
The incongruence between plastid phylogeny and nuclear phylogeny, and the existence of heterotrophic chromists, are interpreted in two ways: Cavalier-Smith advocates that artifacts in molecular phylogenies are to blame and that the history told by cellular features as well as gene presence/absence patterns should prevail. For him, these latter elements are compatible with the monophyly of Chromists as a whole, which emerged from a unique founder secondary endosymbiosis followed by many (at least 16) secondary losses (Cavalier-Smith 2017). Other authors consider that the monophyly of all secondary red plastids is compatible with the polyphyly of secondary photosynthetic lineages if the red plastid was transmitted via a chain of higher-order endosymbioses. Sometimes termed the "rhodoplex" hypothesis (Bodył et al. 2009;Petersen et al. 2014;Stiller et al. 2014), this scenario depicts a single first secondary endosymbiosis setting up the secondary rhodoplast in one CASH lineage. This rhodoplast was then later transmitted by tertiary endosymbiosis to another CASH lineage, which in turn passed it on to another lineage by quaternary endosymbiosis. In such a scenario, phylogenies reconstructed using plastid genes will depict a monophyletic ancestry of the plastid in all CASH, each lineage nested within one another. At the same time, phylogenies of nuclear genes of the corresponding lineages may support the polyphyly of CASH lineages. Finally, given the observed acceleration of molecular evolution of plastid-encoded sequences (Deschamps and Moreira 2009), it is also possible that the monophyly of secondary red plastids is an artifact, and that multiple secondary endosymbioses involving closely related red algae happened in distant parts of the eukaryotic tree.
As in the case of Archaeplastida, nuclear genomes of secondary photosynthetic eukaryotes are complex mosaics composed of genes inherited from their eukaryotic ancestors, as well as genes transferred by EGT from their former endosymbionts (Lane and Archibald 2008). The latter being already eukaryote-prokaryote assemblages, the final mosaic in secondary lineages presents a higher level of complexity, with genes from cyanobacteria and two types of genes from eukaryotes.
In Chlorarachniophytes and Cryptophytes, an additional genetic compartment exists corresponding to the former nucleus of the secondary symbiont [START_REF] Greenwood | The Cryptophyta in relation to phylogeny and photosynthesis[END_REF]Hibberd and Norris 1984). This nucleomorph is located in the periplastidial space (PPS), between the second outermost membrane (or Periplastidial Membrane, PPM) and the plastid double membrane (PM) and contains a vestigial genome encoding 300 to 450 genes involved in its own replication and expression as well as in some metabolic functions of the plastid [START_REF] Maier | The nucleomorph genomes of cryptophytes and chlorarachniophytes[END_REF].
Similarly to Archaeplastida, a great proportion of nucleus-encoded proteins in secondary photosynthetic lineages needs to be addressed to the plastid. While the Tic/Toc machinery also exists in the two innermost membranes of secondary plastids (Stork et al. 2013), proteins must also pass through one (euglenids and dinoflagellates) or two (chlorarachniophytes and all other CASH) additional outer membranes. Concerning the outermost membrane, two observations have helped to understand how it is crossed.
First, in Cryptophytes, Stramenopiles and Haptophytes this membrane is fused with the nuclear envelope and populated with 80S ribosomes, making it an equivalent of the reticulum membrane (Cavalier-Smith 2002). Second, in every secondary lineage, the leading sequence of nuclear-encoded plastid-located proteins contains a bipartite transit signal (BTS), a combination of a signal peptide and a transit peptide, indicating that plastid-targeted proteins are translocated into the reticulum using the Sec61 secretion pathway [START_REF] Bhaya | Targeting proteins to diatom plastids involves transport through an endoplasmic reticulum[END_REF]. In the special case of Alveolates, where the plastid is not located in the reticulum, it has been shown that proteins are packaged into vesicles to be transported toward the plastid, where the vesicles fuse and deliver proteins into the PPS [START_REF] Agrawal | More Membranes, more Proteins: Complex Protein Import Mechanisms into Secondary Plastids[END_REF]. While the existence of a dedicated translocation machinery at the PPM was theorized long ago, its actual functioning in CASH phyla has only recently been unveiled (Stork et al. 2013). Conversely, for chlorarachniophytes, the story stops here, and no one knows how targeted proteins cross the PPM.
The understanding of how the PPM is crossed in CASH species came from the analysis of the gene content of the nucleomorph genome of the cryptophyte Guillardia theta, where Sommer et al. identified components of the former red-algal ERAD machinery (Sommer et al. 2007). In the same work, putative homologs of the components found in nucleomorphs were also detected in the genomes of alveolates and stramenopiles, with additional BTS, suggesting the relocation of their protein products to the plastid. ERAD (Endoplasmic Reticulum Associated Degradation) is a ubiquitin-dependent pathway used by all eukaryotic cells to extract misfolded proteins from the reticulum back into the cytoplasm, where they are degraded by the proteasome [START_REF] Meusser | ERAD: the long road to destruction[END_REF]. In the context of a minimized cytoplasm like the PPC, where it is not yet clear whether there is an endomembrane network, it was speculated that this copy of ERAD could have been repurposed to create a translocon importing proteins across the PPM. This putative new complex was named SELMA, for Symbiont-specific Erad-Like MAchinery, and the first experimental evidence supporting its function was provided by fluorescence localization and preprotein transit peptide interaction experiments on plastid-targeted Derlins (the putative constituents of the translocation pore) (Hempel et al. 2009), as well as by a conditional null mutant of the same Derlin in Toxoplasma gondii [START_REF] Agrawal | Genetic evidence that an endosymbiont-derived endoplasmic reticulum-associated protein degradation (ERAD) system functions in import of apicoplast proteins[END_REF].
The structure of the SELMA translocon has been further dissected both in silico (Felsner et al. 2011;[START_REF] Moog | In silico and in vivo investigations of proteins of a minimized eukaryotic cytoplasm[END_REF]Stork et al. 2012) and by functional analyses using mainly two models: Toxoplasma gondii and Phaeodactylum tricornutum (Hempel et al. 2010;Agrawal et al. 2013;[START_REF] Lau | N-terminal lysines are essential for protein translocation via a modified ERAD system in complex plastids[END_REF][START_REF] Lau | Protein-protein interactions indicate composition of a 480 kDa SELMA complex in the second outermost membrane of diatom complex plastids[END_REF][START_REF] Fellows | A Plastid Protein That Evolved from Ubiquitin and Is Required for Apicoplast Protein Import in Toxoplasma gondii[END_REF]). As of now, SELMA is hypothesized to function using about 15 proteins classified in four functional categories (Table 1). (1) A pore made of a complex of Derlin (Der1) proteins through which proteins enter the PPC. (2) A ubiquitination machinery to transfer ubiquitin moieties to translocated pre-proteins so they can be recognized by the Cdc48 complex. It is composed of a SELMA-specific polyubiquitin, a ubiquitin-activating enzyme E1, a conjugating enzyme E2 and a ubiquitin ligase E3. To avoid a potential degradation of translocated proteins by the proteasome of the PPC, and because the pre-proteins need to be devoid of ubiquitin to cross the Tic/Toc complex, there is also a hypothesized de-ubiquitination enzyme (Hempel et al. 2010). (3) A Cdc48 complex consuming ATP to pull the ubiquitinated pre-proteins through the Derlin pore, composed of Cdc48, Ufd1 and Npl4. (4) A set of accessory chaperones (Stork et al. 2012). Using this survey, the authors argued that all CASH phyla possess a symbiont-derived SELMA machinery.
However, apart from the Derlins (Hirakawa et al. 2012;Petersen et al. 2014;Cavalier-Smith 2017), cdc48 [START_REF] Agrawal | Genetic evidence that an endosymbiont-derived endoplasmic reticulum-associated protein degradation (ERAD) system functions in import of apicoplast proteins[END_REF]Felsner et al. 2011;[START_REF] Kienle | Shedding light on the expansion and diversification of the Cdc48 protein family during the rise of the eukaryotic cell[END_REF], and Uba1 (Felsner et al. 2011), no systematic phylogenetic analysis of the components of the SELMA complex was ever done.
The various phylogenetic analyses of the Cdc48 protein family have concluded that all CASH inherited their SELMA copies from a red alga. From this single line of evidence, the community has taken for granted that the whole SELMA complex has a common ancestry across all CASH, the cryptophyte nucleomorph-encoded version being the "original setup" because it is still partially encoded in the former symbiont nucleus (Gould et al. 2015;Cavalier-Smith 2017). This, however, needs to be validated. In this work, we propose to determine the phylogeny of all described components of the SELMA complex and to assess whether they support the hypothesis of a monophyletic origin of the complex in the CASH super-group.
MATERIAL AND METHODS
Dataset construction and phylogenetic inferences
For all protein families analyzed in this work, the starting material is the set of sequence accession numbers reported in table 1 and supplementary table S1 of Stork et al. (2012). The corresponding protein sequences were used to query a local custom genome and transcriptome database using PSI-BLAST (Altschul et al. 1997), allowing for 5 rounds of sequence searches and an E-value threshold of 1e-5. The composition of this database is available in supplementary table 1. All hits retrieved using PSI-BLAST were collected in a first multi-protein file and were aligned using MAFFT v7.205 with default parameters [START_REF] Katoh | Recent developments in the MAFFT multiple sequence alignment program[END_REF]. Gaps and poorly informative sites were removed using BMGE (Criscuolo and Gribaldo 2010). Some of the SELMA component proteins are rather short; depending on the size of each alignment, we used either the BLOSUM62 or the BLOSUM30 substitution matrix in BMGE to maximize the number of recovered sites for phylogenetic tree reconstruction. A first set of phylogenetic trees was reconstructed using FastTree v2.1.7 with default parameters (Price et al. 2010). These first trees were manually inspected and pruned to remove duplicate sequences as well as groups of similar proteins that were not related to the orthologous groups of interest. The remaining sequences were realigned and trimmed using the same parameters, and final trees were reconstructed with IQ-TREE v1.5.5 using two combinations of parameters, LG+G(4 cat)+I and PMSF (LG+C60+F+G(4 cat)), using as guide trees the ones produced with the first parameter set [START_REF] Minh | Ultrafast approximation for phylogenetic bootstrap[END_REF][START_REF] Nguyen | IQ-TREE: A fast and effective stochastic algorithm for estimating maximum-likelihood phylogenies[END_REF].
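The sketch below strings the above steps together (PSI-BLAST search, MAFFT alignment, BMGE trimming, FastTree preliminary tree, IQ-TREE final trees with LG and PMSF models). It is an illustration rather than the exact pipeline used: option names follow common usage of these tools and should be verified against the installed versions, and all file, database and jar names are hypothetical.

```python
# Hedged sketch of the search-align-trim-tree pipeline described above.
import subprocess

def run(cmd, stdout=None):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True, stdout=stdout)

def pipeline(query="selma_queries.fasta", db="custom_genomes_db", prefix="der1"):
    # 1. PSI-BLAST: 5 iterations, E-value threshold 1e-5, tabular output
    run(["psiblast", "-query", query, "-db", db,
         "-num_iterations", "5", "-evalue", "1e-5",
         "-outfmt", "6", "-out", prefix + "_hits.tsv"])
    # (hit sequences are then extracted into prefix + "_hits.fasta"; not shown)

    # 2. Alignment with MAFFT (default parameters)
    with open(prefix + "_aligned.fasta", "w") as out:
        run(["mafft", prefix + "_hits.fasta"], stdout=out)

    # 3. Trimming with BMGE (BLOSUM62 here; BLOSUM30 for short/divergent sets)
    run(["java", "-jar", "BMGE.jar", "-i", prefix + "_aligned.fasta",
         "-t", "AA", "-m", "BLOSUM62", "-of", prefix + "_trimmed.fasta"])

    # 4. Preliminary tree with FastTree (protein mode is the default)
    with open(prefix + "_fasttree.nwk", "w") as out:
        run(["FastTree", prefix + "_trimmed.fasta"], stdout=out)

    # 5. Final trees with IQ-TREE: LG+I+G4, then PMSF (LG+C60+F+G) using the
    #    first tree as guide
    run(["iqtree", "-s", prefix + "_trimmed.fasta", "-m", "LG+I+G4",
         "-pre", prefix + "_lg"])
    run(["iqtree", "-s", prefix + "_trimmed.fasta", "-m", "LG+C60+F+G",
         "-ft", prefix + "_lg.treefile", "-pre", prefix + "_pmsf"])

if __name__ == "__main__":
    pipeline()
```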
RESULTS
Stork et al. (2012) identified ERAD and SELMA components in various protists using BLAST similarity searches. In their model, it is expected that all secondary endosymbiotic lineages carry at least two copies of each functional component of an ERAD-like machinery: one involved in ERAD, the other in SELMA.
We carried out phylogenetic analyses of 14 components shared by ERAD and SELMA. We gathered all previously reported protein identifiers and used them to retrieve as many similar proteins as possible using psi-blast against a custom genome database.
We then used these collections of similar proteins to reconstruct phylogenetic trees.
Previous reports argued about the difficulty of reconstructing reliable phylogenetic trees, due to a putatively high degree of divergence of the SELMA proteins. Although we also had difficulties obtaining well-resolved trees, in most cases we found sufficient phylogenetic signal to infer the evolution of SELMA components.
Orthologs classification
In the following part, the terminology of previous studies will be used: ERAD components in secondary endosymbiotic lineages will have the prefix h (for host), and SELMA components will be prefixed s (for symbiont). However, we must stress that the host and symbiont prefixes are only a convenience to differentiate ERAD and SELMA; they do not indicate, as we will show below, that the corresponding protein is actually phylogenetically derived from the former host ERAD or from the red algal symbiont ERAD.
In the case of cryptophytes, all components encoded in the nucleomorph genome are de facto part of SELMA and evolved from the red algal symbiont. For all other phyla, the classification of each protein as part of the cytosolic ERAD or the plastidial SELMA was determined by Stork et al. using similarity as a proxy of their distance to ERAD genes from Saccharomyces and ERAD genes from red algae. Additionally, if a BTS was detected in an ERAD-like protein existing in addition to the cytosolic copy, it was considered to be potentially part of the symbiont-derived translocation machinery. However, plastid-targeted does not necessarily mean symbiont-derived. Moreover, as has been repeatedly reported, the similarity-ordered hit list produced by BLAST searches does not reflect the actual phylogenetic relationships between the query and the hits (Koski and Golding 2001). Thus, when trying to circumscribe groups of orthologs in large protein families, it is common to mix up members between different paralogous groups. As a matter of fact, from our phylogenetic trees we determined that some Ubc paralogs were misclassified.
The Der1 translocon
Phylogenetic analyses of the SELMA pore-forming Der1 proteins are scarce and uncertain about their shared ancestry (Hirakawa et al. 2012;Petersen et al. 2014).
Cavalier-Smith recently published a set of phylogenies of the Derlin proteins in which he redefined two paralogous groups, namely DerA and DerB, corresponding respectively to Der1 and Dfm1, the two ERAD members found in Saccharomyces cerevisiae (Cavalier-Smith 2017). In his interpretation, SELMA DerA is only present in the nucleomorph of cryptophytes and is the only copy in this lineage. In all other CASH, he found two SELMA copies that would have derived from DerB and been duplicated in the ancestor of Chromists.
When looking at the many versions of the phylogenetic trees of Derlin proteins in the supplementary information of Cavalier-Smith (2017), it appears that the phylogenetic signal is very unstable. We had similar issues with our analyses and were unable to obtain a stable classification: the placements of Der1 and Dfm1 of Saccharomyces in our trees were very unstable and these proteins almost never clustered with similar proteins of other fungi, questioning their validity as references to classify the two paralogous groups.
The ubiquitination machinery: Uba1, Ubc, Hrd1, ptDUP

Uba1 proteins, the ubiquitin-activating enzymes (E1), catalyze the attachment of ubiquitin moieties to the ubiquitin-conjugating enzyme (E2). The gene phylogeny of Uba1 shows that haptophytes, stramenopiles and alveolates encode an sUba1 gene that derives from the same red algal hUba1 ancestor gene. However, in cryptophytes, contrary to what was previously published (Felsner et al. 2011), the putative protein corresponding to sUba1 is likely a duplication of the host's ERAD hUba1 (table 1, supplementary fig. S1). The ubiquitin-conjugating (Ubc) enzymes E2 transiently carry ubiquitin for a future transfer onto targeted pre-proteins. In their report, Stork et al. detected that different secondary lineages have a putative SELMA copy derived from various isoforms of Ubc. However, a general tree of all isoforms clearly clusters all of these candidates in a single subfamily, which we called Ubc_new in Table 1 because it did not group with any of the eight ERAD isoforms identified so far (Figure 1A). The topology of the phylogenetic tree restricted to Ubc_new displays a shared origin of the sUbc proteins in the Alveolates-Stramenopiles-Haptophytes assemblage, likely by EGT from a red algal ancestor of their plastid (Table 1, Figure 1B). In cryptophytes, we detected an early duplication of the Ubc7 gene of the host. One copy has a putative BTS, suggesting that one of the host ERAD-like duplicates might be directed to the plastid as part of the SELMA machinery, but this would need functional confirmation. Interestingly, cryptophytes possess a higher number of Ubcs, including two encoded in the nucleomorph, with no detected homologs in other lineages.
Ubiquitin ligases E3 (Hrd1) transfer ubiquitin to plastid pre-proteins. As reported previously (Hempel et al. 2010), only cryptophytes possess an sHrd1 protein, encoded in the nucleomorph. Hempel et al. used relaxed similarity searches to try to find a protein supporting the same function in stramenopiles. They identified ptE3P, a protein that has a measurable ubiquitin transferase activity and is located in the PPC. Our similarity searches show that this protein exists in stramenopiles and haptophytes but with no apparent phylogenetic relationships (the corresponding gene tree is unresolved; see supplementary fig. S10), indicating that this function is probably carried out by another type of enzyme in alveolates. Altogether, the recruitment of different proteins to catalyze the same function indicates that there is no shared ancestry of sHrd1 in CASH.
For the SELMA model to be correct, targeted proteins need to be deubiquitinated before passing through the two innermost membranes of the plastid, and also to avoid their destruction by the proteasome of the PPC. Hempel et al. (2010) detected in stramenopiles a protein putatively supporting this function. Similar to our results for Uba1 and some Ubc proteins of cryptophytes, we found that the plastid-targeted deubiquitinases derive from a duplication of a host gene (see supplementary fig. S11). Likewise, we found that the symbiont-specific deubiquitinase reported in haptophytes by Stork et al. (2012) is also a paralogous gene of the host. However, we could not detect a bipartite targeting sequence in the haptophyte proteins. The origin of the plastid-targeted deubiquitinases in stramenopiles is unclear.
The Cdc48-Ufd1-Npl4 complex.
Cdc48 is an ATPase promoting the transport of ubiquitinated proteins through the Derlin complex. We found, like other authors, two copies of sCdc48 (sCdc48-1 and sCdc48-2) in cryptophytes, stramenopiles and haptophytes, and only one in plastid-bearing alveolates (Table 1). In cryptophytes, one copy is encoded in the nucleomorph while the other was transferred to the nucleus. All trees reported so far strongly suggest an early duplication of a single red algal hCdc48 gene in the ancestor of all CASH, one copy having likely been lost in the alveolate ancestor. We also found a clear red algal ancestry for all sCdc48 genes. In our tree reconstructed with the LG model (supplementary fig. S14), we observed that the nucleomorph sCdc48 copy branches between red algae and all other CASH sCdc48, including the nucleus-encoded cryptophyte one. However, the tree topology is difficult to reconcile with the commonly relayed idea that this nucleomorph copy is the ancestor of the others, possibly through higher-order endosymbioses. Indeed, some duplications are lineage-specific and do not correspond to the cryptophyte isoforms, as one would expect if they had been transmitted via EGT. Additionally, when using the PMSF site-heterogeneous model of IQ-TREE, the nucleomorph sCdc48 copy branched among different red algae than the other CASH sCdc48 sequences, suggesting the possibility that these phylogenies suffer from hidden paralogy (supplementary fig. S15).
Concerning the cofactors sUfd1 and sNpl4, we could determine that both are of red algal origin in cryptophytes but are not related to other CASH genes (table 1, supplementary figures S12-S13). Instead, our trees show that the PPC-targeted sUfd1 and sNpl4 of the haptophyte-stramenopile-alveolate assemblage both share a common ancestral gene, but we could not properly determine its origin. These results suggest an independent recruitment of different ancestral genes in cryptophytes compared to the other CASH lineages.
Accessory proteins
An Hsp70 chaperone might be involved in the correct folding of translocated proteins in the PPC. In cryptophytes, the symbiont-specific sHsp70 is encoded in the nucleomorph, clear evidence of its red algal ancestry. In our tree, the PPC-located Hsp70 homologs of stramenopiles group with potential symbiont copies of haptophytes, but we could not clearly determine whether all these proteins derive from the red endosymbiont or from an early duplicated gene of the host. In any case, they are definitely not related to the nucleomorph-encoded Hsp70 of cryptophytes, once again supporting the idea that cryptophytes recruited ancestral genes different from those of the other CASH.
DISCUSSION
It is well established that primary and higher-order endosymbioses have shaped the evolution of photosynthetic eukaryotes. When Cavalier-Smith proposed the chromalveolate hypothesis, arguing for a single secondary endosymbiosis between a red alga and a common eukaryotic ancestor of alveolates and chromists (i.e. cryptophytes, haptophytes and stramenopiles), he was presenting what he believed to be the most parsimonious explanation for all the traits shared by lineages carrying plastids with chlorophyll c (Cavalier-Smith 1999). This opened two decades of intense debate, as phylogenetics and later phylogenomics were unable to confirm or refute molecular support for the chromalveolates. Indeed, the evolutionary history of these lineages is likely more complex and tangled than Cavalier-Smith initially envisioned. Plastid-encoded genes of all chromalveolates were shown to derive from the same red alga. However, phylogenies of nuclear-encoded proteins do not recover the monophyly of the host cells [START_REF] Petersen | A "green" phosphoribulokinase in complex algae with red plastids: evidence for a single secondary endosymbiosis leading to haptophytes, cryptophytes, heterokonts, and dinoflagellates[END_REF][START_REF] Teich | Origin and distribution of Calvin cycle fructose and sedoheptulose bisphosphatases in plantae and complex algae: a single secondary origin of complex red plastids and subsequent propagation via tertiary endosymbioses[END_REF]Baurain et al. 2010;Burki et al. 2016). When this incongruence became clear, new hypotheses were proposed, including the possibility of serial higher-order endosymbioses (Bodył et al. 2009;Petersen et al. 2014) or kleptoplastidy (Bodył 2017).
When genes of the SELMA translocon were identified in the nucleomorph of Guillardia theta, the idea of a preexisting protein transport machinery, originally encoded by the symbiont and ready, with moderate tinkering, to import proteins from the ER lumen across the PPM, was immediately appealing. The observation that all chromalveolates carrying four-membrane plastids seemed to have a similar SELMA translocon at the second outermost membrane was considered strong evidence for the monophyly of the secondary red plastid. The underlying idea, as for Tic/Toc in primary plastids, is the unlikelihood that similar transport systems could have been assembled independently multiple times from the ERAD machineries of similar red algal endosymbionts. However, and contrary to Tic/Toc, the phylogenetic support for the monophyly of SELMA has never been systematically tested and relies on only two proteins out of a dozen, Cdc48 and Uba1 (Felsner et al. 2011).
In this study, we performed phylogenetic analyses of a collection of ERAD/SELMA proteins in order to better understand the evolution of the PPM translocation machinery.
Except, and perhaps only, for the sCdc48 proteins, we could not retrieve the monophyly of the putative SELMA components in CASH (Table 1). The general trend is that the nucleomorph-encoded copy of cryptophytes is not related to the homologous proteins found in other CASH phyla. Even in the case of Cdc48, depending on the chosen evolution model (LG vs site-heterogeneous), the nuclear-encoded gene shared by all CASH phyla does not always derive from the same red alga as the nucleomorph-encoded gene of cryptophytes (supplementary figures S14-S15). Even the cryptophyte nuclear copy is not necessarily related to the nucleomorph one, indicating a possible case of hidden paralogy that would disqualify this gene as support for the acknowledged view of the evolution of SELMA. For almost all other proteins, our phylogenetic analyses showed that SELMA components, when detected, have the same origin in stramenopiles, alveolates and haptophytes, whereas symbiont-specific ERAD homologs from cryptophytes seem to have been independently acquired.
The fact that SELMA components likely have a common origin in haptophytes, stramenopiles, chromerids and apicomplexans (Table 1) recalls the tree published by Burki et al. using 258 nuclear genes, which pointed out a close relationship between haptophytes and the SAR (Stramenopiles-Alveolates-Rhizaria) group (Burki et al. 2012). We also found that some nuclear-encoded proteins of the ubiquitination chain (Uba1 and Ubcs), reported as potentially derived from the red endosymbiont in cryptophytes, are in fact ERAD-like paralogs from the host. Some of these copies carry a BTS, contrary to their cytosolic counterparts, suggesting that they might be targeted to the PPC/plastid.
It is well known that, in the aftermath of endosymbioses, genes from the host can receive transit peptides and start to relocate their products to the plastid (Archibald 2015). Thus, our results suggest that, at least in cryptophytes, some proteins of the translocation machinery are not derived from the ERAD proteins of the red alga but from duplicated genes of the host. Interestingly, the nuclear genome of the cryptophyte Guillardia theta revealed that paralogs from the host participate in the control and maintenance of the plastid (Curtis et al. 2012). Therefore, it seems quite possible that ERAD-like host paralogous genes might also have played a role in the establishment of the import system in the second-outermost membrane of cryptophyte plastids by being duplicated and subfunctionalized during the early steps of host-symbiont integration. This would be comparable to some subunits of the TIC/TOC complex that were also recruited from preexisting host nuclear genes (Shi and Theg 2013).
An alternative scenario would be that the original symbiont-derived genes were lost and replaced by host homologs to compensate for the loss of function. In any case, it seems that the translocation machinery in cryptophytes has a hybrid nature, with genes derived from both the host and the red algal endosymbiont. One could argue that EGT genes, like the ones encoding SELMA proteins in haptophytes, stramenopiles, chromerids and apicomplexans, might have evolved differently from nucleomorph-encoded genes, like the SELMA components of cryptophytes. These differences might have promoted phylogenetic artifacts, explaining why we do not retrieve the monophyly of these proteins across all CASH. Nevertheless, even the nucleus-encoded SELMA components of cryptophytes appear to be different from their homologs in the other red lineages, supporting the idea that these proteins might have been independently acquired.
Evolutionary hypotheses invoking serial higher order endosymbioses to reconcile the monophyly of secondary red plastids with the polyphyly of the hosts posit that one CASH lineage may have been the recipient of the original secondary endosymbiosis with the red algal ancestor of the plastid (Stiller et al. 2014). It has been proposed that this lineage may have been the cryptophytes, because keeping the nucleomorph is seen as a transient state. The fact that they encode a large part of their SELMA genes in the nucleomorph has been interpreted as additional evidence that the secondary red plastid evolved first in cryptophytes, where the red algal ERAD was converted into SELMA to allow the import of proteins from the REG lumen (Gould et al. 2015). The already functional secondary red plastid would then have been transmitted to other CASH lineages by serial endosymbioses, followed by the transfer of SELMA genes to the nucleus of the host.
On the other hand, in a model arguing for a single red secondary endosymbiosis followed by changes and losses, like the one proposed by Cavalier-Smith for the Chromists (Cavalier-Smith 2017), cryptophytes would have diverged very early, keeping part of the original host/symbiont genome setup thanks to the nucleomorph. In both cases, trees of genes coding for the SELMA machinery should recover all CASH lineages as a monophyletic group with a red algal ancestor. However, our phylogenetic analyses do not support this configuration.
In this work, we show that the SELMA translocation machinery has been assembled differently in cryptophytes compared to haptophytes+alveolates+stramenopiles and, most importantly, that genes of the latter assemblage are never related to nucleomorph genes of cryptophytes. This opens an alternative and provocative hypothesis in which cryptophytes and the other CASH lineages built up their SELMA transporter independently, maybe even at the onset of two different secondary endosymbioses with related red algae. As mentioned above, the very fast rate of molecular evolution of plastid genes, together with insufficient taxon sampling, may lead to false phylogenetic trees in which all red secondary plastids appear monophyletic. Still, one could argue that such a hypothesis is not parsimonious with respect to the independent assembly of similar complex solutions to the same evolutionary problem, but many cases of comparable convergent evolution exist. Interestingly, in chlorarachniophytes, where the four-membrane secondary plastid derives from a green alga, another, as yet unknown, solution has been developed to import proteins across the second outermost membrane. Our main conclusion is that, if SELMA components are similar in cryptophytes and in other CASH phyla, they are probably not orthologous. This observation is crucial, because it enforces a strong revision of all evolutionary hypotheses that rely on the shared ancestry of SELMA in the CASH supergroup.
DISCUSSION

"And indeed, one of the great obstacles that the theory of evolution had to contend with was the opposition of three or four thousand naturalists devoted to searching for new species, which they sometimes named after their own surnames, and which they regarded as existing realities rather than as transitory forms on a path of continuing evolution: for that very reason they lost their traditional prestige, greatly diminishing the importance of the classifiers and of their useless collections of curiosities. The situation of humanity and of science would have been very different if, from Linnaeus onwards, living beings had been seen as problems to explain rather than as species to classify."
Alfonso L. Herrera. Nociones de Biología (1904).
The evolution of oxygenic photosynthesis in an ancestor of cyanobacteria ~2.6 bya [START_REF] Sánchez-Baracaldo | Origin of marine planktonic cyanobacteria[END_REF] represents one of the most important evolutionary events in the history of life on Earth. This event drastically changed the composition of the atmosphere, affecting the biogeochemical cycles and the biological diversity of ecosystems [START_REF] Hamilton | The role of biology in planetary evolution: Cyanobacterial primary production in low-oxygen Proterozoic oceans[END_REF][START_REF] Lalonde | Benthic perspective on Earth's oldest evidence for oxygenic photosynthesis[END_REF]. Moreover, the contribution of cyanobacteria did not end there: they are also at the origin of photosynthetic eukaryotes, through a primary endosymbiotic event between a cyanobacterium and a phagotrophic eukaryote. This event gave rise to the supergroup Archaeplastida, an assemblage composed of glaucophytes, red algae, green algae and land plants (Rodríguez-Ezpeleta et al., 2005). After primary endosymbiosis, higher order endosymbioses (i.e. a eukaryotic cell engulfing a photosynthetic eukaryote) spread the capability for oxygenic photosynthesis across the eukaryotic tree and generated a high degree of reticulation in the evolution of photosynthetic eukaryotes.
Today, the importance of endosymbiotic events in the origin of photosynthetic eukaryotes is widely accepted. Nonetheless, there are still multiple unresolved questions regarding the number and identity of the partners that originated all the diversity of plastid-bearing eukaryotes. Following the objectives established for my PhD (section II), we took advantage of the recent increase in genomic data for cyanobacteria and photosynthetic eukaryotes to address some of these open questions. We obtained results with interesting implications for our understanding of the early evolution of photosynthetic eukaryotes.
Primary plastids evolved from a unicellular early-branching cyanobacterium
When the cyanobacterial origin of chloroplasts became widely accepted, a next step was to identify the type of cyanobacterial endosymbiont that gave rise to the first photosynthetic eukaryotes. Cyanobacteria are a highly diversified prokaryotic phylum with more than 4,600 described species divided into 10 orders [START_REF] Guiry | World-wide electronic publication[END_REF], with unicellular and multicellular species living in a very broad range of environments including hot springs, hypersaline environments, deserts, oceans and freshwater habitats [START_REF] Bahl | Ancient origins determine global biogeography of hot and cold desert cyanobacteria[END_REF][START_REF] Garcia-Pichel | The phylogeny of unicellular, extremely halotolerant cyanobacteria[END_REF][START_REF] Six | Diversity and evolution of phycobilisomes in marine Synechococcus spp.: a comparative genomics study[END_REF][START_REF] Ward | Cyanobacteria in geothermal habitats[END_REF]. Some cyanobacteria are able to establish symbiotic relationships with a broad range of organisms, from protists to multicellular species (e.g. sponges and plants) [START_REF] Adams | Cyanobacterial-plant symbioses[END_REF][START_REF] Carpenter | Marine cyanobacterial symbioses[END_REF]. The diversity and long evolutionary history of cyanobacteria correlate with the low similarity in gene content among distant species: in a study performed by [START_REF] Shi | Genome evolution in cyanobacteria: The stable core and the variable shell[END_REF], only 9 to 40% of genes were shared between cyanobacteria with large genomes (e.g. Nostoc punctiforme) and small genomes (e.g. Prochlorococcus marinus), respectively, and other cyanobacterial species, which attests to the remarkable divergent evolution within cyanobacteria.
Considering the differences among cyanobacteria, one legitimate question is which of the extant cyanobacterial lineages is phylogenetically closest to the plastid ancestor. Identifying the phylogenetic affiliation of the cyanobacterial endosymbiont can shed light on interesting questions regarding the very poorly understood endosymbiotic event that originated the first photosynthetic eukaryotes. Was the plastid ancestor an early- or late-branching cyanobacterium? Unicellular or filamentous? Did early photosynthetic eukaryotes emerge in a marine or a terrestrial environment?
As described in section 2.1.6, studies on plastid origins can be divided roughly into two groups depending on whether they support an early- or a late-branching endosymbiont within the phylogeny of cyanobacteria. Our phylogenomic analyses of plastid- and nuclear-encoded genes of cyanobacterial origin (EGTs) strongly supported the early-branching cyanobacterium Gloeomargarita lithophora as the closest present-day relative of primary plastids. This species belongs to a recently described cyanobacterial clade present in freshwater microbialites and microbial mats, including high-temperature ones (Ragon et al., 2014). Additionally, G. lithophora is an interesting example of biomineralization due to its capacity to form amorphous intracellular carbonates (Couradeau et al., 2012).
Our results contradict the late-branching models suggesting that the plastid ancestor was related to filamentous heterocyst-forming cyanobacteria (Dagan et al., 2013; Deusch et al., 2008) or to unicellular diazotrophic cyanobacteria [START_REF] Falcón | Dating the cyanobacterial ancestor of the chloroplast[END_REF]. These models propose that the ability to fix nitrogen found in some cyanobacterial clades may have favored the early establishment of the primary endosymbiosis. According to them, the cyanobacterial endosymbiont would have supplied fixed nitrogen to the host in addition to the photosynthate. However, the genome of G. lithophora does not encode the proteins required for nitrogen fixation, which suggests that the primary endosymbiosis was not based on a nitrogen dependence. Indeed, contemporary photosynthetic eukaryotes are unable to fix nitrogen, which makes the nitrogen fixation hypothesis highly speculative.
Plastids in Archaeplastida have a single origin deeply nested within the phylogeny of cyanobacteria. However, as shown in section 1.2, the nuclear genomes and plastid proteomes of Archaeplastida species have a mosaic nature. The apparent overrepresentation, in the nuclear genomes of Archaeplastida, of proteins similar to those of the nitrogen-fixing filamentous cyanobacteria of sections IV and V led Dagan et al. (2013) to propose that the plastid ancestor was related to these clades. Nonetheless, these results are probably biased, considering that these cyanobacterial lineages have much larger genomes than other cyanobacteria, reaching up to 12 Mb, whereas basal-branching unicellular cyanobacteria (section I) harbor much smaller genomes of 3-5 Mb (Shih et al., 2013). For instance, the genome of G. lithophora is a small one of only ~3 Mb. Another possible explanation for the mosaic nature of Archaeplastida genomes is provided by the shopping bag hypothesis of plastid endosymbiosis (Larkum et al., 2007). This hypothesis posits that, while the final stable plastid originated from a single endosymbiont (a Gloeomargarita-like relative according to our phylogenomic analyses), diverse organisms (transient symbionts or not) may have transferred some genes to the host, resulting in a chimeric nature of the nucleus as well as of the proteins targeted to the plastid. Nonetheless, our phylogenomic analyses of EGTs showed, as expected, that the G. lithophora lineage is the largest contributor of nuclear-encoded genes of cyanobacterial origin in Archaeplastida.
Given that our phylogenetic analyses of all types of markers (plastid-encoded genes, ribosomal RNAs, EGTs) point out that plastids evolved from a cyanobacterium related to G. lithophora, it is interesting to ask why a Gloeomargarita relative, a unicellular early-branching cyanobacterium living in freshwater environments, was the successful partner in the very rare symbiogenetic event that originated the first photosynthetic eukaryotes. As Lynn Margulis noticed, "Symbiogenesis is not ever random" [START_REF] Margulis | Symbiogenesis. A new principle of evolution rediscovery of Boris Mikhaylovich Kozo-Polyansky (1890-1957)[END_REF] and, although a complete answer regarding the origin of primary plastids is not possible, some elements of discussion can be drawn from models of plastid endosymbiosis as well as from the ecological setting in which the primary endosymbiosis took place.
Firstly, the fact that the plastid ancestor was an early-branching cyanobacterium instead of a filamentous (heterocyst-forming) late-branching one is not a matter of timing of divergence within the phylogeny of cyanobacteria; that is, the primary endosymbiosis with a Gloeomargarita relative 1.6-1.9 bya cannot simply be explained by a late evolution of filamentous clades after the endosymbiotic event that originated Archaeplastida. The evolution of multicellularity in cyanobacteria occurred relatively shortly after the emergence of cyanobacteria ~2.6 bya (Sánchez-Baracaldo, 2015). Furthermore, it has been proposed that the Great Oxygenation Event that occurred ~2.4 bya may have been favored by the evolution of multicellular cyanobacterial lineages [START_REF] Schirrmeister | Cyanobacteria and the Great Oxidation Event: evidence from genes and fossils[END_REF]. Microfossils of nostocalean akinetes (i.e. resting cells formed by filamentous heterocyst-forming cyanobacteria) have been observed in stromatolites dated to around 2 billion years old [START_REF] Amard | Microfossils in 2000 Ma old cherty stromatolites of the Franceville Group, Gabon[END_REF]. Likewise, evidence of multicellularity in fossil cyanobacteria has been reported in dolomite rocks of similar age (circa 2000 Mya) from Western Australia [START_REF] Knoll | Distribution and diagenesis of microfossils from the lower proterozoic duck creek dolomite, Western Australia[END_REF]. Relaxed molecular clock phylogenetic analyses also support an early transition to multicellularity in the evolutionary history of cyanobacteria [START_REF] Schirrmeister | Cyanobacteria and the Great Oxidation Event: evidence from genes and fossils[END_REF].
In order to establish a permanent endosymbiosis between two free-living organisms, a high degree of genetic integration between the host and the endosymbiont is necessary, eventually reaching a turning point at which the endosymbiont can be vertically transmitted. However, the size of the partners can impose physical constraints that may limit the possibilities of establishing a permanent endosymbiosis. For instance, endosymbionts in foraminifera (e.g. diatoms, chlorophytes, dinoflagellates and rhodophytes) can be transmitted vertically via asexual reproduction of the host. However, because the endosymbionts are larger than the foraminiferal gametes, they need to be engulfed again after sexual reproduction, resulting in only transient endosymbioses [START_REF] Nowack | Endosymbiotic associations within protists[END_REF]. Therefore, it is possible that the small size of Gloeomargarita (less than 3 μm) may have facilitated its integration as a permanent endosymbiont on the way to becoming an organelle.
Interestingly, in the other known example of cyanobacterial primary endosymbiosis, the chromatophore of Paulinella derives from a cyanobacterial ancestor closely related to the Synechococcus/Prochlorococcus clade, a unicellular marine group of microcyanobacteria that underwent a cell size reduction during evolution, ending up with cell diameters of less than 2 μm [START_REF] Sánchez-Baracaldo | Origin of marine planktonic cyanobacteria[END_REF]. Endosymbiotic nitrogen-fixing cyanobacteria seem to follow the same trend. For example, the spheroid bodies, which are considered to fix nitrogen within the diatom Epithemia turgida, derive from a cyanobacterial endosymbiont closely related to the unicellular diazotrophic cyanobacterium Cyanothece spp.; they are obligate endosymbionts that have lost the ability to photosynthesize and are vertically transmitted during cell division [START_REF] Yamaguchi | Molecular diversity of endosymbiotic Nephroselmis (Nephroselmidophyceae) in Hatena arenicola (Katablepharidophycota)[END_REF](Prechtl et al., 2004). Interestingly, bacterivorous prasinophyte algae, which possibly retained an ancestral phagotrophic capability of Archaeplastida, have a preference for small-sized bacteria [START_REF] Mckie-Krisberg | Phagotrophy by the picoeukaryotic green alga Micromonas: Implications for Arctic Oceans[END_REF]. By contrast, filamentous cyanobacteria have larger cells surrounded by a continuous outer membrane, forming a multicellular organism that can reach several tens of micrometers [START_REF] Sánchez-Baracaldo | Origin of marine planktonic cyanobacteria[END_REF][START_REF] Schirrmeister | Cyanobacterial evolution during the Precambrian[END_REF]. For comparison, the cell size of Cyanophora paradoxa, an algal species that belongs to Glaucophyta, possibly the earliest-branching lineage within Archaeplastida (see section 1.2.2.3), is 9-15 μm in length [START_REF] Korshikov | Protistologische Beobachtungen. I Cyanophora paradoxa n. g. et sp[END_REF]. Moreover, filamentous cyanobacteria have complex regulatory mechanisms of cell communication and differentiation that may be difficult to overcome in order to create a permanent endosymbiont [START_REF] Flores | Compartmentalized function through cell differentiation in filamentous cyanobacteria[END_REF]. Taken together, it seems that small unicellular organisms may be better partners when it comes to creating permanent endosymbionts.
Another possibility to explain the success of the ancestral Gloeomargarita-like cyanobacterium as a primary endosymbiont is that it may have had a genetic/genomic pre-adaptation to living in symbiosis. [START_REF] Ohbayashi | Diversification of DnaA dependency for DNA replication in cyanobacterial evolution[END_REF] proposed that the plastid ancestor possibly had a DNA replication mechanism independent from the initiation factor DnaA, which is highly conserved in prokaryotes but absent in plastids. All cyanobacterial genomes encode the dnaA gene, but the degree of DnaA dependency for DNA replication varies among cyanobacterial clades [START_REF] Ohbayashi | Diversification of DnaA dependency for DNA replication in cyanobacterial evolution[END_REF].
Therefore, a DnaA-independent mechanism may have facilitated the early reorganization of the replication machinery in the cyanobacterial endosymbiont.
Primary photosynthetic eukaryotes appeared in a freshwater habitat
The possible evolution of DnaA-independent replication mechanisms in cyanobacteria has been suggested to be related to their natural habitat: cyanobacteria living in environments where nutrient abundances tend to be stable, such as many oceanic regions, may be more dependent on DnaA, whereas in freshwater habitats, where nutrient availability can fluctuate very rapidly, different selective pressures may favor cyanobacteria with more flexible DNA replication mechanisms [START_REF] Ohbayashi | Diversification of DnaA dependency for DNA replication in cyanobacterial evolution[END_REF].
Remarkably, Gloeomargarita has only been found in freshwater environments (Ragon et al., 2014). An interesting perspective for future research would be to study the degree of dependence of DNA replication on the DnaA protein in G. lithophora, in particular to determine whether replication can occur in DnaA-deletion mutants.
The fact that primary plastids most likely evolved in a freshwater environment constitutes an interesting change of paradigm, as it is often assumed that the origin of plastids took place in a marine habitat, probably because the ocean is usually seen as the environmental setting where all major evolutionary events occurred, such as the origin of life or the origin of animals [START_REF] Rota-Stabelli | Molecular timetrees reveal a cambrian colonization of land and a new scenario for ecdysozoan evolution[END_REF][START_REF] Sojo | The Origin of Life in Alkaline Hydrothermal Vents[END_REF]. However, oxygenic photosynthesis, both bacterial and eukaryotic, likely appeared on land. Ancestral state reconstruction of morphology and habitat showed that cyanobacterial clades evolved from a unicellular freshwater cyanobacterium [START_REF] Uyeda | A comprehensive study of cyanobacterial morphological and ecological evolutionary dynamics through deep geologic time[END_REF]. These results are also supported by the fact that basal cyanobacterial lineages, such as the thylakoid-less Gloeobacter, are restricted to terrestrial environments [START_REF] Mareš | The Primitive Thylakoid-Less Cyanobacterium Gloeobacter Is a Common Rock-Dwelling Organism[END_REF].
Additionally, stochastic mapping of trait evolution supports a freshwater origin of primary plastids and has also confirmed our results that Gloeomargarita is the present-day cyanobacterial relative closest to the plastid lineage [START_REF] Sánchez-Baracaldo | Early photosynthetic eukaryotes inhabited low-salinity habitats[END_REF].
Evolution of secondary plastid-harboring eukaryotes
Several nuclear-encoded plastid-targeted proteins of red algal origin have been detected in secondary algae with green plastids (i.e. chlorarachniophytes and euglenids) (Archibald et al., 2003; Yang et al., 2011). For instance, the majority of genes of the Calvin-Benson cycle in euglenids seem to derive from red algae instead of the green algal endosymbiont, as would otherwise be expected (Markunas & Triemer, 2016). Likewise, the tetrapyrrole biosynthetic pathway has been almost completely transferred from a red algal source in chlorarachniophytes and euglenids [START_REF] Cihláø | Evolution of the tetrapyrrole biosynthetic pathway in secondary algae: Conservation, redundancy and replacement[END_REF][START_REF] Lakey | The tetrapyrrole synthesis pathway as a model of horizontal gene transfer in euglenoids[END_REF]. On the other hand, the CASH lineage (cryptophytes, alveolates, stramenopiles and haptophytes) appears to have acquired multiple genes from green algae. The nuclear genomes of diatoms and other red plastid-harboring groups seem to encode many genes from green algae, which has been interpreted as remnants of a possible former secondary endosymbiosis with a green alga that was later replaced by a red one [START_REF] Allen | Evolution and functional diversification of fructose bisphosphate aldolase genes in photosynthetic marine diatoms[END_REF][START_REF] Frommolt | Ancient recruitment by chromists of green algal genes encoding enzymes for carotenoid biosynthesis[END_REF](Moustafa et al., 2009). Likewise, the green origin of some nuclear-encoded proteins of Chromera velia and other alveolates opens the possibility of a cryptic green algal endosymbiosis (Woehle et al., 2011). However, these results have been challenged by subsequent re-analyses of the same data (see below).
In order to gain a deeper understanding of the apparent conflict between the type of present-day secondary plastids and the apparent origin of some of the nuclear-encoded genes of algal origin, we focused on the phylogenetic analysis of secondary EGTs, that is, the subset of nuclear-encoded genes for which, through the inspection of phylogenetic trees, it is possible to trace a complete history of endosymbiotic gene transfers: first, from the cyanobacterial endosymbiont to the ancestor of Archaeplastida via primary endosymbiosis; and second, from the green or red algal endosymbiont to other eukaryotic hosts through secondary EGT. Detection of secondary EGTs is useful to identify cryptic endosymbiotic events. For instance, analyses of transcriptomic sequence data revealed that the nuclear genome of the parasitic dinoflagellate Hematodinium encodes some EGTs from the former red plastid that was secondarily lost [START_REF] Gornik | Endosymbiosis undone by stepwise elimination of the plastid in a parasitic dinoflagellate[END_REF].
Similar remnant secondary EGTs from previous endosymbioses have been observed in other dinoflagellate species such as Karenia brevis and Durinskia baltica, where the former peridinin-containing plastids have been replaced by a haptophyte-derived and a diatom-derived plastid, respectively (Burki et al., 2014; [START_REF] Waller | Plastid Complexity in Dinoflagellates: A Picture of Gains, Losses, Replacements and Revisions[END_REF]). Secondary EGTs with a complete transmission history are particularly helpful because the large phylogenetic distance between prokaryotes (cyanobacteria) and photosynthetic eukaryotes makes it relatively easy to identify tree topologies compatible with an EGT scenario and to distinguish these genes from those vertically inherited. However, it is important to consider that, although the phylogenetic tree of a gene may show a topology compatible with an EGT scenario, this does not necessarily imply that the gene was transferred in an endosymbiotic context. It is also possible that the gene was acquired by punctual HGT from red or green algae, or even that the gene had already become part of the endosymbiont genome through more ancient HGTs.
After applying a strict protocol for the identification of genes compatible with an EGT/HGT scenario in photosynthetic eukaryotes with secondary plastids, we recovered a final set of 85 genes. Regarding the genes in the CASH lineages, the vast majority (90%) had a "red" origin, in agreement with their current red algal endosymbionts. The remaining genes (10%) have ambiguous phylogenies, but some seem to have been transferred from green algae. Our results do not support previous studies that reported a great number of green genes in diatoms, suggesting a cryptic green algal endosymbiosis (Moustafa et al., 2009). Stramenopiles, including diatoms, had the same proportion of green algal-derived genes as other lineages with red plastids. The scarce number of green genes that we found in CASH lineages is in line with the results of Deschamps & Moreira (2012), who reanalyzed the set of genes allegedly acquired from a cryptic green endosymbiont in diatoms and found that the vast majority had been misidentified as having a green origin due to a combination of poor taxon sampling and the use of oversimplistic automated software to identify the phylogenetic origin of proteins. Therefore, given the small proportion of green algal-derived genes in CASH lineages in our analyses, there seems to be no compelling evidence for a former secondary endosymbiosis with a green alga. The few green algal genes found in these lineages may be the result of classical horizontal transfers.
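As an illustration of how such an EGT-compatibility screen can be automated, the following Python sketch (based on the ete3 toolkit) inspects a single-gene tree and reports whether the sequences of a query lineage are surrounded by red algal or green algal neighbours. This is only a simplified proxy for the manual, tree-by-tree inspection described above: the lineage labels, the input file and the classification rules are hypothetical and do not reproduce the actual protocol used in our analyses.

from ete3 import Tree

def classify_gene_tree(newick_file, lineage_of, query="CASH"):
    """Crude EGT-compatibility check: report which lineages surround the
    query sequences in a single-gene tree (rooted on cyanobacteria when
    cyanobacterial sequences are present)."""
    t = Tree(newick_file)
    # Root on the cyanobacterial sequences when present (they act as outgroup).
    cyano = [lf for lf in t.get_leaves()
             if lineage_of.get(lf.name) == "cyanobacteria"]
    if len(cyano) == 1:
        t.set_outgroup(cyano[0])
    elif len(cyano) > 1:
        anc = t.get_common_ancestor(cyano)
        if anc is not t:
            t.set_outgroup(anc)
    query_leaves = [lf for lf in t.get_leaves()
                    if lineage_of.get(lf.name) == query]
    if not query_leaves:
        return "no query sequences in this tree"
    clade = (t.get_common_ancestor(query_leaves)
             if len(query_leaves) > 1 else query_leaves[0])
    # Lineages intermingled with the query inside its smallest containing clade...
    neighbours = {lineage_of.get(lf.name, "unknown")
                  for lf in clade.get_leaves()} - {query}
    if not neighbours:
        # ...or, if the query is monophyletic, the composition of its sister group.
        neighbours = {lineage_of.get(lf.name, "unknown")
                      for sis in clade.get_sisters() for lf in sis.get_leaves()}
    if neighbours and neighbours <= {"red_algae"}:
        return "compatible with a red algal origin"
    if neighbours and neighbours <= {"green_algae"}:
        return "compatible with a green algal origin"
    return "ambiguous"

# Hypothetical usage, with leaf names mapped to coarse lineage labels:
# lineages = {"Phaeodactylum_gene42": "CASH", "Porphyra_gene42": "red_algae"}
# print(classify_gene_tree("gene42.nw", lineages))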
By contrast, regarding the origin of secondary EGTs in chlorarachniophytes and euglenids, we expected a vast majority of genes of green algal origin as a result of endosymbiotic gene transfer from their green endosymbionts (similar to the red algal genes in the CASH lineages). However, we found 58% and 45% of red algal genes in chlorarachniophytes and euglenids, respectively. This represents a striking difference compared to the ~10% of green algal genes in secondary red lineages. There are two ways to explain the large amount of red algal genes in both secondary green lineages: either a large number of genes were transferred from red algae via HGT, creating a high degree of chimerism in the plastid proteomes of chlorarachniophytes and euglenids, or these lineages had a former endosymbiosis with a red alga. Our results do not particularly favor either evolutionary scenario, but they show that chlorarachniophytes and euglenids have independently recruited a large amount of plastid-targeted red algal genes that participate in varied and important functions such as plastid translation (e.g. ribosomal proteins and aminoacyl-tRNA ligases), biosynthetic pathways (e.g. chlorophyll and fatty acid biosynthesis) and photosynthesis-related functions (e.g. light harvesting and photosystem assembly).
The way to interpret the red algal contribution depends on which process is considered to have predominated in the evolution of chlorarachniophytes and euglenids: HGT or EGT. According to [START_REF] Martin | Too Much Eukaryote LGT[END_REF], HGT from prokaryotes to eukaryotes, as well as among eukaryotes, has been overestimated. He posits that although HGT in eukaryotes can occur, it is mostly a rare event, so that the vast majority of acquisitions of foreign genes in eukaryotes would have occurred in endosymbiotic contexts. However, these claims are not widely accepted and the relevance of HGT in eukaryotic evolution remains a controversial debate [START_REF] Husnik | Functional horizontal gene transfer from bacteria to eukaryotes[END_REF][START_REF] Keeling | Functional and ecological impacts of horizontal gene transfer in eukaryotes[END_REF][START_REF] Koutsovoulos | No evidence for extensive horizontal gene transfer in the genome of the tardigrade Hypsibius dujardini[END_REF][START_REF] Martin | Too Much Eukaryote LGT[END_REF]. Nonetheless, if the red algal genes in euglenids and chlorarachniophytes were acquired horizontally and there had been massive acquisition of foreign genes during their evolution, perhaps due to a mixotrophic lifestyle [START_REF] Calderon-Saenz | Morphology, biology, and systematics of Cryptochlora perforans (Chlorarachniophyta), a phagotrophic marine alga[END_REF], one would expect some genes to have dissimilar origins: while some species would have kept the original green algal-derived gene, others could have replaced it later with a red algal one. However, all the red algal genes that we identified are shared by all species within the euglenids or the chlorarachniophytes, implying that, if these genes were transferred via HGT, the transfers were ancient and already present in the last common ancestor of each group, and that HGT subsequently stopped in both groups, for unknown reasons, since we do not observe any recent case of gene acquisition. This seems clear in chlorarachniophytes, for which a rich taxon sampling is already available, but requires confirmation in euglenids, for which only two transcriptome sequences are available.
An alternative possibility to explain the large amount of red algal genes in the secondary green lineages is that, prior to the secondary endosymbiosis with green algae, these lineages might have had red plastids that were subsequently replaced by green ones. Although the cryptic plastid hypothesis may be suitable for both chlorarachniophytes and euglenids, there could be two different scenarios: 1) chlorarachniophytes and euglenids both had a former red plastid; 2) only one of these lineages established a red secondary endosymbiosis. Although the red algal genes in chlorarachniophytes and euglenids could have been acquired in different ways, under the cryptic red plastid hypothesis a similar reasoning can be used to explain the large impact of plastid-targeted red algal-derived genes in these lineages.
Chlorarachniophytes belong to the phylum Cercozoa within the supergroup Rhizaria, a highly diverse group that comprises photosynthetic organisms such as chlorarachniophytes and the primary plastid-bearing Paulinella as well as heterotrophic lineages such as radiolarians, foraminiferans and many cercozoans [START_REF] Cavalier-Smith | Phylogeny and classification of phylum Cercozoa (Protozoa)[END_REF]. Importantly, Rhizaria is part of a monophyletic supergroup together with Stramenopiles and Alveolata forming the SAR clade [START_REF] Burki | Phylogenomics reshuffles the eukaryotic supergroups[END_REF]. Interestingly, stramenopiles and alveolates both have photosynthetic representatives endowed with secondary red plastids.
Was the rhizarian ancestor photosynthetic?
Despite the great diversity of Rhizaria and its important contribution to the planktonic biomass, rhizarian protists received little attention in genomic and transcriptomic analyses until recently (Sierra et al., 2016). Only three complete nuclear genomes have been sequenced so far: the chlorarachniophyte Bigelowiella natans (Curtis et al., 2012), the foraminiferan Reticulomyxa filosa [START_REF] Glöckner | The genome of the foraminiferan reticulomyxa filosa[END_REF] and the plant pathogen Plasmodiophora brassicae [START_REF] Rolfe | The compact genome of the plant pathogen Plasmodiophora brassicae is adapted to intracellular interactions with host Brassica spp[END_REF]. The scarcity of genomic data hinders research into a putative photosynthetic past of Rhizaria. Nonetheless, the genome of Reticulomyxa filosa does not encode photosynthesis-related genes, which was interpreted as evidence for the lack of photosynthetic capability in the ancestor of Rhizaria [START_REF] Glöckner | The genome of the foraminiferan reticulomyxa filosa[END_REF]. Similarly, the genome of the rhizarian pathogen Plasmodiophora, which has been reduced due to its parasitic lifestyle, does not show any hint of a photosynthetic past [START_REF] Rolfe | The compact genome of the plant pathogen Plasmodiophora brassicae is adapted to intracellular interactions with host Brassica spp[END_REF]. These results, together with the very large number of independent plastid losses necessary to explain the vast diversity of heterotrophic lineages within Rhizaria, argue against a photosynthetic past of Rhizaria. Nevertheless, there is still a possibility that a red algal plastid was ancestral to SAR and was kept until shortly before the secondary endosymbiosis with a green alga (Fig. 19A).
Although the branching order of rhizarian clades remains unclear (the deeper nodes being more difficult to resolve) [START_REF] Howe | Novel Cultured Protists Identify Deep-branching Environmental DNA Clades of Cercozoa: New Genera Tremula, Micrometopion, Minimassisteria, Nudifila, Peregrinia[END_REF][START_REF] Pawlowski | Untangling the phylogeny of amoeboid protists[END_REF](Sierra et al., 2016), another possibility under the cryptic endosymbiosis hypothesis is that the endosymbiosis took place after the diversification of Rhizaria, between a phagotrophic cercozoan and a red plastid-containing endosymbiont such as a photosynthetic alveolate or an ochrophyte (Fig. 19B).
Molecular clocks, chemical biomarkers and fossil evidence support a diversification of the SAR clade during the Neoproterozoic (1000-541 Mya), with red plastid-harboring lineages within stramenopiles and alveolates appearing in this era [START_REF] Brocks | The rise of algae in Cryogenian oceans and the emergence of animals[END_REF][START_REF] Butterfield | A vaucheriacean alga from the middle Neoproterozoic of Spitsbergen: implications for the evolution of Proterozoic eukaryotes and the Cambrian explosion[END_REF][START_REF] Meng | The oldest known dinoflagellates: Morphological and molecular evidence from Mesoproterozoic rocks at Yongji, Shanxi Province[END_REF](Parfrey et al., 2011), which is temporally congruent with a "red" period also within a putative cercozoan lineage, followed by the loss of the red plastid and a more recent endosymbiotic event between the chlorarachniophyte ancestor and a green alga some 100-150 Mya [START_REF] Delaye | How Really Ancient Is Paulinella Chromatophora ?[END_REF]. Importantly, if the red algal genes come from this putative alveolate or ochrophyte endosymbiont, phylogenetic trees of these genes would be expected to show chlorarachniophytes nested within one of these two groups. However, given the poor resolution frequently found in single-gene trees, it can be difficult to distinguish between this result and the tree of a vertically inherited gene, due to the phylogenetic proximity of the different lineages within the SAR clade.
Thus, if the putative red plastid was not ancestral in Rhizaria but particular to a cercozoan lineage that gave rise to the chlorarachniophytes, this may explain the lack of photosynthesis-related genes in other rhizarian lineages.
Phylogenetic analysis of the 18S rDNA shows that Cercozoa is a highly diversified group with deep-branching clades that have been widely overlooked [START_REF] Howe | Novel Cultured Protists Identify Deep-branching Environmental DNA Clades of Cercozoa: New Genera Tremula, Micrometopion, Minimassisteria, Nudifila, Peregrinia[END_REF].
To test the hypothesis of a cryptic red algal endosymbiosis, it would be necessary to gain a better understanding of cercozoan evolution and to generate genomic data for deep-branching cercozoan species, which could shed some light on the origin of the red algal genes in chlorarachniophytes. Nonetheless, it is possible that all plastid-derived genes of the putative photosynthetic rhizarian/cercozoan ancestor have been lost in present-day heterotrophic lineages. Interestingly, massive loss of symbiont-derived genes seems to be very common in secondarily heterotrophic organisms [START_REF] Huang | Phylogenomic evidence supports past endosymbiosis, intracellular and horizontal gene transfer in Cryptosporidium parvum[END_REF][START_REF] Smith | A Plastid without a Genome: Evidence from the Nonphotosynthetic Green Algal Genus Polytomella[END_REF].
Although the cryptic endosymbiosis hypothesis is difficult to test in chlorarachniophytes and euglenids, we know that this type of plastid replacement can occur in nature. The dinoflagellate Lepidodinium chlorophorum represents an interesting case of a red plastid that was replaced by a green one. This species currently possesses a green algal plastid, but its genome harbors a mix of red and green algal genes that were recruited from the former red plastid and from the present-day green algal endosymbiont, respectively [START_REF] Sastre | A phylogenetic mosaic plastid proteome and unusual plastid-targeting signals in the green-colored dinoflagellate Lepidodinium chlorophorum[END_REF]. Instead of activating new genes and retargeting new plastid proteins from the green endosymbiont, it is possible that, in a context of serial endosymbiotic events, the favored evolutionary strategy is the recycling of remaining EGTs. Thus, the red algal genes in Lepidodinium, akin to the red genes in chlorarachniophytes and euglenids, might have facilitated the acquisition of the new green plastid.
Evolutionary history of the SELMA translocon
In primary plastids, the monophyly of the TIC/TOC complex, the multiprotein complex involved in the translocation of proteins across their double membrane, strongly supports a single origin of this protein import system (Shi & Theg, 2013). Similarly, the apparent monophyly of SELMA (i.e. the protein import machinery used to cross the second outermost membrane of secondary red plastids with four membranes, see section 1.3.2.1) would suggest that there was a single red algal secondary endosymbiotic event. In this scenario (the Chromalveolate hypothesis), SELMA would have evolved vertically in the different secondary red lineages. In the alternative scenario involving higher order endosymbioses spreading secondary red plastids across different lineages, SELMA would have been transferred concomitantly with the plastids, as was the case for the TIC/TOC complex during secondary endosymbioses (Hirakawa et al., 2012; [START_REF] Sheiner | Protein sorting in complex plastids[END_REF]). Interestingly, although the protein import system used in chlorarachniophytes to cross the second outermost membrane remains unknown, it does not seem to derive from the ERAD machinery of their green algal endosymbiont, suggesting that the SELMA complex is an evolutionary innovation of red algal-derived plastids (Hirakawa et al., 2012).
Several putative SELMA proteins have been detected using bioinformatic analyses (Sommer et al., 2007; Stork et al., 2012); however, very few phylogenetic reconstructions of these proteins have been published. The monophyly of this system has mostly been based on phylogenetic analyses of the protein Cdc48, an ATPase considered to translocate proteins in a ubiquitin-dependent manner [START_REF] Bolte | Making new out of old: Recycling and modification of an ancient protein translocation system during eukaryotic evolution[END_REF](Felsner et al., 2011; Petersen et al., 2014). In contrast, phylogenetic trees of derlin proteins have provided unclear results (Petersen et al., 2014). Consequently, strict phylogenetic evidence for the monophyly of SELMA, a system on which very strong evolutionary conclusions are regularly taken for granted, is still missing.
To study the origin and evolution of SELMA, we performed phylogenetic analyses of the multiple SELMA components. Our trees show that the translocation machinery in cryptophytes (in which many SELMA proteins remain encoded in the nucleomorph) differs considerably from that established in the other red lineages (i.e. alveolates, stramenopiles and haptophytes) due to a combination of differentially recruited ERAD paralogs from the red algal endosymbiont and a possible contribution of host genes. The derlin proteins provide a good example.
The ERAD system commonly has two paralogs: derlin1-1 and derlin1-2. Following the nomenclature proposed by Cavalier-Smith (2017) to avoid confusion, derlin1-1 and derlin1-2 will hereafter be called derlin A and derlin B, respectively. This author recently showed that cryptophytes recruited the nucleomorph-encoded derlin A, while haptophytes, stramenopiles and alveolates use the paralogous derlin B, which possibly experienced an early duplication in the chromist ancestor generating two paralogs, derlin B1 and derlin B2, both of which would be targeted to the periplastidial compartment. Likewise, our results also support a differential recruitment of derlin paralogs into the SELMA complex. Interestingly, although Cavalier-Smith proposed that alveolates lost one subparalogous derlin B protein, we identified both derlin B1 and derlin B2 in alveolates, where they had escaped previous detection.
As previously reported, the phylogenetic analysis of Cdc48 shows that this protein derives from the ERAD machinery of the red alga at the origin of secondary red plastids.
Nonetheless, this protein seems to have a more complex history than usually assumed, having undergone lineage-specific duplications and losses that could lead to a hidden paralogy scenario. Cryptophytes, haptophytes and stramenopiles have two copies of symbiont-specific Cdc48, while alveolates only retained one. In cryptophytes, one copy is still encoded in the nucleomorph whereas the other is encoded in the nuclear genome. If the two copies present in cryptophytes, haptophytes and stramenopiles had resulted from an early duplication, similar to derlin B1 and derlin B2, we would expect two differentiated paralogs. However, our results showed that several Cdc48 proteins seem to correspond to late-branching paralogs, suggesting that the two copies present in haptophytes and stramenopiles may have arisen by duplication after the diversification of the chromist ancestor, probably from the nuclear-encoded cdc48 gene, while the nucleomorph copy was lost.
In the case of the proteins that participate in the ubiquitination pathway of SELMA (necessary to import proteins into the periplastidial compartment), we found a similar pattern: cryptophytes use different ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2) and ubiquitin ligases (E3) compared to those in haptophytes, alveolates and stramenopiles. Interestingly, some of these proteins remain encoded in the nucleomorph. It is noteworthy that some CASH lineages may have recruited some genes from the host: the ubiquitin-activating enzyme in cryptophytes is likely a paralogous gene from the host, and at least one E3 ubiquitin ligase that has been localized in the apicoplast as part of the ubiquitination machinery (Agrawal et al., 2013) derives from the host in apicomplexans, suggesting a hybrid nature of the protein import machinery. Interestingly, the TIC/TOC complex in primary plastids was also built from a set of cyanobacterial and eukaryotic proteins (Shi & Theg, 2013).
As pointed out previously by Cavalier-Smith (2017), the fact that the symbiont-specific derlin in alveolates, haptophytes and stramenopiles evolved from the derlin B paralog whereas cryptophytes use the nucleomorph-encoded derlin A paralog is difficult to reconcile with the hypothesis of multiple putative tertiary and quaternary rounds of endosymbiosis in which a cryptophyte symbiont would be the donor of the SELMA translocon to one or more heterotrophic hosts in alveolates, stramenopiles and haptophytes. Likewise, our phylogenetic analyses of derlin proteins, as well as of almost all other SELMA components, show that the translocation machinery in cryptophytes does not use the same set of proteins as these other lineages. Therefore, if we assume an undetermined number of serial endosymbioses initiated by a cryptophyte endosymbiont participating in a tertiary endosymbiotic event, it is necessary to postulate that, at the time of that putative endosymbiosis, the SELMA translocon in cryptophytes was more complex and had more symbiont-derived ERAD-like proteins than the SELMA of present-day cryptophytes. Moreover, during the tertiary endosymbiosis, the new would-be red lineage would have recruited a subset of SELMA components from the initial cryptophyte pool of SELMA genes, a subset different from the one retained in contemporary cryptophytes. This would imply that SELMA experienced a secondary simplification in cryptophytes during evolution. Such an evolutionary scenario, in which differential loss and retention of proteins took place during tertiary symbiogenetic events, seems very unparsimonious.
Interestingly, there is an example of endosymbiotic transfer of the SELMA translocon in dinoflagellates of the family Kareniaceae. It is well established that members of this family replaced the ancestral peridinin-containing plastid with a haptophyte-derived one via tertiary endosymbiosis [START_REF] Gabrielsen | Genome evolution of a tertiary dinoflagellate plastid[END_REF]. Recent studies showed that these dinoflagellates encode all SELMA components with bipartite targeting sequences of clear haptophyte origin (Kořený et al., unpublished, cited by Waller & Kořený, 2017). This is a remarkable case showing that SELMA can be endosymbiotically transferred but, as mentioned before, it is not completely surprising considering that the TIC/TOC import machinery has been transferred in every secondary and tertiary plastid endosymbiosis. In this example, however, the Kareniaceae SELMA proteins are clearly related to those of haptophytes, as expected, which does not seem to be the case in haptophytes, stramenopiles and alveolates if their red plastids were actually derived from a cryptophyte endosymbiont.
Hence, given the distinct origins of SELMA components in cryptophytes and the other red lineages, the endosymbiotic transfer of SELMA from a putative cryptophyte endosymbiont seems unlikely. Another possibility is that a more complex SELMA system, probably composed of more proteins, was present in the last common ancestor of all chlorophyll c-containing algae (Fig. 20) and was then simplified during evolution (with the loss of paralogous proteins in some cases). SELMA may have been restructured, perhaps with retargeting of host-derived proteins in some lineages. This model invokes a differential construction of the system from the same initial set of proteins, which is highly dubious (as is the cryptophyte endosymbiont hypothesis).
Nonetheless, the loss of the nucleomorph and of the SELMA-related nucleomorph-encoded genes that did not make their way into the nuclear genome may have exerted strong selective pressures to restructure the system from the remaining set of genes.
Although our results do not particularly favor any scenario, we showed that SELMA has a much more complex evolutionary history than initially thought and the system needs to be thoroughly studied in future research.
Perspectives
The results of my PhD have raised interesting questions regarding the evolution of photosynthetic eukaryotes that would be worth considering in future research:
• DNA replication mechanism of Gloeomargarita lithophora. It has been proposed that a DNA replication mechanism independent from the replication initiator protein DnaA might have eased the reorganization of the replication machinery, thereby facilitating primary endosymbiosis [START_REF] Ohbayashi | Diversification of DnaA dependency for DNA replication in cyanobacterial evolution[END_REF]. Some cyanobacteria can replicate their DNA without this initiation factor, showing that the degree of dependency strongly varies among cyanobacterial clades. However, it is not known whether DNA replication can occur without the DnaA protein in G. lithophora. Hence, a next step in the study of primary endosymbiosis would be to evaluate the relevance of the DnaA gene in G. lithophora, the free-living cyanobacterium closest to primary plastids.
• Branching order within Archaeplastida. Although our phylogenomic analyses of plastid and EGT genes support Glaucophyta as the earliest-branching lineage of Archaeplastida, and these results are also supported by the ancestral retention of carboxysomes and the peptidoglycan wall of cyanelles [START_REF] Jackson | The Glaucophyta: The blue-green plants in a nutshell[END_REF], the branching order is still a matter of debate (Mackiewicz & Gagat, 2014). To address this problem, one possibility would be to sequence more glaucophyte genomes, considering that only the plastid genome of Cyanophora paradoxa has been completely sequenced so far [START_REF] Stirewalt | Nucleotide sequence of the cyanelle genome from Cyanophora paradoxa[END_REF], and to perform phylogenomic analyses of plastid genes with a rich taxon sampling, including basal clades from the different Archaeplastida lineages, such as the basal green alga Verdigellas peltata [START_REF] Leliaert | Chloroplast phylogenomic analyses reveal the deepestbranching lineage of the Chlorophyta, Palmophyllophyceae class[END_REF].
• Ancestral gene repertoire of primary plastids. It would be interesting to reconstruct the probable ancestral gene repertoire of primary plastids through a comparative analysis of plastid and EGT genes among Archaeplastida lineages, in order to understand the process of primary endosymbiosis and the genome reorganization undergone by the cyanobacterial endosymbiont: What happened during the genome reduction of the cyanobacterial endosymbiont? Which genes were preferentially retained or replaced? How were the plastid-located pathways affected during primary endosymbiosis? Is there any functional pattern distinguishing the genes that were transferred to the nucleus from the genes that continue to be encoded in the plastid?
• Genomic study of basal Cercozoa. Our results showed that chlorarachniophytes and euglenids encode several genes of red algal origin even though both photosynthetic lineages carry secondary green plastids. In order to study the origin of these red algal genes, it is necessary to gain a better understanding of the genomic evolution and gene content of heterotrophic lineages closely related to chlorarachniophytes and euglenids. This may shed some light on the role of eukaryovory in the mosaicism observed in the nuclear genomes and plastid proteomes of these green lineages. For instance, the deep-branching cercozoan Micrometopion nutans may be a suitable candidate species for further study. It is a eukaryote-eating flagellate deeply nested within the phylogeny of Cercozoa that seems to have diverged earlier than chlorarachniophytes [START_REF] Howe | Novel Cultured Protists Identify Deep-branching Environmental DNA Clades of Cercozoa: New Genera Tremula, Micrometopion, Minimassisteria, Nudifila, Peregrinia[END_REF].
• Mechanistic model of protein import in secondary red plastids. Although a model of protein import into the periplastidial compartment (PPC) mediated by the SELMA translocon has been proposed (mostly developed in diatoms) and widely accepted [START_REF] Bolte | Making new out of old: Recycling and modification of an ancient protein translocation system during eukaryotic evolution[END_REF](Hempel et al., 2009, 2010, Stork et al., 2012, 2013), we have shown that the evolutionary history of SELMA is more complex than generally assumed and that the system needs to be studied carefully. An alternative model has recently been proposed by Cavalier-Smith (2017), in which SELMA may be localized in the periplastid reticulum within the PPC instead of the second outermost membrane. This model would change previous assumptions and needs to be considered in future studies. Thus, a better microscopic description of the PPC of secondary red plastids, as well as a refined mechanistic model of protein import, is needed. It is necessary to understand, from a global perspective, the similarities and differences of the PPC among secondary red plastid-harboring lineages, the structure of the SELMA machinery and the origin of its components.
• Improving the phylogenetic tree of eukaryotes. To study the evolution of secondary red plastids, it is necessary to have a very good understanding of the phylogenetic tree of eukaryotes. Evolutionary hypotheses regarding the evolution of secondary red plastids are heavily influenced by the resolution of the eukaryotic tree, especially the phylogenetic relationships among plastid-harboring lineages. Whether all secondary red plastids arose by vertical descent from a chromist ancestor, as proposed by the chromalveolate hypothesis, or by serial endosymbioses, as some models argue (Petersen et al., 2014; Stiller et al., 2014), depends on the phylogenetic relationships among photosynthetic groups. Likewise, the number of probable plastid losses and the timing of morphological evolution of certain characteristics depend on the branching order suggested by the phylogenetic trees. Hence, to reconcile the evolution of secondary red plastids with the evolution of their hosts, it is mandatory to improve the resolution of the phylogenetic tree of eukaryotes.
CONCLUSIONS

"The knowledge of the laws of nature, whether we can trace them in the alternate ebb and flow of the ocean, in the measured path of comets, or in the mutual attractions of multiple stars, alike increases our sense of the calm of nature, whilst the chimera so long cherished by the human mind in its early and intuitive contemplations, the belief in a 'discord of the elements', seems gradually to vanish in proportion as science extends her empire"
Alexander von Humboldt. Cosmos: a sketch of a physical description of the universe (1849)
My PhD project centered on the study of the early evolution of photosynthetic eukaryotes, from the primary endosymbiosis that gave rise to the first group of plastid-bearing eukaryotes up to the acquisition of secondary plastids. Together, these events originated all the extant diversity of eukaryotic organisms capable of photosynthesis. As shown throughout this work, several questions concerning the origin and evolution of photosynthetic eukaryotes remain open. My PhD project addressed some of these questions. The major outcomes of our work are listed below:
1. Primary plastids are closely related to the unicellular early-branching Gloeomargarita clade. Our phylogenomic analyses of plastid and nuclear-encoded genes of cyanobacterial origin in Archaeplastida lineages clearly showed that primary plastids have a deep origin within the phylogeny of cyanobacteria. The plastid ancestor originated from a lineage closely related to the recently described cyanobacterium Gloeomargarita lithophora. Our results contradict other studies in which primary plastids would have originated from a late-branching filamentous nitrogen-fixing cyanobacterium.
2. Primary plastids likely arose in a freshwater habitat. Our results coupled with the ecological distribution of Gloeomargarita strongly support that primary photosynthetic eukaryotes appeared in a freshwater habitat.
3. The plastid ancestor was not capable of fixing atmospheric nitrogen. Primary endosymbiosis was not driven by the dependency of the host on fixed nitrogen supplied by the cyanobacterial endosymbiont, as shown by the lack of genes necessary to fix nitrogen in G. lithophora.
4. Plastid proteomes of chlorarachniophytes and euglenids have recruited several proteins from red algae. We found that approximately half of the nuclear-encoded plastid-targeted proteins in chlorarachniophytes and euglenids were acquired from a red algal source. These results leave open the possibility that these green lineages may have carried a red plastid that was replaced by a green one.
5. No evidence for a cryptic green plastid endosymbiosis in chromalveolates. Our results did not support previous claims that chromalveolates encode a large number of genes that would have been transferred from green algae. We argue that the low percentage (10%) of plastid-targeted proteins with a green algal ancestry may be better explained by horizontal gene transfer events.
6. The SELMA translocon has a more complex evolutionary history than previously thought. Our phylogenetic analyses of SELMA components showed that most of these proteins are not monophyletic. The SELMA translocon in cryptophytes is formed from a different set of red algal-derived proteins than that in the other red lineages (i.e. haptophytes, alveolates and stramenopiles).
7. SELMA seemingly has a hybrid nature. Besides recycling genes from the ERAD machinery of the red algal endosymbiont, some red lineages appear to have also recruited host genes to become part of the SELMA translocation machinery.
Selection of Phylogenetic Markers
In addition to the information provided in the main text, our procedure to exclude potentially problematic markers, based on the manual inspection of the phylogenetic trees reconstructed with each individual marker, was as follows. We first identified trees containing species with two or more distantly related sequences, which indicated the presence of paralogs. If these were not issued from recent duplications affecting only single species or small groups of species within a lineage, we excluded the marker. We also detected ancient paralogs that were duplicated before the diversification of cyanobacteria and plastids. If the two paralogs were well separated in our trees, we kept both as two different markers. A second round of alignment/trimming/inference was then processed with these subsets of similar proteins. Some alignments were then improved by removing duplicates and very short partial sequences. To speed up subsequent calculations, several outgroup sequences were also removed from all alignments (see Table S2). The final sequence datasets were realigned, trimmed using TrimAL v1.4.rev15 with the "gappy-out" parameter {Capella-Gutierrez, 2009 #4702} and used to infer ML trees using IQ-TREE (see Table S3).
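As an illustration of the first-pass paralog screen, the minimal Python sketch below flags markers in which a single species contributes two or more sequences, the signal that prompted manual tree inspection in our procedure. The directory layout ("markers/*.faa") and the "Species|sequence_id" header convention are assumptions for illustration, not the actual file organization used here.

# Sketch: flag markers in which a species contributes two or more sequences,
# a first-pass signal of possible paralogy (assumed header format "Species|seq_id").
from collections import Counter
from pathlib import Path

def species_of(header: str) -> str:
    # Assumed convention: the species label precedes the first '|'.
    return header.lstrip(">").split("|")[0].strip()

def flag_possible_paralogs(fasta_path: Path) -> list:
    counts = Counter()
    with fasta_path.open() as fh:
        for line in fh:
            if line.startswith(">"):
                counts[species_of(line)] += 1
    return [sp for sp, n in counts.items() if n > 1]

if __name__ == "__main__":
    for fasta in sorted(Path("markers").glob("*.faa")):  # hypothetical directory
        dups = flag_possible_paralogs(fasta)
        if dups:
            print(f"{fasta.name}: inspect tree manually, multiple copies in {dups}")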
Gene functional annotation
We annotated the functions of the 85 proteins in the final selection (see above) through the EggNOG 4.5 {Huerta-Cepas, 2016 #4852} web portal (http://eggnogdb.embl.de). For each protein dataset we used as queries the ortholog sequences of Guillardia theta and Bigelowiella natans.
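As a small illustration of this step, the Python sketch below pulls the Guillardia theta and Bigelowiella natans sequences out of each marker file so that they can be submitted as queries to the EggNOG web portal; file names, directory layout and header conventions are hypothetical.

# Sketch: extract the assumed query taxa from each marker FASTA so their
# sequences can be pasted into the EggNOG 4.5 web portal as queries.
from pathlib import Path

QUERY_TAXA = ("Guillardia theta", "Bigelowiella natans")

def read_fasta(path: Path):
    header, seq = None, []
    for line in path.read_text().splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        else:
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

for fasta in sorted(Path("markers").glob("*.faa")):        # hypothetical layout
    for header, seq in read_fasta(fasta):
        if any(taxon in header.replace("_", " ") for taxon in QUERY_TAXA):
            out = Path("eggnog_queries") / f"{fasta.stem}_{header.split()[0]}.faa"
            out.parent.mkdir(exist_ok=True)
            out.write_text(f">{header}\n{seq}\n")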
Table S3. Phylogenetic affinity of genes in photosynthetic eukaryotes inferred from single-gene phylogenies (see Figs. S1-S85)
Colors and letters (R,G,U,A) indicate the position of secondary photosynthetic eukaryotes in individual gene trees: close to red algae (R), green algae (G), unresolved (U) or absent (A).
Tree Outgroup Cryptophyta Haptophyta Alveolata Stramenopiles Euglenida Chlorarach. RG_002 Cyanobacteria R R A R A R RG_003 Cyanobacteria R A R R R R RG_005 Cyanobacteria U U U U U U RG_006 Cyanobacteria R R R R G R RG_007 Cyanobacteria R R R R G R RG_008 Cyanobacteria R A R R G G RG_010 Cyanobacteria R R R R G R RG_013 Cyanobacteria R R R R R R RG_014 Cyanobacteria R R R R U R RG_015 Cyanobacteria R R R R R G RG_016 Cyanobacteria R R R R U R RG_017 Cyanobacteria R R R R R G RG_018 Cyanobacteria R R R R A R RG_019 Cyanobacteria R R U R G R RG_020 Cyanobacteria R R R R G G RG_022 Cyanobacteria R R A R A R RG_024 Cyanobacteria R R A R G A RG_025 Cyanobacteria R R R R U G RG_026 Cyanobacteria R R A R A G RG_029 Cyanobacteria R R R R A G RG_030 Cyanobacteria R R R R G A RG_031 Cyanobacteria R R A R R R RG_032 Cyanobacteria R R R R G G RG_033 Cyanobacteria R R R R R G RG_034 Cyanobacteria R R U R R R RG_035 Cyanobacteria R R A R A U RG_036 Bacteria R R A R A R RG_037 Cyanobacteria R R A R A R RG_038 Cyanobacteria U R A R A R RG_039 Cyanobacteria R R R R G G RG_040 Cyanobacteria R R R R R G RG_041 Cyanobacteria R R R R G R RG_042 Cyanobacteria U R R R G R RG_047 Cyanobacteria R R A R R R RG_048 Cyanobacteria G G G G G G RG_049 Cyanobacteria R R R R U R RG_050 Cyanobacteria R R A R G R RG_053 Bacteria R R A R R G RG_057 Cyanobacteria R R R R G R RG_058 Cyanobacteria R R R R R R RG_059 Cyanobacteria U R R R G R RG_060 Cyanobacteria R R R R A R RG_061 Cyanobacteria R R R R A RG_062 Cyanobacteria U U A U R U RG_064 Cyanobacteria U U U U A U RG_066 Cyanobacteria R G G G G G RG_067 Cyanobacteria R R R R G G RG_070 Bacteria R R A R R R RG_071 Cyanobacteria A R R R A R RG_072 Bacteria R R R R A R RG_073 Bacteria R R R R R G RG_074 Cyanobacteria G G G G G A RG_075 Cyanobacteria R R R R A G RG_076 Cyanobacteria G G G G G G RG_079 Bacteria R R R R A R RG_081 Cyanobacteria R R R R R R RG_082 Cyanobacteria G R A R R R RG_083 Cyanobacteria R R R R A R RG_084 Cyanobacteria R U A U A U RG_085 Cyanobacteria R R R R R G RG_086 Cyanobacteria R R R R A G RG_087 Cyanobacteria R R R R G G RG_088 Cyanobacteria R R R R G G RG_089 Cyanobacteria R R R R R G RG_090 Cyanobacteria R R R R R G RG_092 Cyanobacteria R R R R U G RG_093 Cyanobacteria R R R R R R RG_095 Cyanobacteria R R A R G G RG_096 Cyanobacteria R R U R G U RG_097 Cyanobacteria R R A R A R RG_101 Cyanobacteria R R A R U R RG_102 Cyanobacteria R R R R G R RG_104 Cyanobacteria A R R R G R RG_105 Cyanobacteria R R U R R G RG_106 Cyanobacteria R R A R A G RG_107 Bacteria R R R R R R RG_108 Cyanobacteria R R R R G R RG_110 Cyanobacteria R R R R U R RG_111 Cyanobacteria R R R R G R RG_112 Bacteria R R A R A R RG_113 Bacteria R R R R R G RG_114 Bacteria R R R R G G RG_115 Bacteria A R R R G A RG_116 Cyanobacteria G G A G G G
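For readers wishing to summarize Table S3, the following Python sketch tallies, for each secondary lineage, how many single-gene trees yield an R, G, U or A call and reports the red fraction among resolved trees. It assumes the table has been exported as whitespace-separated text (a hypothetical "table_s3.txt") and simply skips the header and incomplete rows.

# Sketch: tally, per secondary lineage, how many single-gene trees place it
# close to red algae (R), green algae (G), unresolved (U) or absent (A).
from collections import Counter, defaultdict

LINEAGES = ["Cryptophyta", "Haptophyta", "Alveolata",
            "Stramenopiles", "Euglenida", "Chlorarachniophyta"]

tallies = defaultdict(Counter)
with open("table_s3.txt") as fh:              # hypothetical export of Table S3
    for line in fh:
        fields = line.split()
        if len(fields) != 2 + len(LINEAGES) or not fields[0].startswith("RG_"):
            continue                           # skip header or malformed rows
        for lineage, call in zip(LINEAGES, fields[2:]):
            tallies[lineage][call] += 1

for lineage in LINEAGES:
    counts = tallies[lineage]
    resolved = counts["R"] + counts["G"]
    red_frac = counts["R"] / resolved if resolved else float("nan")
    print(f"{lineage:20s} {dict(counts)}  red fraction of resolved: {red_frac:.2f}")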
Origin of primary plastids
The single origin of the primary plastids of Archaeplastida is widely supported by phylogenomic analyses of nuclear, plastid and mitochondrial genes ([START_REF] Jackson | The mitochondrial genomes of the glaucophytes gloeochaete wittrockiana and cyanoptyche gloeocystis: Multilocus phylogenetics suggests amonophyletic archaeplastida[END_REF], Rodríguez-Ezpeleta et al., 2005). However, the identity of the cyanobacterial lineage that gave rise to primary plastids has long been debated. Studies aiming at its identification can be divided into two groups according to the phylogenetic position assigned to the cyanobacterial endosymbiont within the cyanobacterial phylogeny. 'Early-branching' models propose that the cyanobacterium at the origin of primary plastids branched early in the phylogeny of cyanobacteria, whereas 'late-branching' models suggest that plastids derive from a cyanobacterial lineage that diversified later.
To identify the cyanobacterial lineage closest to primary plastids, it was necessary to extend the taxonomic sampling of cyanobacteria. Several studies have recently undertaken cyanobacterial genome sequencing projects, increasing the taxonomic sampling of species with different lifestyles, morphologies and phylogenetic positions. This increase in 'omics' data allowed us to use phylogenomic approaches to pinpoint the lineage closest to the plastid ancestor. However, the sequencing by our team of Gloeomargarita lithophora, a cyanobacterium representing an early-diverging clade, was decisive to place the origin of plastids within the phylogeny of cyanobacteria. G. lithophora belongs to a new cyanobacterial clade recently described in freshwater microbialites and microbial mats (Ragon et al., 2014).
Our phylogenomic analyses of concatenated plastid genes clearly showed that G. lithophora is the closest known extant cyanobacterium to the ancestor of primary plastids (Fig. 21). Likewise, the genes of cyanobacterial origin encoded in the nuclei of Archaeplastida (EGT, endosymbiotic gene transfer) corroborate the results obtained with plastid genes (Fig. 22). These results reveal for the first time the sister group of primary plastids and also show that the plastid is not related to heterocyst-forming filamentous cyanobacteria or to unicellular diazotrophic cyanobacteria, as had been proposed in a number of previous studies (Dagan et al., 2013; Deusch et al., 2008; [START_REF] Falcón | Dating the cyanobacterial ancestor of the chloroplast[END_REF]). Most 'late-branching' models give major importance to the nitrogen-fixing capacity of several cyanobacterial clades and suggest that the establishment of the primary endosymbiosis could initially have relied on the exploitation by the host of this metabolic capacity of the cyanobacterial symbiont.
However, the genome of G. lithophora does not encode the proteins necessary for nitrogen fixation, which suggests that the primary endosymbiosis was not based on the exchange of this element. Moreover, photosynthetic eukaryotes retain no trace of this supposed past of plastids.
An important consequence of our results on the phylogenetic proximity between Gloeomargarita and the ancestor of primary plastids concerns the type of ecosystem that would have hosted the first photosynthetic eukaryotes. Cyanobacteria of the newly described order Gloeomargaritales have only been found in freshwater environments (Ragon et al., 2014) and they are generally of very low abundance [START_REF] Couradeau | Prokaryotic and eukaryotic community structure in field and cultured microbialites from the alkaline Lake Alchichica (Mexico)[END_REF][START_REF] Saghaï | Metagenome-based diversity analyses suggest a significant contribution of non-cyanobacterial lineages to carbonate precipitation in modern microbialites[END_REF]. The ecological distribution of G. lithophora therefore suggests that the first photosynthetic eukaryotes appeared in a freshwater habitat. Our results are supported by the reconstruction of ancestral characters of photosynthetic eukaryotes, which also concludes that they emerged in a terrestrial environment and which confirms our results by showing that primary plastids derive from a cyanobacterial ancestor close to G. lithophora.
Evolution of eukaryotic lineages with secondary plastids
In my thesis work, I also studied complex plastids. The diversity of secondary plastids and of their morphologies is large, but there are two major divisions: plastids of 'red' origin, if the plastid derives from an endosymbiosis with a red alga, or of 'green' origin, if the endosymbiont was a green alga. The groups possessing green secondary plastids include the chlorarachniophytes and the euglenophytes, whose plastids derive from two independent endosymbioses (Rogers et al., 2007). However, several proteins encoded in the nuclei of euglenophytes and chlorarachniophytes seem to have been acquired by gene transfer from red algae. Most of these proteins participate in functions related to photosynthesis (Markunas and Triemer, 2016; Yang et al., 2014) but also to plastid maintenance and division, such as the FtsZ protein. FtsZ is necessary for plastid division and, in the case of chlorarachniophytes, it derives from a red alga (Yang et al., 2014).
Figure 1. Multiple symbiogenetic origins of nucleated cells and plastids (adapted from Mereschkowsky, 1920).
Figure 2. Polyphyletic origin of plants (adapted from [START_REF] Mereschkowsky | La plante considérée comme un complexe symbiotique[END_REF]).
Figure 6. Ménage-à-trois hypothesis of primary plastid endosymbiosis. A) Initial proposal. B) Updated model. N = nucleus.
Figure 7. Primary plastid-bearing lineages.
Figure 8. Distribution of photosynthetic lineages across the eukaryotic tree.
Figure 9. Photosynthetic lineages with secondary green plastids. A) Chlorarachniophytes. B) Euglenophytes. Nu: nucleomorph; M: mitochondria; N: nucleus.
Figure 10. Single red algal enslavement at the origin of secondary red plastids (adapted from Cavalier-Smith, 2017).
Figure 11. Distribution of translocons in secondary red plastids surrounded by four membranes. The figure depicts the architecture of secondary red plastids in which the outermost membrane is fused with the host ER, such as in stramenopiles, haptophytes and cryptophytes. However, cryptophytes harbor a nucleomorph, the vestigial nucleus of the red algal endosymbiont, within the PPC. ER: endoplasmic reticulum; PPC: periplastidial compartment; IMS: intermembrane space.
Figure 14. Diagram of the methodology used to study the origin of primary plastids (Chapter 4).
Figure 15.
Figure 16.
Figure 17. Bayesian phylogenetic tree based on the concatenation of 97 plastid-encoded proteins and their cyanobacterial homologs. Information about the habitat and morphology of the cyanobacterial species is provided. Branches supported by posterior probability 1 are labeled with black circles.
Figure 1. The Position of Plastids in the Cyanobacterial Phylogeny.
Figure 2. Supernetwork Analysis of Plastid-Encoded Proteins and Cyanobacterial Homologs.
recover his results. It seems from our tree (supplementary fig.S16) that cryptophytes only have one SELMA Derlin gene derived from DerA in their nucleomorph. We could also recover one monophyletic putative SELMA DerB shared by alveolates, stramenopiles and haptophytes, but for the second putative SELMA Derlin shared by the same lineages, we were unable to define if it is closer to DerA or DerB. It is worth mentioning that the position
Figure 1. Phylogenies of Ubc paralogs. A) Unrooted phylogenetic tree of all similarity-based detected Ubc proteins. B) ML phylogenetic tree restricted to the Ubc_new isoform. Numbers represent bootstrap values. The scale bars represent the number of substitutions per unit branch length.
Figure 19. Cryptic red plastid in chlorarachniophytes. A) The last common ancestor of the SAR clade harbored a secondary red plastid. B) An early cercozoan engulfed a red plastid-bearing eukaryote.
Figure 20. Putative distribution of SELMA components from the chromist ancestor to present-day secondary red lineages. Proteins within red rectangles derive from the ERAD system of the red algal endosymbiont. Proteins within gray rectangles represent proteins that were lost. Dashed-line circles are nucleomorph-encoded proteins, whereas continuous-line circles represent proteins encoded in the nuclear genome.
Figure S1. Related to Figure 1. The Position of Plastids in the Cyanobacterial Phylogeny (A) Bayesian phylogenetic tree based on the concatenation of 97 plastidencoded proteins and their cyanobacterial homologues (21942 sites), including the fast-evolving cyanobacterial SynPro clade. Numbers indicate posterior probabilities. (B) Bayesian phylogenetic tree based on the concatenation of 16S + 23S rRNA sequences of plastids and cyanobacteria (3427 nucleotides). The tree is rooted on the sister group of Cyanobacteria, the Melainabacteria. Numbers indicate posterior probabilities. (C) Bayesian phylogenetic tree based on the concatenation of 72 nucleusencoded proteins of plastid origin (28102 sites). Numbers indicate posterior probabilities. Maximum likelihood bootstrap support is also shown for the Gloeomargarita+Archaeplastida group. (D) Bayesian phylogenetic tree based on the concatenation 97 plastid-encoded proteins and their cyanobacterial homologues recoded with the Dayhoff recoding scheme (21942 sites). Numbers indicate posterior probabilities. (E) Bayesian phylogenetic tree based on all sites of the concatenation of 97 plastid-encoded proteins and their cyanobacterial homologues. Numbers indicate posterior probabilities. (F-M) Bayesian phylogenetic trees based on the concatenation 97 plastidencoded proteins and their cyanobacterial homologues after removal of a proportion of the fastest-evolving sites. (F) 10% fastest-evolving sites removed; (G) 20% fastest-evolving sites removed; (H) 30% fastest-evolving sites removed; (I) 40% fastest-evolving sites removed; (J) 50% fastest-evolving sites removed; (K) 60% fastest-evolving sites removed; (L) 70% fastest-evolving sites removed; and (M) 80% fastest-evolving sites removed. Numbers indicate posterior probabilities.
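The progressive removal of fastest-evolving sites described for panels F-M can be reproduced with a short script along the following lines. This is only a sketch: it assumes per-site evolutionary rates were estimated beforehand (e.g., by the tree-inference software) and stored one value per alignment column in a hypothetical "site_rates.txt" file.

# Sketch: drop a given fraction of the fastest-evolving alignment columns,
# writing one reduced concatenation per removal fraction (10%-80%).
import numpy as np

def read_fasta(path):
    seqs, name = {}, None
    for line in open(path):
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            seqs[name] = []
        elif name:
            seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

def remove_fastest_sites(aln, rates, fraction):
    n_sites = len(rates)
    n_keep = n_sites - int(round(fraction * n_sites))
    keep = np.argsort(rates)[:n_keep]          # slowest sites first
    keep.sort()                                # preserve original column order
    return {name: "".join(seq[i] for i in keep) for name, seq in aln.items()}

aln = read_fasta("concatenation.fasta")        # hypothetical file names
rates = np.loadtxt("site_rates.txt")
for frac in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8):
    reduced = remove_fastest_sites(aln, rates, frac)
    with open(f"concat_minus_{int(frac * 100)}pct.fasta", "w") as out:
        for name, seq in reduced.items():
            out.write(f">{name}\n{seq}\n")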
Figure S2. Related to Figure 2. Heatmaps Showing the Distance Between the Plastid Proteins of Cyanophora paradoxa and Those of Other Taxa for the 97 Plastid-encoded Proteins Used in our Analyses. (A) Distances calculated from pairwise alignments. (B) Patristic distances.
Ponce-Toledo¹, David Moreira¹, Purificación López-García¹ and Philippe Deschamps¹*
¹Unité d'Ecologie Systématique et Evolution, CNRS, Université Paris-Sud, AgroParisTech, Université Paris-Saclay, 91400, Orsay, France
*Corresponding author: philippe.deschamps@u-psud.fr

Regions of the alignments were trimmed with BMGE v1.0 {Criscuolo, 2010 #4578}, allowing a maximum of 50% gaps per position. Preliminary phylogenetic trees were inferred from trimmed alignments using FastTree v2.1.7 {Price, 2010 #4665} with default parameters. This first set of trees was then manually inspected to determine those showing a topology compatible with an EGT/HGT scenario. For all positive topologies, only the sequences corresponding to the portion of interest of each draft phylogenetic tree (the part showing the photosynthetic eukaryotes and the closest outgroup) were saved for the remaining steps.
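A minimal sketch of this trimming-plus-quick-tree screening step is given below; the exact BMGE and FastTree command-line flags and the directory layout are assumptions from memory and should be checked against the installed versions rather than taken as the precise commands used in this work.

# Sketch: per-marker trimming with BMGE (max 50% gaps per position) followed by
# a quick FastTree run that produces a preliminary tree for manual screening.
import subprocess
from pathlib import Path

BMGE_JAR = "BMGE.jar"                      # hypothetical location

def trim_and_quick_tree(aln: Path, outdir: Path) -> Path:
    outdir.mkdir(exist_ok=True)
    trimmed = outdir / f"{aln.stem}.bmge.faa"
    tree = outdir / f"{aln.stem}.fasttree.nwk"
    # BMGE: amino-acid alignment, allow at most 50% gaps per kept position.
    subprocess.run(["java", "-jar", BMGE_JAR, "-i", str(aln), "-t", "AA",
                    "-g", "0.5", "-of", str(trimmed)], check=True)
    # FastTree with default (protein) settings writes the tree to stdout.
    with tree.open("w") as out:
        subprocess.run(["FastTree", str(trimmed)], stdout=out, check=True)
    return tree

if __name__ == "__main__":
    for aln in sorted(Path("alignments").glob("*.faa")):   # hypothetical layout
        print("preliminary tree:", trim_and_quick_tree(aln, Path("screening")))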
ML trees were inferred with IQ-TREE v1.5.1 using the CAT 20+Gamma model of sequence evolution, plus 1000 pseudo-bootstrap replicates {Nguyen, 2015 #4854}. A final selection of these trees was done by manual inspection to keep those fulfilling the following two requirements: 1) the protein has to be shared by cyanobacteria (or other bacteria), Archaeplastida and at least three secondary photosynthetic lineages, and 2) the corresponding phylogenetic trees have to support the monophyly of Archaeplastida and a clear separation of Viridiplantae and Rhodophyta. Finally, the 85 trees passing this final filter (Figs. S1-S85) were inspected to infer the phylogenetic origin of the corresponding genes in the secondary photosynthetic lineages (Table S3).
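The second requirement can be pre-screened programmatically before manual inspection. The sketch below uses ete3 to keep only trees in which Archaeplastida, Viridiplantae and Rhodophyta each form a clade; it is an approximation of the manual criterion (it ignores rooting subtleties), and the leaf-name lists and file names are hypothetical.

# Sketch: filter trees on monophyly criteria with ete3.
from ete3 import Tree

def names_of(group_file):
    return {line.strip() for line in open(group_file) if line.strip()}

viridiplantae = names_of("viridiplantae_leaves.txt")   # hypothetical lists
rhodophyta = names_of("rhodophyta_leaves.txt")
glaucophyta = names_of("glaucophyta_leaves.txt")
archaeplastida = viridiplantae | rhodophyta | glaucophyta

def passes_filters(tree_file: str) -> bool:
    t = Tree(tree_file)
    present = {leaf.name for leaf in t}
    for group in (archaeplastida, viridiplantae, rhodophyta):
        members = group & present
        if len(members) < 2:        # cannot assess monophyly with <2 leaves
            return False
        is_mono, _, _ = t.check_monophyly(values=members, target_attr="name")
        if not is_mono:
            return False
    return True

kept = [f for f in open("tree_list.txt").read().split() if passes_filters(f)]
print(f"{len(kept)} trees satisfy the Archaeplastida/Viridiplantae/Rhodophyta criteria")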
photosynthetic eukaryotes, as well as new cyanobacterial genomes, to apply a phylogenomic approach and thereby deepen our understanding of the origin and evolution of photosynthetic eukaryotes and improve our comprehension of primary and secondary endosymbioses. We were thus able to identify the cyanobacterial lineage phylogenetically closest to primary plastids and to propose hypotheses to retrace the evolution of secondary plastids. The most relevant results of my thesis work are summarized below.
Figure 22. Bayesian phylogenetic tree built from the concatenation of 72 proteins (28,102 sites) of cyanobacterial origin encoded in the nuclei of Archaeplastida. Numbers indicate posterior probabilities.
photosynthetic organelle derived from a red alga), the nucleus has kept a small number of genes of plastid origin (secondary EGTs), which suggests that C. parvum once had a plastid but lost it secondarily [START_REF] Huang | Phylogenomic evidence supports past endosymbiosis, intracellular and horizontal gene transfer in Cryptosporidium parvum[END_REF]. In our study of nuclear genes in eukaryotic lineages possessing secondary plastids, we were able to identify 85 genes compatible with a secondary EGT topology. Most of these genes are of cyanobacterial origin, but we also found a small number of non-cyanobacterial bacterial genes, possibly acquired by HGT in Archaeplastida and then transferred as EGTs during the secondary endosymbioses. Regarding the origin of secondary EGTs in chromalveolates, most of the genes (90%) were transferred from a red alga, in agreement with the origin of their secondary plastids. The remaining genes (10%) seem to come from green algae. Our results contradict previous studies that reported a large number of green genes in diatoms, suggesting a cryptic endosymbiosis with a green alga (Moustafa et al., 2009). In contrast, they are compatible with those of Deschamps & Moreira (2012), indicating that the green secondary EGTs had been wrongly identified. Likewise, our results do not provide sufficient evidence to support the hypothesis of a cryptic green endosymbiosis in diatoms or in other eukaryotic lineages currently possessing a red secondary plastid. It is possible that the few green algal genes found in these lineages result from horizontal transfers. Regarding the chlorarachniophytes and the euglenophytes, two photosynthetic lineages with green secondary plastids, we found a mixture of red and green genes (Fig. 23), present in very similar proportions. Our results show that 58% and 45% of the EGT genes come from red algae in chlorarachniophytes and euglenophytes, respectively. This represents a very important difference compared with the ~10% of green algal genes in the secondary red lineages. The important contribution of red algae to the nuclear genomes of chlorarachniophytes and euglenophytes has been found repeatedly. For example, Curtis et al. (2012) identified that, among the groups of genes of algal origin encoded in the nucleus of Bigelowiella natans, 22% of the genes appeared to have been transferred from red algae.
Figure 23. Proteins encoded in the nuclear genomes of chlorarachniophytes and euglenophytes have diverse origins. A) Example of a gene transferred from green algae. B) Example of a gene transferred from red algae. The phylogenetic trees were reconstructed with a maximum likelihood approach. Numbers represent bootstrap values.
3. Evolution of the SELMA protein transport system

The SELMA protein import machinery, which is located in the second outermost membrane of secondary red plastids surrounded by four membranes, seems to derive from the ERAD system of the red algal endosymbiont at the origin of the plastid. The apparent monophyly of SELMA suggests a single secondary endosymbiosis with a red alga. According to the chromalveolate hypothesis, SELMA would have evolved in the ancestor of chromists and would have been transmitted to the extant secondary red lineages by vertical descent. According to the serial endosymbiosis hypotheses, SELMA would have been transferred together with the plastids during an undetermined number of tertiary or quaternary endosymbioses, as was the case for the TIC/TOC complex during the secondary endosymbioses (Hirakawa et al., 2012; [START_REF] Sheiner | Protein sorting in complex plastids[END_REF]). There is one example of endosymbiotic transfer of the SELMA translocon in dinoflagellates of the family Kareniaceae. It is well established that members of this family replaced the ancestral peridinin-containing plastid by a plastid derived from a haptophyte [START_REF] Gabrielsen | Genome evolution of a tertiary dinoflagellate plastid[END_REF]. Some studies suggest that the SELMA system may have been restructured, perhaps also using host-derived proteins in certain lineages. This model speculates a differential construction of the system from the same initial set of proteins. Yet, as with the serial endosymbiosis hypothesis, these models appear very unparsimonious. Whatever the model for the origin of SELMA, it appears that the structure and composition of the system were strongly influenced by the loss of the nucleomorph and of the nucleomorph-encoded components that were not transferred to the nuclei of haptophytes, alveolates and stramenopiles. Although our results are not particularly favorable to any single one of the possible hypotheses, we have shown that the evolution of the SELMA system is much more complex than previously thought. It will therefore be necessary to continue studying SELMA in depth in future research.
Table 2. The different putative components of SELMA and their respective inferred phylogenetic origins in CASH lineages. Empty spaces in the table imply that the corresponding protein is not present or was not detected in our analyses. R: gene derived from the red algal endosymbiont; X: unknown origin; E: gene derived from the eukaryotic host; C: cryptophytes; H: haptophytes; S: stramenopiles; A: Apicomplexa; Nm: gene encoded in the nucleomorph.
Stork et al. produced the most comprehensive genomic survey of SELMA components in all red secondary lineage genomes available as of 2012. This survey was conducted using similarity-based BLAST searches, starting with ERAD proteins of Saccharomyces and SELMA components previously identified in cryptophytes, apicomplexans and stramenopiles as references to query other genomes. The classification of all detected proteins as either part of ERAD or of SELMA was based on similarity and on the in silico detection of BTS.
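A survey of this kind can be approximated with a simple BLASTP screen such as the sketch below, which searches a set of reference ERAD/SELMA proteins against each predicted proteome and stores tabular hits for later phylogenetic and targeting-signal inspection. The reference file, e-value threshold and directory layout are illustrative assumptions, not the parameters used by Stork et al.

# Sketch: BLASTP of known ERAD/SELMA reference proteins against predicted
# proteomes, keeping tabular hits below an e-value cut-off.
import subprocess
from pathlib import Path

QUERIES = "erad_selma_references.faa"       # hypothetical reference set
EVALUE = "1e-5"                             # hypothetical threshold

def blast_proteome(proteome: Path, outdir: Path) -> Path:
    outdir.mkdir(exist_ok=True)
    db = outdir / proteome.stem
    hits = outdir / f"{proteome.stem}.blastp.tsv"
    subprocess.run(["makeblastdb", "-in", str(proteome), "-dbtype", "prot",
                    "-out", str(db)], check=True)
    subprocess.run(["blastp", "-query", QUERIES, "-db", str(db),
                    "-evalue", EVALUE, "-outfmt", "6",
                    "-out", str(hits)], check=True)
    return hits

for proteome in sorted(Path("proteomes").glob("*.faa")):    # hypothetical layout
    print("candidate homologs written to", blast_proteome(proteome, Path("blast_out")))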
Table S1. List of species in the local genome database
Archaea
Thermococcus gammatolerans EJ3 Cyanobacterium sp. PCC 10605 Methylobacillus flagellatus KT Serratia proteamaculans 568 Botrytis cinerea Gracilaria salicornia Porphyridium purpureum Chromera velia Thermoproteus neutrophilus V24Sta Geobacillus kaustophilus HTA426 Phenylobacterium zucineum HLK1 Synechococcus sp. WH7803 Dinobryon sp. UTEXLB2267 Neobodo designis CCAP1951 Spizellomyces punctatus Monomorphina aenigmatica
Thermococcus kodakarensis KOD1 cyanobacterium sp. PCC 7702 Methylobacterium extorquens PA1 Shewanella amazonensis SB2B Brachiomonas submarina Gracilaria tenuistipitata Postia placenta Cryptoglena skujai Thermoproteus tenax Kra 1 Geobacter metallireducens GS_15 Photobacterium profundum SS9 Synechococcus sp. WH8102 Diplonema papillatum Neoparamoeba aestuarina Sporobolomyces roseus Nannochloropsis gaditana
Thermococcus onnurineus NA1 Cyanobacterium stanieri PCC 7202 Methylocella silvestris BL2 Shigella boydii Sb227 Brachypodium distachyon Gracilariopsis lemaneiformis Prasinococcus capsulatus CCMP1194 Cryptomonas paramecium CCAP977 Thermoproteus uzoniensis 768-20 Geobacter sulfurreducens PCA Photorhabdus luminescens TTO1 Synechocystis sp. PCC6803 Dolichomastix tenuilepis CCMP3274 Neosiphonia japonica Sporolithon durum Nannochloropsis granulata
Acidianus hospitalis W1 Thermococcus sibiricus MM 739 Cyanobacterium Yellowstone A Methylococcus capsulatus Bath Silicibacter pomeroyi DSS_3 Bryopsis plumosa Grammatophora oceanica CCMP 410 coloniale CCMP1413 merolae Methanocaldococcus jannaschii DSM 2661 Thermosphaera aggregans DSM 11486 Geobacter uraniumreducens Rf4 Phytoplasma australiense Synechocystis sp. PCC 7509 Dumontia simplex Neospora caninum Stagonospora nodorum Odontella sinensis
Acidilobus saccharovorans 345-15 Thermococcus sp. AM4 Cyanobacterium Yellowstone B Microchaete sp. PCC 7126 Simkania negevensis Z Caenorhabditis elegans Grateloupia filicina Prasiola crispa Cyanidium caldarium Methanocaldococcus sp. FS406-22 Uncultured Marine group II euryarchaeote Gloeobacter kilaueensis JS1 Plesiocystis pacifica SIR-1 Syntrophobacter fumaroxidans MPOB Dunaliella salina Nephroselmis olivacea Stephanosphaera pluvialis Oryza sativa
Aciduliprofundum boonei T469 Thermofilum pendens Hrk_5 Cyanobium gracile PCC 6307 Microcoleus sp. PCC 7113 Sinorhizobium medicae WSM419 Cafeteria roebergensis E4-10 Grateloupia taiwanensis Proboscia alata PI-D3 Cyanophora paradoxa Methanocaldococcus vulcanius M7 uncultured Methanogenic archaeon RC-I Gloeobacter violaceus PCC7421 Pleurocapsa sp. PCC 7319 Syntrophomonas wolfei. Wolfei str. Goettingen Ectocarpus siliculosus Neurospora crassa OR74A Stereomyxa ramosa Ostreococcus tauri
Aeropyrum pernix K1 Thermoplasma acidophilum DSM_1728 Cyanothece sp. ATCC51142 Microcoleus vaginatus FGP-2 Sodalis glossinidius Calcidiscus leptoporus RCC1130 Gregarina niphandrodes Proboscia inermis CCAP1064-1 Cylindrotheca closterium Methanocella paludicola SANAE Vulcanisaeta distributa DSM14429 Gloeocapsa sp. PCC 73106 Pleurocapsa sp. PCC 7327 Syntrophus aciditrophicus SB Eimeria mitis Norrisiella sphaerica BC52 Stichococcus bacillaris Partenskyella glossopodia RCC365
Archaeoglobus fulgidus DSM_4304 Thermoplasma volcanium GSS1 Cyanothece sp. ATCC 51472 Microcystis aeruginosa NIES843 Solibacter usitatus Ellin6076 Calliarthron tuberculosum Griffithsia okiensis Procryptobia sorokini Didymosphenia geminata Methanococcoides burtonii DSM_6242 Vulcanisaeta moutnovskia 768-28 Gloeocapsa sp. PCC 7428 Polaromonas sp. JS666 Thermoanaerobacter pseudethanolicus Elphidium margaritaceum Ochromonas sp. CCMP1393 Stigeoclonium helveticum Pavlova lutheri
Archaeoglobus profundus DSM 5631 Cyanothece sp. CCY0110 Microcystis aeruginosa PCC 7806 Sorangium cellulosum Capsaspora owczarzaki Guillardia theta Proteomonas sulcata CCMP704 Ectocarpus siliculosus Methanococcus aeolicus Nankai_3 Gloeomargarita lithophora Polynucleobacter sp. QLW P1DMWA 1 Thermobifida fusca YX Emiliania huxleyi CCMP1516 Odontella aurita isolate 1302-5 Striatella unipunctata CCMP2910 Phaeodactylum tricornutum
Archaeoglobus veneficus SNP6 Cyanothece sp. PCC 7424 Moorea producens 3L Sphingomonas wittichii RW1 Carteria crucifera Gymnochlora sp. CCMP2014 Prototheca wickerhamii Emiliania huxleyi CCMP1516 Methanococcus maripaludis C5 Gluconacetobacter diazotrophicus PAl_5 Porphyromonas gingivalis W83 Thermodesulfovibrio yellowstonii DSM 11347 Entamoeba dispar SAW760 Odontella sinensis Stygamoeba regulata BSH-02190019 Physcomitrella patens
Caldiarchaeum subterraneum Bacteria Cyanothece sp. PCC 7425 Moorella thermoacetica ATCC 39073 Sphingopyxis alaskensis RB2256 Cephaleuros virescens Gymnochlora stellata Prymnesium parvum Texoma1 Euglenaformis proxima Methanococcus vannielii Gramella forsetii KT0803 Prochlorococcus marinus AS9601 Thermomicrobium roseum DSM 5159 Entamoeba histolytica Oedogonium cardiacum Synchroma pusillum CCMP3072 Populus trichocarpa
Caldivirga maquilingensis IC_167 Acaryochloris marina MBIC11017 Cyanothece sp. PCC 7822 Mycobacterium abscessus Spirulina major PCC 6313 Ceramium kondoi Gymnodinium catenatum GC744 Pseudoneochloris marina Euglena gracilis Methanococcus voltae A3 Borrelia afzelii PKo Granulibacter bethesdensis CGDNIH1 Prochlorococcus marinus CCMP1375 Thermosipho melanesiensis BI429 Entamoeba invadens IP1 Oryza sativa Tetradesmus dimorphus Porphyra pulchra
Candidatus Methanoregula boonei 6A8 Acaryochloris sp. CCMEE 5410 Cyanothece sp. PCC 8801 Mycoplasma agalactiae PG2 Spirulina subsalsa PCC 9445 Cerataulina daemon Haematococcus lacustris Pseudopedinella elastica CCMP716 Euglena longa Methanocorpusculum labreanum Z Brachyspira hyodysenteriae WA1 Haemophilus ducreyi 35000HP Prochlorococcus marinus CCMP1986 Thermosynechococcus elongatus BP1 Entamoeba nuttalli P19 Ostreococcus lucimarinus Tetrahymena thermophila Porphyra purpurea
Candidatus Micrarchaeum acidiphilum Acholeplasma laidlawii PG_8A Cyanothece sp. PCC 8802 Myxococcus xanthus DK_1622 Stanieria sp. PCC 7437 Chaetoceros curvisetus Hafniomonas reticulata Pseudoscourfieldia marina Euglenaria anabaena Methanoculleus marisnigri JR1 Bradyrhizobium sp. BTAi1 Hahella chejuensis KCTC_2396 Prochlorococcus marinus MIT9215 Thermotoga lettingae TMO Eremosphaera viridis Ostreococcus tauri Tetraselmis chuii PLY429 Porphyra yezoensis
Candidatus Nanosalinarum sp. J07AB56 Acidiphilium cryptum JF_5 Cylindrospermopsis raciborskii CS-505 Natranaerobius thermophilus JWNM-WN-LF Staphylococcus aureus MRSA252 Chaetoceros neogracile CCMP1317 Halochlorococcum marinum Pteridomonas danica PT Eunotia naegelii Methanohalobium evestigatum Z-7303 Brucella abortus 9_941 Haliangium ochraceum DSM 14365 Prochlorococcus marinus MIT9301 Thermus thermophilus HB8 Erythrolobus australicus CCMP3124 Palpitomonas bilix NIES-2562 Thalassiosira pseudonana Porphyridium purpureum
Candidatus Nanosalina sp. J07AB43 Acidithiobacillus ferrooxidans ATCC 2327 Cylindrospermum stagnale PCC 7417 Nautilia profundicola AmH Stenotrophomonas maltophilia K279a Chaetoceros simplex Hanusia phi CCMP325 Pteromonas angulosa Eutreptiella gymnastica Methanohalophilus mahii DSM 5219 Buchnera aphidicola APS Halorhodospira halophila SL1 Prochlorococcus marinus MIT9303 Thioalkalivibrio sp. HL-EbGR7 Erythrolobus madagascarensis CCMP3276 Pandorina morum Thalassiothrix antarctica L6-D1 Pyropia haitanensis
Candidatus Nitrosoarchaeum koreensis MY1 Acidobacteria bacterium Ellin345 Cytophaga hutchinsonii ATCC_33406 Neisseria gonorrhoeae FA_1090 Streptococcus agalactiae 2603V_R Chaetomium globosum Helicodictyon planctonicum Puccinia graminis Fistulifera solaris Methanoplanus petrolearius DSM11571 Burkholderia sp. 383 Halothece sp. PCC 7418 Prochlorococcus marinus MIT9312 Thiobacillus denitrificans ATCC_25259 Ettlia oleoabundans Parachlorella kessleri Thecamonas trahens Pyropia perforata
Candidatus Nitrosoarchaeum limnia SFB1 Acidothermus cellulolyticus 11B Dactylococcopsis salina PCC 8305 Neorickettsia sennetsu Miyayama Streptomyces avermitilis MA_4680 Chaetopeltis orbicularis Helobdella robusta Pycnococcus provasolii RCC2336 Fucus vesiculosus Methanopyrus kandleri AV19 Caldicellulosiruptor saccharolyticus DSM_8903 Halothermothrix orenii H 168 Prochlorococcus marinus MIT9313 Thiomicrospira crunogena XCL_2 Eucampia antarctica CCMP1452 Paramecium tetraurelia Theileria orientalis Shintoku Rhizosolenia imbricata
Candidatus Nitrososphaera gargensis Ga9.2 Acinetobacter baumannii AB0057 Dechloromonas aromatica RCB Nitratiruptor sp. SB155_2 Sulcia muelleri GWSS Chattonella subsalsa CCMP2191 Hemiselmis andersenii CCMP439 Pycnococcus sp. CCMP1998 Gelidium elegans Methanosaeta concilii GP6 Calothrix sp. PCC 6303 Helicobacter acinonychis Sheeba Prochlorococcus marinus MIT9515 Tolypfthrix sp. PCC 9009 Eucheuma denticulatum Paramoeba atlantica Theileria parva Muguga Rhodomonas salina
Candidatus Nitrosotenuis uzonensis Actinobacillus pleuropneumoniae L20 Dehalococcoides sp. BAV1 Nitrobacter hamburgensis X14 Sulfurihydrogenibium sp. YO3AOP1 Chlamydomonas reinhardtii Heterochlamydomonas inaequalis Pyramimonas parkeae CCMP726 Gelidium vagum Methanosaeta harundinacea 6Ac Calothrix sp. PCC 7103 Heliobacterium modesticaldum Ice1 Prochlorococcus marinus NATL1A Treponema denticola ATCC_35405 Eudorina elegans Paraphysomonas imperforata PA2 Thraustochytrium sp. LLF1b Roundia cardiophora
Candidatus Parvarchaeum acidiphilum Aeromonas hydrophila ATCC_7966 Deinococcus geothermalis DSM_11300 Nitrosococcus oceani ATCC_19707 Sulfurimonas denitrificans DSM 1251 Chlorarachnion reptans CCCM449 Heterosigma akashiwo CCMP2393 Pyropia haitanensis Gracilaria chilensis Methanosaeta thermophila PT Calothrix sp. PCC 7507 Herminiimonas arsenicoxydans Prochlorococcus marinus NATL2A Trichodesmium erythraeum Euglenaformis proxima Partenskyella glossopodia RCC365 Timspurckia oligopyrenoides CCMP3278 Saccharina japonica
Candidatus Parvarchaeum acidophilus Agrobacterium tumefaciens C58 Delftia acidovorans SPH_1 Nitrosomonas europaea ATCC_19718 Sulfurovum sp. NBC37_1 Chlorella sp. NC64A Heterosigma akashiwo NIES293 Pyropia perforata Gracilaria salicornia Methanosalsum zhilinae DSM 4017 Campylobacter concisus 13826 Herpetosiphon aurantiacus ATCC 23779 Propionibacterium acnes KPA171202 Trichormus azollae 708 Euglena gracilis Pavlova lutheri Toxoplasma gondii GT1 Sargassum horneri
Cenarchaeum symbiosum A Akkermansia muciniphila ATCC BAA-835 Desulfitobacterium hafniense Y51 Nitrosospira multiformis ATCC_25196 Symbiobacterium thermophilum IAM_14863 Chlorella vulgaris C-169 Heterosiphonia pulchra Rhizopus oryzae Gracilaria Methanosarcina acetivorans C2A Candidatus Phytoplasma asteris AYWB Hydrogenobaculum sp. Y04AAS1 Prosthecochloris vibrioformis DSM_265 Tropheryma whipplei TW08_27 Euglena longa Pavlova sp. CCMP459 Trachelomonas volvocina Selaginella moellendorffii
Crenarchaeota sp. Alcanivorax borkumensis SK2 Desulfococcus oleovorans Hxd3 Nocardia farcinica IFM_10152 Synechococcus calcipolaris Chlorococcum oleofaciens Histoplasma capsulatum Rhizosolenia imbricata tenuistipitata Methanosarcina barkeri str. Fusaro Carboxydothermus hydrogenoformans Z_2901 Hyphomonas neptunium ATCC_15444 Proteus mirabilis HI4320 Ureaplasma parvum ATCC 700970 Euglenaria anabaena Pedinomonas minor Trebouxia arboricola Sporolithon durum
Desulfurococcus kamchatkensis Aliivibrio salmonicida LFI1238 Desulfotalea psychrophila LSv54 Nocardioides sp. JS614 Synechococcus elongatus PCC6301 Chondrus crispus Homo sapiens Rhizosolenia setigera CCMP 1694 Gracilariopsis lemaneiformis Methanosarcina mazei Go1 Carsonella ruddii PV Idiomarina loihiensis L2TR Protochlamydia amoebophila UWE25 Verminephrobacter eiseniae EF01_2 Euglena sp. Pelagococcus subviridis CCMP1429 Trentepohlia annulata Thalassiosira pseudonana
Desulfurococcus mucosus DSM2162 Alkalilimnicola ehrlichei MLHE_1 Desulfotomaculum reducens MI_1 Nodosilinea nodulosa PCC 7104 Synechococcus elongatus PCC7942 Chromera velia Ignatius tetrasporus Rhodella maculata CCMP736 Grateloupia taiwanensis Methanosphaera stadtmanae DSM_3091 Caulobacter crescentus CB15 Jannaschia sp. CCS1 Pseudanabaena sp. PCC 6802 Vibrio cholerae N16961 Eunotia naegelii Pelagomonas calceolata CCMP1756 Trichoderma reesei Toxoplasma gondii GT1
Euryarchaeota sp. Alkaliphilus metalliredigens QYMF Desulfovibrio desulfuricans G20 Nodularia spumigena CCY9414 Synechococcus sp. CC9311 Chromera velia Isochrysis sp. CCMP1324 Rhodella violacea Guillardia theta Methanosphaerula palustris E1-9c Cellvibrio japonicus Ueda107 Janthinobacterium sp. Marseille Pseudanabaena sp. PCC 7367 Waddlia chondrophila WSU 86-1044 Euplotes focardii TN1 Percolomonas cosmopolitus AE-1 Trichomonas vaginalis G3 Trachelomonas volvocina
Evoldeep Eury fosmid Alteromonas macleodii Deep ecotype Desulfovibrio vulgaris DP4 Nostoc punctiforme 11 Synechococcus sp. CC9605 Chroodactylon ornatum Laccaria bicolor Rhodochaete parvula Gymnochlora stellata Methanospirillum hungatei JF_1 Chamaesiphon minutus PCC 6605 Kineococcus radiotolerans SRS30216 Pseudanabaena sp. PCC 7429 Wigglesworthia glossinidia Eutreptiella gymnastica Percursaria percursa Trichoplax adhaerens Grell-BS-1999 Triparma laevis
Evoldeep Thaum fosmid Amoebophilus asiaticus 5a2 Desulfovibrio vulgaris Hildenborough Nostoc sp. PCC 7107 Synechococcus sp. CC9902 Chroomonas mesostigmatica CCMP1168 Leishmania infantum Rhodomonas abbreviata Heterosigma akashiwo NIES293 Methanothermobacter marburgensis. Marburg Chlamydia muridarum Nigg Klebsiella pneumoniae MGH_78578 Pseudoalteromonas atlantica T6c Wolbachia endosymbiont TRS Exanthemachrysis gayraliae RCC1523 Perkinsus marinus Trichosphaerium sp. Ulnaria acus
Ferroglobus placidus DSM 10642 Anabaena cylindrica PCC 7122 Diaphorobacter sp. TPSY Nostoc sp. PCC7120 Synechococcus sp. PCC 6312 Chrysochromulina polylepis CCMP1757 Leishmania major strain Friedlin Rhodomonas salina Lithodesmium undulatum Methanothermobacter thermautotrophicus Chlamydia trachomatis 70 Kocuria rhizophila DC2201 Pseudomonas aeruginosa PAO1 Wolinella succinogenes DSM_1740 Filamoeba nolandi Pessonella sp. Triparma laevis Undaria pinnatifida
Ferroplasma acidarmanus fer1 Anabaena sp. PCC 7108 Dichelobacter nodosus VCS1703A Nostoc sp. PCC 7524 Synechococcus sp. PCC 7002 Cladophora glomerata Leptocylindrus danicus CCMP1856 Rhodosorus marinus CCMP769 Lotharella vacuolata Methanothermococcus okinawensis IH1 Chlamydophila abortus S26_3 Lactobacillus acidophilus NCFM Psychrobacter arcticus 273_4 Xanthobacter autotrophicus Py2 Fistulifera solaris Phacotus lenticularis Trypanosoma brucei TREU927 Vaucheria litorea CCMP2940
Halalkalicoccus jeotgali B3 Anabaena variabilis Dictyoglomus thermophilum H-6-12 Novosphingobium aromaticivorans DSM_12444 Synechococcus sp. PCC 7335 Coccomyxa subellipsoidea Leptosira obovata Rhynchomonas nasuta Micromonas pusilla CCMP1545 Methanothermus fervidus Chlamydophila pneumoniae CWL029 Lactococcus lactis Il1403 Psychromonas ingrahamii 37 Xanthomonas campestris ATCC_33913 Florenciella parvula RCC1693 Phaeodactylum tricornutum Trypanosoma cruzi strain CL Brener Vertebrata lanosa
Haloarcula hispanica ATCC 33960 Anaerocellum thermophilum DSM 6725 Dinoroseobacter shibae DFL_12 Oceanobacillus iheyensis HTE831 Synechococcus sp. PCC 7336 Codium fragile Lithodesmium undulatum Rosalina sp. Micromonas sp. RCC299 Methanotorris igneus Kol 5 Chlorobaculum parvum NCIB 8327 Laptolyngbya sp. PCC 7376 Ralstonia eutropha H16 Xenococcus sp. PCC 7305 Florenciella sp. RCC1587 Phaeomonas parva CCMP2877 Ulnaria acus
Haloarcula marismortui ATCC_43049 Anaeromyxobacter sp. Fw109_5 Ehrlichia canis Jake Ochrobactrum anthropi ATCC_49188 Synechococcus sp. PCC 7502 Compsopogon coeruleus Litonotus pictus P1 Roundia cardiophora Nanoarchaeum equitans Kin4_M Chlorobium chlorochromatii CaD3 Lawsonia intracellularis PHE_MN1_00 Raphidiopsis brookii D9 Xylella fastidiosa 9a5c Floydiella terrestris Phanerochaete chrisosporium Ulvella endozoica
Halobacterium salinarum R1 Anaplasma marginale StMaries Elusimicrobium minutum Pei191 Oenococcus oeni PSU_1 Synechococcus sp. RCC307 Condylostoma magnum COL2 Lobochlamys segnis Rozella allomycis CSF55 Natrialba magadii ATCC 43099 Chloroflexus aurantiacus J_10_fl Legionella pneumophila Corby Renibacterium salmoninarum ATCC_33209 Yersinia enterocolitica 8081 Fragilariopsis cylindrus Phycomyces blakesleeanus Undaria pinnatifida
Haloferax volcanii DS2 Anoxybacillus flavithermus WK1 Enterobacter sp. 638 Oligotropha carboxidovorans OM5 Synechococcus sp. WH5701 Coprinopsis cinerea Lobomonas rostrata Saccharina japonica Natronomonas pharaonis DSM_2160 Chlorogloeopsis fritschii PCC 6912 Leifsonia xyli CTCB07 Rhizobium etli CFN_42 Zymomonas mobilis ZM4 Fragilariopsis kerguelensis L26-C5 Physcomitrella patens Unidentified eukaryote sp. NY0313808BC1
Halogeometricum borinquense DSM 11551 Aquifex aeolicus VF5 Enterococcus faecalis V583 Onion yellows Phytoplasma sp. OY-M Corethron pennatum L29A3 Lotharella amoebiformis Saccharomyces cerevisiae RM11-1a Nitrosopumilus maritimus SCM1 Chlorogloeopsis fritschii PCC 9212 Leptolyngbya boryana PCC 6306 Rhodobacter sphaeroides 2_4_1 Fritschiella tuberosa Phytophthora capsici Uronema belkae
Halomicrobium mukohataei DSM 12286 Arcobacter butzleri RM4018 Erythrobacter litoralis HTCC2594 Opitutus terrae PB90-1 Crustomastix stigmata CCMP3273 Lotharella globosa Salpingoeca rosetta Picrophilus torridus DSM_9790 Chloroherpeton thalassium ATCC 35110 Leptolyngbya sp. PCC 6406 Rhodococcus sp. RHA1 Fucus vesiculosus Phytophthora sojae Ustilago maydis
halophilic archaeon DL31 Aromatoleum aromaticum EbN1 Escherichia coli 536 Oscillatoria acuminata PCC 6304 Eukaryotes Cryptococcus neoformans JEC21 Lotharella vacuolata Sargassum horneri Pyrobaculum aerophilum IM2 Chromobacterium violaceum ATCC_12472 Leptolyngbya sp. PCC 7375 Rhodoferax ferrireducens T118 Furcellaria lumbricalis Picochlorum oklahomensis CCMP2329 Vannella robusta
Halopiger xanaduensis SH-6 Arthrobacter aurescens TC1 Exiguobacterium sibiricum 255-15 Oscillatoria sp. PCC 10802 Acanthamoeba castellanii str Neff Cryptoglena skujai Madagascaria erythrocladiodes CCMP3234 Schizochytrium aggregatum ATCC28209 Pyrobaculum arsenaticum DSM 13514 Chromohalobacter salexigens DSM 3043 Leptospira borgpetersenii JB197 Rhodopirellula baltica SH_1 Aureoumbra lagunensis CCMP1507 Fusarium graminearum Picocystis salinarum CCMP1897 Vannella sp.
Haloquadratum walsbyi DSM_16790 Arthrospira maxima CS-328 Fervidobacterium nodosum Rt17_B1 Oscillatoria sp. PCC 6407 Ahnfeltiopsis flabelliformis Cryptomonas paramecium CCAP977 Mayorella sp. Schizophylum commune Pyrobaculum calidifontis JCM 11548 Chroococcidiopsis sp. PCC 6712 Leptothrix cholodnii SP-6 Rhodopseudomonas palustris BisA53 Aureoumbra lagunensis CCMP1510 Galdieria sulphuraria Pinguiococcus pyrenoidosus CCMP2078 Vaucheria litorea CCMP2940
Halorhabdus utahensis DSM 12940 Arthrospira platensis NIES-39 Finegoldia magna ATCC_29328 Oscillatoria sp. PCC 6506 Alexandrium minutum CCMP113 Cryptosporidium hominis TU502 Mazzaella japonica Schizosaccharomyces pombe Pyrobaculum islandicum DSM 4184 Chroococcidiopsis thermalis PCC 7203 Leuconostoc citreum KM20 Rhodospirillum rubrum ATCC 11170 Babesia bigemina Gelidium elegans Pirula salina Vertebrata lanosa
Halorubrum lacusprofundi ATCC 49239 Arthrospira platensis Paraca Fischerella muscicola PCC 73103 Oscillatoria sp. PCC 7112 Allomyces macrogynus Cryptosporidium parvum Melampsora larici-populina Scrippsiella trochoidea CCMP3099 Pyrobaculum sp. 1860 Citrobacter koseri ATCC_BAA895 Listeria innocua Clip11262 Richelia intracellularis HH01 Babesia microti Gelidium vagum Planophila laetevirens Vexillifera sp.
Haloterrigena turkmenica DSM 5511 Arthrospira sp. PCC 8005 Fischerella muscicola PCC 7414 Parabacteroides distasonis ATCC_8503 Alveolata sp. CCMP3155 Cyanidioschyzon merolae Micromonas pusilla CCMP1545 Selaginella moellendorffii Pyrococcus abyssi GE5 Clavibacter michiganensis NCPPB_382 Lyngbya sp. PCC8106 Rickettsia akari Hartford Bangia atropurpurea Geminigera cryophila CCMP2564 Plasmodium falciparum 3D7 Vitrella brassicaformis CCMP3346
Hyperthermus butylicus DSM_5456 Azoarcus sp. BH72 Fischerella sp. JSC-11 Parachlamydia acanthamoebae Ammonia sp. Cyanidium caldarium Micromonas sp. RCC299 Sorites sp. Pyrococcus furiosus DSM 3638 Clostridium acetobutylicum ATCC_824 Lysinibacillus sphaericus C3_41 Rivularia sp. PCC 7116 Bathycoccus prasinos Gephyrocapsa oceanica RCC1303 Plasmodium vivax North Korea Volvox carteri
Ignicoccus hospitalis KIN4_I Azorhizobium caulinodans ORS_571 Fischerella sp. PCC 9339 Paracoccus denitrificans PD1222 Anophryoides haemophila AH6 Cyanophora paradoxa Microthamnion kuetzingianum Spermatozopsis exsultans Pyrococcus horikoshii OT3 Coleofasciculus chthonoplastes PCC 7420 Macrococcus caseolyticus JCSC5402 Roseiflexus castenholzii DSM_13941 Batrachochytrium Giardia lamblia Plasmodium yoelii 17XNL Yarrowia lipolytica
Ignisphaera aggregans DSM17230 Bacillus amyloliquefaciens FZB42 Fischerella sp. PCC 9431 Parvibaculum lavamentivorans DS1 Aphanochaete repens Cyanoptyche gloeocystis Mimulus guttatus Pyrococcus sp. NA2 Colwellia psychrerythraea 34H Magnetococcus sp. MC_1 Roseobacter denitrificans OCh_114 dendrobatidis JAM81 Glaucocystis nostochinearum Pleurochrysis carterae CCMP645
Korarchaeum cryptofilum OPF8 Marine group II euryarchaeote SCGC AB629J06 Metallosphaera cuprina Ar-4 Metallosphaera sedula DSM_5348 Methanobacterium sp. AL-21 Methanobrevibacter ruminantium M1 Methanobrevibacter smithii ATCC_35061 Methanocaldococcus fervens AG86 Methanocaldococcus infernus ME Bacteroides fragilis NCTC_9343 Bacteroides fragilis YCH46 Bartonella bacilliformis KC583 Baumannia cicadellinicola Hc Bdellovibrio bacteriovorus HD100 Beijerinckia indica subsp. Indica ATCC 9039 Bifidobacterium adolescentis ATCC_15703 Blochmannia floridanus Bordetella bronchiseptica RB50 Fischerella sp. PCC 9605 Fischerella thermalis PCC 7521 Flavobacterium johnsoniae UW101 Flavobacterium psychrophilum JIP02_86 Francisella novicida U112 Frankia alni ACN14a Fusobacterium nucleatum ATCC_25586 Geitlerinema sp. PCC 7407 Geminocystis herdmanii PCC 6308 Pasteurella multocida Pm70 Pectobacterium atrosepticum SCRI1043 Pediococcus pentosaceus ATCC_25745 Pelagibacter ubique HTCC1062 Pelobacter carbinolicus DSM_2380 Pelobacter propionicus DSM_2379 Pelodictyon luteolum DSM_273 Pelotomaculum thermopropionicum SI Petrotoga mobilis SJ95 Aplanochytrium sp. PBS07 Aplanochytrium stocchinoi GSBS06 Arabidopsis thaliana Aspergillus niger Asterionella formosa Asterionellopsis glacialis Asteromonas gracilis Aurantiochytrium limacinum ATCCMYA1381 Aureococcus anophagefferens Cylindrocapsa geminella Cylindrotheca closterium Daphnia pulex Debaryomyces hansenii CBS767 Desmochloris halophila Dictyocha speculum CCMP1381 Dictyostelium discoideum Dictyostelium purpureum QSDP1 Didymosphenia geminata Mitosporidium daphniae Monomorphina aenigmatica Monosiga brevicollis Mucor circinelloides Mycosphaerella graminicola Naegleria gruberi Nannochloropsis gaditana Nannochloropsis granulata Nematostella vectensis Plastids and nucleomorphs Alveolata sp. CCMP3155 Arabidopsis thaliana Asterionella formosa Asterionellopsis glacialis Aureococcus anophagefferens Aureoumbra lagunensis CCMP1507 Bangia atropurpurea Pyrococcus yayanosii CH1 Pyrolobus fumarii 1A Staphylothermus hellenicus DSM 12710 Staphylothermus marinus F1 Sulfolobus acidocaldarius DSM_639 Sulfolobus islandicus L.S.2.15 Sulfolobus solfataricus P2 Sulfolobus tokodaii str 7 Thermococcus barophilus MP Coprothermobacter proteolyticus DSM 5265 Corynebacterium diphtheriae NCTC_13129 Coxiella burnetii RSA_493 Criblamydia sequanensis CRIB-18 Crinalium epipsammum PCC 9333 Crocosphaera watsonii WH 3 Crocosphaera watsonii WH8501 Cronobacter sakazakii ATCC BAA-894 Cupriavidus taiwanensis Magnetospirillum magneticum AMB_1 Mannheimia succiniciproducens MBEL55E Maricaulis maris MCS10 Marinobacter aquaeolei VT8 Marinomonas sp. MWYL1 Mastigocladopsis repens PCC 10914 Mesoplasma florum L1 Mesorhizobium sp. BNC1 Methylibium petroleiphilum PM1 Rubidibacter lacunae KORDI 51-2 Rubrobacter xylanophilus DSM_9941 Ruegeria sp. TM1040 Saccharophagus degradans 2_40 Saccharopolyspora erythraea NRRL_2338 Salinibacter ruber DSM_13855 Salinispora arenicola CNS_205 Salmonella enterica arizonae Scytonema hofmanni PCC 7110 Betaphycus philippinensis Bicosoecid sp. ms1 Bigelowiella natans Blastocystis hominis Bodo saltans Bolbocoleon piliferum Bolidomonas pacifica CCMP1866 Botryococcus braunii Botryosphaerella sudetica Glaucosphaera vacuolata Gloeochaete wittrockiana Gloiopeltis furcata Golenkinia longispicula Goniomonas sp. 
Gonium pectorale Gracilaria changii Gracilaria chilensis Gracilaria chouae Polarella glacialis CCMP1383 Bigelowiella natans Polysphondylium pallidum Brachypodium distachyon Polytomella parva SAG 63-3 Calliarthron tuberculosum Populus trichocarpa Cerataulina daemon Porphyra pulchra Chaetoceros simplex Porphyra purpurea Chlamydomonas reinhardtii Porphyra sp. Chlorella vulgaris C-169 Porphyra yezoensis Porphyridium cruentum Chondrus crispus
Table S2. List of outgroup cyanobacterial species from final alignments
Acaryochloris marina MBIC11017 Fischerella thermalis PCC 7521
Anabaena sp. PCC 7108 Geminocystis herdmanii PCC 6308
Arthrospira maxima CS-328 Gloeocapsa sp. PCC 73106
Arthrospira platensis NIES-39 Laptolyngbya sp. PCC 7376
Arthrospira sp. PCC 8005 Leptolyngbya sp. PCC 6406
Calothrix sp. PCC 7103 Leptolyngbya sp. PCC 7375
Chamaesiphon minutus PCC 6605 Lyngbya sp. PCC8106
Chlorogloeopsis fritschii PCC 6912 Mastigocladopsis repens PCC 10914
Chroococcidiopsis sp. PCC 6712 Microchaete sp. PCC 7126
Coleofasciculus chthonoplastes PCC 7420 Microcoleus vaginatus FGP-2
Crocosphaera watsonii WH8501 Microcystis aeruginosa NIES843
Cyanobacterium sp. PCC 10605 Moorea producens 3L
Cyanobacterium sp. PCC 7702 Nodosilinea nodulosa PCC 7104
Cyanobacterium stanieri PCC 7202 Nostoc punctiforme 11
Cyanobacterium Yellowstone B Nostoc sp. PCC 7120
Cyanothece sp. ATCC51142 Nostoc sp. PCC 7524
Cyanothece sp. CCY0110 Oscillatoria sp. PCC 10802
Cyanothece sp. ATCC 51472 Oscillatoria sp. PCC 7112
Cyanothece sp. PCC 7424 Oscillatoria sp. PCC 6506
Cyanothece sp. PCC 7425 Pleurocapsa sp. PCC 7319
Cyanothece sp. PCC 7822 Pseudanabaena sp. PCC 6802
Cyanothece sp. PCC 8801 Pseudanabaena sp. PCC 7429
Cylindrospermopsis raciborskii CS-505 Raphidiopsis brookii D9
Cylindrospermum stagnale PCC 7417 Richelia intracellularis HH01
Dactylococcopsis salina PCC 8305 Rubidibacter lacunae KORDI 51-2
Fischerella muscicola PCC 7414 Spirulina major PCC 6313
Fischerella sp. JSC-11 Stanieria sp. PCC 7437
Fischerella sp. PCC 9339 Thermosynechococcus elongatus BP1
Fischerella sp. PCC 9431 Tolypfthrix sp. PCC 9009
Fischerella sp. PCC 9605 Trichormus azollae 708
Table S4. EggNOG functional annotation of genes transferred from Archaeplastida to secondary photosynthetic eukaryotes
Part of a stress-induced multi-chaperone system, it is involved in the recovery of the cell from heat-induced damage, in cooperation with DnaK, DnaJ and GrpE. Acts before DnaK, in the processing of protein aggregates.
RG_029 AT3G16950.2 RG_049 AT5G13630.2 RG_072 AT5G08280.1 RG_095 AT5G30510.1 Energy production Coenzyme transport Coenzyme transport Translation, Dihydrolipoyl dehydrogenase cobaltochelatase, cobn subunit Tetrapolymerization of the monopyrrole PBG into the 30S ribosomal protein S1
[Supplementary table: reference gene set RG_002 to RG_111 — columns: Code, Reference sequence (e.g. Arabidopsis thaliana AT identifiers and Cyanidioschyzon merolae CMR/CMO identifiers), Functional class (COG-style categories such as Translation, ribosomal structure and biogenesis; Coenzyme transport and metabolism; Function unknown), Function description.]
This Bayesian phylogenetic tree is based on the concatenation of 97 plastid-encoded proteins and their cyanobacterial homologs. Branches supported by posterior probability 1 are labeled with black circles. The ML bootstrap value is also indicated for the branch uniting plastids with the cyanobacterium Gloeomargarita lithophora. A false-colored scanning electron microscopy image of this cyanobacterium is shown in the center of the tree. Information about the habitat and morphology of the cyanobacterial species is provided. For the complete tree, see Figure S1.
This phylogenetic supernetwork is based on the individual ML trees of 97 plastid-encoded proteins. For a distance-based analysis of these individual proteins, see Figure S2.
SUPPLEMENTARY MATERIAL
[Phylogenetic tree figures: branch support values (Bayesian posterior probabilities and ML bootstrap values) and taxon labels, including Synechococcus sp. JA-3-3Ab, Synechococcus sp. JA-2-3B'a(2-13), Synechococcus sp. PCC 7335 and Anabaena sp. PCC 7108.]
ACKNOWLEDGEMENTS
When I arrived in France I knew that my stay in this country would bring me wonderful experiences and it did. I want to thank all the people that have been with me in this

ACKNOWLEDGMENTS
This study was supported by European Research Council grants ProtistWorld (P.L.-G.; grant agreement no. 322669) and CALCYAN (K.B.; grant agreement no. 307110) under the European Union's Seventh Framework Program and the RTP CNRS Génomique Environnementale project MetaStrom (D.M.).
Acknowledgments
This study was supported by European Research Council grant ProtistWorld (P.L.-G., agreement no. 322669), the Université Paris-Sud program "Attractivité" (P.D.) and the Agence Nationale de la Recherche (D.M., project ANR-15-CE32-0003 "ANCESSTRAM").
[Phylogenetic tree figure: taxon labels of the cyanobacterial, red and green algal, and land plant species included in the analysis, among them Gloeomargarita lithophora.]
After having exposed the diversity of photosynthetic eukaryotes as well as the certainties and doubts concerning their evolution, I present our results on the origin and evolution of primary (Chapter 4) and secondary plastids (Chapters 5 and 6). The discussion and the conclusions of my PhD are presented in Chapters 7 and 8, respectively.
"Don't compete! -competition is always injurious to the species, and you have plenty of resources to avoid it!"
Pëtr Kropotkin. Mutual Aid: A Factor of Evolution (1902)

SUPPLEMENTAL INFORMATION
Supplemental Information includes Supplemental Experimental Procedures, two figures, and one dataset and can be found with this article online at http://dx.doi.org/10.1016/j.cub.2016.11.

Bigelowiella natans against a local genome database containing representatives of the three domains of life, in particular a comprehensive collection of genomes and transcriptomes of photosynthetic protists (detailed methods are described in the Supplementary Material). Guillardia and Bigelowiella proteins with hits in other photosynthetic eukaryotes and in cyanobacteria were selected for maximum likelihood (ML) phylogenetic analysis. Trees for these proteins were manually filtered to retain those fulfilling two criteria: i) trees have to support the monophyly of Archaeplastida and a clear separation of Viridiplantae and Rhodophyta (with secondary lineages branching within them), and ii) proteins have to be shared by at least three secondary photosynthetic lineages. We identified in this way 85 genes most likely acquired by secondary photosynthetic eukaryotes from Archaeplastida. 73 were cyanobacterial genes likely transferred sequentially through primary and secondary endosymbioses, and 12 were genes likely transferred from diverse bacterial groups to Archaeplastida and subsequently to secondary photosynthetic groups (supplementary table S2 and figs. S1-S85, Supplementary Material online). Most of them were absent in non-photosynthetic eukaryotes, supporting that they were not misinterpreted vertically-inherited genes.
The 85 ML phylogenies were highly resolved and enabled to unambiguously determine, for each secondary lineage, whether the gene had green or red algal origin. As expected, in most trees (~90%, fig. 1A), the red-plastid-endowed CASH lineages had genes derived from red algae (e.g., fig. 2A and2B). We expected the opposite situation, namely a majority of 'green' genes, in the green-plastid-endowed chlorarachniophytes and euglenids. However, 47 of the 81 trees containing chlorarachniophytes (58 %, fig. 1A) and 22 of the 51 trees containing euglenids (43 %, fig. 1A) supported a 'red' origin of the corresponding genes (e.g., fig 2A andB). These surprisingly high values were in sharp contrast with the mere 9 trees (10%, fig. 1A) showing CASH phyla branching with green individual markers (e.g., apoproteins A1 and A2 of the photosystem I P700 chlorophyll a). Otherwise, we eliminated them. In addition, we compared our individual trees with the multigene cyanobacterial phylogeny recently published by Shih et al. (2013) [S3]. These authors identified a set of well-supported clearly monophyletic cyanobacterial groups. When those groups were retrieved nonmonophyletic with strong support in our single-marker trees, the corresponding markers were discarded. We therefore followed a highly conservative approach to reduce as much as possible any noise due to erroneous marker selection.
Phylogenetic Analysis
Multiple sequence alignments were carried out with MAFFT [S4] using default parameters. Selection of conserved regions for phylogenetic analyses was done with trimAL [S5] with automatic selection of parameters (-automated1 option).
Substitution models for phylogenetic analysis of individual protein markers were selected using ProtTest 3 [S6].
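For readers who want to script this preprocessing, the following Python sketch shows one possible way to chain the alignment and trimming steps described above. It is only an illustrative sketch: it assumes the mafft and trimal executables are installed and on the PATH, and the file names are hypothetical placeholders, not the files used in this study.

```python
# Illustrative sketch of the alignment/trimming step described above.
# Assumes the `mafft` and `trimal` executables are available on the PATH;
# file names are hypothetical placeholders.
import subprocess
from pathlib import Path

def align_and_trim(fasta_in: str, workdir: str = "alignments") -> Path:
    out = Path(workdir)
    out.mkdir(exist_ok=True)
    aligned = out / (Path(fasta_in).stem + ".mafft.fasta")
    trimmed = out / (Path(fasta_in).stem + ".trimmed.fasta")

    # MAFFT with default parameters (the alignment is written to stdout).
    with open(aligned, "w") as handle:
        subprocess.run(["mafft", fasta_in], stdout=handle, check=True)

    # trimAl with automatic selection of parameters (-automated1 option).
    subprocess.run(
        ["trimal", "-in", str(aligned), "-out", str(trimmed), "-automated1"],
        check=True,
    )
    return trimmed

if __name__ == "__main__":
    print(align_and_trim("marker_0001.faa"))
```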
Supplemental References
S1. Hyatt, D., Chen, G.L., Locascio, P.F., Land, M.L., Larimer, F.W., and Hauser, L.J.
(2010). Prodigal: prokaryotic gene recognition and translation initiation site identification. BMC Bioinformatics 11, 119. S2.
Altschul, S.F., Madden, T.L., Schaffer, A.A., Zhang, J., Zhang, Z., Miller, W., and Lipman, D.J. (1997). Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 25, 3389-3402. S3. Shih, P.M., Wu, D., Latifi, A., Axen, S.D., Fewer, D.P., Talla, E., Calteau, A., Cai, F., Tandeau de Marsac, N., Rippka, R., et al. (2013). Improving the coverage of the cyanobacterial phylum using diversity-driven genome sequencing. Proc Natl Acad Sci U S A 110, 1053-1058. S4. Katoh, K., and Standley, D.M. (2013). MAFFT multiple sequence alignment software version 7: improvements in performance and usability. Mol Biol Evol 30, 772-780. S5.
Capella-Gutierrez, S., Silla-Martinez, J.M., and Gabaldon, T. (2009). trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics 25, 1972-1973. S6. Darriba, D., Taboada, G.L., Doallo, R., and Posada, D. (2011). ProtTest 3: fast selection of best-fit models of protein evolution. Bioinformatics 27, 1164-1165.
Functioning of Secondary
Supplementary Materials and Methods
Data availability
Protein sequences used for inferring all phylogenetic trees presented in this work are available for download at http://www.ese.u-psud.fr/article950.html?lang=en. They include nonaligned sequences and trimmed alignments.
Sequence analysis
A local database was constructed to host the proteomes predicted for various nuclear genomes and transcriptomes as well as plastid genomes (see Table S1 for the complete list).
All proteins in the Bigelowiella natans (Chlorarachniophyta) and Guillardia theta (Cryptophyta) predicted proteomes were used as queries for BLASTp sequence similarity searches (Camacho et al., 2009) against the local database. We retained up to 350 top hits with an e-value threshold <1e-05. BLASTp outputs were parsed to identify proteins shared by diverse photosynthetic eukaryotes and that are more similar to cyanobacteria or other bacteria than to non-photosynthetic eukaryotes.
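As an illustration of how such a screen can be automated, the sketch below wraps a BLASTp call with the NCBI BLAST+ command-line tools, using the e-value threshold and hit limit quoted above; the database and file names are hypothetical placeholders, not the ones used in the study.

```python
# Illustrative sketch of the BLASTp screening step, using the NCBI BLAST+
# command-line tools. Database and file names are hypothetical placeholders.
import subprocess

def run_blastp(query_faa: str, db: str, out_tsv: str,
               evalue: float = 1e-05, max_hits: int = 350) -> None:
    cmd = [
        "blastp",
        "-query", query_faa,                 # e.g. a predicted proteome in FASTA
        "-db", db,                           # local database built with makeblastdb
        "-out", out_tsv,
        "-outfmt", "6 qseqid sseqid pident evalue bitscore",
        "-evalue", str(evalue),              # e-value threshold < 1e-05
        "-max_target_seqs", str(max_hits),   # keep up to 350 top hits
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_blastp("Bigelowiella_natans_proteome.faa", "localdb/all_proteomes", "hits.tsv")
```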
For protein sequences that passed this pre-selection, reciprocal BLASTp searches were done against the database to collect a set of up to 600 similar proteins. We then aligned each
Supplementary table S1
The composition of the local genome database used in this study is the same as depicted in the
Title: Origin and evolution of photosynthetic eukaryotes
Keywords: Photosynthesis, endosymbiosis, plastids, cyanobacteria, eukaryotes. Abstract: Primary plastids derive from a cyanobacterium that established an endosymbiotic relationship with a eukaryotic host. This event gave rise to the supergroup Archaeplastida, which includes the Viridiplantae (green algae and land plants), the Rhodophyta (red algae) and the Glaucophyta. Following the primary endosymbiosis, red and green algae spread the capacity for photosynthesis to other eukaryotic lineages through secondary endosymbioses. Although considerable progress has been made in understanding the evolution of photosynthetic eukaryotes, important questions have remained open, such as the identity of the cyanobacterial lineage closest to primary plastids and the number and identity of the partners involved in secondary endosymbioses. My thesis consisted in studying the origin and early evolution of photosynthetic eukaryotes using phylogenetic and phylogenomic approaches. My work shows that primary plastids evolved from a symbiont phylogenetically close to Gloeomargarita lithophora, a cyanobacterium representing an early-diverging clade that has only been detected in terrestrial environments. This result provides new clues about the ecological context in which the primary endosymbiosis probably took place. Concerning the evolution of eukaryotic lineages with secondary plastids, I show that the nuclear genomes of chlorarachniophytes and euglenophytes, two photosynthetic lineages with green alga-derived plastids, encode a large number of genes acquired by transfers from red algae. Finally, I show that SELMA, the machinery for protein translocation across the second outermost membrane of secondary red plastids with four membranes, has a surprisingly complicated history with important evolutionary implications: cryptophytes recruited a set of SELMA components different from those of haptophytes, stramenopiles and alveolates. My thesis thus identified for the first time the cyanobacterial lineage closest to primary plastids and provides new insights into the complex events that have marked the evolution of secondary photosynthetic eukaryotes.
Title : Origins and early evolution of photosynthetic eukaryotes
Keywords : Photosynthesis, endosymbiosis, plastids, cyanobacteria, eukaryotes Abstract : Primary plastids derive from a cyanobacterium that entered into an endosymbiotic relationship with a eukaryotic host. This event gave rise to the supergroup Archaeplastida which comprises Viridiplantae (green algae and land plants), Rhodophyta (red algae) and Glaucophyta. After primary endosymbiosis, red and green algae spread the ability to photosynthesize to other eukaryotic lineages via secondary endosymbioses. Although considerable progress has been made in the understanding of the evolution of photosynthetic eukaryotes, important questions remained debated such as the present-day closest cyanobacterial lineage to primary plastids as well as the number and identity of partners in secondary endosymbioses. The main objectives of my PhD were to study the origin and evolution of plastid-bearing eukaryotes using phylogenetic and phylogenomic approaches to shed some light on how primary and secondary endosymbioses occurred. In this work, I show that primary plastids evolved from a close relative of Gloeomargarita lithophora, a recently sequenced early-branching cyanobacterium that has been only detected in terrestrial environments. This result provide interesting hints on the ecological setting where primary endosymbiosis likely took place. Regarding the evolution of eukaryotic lineages with secondary plastids, I show that the nuclear genomes of chlorarachniophytes and euglenids, two photosynthetic lineages with green alga-derived plastids, encode for a large number of genes acquired by transfers from red algae. Finally, I highlight that SELMA, the translocation machinery putatively used to import proteins across the second outermost membrane of secondary red plastids with four membranes, has a surprisingly complex history with strong evolutionary implications: cryptophytes have recruited a set of SELMA components different from those present in haptophytes, stramenopiles and alveolates. In conclusion, during my PhD I identified for the first time the closest living cyanobacterium to primary plastids and provided new insights on the complex evolution that have undergone secondary plastidbearing eukaryotes. |
01759914 | en | [
"math.math-mp"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01759914/file/AnalysisAndHomOfBidModel.pdf | Annabelle Collin
Sébastien Imperiale
Mathematical analysis and 2-scale convergence of a heterogeneous microscopic bidomain model
The aim of this paper is to provide a complete mathematical analysis of the periodic homogenization procedure that leads to the macroscopic bidomain model in cardiac electrophysiology. We consider space-dependent and tensorial electric conductivities as well as space-dependent physiological and phenomenological non-linear ionic models. We provide the nondimensionalization of the bidomain equations and derive uniform estimates of the solutions. The homogenization procedure is done using 2-scale convergence theory which enables us to study the behavior of the non-linear ionic models in the homogenization process.
Introduction
Cardiac electrophysiology describes and models chemical and electrical phenomena taking place in the cardiac tissue. Given the large number of related pathologies, there is an important need for understanding these phenomena. As illustrated in Figure 1, there are two modeling scales in cardiac electrophysiology. The modeling at the microscopic scale aims at producing a detailed description of the origin of the electric wave in the cells responsible for the heart contraction. The modeling at the macroscopic scale -deduced from the microscopic one using asymptotic techniques -describes the propagation of this electrical wave in the heart.
One of the most popular mathematical models in cardiac electrophysiology is the bidomain model, introduced by [START_REF] Tung | A bi-domain model for describing ischemic myocardial d-c potentials[END_REF] and described in detail in the reference textbooks [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF], [START_REF] Sundnes | Computing the Electrical Activity in the Heart[END_REF] and [START_REF] Pullan | Mathematically Modeling the Electrical Activity of the Heart[END_REF]. At the microscopic scale, this model is based upon the description of electrical and chemical quantities in the cardiac muscle. The latter is segmented into the intra-and the extra-cellular domains -hence the name of the model. These two domains are separated by a membrane where electric exchanges occur. A simple variant found in the literature comes from an electroneutrality assumption -justified by an asymptotic analysis -applied to the Nernst-Planck equations, see for example [START_REF] Mori | A three-dimensional model of cellular electrical activity[END_REF] and [START_REF] Mori | From three-dimensional electrophysiology to the cable model: an asymptotic study[END_REF]. This variant leads to partial differential equations whose unknowns are intra-and extra-cellular electric potentials coupled with non linear ordinary differential equations called ionic models at the membrane. They represent the transmembrane currents and other cellular ionic processes. Many non-linear ionic models exist in the literature and can be classified into two categories: physiological models, see for instance [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF][START_REF] Noble | A modification of the Hodgkin-Huxley equation applicable to Purkinje fiber action and pacemaker potentials[END_REF][START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF][START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF] and phenomenological models, see for example [START_REF] Mitchell | A two-current model for the dynamics of cardiac membrane[END_REF][START_REF] Nagumo | An active pulse transmission line stimulating nerve axon[END_REF][START_REF] Fitzhugh | Impulses and physiological states in theoretical models of nerve membrane[END_REF]. See also [START_REF] Keener | Mathematical Physiology[END_REF] and [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF] as reference textbooks on the matter. The choice of an adapted model is based on the type of considered cardiac cells (ventricles, atria, Purkinje fibers, . . . ) but also on the desired algorithm complexity (in general phenomenological models are described with less parameters). From the mathematical standpoint, existence and uniqueness analysis for different ionic models is given in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF].
A homogenization procedure allows for the deduction of the macroscopic behaviors from the microscopic ones and leads to the equations of the macroscopic bidomain model. Concerning the mathematical point of view, this homogenization procedure is given formally in [START_REF] Neu | Homogenization of syncytial tissues[END_REF] or more recently in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Richardson | Derivation of the bidomain equations for a beating heart with a general microstructure[END_REF]. In [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], it is proven using Γ-convergence. The existence and the uniqueness of a solution for the bidomain model at the macroscopic scale have been studied for different ionic models in the literature [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Sanfelici | Convergence of the Galerkin approximation of a degenerate evolution problem in electrocardiology[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF][START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF].
The aim of this paper is to fill a gap in the literature by providing a complete mathematical analysis based on 2-scale convergence theory of the homogenization procedure that leads to the macroscopic bidomain model. Our analysis is exhaustive in the sense that we provide existence and uniqueness results, nondimensionalization of the equations and 2-scale convergence results -in particular, for the non-linear terms supported on the membrane surfacein the same mathematical framework. To anticipate meaningful modeling assumptions, we consider that the electric conductivities are tensorial and space varying at the microscopic scale. We also consider ionic models of both types (physiological and phenomenological) that may vary smoothly in space (in order to consider ventricular or atrial cells for instance). We carefully introduce the various standard assumptions satisfied by the ionic terms and discriminate the models compatible with our analysis. We are convinced that this work will further allow the analysis of more complex models by laying the ground of the bidomain equations 2-scale analysis. More precisely, among the modeling ingredients that could fit in our context, one could consider: heterogeneous concentrations of ionic species inside the cells, influences of heart mechanical deformations [START_REF] Richardson | Derivation of the bidomain equations for a beating heart with a general microstructure[END_REF][START_REF] Göktepe | Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem[END_REF][START_REF] Corrado | Identification of weakly coupled multiphysics problems. application to the inverse problem of electrocardiology[END_REF], gap junctions [START_REF] Hand | Homogenization of an electrophysiological model for a strand of cardiac myocytes with gap-junctional and electric-field coupling[END_REF] and the cardiac microscopic fiber structure in the context of local 2-scale convergence [START_REF] Briane | Three models of non periodic fibrous materials obtained by homogenization[END_REF][START_REF] Ptashnyk | Multiscale modelling and analysis of signalling processes in tissues with non-periodic distribution of cells[END_REF].
The paper is organized as follows.
• In Section 2, we describe the considered heterogeneous microscopic bidomain model and review the main ionic models. Depending on how they were derived, we organize them into categories. This categorization is useful for the existence (and uniqueness) analysis.
• Although it is not the main focus of the article, we present -in Section 3 -existence and uniqueness results of the heterogeneous microscopic bidomain model. The proof -given in Appendix 5 -uses the Faedo-Galerkin approach. The originality of the proposed strategy is the reformulation of the microscopic equations as a scalar reaction-diffusion problem, see Section 3.1. Such an approach is inspired by the macroscopic analysis done in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and from the analysis of an electroporation model given in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF]. Then, before stating the existence and uniqueness theorems of Section 3.3, we present and discuss -see Section 3.2 -in detail all the mathematical assumptions required. Finally, in Section 3.4, we explain how the solutions of our original problem can be recovered by a post-processing of the scalar reaction-diffusion problem solutions.
• In Section 4, the homogenization process of the heterogeneous microscopic bidomain model is given. It relies on the underlying assumption that the medium is periodic and uses the 2-scale convergence theory (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]). In preliminary steps, we provide a nondimensionalization of the bidomain equations, see Section 4.1. In order to mathematically analyze the micro- and macroscopic scales of the model and to consider the time-dependence of the system, we develop -in Section 4.2 -adapted uniform estimates. Our strategy can be extended to other problems, for instance the study of cell electroporation. Then the 2-scale convergence theory is applied in order to obtain the macroscopic bidomain equations. The analysis is done in three steps:
-In Section 4.3, we give the mathematical framework required for the 2-scale convergence. Among the standard results of the 2-scale convergence theory that we recall, we provide less standard properties by giving 2-scale convergence results on surfaces.
-In Section 4.4, the 2-scale convergence is used and the limit homogenized problem is given. It corresponds to a 2-scale homogenized model. Specific care is taken in the convergence analysis of the non-linear terms, which represents one of the most technical points of the presented approach.
-Finally, in Section 4.5, the two-scale homogenized model is decoupled and a macroscopic bidomain equation is recovered.
Note that this differs from [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], in which a proof of the homogenization process is proposed using Γ-convergence for a specific ionic model. This technique is well suited when the model is naturally described as the minimization of a convex functional, which is not the case for all the physiological ionic models.
Microscopic bidomain model
In this section, we give a short description of the heterogeneous microscopic bidomain model and the considered ionic models.
The cardiac muscle is decomposed into two parts. We denote by Ω ⊂ R 3 the volume of the heart, Ω i the intracellular region and Ω e the extracellular region. Physiologically, the cells are connected by many gap junctions; therefore, geometrically, we assume that Ω i and Ω e are two connected domains with Lipschitz boundary verifying
$$\Omega = \Omega_e \cup \Omega_i \quad \text{and} \quad \Omega_e \cap \Omega_i = \emptyset. \qquad (1)$$
The subscripts i and e are used to distinguish the intra-and extracellular quantities, respectively, and α to refer to either of them indifferently. We suppose that the membrane Γ m = ∂Ω e ∩ ∂Ω i is regular and non-empty. We define n i and n e as the unit normal vectors pointing from Ω i and Ω e , respectively, to the exterior. The following microscopic bidomain model is studied for time t ∈ [0, T ]
$$\begin{cases}
\nabla_x \cdot (\sigma_\alpha \nabla_x u_\alpha) = 0 & \Omega_\alpha,\\
\sigma_i \nabla_x u_i \cdot n_i = \sigma_e \nabla_x u_e \cdot n_i & \Gamma_m,\\
-\sigma_i \nabla_x u_i \cdot n_i = C_m \dfrac{\partial V_m}{\partial t} + I_{ion}^{tot} & \Gamma_m,\\
V_m = u_i - u_e & \Gamma_m,
\end{cases} \qquad (2)$$
where u i and u e are electric potentials, C m the membrane capacitance and I tot ion an electrical current depending on ionic activities at the membrane. The conductivities σ α are assumed to be tensorial and depend on x in order to take various modeling assumptions into account. For example, this general form of the conductivities allows us to consider: the dependence of ionic concentrations (remark that a first approximation is to consider space-wise constant ionic concentrations); the heart mechanical deformations [START_REF] Göktepe | Electromechanics of the heart: a unified approach to the strongly coupled excitation-contraction problem[END_REF][START_REF] Corrado | Identification of weakly coupled multiphysics problems. application to the inverse problem of electrocardiology[END_REF] or a complex model of gap junctions (see [START_REF] Hand | Homogenization of an electrophysiological model for a strand of cardiac myocytes with gap-junctional and electric-field coupling[END_REF]). In order to close the problem, we need to prescribe adequate boundary conditions on ∂Ω, the external boundary of the domain. We assume that no electric current flows out of the heart
$$\sigma_\alpha \nabla_x u_\alpha \cdot n_\alpha = 0, \qquad \partial\Omega_\alpha \cap \partial\Omega. \qquad (3)$$
Finally, one can observe that Equations (2) and (3) define u i or u e up to the same constant. Therefore, we choose to impose
$$\int_{\Gamma_m} u_e \, d\gamma = 0. \qquad (4)$$
We now describe the term I tot ion which appears in (2). In terms of modeling, action potentials are produced as a result of ionic currents that pass across the cell membrane, triggering a depolarization or repolarization of the membrane over time. The currents are produced by the displacement of ionic species across the membrane through ionic channels. The channels open and close in response to various stimuli that regulate the transport of ions across the membrane. The cell membrane can be modeled as a combined resistor and capacitor,
$$C_m \frac{\partial V_m}{\partial t} + I_{ion}^{tot}.$$
The ionic current I tot ion is decomposed into two parts, I tot ion = I ion -I app , where the term I app corresponds to the external stimulus current. Historically, the first action potential model is the Hodgkin-Huxley model [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF]. In order to understand the complexity of physiological models, we give a brief description of this model -the most important model in all of the physiological theory see [START_REF] Keener | Mathematical Physiology[END_REF] -originally formulated for neurons. The transmembrane current I ion proposed by the Hodgkin-Huxley model is I ion = I N a +I K +I l , where I N a is the sodium current, I K the potassium current and I l the leakage current which concerns various and primarily chloride ions. The currents are determined for k = N a, K, l by
I k = g k (V m -E k ),
where g k is the conductance and E k , the equilibrium voltage. The conductance g l is supposed to be constant and the other conductances are defined by
$$g_{Na} = m^3 h\, \bar{g}_{Na}, \qquad g_K = n^4\, \bar{g}_K, \qquad (5)$$
where ḡ Na and ḡ K are the maximal conductances of the sodium and potassium currents, respectively. The dimensionless state variables m, n and the inactivation variable h satisfy the following ordinary differential equations
∂ t w = α w (V m )(1 -w) -β w (V m )w, w = m, n, h, (6)
where α w and β w are the voltage-dependent rate constants which control the activation and the inactivation of the variable w. In Chapter 4 of [START_REF] Keener | Mathematical Physiology[END_REF], α w and β w both have the following form
$$\frac{C_1\, e^{(V_m - V_0)/C_2} + C_3\,(V_m - V_0)}{1 + C_4\, e^{(V_m - V_0)/C_5}}, \qquad (7)$$
where C i , i = 1, • • • , 5 and V 0 are the model parameters. An adaptation of the Hodgkin-Huxley model to the cardiac action potential was suggested by D. Noble in 1962 [START_REF] Noble | A modification of the Hodgkin-Huxley equation applicable to Purkinje fiber action and pacemaker potentials[END_REF]. Many physiological models have been proposed ever since: for the ventricular cells [START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF][START_REF] Tusscher | A model for human ventricular tissue[END_REF][START_REF] Tusscher | Alternans and spiral breakup in a human ventricular tissue model[END_REF][START_REF] Grandi | A novel computational model of the human ventricular action potential and Ca transient[END_REF][START_REF] O'hara | Simulation of the undiseased human cardiac ventricular action potential: Model formulation and experimental validation[END_REF] and for the atrial cells [START_REF] Grandi | A novel computational model of the human ventricular action potential and Ca transient[END_REF][START_REF] Courtemanche | Ionic mechanisms underlying human atrial action potential properties: insights from a mathematical model[END_REF][START_REF] Nygren | Mathematical model of an adult human atrial cell the role of K+ currents in repolarization[END_REF][START_REF] Maleckar | Mathematical simulations of ligand-gated and cell-type specific effects on the action potential of human atrium[END_REF][START_REF] Koivumäki | Impact of sarcoplasmic reticulum calcium release on calcium dynamics and action potential morphology in human atrial myocytes: A computational study[END_REF][START_REF] Grandi | Human atrial action potential and Ca2+ model: sinus rhythm and chronic atrial fibrillation[END_REF][START_REF] Wilhelms | Benchmarking electrophysiological models of human atrial myocytes[END_REF]]. We refer for example to [START_REF] Sachse | Computational Cardiology: Modeling of Anatomy, Electrophysiology and Mechanics[END_REF] for a 2004 survey.
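To make the structure of the gating dynamics (6) and of the rate functions (7) concrete, here is a small Python sketch; the numerical constants are illustrative placeholders and not the calibrated Hodgkin-Huxley values.

```python
# Illustrative sketch of the gating dynamics (6) with rate functions of the
# generic form (7). The constants below are placeholders, not the calibrated
# Hodgkin-Huxley parameters.
import numpy as np

def rate(Vm, C1, C2, C3, C4, C5, V0):
    """Generic rate function (7)."""
    num = C1 * np.exp((Vm - V0) / C2) + C3 * (Vm - V0)
    den = 1.0 + C4 * np.exp((Vm - V0) / C5)
    return num / den

def gate_step(w, Vm, alpha, beta, dt):
    """One explicit Euler step of (6): dw/dt = alpha(Vm)(1 - w) - beta(Vm) w."""
    return w + dt * (alpha(Vm) * (1.0 - w) - beta(Vm) * w)

if __name__ == "__main__":
    alpha = lambda V: rate(V, 0.1, 10.0, 0.0, 1.0, -10.0, -40.0)   # placeholder
    beta = lambda V: rate(V, 4.0, -18.0, 0.0, 0.0, 10.0, -65.0)    # placeholder
    w, dt, Vm = 0.05, 0.01, -20.0
    for _ in range(1000):
        w = gate_step(w, Vm, alpha, beta, dt)
    print(f"gating variable after {1000 * dt:g} time units: {w:.4f}")
```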
All the cited models are physiological models. Other models -called phenomenological models -are approximations of the ionic channels behavior. These models are intended to describe the excitability process with a lower complexity. With only one (or few) additional variable(s) denoted by w and called the state variable(s) -and then only one (or few) ordinary differential equation(s) -these models are able to reproduce the depolarization and/or the repolarization of the membrane. The FitzHugh-Nagumo (FHN) model [START_REF] Fitzhugh | Impulses and physiological states in theoretical models of nerve membrane[END_REF][START_REF] Nagumo | An active pulse transmission line stimulating nerve axon[END_REF], the Roger and McCulloch model [START_REF] Rogers | A collocation-Galerkin finite element model of cardiac action potential propagation[END_REF] and the Aliev and Panfilov model [START_REF] Aliev | A simple two-variable model of cardiac excitation[END_REF] can be written as follows
$$I_{ion}(V_m, w) = k\,(V_m - V_{min})(V_m - V_{max})(V_m - V_{gate}) + f_2(V_m)\, w, \qquad \partial_t w + g(V_m, w) = 0, \qquad (8)$$
with
g(V m , w) = δ(γ g 1 (V m ) + w),
and where δ, γ, k and V gate are positive constants. The parameters V min and V max are reasonable potential ranges for V m . The functions f 2 and g 1 (see Assumption 7 for the notational choice) depend on the model. A more widely accepted phenomenological model for the ventricular action potential is the Mitchell-Schaeffer model presented in [START_REF] Mitchell | A two-current model for the dynamics of cardiac membrane[END_REF],
$$I_{ion} = \frac{w}{\tau_{in}} \frac{(V_m - V_{min})^2 (V_m - V_{max})}{V_{max} - V_{min}} - \frac{V_m - V_{min}}{\tau_{out}(V_{max} - V_{min})}, \qquad \partial_t w + g(V_m, w) = 0, \qquad (9)$$
with
$$g(V_m, w) = \begin{cases} \dfrac{w}{\tau_{open}} - \dfrac{1}{\tau_{open}(V_{max} - V_{min})^2} & \text{if } V_m \le V_{gate},\\[4pt] \dfrac{w}{\tau_{close}} & \text{if } V_m > V_{gate}, \end{cases}$$
and with τ open , τ close , τ in , τ out and V gate positive constants. Due to its lack of regularity, the mathematical analysis of this model is complicated. A straightforward simplification consists in using a regularized version of this model. Following [START_REF] Djabella | A two-variable model of cardiac action potential with controlled pacemaker activity and ionic current interpretation[END_REF], it reads
$$g(V_m, w) = \left( \frac{1}{\tau_{close}} + \frac{\tau_{close} - \tau_{open}}{\tau_{close}\,\tau_{open}}\, h_\infty(V_m) \right) \left( w - \frac{h_\infty(V_m)}{(V_{max} - V_{min})^2} \right) \qquad (10)$$
and
$$h_\infty(V_m) = -\frac{1}{2} \tanh\!\left( \frac{V_m - V_{min}}{(V_{max} - V_{min})\,\eta_{gate}} - \frac{V_{gate}}{\eta_{gate}} \right) + \frac{1}{2},$$
where η gate is a positive constant. This regularized version is considered in what follows in order to prove mathematical properties of the bidomain problem.
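As an illustration of how such a phenomenological model behaves, the following Python sketch integrates a space-clamped (zero-dimensional) membrane patch with the regularized Mitchell-Schaeffer gate dynamics; for readability it uses the normalized variant (V min = 0, V max = 1) with commonly quoted parameter values, which are placeholders rather than values taken from this paper.

```python
# Zero-dimensional (space-clamped) sketch of the regularized Mitchell-Schaeffer
# model, in normalized units (V_min = 0, V_max = 1). Parameter values are
# commonly quoted ones and serve only as placeholders.
import numpy as np

tau_in, tau_out = 0.3, 6.0          # ms
tau_open, tau_close = 120.0, 150.0  # ms
v_gate, eta_gate = 0.13, 0.005      # dimensionless

def h_inf(v):
    # Normalized version of the regularized gate steady state in (10).
    return 0.5 - 0.5 * np.tanh((v - v_gate) / eta_gate)

def g(v, w):
    # Regularized gate dynamics (10): dw/dt = -g(v, w).
    coef = 1.0 / tau_close + (tau_close - tau_open) / (tau_close * tau_open) * h_inf(v)
    return coef * (w - h_inf(v))

def i_ion(v, w):
    # Normalized Mitchell-Schaeffer current: inward cubic + outward linear term.
    return -(w / tau_in) * v * v * (1.0 - v) + v / tau_out

dt, n_steps = 0.02, 25000            # 500 ms of simulated time
v, w = 0.0, 1.0
for n in range(n_steps):
    t = n * dt
    i_app = 0.3 if t < 1.0 else 0.0  # brief stimulus
    v += dt * (i_app - i_ion(v, w))  # C_m dV/dt + I_ion = I_app, with C_m = 1
    w += dt * (-g(v, w))
print(f"final state: v = {v:.3f}, w = {w:.3f}")
```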
Remark 1. We consider non-normalized versions of the ionic models. For the considered phenomenological models, we expect to have
V min ≤ V m ≤ V max ,
although it cannot be proven without strong assumptions on the source term. Concerning the gating variable, we expect to have 0 ≤ w ≤ 1 for FHN-like models [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] and, following a choice commonly done in the literature of the Mitchell-Schaeffer model, we expect to have
$$0 \le w \le \frac{1}{(V_{max} - V_{min})^2}$$
for the Mitchell-Schaeffer model [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]. This last inequality can be proven (see Assumption 10 below and the proof of Lemma 2 in Appendix). Finally, for all physiological models, we expect some bounds -from below and above -on the gating variable(s) as natural consequences of the structure of Equation (6). Note that in what follows, we consider the bidomain equations with only one gating variable w. All the results presented below can be extended to the case where the ionic term I ion depends on several gating variables.
In the next section, the following microscopic bidomain model is studied (it corresponds to System (2) coupled with an ionic model and with the boundary conditions (3) and (4)), for all time t ∈ [0, T ],
$$\begin{cases}
\nabla_x \cdot (\sigma_\alpha \nabla_x u_\alpha) = 0 & \Omega_\alpha,\\
\sigma_i \nabla_x u_i \cdot n_i = \sigma_e \nabla_x u_e \cdot n_i & \Gamma_m,\\
-\sigma_i \nabla_x u_i \cdot n_i = C_m \dfrac{\partial V_m}{\partial t} + I_{ion}^{tot}(V_m, w) & \Gamma_m,\\
V_m = u_i - u_e & \Gamma_m,\\
\partial_t w = -g(V_m, w) & \Gamma_m,\\
\sigma_i \nabla_x u_i \cdot n_i = 0 & \partial\Omega_i \cap \partial\Omega,\\
\sigma_e \nabla_x u_e \cdot n_e = 0 & \partial\Omega_e \cap \partial\Omega,\\
\displaystyle\int_{\Gamma_m} u_e \, d\gamma = 0.
\end{cases} \qquad (11)$$
3 Analysis of the microscopic bidomain model
In this section, the analysis (existence and uniqueness) of the heterogeneous microscopic bidomain model presented in Section 2 is proposed.
As explained before, we assume that Ω i and Ω e are connected sets and that they have a Lipschitz boundary. Our analysis involves the use of standard L p Banach spaces and H s Hilbert spaces. Apart from the use of (•, •) D to denote the L 2 scalar product on a domain D, we use standard notations found in many textbooks on functional analysis. In what follows, we use the trace of u i and u e on the boundary. Therefore to work in the adequate mathematical framework we introduce the Hilbert (trace) space H 1/2 (∂Ω α ) whose dual (the space of continuous linear functionals) is H -1/2 (∂Ω α ). Using the fact that the boundary Γ m is a subdomain of ∂Ω α and following the notation of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF] (Chapter 3), we introduce the Hilbert space
$$H^{1/2}(\Gamma_m) = \left\{ u|_{\Gamma_m},\ u \in H^{1/2}(\partial\Omega_i) \right\} = \left\{ u|_{\Gamma_m},\ u \in H^{1/2}(\partial\Omega_e) \right\}.$$
Note that the two definitions of H 1/2 (Γ m ) coincide since there exists a continuous extension operator from H 1/2 (Γ m ) to H 1/2 (∂Ω α ) (see the proof of Theorem 4.10 in [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF]). We denote by H -1/2 (Γ m ) the dual space and the duality pairing is denoted •, • Γm . For each j ∈ H -1/2 (Γ m ), the standard dual norm is defined by
$$\| j \|_{H^{-1/2}(\Gamma_m)} = \sup_{0 \neq v \in H^{1/2}(\Gamma_m)} \frac{| \langle j, v \rangle_{\Gamma_m} |}{\| v \|_{H^{1/2}(\Gamma_m)}}.$$
It is standard to assume that some positivity and symmetry properties are satisfied by the parameters of the system. Assumption 1. The capacitance satisfies C m > 0 and the diffusion tensors σ α belong to [L ∞ (Ω α )] 3×3 and are symmetric, definite, positive and coercive, i.e. there exists
C > 0 such that
$$( \sigma_\alpha \rho, \rho )_{\Omega_\alpha} \ge C\, \| \rho \|^2_{L^2(\Omega_\alpha)},$$
for all ρ ∈ [L 2 (Ω α )] 3 . This implies that
$$\| \cdot \|_{L^2_{\sigma_\alpha}} : L^2(\Omega_\alpha) \to \mathbb{R}^+, \qquad \rho \mapsto ( \sigma_\alpha \rho, \rho )^{1/2}_{\Omega_\alpha}$$
defines a norm in L 2 (Ω α ).
Elimination of the quasi-static potential unknown
Following an idea developed in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] for the bidomain equation at the macroscopic level or in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF] for an electroporation model at the microscopic level, we rewrite System (11) by eliminating the unknown electric potentials u α in Ω α and writing an equation for V m = u i -u e on Γ m . Note that the equation for the gating variable w is kept because the only electric quantity involved is V m along Γ m . We introduce the linear operators T i and T e that solve interior Laplace equations in Ω i and Ω e respectively. First, we define
T i : H 1/2 (Γ m ) → H -1/2 (Γ m ) which is given formally by T i (v) := σ i ∇ x v i • n i along Γ m , where v i is the unique solution of
$$\begin{cases}
\nabla_x \cdot (\sigma_i \nabla_x v_i) = 0 & \Omega_i,\\
v_i = v & \Gamma_m,\\
\sigma_i \nabla_x v_i \cdot n_i = 0 & \partial\Omega \cap \partial\Omega_i.
\end{cases} \qquad (12)$$
Since the problem above is well-posed (it is elliptic and coercive because Γ m ≠ ∅, see Theorem 4.10 of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF]), the linear functional T i (v) can be rigorously defined by, for all w ∈ H 1/2 (Γ m ),
$$\langle T_i(v), w \rangle_{\Gamma_m} = ( \sigma_i \nabla_x v_i, \nabla_x w_i )_{\Omega_i},$$
where v i is given by (12) and w i ∈ H 1 (Ω i ) is the unique solution of
$$\begin{cases}
\nabla_x \cdot (\sigma_i \nabla_x w_i) = 0 & \Omega_i,\\
w_i = w & \Gamma_m,\\
\sigma_i \nabla_x w_i \cdot n_i = 0 & \partial\Omega \cap \partial\Omega_i.
\end{cases} \qquad (13)$$
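The operator T i is a Dirichlet-to-Neumann (Steklov-Poincaré) map, and a discrete counterpart of it is a Schur complement of a stiffness matrix with respect to the interface degrees of freedom. The Python sketch below illustrates this on a small finite-difference grid with unit conductivity; it is a toy construction for intuition only, not the discretization used in the paper, and it also checks the symmetry, non-negativity and "T i of a constant is zero" properties that appear in Proposition 1 and in the proof of Lemma 1 below.

```python
# Toy discrete Dirichlet-to-Neumann map: T_i as a Schur complement of a
# grid Laplacian (unit conductivity). This is only a small illustration of the
# structure of the operator, not the discretization used in the paper.
import numpy as np

def grid_laplacian(nx, ny):
    """Graph Laplacian of an nx-by-ny grid: a finite-difference stiffness
    matrix for -div(grad u) with natural (Neumann) boundary conditions."""
    n = nx * ny
    K = np.zeros((n, n))
    idx = lambda i, j: i * ny + j
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < nx and j + dj < ny:
                    a, b = idx(i, j), idx(i + di, j + dj)
                    K[a, a] += 1.0; K[b, b] += 1.0
                    K[a, b] -= 1.0; K[b, a] -= 1.0
    return K

nx, ny = 6, 6
K = grid_laplacian(nx, ny)
gamma = [i * ny + (ny - 1) for i in range(nx)]         # interface nodes ("Gamma_m")
inner = [p for p in range(nx * ny) if p not in gamma]  # interior nodes

K_GG = K[np.ix_(gamma, gamma)]
K_GI = K[np.ix_(gamma, inner)]
K_II = K[np.ix_(inner, inner)]

# Discrete T_i: impose v on the interface, solve the interior problem,
# return the resulting interface flux (Schur complement).
T_i = K_GG - K_GI @ np.linalg.solve(K_II, K_GI.T)

print("symmetric:", np.allclose(T_i, T_i.T))
print("non-negative:", np.linalg.eigvalsh((T_i + T_i.T) / 2).min() > -1e-10)
print("T_i(constant) ~ 0:", np.allclose(T_i @ np.ones(len(gamma)), 0.0))
```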
The operator T i satisfies the properties summed up in the following proposition.
Proposition 1. If Assumption 1 holds, we have for all (v, w) ∈ [H 1/2 (Γ m )] 2 ,
$$\langle T_i(v), w \rangle_{\Gamma_m} = \langle T_i(w), v \rangle_{\Gamma_m}, \qquad \langle T_i(v), v \rangle_{\Gamma_m} = ( \sigma_i \nabla_x v_i, \nabla_x v_i )_{\Omega_i} \ge 0, \qquad \| T_i(v) \|_{H^{-1/2}(\Gamma_m)} \le C\, \| v \|_{H^{1/2}(\Gamma_m)},$$
where C is a positive scalar depending only on σ i and the geometry.
Proof. By the definition of T i and v i , we have
T i (v), w Γm = ( σ i ∇ x v i , ∇ x w i ) Ωi .
Moreover, from the weak form of Problem (13), one can deduce that
$$( \sigma_i \nabla_x w_i, \nabla_x v_i )_{\Omega_i} = \langle T_i(w), v \rangle_{\Gamma_m},$$
hence the first relation of the proposition. The second relation is obtained by setting w = v in the previous equation and using the fact that σ i is a definite positive tensor (Assumption 1). Moreover, since σ i is L ∞ (Assumption 1), we also have
$$\sup_{w \neq 0} \frac{| \langle T_i(v), w \rangle_{\Gamma_m} |}{\| w \|_{H^{1/2}(\Gamma_m)}} \le C\, \frac{\| v_i \|_{H^1(\Omega_i)} \| w_i \|_{H^1(\Omega_i)}}{\| w \|_{H^{1/2}(\Gamma_m)}}.$$
The third relation is then a consequence of stability results on elliptic problems with mixed boundary conditions (again see Theorem 4.10 of [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF])) that map boundary data v and w to, respectively v i and w i , i.e.
$$\| v_i \|_{H^1(\Omega_i)} \le C\, \| v \|_{H^{1/2}(\Gamma_m)}.$$
Corollary 1. There exists c > 0 such that for all v ∈ H 1/2 (Γ m ), we have
$$\langle T_i(v), v \rangle_{\Gamma_m} + \left( \int_{\Gamma_m} v \, d\gamma \right)^2 \ge c\, \| v \|^2_{H^{1/2}(\Gamma_m)}.$$
Proof. This result is obtained using the second relation of Proposition 1 and a Poincaré -Wirtinger type inequality.
The operator T i is used in order to substitute the first equation with α = i of System (11) into the third equation of the same system. This is possible since u i satisfies a static equation inside Ω i . The same argument holds for the extra-cellular potential u e , therefore, for the same reason we introduce the operator
T e : H -1/2 (Γ m ) → H 1/2 (Γ m ),
which is defined by T e (j) := v e along Γ m , where v e ∈ H 1 (Ω e ) is the unique solution of
$$\begin{cases}
\nabla_x \cdot (\sigma_e \nabla_x v_e) = 0 & \Omega_e,\\
\sigma_e \nabla_x v_e \cdot n_i = j - \dfrac{\langle j, 1 \rangle_{\Gamma_m}}{|\Gamma_m|} & \Gamma_m,\\
\sigma_e \nabla_x v_e \cdot n_e = 0 & \partial\Omega \cap \partial\Omega_e,\\
\displaystyle\int_{\Gamma_m} v_e \, d\gamma = 0.
\end{cases} \qquad (14)$$
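Discretely, T e corresponds to solving a pure Neumann problem, which is only solvable because the mean of the applied current over Γ m is removed in (14); the zero-mean condition on v e then fixes the remaining constant. The small Python sketch below mimics this with a grid Laplacian and a Lagrange multiplier for the zero-mean constraint; it is a toy illustration, not the discretization used in the paper.

```python
# Toy discrete Neumann-to-Dirichlet map mimicking T_e: apply a mean-free
# current on the interface nodes, solve the singular Neumann problem with a
# zero-mean constraint (Lagrange multiplier), and read the interface trace.
import numpy as np

def grid_laplacian(nx, ny):
    n = nx * ny
    K = np.zeros((n, n))
    idx = lambda i, j: i * ny + j
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < nx and j + dj < ny:
                    a, b = idx(i, j), idx(i + di, j + dj)
                    K[a, a] += 1.0; K[b, b] += 1.0
                    K[a, b] -= 1.0; K[b, a] -= 1.0
    return K

nx, ny = 6, 6
K = grid_laplacian(nx, ny)
gamma = [i * ny + (ny - 1) for i in range(nx)]   # interface nodes ("Gamma_m")

rng = np.random.default_rng(1)
j = rng.standard_normal(len(gamma))              # applied "current" on Gamma_m
j_mean_free = j - j.mean()                       # as in (14)

# Load vector: -(j - mean) on Gamma_m (since n_e = -n_i), zero elsewhere.
b = np.zeros(nx * ny)
b[gamma] = -j_mean_free

# Zero-mean constraint on the interface values, enforced by a multiplier.
c = np.zeros(nx * ny)
c[gamma] = 1.0
M = np.block([[K, c[:, None]], [c[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(M, np.append(b, 0.0))
v_e = sol[gamma]                                 # discrete T_e(j) on Gamma_m

print("zero mean on Gamma_m:", abs(v_e.sum()) < 1e-10)
print("<j, T_e(j)> <= 0:", j @ v_e <= 1e-10)
```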
Similar to the operator T i , the operator T e satisfies some properties which are summed up in the following proposition.
Proposition 2. If Assumption 1 holds, we have for all (j, k) ∈ [H -1/2 (Γ m )] 2 ,
$$\langle k, T_e(j) \rangle_{\Gamma_m} = \langle j, T_e(k) \rangle_{\Gamma_m}, \qquad \langle j, T_e(j) \rangle_{\Gamma_m} = -( \sigma_e \nabla_x v_e, \nabla_x v_e )_{\Omega_e} \le 0, \qquad \| T_e(j) \|_{H^{1/2}(\Gamma_m)} \le C\, \| j \|_{H^{-1/2}(\Gamma_m)},$$
where C is a positive scalar depending only on σ e and the geometry.
Proof. For all k ∈ H -1/2 (Γ m ), we define w e ∈ H 1 (Ω e ) such that,
$$\begin{cases}
\nabla_x \cdot (\sigma_e \nabla_x w_e) = 0 & \Omega_e,\\
\sigma_e \nabla_x w_e \cdot n_i = k - \dfrac{\langle k, 1 \rangle_{\Gamma_m}}{|\Gamma_m|} & \Gamma_m,\\
\sigma_e \nabla_x w_e \cdot n_e = 0 & \partial\Omega \cap \partial\Omega_e,\\
\displaystyle\int_{\Gamma_m} w_e \, d\gamma = 0.
\end{cases}$$
By the definition of v e and w e , we deduce the two following equalities
$$( \sigma_e \nabla_x v_e, \nabla_x w_e )_{\Omega_e} = -\left\langle j - \frac{\langle j, 1 \rangle_{\Gamma_m}}{|\Gamma_m|},\ T_e(k) \right\rangle_{\Gamma_m} \qquad (15)$$
and
$$( \sigma_e \nabla_x w_e, \nabla_x v_e )_{\Omega_e} = -\left\langle k - \frac{\langle k, 1 \rangle_{\Gamma_m}}{|\Gamma_m|},\ T_e(j) \right\rangle_{\Gamma_m}.$$
The first relation of the proposition is obtained by noticing that
$$\int_{\Gamma_m} T_e(j) \, d\gamma = \int_{\Gamma_m} T_e(k) \, d\gamma = 0, \quad \text{hence} \quad \left\langle \frac{\langle j, 1 \rangle_{\Gamma_m}}{|\Gamma_m|},\ T_e(k) \right\rangle_{\Gamma_m} = \left\langle \frac{\langle k, 1 \rangle_{\Gamma_m}}{|\Gamma_m|},\ T_e(j) \right\rangle_{\Gamma_m} = 0.$$
The second relation of the proposition is obtained by setting k = j in (15) and using the fact that σ e is a definite positive tensor. To prove the continuity we first notice that
$$\| \nabla_x v_e \|^2_{L^2(\Omega_e)} \le C\, \| j \|_{H^{-1/2}(\Gamma_m)} \| T_e(j) \|_{H^{1/2}(\Gamma_m)} = C\, \| j \|_{H^{-1/2}(\Gamma_m)} \| v_e \|_{H^{1/2}(\Gamma_m)},$$
since σ e ∈ [L ∞ (Ω α )] 3×3 (Assumption 1).
$$\begin{cases}
C_m \partial_t V_m + I_{ion}^{tot}(V_m, w) = -T_i(u_i) & \Gamma_m,\\
u_e = T_e(T_i(u_i)) & \Gamma_m,\\
\partial_t w = -g(V_m, w) & \Gamma_m,
\end{cases} \qquad (17)$$
where by definition V m = u i -u e . Using the second equation of (17), we obtain
$$u_i - T_e(T_i(u_i)) = V_m \qquad \Gamma_m. \qquad (18)$$
We can prove (see Lemma 1) that the operator T := I -T e T i :
H 1/2 (Γ m ) → H 1/2 (Γ m )
has a bounded inverse (I stands for the identity operator) and we therefore introduce the operator
$$A : H^{1/2}(\Gamma_m) \to H^{-1/2}(\Gamma_m), \qquad v \mapsto T_i (I - T_e T_i)^{-1} v,$$
and substitute the term u i in (17). This implies that if Assumption 1 holds, solutions of (11) satisfy
$$\begin{cases}
C_m \partial_t V_m + I_{ion}^{tot}(V_m, w) = -A(V_m) & \Gamma_m,\\
\partial_t w = -g(V_m, w) & \Gamma_m.
\end{cases} \qquad (19)$$
The converse is true: solutions of (19) can be used to recover solutions of (11), as shown in Section 3.4. The properties of the operator A are summed up in the following proposition.
Proposition 3. If Assumption 1 holds, the linear operator A satisfies, for all (v, ṽ) ∈ [H^{1/2}(Γ_m)]^2,
\[
\langle A(v), \tilde v\rangle_{\Gamma_m} = \langle A(\tilde v), v\rangle_{\Gamma_m} \quad\text{and}\quad \langle A(v), v\rangle_{\Gamma_m} \ge 0.
\]
Moreover, there exist constants c and C such that, for all (v, ṽ) ∈ [H^{1/2}(Γ_m)]^2,
\[
|\langle A(v), \tilde v\rangle_{\Gamma_m}| \le C\, \|v\|_{H^{1/2}(\Gamma_m)}\, \|\tilde v\|_{H^{1/2}(\Gamma_m)} \tag{20}
\]
and
\[
\langle A(v), v\rangle_{\Gamma_m} + \Big(\int_{\Gamma_m} v \,\mathrm{d}\gamma\Big)^2 \ge c\, \|v\|^2_{H^{1/2}(\Gamma_m)}. \tag{21}
\]
Proof. To simplify the proof, we define θ ∈ H 1/2 (Γ m ) and θ ∈ H 1/2 (Γ m ) as (I -T e T i )(θ) := v and (I -T e T i )( θ) := ṽ.
Using Propositions 1 and 2, we obtain
\[
\langle A(v), \tilde v\rangle_{\Gamma_m} = \langle A(v), (I - T_e T_i)(\tilde\theta)\rangle_{\Gamma_m} = \langle (I - T_i T_e)A(v), \tilde\theta\rangle_{\Gamma_m}.
\]
Since by definition A = T i (I -T e T i ) -1 and using (I -T i T e )T i = T i (I -T e T i ), we deduce
\[
\langle A(v), \tilde v\rangle_{\Gamma_m} = \langle T_i(v), \tilde\theta\rangle_{\Gamma_m} = \langle T_i(\tilde\theta), v\rangle_{\Gamma_m}. \tag{22}
\]
The symmetry of A(•), • is then a consequence of the definition of θ as we have T i ( θ) = A(ṽ).
From (22), we can also deduce the non-negativity of the bilinear form by choosing ṽ ≡ v. We find that
\[
\langle A(v), v\rangle_{\Gamma_m} = \langle T_i(\theta), (I - T_e T_i)(\theta)\rangle_{\Gamma_m}, \tag{23}
\]
which is non-negative thanks to the non-negativity of T_i and the non-positivity of T_e. The continuity (20) is a direct consequence of the third equation of Proposition 1 and Lemma 1 (i.e. T := I - T_e T_i has a bounded inverse). Now remark that, by the definition of T_e, we have, for all j ∈ H^{-1/2}(Γ_m),
\[
\int_{\Gamma_m} T_e(j) \,\mathrm{d}\gamma = 0 \;\Rightarrow\; \int_{\Gamma_m} v \,\mathrm{d}\gamma = \int_{\Gamma_m} T(\theta) \,\mathrm{d}\gamma = \int_{\Gamma_m} \theta \,\mathrm{d}\gamma. \tag{24}
\]
Using the equalities above and (23), we find
\[
\langle A(v), v\rangle_{\Gamma_m} + \Big(\int_{\Gamma_m} v \,\mathrm{d}\gamma\Big)^2 \ge \langle T_i(\theta), \theta\rangle_{\Gamma_m} + \Big(\int_{\Gamma_m} \theta \,\mathrm{d}\gamma\Big)^2.
\]
Corollary 1 shows the existence of a positive scalar c such that, for all v ∈ H^{1/2}(Γ_m),
\[
\langle A(v), v\rangle_{\Gamma_m} + \Big(\int_{\Gamma_m} v \,\mathrm{d}\gamma\Big)^2 \ge c\, \|\theta\|^2_{H^{1/2}(\Gamma_m)} = c\, \|T^{-1}(v)\|^2_{H^{1/2}(\Gamma_m)}.
\]
Inequality (21) is then a consequence of the boundedness of T given by Lemma 1.
The last step of this section consists in proving the technical lemma regarding the invertibility of the operator I -T e T i , which allows to define the operator A.
Lemma 1. The linear operator
T := I -T e T i : H 1/2 (Γ m ) → H 1/2 (Γ m )
is bounded and has a bounded inverse.
Proof. Using Propositions 1 and 2, we see that the operator T is linear and bounded, and hence continuous. In what follows, we prove that T is injective and then we deduce a lower bound for the norm of T . Finally, we prove that the range of the operator is closed and that its orthogonal is the null space. These two last steps allow to show that the range of T is H 1/2 (Γ m ) and the result follows from the bounded inverse theorem.
Step 1: Injectivity of the operator
For any v ∈ H 1/2 (Γ m ) such that T (v) = 0, we have T i (v), v Γm = T i (v), T e T i (v) Γm .
The first term of the equality is non-negative (Proposition 1) while the second is non-positive thanks to Proposition 2. Therefore, we obtain T i (v), v Γm = 0 and this implies thanks to Proposition 1 that v is constant along Γ m . However, for any constant function c, we have
T i (c) = 0 therefore, in our case, T (v) = 0 implies v = T e T i (v) = 0.
Step 2: Lower bound for the operator norm. For all θ ∈ H^{1/2}(Γ_m), we define v ∈ H^{1/2}(Γ_m) by T(θ) = v. Then, as written in Equation (24), θ and v have the same average along Γ_m and
\[
T(\theta_0) = v_0, \quad\text{where}\quad
\theta_0 := \theta - \frac{1}{|\Gamma_m|}\int_{\Gamma_m} \theta \,\mathrm{d}\gamma
\quad\text{and}\quad
v_0 := v - \frac{1}{|\Gamma_m|}\int_{\Gamma_m} v \,\mathrm{d}\gamma.
\]
We have
\[
\langle T_i(\theta_0), v_0\rangle_{\Gamma_m} = \langle T_i(\theta_0), T(\theta_0)\rangle_{\Gamma_m}
= \langle T_i(\theta_0), \theta_0\rangle_{\Gamma_m} - \langle T_i(\theta_0), T_e T_i(\theta_0)\rangle_{\Gamma_m}.
\]
Since T i is non negative and T e non positive (Proposition 1 and 2), we deduce
\[
\langle T_i(\theta_0), \theta_0\rangle \le \|T_i(\theta_0)\|_{H^{-1/2}(\Gamma_m)}\, \|v_0\|_{H^{1/2}(\Gamma_m)}
\le C\, \|\theta_0\|_{H^{1/2}(\Gamma_m)}\, \|v_0\|_{H^{1/2}(\Gamma_m)}, \tag{25}
\]
where C is the continuity constant given by Proposition 1. Using Corollary 1 and the fact that θ_0 has zero average along Γ_m, we have c‖θ_0‖²_{H^{1/2}(Γ_m)} ≤ ⟨T_i(θ_0), θ_0⟩; therefore, using (25), we find
\[
\|\theta_0\|_{H^{1/2}(\Gamma_m)} \le \frac{C}{c}\, \|v_0\|_{H^{1/2}(\Gamma_m)}.
\]
Finally, we get
\[
\|\theta\|_{H^{1/2}(\Gamma_m)}
\le \|\theta_0\|_{H^{1/2}(\Gamma_m)} + \Big|\frac{1}{|\Gamma_m|}\int_{\Gamma_m} \theta \,\mathrm{d}\gamma\Big|\, \|1\|_{H^{1/2}(\Gamma_m)}
\le \frac{C}{c}\, \|v_0\|_{H^{1/2}(\Gamma_m)} + \Big|\frac{1}{|\Gamma_m|}\int_{\Gamma_m} v \,\mathrm{d}\gamma\Big|\, \|1\|_{H^{1/2}(\Gamma_m)}.
\]
This implies that there exists another constant C, depending only on the geometry and σ_i, such that
\[
\|\theta\|_{H^{1/2}(\Gamma_m)} \le C\, \|v\|_{H^{1/2}(\Gamma_m)} = C\, \|T(\theta)\|_{H^{1/2}(\Gamma_m)}.
\]
Step 3: Orthogonal of the operator range
Let j ∈ H^{-1/2}(Γ_m) be such that, for all v ∈ H^{1/2}(Γ_m),
\[
\langle j, T(v)\rangle_{\Gamma_m} = 0. \tag{26}
\]
We choose v = T_e(j) in (26) and, thanks to Propositions 1 and 2, we find
\[
\langle j, T_e(j)\rangle_{\Gamma_m} = \langle j, T_e T_i T_e(j)\rangle_{\Gamma_m} = \langle T_i T_e(j), T_e(j)\rangle_{\Gamma_m}.
\]
The last term is non-negative; therefore, since ⟨T_e(j), j⟩_{Γ_m} is non-positive, it must vanish. This implies that j is constant along Γ_m (since then, in (14), ∇_x v_e = 0). Now, choosing v ≡ 1 in (26), we find
\[
\langle j, T(v)\rangle_{\Gamma_m} = 0 \;\Rightarrow\; \langle j, 1\rangle_{\Gamma_m} = 0 \;\Rightarrow\; j = 0.
\]
We are now ready to give the required assumptions to prove the existence and the uniqueness of System (19).
Mathematical assumptions for the well-posedness
As mentioned in [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF], no maximum principle has been proven for Problem (19) (or any linearized version), contrary to standard reaction-diffusion problems. The consequence is that one cannot easily deduce bounds in time and space for the electric variable V_m. This implies that some specific assumptions are required to guarantee that non-linear terms involving V_m are well-defined and that existence results hold. More generally, in this section, the assumptions required to prove existence and/or uniqueness of weak solutions of System (19) are presented.
First, from the equivalence of the bidomain equations and System (19) (see Section 3.1), it is clear that our problem has to be completed with the following initial conditions on Γ_m:
\[
V_m(\cdot, 0) = u_i(\cdot, 0) - u_e(\cdot, 0) = V^0_m, \qquad w(\cdot, 0) = w^0.
\]
It is sufficient to require the following assumption concerning the initial conditions. Assumption 2. Properties of the initial conditions.
V 0 m ∈ L 2 (Γ m ) and w 0 ∈ L 2 (Γ m ).
As already said, the ionic term I tot ion is decomposed into two parts
\[
I^{\mathrm{tot}}_{\mathrm{ion}} = I_{\mathrm{ion}} - I_{\mathrm{app}}, \tag{27}
\]
where I app is the applied current and is a function of time and space. We assume the following property concerning the applied current.
Assumption 3. Property of the source term.
I app ∈ L 2 (Γ m × (0, T )).
To represent the variety of the behavior of the cardiac cells, we mathematically define the ionic terms I ion and g by introducing a family of functions on R 2 parametrized by the space variable
I ion ( x, •) : R 2 -→ R and g( x, •) : R 2 -→ R.
At each fixed x ∈ Ω, the ionic terms I ion and g describe a different behavior that corresponds to a non-linear reaction term. Therefore to further the mathematical analysis, it is assumed that these functions have some regularity in all their arguments. This leads to the following assumption.
Assumption 4. Regularity condition.
I ion ∈ C 0 (Ω × R 2 ) and g ∈ C 0 (Ω × R 2 ).
In the mathematical analysis of macroscopic bidomain equations, several paths have been followed in the literature according to the definition of the ionic current. We summarize below the encountered various cases.
Physiological models
For these models, one can prove that the gating variable w is bounded from below and above, due to the specific structure of ( 6) and [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF]. To go further in the physiological description, some models consider the concentrations as variables of the system, see for example the Luo-Rudy model [START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF]. In [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF][START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF], such models are considered.
Phenomenological models

a) The FitzHugh-like models [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF]. The FitzHugh-like models have been studied in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF][START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF]. In these models, there are no obvious bounds on the gating variable. The FitzHugh-Nagumo model satisfies some good mathematical properties (existence and uniqueness of solutions for arbitrary observation times), whereas the Aliev-Panfilov and MacCulloch models still raise some mathematical difficulties. In particular, no proof of uniqueness of solutions exists in the literature because of the non-linearity in the coupling terms between w and V_m.

b) The Mitchell-Schaeffer model [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF]. The Mitchell-Schaeffer model has been studied in [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Kunisch | Well-posedness for the Mitchell-Schaeffer model of the cardiac membrane[END_REF][START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF]. This model and its regularized version have a very specific structure. First, we will show that the gating variable is bounded from below and above, but uniqueness of the solution is a difficult mathematical question, which is addressed in [START_REF] Kunisch | Well-posedness for the Mitchell-Schaeffer model of the cardiac membrane[END_REF] for a related ordinary differential equation (ODE) problem.
In what follows, we describe in detail the structures of I ion and g. In all models, some "growth conditions" are required to write the problem in an adequate variational framework.
Assumption 5. Growth condition. There exists a scalar C_∞ > 0 such that for all x ∈ Ω and (v, w) ∈ R²,
\[
|I_{\mathrm{ion}}(x, v, w)| \le C_\infty \big(|v|^3 + |w| + 1\big) \tag{28}
\]
and
\[
|g(x, v, w)| \le C_\infty \big(|v|^2 + |w| + 1\big). \tag{29}
\]
Remark 2. The inclusion
H 1/2 (Γ m ) into L 4 (Γ m ) is continuous, see Proposition 2.4 of [17],
therefore by identification of integrable functions with linear forms, there is a continuous inclusion of
L 4/3 (Γ m ) into H -1/2 (Γ m ). Moreover if v ∈ L 2 ((0, T ), H 1/2 (Γ m )), w ∈ L 2 ((0, T ) × Γ m ),
we have (the dependency w.r.t. x is omitted for the sake of clarity)
I ion (v, w) ∈ L 2 ((0, T ), L 4/3 (Γ m )) and g(v, w) ∈ L 2 ((0, T ) × Γ m ).
Remark 3. Several mathematical analyses concern only the macroscopic bidomain equations (see [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] for instance) and these analyses can be extended to the case of the microscopic bidomain equations with some slightly different assumptions on the non-linear terms I ion and g. These assumptions take into account the functional framework in which we have to work. Namely, we have to use the trace space H 1/2 (Γ m ) instead of the more standard functional space H 1 (Ω). More precisely in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF], the growth conditions are
|I ion ( x, v, w)| ≤ C ∞ (|v| 5 + |w| + 1) and |g( x, v, w)| ≤ C ∞ (|v| 3 + |w| + 1).
In that case, it can be shown that I ion (v, w) and g(v, w) are integrable and well defined if
v ∈ H 1 (Ω) and w ∈ L 2 (Ω).
As mentioned in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF], the growth conditions (28) and (29) are there to ensure that the functions I_ion and g can be used to construct models that are well defined in the variational sense. The growth condition (Assumption 5) is not sufficient to guarantee the existence of solutions of Problem (19). Indeed, since I_ion can behave like a cubic polynomial for large values of v and the function g could behave as a quadratic polynomial in v, it turns out to be necessary to have a signed condition (see Equations (51)-(54) for more insight). This leads to the following assumption.

Assumption 6. There exist µ > 0 and C_I > 0 such that for all x ∈ Ω and (v, w) ∈ R², we have
\[
v\, I_{\mathrm{ion}}(x, v, w) + \mu\, w\, g(x, v, w) \ge -C_I \big(|v|^2 + |w|^2 + 1\big). \tag{30}
\]
In general, V m is more regular in space than w due to the presence of the positive operator A. Due to this lack of regularity on the gating variables, some additional assumptions have to hold to carry out the mathematical analysis. Concerning this question two kinds of assumption are proposed in the literature. Roughly speaking, these assumptions depend on if the model is physiological or phenomenological. To be able to refine our analysis depending on the different properties of the models, we make the following assumption.
Assumption 7. One of the following assumptions holds.

a) Global Lipschitz property. There exists a positive scalar L_g > 0 such that for all x ∈ Ω, (v_1, v_2) ∈ R² and (w_1, w_2) ∈ R²,
\[
|g(x, v_1, w_1) - g(x, v_2, w_2)| \le L_g |v_1 - v_2| + L_g |w_1 - w_2|. \tag{31}
\]
b) Decomposition of the non-linear terms. There exist continuous functions (f_1, f_2, g_1, g_2) ∈ [C^0(Ω × R^2)]^4 such that
\[
I_{\mathrm{ion}}(x, v, w) = f_1(x, v) + f_2(x, v)\, w, \qquad g(x, v, w) = g_1(x, v) + g_2(x)\, w,
\]
and there exist positive constants C_1, c_1 and C_2 such that for all x ∈ Ω and v ∈ R,
\[
v\, f_1(x, v) \ge C_1 |v|^4 - c_1\big(|v|^2 + 1\big) \quad\text{and}\quad |f_2(x, v)| \le C_2\big(|v| + 1\big). \tag{32}
\]
Remark 4. The existence result of solutions given later (Theorem 1) is valid with either Assumption 7a or 7b. Note also that Assumption 7a is satisfied for the physiological models, the Mitchell-Schaeffer model and the FitzHugh-Nagumo model (the simplest form of FitzHugh-like models), whereas Assumption 7b holds for the Aliev-Panfilov and MacCulloch models.
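To make the previous assumptions concrete, consider one commonly used form of the FitzHugh-Nagumo ionic terms; the parameters a ∈ (0,1), k > 0, γ > 0 and δ > 0 below, as well as the sign conventions, are only illustrative and vary across references:
\[
I_{\mathrm{ion}}(v, w) = k\, v\,(v - a)(v - 1) + w, \qquad g(v, w) = \delta\,(\gamma\, w - v).
\]
With this choice, the growth conditions (28)-(29) hold (I_ion is cubic in v and linear in w, g is affine), g is globally Lipschitz so Assumption 7a is satisfied, and the cubic term gives v I_ion(v, w) ≥ (k/2)|v|^4 - c(|v|^2 + |w|^2 + 1) for some c > 0, which is the mechanism behind Assumption 6.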
Finally, to prove the uniqueness of a solution for the microscopic bidomain problem, the terms I_ion and g have to satisfy a global signed Lipschitz relation (the following assumption is a variant of what is suggested in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF] or [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]).

Assumption 8. One-sided Lipschitz condition. There exist µ > 0 and L_I > 0 such that for all x ∈ Ω, (v_1, w_1) ∈ R² and (v_2, w_2) ∈ R²,
\[
\big(I_{\mathrm{ion}}(x, v_1, w_1) - I_{\mathrm{ion}}(x, v_2, w_2)\big)(v_1 - v_2)
+ \mu \big(g(x, v_1, w_1) - g(x, v_2, w_2)\big)(w_1 - w_2)
\ge -L_I \big(|v_1 - v_2|^2 + |w_1 - w_2|^2\big). \tag{33}
\]
Note that such an assumption is not satisfied for the Aliev-Panfilov, the MacCulloch and the Mitchell-Schaeffer models. Uniqueness of a solution for these models is still an open problem.
For physiological models and for the Mitchell-Schaeffer model, there exist two finite scalars w min < w max such that we expect the solution w to be bounded from below by w min and above by w max . For this to be true, it has to be satisfied for the initial data and this leads to the following assumptions when considering such models.
Assumption 9. w_min ≤ w^0(·) ≤ w_max on Γ_m.
Assumption 10. For all x ∈ Ω and v ∈ R, g( x, v, w min ) ≤ 0 and g( x, v, w max ) ≥ 0.
The last assumption is satisfied when considering the function g defined as in physiological models [START_REF] Allaire | Homogenization of the Neumann problem with nonisolated holes[END_REF][START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] or as in the Mitchell-Schaeffer model [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF]. For such models, the terms I ion ( x, v, w) and g( x, v, w) should be replaced by I ion ( x, v, χ(w)) and g( x, v, χ(w)) respectively, where
\[
\chi(w) =
\begin{cases}
w_{\min} & w \le w_{\min},\\
w_{\max} & w \ge w_{\max},\\
w & \text{otherwise}.
\end{cases}
\]
Note that with such substitutions, we do not modify the solution while the global conditions of Assumptions 5, 6, 7a and 8 are more likely to be fulfilled since it corresponds to verify local conditions on w. As an example, we can remark that the physiological models -of the form given by (5, 6) -do not satisfy the Lipschitz property (Assumption 8) globally in w but only locally. However, for these models, we can show a priori that w is bounded from below and above, hence the suggested modification of the non-linear terms. Finally, note that this assumption is also a physiological assumption. Indeed, it makes sense to have some bounds on the gating variables.
Remark 5. Assumptions 1-4 are always satisfied and do not depend on the structure of the non-linear terms I_ion and g. Moreover, it is possible to classify the models of the literature depending on which assumptions they satisfy; see Table 1, whose rows list the ionic models (FitzHugh-Nagumo, Roger-MacCulloch, Aliev-Panfilov, regularized Mitchell-Schaeffer, physiological models) and whose columns indicate, for Assumptions 5, 6, 7 (variant a) or b)), 8 and 9-10, whether each assumption is satisfied. We refer the reader to [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] for the analysis of the FitzHugh-Nagumo, Roger-MacCulloch and Aliev-Panfilov assumptions, and to Section 2 for the regularized Mitchell-Schaeffer and physiological models.

Table 1: Ionic models and verified assumptions.
Existence and uniqueness analysis
All the proofs of this section are given in Appendix 5.
In the literature, the analysis of the classical bidomain model is done most of the time at the macroscopic scale. Equations at that scale are obtained from the microscopic model using a formal asymptotic homogenization procedure (see [START_REF] Neu | Homogenization of syncytial tissues[END_REF][START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF]) or the Γ-convergence method (see [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF]). In any case, to justify the homogenization process, a complete mathematical analysis at the microscopic scale is necessary. In this section, we give existence and uniqueness results for solutions of System (11). Note that the existence of solutions for the macroscopic bidomain equation used in the literature is a consequence of the 2-scale convergence theory presented in the next section.
In the literature, one can find three different approaches that are used for the mathematical analysis of the macroscopic classical bidomain equations. Following the classification suggested in the recent book [START_REF] Colli Franzone | Mathematical Cardiac Electrophysiology[END_REF], these three approaches are 1. The use of the abstract framework of degenerate evolution variational inequalities in Hilbert spaces (see for instance [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF]). Such an approach has been used to do the analysis when FitzHugh-Nagumo models are considered and is adapted to the analysis of semi-discretization in time of the problem (see [START_REF] Sanfelici | Convergence of the Galerkin approximation of a degenerate evolution problem in electrocardiology[END_REF]).
2. The use of Schauder fixed point theorem. This is the approach suggested in [START_REF] Veneroni | Reaction-diffusion systems for the microscopic bidomain model of the cardiac electric field[END_REF] and [START_REF] Veneroni | Reaction-diffusion systems for the macroscopic bidomain model of the cardiac electric field[END_REF]. In these references, the ionic term depends on the concentration of ionic species (in addition to the dependence on the gating variables). This approach is adapted to the analysis of physiological models.
3. The Faedo-Galerkin approach. This approach is used in the context of electrophysiology in [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF][START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF][START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and in the context of electroporation in [START_REF] Kavian | Classical" electropermeabilization modeling at the cell scale[END_REF]. It is the most versatile approach although it has been used to analyze only phenomenological models in the mentioned references. We refer the reader to the textbook of J.-L. Lions [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] for a detailed description. This technique is based upon a limit process of discretization in space of the partial differential equations combined with the use of standard results on systems of ODEs (at each fixed discretization). This is the approach that we consider in the appendix of this paper.
In what follows, we proceed in three steps. The first step consists in showing existence/uniqueness results for the evolution equation of the gating variable w. More precisely, given any (electric potential)
U ∈ C 0 ([0, T ]; L 2 (Γ m )), (34)
we show in the appendix that the associated gating variable -solution of ( 35) -is bounded from below and above when physiological or Mitchell-Schaeffer models are concerned. The next step concerns existence results for the full non-linear microscopic bidomain equations and the final step gives the uniqueness result.
Step 1 -Evolution equation of the gating variable.
The term g(V m , w) in ( 19) is replaced by the term g(U, w) and we denote the corresponding solution w = w U . As mentioned previously, our main purpose here is to state the fact that -in the case of physiological or Mitchell-Schaeffer models -the solution w U is bounded from below and above.
Lemma 2. If Assumptions 2, 4, 5 and 7a hold, there exists a unique function w_U ∈ C^1([0,T]; L^2(Γ_m)) which is a solution of
\[
\begin{cases}
\partial_t w_U + g(x, U, w_U) = 0 & \text{on } \Gamma_m, \ \forall\, t \in [0, T],\\
w_U(x, 0) = w^0(x) & \text{on } \Gamma_m.
\end{cases}
\tag{35}
\]
Moreover if Assumptions 9 and 10 are satisfied then for all t ∈ [0, T ] and almost all x ∈ Γ m ,
w min ≤ w U ( x, t) ≤ w max .
The proof of Lemma 2 is done by considering smooth approximations of U and w 0 . Then, the problem reduces to the analysis of an ordinary differential equation where the space variable x plays the role of a parameter. Finally, the solution of (35) is constructed by a limit process using the density of smooth functions into L 2 (Γ m ).
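Heuristically, the bounds can be read off from Assumption 10 along the ODE in (35): the interval [w_min, w_max] is invariant because the vector field points inwards at its endpoints, namely
\[
w_U(x, t^*) = w_{\min} \;\Rightarrow\; \partial_t w_U(x, t^*) = -g\big(x, U(x, t^*), w_{\min}\big) \ge 0,
\qquad
w_U(x, t^*) = w_{\max} \;\Rightarrow\; \partial_t w_U(x, t^*) = -g\big(x, U(x, t^*), w_{\max}\big) \le 0,
\]
so that a trajectory starting in [w_min, w_max] (Assumption 9) cannot leave it.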
Step 2 - Existence result for the microscopic bidomain equation

Theorem 1. If Assumptions 1-7 hold, there exist
\[
V_m \in C^0([0,T]; L^2(\Gamma_m)) \cap L^2((0,T); H^{1/2}(\Gamma_m)), \qquad \partial_t V_m \in L^2((0,T); H^{-1/2}(\Gamma_m)),
\]
and w ∈ H^1((0,T); L^2(Γ_m)), which are solutions of
\[
\begin{cases}
C_m\, \partial_t V_m + A(V_m) + I_{\mathrm{ion}}(V_m, w) = I_{\mathrm{app}} & \text{in } H^{-1/2}(\Gamma_m), \ \text{a.e. } t \in (0,T),\\
\partial_t w + g(V_m, w) = 0 & \text{on } \Gamma_m, \ \text{a.e. } t \in (0,T),
\end{cases}
\tag{36}
\]
and
\[
V_m(x, 0) = V^0_m(x) \quad\text{on } \Gamma_m, \qquad w(x, 0) = w^0(x) \quad\text{on } \Gamma_m. \tag{37}
\]
The proof of Theorem 1 is done using the Faedo-Galerkin method. More precisely, the equations are first space-discretized using a finite dimensional basis of L^2(Γ_m) constructed with the eigenvectors of A. After the discretization, it is proven that semi-discrete solutions exist by applying the Cauchy-Peano theorem (to be more specific, we use the more general Carathéodory existence theorem) on systems of ordinary differential equations. Finally, by a limit procedure, the existence of solutions is proven for the weak form of (36) (the limit procedure uses compactness results to deduce strong convergence of the semi-discrete solutions; this strategy allows passing to the limit in the non-linear terms I_ion and g). Remark 6. If Assumption 7a is valid (i.e. g is globally Lipschitz), then by application of Lemma 2 the solution given by Theorem 1 has the additional regularity
w ∈ C 1 ([0, T ]; L 2 (Γ m )).
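For orientation, and with notation introduced here purely for illustration, if (ψ_k)_{k≥1} denotes an L^2(Γ_m)-orthonormal family of eigenfunctions of A with eigenvalues λ_k ≥ 0, the semi-discrete (Galerkin) problem sketched above might be written as a system of ordinary differential equations for the coefficients c_k:
\[
V_m^N(t) = \sum_{k=1}^N c_k(t)\, \psi_k, \qquad
C_m\, c_k'(t) + \lambda_k\, c_k(t) + \big(I_{\mathrm{ion}}(V_m^N, w^N) - I_{\mathrm{app}},\, \psi_k\big)_{L^2(\Gamma_m)} = 0, \quad 1 \le k \le N,
\]
coupled with ∂_t w^N + g(V_m^N, w^N) = 0 on Γ_m. Carathéodory's theorem provides solutions of this finite-dimensional system, and the a priori estimates allow passing to the limit N → ∞.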
Step 3 -Uniqueness results for the microscopic bidomain equation Uniqueness is proven by standard energy techniques for models satisfying the one sided Lipschitz property, see Assumption 8.
Corollary 2. If Assumption 8 holds, then the solution of the microscopic bidomain equations given by Theorem 1 is unique.
Post-processing of the intra-and extra-cellular potentials
From the solution V m = u i -u e given by Theorem 1, we can first recover the intra-cellular potential
u i ∈ L 2 ((0, T ); H 1 (Ω i )).
Using Equation ( 18), one can see that it is defined as the unique solution of the following quasi-static elliptic problem (the time-dependence appears only in the boundary data),
\[
\begin{cases}
\nabla_x \cdot (\sigma_i \nabla_x u_i) = 0 & \text{in } \Omega_i,\\
u_i = (I - T_e T_i)^{-1} V_m & \text{on } \Gamma_m,\\
\sigma_i \nabla_x u_i \cdot n_i = 0 & \text{on } \partial\Omega \cap \partial\Omega_i.
\end{cases}
\tag{38}
\]
In the same way, the extra-cellular potential
u e ∈ L 2 ((0, T ); H 1 (Ω e ))
is defined as the unique solution of the following quasi-static elliptic problem,
\[
\begin{cases}
\nabla_x \cdot (\sigma_e \nabla_x u_e) = 0 & \text{in } \Omega_e,\\[2pt]
\sigma_e \nabla_x u_e \cdot n_i = T_i(u_i) - \dfrac{\langle T_i(u_i), 1\rangle_{\Gamma_m}}{|\Gamma_m|} & \text{on } \Gamma_m,\\[2pt]
\sigma_e \nabla_x u_e \cdot n_e = 0 & \text{on } \partial\Omega \cap \partial\Omega_e,\\[2pt]
\displaystyle\int_{\Gamma_m} u_e \,\mathrm{d}\gamma = 0.
\end{cases}
\tag{39}
\]
From the definitions above, one can recover energy estimates on (V_m, w, u_i, u_e) from the energy estimates derived for the system (36), where only (V_m, w) appears. To do so, we will use later the following proposition.
Proposition 4. Let V_m ∈ L^2((0,T); H^{1/2}(Γ_m)); then for almost all t ∈ (0,T), we have
\[
\langle A(V_m), V_m\rangle_{\Gamma_m} = (\sigma_i \nabla_x u_i, \nabla_x u_i)_{\Omega_i} + (\sigma_e \nabla_x u_e, \nabla_x u_e)_{\Omega_e},
\]
where (u_i, u_e) are given by (38) and (39).
Proof. We have
\[
\langle A(V_m), V_m\rangle_{\Gamma_m}
= \langle T_i T^{-1}(V_m), V_m\rangle_{\Gamma_m}
= \langle T_i T^{-1}(V_m), T T^{-1}(V_m)\rangle_{\Gamma_m}
= \langle T_i T^{-1}(V_m), T^{-1}(V_m)\rangle_{\Gamma_m} - \langle T_i T^{-1}(V_m), T_e T_i T^{-1}(V_m)\rangle_{\Gamma_m}.
\]
The first term of the right-hand side gives by definition and Proposition 1 the quadratic term on u i whereas the second term gives by definition of T e , u e and Proposition 2 the quadratic term on u e .
We are now in position to perform a rigorous homogenization of the microscopic bidomain model (2).
Homogenization of the bidomain equations
The microscopic model is unusable for the whole heart in terms of numerical applications. At the macroscopic scale, the heart appears as a continuous material with a fiber-based structure. At this scale, the intracellular and extracellular media are indistinguishable. Our objective is to use a homogenization of the microscopic bidomain model in order to obtain a bidomain model where all the unknowns are defined everywhere, hence simplifying the geometry of the domain. Formally, after homogenization, we consider that the cardiac volume is " Ω = Ω_i = Ω_e ".
Homogenization of partial differential systems is a well known technique (see [START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF] for a reference textbook on the matter). It is done by considering the medium as periodic, with the period denoted by ε, then by constructing equations deduced by asymptotic analysis w.r.t. ε.
A classical article for the formal homogenization of the microscopic bidomain model -when the conductivities σ α are strictly positive constants -is [START_REF] Neu | Homogenization of syncytial tissues[END_REF]. In [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF], the homogenization of the microscopic bidomain equations (with constants conductivities) is presented using formal asymptotic analysis. It should also be noted that in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], the same type of results have be proven using the theory of Γ-convergence in some simplified situations. The approach presented below uses the 2-scale convergence method (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) to extend the results obtained in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF][START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF]. As typical for this kind of homogenization problem, we adopt the following approach:
1. Nondimensionalization of the problem. The microscopic bidomain equations are scaled in space and time, and written unitless. Using the characteristic values of the physical parameters of our problem (cells size, conductivities, ionic current,...), a small parameter ε is introduced in the equations and in the geometry.
2. Uniform estimate of solutions. Using energy estimates, norms of the solutions -as well as norms of the non-linear terms -are uniformly bounded with respect to the small parameter ε.
3. Two-scale convergence. The limit equations are deduced by application of the 2-scale convergence theory. One of the main difficulties of this step is the convergence analysis of the non-linear terms. It relies on the one-sided Lipschitz assumption (Assumption 8).
Although the elimination of the electrostatic potentials u i and u e was useful to simplify the analysis of the microscopic bidomain equations, we must re-introduce these unknowns for the homogenization process. The reason is that -to the best of our knowledge -only local in space differential operators are adapted to the 2-scale homogenization process and the operator A does not enter into this category of operators.
Nondimensionalization of the problem
The nondimensionalization of the system is necessary in order to understand the relative amplitudes of the different terms. In the literature, few works in that direction have been carried out, although we can cite [START_REF] Rioux | A predictive method allowing the use of a single ionic model in numerical cardiac electrophysiology[END_REF] for a nondimensionalization analysis of the ionic term I ion and [START_REF] Neu | Homogenization of syncytial tissues[END_REF] for the analysis in the context of homogenization. We define L 0 as a characteristic length of the heart and T 0 as a characteristic time of a cardiac cycle. In the same spirit, we denote by Σ 0 a characteristic conductivity, C 0 a characteristic membrane capacitance, V max and V min characteristic upper and lower bounds for the transmembrane potential and W 0 a characteristic value of the gating variable. We set
\[
u_e(x, t) = (V_{\max} - V_{\min})\, \tilde u_e\Big(\frac{x}{L_0}, \frac{t}{T_0}\Big), \qquad
u_i(x, t) = (V_{\max} - V_{\min})\, \tilde u_i\Big(\frac{x}{L_0}, \frac{t}{T_0}\Big) + V_{\min},
\]
\[
w(x, t) = W_0\, \tilde w\Big(\frac{x}{L_0}, \frac{t}{T_0}\Big), \qquad C_m = C_0\, \tilde C_m, \qquad \sigma_\alpha = \Sigma_0\, \tilde\sigma_\alpha. \tag{40}
\]
We assume that the contributions of I_ion and I_app are of the same order; namely, there exists a characteristic current amplitude I_0 such that
\[
I_{\mathrm{ion}}(x, u_i - u_e, w) = I_0\, \tilde I_{\mathrm{ion}}\Big(\frac{x}{L_0}, \tilde u_i - \tilde u_e, \tilde w\Big), \qquad I_{\mathrm{app}} = I_0\, \tilde I_{\mathrm{app}}. \tag{41}
\]
In the same way, we assume that g can be defined using a normalized function g̃ and a characteristic amplitude G_0 such that
\[
g(x, u_i - u_e, w) = G_0\, \tilde g\Big(\frac{x}{L_0}, \tilde u_i - \tilde u_e, \tilde w\Big). \tag{42}
\]
All quantities denoted by a tilde are dimensionless quantities. We obtain from ( 11), the dimensionless system
\[
\begin{cases}
\nabla_x \cdot (\tilde\sigma_\alpha \nabla_x \tilde u_\alpha) = 0 & \text{in } \tilde\Omega_\alpha,\\[2pt]
\tilde\sigma_i \nabla_x \tilde u_i \cdot n_i = \tilde\sigma_e \nabla_x \tilde u_e \cdot n_i & \text{on } \tilde\Gamma_m,\\[2pt]
\tilde\sigma_i \nabla_x \tilde u_i \cdot n_i = -\dfrac{L_0 I_0}{\Sigma_0 U_0}\, \tilde I_{\mathrm{ion}}(\cdot, \tilde u_i - \tilde u_e, \tilde w) + \dfrac{L_0 I_0}{\Sigma_0 U_0}\, \tilde I_{\mathrm{app}} - \dfrac{L_0 C_0}{\Sigma_0 T_0}\, \tilde C_m\, \partial_t(\tilde u_i - \tilde u_e) & \text{on } \tilde\Gamma_m,\\[2pt]
\partial_t \tilde w = -\dfrac{T_0 G_0}{W_0}\, \tilde g(\tilde u_i - \tilde u_e, \tilde w) & \text{on } \tilde\Gamma_m,
\end{cases}
\tag{43}
\]
where Ωα and Γm are rescaled by L 0 and where U 0 = V max -V min . The same nondimensionalization process is used to define boundary conditions along ∂ Ω using the boundary conditions given by Equation (3).
We now define ε, the parameter which tends to zero in the homogenization process, as the ratio between the maximal length of a cell (of the order of 10^{-4} m) and L_0 (equal to 10^{-1} m). This implies that ε is of the order of 10^{-3}. The dimensionless quantity \(L_0 C_0 / (\Sigma_0 T_0)\), with T_0 of the order of 1 s, C_0 of 10^{-2} F.m^{-2} and Σ_0 of 1 S.m^{-1}, is of the same order as ε and can be set to ε by a small modification of the reference quantities. The term U_0 is of order 10^{-1} V and the term I_0 of order 10^{-3} A.m^{-2} (this is the typical order of magnitude of I_app); therefore \(L_0 I_0 / (\Sigma_0 U_0)\) is of the order of ε and we set it to ε. Finally, up to a small change in the definition of g, we assume that T_0 G_0 / W_0 is of order 1. The fact that ε is small means that the microscopic scale and the macroscopic scale are well separated. For the sake of clarity, we do not keep the tilde notation but we write explicitly the dependence in ε. To study the mathematical properties of this problem, we consider the family of problems parametrized by ε > 0, and we will characterize the limit equation as ε tends to zero. We will use the results of the 2-scale convergence, see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. This method has been used in many fields of science and engineering. The main assumption of the 2-scale convergence to obtain a well defined limit problem is that the domain in which the equations are solved is periodic.
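As a quick sanity check on these scalings, plugging in the representative values quoted above confirms that both dimensionless groups are indeed of the size of ε ≈ 10^{-4} m / 10^{-1} m = 10^{-3}:
\[
\frac{L_0\, C_0}{\Sigma_0\, T_0} = \frac{10^{-1}\,\mathrm{m} \times 10^{-2}\,\mathrm{F\,m^{-2}}}{1\,\mathrm{S\,m^{-1}} \times 1\,\mathrm{s}} = 10^{-3},
\qquad
\frac{L_0\, I_0}{\Sigma_0\, U_0} = \frac{10^{-1}\,\mathrm{m} \times 10^{-3}\,\mathrm{A\,m^{-2}}}{1\,\mathrm{S\,m^{-1}} \times 10^{-1}\,\mathrm{V}} = 10^{-3}.
\]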
We denote by Y the open reference domain that is used to define the idealized microstructure corresponding to the periodic arrangement of cardiac cells. This micro-structure is decomposed into two open connected subdomains: the intracellular part Y i and the extracellular part Y e . We have
Y i ∩ Y e = ∅, Y = Y i ∪ Y e .
The intra-and the extra-cellular domains are separated by Γ Y . The global position vector is denoted by x and the local position vector by y. We define the domain Ω = Ω ε i ∪ Ω ε e by ε-periodicity and we denote by Γ ε m the boundary between the intra-and the extra-cellular domains Ω ε i and Ω ε e . More precisely we assume that Ω ε i and Ω ε e are the union of entire cells and we have
\[
\Omega^\varepsilon_\alpha = \bigcup_k \big(\varepsilon Y_\alpha + \varepsilon w_k\big)
\quad\text{and then}\quad
\Gamma^\varepsilon_m = \bigcup_k \big(\varepsilon \Gamma_Y + \varepsilon w_k\big), \tag{44}
\]
where w k is the vector corresponding to the translation between the considered cell and the reference cell. By definition, we have w 0 = 0. Note that by construction, from any macroscopic position vector x, one can deduce a corresponding position y in the reference periodic cell by y = x/ε. We assume that the diffusion tensors depend on the two scales using ε-independent tensor fields
\[
\sigma^\varepsilon_\alpha(x) = \sigma_\alpha\Big(x, \frac{x}{\varepsilon}\Big).
\]
The objective is to homogenize the following problem, i.e. study the convergence -when ε tends to zero -of the solutions of the microscopic bidomain model,
\[
\begin{cases}
\nabla_x \cdot (\sigma^\varepsilon_\alpha \nabla_x u^\varepsilon_\alpha) = 0 & \text{in } \Omega^\varepsilon_\alpha,\\[2pt]
\sigma^\varepsilon_i \nabla_x u^\varepsilon_i \cdot n_i = \sigma^\varepsilon_e \nabla_x u^\varepsilon_e \cdot n_i & \text{on } \Gamma^\varepsilon_m,\\[2pt]
\sigma^\varepsilon_i \nabla_x u^\varepsilon_i \cdot n_i = -\varepsilon C_m\, \partial_t(u^\varepsilon_i - u^\varepsilon_e) - \varepsilon I_{\mathrm{ion}}(u^\varepsilon_i - u^\varepsilon_e, w^\varepsilon) + \varepsilon I^\varepsilon_{\mathrm{app}} & \text{on } \Gamma^\varepsilon_m,\\[2pt]
\partial_t w^\varepsilon = -g(u^\varepsilon_i - u^\varepsilon_e, w^\varepsilon) & \text{on } \Gamma^\varepsilon_m.
\end{cases}
\tag{45}
\]
The boundary conditions along ∂Ω read
\[
\sigma^\varepsilon_\alpha \nabla_x u^\varepsilon_\alpha \cdot n_\alpha = 0 \quad\text{on } \partial\Omega \cap \partial\Omega^\varepsilon_\alpha, \tag{46}
\]
and the initial conditions are
\[
u^\varepsilon_i(\cdot, 0) - u^\varepsilon_e(\cdot, 0) = V^{0,\varepsilon}_m, \qquad w^\varepsilon(\cdot, 0) = w^{0,\varepsilon}, \quad\text{on } \Gamma^\varepsilon_m. \tag{47}
\]
Finally, for the definition of a unique extracellular electric potential, we impose
\[
\int_{\Gamma^\varepsilon_m} u^\varepsilon_e \,\mathrm{d}\gamma = 0. \tag{48}
\]
Uniform estimate of the solutions
The homogenization process, i.e. the analysis of the limit as ε tends to 0, requires norm estimates of the solution that are independent of the parameter ε. This is the objective of this subsection.
A variational equation for the unknowns u^ε_α can be directly deduced from the partial differential equations (45) and (46). It reads, for almost all t ∈ [0, T],
\[
\big(\sigma^\varepsilon_i \nabla_x u^\varepsilon_i, \nabla_x v^\varepsilon_i\big)_{\Omega^\varepsilon_i}
+ \big(\sigma^\varepsilon_e \nabla_x u^\varepsilon_e, \nabla_x v^\varepsilon_e\big)_{\Omega^\varepsilon_e}
+ \varepsilon C_m \Big\langle \frac{\partial (u^\varepsilon_i - u^\varepsilon_e)}{\partial t},\, v^\varepsilon_i - v^\varepsilon_e \Big\rangle_{\Gamma^\varepsilon_m}
+ \varepsilon \big\langle I_{\mathrm{ion}}(u^\varepsilon_i - u^\varepsilon_e, w^\varepsilon),\, v^\varepsilon_i - v^\varepsilon_e \big\rangle_{\Gamma^\varepsilon_m}
= \varepsilon \big\langle I^\varepsilon_{\mathrm{app}},\, v^\varepsilon_i - v^\varepsilon_e \big\rangle_{\Gamma^\varepsilon_m}, \tag{49}
\]
for all (v ε i , v ε e ) ∈ H 1 (Ω ε i ) × H 1 (Ω ε e ).
We can formally derive an energy estimate by assuming that u^ε_α is regular in time and by taking v^ε_α = u^ε_α(·, t) in (49). Since, by definition, V^ε_m = u^ε_i - u^ε_e on Γ^ε_m, we obtain
\[
\|\nabla_x u^\varepsilon_i\|^2_{L^2_{\sigma_i}} + \|\nabla_x u^\varepsilon_e\|^2_{L^2_{\sigma_e}}
+ \frac{\varepsilon C_m}{2} \frac{\mathrm{d}}{\mathrm{d}t} \|V^\varepsilon_m\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \varepsilon \big\langle I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon), V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m}
= \varepsilon \big\langle I^\varepsilon_{\mathrm{app}}, V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m}. \tag{50}
\]
Before integrating (50) with respect to time, we multiply the equation by e^{-λt}, where λ is a positive constant which will be determined in what follows. The third term of (50) becomes
\[
\frac{\varepsilon C_m}{2} \int_0^t e^{-\lambda s} \frac{\mathrm{d}}{\mathrm{d}t} \|V^\varepsilon_m\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s
= \frac{\varepsilon C_m}{2} e^{-\lambda t} \|V^\varepsilon_m(t)\|^2_{L^2(\Gamma^\varepsilon_m)}
- \frac{\varepsilon C_m}{2} \|V^\varepsilon_m(0)\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \frac{\varepsilon \lambda C_m}{2} \int_0^t e^{-\lambda s} \|V^\varepsilon_m\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s.
\]
Similarly, for all µ > 0, one can deduce that
\[
\frac{\varepsilon \mu}{2} \int_0^t e^{-\lambda s} \frac{\mathrm{d}}{\mathrm{d}t} \|w^\varepsilon\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s
= \frac{\varepsilon \mu}{2} e^{-\lambda t} \|w^\varepsilon(t)\|^2_{L^2(\Gamma^\varepsilon_m)}
- \frac{\varepsilon \mu}{2} \|w^{0,\varepsilon}\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \frac{\varepsilon \lambda \mu}{2} \int_0^t e^{-\lambda s} \|w^\varepsilon\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s
= -\varepsilon \mu \int_0^t e^{-\lambda s} \big\langle g(V^\varepsilon_m, w^\varepsilon), w^\varepsilon \big\rangle_{\Gamma^\varepsilon_m} \,\mathrm{d}s.
\]
Note that in the previous equation we have introduced the scalar µ in order to use Assumption 6. Then, using the two previous equations as well as (50), we obtain
\[
E_{\lambda,\mu}(u^\varepsilon_i, u^\varepsilon_e, w^\varepsilon, t) - E_{\lambda,\mu}(u^\varepsilon_i, u^\varepsilon_e, w^\varepsilon, 0)
+ \varepsilon \int_0^t e^{-\lambda s} \Big( \big\langle I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon), V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} + \frac{\lambda C_m}{2} \|V^\varepsilon_m\|^2_{L^2(\Gamma^\varepsilon_m)} \Big) \mathrm{d}s
\]
\[
+ \varepsilon \int_0^t e^{-\lambda s} \Big( \mu \big\langle g(V^\varepsilon_m, w^\varepsilon), w^\varepsilon \big\rangle_{\Gamma^\varepsilon_m} + \frac{\lambda \mu}{2} \|w^\varepsilon\|^2_{L^2(\Gamma^\varepsilon_m)} \Big) \mathrm{d}s
= \varepsilon \int_0^t e^{-\lambda s} \big\langle I^\varepsilon_{\mathrm{app}}, V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} \,\mathrm{d}s, \tag{51}
\]
where the term E λ,µ is the energy associated to the system and is defined by
\[
E_{\lambda,\mu}(u^\varepsilon_i, u^\varepsilon_e, w^\varepsilon, t)
= \frac{\varepsilon C_m}{2} e^{-\lambda t} \|V^\varepsilon_m(t)\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \frac{\varepsilon \mu}{2} e^{-\lambda t} \|w^\varepsilon(t)\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \int_0^t e^{-\lambda s} \|\nabla_x u^\varepsilon_i\|^2_{L^2_{\sigma_i}} \,\mathrm{d}s
+ \int_0^t e^{-\lambda s} \|\nabla_x u^\varepsilon_e\|^2_{L^2_{\sigma_e}} \,\mathrm{d}s. \tag{52}
\]
To shorten the presentation, we omit the reference to the physical quantities in the definition of the energy, i.e. in what follows, we introduce the notation
E ε λ,µ (t) = E λ,µ (u ε i , u ε e , w ε , t).
For µ > 0 given by Assumption 6, we assume that λ is sufficiently large; more precisely, it should satisfy
\[
\min\Big(\frac{\lambda C_m}{2}, \frac{\lambda \mu}{2}\Big) \ge C_I, \tag{53}
\]
where C_I is the positive scalar appearing in (30) and is independent of ε. Using Assumption 6 and (51), we then obtain the first energy estimate
\[
E^\varepsilon_{\lambda,\mu}(t) \le E^\varepsilon_{\lambda,\mu}(0)
+ \varepsilon \int_0^t e^{-\lambda s} \Big( \big\langle I^\varepsilon_{\mathrm{app}}, V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} + C_I\, |\Gamma^\varepsilon_m| \Big) \mathrm{d}s. \tag{54}
\]
Relation ( 51) is the energy relation that can be proven rigorously using a regularization process. Such derivations are done in the proof of Theorem 1 (see Remark 7 in Appendix 5). We also refer to the macroscopic bidomain model analysis of [START_REF] Bendahmane | Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue[END_REF] for some related considerations.
We are now in the position to state the first proposition of this subsection.
Proposition 5. There exist positive scalars µ, λ 0 and C independent of ε such that, for all λ ≥ λ 0 and for all t ∈ [0, T ], the solutions given by Theorem 1 satisfy
\[
E^\varepsilon_{\lambda,\mu}(t) \le E^\varepsilon_{\lambda,\mu}(0)
+ C \Big( 1 + \varepsilon^{1/2} \int_0^t e^{-\frac{\lambda}{2} s}\, \|I^\varepsilon_{\mathrm{app}}\|_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s \Big).
\]
Proof. By noting that the last term of ( 54) can be estimated as follows
\[
\varepsilon \int_0^t e^{-\lambda s} \big\langle I^\varepsilon_{\mathrm{app}}, V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} \,\mathrm{d}s
\le \Big(\frac{2\varepsilon}{C_m}\Big)^{1/2} \int_0^t e^{-\frac{\lambda}{2} s}\, \|I^\varepsilon_{\mathrm{app}}\|_{L^2(\Gamma^\varepsilon_m)}\, E^\varepsilon_{\lambda,\mu}(s)^{1/2} \,\mathrm{d}s,
\]
and that ε |Γ ε m | is bounded uniformly w.r.t. ε, we can conclude using Gronwall's inequality (see [START_REF] Dragomir | Some Gronwall type inequalities and applications[END_REF], Theorem 5).
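For the reader's convenience, the nonlinear Gronwall-type inequality we have in mind can be stated as follows (this is only an orientation; see [START_REF] Dragomir | Some Gronwall type inequalities and applications[END_REF] for the precise hypotheses): if u ≥ 0 is continuous, a ≥ 0 and b ≥ 0 is integrable, then
\[
u(t) \le a + \int_0^t b(s)\, u(s)^{1/2} \,\mathrm{d}s \quad \forall t \in [0, T]
\;\;\Longrightarrow\;\;
u(t)^{1/2} \le a^{1/2} + \frac{1}{2} \int_0^t b(s) \,\mathrm{d}s \quad \forall t \in [0, T].
\]
Applied to u = E^ε_{λ,µ} with the bound above, this yields the estimate of Proposition 5 after squaring and absorbing constants.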
To obtain uniform estimates, we need the following assumption.
Assumption 11. Uniform estimates of the data. We assume that there exists a scalar C > 0 independent of ε such that
\[
\varepsilon^{1/2} \int_0^T \|I^\varepsilon_{\mathrm{app}}\|_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}t \le C
\quad\text{and}\quad
\varepsilon\, \|V^{0,\varepsilon}_m\|^2_{L^2(\Gamma^\varepsilon_m)} + \varepsilon\, \|w^{0,\varepsilon}\|^2_{L^2(\Gamma^\varepsilon_m)} \le C.
\]
Now we introduce the following proposition which is the main result of this section.
Proposition 6. If Assumption 11 holds, there exists a positive scalar C independent of ε, such that solutions of the bidomain equations -given by Theorem 1 -satisfy
\[
\int_0^T \|u^\varepsilon_i\|^2_{H^1(\Omega^\varepsilon_i)} \,\mathrm{d}t + \int_0^T \|u^\varepsilon_e\|^2_{H^1(\Omega^\varepsilon_e)} \,\mathrm{d}t \le C \tag{55}
\]
and
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)|^{4/3} \,\mathrm{d}\gamma \,\mathrm{d}t
+ \varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |g(V^\varepsilon_m, w^\varepsilon)|^2 \,\mathrm{d}\gamma \,\mathrm{d}t \le C. \tag{56}
\]
In order to simplify the proof of Proposition 6, we will introduce one preliminary corollary and two preliminary lemmas. Our starting point is Proposition 5 together with Assumption 11 that provides uniform bounds on the data. As a direct consequence of this assumption and using the equivalence between • L 2 and • L 2 σα , we have the following corollary.
Corollary 3. If Assumption 11 holds, there exists a positive scalar C independent of ε, such that solutions of the bidomain equations -given by Theorem 1 -satisfy
\[
\int_0^T \|\nabla_x u^\varepsilon_i\|^2_{L^2(\Omega^\varepsilon_i)} \,\mathrm{d}t + \int_0^T \|\nabla_x u^\varepsilon_e\|^2_{L^2(\Omega^\varepsilon_e)} \,\mathrm{d}t \le C
\]
and for all t ∈ [0, T ],
\[
\varepsilon\, \|V^\varepsilon_m(t)\|^2_{L^2(\Gamma^\varepsilon_m)} + \varepsilon\, \|w^\varepsilon(t)\|^2_{L^2(\Gamma^\varepsilon_m)} \le C. \tag{57}
\]
Corollary 3 is still not sufficient for our purpose since we need an estimation of the intraand extra-cellular potentials in the L 2 -norm in Ω ε i and Ω ε e respectively. To do so, we need a Poincaré-Wirtinger inequality and a trace inequality that should take into account the geometry dependence in ε. Such inequalities are given in [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF], Corollary B.1 and Lemma C.1. They are given in dimension 2 but they can be extended to the 3-dimensional setting. With our notations, these inequalities are given in the following lemma. Lemma 3. There exists a constant C independent of ε such that for all v ε e ∈ H 1 (Ω ε e ), we have
\[
\Big\| v^\varepsilon_e - \frac{1}{|\Omega^\varepsilon_e|} \int_{\Omega^\varepsilon_e} v^\varepsilon_e \,\mathrm{d}x \Big\|_{L^2(\Omega^\varepsilon_e)} \le C\, \|\nabla_x v^\varepsilon_e\|_{L^2(\Omega^\varepsilon_e)} \tag{58}
\]
and
\[
\|v^\varepsilon_e\|^2_{L^2(\Gamma^\varepsilon_m)} \le C\, \varepsilon^{-1} \|v^\varepsilon_e\|^2_{L^2(\Omega^\varepsilon_e)} + C\, \varepsilon\, \|\nabla_x v^\varepsilon_e\|^2_{L^2(\Omega^\varepsilon_e)}. \tag{59}
\]
Note that inequality (58) is no longer true if the domain Ω^ε_e does not satisfy (44), i.e. if Ω is not the union of entire cells for all ε (which allows non-connected extra-cellular subdomains to appear at the boundary of the domain for some sequences of ε). Moreover, we also need a Poincaré-like inequality to bound the L²-norm of the solution inside the intra- and the extra-cellular domains. Such an inequality can be found in [START_REF] Ammari | Spectroscopic imaging of a dilute cell suspension[END_REF], Lemma C.2, and in our context, it is given in the following lemma.

Lemma 4. There exists a constant C independent of ε such that for all v^ε_α ∈ H^1(Ω^ε_α),
\[
\|v^\varepsilon_\alpha\|^2_{L^2(\Omega^\varepsilon_\alpha)} \le C\, \varepsilon\, \|v^\varepsilon_\alpha\|^2_{L^2(\Gamma^\varepsilon_m)} + C\, \varepsilon^2\, \|\nabla_x v^\varepsilon_\alpha\|^2_{L^2(\Omega^\varepsilon_\alpha)}. \tag{60}
\]
Finally collecting the results of Corollary 3 and Lemmas 3 and 4, we can prove Proposition 6.
Proof. (Proof of Proposition 6)
Step 1: A preliminary inequality. To simplify the following computations, we introduce the linear forms m_e and m_Γ defined for all v^ε_e ∈ H^1(Ω^ε_e) by
\[
m_e(v^\varepsilon_e) = \frac{1}{|\Omega^\varepsilon_e|} \int_{\Omega^\varepsilon_e} v^\varepsilon_e \,\mathrm{d}x
\quad\text{and}\quad
m_\Gamma(v^\varepsilon_e) = \frac{1}{|\Gamma^\varepsilon_m|} \int_{\Gamma^\varepsilon_m} v^\varepsilon_e \,\mathrm{d}\gamma.
\]
For all v ε e ∈ H 1 (Ω ε e ), we have
\[
\|v^\varepsilon_e - m_e(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)}
= \|v^\varepsilon_e - m_\Gamma(v^\varepsilon_e) - m_e\big(v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\big)\|^2_{L^2(\Gamma^\varepsilon_m)}
\]
\[
= \|v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)}
+ \|m_e\big(v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\big)\|^2_{L^2(\Gamma^\varepsilon_m)}
- 2 \int_{\Gamma^\varepsilon_m} \big(v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\big)\, m_e\big(v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\big) \,\mathrm{d}\gamma.
\]
Now, observing that the last term vanishes, we obtain, for all v^ε_e ∈ H^1(Ω^ε_e),
\[
\|v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)} \le \|v^\varepsilon_e - m_e(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)}. \tag{61}
\]
Step 2: Uniform estimates of the potentials. Using the trace inequality (59) (applied to v ε em e (v ε e )) and inequality (61) we find
\[
\varepsilon\, \|v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)}
\le C \Big( \|v^\varepsilon_e - m_e(v^\varepsilon_e)\|^2_{L^2(\Omega^\varepsilon_e)} + \varepsilon^2\, \|\nabla_x v^\varepsilon_e\|^2_{L^2(\Omega^\varepsilon_e)} \Big).
\]
Thanks to the Poincaré-Wirtinger inequality (58), we conclude that there exists C independent of ε such that
\[
\varepsilon\, \|v^\varepsilon_e - m_\Gamma(v^\varepsilon_e)\|^2_{L^2(\Gamma^\varepsilon_m)} \le C\, \|\nabla_x v^\varepsilon_e\|^2_{L^2(\Omega^\varepsilon_e)}.
\]
Now, setting v^ε_e = u^ε_e in the previous inequality (recall that m_Γ(u^ε_e) = 0), integrating with respect to time and using the estimate of Corollary 3, we find
\[
\varepsilon \int_0^T \|u^\varepsilon_e\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s \le C,
\]
where C is another constant independent of ε. With the estimate (57) of Corollary 3, we also have
\[
\varepsilon \int_0^T \|u^\varepsilon_i\|^2_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}s \le C,
\]
where C is another constant independent of ε. Finally, we can use Lemma 4 to obtain the estimate (55).
Step 3: Uniform estimates of the non-linear terms. It is also important to obtain a uniform estimate on the term I_ion(u^ε_i - u^ε_e, w^ε) and on the term g(u^ε_i - u^ε_e, w^ε) in the appropriate norms. From (51), we have
\[
\varepsilon \int_0^T e^{-\lambda t} \Big( \lambda\, |\Gamma^\varepsilon_m| + \big\langle I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon), V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} + \frac{\lambda C_m}{2} \|V^\varepsilon_m\|^2_{L^2(\Gamma^\varepsilon_m)} \Big) \mathrm{d}t
+ \varepsilon \int_0^T e^{-\lambda t} \Big( \mu \big\langle g(V^\varepsilon_m, w^\varepsilon), w^\varepsilon \big\rangle_{\Gamma^\varepsilon_m} + \frac{\lambda \mu}{2} \|w^\varepsilon\|^2_{L^2(\Gamma^\varepsilon_m)} \Big) \mathrm{d}t
\]
\[
\le E^\varepsilon_{\lambda,\mu}(0) + \varepsilon \int_0^T e^{-\lambda t} \Big( \big\langle I^\varepsilon_{\mathrm{app}}, V^\varepsilon_m \big\rangle_{\Gamma^\varepsilon_m} + C_I\, |\Gamma^\varepsilon_m| \Big) \mathrm{d}t \equiv R^\varepsilon(T), \tag{62}
\]
then the right hand side of the equation above (denoted R ε (T )) can be estimated as follows
\[
R^\varepsilon(T) \le C \Big( 1 + \varepsilon\, \|V^{0,\varepsilon}_m\|^2_{L^2(\Gamma^\varepsilon_m)} + \varepsilon\, \|w^{0,\varepsilon}\|^2_{L^2(\Gamma^\varepsilon_m)} \Big)
+ \varepsilon\, C \sup_{t \in [0,T]} \|V^\varepsilon_m(t)\|_{L^2(\Gamma^\varepsilon_m)} \int_0^T \|I^\varepsilon_{\mathrm{app}}\|_{L^2(\Gamma^\varepsilon_m)} \,\mathrm{d}t,
\]
where we have used the property that ε |Γ^ε_m| is bounded uniformly with respect to ε, and C is a constant independent of ε. As a consequence of Assumption 11 and Corollary 3, R^ε(T) is uniformly bounded w.r.t. ε. Using Assumption 6, we know that there exists µ > 0 such that, for λ satisfying (53), the integrand of the left-hand side of (62) is positive almost everywhere on Γ^ε_m. Therefore, bounding e^{-λt} from below, we deduce that
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} \Big( \lambda + I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)\, V^\varepsilon_m + \frac{\lambda C_m}{2} (V^\varepsilon_m)^2 + \mu\, g(V^\varepsilon_m, w^\varepsilon)\, w^\varepsilon + \frac{\lambda \mu}{2} (w^\varepsilon)^2 \Big) \mathrm{d}\gamma \,\mathrm{d}t \le C, \tag{63}
\]
where C is another constant that depends on e λT but is independent of ε. We must now study two distinct cases.
Step 3a: Assumption 7a holds. Since g is Lipschitz, we can show, with the estimate (57), that
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |g(V^\varepsilon_m, w^\varepsilon)|^2 \,\mathrm{d}\gamma \,\mathrm{d}t \le C,
\]
where C is independent of ε. Therefore, we deduce from (63) that
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)\, V^\varepsilon_m| \,\mathrm{d}\gamma \,\mathrm{d}t \le C, \tag{64}
\]
where C is another scalar independent of ε. Therefore, using the growth condition ( 28) and Young's inequalities, we get
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)|^{4/3} \,\mathrm{d}\gamma \,\mathrm{d}t
\le \varepsilon\, C\, C_\infty^{1/3} \int_0^T \int_{\Gamma^\varepsilon_m} |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)|\, \big(|V^\varepsilon_m| + |w^\varepsilon|^{1/3} + 1\big) \,\mathrm{d}\gamma \,\mathrm{d}t
\]
\[
\le \varepsilon\, C\, C_\infty^{1/3} \int_0^T \int_{\Gamma^\varepsilon_m} \Big( |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)\, V^\varepsilon_m| + \frac{3}{2\eta^{4/3}} |I_{\mathrm{ion}}(V^\varepsilon_m, w^\varepsilon)|^{4/3} + \frac{\eta^4}{4} |w^\varepsilon|^{4/3} + \frac{\eta^4}{4} \Big) \mathrm{d}\gamma \,\mathrm{d}t,
\]
where η > 0 can be chosen arbitrarily and C is another scalar independent of ε. Finally, since |w^ε|^{4/3} ≤ |w^ε|^2 + 4/27 almost everywhere on Γ^ε_m, we can use estimates (64) and (57) and choose η sufficiently large (but independent of ε) in order to obtain (56).
Step 3b: Assumption 7b holds. Starting from (62) and using the first inequality in (32), we have
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} \Big( \lambda + C_1 |V^\varepsilon_m|^4 - c_1\big(|V^\varepsilon_m|^2 + 1\big) + f_2(V^\varepsilon_m)\, V^\varepsilon_m\, w^\varepsilon + \frac{\lambda C_m}{2} (V^\varepsilon_m)^2 + \mu\, g(V^\varepsilon_m, w^\varepsilon)\, w^\varepsilon + \frac{\lambda \mu}{2} (w^\varepsilon)^2 \Big) \mathrm{d}\gamma \,\mathrm{d}t \le C.
\]
Note that for λ sufficiently large (but independent of ε), the integrand above is positive. Moreover, from Assumption 7b, we have, for all x ∈ Ω and (v, w) ∈ R 2 ,
\[
|f_2(x, v)\, v\, w| \le C \Big( \frac{(|v| + 1)^2 |v|^2}{\eta} + \eta\, |w|^2 \Big),
\]
and
\[
|g(x, v, w)\, w| \le C \Big( \frac{|v|^4}{\eta^{4/3}} + (1 + \eta^4)\, |w|^2 + 1 \Big),
\]
for some η > 0 and where C is a positive constant independent of ε and η. Therefore, by choosing η large enough (η is independent of ε), one can show that there exists another positive constant C independent of ε such that
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} |V^\varepsilon_m|^4 \,\mathrm{d}\gamma \,\mathrm{d}t \le C.
\]
The results of Proposition 6 are then a direct consequence of Assumption 5.
Homogenization of the bidomain equations by 2-scale convergence
The 2-scale convergence theory has been developed in the reference articles [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] and [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF]. This mathematical tool justifies and deduces the homogenized problem in a single process. It is also well adapted to treat the case of perforated domains (in our case Ω_i and Ω_e can both be seen as perforated domains). The analysis of homogenization in a perforated domain presents some additional difficulties since the solutions are defined in domains whose geometry is not fixed. This issue is not new (see [START_REF] Cioranescu | Homogenization in open sets with holes[END_REF]) and it is addressed in [START_REF] Allaire | Homogenization of the Neumann problem with nonisolated holes[END_REF], using compactness results for sequences of functions defined in a family of perforated domains, or in [START_REF] Acerbi | An extension theorem from connected sets, and homogenization in general periodic domains[END_REF], [START_REF] Cioranescu | Homogenization of Reticulated Structures[END_REF] and [START_REF] Cioranescu | The periodic unfolding method in domains with holes[END_REF]. In the last two mentioned references, the periodic unfolding method is used. Such a method can be related to 2-scale convergence, as in [START_REF] Marciniak-Czochra | Derivation of a macroscopic receptor-based model using homogenization techniques[END_REF], in which the homogenization of a reaction-diffusion problem is done using both techniques: the 2-scale convergence gives the preliminary convergence results and the periodic unfolding method is used to deal with the reaction term. The treatment of the reaction terms, i.e. the non-linear terms or the ionic terms in our context, is one of the main difficulties. To tackle this problem, we present an approach based upon the general ideas of the original article [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. Finally, the last difficulty, which is typical of biological tissue modeling, is that the non-linear terms of the equations, which correspond to exchanges of ionic quantities at the membrane of a cell, lie on the boundary of the domain. Therefore the 2-scale convergence theory must be adapted and, to do so, we use the results presented in [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (see also [START_REF] Neuss-Radu | Some extensions of two-scale convergence[END_REF] for related results).
We define Ω T := Ω × (0, T ), and we introduce the space C 0 (Y ) of continuous periodic functions on the periodic cell Y (up to the boundary) and L ∞ (Y ) the space of essentially bounded periodic functions on Y . Proposition 7. Let {u ε } be a sequence of functions in L 2 (Ω T ) such that
Ω T |u ε ( x, t)| 2 d x dt ≤ C
where C does not depend on ε, then the sequence 2-scale converges to a limit u 0 ∈ L 2 (Ω T ×Y ), i.e. for any ϕ ∈ C 0 (Ω T ; L ∞ (Y )) we have, up to a subsequence,
\[
\lim_{\varepsilon \to 0} \int_{\Omega_T} u^\varepsilon(x, t)\, \varphi\Big(x, t, \frac{x}{\varepsilon}\Big) \,\mathrm{d}x \,\mathrm{d}t
= \frac{1}{|Y|} \int_{\Omega_T \times Y} u^0(x, t, y)\, \varphi(x, t, y) \,\mathrm{d}x \,\mathrm{d}t \,\mathrm{d}y.
\]
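A standard illustration of this notion, independent of the present model, is given by oscillating test sequences: if ψ ∈ C^0(Ω_T; C^0(Y)) and u^ε(x, t) := ψ(x, t, x/ε), then
\[
u^\varepsilon \;\xrightarrow[\varepsilon \to 0]{\text{2-scale}}\; \psi(x, t, y),
\qquad
u^\varepsilon \rightharpoonup \frac{1}{|Y|} \int_Y \psi(\cdot, \cdot, y) \,\mathrm{d}y \quad \text{weakly in } L^2(\Omega_T),
\]
so the 2-scale limit retains the oscillations at scale ε that the usual weak limit averages out.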
The same notion of weak convergence exists for a function defined on Γ ε m , and straightforward generalizations of the results in [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] lead to the following proposition.
Proposition 8. Let {u^ε} be a sequence of functions in L^p(Γ^ε_m × (0,T)) with p ∈ (1, +∞) such that
\[
\varepsilon \int_{\Gamma^\varepsilon_m \times (0,T)} |u^\varepsilon(x, t)|^p \,\mathrm{d}\gamma \,\mathrm{d}t \le C, \tag{65}
\]
where C does not depend on ε, then the sequence 2-scale converges to a limit u 0 ∈ L p (Ω T × Γ Y ), i.e. for any ϕ ∈ C 0 (Ω T ; C 0 (Y )) we have, up to a subsequence,
\[
\lim_{\varepsilon \to 0} \int_{\Gamma^\varepsilon_m \times (0,T)} u^\varepsilon(x, t)\, \varphi\Big(x, t, \frac{x}{\varepsilon}\Big) \,\mathrm{d}\gamma \,\mathrm{d}t
= \frac{1}{|Y|} \int_{\Omega_T \times \Gamma_Y} u^0(x, t, y)\, \varphi(x, t, y) \,\mathrm{d}x \,\mathrm{d}t \,\mathrm{d}\gamma.
\]
As previously mentioned, one of the main advantages of 2-scale convergence is the ability to analyze partial differential equations in a perforated domain by introducing simple extension operators. In our context Ω^ε_i and Ω^ε_e can be seen as perforated domains; therefore, following [START_REF] Allaire | Homogenization and two-scale convergence[END_REF], we denote by \(\widetilde{\,\cdot\,}\) the extension by zero in Ω. More precisely, we define
\[
\widetilde{\nabla_x u^\varepsilon_i} =
\begin{cases}
\nabla_x u^\varepsilon_i & \text{in } \Omega^\varepsilon_i,\\
0 & \text{in } \Omega^\varepsilon_e,
\end{cases}
\qquad
\widetilde{\nabla_x u^\varepsilon_e} =
\begin{cases}
0 & \text{in } \Omega^\varepsilon_i,\\
\nabla_x u^\varepsilon_e & \text{in } \Omega^\varepsilon_e,
\end{cases}
\]
and we define σ ε by periodicity as follows
\[
\sigma(x, y) =
\begin{cases}
\sigma_i(x, y) & y \in Y_i,\\
\sigma_e(x, y) & y \in Y_e,
\end{cases}
\qquad\text{and}\qquad
\sigma^\varepsilon(x) = \sigma\Big(x, \frac{x}{\varepsilon}\Big).
\]
Using (49), one can see that the functions (u^ε_i, u^ε_e) and (\(\widetilde{\nabla_x u^\varepsilon_i}\), \(\widetilde{\nabla_x u^\varepsilon_e}\)) satisfy, for all v^ε_i, v^ε_e ∈ H^1(Ω) and for almost all t ∈ (0, T),
\[
\big(\sigma^\varepsilon\, \widetilde{\nabla_x u^\varepsilon_i},\, \nabla_x v^\varepsilon_i\big)_{\Omega}
+ \big(\sigma^\varepsilon\, \widetilde{\nabla_x u^\varepsilon_e},\, \nabla_x v^\varepsilon_e\big)_{\Omega}
+ \varepsilon C_m \Big\langle \frac{\partial (u^\varepsilon_i - u^\varepsilon_e)}{\partial t},\, v^\varepsilon_i - v^\varepsilon_e \Big\rangle_{\Gamma^\varepsilon_m}
+ \varepsilon \big\langle I_{\mathrm{ion}}(u^\varepsilon_i - u^\varepsilon_e, w^\varepsilon),\, v^\varepsilon_i - v^\varepsilon_e \big\rangle_{\Gamma^\varepsilon_m}
= \varepsilon \big\langle I^\varepsilon_{\mathrm{app}},\, v^\varepsilon_i - v^\varepsilon_e \big\rangle_{\Gamma^\varepsilon_m}. \tag{66}
\]
Two-scale convergence theory enables us to relate the 2-scale limits of \(\widetilde{\nabla_x u^\varepsilon_i}\) and \(\widetilde{\nabla_x u^\varepsilon_e}\) with the 2-scale limits of the extensions of (u^ε_i, u^ε_e) defined as
\[
\tilde u^\varepsilon_i =
\begin{cases}
u^\varepsilon_i & \text{in } \Omega^\varepsilon_i,\\
0 & \text{in } \Omega^\varepsilon_e,
\end{cases}
\qquad
\tilde u^\varepsilon_e =
\begin{cases}
0 & \text{in } \Omega^\varepsilon_i,\\
u^\varepsilon_e & \text{in } \Omega^\varepsilon_e.
\end{cases}
\]
Then the a priori estimate (55) allows for the application of the 2-scale convergence theory in a perforated domain as presented in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]. To do so, we define H^1_\#(Y), the completion for the H^1(Y) norm of C^∞_\#(Y), the space of infinitely differentiable functions that are periodic of period Y (a similar definition holds if Y is replaced by Y_α and H^1(Y) is replaced by L^2(Y)).
Proposition 9. If Assumption 11 holds (i.e. uniform norm-estimates of the data), there exist u^0_α ∈ L^2((0,T); H^1(Ω)) and u^1_α ∈ L^2(Ω_T; H^1_\#(Y_α)) such that the solution of the bidomain equations given by Theorem 1 satisfies
\[
\tilde u^\varepsilon_\alpha \;\xrightarrow[]{\text{2-scale}}\; u^0_\alpha(x, t)\, \chi_{Y_\alpha}(y),
\qquad
\widetilde{\nabla_x u^\varepsilon_\alpha} \;\xrightarrow[]{\text{2-scale}}\; \big(\nabla_x u^0_\alpha + \nabla_y u^1_\alpha\big)\, \chi_{Y_\alpha}(y), \tag{67}
\]
where χ Yα is the characteristic function of Y α .
Note that in this proposition, the convergences have to be understood in the sense given in Proposition 7. For regular enough functions, we need to relate the 2-scale limit on a volume to the 2-scale limit on surface. This is the object of the proposition given below whose proof is very inspired by [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (Proposition 2.6) and therefore only the main idea is given.
Proposition 10. Let {u^ε} be a sequence of functions in L^2((0,T); H^1(Ω^ε_α)) such that
\[
\int_0^T \int_{\Omega^\varepsilon_\alpha} \Big( |u^\varepsilon(x, t)|^2 + |\nabla_x u^\varepsilon(x, t)|^2 \Big) \mathrm{d}x \,\mathrm{d}t \le C,
\]
where C does not depend on ε. Let ũ^ε denote the extension by zero in (0,T) × Ω of u^ε. There exists u^0 ∈ L^2((0,T); H^1(Ω)) such that
\[
\tilde u^\varepsilon(x, t) \;\xrightarrow[]{\text{2-scale}}\; u^0(x, t)\, \chi_{Y_\alpha}(y) \tag{68}
\]
and, for any ϕ ∈ C^0(Ω_T; C^0(Y)),
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} u^\varepsilon(x, t)\, \varphi\Big(x, t, \frac{x}{\varepsilon}\Big) \,\mathrm{d}\gamma \,\mathrm{d}t
\;\longrightarrow\;
\frac{1}{|Y|} \int_{\Omega_T} u^0(x, t) \int_{\Gamma_Y} \varphi(x, t, y) \,\mathrm{d}\gamma \,\mathrm{d}x \,\mathrm{d}t. \tag{69}
\]
Proof. Given ϕ ∈ C^1(Ω_T; C^0(Y)), for each x and t (seen here as parameters), we introduce ψ_{x,t}, a function of y, periodic in Y_i with mean value 0, as the solution of
\[
\begin{cases}
\Delta_y \psi_{x,t} = \dfrac{1}{|Y_i|} \displaystyle\int_{\Gamma_Y} \varphi(x, t, y) \,\mathrm{d}\gamma & \text{in } Y_i,\\[4pt]
\nabla_y \psi_{x,t} \cdot n_{\Gamma_Y} = \varphi(x, t, \cdot) & \text{on } \Gamma_Y,
\end{cases}
\tag{70}
\]
where n_{Γ_Y} is the outward normal of Y_i. Then the results of the proposition are obtained by first noticing that
\[
\varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} u^\varepsilon(x, t)\, \varphi\Big(x, t, \frac{x}{\varepsilon}\Big) \,\mathrm{d}\gamma \,\mathrm{d}t
= \varepsilon \int_0^T \int_{\Gamma^\varepsilon_m} u^\varepsilon(x, t)\, \nabla_y \psi_{x,t}\Big(\frac{x}{\varepsilon}\Big) \cdot n_{\Gamma_Y} \,\mathrm{d}\gamma \,\mathrm{d}t,
\]
then using Green's formula to recover an integral over Ω^ε_i, and finally using 2-scale convergence results as in Proposition 7.
The two-scale homogenized limit model
The next step in deriving the homogenized equations (i.e. setting the equations of the limit terms u 0 α ) consists in using regular enough test functions in (66) of the form
v ε α ( x, t) = v 0 α ( x, t) + ε v 1 α ( x, t, x ε ), (71) with
v 0 α ∈ C 1 (Ω T ), v 0 α ( x, T ) = 0, v 1 α ∈ C 1 (Ω T ; C 0 (Y α )), v 1 α ( x, T, y ) = 0.
The decomposition (71) can be explained a priori since it corresponds to the expected behavior of the limit field, i.e. it should not depend on y. Doing so, Equation (66) gives, after integration with respect to time,
σ ε ∇ x u ε i , ∇ x v 0 i + ∇ y v 1 i + ε ∇ x v 1 i Ω T + σ ε ∇ x u ε e , ∇ x v 0 e + ∇ y v 1 e + ε ∇ x v 1 e Ω T -ε C m T 0 u ε i -u ε e , ∂ t (v 0 i + εv 1 i -v 0 e -εv 1 e ) Γ ε m dt + ε T 0 I ion (u ε i -u ε e , w ε ), v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt = ε T 0 I ε app , v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -ε C m V 0,ε m , v 0 i + ε v 1 i -v 0 e -ε v 1 e Γ ε m . (72)
We want to apply the results of the 2-scale convergence. First, we will focus on the volume terms. As explained in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF], the next step is to choose
ψ α ( x, y) := σ( x, y)( ∇ x v 0 α ( x) + ∇ y v 1 α ( x, y))
as a test function in the definition of 2-scale convergence. However, Assumption 1 on the diffusion tensor σ ε ( x) is not sufficient to have ψ α ∈ C 0 (Ω T ; L ∞ (Y )) 3 . This motivates the following additional assumption.
Assumption 12. σ α ∈ C 0 (Ω; L ∞ # (Y α )) 3×3 .
Such an assumption ensures that ψ α is an admissible test function (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) and can be considered as a test function in Proposition 7.
The surface terms also need a detailed analysis. Since I ε app is uniformly bounded in L 2 (Γ ε m × (0, T )) (see Assumption 11), we can use Proposition 8 to write that there exists
I 0 app ∈ L 2 (Ω T × Γ Y ) such that up to a subsequence, ε T 0 I ε app , v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -→ 1 |Y | Γ Y I 0 app ( x, t, y) dγ, v 0 i -v 0 e Ω T . (73)
Moreover, since we have assumed that the initial data are also uniformly bounded in the adequate norm (see Assumption 11) and using again the 2-scale convergence theorem on surfaces (Proposition 8), we know that there exists
V 0 m ∈ L 2 (Ω × Γ Y ) such that, up to a subsequence, ε C m V 0,ε m , v 0 i + ε v 1 i -v 0 e -ε v 1 e Γ ε m -→ C m |Y | Ω Γ Y V 0 m ( x, y) dγ (v 0 i -v 0 e )( x, 0) d x. (74)
In the same way, thanks to Proposition 6, we can show that I ion satisfies the uniform bound [START_REF] Wilhelms | Benchmarking electrophysiological models of human atrial myocytes[END_REF] with p = 4/3. We can therefore apply Proposition 8 and find there exists
I 0 ∈ L 4/3 (Ω T × Γ Y ) such that, up to a subsequence, ε T 0 I ion (u ε i -u ε e , w ε ), v 0 i + εv 1 i -v 0 e -εv 1 e Γ ε m dt -→ 1 |Y | Ω T Γ Y I 0 ( x, t, y) dγ (v 0 i -v 0 e )( x, t) d x dt. (75)
One of the difficulties -which is postponed -is to relate I 0 with the limits of u ε α and w ε . To deal with the third term in (72), we also need Proposition 10 and we get
ε T 0 Γ ε m u ε α ( x, t) ∂ t v ε α ( x, t) dγ dt -→ |Γ Y | |Y | Ω T u 0 α ( x, t) ∂ t v 0 α ( x, t) d x dt. (76)
Using the convergence results obtained above, the weak formulation of the microscopic bidomain equations -as given by (72) -becomes at the limit
σ i ∇ x u 0 i + ∇ y u 1 i , ∇ x v 0 i + ∇ y v 1 i Ω T ×Yi + σ e ∇ x u 0 e + ∇ y u 1 e , ∇ x v 0 e + ∇ y v 1 e Ω T ×Ye -C m |Γ Y | u 0 i -u 0 e , ∂ t (v 0 i -v 0 e ) Ω T + Γ Y I 0 dγ , v 0 i -v 0 e Ω T = Γ Y I 0 app dγ , v 0 i -v 0 e Ω T -C m Ω Γ Y V 0 m ( x, y) dγ v 0 i -v 0 e ( x, 0) d x. (77)
We now consider the equation on the gating variable. From (45), we deduce that for all
ψ ∈ C 1 (Ω T ; C 0 (Y )) such that ψ( x, T, y) = 0,
we have
- T 0 Γ ε m w ε ( x, t) ∂ t ψ x, t, x ε dγ dt + T 0 Γ ε m g(V ε m , w ε ) ψ x, t, x ε dγ dt = - Γ ε m w 0,ε ( x) ψ x, 0, x ε dγ.
Using again the 2-scale convergence of a sequence of L 2 functions on Γ ε m × (0, T ) (Proposition 8), we find that
-w 0 , ∂ t ψ Ω T ×Γ Y + g 0 , ψ Ω T ×Γ Y = - Ω Γ Y w 0 ( x, y) ψ( x, 0, y) dγ d x, (78)
where w 0 , g 0 and w 0 are the 2-scale limits of w ε , g(V ε m , w ε ) and w 0,ε respectively. These 2-scale limits are well-defined (up to subsequences) since w ε is a continuous function in time with value in L 2 (Γ ε m ) and is uniformly bounded with respect to ε (Corollary 3). The same argument holds for g (Proposition 6).
Finally, one can pass to the 2-scale limit in Equation ( 48) to recover a condition on the average of u 0 e and we get the closure equation, for all ϕ ∈ C 0 ([0, T ]),
ε T 0 Γ ε m u ε e dγ ϕ dt = 0 -→ ε→0 |Γ Y | |Y | T 0 Ω u 0 e d x ϕ dt = 0. (79)
This implies that we have, for almost all t ∈ [0, T ],
Ω u 0 e d x = 0. (80)
Equations ( 77), ( 78) and (79) define some micro-macro equations for the bidomain problem.
To close these equations, we need to relate I 0 and g 0 to the 2-scale limits of V ε m = u ε i -u ε e and w ε . This is the object of Proposition 11 given in what follows but first, let us mention that it is possible to derive an energy estimate for the micro-macro problem. As mentioned, such energy relations are formally obtained by setting v 0 α = e -λs u 0 α and v 1 α = e -λs u 1 α in Equation (77) and ψ = e -λs w 0 in Equation (78). The following energy relation can be proven, for all λ > 0 and µ > 0,
E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , t) -E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , 0) + t 0 e -λs Γ Y I 0 dγ , v 0 i -v 0 e Ω + λC m |Γ Y | 2 (u 0 i -u 0 e )(t) 2 L 2 (Ω) ds + t 0 e -λs µ g 0 , w 0 L 2 (Ω×Γ Y ) + λ µ 2 w 0 2 L 2 (Ω×Γ Y ) ds = t 0 e -λs Γ Y I 0 app dγ , u 0 i -u 0 e Ω ds, (81)
where the term E 0 λ,µ is the energy associated with the system and is defined by
E 0 λ,µ (u 0 i , u 1 i , u 0 e , u 1 e , w 0 , t) = C m |Γ Y | 2 e -λt (u 0 i -u 0 e )(t) 2 L 2 (Ω) + µ 2 e -λt w 0 (t) 2 L 2 (Ω×Γ Y ) + α∈{i,e} t 0 e -λs σ α ∇ x u 0 α + ∇ y u 1 α , ∇ x u 0 α + ∇ y u 1 α Ω×Yα ds. (82)
As previously mentioned, the following proposition relates I 0 and g 0 to the 2-scale limits of V ε m = u ε i -u ε e and w ε , respectively.
Proposition 11. We assume that Assumptions 8 and 12 are satisfied and that the source term and the initial data are given by
I ε app ( x, t) = I app ( x, t), V 0,ε m ( x) = V 0 m ( x), w 0,ε ( x) = w 0 ( x), ∀ t ∈ [0, T ], ∀ x ∈ Γ ε m , with I app ∈ C 0 (Ω T ), V 0 m ∈ C 0 (Ω T ), w 0 ∈ C 0 (Ω T ). Let (u ε i , u ε e , w ε ) be a solution of equations (45)-(48) given by Theorem 1 and let u 0 α , w 0 , I 0 and g 0 be the 2-scale limits of u ε α , w ε , I ion (u ε i -u ε e , w ε ) and g(u ε i -u ε e , w ε ) respectively; then we have
I 0 = I ion (u 0 i -u 0 e , w 0 ), g 0 = g(u 0 i -u 0 e , w 0 ).
Proof. This result is not a straightforward consequence of the u ε α and w ε estimates. We adapt here the technique used in [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] for the 2-scale analysis of non-linear problems. Thanks to Assumption 8, for λ > 0 large enough (but independent of ε) and µ > 0 given, for all regular enough test functions ϕ ε i , ϕ ε e and ψ ε , we have
E λ,µ (u ε i -ϕ ε i , u ε e -ϕ ε e , w ε -ψ ε , T ) + ε T 0 e -λt I ion (V ε m , w ε ) -I ion (ν ε m , ψ ε ), V ε m -ν ε m Γ ε m + λC m 2 V ε m -ν ε m 2 L 2 (Γ ε m ) dt + ε T 0 e -λt µ g(V ε m , w ε ) -g(ν ε m , ψ ε ), w ε -ψ ε Γ ε m + λ µ 2 w ε -ψ ε 2 L 2 (Γ ε m ) dt ≥ 0, (83)
where ν ε m = ϕ ε iϕ ε e on Γ ε m and where the energy functional E λ,µ is defined in Equation ( 52). The idea is to use the energy relation (51) -which is satisfied by the solution (u ε i , u ε e , w ε ) -to simplify the previous equation. Doing so, we introduce a term corresponding to the data
d ε (T ) := ε T 0 e -λt I app , V ε m Γ ε m dt + ε C m 2 V 0 m 2 L 2 (Γ ε m ) + ε µ 2 w 0 2 L 2 (Γ ε m ) .
Substituting ( 51) into (83), we obtain the inequality
0 ≤ d ε (T ) -2 e ε (T ) + E λ,µ (ϕ ε i , ϕ ε e , ψ ε , T ) + i ε (T ) + µ g ε (T ), (84)
with
e ε (T ) = ε C m 2 e -λT (V ε m , ν ε m ) Γ ε m + ε µ 2 e -λT (w ε , ψ ε ) Γ ε m + T 0 e -λt ( σ i ∇ x u ε i , ∇ x ϕ ε i ) Ω ε i dt + T 0 e -λt ( σ e ∇ x u ε e , ∇ x ϕ ε e ) Ω ε e dt,
i ε (T ) = ∫_0^T e^{-λt} [ -ε ( I ion (V ε m , w ε ), ν ε m ) Γ ε m -ε ( I ion (ν ε m , ψ ε ), V ε m -ν ε m ) Γ ε m -λ ε C m (V ε m , ν ε m ) Γ ε m + λ ε C m /2 ‖ν ε m ‖ 2 L 2 (Γ ε m ) ] dt,
and g ε (T ) is the analogous expression with g, w ε and ψ ε in place of I ion , V ε m and ν ε m .
Now, we set, for any given positive real scalar τ ,
ϕ ε α ( x, t) = ϕ 0 α ( x, t) + ε ϕ 1 α ( x, t, x/ε) + τ ϕ α ( x, t), ψ ε ( x, t) = ψ 0 ( x, t, x/ε) + τ ψ( x, t, x/ε),
where
(ϕ 0 α , ϕ α ) ∈ C 1 (Ω T ) 2 , ϕ 1 α ∈ C 1 (Ω T ; C 1 (Y )), (ψ 0 , ψ) ∈ C 0 (Ω T ; C 0 (Y )) 2 .
By construction, ϕ ε i and ϕ ε e are test functions that allow us to use the 2-scale convergence (see Propositions 7 and 8). The same remark is true for I app , V 0 m and w 0 by assumption. Moreover, we need the following results concerning 2-scale convergence of test functions (see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF])
χ Yα x ε ϕ ε α -→ 2-scale χ Yα ( y)(ϕ 0 α + τ ϕ α ), and χ Yα x ε ∇ x ϕ ε α -→ 2-scale χ Yα ( y)( ∇ x ϕ 0 α + ∇ y ϕ 1 α + τ ∇ x ϕ α ),
where the convergence has to be understood in the sense given by Proposition 7. Moreover,
ν ε m -→ 2-scale Φ 0 + τ Φ, ψ ε -→ 2-scale ψ 0 + τ ψ, where Φ 0 = ϕ 0 i -ϕ 0 e and Φ = ϕ i -ϕ e ,
and where the convergence has to be understood in the sense given by Proposition 8. To study the limit of (84) when ε tends to 0, we treat each terms separately. We first have lim
ε→0 d ε = d 0 with d 0 (T ) = |Γ Y | T 0 e -λt (I app , V 0 ) Ω dt + |Γ Y | C m 2 V 0 m 2 L 2 (Ω) + |Γ Y | µ 2 w 0 2 L 2 (Ω) .
In the same way, one can pass to the limit ε → 0 in e ε (t) and we denote by e 0 (t) this limit.
It is given by
e 0 (T ) = C m |Γ Y | 2 e -λt u 0 i -u 0 e , Φ 0 + τ Φ Ω + µ 2 e -λt w 0 , ψ 0 + τ ψ Ω×Γ Y + α∈{i,e} T 0 e -λs σ α ∇ x u 0 α + ∇ y u 1 α , ∇ x ϕ 0 α + ∇ y ϕ 1 α + τ ∇ x ϕ α Ω×Yα dt.
We also get
e 0 λ,µ (T ) := lim ε→0 E λ,µ (ϕ ε i , ϕ ε e , ψ ε , T ) = E 0 λ,µ (ϕ 0 i + τ ϕ i , ϕ 1 i , ϕ 0 e + τ ϕ e , ϕ 1 e , ψ 0 + τ ψ, T ),
where E 0 λ,µ is the energy of the limit 2-scale problem defined in Equation (82). Note that to pass to the limit in the microscopic energy, we have used the strong 2-scale convergence of test functions. Indeed using [START_REF] Allaire | Two-scale convergence on periodic surfaces and applications[END_REF] (Lemma 2.4), we have
lim ε→0 ε ψ ε 2 L 2 (Γ ε m ) = ψ 0 + τ ψ 2 L 2 (Ω×Γ Y ) .
To pass to the limit in the terms i ε and g ε , we need to study the convergence of I ion (ν ε m , ψ ε ) and g(ν ε m , ψ ε ) respectively. Since the function I ion is continuous (Assumption 4), as well as (Φ 0 , Φ, ψ 0 , ψ), one can see that I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) is an adequate test function in the sense of Proposition 8, i.e.
I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) ∈ C 0 (Ω T ; C 0 (Y )).
Moreover, we have,
lim ε→0 I ion (ν ε m , ψ ε ) -I ion (Φ 0 + τ Φ, ψ 0 + τ ψ) L ∞ (Ω T ×Γ Y ) = 0,
and therefore
lim ε→0 ε T 0 e -λt I ion (ν ε m , ψ ε ), V ε m -ν ε m Γ ε m dt = T 0 e -λt I ion (Φ 0 + τ Φ, ψ 0 + τ ψ), V 0 m -Φ 0 -τ Φ Ω×Γ Y dt.
Using the results above, one can show that
i 0 (T ) := lim ε→0 i ε (T ) = T 0 e -λt -I 0 , Φ 0 + τ Φ Ω×Γ Y -I ion (Φ 0 + τ Φ, ψ 0 + τ ψ), V 0 m -Φ 0 -τ Φ Ω×Γ Y -λ |Γ Y | C m (V 0 m , Φ 0 + τ Φ) 2 L 2 (Ω) + λ |Γ Y | C m 2 Φ 0 + τ Φ 2 L 2 (Ω) dt.
Similar results can be deduced in order to compute the limit of g ε , which we denote g 0 . Collecting all the convergence results mentioned above, Inequality (84) becomes
0 ≤ d 0 (T ) -2 e 0 (T ) + e 0 λ,µ (T ) + i 0 (T ) + µ g 0 (T ). (85)
Since this inequality is true for all ϕ 0 α , ϕ 1 α and ψ 0 , it is true for each element of sequences {ϕ 0 α,n }, {ϕ 1 α,n } and {ψ 0 n } of smooth test functions such that
lim n→+∞ ϕ 0 α,n -u 0 α L 2 ((0,T );H 1 (Ω)) + ϕ 1 α,n -u 1 α L 2 (Ω T ;H 1 (Yα)) = 0 and lim n→+∞ ψ 0 n -w 0 L 2 (Ω T ×Γ Y ) = 0.
From the continuity requirement (Assumption 4) and the growth conditions ( 28) -( 29), I ion and g can be seen as weak continuous applications. Such results are a consequence of a variant of Lemma 1.3 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (the same proof can be followed) that can be stated as follows: (for conciseness, we set
D = Ω × Γ Y )
Let {I n } be a uniformly bounded sequence and I a function in L 2 ((0, T ), L p (D)) with 1 < p < +∞. Assume that the sequence {I n } converges almost everywhere to I in (0, T ) × D; then I n converges weakly towards I in L 2 ((0, T ), L p (D)).
Using the result above one can pass to the limit in Inequality (85), it shows that this inequality is true after the following formal substitutions, ϕ 0 α = u 0 α , ϕ 1 α = u 1 α and ψ 0 = w 0 . Using the energy identity given in Equation (81), one can simplify Inequality (85) and we obtain
0 ≤ τ T 0 e -λt I ion (V 0 m + τ Φ, w 0 + τ ψ) -I 0 , Φ Ω×Γ Y dt + τ T 0 e -λt g(V 0 m + τ Φ, w 0 + τ ψ) -g 0 , ψ Ω×Γ Y dt + O(τ 2 ).
Dividing by τ and then letting τ tends to 0, we find that for all continuous functions Φ and ψ,
0 ≤ T 0 e -λt I ion (V 0 m , w 0 ) -I 0 , Φ Ω×Γ Y dt + T 0 e -λt g(V 0 m , w 0 ) -g 0 , ψ Ω×Γ Y dt,
which gives the result of the proposition.
The macroscopic bidomain equations
The obtained model combines a priori the micro-and the macroscopic scales. However, we will show below that we can decouple these two scales by explicitly determining the correction terms u 1 i and u 1 e . This determination appears through the analysis of canonical problems which are set in the reference periodic cells Y i and Y e . First, we choose v 0 i = v 0 e = v 1 e = 0 and (77) becomes
σ i ( ∇ x u 0 i + ∇ y u 1 i ), ∇ y v 1 i Ω T ×Yi = 0. (86)
One can observe that such a problem corresponds to the classical cell problem, see [START_REF] Allaire | Homogenization and two-scale convergence[END_REF][START_REF] Bensoussan | Asymptotic Analysis for Periodic Structures[END_REF]. It can be shown that u 1 i are defined up to a function ũ1 i ∈ L 2 (Ω T ) and can be decomposed as follows
u 1 i ( x, y, t) = 3 j=1 X j i ( y) ∇ x u 0 i ( x, t) • e j + ũ1 i ( x, t), (87)
where the canonical functions X j i , j = 1..3 belong to H 1 (Y i ) and are uniquely defined by the following variational formulation
σ i ( e j + ∇ y X j i ), ∇ y ψ Yi = 0, ∀ ψ ∈ H 1 (Y i ), Yi X j i d y = 0. (88)
From the canonical functions, one can define the associated effective medium tensor T i as follows
( T i ) j,k = Yi σ i ( ∇ y X j i + e j ) • ( ∇ y X k i + e k )d y. (89)
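In practice the cell problems (88) and the effective tensor (89) are approximated numerically on a discretized unit cell. The sketch below is a deliberately simplified illustration: it treats the one-dimensional analogue of the cell problem, for which the corrector and the effective coefficient (the harmonic mean of σ) are explicit; it is not the computation performed in the paper, where σ_i is a three-dimensional tensor.

```python
import numpy as np

# 1-D analogue of the cell problem: for a Y-periodic conductivity sigma(y) on
# Y = [0, 1), the corrector X solves d/dy( sigma(y) (1 + dX/dy) ) = 0 with X
# periodic and zero mean, and the effective coefficient is the harmonic mean.

def effective_coefficient_1d(sigma_vals):
    """Harmonic mean of sigma sampled on a uniform grid of the unit cell."""
    return 1.0 / np.mean(1.0 / sigma_vals)

def corrector_gradient_1d(sigma_vals):
    """dX/dy such that sigma * (1 + dX/dy) is constant (= effective coefficient)."""
    t_eff = effective_coefficient_1d(sigma_vals)
    return t_eff / sigma_vals - 1.0

n = 1024
y = (np.arange(n) + 0.5) / n
sigma = 1.0 + 0.8 * np.sin(2 * np.pi * y) ** 2          # arbitrary periodic example

t_eff = effective_coefficient_1d(sigma)
dX = corrector_gradient_1d(sigma)
print("effective coefficient:", t_eff)
print("flux sigma*(1+dX/dy) is constant:", np.allclose(sigma * (1 + dX), t_eff))
print("corrector gradient has zero mean:", abs(np.mean(dX)) < 1e-12)
```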
We use exactly the same method in order to define and decouple the cell problem in the extracellular domain Y e and to define the effective medium T e . The macroscopic equations are obtained by taking v 1 i = v 1 e = v 0 e = 0 in (77) and we get
σ i ∇ x u 0 i + ∇ y u 1 i , ∇ x v 0 i Ω T ×Yi -C m |Γ Y | u 0 i -u 0 e , ∂ t v 0 i Ω T + Γ Y I ion (u 0 i -u 0 e , w 0 ) dγ , v 0 i Ω T = |Γ Y | I app , v 0 i Ω T -C m |Γ Y | Ω V 0 m ( x) v 0 i ( x, 0) d x. (90)
Using the decomposition ∇ y u 1 i = 3 j=1 ∇ y X j i ∂ x j u 0 i and ∇ x u 0 i = 3 j=1 ∂ x j u 0 i e j , we obtain
σ i ( ∇ x u 0 i + ∇ y u 1 i ), ∇ x v 0 i Ω T ×Yi = σ i 3 j=1 ( e j + ∇ y X j i )∂ x j u 0 i , ∇ x v 0 i Ω T ×Yi = T i ∇ x u 0 i , ∇ x v 0 i Ω T .
This equality allows us to simplify Equation (90). In the same way, for the extra-cellular part, we get
T e ∇ x u 0 e , ∇ x v 0 e Ω T + C m |Γ Y | u 0 i -u 0 e , ∂ t v 0 e Ω T - Γ Y I ion (u 0 i -u 0 e , w 0 ) dγ , v 0 e Ω T = -|Γ Y | I app , v 0 e Ω T + C m |Γ Y | Ω V 0 m ( x) v 0 e ( x, 0) d x. (91)
Note that Equations (90) and (91) are not yet satisfactory because I ion (u 0 iu 0 e , w 0 ) may depend on y since w 0 is a priori a function of y. However, we have assumed that the initial data do not depend on ε therefore, we get
-w 0 , ∂ t ψ Ω T ×Γ Y + g(u 0 i -u 0 e , w 0 ), ψ Ω T ×Γ Y = - Ω w 0 ( x) Γ Y ψ( x, 0, y) dγ d x,
which is the weak formulation of the following problem
∂w 0 ∂t + g(u 0 i -u 0 e , w 0 ) = 0 Ω × (0, T ) × Γ Y , w 0 ( x, 0, y) = w 0 ( x) Ω × Γ Y . (92)
Since the non-linear term g is not varying at the micro scale and since (u 0 iu 0 e ) does not depend on y, it can be proven, using Assumption 8, that the solution w 0 of (92) is unique for all y ∈ Γ Y hence it is independent of the variable y. As a consequence, we have
1 |Y | Γ Y I ion (u 0 i -u 0 e , w 0 ) = A m I ion (u 0 i -u 0 e , w 0 ), where A m = |Γ Y |/|Y | is the ratio of membrane area per unit volume. Now observe that Equations (90) and (91) are the weak forms of the following set of equations
-∇ x • T i |Y | ∇ x u 0 i + A m C m ∂(u 0 i -u 0 e ) ∂t + A m I ion (u 0 i -u 0 e , w 0 ) = A m I app Ω × (0, T ), -∇ x • T e |Y | ∇ x u 0 e -A m C m ∂(u 0 i -u 0 e ) ∂t -A m I ion (u 0 i -u 0 e , w 0 ) = -A m I app Ω × (0, T ), ( T i • ∇ x u i ) • n = 0 ∂Ω × (0, T ), ( T e • ∇ x u e ) • n = 0 ∂Ω × (0, T ), (u 0 i -u 0 e )( x, 0) = V 0 m ( x) Ω.
(93) System ( 93)-(92) corresponds to the sought macro-scale equations. Finally, note that we close the problem by recalling Equation ( 80)
Ω u 0 e d x = 0.
Since the analysis has already been done for the microscopic bidomain model, we can infer -up to a subsequence -the existence of a solution of System (92)-(93). The proposed model is a generalization of the very classical macroscopic bidomain model in which constant electric conductivities are considered. Indeed, compared to previous studies, see for example [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF], and to anticipate meaningful modeling assumptions, we have considered at the microscopic level that the electric conductivities are tensorial. This does not appear in the expression of System (93) but is hidden in the definition of the cell-problems (88) and in the definition of the tensor (89). Moreover, we have shown that the classical macroscopic bidomain model formally obtained in [START_REF] Colli Franzone | Degenerate evolution systems modeling the cardiac electric field at micro and macroscopic level[END_REF] and [START_REF] Neu | Homogenization of syncytial tissues[END_REF] is valid under some more general conditions on the ionic terms than those assumed in [START_REF] Ambrosio | On the asymptotic behaviour of anisotropic energies arising in the cardiac bidomain model[END_REF] and [START_REF] Pennacchio | Multiscale modeling for the bioelectric activity of the heart[END_REF]. More precisely, we have extended the validity of the macroscopic bidomain equations to space-varying physiological models.
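As an illustration of how the limit model can be used, the following Python sketch advances a one-dimensional analogue of the macroscopic system (92)-(93) in time. All modeling choices below (a FitzHugh–Nagumo-type I_ion and g, scalar effective conductivities, zero applied current, explicit Euler in time) are assumptions made for the example only and are not taken from the paper.

```python
import numpy as np

def neumann_div_grad(n, h, cond):
    """Matrix of u -> d/dx( cond(x) du/dx ) with zero-flux (Neumann) boundaries."""
    L = np.zeros((n, n))
    for k in range(n - 1):                                        # interface k | k+1
        c = 2.0 * cond[k] * cond[k + 1] / (cond[k] + cond[k + 1])  # harmonic mean
        L[k, k] -= c;         L[k, k + 1] += c
        L[k + 1, k + 1] -= c; L[k + 1, k] += c
    return L / h ** 2

n, h, dt, n_steps = 200, 0.5, 0.01, 500
Am, Cm = 1.0, 1.0
Ti = 1.0 * np.ones(n)      # effective intracellular conductivity (already divided by |Y|)
Te = 1.5 * np.ones(n)      # effective extracellular conductivity (already divided by |Y|)
Li, Le = neumann_div_grad(n, h, Ti), neumann_div_grad(n, h, Te)

I_ion = lambda V, w: V * (V - 0.25) * (V - 1.0) + w               # assumed ionic current
g = lambda V, w: -0.01 * (V - 5.0 * w)                            # assumed gating dynamics

V = np.zeros(n); V[:10] = 1.0                                     # initial datum V_m^0
w = np.zeros(n)                                                   # initial datum w^0

for _ in range(n_steps):
    # elliptic step: div((Ti+Te) grad u_e) = -div(Ti grad V), zero-mean constraint (80)
    ue = np.linalg.lstsq(Li + Le, -Li @ V, rcond=None)[0]
    ue -= ue.mean()
    ui = V + ue
    # explicit updates of the transmembrane potential and of the gating variable (92)
    V = V + dt * ((Li @ ui) / (Am * Cm) - I_ion(V, w) / Cm)
    w = w + dt * (-g(V, w))

print("transmembrane potential range:", float(V.min()), float(V.max()))
```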
Appendix (proofs of Section 3.3)
In what follows, we will work with the following weighted norm, |||v||| 2 t,λ := ∫_0^t e^{-λs} ‖v(s)‖ 2 L 2 (Γ m ) ds. Note that this weighted norm is obviously equivalent to the standard L 2 -norm. The parameter λ is chosen later to obtain existence and uniqueness results. Given any function U ∈ C 0 ([0, T ]; L 2 (Γ m )), we have the following lemma. Lemma 2 If Assumptions 2, 4, 5 and 7a hold then there exists a unique function
w U ∈ C 1 ([0, T ]; L 2 (Γ m )),
which is a solution of
∂ t w U + g(U, w U ) = 0, Γ m , ∀ t ∈ [0, T ], w U ( x, 0) = w 0 ( x), Γ m . (94)
Moreover if Assumptions 9 and 10 are satisfied then for all t ∈ [0, T ] and almost all x ∈ Γ m ,
w min ≤ w U ( x, t) ≤ w max .
Proof. By density of continuous functions in L 2 spaces, there exist two sequences {w 0 n } ⊂ C 0 (Γ m ) and {U n } ⊂ C 0 ([0, T ] × Γ m ) such that w 0 n -w 0 L 2 (Γm) -→ n→+∞ 0,
and for all t ∈ [0, T ],
U n (t) -U (t) L 2 (Γm) -→ n→+∞ 0.
Then for all x ∈ Γ m , we denote by w Un ( x, •), the solution of the following Cauchy problem (now x plays the role of a parameter), for all x ∈ Γ m and t ∈ [0, T ],
d dt w Un ( x, •) + g( x, U n ( x, •), w Un ( x, •)) = 0, w Un ( x, 0) = w 0 ( x). (95)
Since we have assumed that g is Lipschitz in its second argument, by a standard application of the Picard-Lindelöf theorem we can show that there exists a unique solution w Un ( x, •) to this problem which belongs to C 1 ([0, T ]). Now, for all x ∈ Γ m , for all (n, m) ∈ N 2 , we have,
d dt [w Un ( x, •) -w Um ( x, •)] = g(U m ( x, •), w Um ( x, •)) -g( x, U n ( x, •), w Un ( x, •)).
We set w n,m := w Unw Um . We multiply the previous equation by e -λt w n,m and integrate with respect to space and time. After some manipulations we get, for all t ∈ [0, T ],
e -λt 2 w n,m (t) 2 L 2 (Γm) - 1 2 w 0 n -w 0 m 2 L 2 (Γm) + λ 2 |||w n,m ||| 2 t,λ = t 0 e -λs (g(U m , w Um ), w n,m ) Γm -(g(U n , w Un ), w n,m ) Γm ds.
Since g is globally Lipschitz (Assumption 7a), we have
e -λt 2 w n,m (t) 2 L 2 (Γm) ≤ 3L g 2 - λ 2 |||w n,m ||| 2 t,λ + 1 2 w 0 n -w 0 m 2 L 2 (Γm) + L g 2 |||U n -U m ||| 2 t,λ . (96)
Then choosing λ > 3L g in (96), we can deduce that w Un is a bounded Cauchy sequence in L 2 ((0, T ) × Γ m ), which is a Banach space. Therefore the sequence w Un converges strongly to a limit denoted w U . Moreover, since U n and w Un converge strongly in L 2 ((0, T ) × Γ m ) and since g is Lipschitz (Assumption 7a), we have
|||g(U n , w Un ) -g(U (t), w U )||| T,0 -→ n→+∞ 0.
Then by passing to the limit in the weak formulation of (95), it can be proven that the limit w U ∈ L 2 ((0, T ) × Γ m ) is a weak solution of (94). In a second step, by inspection of (94), one can show that ∂w U ∂t ∈ L 2 ((0, T ) × Γ m ).
Therefore, thanks to Lemma 1.2 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] -up to some modification on zero-measure sets- the function w U satisfies w U ∈ C 0 ([0, T ]; L 2 (Γ m )).
This last property implies, again by inspection of [START_REF] Luo | A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes[END_REF], that w U ∈ C 1 ([0, T ]; L 2 (Γ m )). Finally, by simple arguments and Assumption 9, the solution w Un satisfies -for all x ∈ Γ m and for all t ∈ [0, T ] -w min ≤ w Un ( x, t) ≤ w max if and only if
w min ≤ w 0 n ≤ w max . (97)
This can be ensured for every n and at the limit if it is satisfied for the initial data w 0 (i.e. Assumption 9 holds) and if we choose a sequence {w 0 n } of approximating functions that preserves (97). Such sequences are classically constructed by convolution with parametrized smooth positive functions of measure one and of decreasing supports around the origin (see [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF] Chapter 3 for instance).
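The construction used in the proof of Lemma 2 — solving, for each membrane point x, the Cauchy problem (94) with x acting as a parameter — is easy to reproduce numerically. The sketch below does so with an assumed Hodgkin–Huxley-type gating law and arbitrary data U and w^0; it also checks the invariant-region property stated in the lemma for this particular example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed gating law: relaxation of w toward w_inf(u) in [0, 1], so that the
# invariant region of Lemma 2 is [w_min, w_max] = [0, 1] for this example.
w_inf = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))        # assumed steady state
tau_w = 2.0                                             # assumed time constant
g = lambda u, w: (w - w_inf(u)) / tau_w                 # so that dw/dt + g(u, w) = 0

U = lambda x, t: np.sin(2 * np.pi * x) * np.cos(t)      # assumed datum U(x, t)
w0 = lambda x: 0.5 + 0.4 * np.cos(2 * np.pi * x)        # initial datum in [0, 1]

xs = np.linspace(0.0, 1.0, 32)                          # sampled membrane points
T = 10.0
w_T = []
for x in xs:
    sol = solve_ivp(lambda t, w: -g(U(x, t), w), (0.0, T), [w0(x)],
                    rtol=1e-8, atol=1e-10)
    w_T.append(sol.y[0, -1])
w_T = np.array(w_T)
print("w stays in the invariant region [0, 1]:",
      bool((w_T >= 0.0).all() and (w_T <= 1.0).all()))
```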
Theorem 1 If Assumptions 1-7 hold, there exist
V m ∈ C 0 ([0, T ]; L 2 (Γ m )) ∩ L 2 ((0, T ); H 1/2 (Γ m )), ∂ t V ∈ L 2 ((0, T ); H -1/2 (Γ m )), and w ∈ H 1 ((0, T ); L 2 (Γ m )),
which are solutions of
C m ∂ t V m + A V m + I ion V m , w = I app , H -1/2 (Γ m ), a.e. t ∈ (0, T ), ∂ t w + g V m , w = 0, Γ m , a.e. t ∈ (0, T ), (98)
and
V m ( x, 0) = V 0 m ( x) Γ m , w( x, 0) = w 0 ( x) Γ m . (99)
Proof. The proof uses the general ideas of the Faedo-Galerkin technique and some useful intermediate results which come from [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF]. Our proof deals simultaneously with physiological and phenomenological models only if Assumption 7a is satisfied. Our proof is only partial when phenomenological models satisfying Assumption 7b are considered. More precisely, the proof is valid up to Step 4 and we refer the reader to the analysis done in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF] and [START_REF] Boulakia | A coupled system of PDEs and ODEs arising in electrocardiograms modeling[END_REF] to extend the proof.
Step 1: Introduction of the eigenvector basis of the operator A.
We introduce {λ k } k≥0 ⊂ R + and {ψ k } k≥0 ⊂ H 1/2 (Γ m ), the set of increasing nonnegative eigenvalues and corresponding eigenvectors such that, for all v ∈ H 1/2 (Γ m ),
A(ψ k ), v Γm = λ k (ψ k , v) Γm .
Thanks to the properties of the operator A given by Proposition 3 and the fact that H 1/2 (Γ m ) is dense in L 2 (Γ m ) with compact injection, such eigenvalues and eigenvectors exist and the set {ψ k } k≥0 is an orthonormal basis of L 2 (Γ m ) (see [START_REF] Mclean | Strongly Elliptic systems and Boundary Integral equation[END_REF], Theorem 2.37). Note that λ 0 = 0 and ψ 0 = 1/|Γ m | 1/2 ). As in [START_REF] Bourgault | Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology[END_REF], we introduce the continuous projection operator P : L 2 (Γ m ) → L 2 (Γ m ) defined by
P (v) = k=0 (v, ψ k ) Γm ψ k . It is standard to show that, for any w ∈ L 2 (Γ m ) and v ∈ H 1/2 (Γ m ), lim →+∞ w -P (w) L 2 (Γm) = 0 and lim →+∞ v -P (v) H 1/2 (Γm) = 0.
Finally, note that P is also continuous from H 1/2 (Γ m ) to H 1/2 (Γ m ) and
P (v) 2 L 2 (Γm) = k=0 |(v, ψ k ) Γm | 2 , c P (v) 2 H 1/2 (Γm) ≤ |(v, ψ 0 ) Γm | 2 + k=1 λ k |(v, ψ k ) Γm | 2
where c > 0 is independent of .
Step 2: Local existence result for a corresponding finite dimensional ODE system.
Multiplying the equations on V m and w by ψ k and integrating over Γ m suggest to introduce for any given and for all 0 ≤ k ≤ the following system of ordinary differential equations (ODE)
C m d dt V k + λ k V k + Γm I ion (V , w ) ψ k dγ = Γm I app ψ k dγ, d dt w k + Γm g(V , w )ψ k dγ = 0,
where we have defined
V ( x, t) := k=0 V k (t) ψ k ( x) and w ( x, t) := k=0 w k (t) ψ k ( x).
The following initial conditions complete the system
V k (0) := V ,0 k = P (V 0 m ), w k (0) := w ,0 k = P (w 0
). The idea is to apply standard existence results for an ODE system of the form
d dt V k = i k (t, {V k }, {w k }), d dt w k = g k (t, {V k }, {w k }), V k (0) = V ,0 k , w k (0) = w ,0 k . (100)
where for all integers 0 ≤ k ≤ , the functions i
k : [0, T ] × R × R → R and g k : [0, T ] × R × R → R are defined by i k (t, {V k }, {w k }) := - λ k C m V k - 1 C m Γm I ion (V , w ) ψ k dγ + 1 C m Γm I app ψ k dγ, and g k (t, {V k }, {w k }) := - Γm g(V , w ) ψ k dγ.
Such functions are continuous in each V k and w k thanks to Lemma 1.3 of [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (see also the proof of Proposition 11). Moreover from Assumption 5 (i.e. I ion and g bounds) and from Assumption 3 (i.e. I app ∈ L 2 ((0, T ) × Γ m )), one can show that there exist positive scalars C and C (depending on ) such that
k=0 |i k (t, {V k }, {w k })| 2 + k=0 |g k (t, {V k }, {w k })| 2 ≤ C ( V 2 H 1/2 (Γm) + w 2 L 2 (Γm) + 1) + I app (t) 2 L 2 (Γm) . ≤ C ( k=0 |V k | 2 + k=0 |w k | 2 + 1) + sup t∈[0,T ] I app (t) 2 L 2 (Γm) .
This implies that {i k , g k } are L 2 -Carathéodory functions (see Definition 3.2 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF]). Then, one can apply Theorem 3.7 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF] to show that there exist T 0 ∈ [0, T ] and solutions V k , w k of Equation (100) such that
V k ∈ H 1 (0, T 0 ), w k ∈ H 1 (0, T 0 ), 0 ≤ k ≤ .
Step 3: Existence result on [0, T ] for the finite dimensional ODE system.
Theorem 3.8 of [START_REF] O'regan | Existence Theory for Nonlinear Ordinary Differential Equations[END_REF] shows that T 0 > 0 is independent of the initial data. Now, our objective is to show that one can find a bound independent of T 0 on the solution for t ∈ [0, T 0 ]. Then such a uniform bound is used to guaranty existence of solutions up to a time T ≥ T 0 by a recursion argument. By using an energy technique, one can deduce
e -λt C m 2 V (t) 2 L 2 (Γm) - C m 2 V (0) 2 L 2 (Γm) + λ C m 2 |||V ||| 2 t,λ = - t 0 e -λs AV , V ds - t 0 e -λs (I ion (V , w ) -I app , V ) Γm ds. (101)
Furthermore, for µ > 0 as in [START_REF] Keener | Mathematical Physiology[END_REF] of Assumption 6, we have
e -λt µ 2 w (t) 2 L 2 (Γm) + λ µ 2 |||w ||| 2 t,λ = µ 2 w (0) 2 L 2 (Γm) -µ t 0
e -λs (g(V , w ), w ) Γm ds.
Summing the two previous equations and using [START_REF] Keener | Mathematical Physiology[END_REF] Estimate (103) shows that the solution remains bounded up to time T 0 . The bound being independent of T 0 , one can repeat the process with initial data V (T 0 ) and w (T 0 ) and therefore construct a solution up to time 2 T 0 (since T 0 is independent of the initial data). Such a solution satisfies (103) with initial data corresponding to V (T 0 ) and w (T 0 ). By repeating this process, we can construct a solution up to time T .
Step 4: Strong convergence result for the potential First from (103) and from the coercivity of A (see ( 21)), we can deduce that there exists C > 0 independent of such that
T 0 V (t) 2 H 1/2 (Γm) dt ≤ C. (104)
One can see that V satisfies, for all v ∈ H 1/2 (Γ m ),
C m ∂ t V , v Γm = C m ∂ t V , P (v) Γm
= -A(V ), P (v) Γm -I ion (V , w ) -I app , P (v) Γm . (105)
Using the continuity of A (Eq. ( 20)), the continuity of P (v) in H 1/2 (Γ m ), the bound on I ion [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF], and the estimates (103)-( 104), there exists another positive scalar C independent of such that
T 0 ∂ t V (t) 2 H -1/2 (Γm) dt ≤ C.
From these observations, we deduce that {V } is bounded in
Q := v ∈ L 2 ((0, T ); H 1/2 (Γ m )), ∂v ∂t ∈ L 2 ((0, T ); H -1/2 (Γ m )) .
Therefore, using the Lions-Aubin compactness theorem introduced in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (translated into english in [START_REF] Showalter | Monotone operators in Banach space and nonlinear partial differential equations[END_REF]), we know that the space Q is included in the space L 2 ((0, T ) × Γ m ) with compact injection. As a consequence, there exists V m ∈ Q such that, up to a subsequence, V converges weakly to V m in the space L 2 ((0, T ); H 1/2 (Γ m )) and ∂ t V converges weakly to ∂ t V m in L 2 (0, T ; H -1/2 (Γ m )). Moreover, we have lim
→+∞ |||V (t) -V m (t)||| T,0 = 0. (106)
Finally from [START_REF] Lions | Non-Homogeneous Boundary Value Problems and Applications[END_REF] (Chapter 1 Theorem 3.2 or Chapter 3 Theorem 4.1), we know that
V m ∈ Q ⇒ V m ∈ C 0 ([0, T ]; L 2 (Γ m )). (107)
We now want to identify the equations satisfied by the limit terms w and V m in Steps 5 and 6 respectively. These steps have to be adapted in the case where Assumption 7b holds instead of 7a.
Step 5a: Strong convergence result and identification of the limit evolution equation for the gating variable Since we have assumed that g is globally Lipschitz, we can deduce a similar strong convergence result for the gating variable w . For all w ∈ L 2 (Γ m ), the equation satisfied by w reads, (∂ t w , P ( w)) Γm +(g(V , w ), P ( w)) Γm = 0 ⇔ (∂ t w , w) Γm +(g(V , w ), P ( w)) Γm = 0, We introduce the unique solution w given by Lemma 2 with U = V m . It is possible to show that the following equation is satisfied (∂ t w -∂ t w , w) Γm + (g(V m , w)g(V , w ), w) Γm + (g(V , w ), w -P ( w)) Γm = 0.
Setting w̃ := w -w (hence w̃ -P (w̃) = w -P (w)), we obtain, for almost all time t ∈ (0, T ),
( ∂ t w̃ , w̃ ) Γm + ( g(V m , w) -g(V , w ), w -w ) Γm = -( g(V m , w), w -P (w) ) Γm . (108)
Since the right-hand side vanishes when the Galerkin index tends to infinity and using the Lipschitz property of g, we can deduce by a standard energy technique that, for λ > 0 sufficiently large,
lim |||w -w ||| λ,T = 0. (109)
Step 6a: Identification of the limit evolution equation for the potential
We have already shown that w satisfies the equation for the gating variable. We now want to pass to the limit in the space-time weak form of (105), which reads: for all v ∈ C 1 ([0, T ]; H 1/2 (Γ m )) such that v(T ) = 0,
-C m ∫_0^T ⟨ ∂ t P (v), V ⟩ Γm dt + ∫_0^T ⟨ A(V ), P (v) ⟩ Γm dt + ∫_0^T ⟨ I ion (V , w ), P (v) ⟩ Γm dt = ∫_0^T ⟨ I app , P (v) ⟩ Γm dt -C m (P (V 0 m ), v(0)) Γm . (110)
The first two terms pass to the limit -using the weak convergence of V to V m in Q -therefore,
lim →+∞ ∫_0^T ⟨ ∂ t P (v), V ⟩ Γm dt = lim →+∞ ∫_0^T ⟨ ∂ t v, V ⟩ Γm dt = ∫_0^T ⟨ ∂ t v, V m ⟩ Γm dt.
Furthermore, using Proposition 3 and the properties of the operator P , we have
lim →+∞ ∫_0^T ⟨ A(V ), P (v) ⟩ Γm dt = lim →+∞ ∫_0^T ⟨ A(V ), v ⟩ Γm dt = lim →+∞ ∫_0^T ⟨ A(v), V ⟩ Γm dt = ∫_0^T ⟨ A(v), V m ⟩ Γm dt = ∫_0^T ⟨ A(V m ), v ⟩ Γm dt. (111)
Using only the approximation properties of the operator P , we find
lim →+∞ ∫_0^T ⟨ I app , P (v) ⟩ Γm dt -C m (P (V 0 m ), v(0)) Γm = ∫_0^T ⟨ I app , v ⟩ Γm dt -C m (V 0 m , v(0)) Γm .
Finally, the last difficulty is to prove that the term I ion (V , w ) converges weakly to the term I ion (V m , w). Using Assumption 5, there exists a scalar C > 0, independent of the Galerkin index, such that
∫_0^T ‖I ion (V , w )‖ 4/3 L 4/3 (Γm) dt ≤ C ( ∫_0^T ‖V ‖ 4 L 4 (Γm) dt + ∫_0^T ‖w ‖ 2 L 2 (Γm) dt + 1 ).
From Estimate (103) and the continuous injection H 1/2 (Γ m ) ⊂ L 4 (Γ m ), we can deduce that I ion (V , w ) is uniformly bounded in L 4/3 ((0, T ) × Γ m ). Moreover, since V and w converge almost everywhere to V m and w respectively, I ion (V , w ) converges weakly to the term I ion (V m , w) in L 4/3 ((0, T ) × Γ m ) ⊂ L 2 ((0, T ); H -1/2 (Γ m )), by application of Lemma 1.3 in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] (see also the proof of Proposition 11). We can therefore deduce that V m satisfies the weak formulation of the bidomain equations: for all v ∈ C 1 ([0, T ]; H 1/2 (Γ m )) such that v(T ) = 0, we have
-C m ∫_0^T ⟨ ∂ t v, V m ⟩ Γm dt + ∫_0^T ⟨ A(V m ), v ⟩ Γm dt + ∫_0^T ⟨ I ion (V m , w), v ⟩ Γm dt = ∫_0^T ⟨ I app , v ⟩ Γm dt -C m (V 0 m , v(0)) Γm . (112)
Note that using the weak formulation, we see that the initial condition is V m (0) = V 0 m . However, this has a meaning only if V m is continuous in [0, T ] with values in L 2 (Γ m ), which is the case (Equation (107)). Moreover, from the weak formulation (112), we deduce System (98) (in the sense of distributions in time). Finally, we deduce the regularity given in the statement of the theorem for ∂ t V m . The regularity for ∂ t w follows.
Remark 7 (Energy identity for the limit solution). To obtain an energy identity, observe that from the weak formulation (112),
C m ∫_0^T ⟨ ∂ t V m , v ⟩ Γm dt + ∫_0^T ⟨ A(V m ), v ⟩ Γm dt + ∫_0^T ⟨ I ion (V m , w), v ⟩ Γm dt = ∫_0^T ⟨ I app , v ⟩ Γm dt. (113)
Then, all the terms above can be seen as linear forms on v which are continuous in the norm of L 2 ((0, T ); H 1/2 (Γ m )). Therefore, by the density of functions in C 1 ([0, T ]; H 1/2 (Γ m )) with compact support into L 2 ((0, T ); H 1/2 (Γ m )), Equation (113) is true with v replaced by e -λt V m . Finally, since V m ∈ C 0 ([0, T ], L 2 (Γ m )), it can be shown that (see Theorem 3, Chapter 5 of [START_REF] Evans | Partial differential equations[END_REF] for some ideas of the proof),
2 ∫_0^T ⟨ ∂ t V m , e -λt V m ⟩ Γm dt = e -λT ‖V m (T )‖ 2 L 2 (Γm) -‖V m (0)‖ 2 L 2 (Γm) + λ ∫_0^T e -λt ‖V m ‖ 2 L 2 (Γm) dt.
From (113), we deduce
(C m /2) e -λT ‖V m (T )‖ 2 L 2 (Γm) -(C m /2) ‖V m (0)‖ 2 L 2 (Γm) + λ (C m /2) ∫_0^T e -λt ‖V m ‖ 2 L 2 (Γm) dt + ∫_0^T e -λt ⟨ A(V m ), V m ⟩ Γm dt + ∫_0^T e -λt ⟨ I ion (V m , w), V m ⟩ Γm dt = ∫_0^T e -λt ⟨ I app , V m ⟩ Γm dt. (114)
Moreover, from the evolution equation (98) on the gating variable w and since w ∈ H 1 ((0, T ); L 2 (Γ m )) and g(V m , w) ∈ L 2 ((0, T ); L 2 (Γ m )), we deduce straightforwardly the energy identity
(µ/2) e -λT ‖w(T )‖ 2 L 2 (Γm) -(µ/2) ‖w(0)‖ 2 L 2 (Γm) + λ (µ/2) ∫_0^T e -λt ‖w‖ 2 L 2 (Γm) dt + µ ∫_0^T e -λt ⟨ g(V m , w), w ⟩ Γm dt = 0. (115)
Summing (114) and (115), we get the fundamental energy identity.
Corollary 2. If Assumption 8 holds, then the solution of the microscopic bidomain equations given by Theorem 1 is unique.
Proof. The proof is standard. Indeed, we assume that two solutions exist and we show that they must be equal by the energy estimate. We denote by (V 1 , w 1 ) and (V 2 , w 2 ) two such solutions. Following the same way that we have used to obtain (114) and then (115), we have, for (V 1 , V 2 ),
e -λt (C m /2) ‖V 1 (t) -V 2 (t)‖ 2 L 2 (Γm) + λ (C m /2) |||V 1 -V 2 ||| 2 t,λ ≤ -∫_0^t e -λs ⟨ I ion (V 1 , w 1 ) -I ion (V 2 , w 2 ), V 1 -V 2 ⟩ Γm ds,
and, for (w 1 , w 2 ),
e -λt (µ/2) ‖w 1 (t) -w 2 (t)‖ 2 L 2 (Γm) + λ (µ/2) |||w 1 -w 2 ||| 2 t,λ ≤ -µ ∫_0^t e -λs ⟨ g(V 1 , w 1 ) -g(V 2 , w 2 ), w 1 -w 2 ⟩ Γm ds.
Collecting the two previous equations and using the one-side Lipschitz assumption (Assumption 8), we find
C m V 1 (t) -V 2 (t) 2 L 2 (Γm) + µ w 1 (t) -w 2 (t) 2 L 2 (Γm) ≤ e λt (2 L I -λ C m ) ||| V 1 -V 2 ||| 2 t,λ + e λt (2 L I -λ µ) ||| w 1 -w 2 ||| 2 t,λ .
Choosing λ large enough, we obtain V 1 = V 2 and w 1 = w 2 .
Figure 1: Cartoon of the considered domain at the microscopic scale and the macroscopic scale.
Therefore v e H 1/2 (Γm) ≤ C ∇ x v e L 2 (Ωe) , since v e has zero average along Γ m and we finally obtain the relation v e H 1/2 (Γm) ≤ C j H -1/2 (Γm) , hence the third inequality of the proposition.Remark that our choice of definitions of T i and T e implies that
Assumption 1). Using the continuity of the extension H 1/2 (Γ m ) into H 1/2 (∂Ω e ), the continuity of the trace operator and finally a Poincaré -Wirtinger type
inequality, we can show that
v e H 1/2 (Γm) ≤ C ∇ x v e L 2 (Ωe) + 1 |Γ m | Γm v e dγ .
u e dγ = 0. (16)
Γm
Other choices are possible to define u e uniquely but are arbitrary and correspond to a choice of convention. Assuming that it has regular enough solutions u α (t) ∈ H 1 (Ω α ), for almost all t ∈ [0, T ], System (11) is equivalent to
Choosing λ large enough, we can show using Gronwall's inequality (see also Proposition 5) that there exists a constant C T that depends only on C m , µ, C I and T such that, for all t ≤ T 0 , Γm ds≤ C T V (0) 2 L 2 (Γm) + w (0) 2 L 2 (Γm) +
≤ C m 2 V (0) 2 L 2 (Γm) + µ 2 w (0) 2 L 2 (Γm) +
+ (C I - λ C m 2 ) |||V ||| 2 t,λ + (C I - λ µ 2 ) |||w ||| 2 t,λ . (102)
V (t) 2 L 2 (Γm) + w (t) 2 L 2 (Γm) +
T e -λ 2
0
, we get
e -λt C m 2 V (t) 2 L 2 (Γm) + e -λt µ 2 w (t) 2 L 2 (Γm) + t 0 e -λs AV , V Γm ds t 0 e -λs (I app , V ) Γm + C I |Γ m | ds t 0 e -λs AV , V t I app L 2 (Γm) dt + 1 . (103)
01760780 | en | [
"info.info-ni"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01760780/file/quantitative-description-activity9.pdf | Andra Anoaica
email: andra.anoaica@irt-systemx.fr
Hugo Levard
email: hugo.levard@exit.irt-systemx.fr
Quantitative Description of Internal Activity on the Ethereum Public Blockchain
Keywords: Blockchain, Ethereum, cryptocurrency, smart contract, graph analysis
One of the most popular platform based on blockchain technology is Ethereum. Internal activity on this public blockchain is analyzed both from a quantitative and qualitative point of view. In a first part, it is shown that the creation of the Ethereum Alliance consortium has been a game changer in the use of the technology. In a second part, the network robustness against attacks is investigated from a graph point of view, as well as the distribution of internal activity among users. Addresses of great influence were identified, and allowed to formulate conjectures on the current usage of this technology.
I. INTRODUCTION
A transaction between individuals, in the classical sense, requires either trust in each party, or trust in a third party. Blockchain addresses this problem of trust between actors by using a consensus algorithm. Beyond the debate on whether this technology can be qualified as innovative or is simply an aggregation of already existing building blocks (peer-to-peer protocols, multi-party validation algorithms, etc.), it can reasonably be stated that its usage rests on a new paradigm of communication and value exchange. The number of scientific publications dedicated to the general concept of blockchain was multiplied by 15 between 2009 and 2016, which demonstrates a growing interest both in the technical development of blockchain components and in its applications in domains for which it is thought to be potentially disruptive [START_REF] Ben-Hamida | Blockchain for Enterprise: Overview, Opportunities and Challenges[END_REF]- [START_REF] Tian | An agri-food supply chain traceability system for china based on rfid blockchain technology[END_REF].
Looming over the daily increasing number of available blockchain technologies, Bitcoin is by far the most popular and widely used blockchain. However, in February 2017, a consortium of major I.T. and banking actors announced the creation of the Ethereum Alliance, a large project aiming at developing a blockchain environment dedicated to the enterprise based on the Ethereum blockchain. This event suddenly promoted the latter to the level of world wide known and trustworthy blockchain technologies -that up to then only included Bitcoin -making it an essential element of the blockchain world.
Despite an increasing notoriety and a monthly growing fiat money equivalent volume, it remains difficult to find publications dedicated to the establishment of economical and behavioural models aiming at describing internal activity on Ethereum, similarly to the studies performed for the Bitcoin network [START_REF] Athey | Bitcoin pricing, adoption, and usage: Theory and evidence[END_REF]- [START_REF] Chiu | The Economics of Cryptocurrencies-Bitcoin and Beyond[END_REF]. Concomitantly, global, time-resolved or major actors-resolved statistical indicators on the past internal activity and on the blockchain network topology are not commonly found in the literature, while they exist for Bitcoin [START_REF] Kondor | Do the rich get richer? An empirical analysis of the Bitcoin transaction network[END_REF]- [START_REF] Lischke | Analyzing the bitcoin network: The first four years[END_REF].
This paper aims at providing basic quantitative insights related to the activity on the Ethereum public network, from the origin, on July 2015, to August 2017. In the first part, correlations in time between internal variables, and between internal variables and the USD/ETH exchange rate are computed. A strong sensitivity of the activity to external events is highlighted. In a second part, the network is analyzed from a graph point of view. Its topology and robustness against attacks is investigated, as well as the distribution of internal activity among users. This leads to the identification of major actors in the blockchain, and to a detailed insight into their influence on the Ethereum economy.
II. THE ETHEREUM TECHNOLOGY
Similarly to Bitcoin, Ethereum is a public distributed ledger of transactions [START_REF] Wood | Ethereum: A secure decentralised generalised transaction ledger[END_REF]. Yet, the latter differs from the former by major features, among which is the existence of smart contracts. Smart contracts are pieces of code that execute on a blockchain. Users or other smart contracts can call their functions in order to store or transfer tokens, perform simple calculations, deal with multi-signature transactions, etc.
The existence of smart contracts allows us to distinguish five different kinds of transactions:
• User-to-user: a simple transfer of tokens from one address to another -both addresses can belong to the same physical user. • User-to-smart contract: a signed call to one of the functions of a smart contract. • Smart contract deployment: a transaction that contains the binary code of a compiled smart contract and sent to a special deployment address. • Smart contract-to-smart contract and smart contract-touser: user calling a smart contract might call another function of the same smart contract, or of another smart contract, or again transfer tokens to a user. These are called internal transactions, and their study is beyond the scope of this paper.
A. Blocks and transactions
In order to collect data, a public Geth [14] node was connected to the Ethereum public network. Once synchronised, the blockchain was stored for further analysis. Within this paper, we only deal with validated transactions, i.e. inserted into a block that was mined and added to the main chain before August 31 st 2017.
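A minimal sketch of such a collection step is given below, assuming a synchronised Geth node exposing its JSON-RPC interface locally and using the web3.py client; the endpoint, the block range and the categorisation helper are placeholders, and the exact accessor names depend on the web3.py version (e.g. getBlock/getCode instead of get_block/get_code in older releases).

```python
from web3 import Web3

# Assumed local Geth endpoint; replace with the actual RPC address.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

def classify(tx):
    """Rough transaction categories used in this paper (external transactions only)."""
    if tx["to"] is None:
        return "smart contract deployment"
    # Approximation: inspects the code at the latest block, not at the tx block.
    code = w3.eth.get_code(tx["to"])
    return "user-to-smart contract" if len(code) > 0 else "user-to-user"

records = []
for number in range(4_000_000, 4_000_010):            # placeholder block range
    block = w3.eth.get_block(number, full_transactions=True)
    for tx in block.transactions:
        records.append({
            "block": number,
            "timestamp": block.timestamp,
            "from": tx["from"],
            "to": tx["to"],
            "value_wei": tx["value"],                  # 1 ether = 10**18 wei
            "kind": classify(tx),
        })
print(len(records), "transactions collected")
```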
B. Transactions features
Two main transactions features are retained for the analysis.
• address: A hexadecimal chain of characters pointing to an account, that can either be a user or a smart contract.
Even though there is no direct link between an address and its user identity, some of them are publicly known to belong to major actors such as exchange platforms or mining pools. The correspondence can be found on open access blockchain explorer websites, such as Etherscan 1 .
• value: the amount of tokens, expressed in wei, transferred through the transaction. The ether/wei conversion rate is a hard-coded constant equal to 10^18. The time-resolved exchange rate between ether (ETH) and USD is provided by the Poloniex website API. The notions of uncles, gas and gas price, inherent to the block validation protocol on Ethereum, are not investigated in this paper. It is worth noting that although the user-to-user transactions gather almost two thirds of the total of all transactions, they carry almost 90% of the transferred amount of tokens. A detailed investigation of the use of smart contracts reveals that most of them have been called only once, but that a small fraction of them have been massively used; this explains the smallness of the number of smart contract deployments compared to the number of user-to-smart contract transactions.
IV. ACTIVITY ON THE ETHEREUM NETWORK
A. Evolution in time of transaction main features
Figure 1 displays the monthly total number of transactions and transferred value, respectively, for each of the three categories of transactions defined above. The first two, ranging within the same orders of magnitude, are plotted together for both kinds of plot ((a) and (c)).
1) Number of transactions:
A behavior common to the three categories, when it comes to the variation of the number of transactions in time, is a sharp increase from March to August 2017 (top two figures). A very similar trend is observed over the same period concerning the USD/ETH exchange rate, leading to the conjecture that these parameters are strongly correlated.
However, a careful examination of these variations reveals that two distinct time windows should be distinguished at this stage when investigating correlations between the transactions' internal features and external features on this network. Indeed, the activity on public blockchains such as Bitcoin or Ethereum, as they allow investing traditional currencies through exchange platforms, may be subject to the same sudden fluctuations as those that can be observed on common marketplaces after external events, such as marketing announcements or financial failures. In the present case, we can reasonably conjecture that there is a causal relationship between the creation of the Ethereum Alliance on February 28th 2017 and the sharp take-off of the above-mentioned features. Considering the renown of the initial partners, this announcement may have promoted Ethereum to a larger audience, even in the non-specialist public, and may have brought a massive interest from individuals resulting in an exponential growth of the activity in terms of number of transactions of all kinds. Hence, the strong correlation that could be calculated between features on a global time range, because of scale effects, may be biased and not reflect a normal behaviour.
To test this hypothesis we computed the Pearson correlation coefficient [START_REF] Mckinney | pandas: a foundational Python library for data analysis and statistics[END_REF] between the USD/ETH exchange rate and the number of each of the three kinds of transactions defined above, for four different aggregation time periods, and for two subsets of data that differ by their latest cut-off date: the first one includes all transactions from the creation date of the blockchain (July 31st 2015) up to the announcement date of the Ethereum Alliance (February 28th 2017), while the second one ends on August 31st 2017. Results are displayed in table II.
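The aggregation and correlation step can be sketched as follows with pandas; the DataFrame and column names (`txs`, `kind`, the USD/ETH series) are assumed to come from a prior collection step and are illustrative only.

```python
import pandas as pd

# `txs`: DataFrame with a datetime index and a 'kind' column (the three categories);
# `rate`: Series of the USD/ETH exchange rate with a datetime index.
def correlations(txs, rate, cutoff, freqs=("M", "W", "D", "H")):
    out = {}
    txs = txs[txs.index <= cutoff]
    rate = rate[rate.index <= cutoff]
    for freq in freqs:
        counts = (txs.groupby("kind")
                     .resample(freq)
                     .size()
                     .unstack(level=0)          # one column per transaction kind
                     .fillna(0))
        r = rate.resample(freq).mean()
        out[freq] = counts.apply(lambda col: col.corr(r))   # Pearson by default
    return pd.DataFrame(out)

# correlations(txs, usd_eth, cutoff="2017-02-28")   # window ending before the Alliance
# correlations(txs, usd_eth, cutoff="2017-08-31")   # full window
```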
When considering the entire blockchain lifetime (unbold figures), we observe a strong correlation coefficient between the exchange rate and both the user-to-user and the user-to-smart contract number of transactions for all aggregation time sizes (between 0.83 and 0.96), which is consistent with the visual impression discussed above. But when excluding the time range [March 2017-August 2017] (bold figures), such a strong correlation only remains for the user-to-user number of transactions, and for aggregation time sizes no shorter than a day (between 0.92 and 0.95). It turns out that this particular data set is the only one for which the exchange rate variation in time follows the bump observed between March and October 2016, which explains the low correlation coefficient for the two other kinds of transactions. As was conjectured, the Ethereum Alliance creation announcement seems to have been a game changer for the Ethereum internal activity.
2) Values: The total exchanged values by unit of time displayed on plots (c) and (d) of Figure 1 are shown on log scales for clarity. The peak of activity, in terms of number of transactions, in the period that follows the Ethereum Alliance creation translates here into an average multiplicative factor of 10 as for the total exchanged value through the user-to-user transactions (bottom left figure), compared to the period that precedes it. As for the range of value transferred through smart contract deployment, it spans two orders of magnitude on the whole blockchain lifetime time window, and shows no substantial correlation with any of the retained features within this study.
To emphasise the rise of interest that Ethereum benefited from between 2016 and 2017, we display in Table III the equivalent in USD of the total value that circulated within the blockchain during the months of June of these two years. The fluctuation of the average amount of tokens transferred per transaction bears no relation to the sudden increase of both the USD/ETH exchange rate and the number of transactions after the Ethereum Alliance creation announcement. The tremendous rise of the total value exchanged is thus a direct consequence of the internal activity increase in terms of number of transactions, and not of a behavior change among the individual addresses in terms of amount of tokens transferred through transactions. The macro perspective presented above can be complemented by an analysis of the transaction graph. The Ethereum blockchain graph is built by setting the addresses as nodes, the transactions as edges, and using a time window that includes all internal events from the first block on July 31st 2015 to August 31st 2017. User-to-user and user-to-smart contract transactions are different types of interaction; in this short paper, we thus limit the analysis to user-to-user transactions. The resulting graph contains 5,174,983 nodes (unique user addresses) and 33,811,702 edges (transactions).
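A sketch of this graph construction with networkx is given below; `records` is assumed to be the transaction list produced by the collection sketch above, restricted to the studied time window.

```python
import networkx as nx

# Nodes are addresses, edges are user-to-user transactions; parallel edges are
# kept so that repeated transfers between the same pair of addresses all count.
G = nx.MultiDiGraph()
for tx in records:
    if tx["kind"] == "user-to-user":
        G.add_edge(tx["from"], tx["to"], value_wei=tx["value_wei"])

print(G.number_of_nodes(), "addresses,", G.number_of_edges(), "transactions")
```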
1) Network scaling and robustness against attack: The topology is firstly analyzed. Random networks are modeled by connecting their nodes with randomly placed links, as opposed to scale-free networks [START_REF] Clauset | Power-law distributions in empirical data[END_REF], such as the Internet, where the presence of hubs is predominant. Following a scale-free architecture implies that the network is rather robust to isolated attacks, while remaining vulnerable to coordinated efforts that might shut down the important nodes. In order to understand potential vulnerabilities of the Ethereum Network, we will investigate the presence of central nodes.
In accordance with a previous study on the Bitcoin network in which it is shown to be scale-free [START_REF] Lischke | Analyzing the bitcoin network: The first four years[END_REF], and with what is commonly observed in real networks, a power law distribution of the node degree d of the form c • d^-α is expected, with c a constant. Such a fit in the case of Ethereum gives α = 2.32, which lies in the observed range for most real networks [START_REF] Clauset | Power-law distributions in empirical data[END_REF].
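Such an exponent can be estimated, for instance, with the discrete maximum-likelihood approximation of Clauset et al.; the sketch below applies it to the node degrees of the graph built above, with a minimum degree of 1 (the dedicated `powerlaw` package is an alternative).

```python
import numpy as np

# Discrete maximum-likelihood approximation of the power-law exponent,
# alpha ~= 1 + n / sum(ln(d_i / (d_min - 0.5))), applied to total node degrees.
degrees = np.array([d for _, d in G.degree() if d > 0])
d_min = 1.0
alpha = 1.0 + len(degrees) / np.sum(np.log(degrees / (d_min - 0.5)))
print("estimated power-law exponent alpha =", round(alpha, 2))
```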
2) Centrality on the Ethereum Network: In order to determine whether the activity is well spread among the users, or whether there exist major actors or activity monopoles, we make use of three different centrality indicators:
• In-Degree/Out-Degree: the number of incoming/outgoing edges a node is connected to; • Betweenness Centrality: an indicator that summaries how often a node is found on the shortest path between two other nodes and, when communities exist, how well it connects them; • Left Eigenvector Centrality: a measure of the influence of a node based on the node's neighbors centrality. Figure 2a depicts the network directed degree distribution discrete probability density p, i.e. the probability for a randomly picked node to show a certain in-(d i ) and out-(d o ) degree. The latter are plotted in logarithmic scale for clarity and, by convention, any initial in-(respectively out-) degree value of 0 is plotted with a -1 in-(respectively out-) degree coordinate, to preserve surface continuity. The probability associated with p is denoted P .
It appears that the great majority of users do a rather limited number of transactions, having an in-degree and an out-degree equal to 1 (30.0%), followed by users that just send transactions once, never receiving any (20.0%). Firstly, a radial anisotropy is seen, with larger values along the d i = d o line, which implies that the in- and out-degree distributions are not independent variables: with p(d i ) and p(d o ) following a power law distribution, it seems that p(d i , d o ) = p(d i ) • p(d o ) does not hold. These results suggest that, regarding the description of the degree distribution, more information on the blockchain network could be obtained using a more sophisticated model than a simple power law [START_REF] Bollobas | Directed scale-free graphs[END_REF], contrarily to what was assumed above.
Following these results, the degree spread among addresses is investigated. Figure 2b shows the cumulative percentage of in- and out-degree that the first 100 addresses represent over all users, in descending order of their in-degree or out-degree. It reveals that, out of more than 1 million addresses, just 20 addresses account for more than 60% of the transactions sent and 20% of the transactions received. It is then of interest to look for the identity of these addresses and to try to infer their public role on Ethereum.
Consequently, we identified the owner of each of the top 20 addresses in these two lists, and gathered them under three labels: Mining pool, Exchange platform and Unknown, which we assume means neither mining pool nor exchange platform. Results are displayed in Table IV. Among the top 20 addresses that send transactions are found 12 mining pools (60%), 5 exchange platforms (25%) and 3 unknown addresses (15%). The top 20 addresses that receive transactions are 7 exchange platforms or addresses related to one of them. The rest of the addresses are unknown. Since the mining retribution is sent to the pool main address only, we can conjecture that around 40% of transactions consist in token redistribution to miners that contribute to a pool. Similarly, because of the lack of services proposing direct payment in ether, it is likely that miners transfer their earned tokens to exchange platforms to convert them into other currencies, such as dollars or bitcoins.
The betweenness centrality of the over 1 million nodes lies within the range 0-1%, apart from two addresses for which it reaches nearly 15%. These nodes are important as a high value indicates that a significant number of transactions are connected to this node. How well they connect communities in the network is left for further investigations. Among the top 20 nodes in this category there are 10 exchange services related addresses and 3 mining pools.
Among the 21 unique addresses identified as most central, none of them belong to the 20 most central addresses in terms of eigenvector centrality. Because the eigenvector centrality awards higher score to nodes connected to other nodes showing a high connectivity, it can be concluded that the most central nodes, from this perspective, are individuals that interact often with major actors, rather than the latter interacting with themselves. Inspecting the interaction of services previously identified as central, according to the in-and outdegree and betweenness centrality, we compute for each of these 21 addresses the percentage of transactions in which they take part that connect each of them to other members of the group. It is found that none of them has more than 1.17% of outbound transactions within the group.
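The centrality indicators discussed in this section can be computed with networkx as sketched below; on a graph of this size exact betweenness is expensive, so a sampled approximation is used, and the sample size k is a placeholder.

```python
import networkx as nx

simple = nx.DiGraph(G)                     # collapse parallel edges of the multigraph

top_in = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:20]
top_out = sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True)[:20]

# k-sample approximation of betweenness centrality (k is a placeholder value);
# for directed graphs, networkx's eigenvector centrality is the left eigenvector
# centrality, i.e. based on in-edges, as used in this section.
betweenness = nx.betweenness_centrality(simple, k=1000, seed=0)
eigenvector = nx.eigenvector_centrality(simple, max_iter=500, tol=1e-8)

top = lambda scores: sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:20]
print("top receivers:", top_in[:3])
print("top senders:", top_out[:3])
print("top betweenness:", top(betweenness)[:3])
print("top left-eigenvector:", top(eigenvector)[:3])
```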
The time-independent network topology was investigated and, regarding the nodes' directed connectivity, a sharp asymmetry between the in- and out-degree distributions was noticed. A conjecture on the non-independence of these two features was established. Major actors in terms of number of transactions were identified, as opposed to the vast majority of addresses, which are used only once.
V. CONCLUSION
In this paper, quantitative indicators that summarize the internal activity on the Ethereum blockchain were presented.
The study of the temporal variation of transaction features revealed that the announcement of the Ethereum Alliance's creation initiated an increase in activity of several hundred percent, both in terms of the number of transactions and the amount of tokens exchanged per unit of time. The need for caution in the interpretation of time correlations in a blockchain network was thus highlighted.
The study of the transaction graph revealed that more than 97% of nodes have been engaged in fewer than 10 transactions. Conversely, 40 addresses, among which mining pools and exchange platforms, were found to account for more than 60% of the activity, leaving open the question of the health of the Ethereum economic ecosystem.
1 https://etherscan.io/
IV. ACTIVITY ON THE ETHEREUM NETWORK
A. Evolution in time of transaction main features
Fig. 1: Number of different transactions and value transferred over time. The gray line highlights the creation of the Ethereum Alliance.
Fig. 2: (a) Directed degree distribution (logarithmic color scale) upon in-degree and out-degree of user addresses expressed as probability; (b) Cumulated in-degree and cumulated out-degree for the first 100 addresses in descending order of their in-degree or out-degree.
Table I displays the global percentage of the number of transactions and the amount of tokens that each of the three kinds of transactions defined in Section II represents.
TABLE I: Proportions of transactions sent and value transferred through the three kinds of transactions.

Transaction type             Number of transactions    Value transferred
user-to-user                 64.6%                     90.5%
user-to-smart contract       34.3%                     9.5%
smart contract deployment    1.1%                      < 0.1%
TABLE II: Pearson correlation coefficient over time between the USD/ETH exchange rate and the number of transactions validated, for different aggregation periods (month, week, day and hour), and two time windows - bold figures highlight the time range ending before the creation of the Ethereum Alliance.
sharp increase from March to August 2017 (top two figures).
TABLE IV: Public status of the top 20 addresses according to different measures of centrality. Addresses labeled Unknown are assumed to be neither mining pools nor exchange platforms.
ACKNOWLEDGMENT
This research work has been carried out under the leadership of the Institute for Technological Research SystemX, and therefore granted with public funds within the scope of the French Program Investissements d'Avenir. |
01757087 | en | [
"sdu.astr.im"
] | 2024/03/05 22:32:13 | 2019 | https://insu.hal.science/insu-01757087/file/1-s2.0-S0094576517315473-main.pdf | Alain Hérique
Dirk Plettemeier
Caroline Lange
Jan Thimo Grundmann
Valérie Ciarletti
Tra-Mi Ho
Wlodek Kofman
Benoit Agnus
Jun Du
Alain Herique
Valerie Ciarletti
Tra-Mi Ho
Wenzhe Fa
Oriane Gassot
Ricardo Granados-Alfaro
Jerzy Grygorczuk
Ronny Hahnel
Christophe Hoarau
Martin Laabs
Christophe Le Gac
Marco Mütze
Sylvain Rochat
Yves Rogez
Marta Tokarz
Petr Schaffer
André-Jean Vieau
Jens Biele
Christopher Buck
Jesus Gil Fernandez
Christian Krause
Raquel Rodriguez Suquet
Stephan Ulamec
V Ciarletti
T.-M Ho
Valerie Ciarletti
Tra-Mi Ho
Christophe Le Gac
Rochat Sylvain
Raquel Rodriguez
A radar package for asteroid subsurface investigations: Implications of implementing and integration into the MASCOT nanoscale landing platform from science requirements to baseline design
Keywords: MASCOT Lander, Radar Tomography, Radar Sounding, Asteroid, Planetary Defense, AIDA/AIM
Introduction
The observations of asteroid-like bodies and especially their internal structure are of main interest for science as well as planetary defense. Despite some highly successful space missions to Near-Earth Objects (NEOs), their internal structure remains largely unknown [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF][START_REF] Herique | A direct observation of the asteroid's structure from deep interior to regolith: why and how?[END_REF][START_REF] Herique | A Direct Observation of the Asteroid's Structure from Deep Interior to Regolith: Two Radars on the AIM Mission[END_REF]. There is some evidence that an aggregate structure covered by regolith ("rubble pile") is very common for medium size bodies, but there are no direct observations. The size distribution of the constitutive blocks is unknown: is it fine dust, sand, pebbles, larger blocks, or a mixture of all of these? Observations of asteroid-like bodies hint at the existence of a whole range of variation between these very extreme objects. Some may be 'fluffballs' composed entirely of highly porous fine-grained material [START_REF] Thomas | Saturn's Mysterious Arc-Embedded Moons: Recycled Fluff?[END_REF]. There are also very large objects that appear to be at least somewhat cohesive [START_REF] Polishook | The fast spin of near-Earth asteroid (455213) 2001 OE84, revisited after 14 years: constraints on internal structure[END_REF], and possibly monoliths bare of any regolith layer [START_REF] Naidu | Goldstone radar images of near-Earth asteroids[END_REF]. Binary systems in their formation by evolution of asteroid spin state [START_REF] Rubincam | Radiative Spin-up and Spin-down of Small Asteroids[END_REF] appear to disperse, re-aggregate or reconfigure their constitutive blocks over time [START_REF] Jacobson | Dynamics of rotationally fissioned asteroids: Source of observed small asteroid systems[END_REF], leading to a complex geological structure and history [START_REF] Ostro | Radar Imaging of Binary Near-Earth Asteroid (66391)[END_REF][START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF][START_REF] Cheng | Asteroid impact and deflection assessment mission[END_REF]. This history includes components of separated binaries appearing as single bodies [START_REF] Busch | Radar observations and the shape of Near-Earth Asteroid[END_REF][START_REF] Scheeres | The geophysical environment of bennu[END_REF] as well as transitional states of the system including highly elongated objects [START_REF] Brozivić | Goldstone and Arecibo radar observations of (99942) apophis in 2012-2013[END_REF], contact binaries [START_REF] Pätzold | A homogeneous nucleus for comet 67p/Churyumov-Gerasimenko from its gravity field[END_REF][START_REF] Kofman | Properties of the 67p/Churyumov-Gerasimenko interior revealed by CONSERT radar[END_REF][START_REF] Biele | The landing(s) of Philae and inferences about comet surface mechanical properties[END_REF] and possibly ring systems [START_REF] Braga-Ribas | A ring system detected around the Centaur (10199) Chariklo[END_REF]. The observed spatial variability of the regolith is not fully explained and the mechanical behavior of granular materials in a low gravity environment remains difficult to model.
After several asteroid orbiting missions, these crucial and yet basic questions remain open. Direct measurements are mandatory to answer these questions. Therefore, the modeling of the regolith structure and its mechanical reaction is crucial for any interaction of a spacecraft with a NEO, particularly for a deflection mission. Knowledge about the regolith's vertical structure is needed to model thermal behavior and thus Yarkovsky (cf. [START_REF] Giorgini | Asteroid 1950 DA's Encounter with Earth in 2880: Physical Limits of Collision Probability Prediction[END_REF][START_REF] Milani | Long-term impact risk for (101955)[END_REF]) and YORP accelerations. Determination of the global structure is a way to test stability conditions and evolution scenarios. There is no way to determine this from ground-based observations (see [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF] for a detailed review of the science rationale and measurement requirements).
Radar Sounding of Asteroids
A radar operating remotely from a spacecraft is the most mature instrument capable of achieving the science objective to characterize the internal structure and heterogeneity of an asteroid, from sub-metric to global scale, for the benefit of science as well as planetary defense, exploration and in-situ resource prospection [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF][START_REF] Milani | Long-term impact risk for (101955)[END_REF][START_REF] Ulamec | Relevance of Philae and MASCOT in-situ investigations for planetary[END_REF]. As part of the payload of the AIM mission a radar package was proposed to the ESA Member States during the Ministerial council meeting in 2016 [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF][START_REF] Michel | European component of the AIDA mission to a binary asteroid: Characterization and interpretation of the impact of the DART mission[END_REF]. In the frame of the joint AIDA demonstration mission, DART (Double Asteroid Redirection Test ) [START_REF] Cheng | Asteroid impact and deflection assessment mission[END_REF], a kinetic impactor, was designed to impact on the moon of the binary system, (65803) Didymos, while ESA's AIM [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF] was designed to determine the momentum transfer efficiency of the kinetic impact and to observe the target structure and dynamic state.
Radar capability and performance are mainly determined by the choice of frequency and bandwidth of the transmitted radio signal. Penetration depth increases with decreasing frequency due to lower attenuation. Resolution increases with bandwidth. Bandwidth is necessarily lower than the highest frequency, and antenna size constraints usually limit the lowest frequency. These are the main trade-off factors for instrument specification, which also has to take into account technical constraints such as antenna accommodation or operation scenarios [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF].
The AIM mission would have had two complementary radars on board, operating at different frequencies in order to meet the different scientific objectives [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF]. A monostatic radar operating at higher frequencies (HFR) can achieve the characterization of the first ten meters of the subsurface with a metric resolution to identify layering and to link surface measurements to the internal structure. Deep interior structure tomography requires a low frequency radar (LFR) in order to propagate through the entire body and to characterize the deep interior. The HFR design is based on the WISDOM radar [START_REF] Plettemeier | Full polarimetric GPR antenna system aboard the ExoMars rover[END_REF][START_REF] Ciarletti | WISDOM GPR Designed for Shallow and High-Resolution Sounding of the Martian Subsurface[END_REF] developed for the ExoMars / ESA-Roskosmos mission and LFR is a direct heritage of the CONSERT radar designed for ESA's Rosetta mission.
HFR: High Frequency Radar for Regolith Tomography
The monostatic HFR radar on board the orbiter spacecraft is a high frequency synthetic aperture radar (SAR) to perform reflection tomography of the first tens of meters of the regolith with a metric resolution [START_REF] Herique | Direct Observations of Asteroid Interior and Regolith Structure: Science Measurement Requirements[END_REF]. It can image the shallow subsurface layering and connect the surface measurements to the internal structure. The HFR is a stepped frequency radar operating from 300 MHz to 800 MHz in nominal mode and up to 3 GHz in an optional mode. It inherits from the WISDOM radar and is optimized to study small bodies.
Table 1 summarizes the main characteristics and budgets of the radar. It provides a decimetric vertical resolution and better than one meter resolution in horizontal direction, depending on the spacecraft speed relative to the asteroid surface. This high resolution allows characterizing the spatial variation of the regolith texture, which is related to the size and mineralogy of the constituting grains and macroporosity.
A primary objective of the HFR within the AIM mission was the characterization of the regolith of Didymoon, the Didymos system's secondary body or moon of the primary, Didymain.The HFR was supposed to survey Didymoon before and after the DART impact, in order to determine the structure and layering of the secondary's shallow subsurface down to a few meters. The tomography of the DART artificial impact crater would further provide a better estimate of the ejected mass to model the momentum transfer efficiency. With a single acquisition sequence, Didymoon mapping provides the 2D distribution of geomorphological elements (rocks, boulders, etc.) that are embedded in the subsurface. Multipass acquisition and processing is required to obtain the 3D tomography of the regolith. Another primary objective is the determination of the dielectric properties of the subsurface of Didymoon. The dielectric permittivity can be derived from the spatial signature of individual reflectors or by analyzing the amplitude of the surface echoes.
Instrument Design
The HFR electronics (Figure 2) uses a heterodyne system architecture with two frequency generators forming a stepped frequency synthesizer. The transmitted wave as well as the local oscillator frequencies are generated separately and incoherently with phase-locked loop (PLL) synthesizers. Figure 1 shows a functional block schematic of the radar system. Its front-end mainly consists of a high output power transmitter and two dedicated receivers. The antenna is fed by a 0° and a 90° phase-shifted signal, using a 90° hybrid divider, to generate circular polarization for the transmitted wave. The transmitter output is muted during reception by switching off the power amplifier output, so as not to overload the receivers. A separate receiver processes each of the received polarizations. For the SAR operation mode, an ultra-stable frequency reference provides a stable reference to the digital and RF electronics. All modules are supplied by a dedicated DC/DC module, which provides all necessary supply voltages for the individual blocks from a single primary input voltage.
The receiver's superheterodyne architecture uses a medium intermediate frequency at the digitizer input. This ensures high performance by eliminating the 1/f noise, thereby improving noise and interference performance. A calibration subsystem allows for a calibration of the horizontal (H) and vertical (V) receiver regarding image rejection, inter-receiver phase and amplitude balance. The received H-and V-signals are compensated subsequently to ensure very high polarization purity.
The Digital Module (DM) is built around a Field Programmable Gate Array (FPGA) and microcontroller. It controls and manages the data flow of the instrument. This includes digital signal processing of the measurement data, short time data accumulation, transfer to the spacecraft and processing of control commands for radar operation.
The antenna system comprises a single antenna, which transmits circular polarization and receives both linear polarizations. This ultra-wideband dual-polarized antenna system operates in the frequency range from 300 MHz to 3.0 GHz. Figure 3 shows a 3D model and the corresponding antenna prototype. Antenna diagrams are shown in Figure 4.
Operations and Operational Constraints
The requirements for the HFR instrument are strongly driven by the acquisition geometry. Indeed, Synthetic Aperture Radar reflection tomography in 3D requires observations of different geometries and can only be achieved by constraining the spacecraft motion and position with respect to the observed target. For each acquisition geometry, the radar acquires the signal returned by the asteroid as function of propagation time, which is a measure of the distance from the spacecraft to the observed body. This range measurement resolves information in a first spatial dimension. The resolution in that dimension is given by the bandwidth of the radar signal and is significantly better than one meter (~30 cm in vacuum).
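The quoted value follows from the usual pulse-compression relation Δr = c / (2B); the short computation below, using the 500 MHz nominal bandwidth listed in Table 1, is only a numerical illustration of that relation.

c = 3.0e8                  # speed of light in vacuum, m/s
bandwidth = 800e6 - 300e6  # nominal HFR bandwidth (300-800 MHz), Hz

range_resolution = c / (2.0 * bandwidth)   # classical radar range resolution
print(f"range resolution in vacuum: {range_resolution:.2f} m")   # ~0.30 m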
For kilometric-size asteroids, the rotational period is generally in the order of a few hours, much smaller than the spacecraft orbital period during remote observation operations within the Hill sphere. In the Didymos system, the main body's rotation period is 2.3 h. Its moon orbits the primary in 11.9 h and it is expected to rotate synchronously [START_REF] Michel | Science case for the asteroid impact mission (AIM): A component of the asteroid impact & deflection assessment (AIDA) mission[END_REF]. The spacecraft's orbital period in a gravitationally bound orbit of 10 km radius is nearly two weeks. It can also be at rest relative to the system's barycenter while on a heliocentric station-keeping trajectory.
For the processing, we consider the body-fixed frame with a spacecraft moving in the asteroid sky (Figure 7). Thus, the relative motion along the orbit plane between the spacecraft and the moon resolves a second dimension by coherent Doppler processing (Figure 5) [START_REF] Cumming | Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation[END_REF]. This brute-force SAR processing takes into account the observation geometry to give 2D images of the body surface, mixing surface and subsurface features in the same pixel (Figure 5c). For a spherical body this induces an ambiguity between the North and South hemispheres [START_REF] Hagfors | Mapping of planetary surfaces by radar[END_REF], which corresponds to the aliasing of the North target to the South hemisphere in Figure 5a. The resolution is determined by the length of the observation orbit arc (i.e., the Doppler bandwidth) and is better than one meter for an arc of 20° in longitude. For the Didymos system, the surfaces of the primary and secondary object show very different Doppler behavior due to their different rotation periods. This makes it possible to resolve ambiguities when both are inside the radar's field of view.
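To make the idea of coherent summation after compensation of the propagation delay concrete, the following back-projection sketch focuses each candidate surface point by re-phasing the stepped-frequency echoes; the function signature, array layout and geometry are illustrative assumptions, not the actual HFR processing chain.

import numpy as np

def backproject(echoes, sc_positions, freqs, grid_points):
    """Brute-force back-projection sketch for a stepped-frequency radar.

    echoes:       complex array (n_pulses, n_freqs), one frequency sweep per position
    sc_positions: array (n_pulses, 3), spacecraft positions in the body-fixed frame [m]
    freqs:        array (n_freqs,), transmitted frequencies [Hz]
    grid_points:  array (n_points, 3), candidate scatterer positions [m]
    """
    c = 3.0e8
    image = np.zeros(len(grid_points), dtype=complex)
    for p, position in enumerate(sc_positions):
        # two-way propagation delay from this spacecraft position to every grid point
        delay = 2.0 * np.linalg.norm(grid_points - position, axis=1) / c
        # compensate the expected phase at each frequency and sum coherently:
        # echoes from a real scatterer add up in phase only at its true location
        phase = np.exp(2j * np.pi * np.outer(delay, freqs))   # (n_points, n_freqs)
        image += (phase * echoes[p]).sum(axis=1)
    return np.abs(image)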
In addition, a spacecraft position out of the equatorial plane breaks the symmetry. Shifting the signal partially out of the orbital plane reduces the North-South ambiguity, as the alias is spread out and only a less powerful alias remains in the other hemisphere (South in Figure 5a). The accuracy requirement for the spacecraft pointing is typically 5°, while the reconstructed trajectory accuracy requirement is in the order of hundreds of meters. The orbit restitution accuracy can be improved by the SAR processing itself using autofocus techniques [START_REF] Carrara | Spotlight Synthetic Aperture Radar: Signal Processing Algorithms[END_REF].
To achieve 3D tomography, the third dimension to be resolved needs to be orthogonal to the orbit plane (Figure 7). To do so, the HFR instrument performs several passes at different latitudes. Typically, 20 passes allow a metric resolution. The spacecraft position evolves in a declination and right ascension window centered around 30° radar incidence on the observed target point (Figure 7). The extent of this window is about 20°. Each pass lasts for one to two hours and is traversed at close to constant declination. The spacecraft is in very slow motion, a few mm/s, along this axis orthogonal to the orbit plane. Such a velocity is difficult to achieve in operations. A proposed solution is to combine this slow motion with a movement along the orbit axis. All the passes can be done in a single spacecraft trajectory. Each pass corresponds to the period when the HFR is facing the moon that is orbiting around the main body (Figure 7). In this multi-pass scenario, the resulting resolution for the third direction is 1 m, and it is the limiting one (Figure 6).
The distance between the HFR instrument and its target is limited by the radar link budget for the upper boundary and by the speed of the electronic system for the lower boundary. HFR is expected to operate from 1 km up to 35 km, the nominal distance being 10 km.
LFR: Low Frequency Radar for Deep Interior Sounding
Deep interior structure tomography requires a low frequency radar to propagate through the entire body. The radar wave propagation delay and the received power are related to the complex dielectric permittivity (i.e. composition and microporosity) and the small-scale heterogeneities (scattering losses), while the spatial variation of the signal and multiple propagation paths provide information on the presence of heterogeneities (variations in composition or porosity), layers, large voids or ice lenses. A partial coverage will provide 2D cross-sections of the body; a dense coverage will allow a complete 3D tomography. Two instrument concepts can be considered (Figure 8): a monostatic radar like MARSIS/Mars Express (ESA) [START_REF] Picardi | Radar soundings of the subsurface of Mars[END_REF], analyzing radar waves transmitted and received by the orbiter after reflection at the asteroid's surface and internal structure, or a bistatic radar like CONSERT/Rosetta (ESA) [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF], analyzing radar waves transmitted by a lander, propagated through the entire body and received by the orbiter. The monostatic radar sounder requires very low operation frequencies, necessitating the use of large antennas. It is also more demanding in terms of mission resources (mass, data flow, power), driving all the mission specifications. In contrast to the monostatic approach, a bistatic radar can use slightly higher frequencies, simplifying the accommodation on the carrier mission as well as on the surface package. The bistatic low frequency radar measures the wave propagation between the surface element and an orbiter through the target object, like Didymoon. It provides knowledge of the deep structure of the moon, key information needed to model binary formation and stability conditions. The objective is to discriminate monolithic structures from building blocks, to derive the possible presence of various constituting blocks and to provide an estimate of the average complex dielectric permittivity. This information relates to the mineralogy and porosity of the constituting material. Assuming a full 3D coverage of the body, the radar determines 3D structures such as deep layering, spatial variability of the density, of the block size distribution and of the permittivity. As a beacon on the surface of Didymoon, it supports the determination of the binary system's dynamic state and its evolution induced by the DART impact (a similar approach to that used for the localization of the Philae lander during the Rosetta mission [START_REF] Herique | Philae localization from CONSERT/Rosetta measurement[END_REF]).
Instrument Design
The LFR radar consists of an electronic box (E-Box), shown in Figure 13, and an antenna set on each spacecraft (i.e. lander and orbiter). Both electronic units are similar: two automata sending and receiving a BPSK code modulated at 60 MHz in time-sharing (Figure 9 and Figure 14). This coded low frequency radar is an in-time transponder inherited from CONSERT on board Rosetta (ESA) [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF]: in order to measure the propagation delay accurately, a first propagation path from the orbiter to the lander is processed on board the lander. The detected peak is used to resynchronize the lander to the orbiter. A second propagation, from the lander to the orbiter, then constitutes the science measurement itself (Figure 10). This concept, developed for CONSERT on board Rosetta [START_REF] Kofman | The Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT). A short description of the instrument and of the commissioning stages[END_REF], allows measuring the propagation delay with a raw accuracy better than 100 ns over a few tens of hours of acquisition, using a quartz oscillator with a frequency stability in the range of 10⁻⁷. This accuracy can be increased up to 20 ns by on-ground post-processing [START_REF] Pasquero | Oversampled Pulse Compression Based on Signal Modeling: Application to CONSERT/Rosetta Radar[END_REF], yielding a typical accuracy better than a few percent on the average dielectric permittivity. The LFR characteristics and budgets are summarized in Table 1. As the LFR antennas cannot be deployed immediately after separation from the carrier spacecraft, due to the need to relocate from the landing area to the LFR operating area, an antenna deployment mechanism is required, which needs to be operable in the low gravity environment on the surface of Didymoon. Astronika has designed a mechanical system deploying a tubular boom with a total mass of ~0.25 kg. It is able to deploy the 1.4 m antennas consuming only ~2 W for ~1 minute.
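The delay measurement relies on pulse compression: the received signal is correlated with the known transmitted code and the position of the correlation peak gives the propagation delay. The sketch below illustrates the principle with arbitrary code length, sampling and noise values that are not the CONSERT or LFR parameters.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative pseudo-random BPSK code (+1/-1 chips) and an echo delayed by 137 samples.
code = rng.choice([-1.0, 1.0], size=255)
true_delay = 137
received = np.zeros(1024)
received[true_delay:true_delay + code.size] += code
received += 0.5 * rng.standard_normal(received.size)   # additive receiver noise

# Pulse compression: cross-correlate the received signal with the known code.
correlation = np.correlate(received, code, mode="valid")
estimated_delay = int(np.argmax(np.abs(correlation)))

print("estimated delay in samples:", estimated_delay)   # expected: 137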
On the lander, the main antenna (V shape in Figure 11 and Figure 12) is deployed after reaching its final location. It provides linear polarization with high efficiency for the sounding through the body. A secondary antenna set with lower efficiency is deployed just after lander separation to allow operations in visibility during descent and lander rebounds, and for secondary objectives and operational purposes. The use of circular versus linear polarization induces limited power losses but reduces operational constraints on the spacecraft attitude. The LFR antenna on the orbiter is composed of four booms at the spacecraft corners in order to provide circular polarization.
Operations and Operational Constraints
Tomographic sections in bistatic mode are created in the plane swept by the moving line of sight through the target object between the lander and the orbiter passing underneath. A full volume tomography is then assembled from a succession of several (as many as feasible) such pass geometries, adjusted by changes in the orbiter trajectory. Considering the rugged topography of asteroids and the fact that all sections of the target object converge at the lander location, it is advantageous to have multiple landers and/or lander mobility, in order to ensure full volume coverage and uniform resolution of the target's interior. Lander mobility is particularly useful in binary systems, where the lines of sight as well as the lander release, descent, landing, and orbiter pass trajectories can be constrained by the two
objects and their orbital motions relative to the spacecraft. The complex shapes and gravity fields of contact binaries or extremely elongated objects can create similar constraints.
The geometric constraints on the operational scenario for the bistatic experiment are driven by scientific and technical requirements on both the orbiter and the lander platform. Considering simultaneously the baseline mission science data volume, the orbiter minimum mission duration in the frame of the AIDA mission, and the worst-case power constraints on board the lander, it is not possible to ensure full coverage of Didymoon according to the Nyquist criterion, i.e. λ/2 at the surface of the body. Under this constraint, when a full tomography of the body [START_REF] Barriot | A two dimensional simulation of the CONSERT experiment (Radio tomography of comet Wirtanen)[END_REF][START_REF] Pursiainen | Detection of anomalies in radio tomography of asteroids: Source count and forward errors[END_REF] is not feasible with a priori information [START_REF] Eyraud | Imaging the interior of a comet from bistatic microwave measurements: Case of a scale comet model[END_REF], statistical tomography allows the characterization of heterogeneity scales [START_REF] Herique | A characterization of a comet nucleus interior[END_REF] and the retrieval of composition information (for CONSERT see also [START_REF] Ciarletti | CONSERT constrains the internal structure of 67p at a few meter size scale[END_REF] and [START_REF] Herique | Cosmochemical implications of CONSERT permittivity characterization of 67p/CG[END_REF]). However, it is likely that a combination of higher data volume through additional passes, allocation of more ground station time, or mission extension, together with any better-than-worst-case power availability on the lander platform, can result in a much better tomographic coverage.
To achieve a good coverage of Didymoon, seven to ten tomography slices need to be collected, with each measurement sequence taking about 10 hours. Those slices must also be sufficiently separated in space. Thus, the spacecraft has to be able to operate at various latitudes relative to Didymoon.
A single acquisition sequence is composed of a sequence of visibility, occultation and again visibility between orbiter and lander. The first visibility period is mandatory for time synchronization between the orbiter and the lander platform. The science measurements are performed during the occultation period. The last visibility slot is reserved for calibration. The accuracy of the orbiter trajectory reconstruction needs to be typically a few meters, whereas the attitude reconstruction accuracy should be in the order of about 5°. The radar link budget constrains the operational distance from the orbiter unit to the lander unit to about 10 km.
Concerning the lander, proper operation of the LFR imposes constraints on the landing site selection (Figure 15 and Figure 16). The acquisition geometry is constrained by Didymoon's motion around the main body. Most likely, it is in 1:1 spin-orbit resonance, which means that the side facing the main body is always the same, as with Earth's Moon. With a spacecraft moving along the latitude axis, the lander needs to land near the equator of Didymoon, i.e., between -15° and +15° latitude, in order to achieve alternating visibility and occultation periods. In that case, the orbiter spacecraft will be able to cover a range of latitudes between -25° and +25°. This alternation also constrains the longitude of the landing site to a zone between -120° and +120°, with optimal science return between -60° and +60°. The landing site is also constrained by the lander platform's solar energy availability, which means having to avoid eclipses by the main body, resulting in a "forbidden zone" between -45° and +45° of longitude.
Figure 16: LFR landing site possible areas; green: optimal, yellow: acceptable and red: impossible. 0° longitude corresponds to the point facing the main body of the Didymos system.
Integration into the MASCOT2 Lander Platform
The MASCOT2 lander for the AIM mission is derived from the MASCOT lander, originally designed for and flying on the HAYABUSA2 mission to asteroid (162173) Ryugu [START_REF] Ho | MASCOT -The Mobile Asteroid Surface Scout Onboard the Hayabusa2 Mission[END_REF]. In order to integrate a radar instrument into the lander system, originally envisaged for short lifetime and mobile scouting on an asteroid surface, several changes are incorporated to cope with the measurement and instrument requirements of the radar package. Table 2 shows a summary of the main differences and commonalities between the original MASCOT and the proposed MASCOT2 variant of the lander platform [START_REF] Lange | MASCOT2 -A Small Body Lander to Investigate the Interior of 65803 Didymos' Moon in the Frame of AIDA/AIM[END_REF][START_REF] Biele | MASCOT2, a Lander to Characterize the Target of an Asteroid Kinetic Impactor Deflection Test Mission[END_REF].
The LFR E-Box (Figure 13) is designed in order to be compatible with the MASCOT2 lander platform's available volume. The MASCOT2 lander design is ideally suited to incorporate different suites of payloads, which means that a mechanical integration of the LFR E-Box would have no impact on the overall accommodation. The integration of the LFR's primary antennas and their deployment mechanisms requires a slightly larger effort due to volume restrictions in the bus compartment of the lander.
The antenna system is designed to match both the requirements of the MASCOT2 lander and the influence of the surface and subsurface in the vicinity of the lander. EM simulations are used to verify the suitability of the antenna system accommodation.
Figure 17 shows a simulation setup and a typical 3D radiation pattern assuming a flat surface. Figure 18 shows the antenna far-field diagrams in two planes perpendicular to the surface for 50 MHz, 60 MHz and 70 MHz.
From an electrical point of view, the integration of the instrument into the lander platform is challenging in two ways: (1) the operational concept along with the overall architecture had to be optimized in order to be compatible with a long-duration high-power measurement mode and (2) precise timing is needed in order to achieve usable instrument characteristics. Both aspects center on the energy demand of the LFR instrument and related services, which stems from the need to support many repeated, long continuous runs. In contrast, MASCOT aboard HAYABUSA2 is designed to fulfil a short-duration scouting mission. It is expected to operate only on two consecutive asteroid days of ~7.6 h each. The design-driving power consumption results from the operations of the MicrOmega instrument (~20 W total battery power for only ~½ hr) and the mobility unit (up to ~40 W for less than 1.5 s). The energy for this mission is completely provided by a non-rechargeable battery. The choice of primary batteries was partly driven by the fact that such a power system operates independently of the topographic illumination [START_REF] Grundmann | One Shot to an Asteroid -MASCOT and the Design of an Exclusively Primary Battery Powered Small Spacecraft in Hardware Design Examples and Operational Considerations[END_REF]. A short mission duration also implies few opportunities and little time for ground-loop intervention, thus the power subsystem operates permanently hot-redundant and provides many automatic functions. This leads to an elevated idle power consumption of about 6.5 W, rising to about 10 W with the continuous activity of the MARA and MasMAG instruments. The simplicity of this concept comes at the expense of a very significant thermal design and control effort, required to keep the primary battery cold during interplanetary cruise in order to prevent self-discharge, and warm during on-asteroid operation to ensure maximum use of the available capacity. The support of the LFR with its long-duration high-power measurement mode requires modifications to the platform design due to thermal aspects.
The MicrOmega (MMEGA, [START_REF] Pilorget | NIR reflectance hyperspectral microscopy for planetary science: Application to the MicrOmega instrument[END_REF]) instrument, accommodated at the respective location in the original MASCOT lander, requires cold operation due to its infrared sensor and optics. The LFR E-Box can operate in the typical "warm" conditions of other electronics modules (Figure 19). Therefore, its mass can be used together with the bus E-Box and mobility mechanisms to augment thermal energy storage around the battery, improving the mass to surface ratio of the warm compartment and saving electrical energy which would otherwise be required for heating. For this purpose, the cold compartment on the payload side of the lander was reduced to a "cold corner" or pocket around the camera, MasCAM [START_REF] Jaumann | The Camera of the MASCOT Asteroid Lander on Board Hayabusa2[END_REF], and the radiometer, MARA [START_REF] Grott | The MASCOT Radiometer MARA for the Hayabusa 2 Mission[END_REF].
The accommodation of the magnetometer, MasMAG [START_REF] Herčík | The MASCOT Magnetometer[END_REF], as on MASCOT was considered for optional use together with the proposed magnetometer experiments aboard COPINS, sub-satellites to be inserted into orbit in the Didymos system by the AIM spacecraft [START_REF] Walker | Cubesat opportunity payload inter-satellite network sensors (COPINS) on the ESA asteroid impact mission (AIM), in: 7th Interplanetary Cubesat Workshop[END_REF]. A triaxial accelerometer, DACC, was added in order to observe the interaction of the lander with the surface regolith during touch-down, bouncing and self-righting, reaction to motion during deployment operations, and possibly the DART impact shock wave.
For the long-duration MASCOT2 mission, the mission energy demand will be orders of magnitude higher due to the repeated long continuous LFR runs. Thus, a rechargeable battery and photovoltaic power is required. The design-driving power consumption results from the LFR instrument operating for several hours at a time (see Table 1) defining the minimum battery capacity, and the simultaneous operation of the dual mobility mechanism. Both have a similar peak power demand, defining the power handling capability.
A deployable photovoltaic panel is necessary to satisfy the energy demand of LFR operations without overly long recharging periods between LFR sounding passes. The panel will be released after the MASCOT2 lander has relocated to the optimal LFR operations site on Didymoon, self-righted there, and deployed the LFR antennas.
The possibility to recharge the battery and wait for ground loop intervention allows mainly cold-redundant operations and reduces the need for highly sophisticated autonomy within the power subsystem. This alone greatly reduces idle power consumption, and thus battery capacity requirements to survive the night. Further reduction of idle consumption is achieved by optimizing the electronics design.
However, the energy demand of the LFR is such that a much deeper discharge of the battery will occur than would usually be accepted for Earth-orbiting spacecraft. This will reduce battery lifetime. Thus, some fundamental autonomous functions are used to protect the system from damage by short circuits or deep discharge of the battery and to ensure a restart after the battery has accumulated sufficient energy. For this purpose, the photovoltaic power conversion section charging the battery is self-supplied and does not require battery power to start up. In case the battery gets close to the minimum charge level, e.g. when an LFR run cannot be properly terminated due to an unforeseen event, all loads are disconnected so that all incoming photovoltaic power can be used for recharging. State-of-the-art rechargeable batteries can operate sufficiently well and with only minor operational restrictions at cell temperatures from about -20°C to +50°C, nearly as wide as the temperature range of the primary battery of MASCOT, but with much better performance in cold conditions below +20°C. In case the temperature is too low to allow the maximum charging rate, all excess photovoltaic power is diverted to a battery heater [START_REF] Dudley | ExoMars Rover Battery Modelling & Life Tests[END_REF].
During use and in favorable illumination on the ground, battery life extending charge control is applied [START_REF] Neubauer | The Effect of Variable End of Charge Battery Management on Small-Cell Batteries[END_REF].
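As an illustration only, the protection logic described above can be summarized in a few lines of control-loop code; the thresholds, the 80/20 power split and the function interface are invented for this sketch and do not reflect the actual MASCOT2 power subsystem.

MIN_STATE_OF_CHARGE = 0.15   # assumed load-shedding threshold (illustrative)
COLD_LIMIT_C = -20.0         # assumed temperature below which charging is throttled

def power_control_step(state_of_charge, cell_temp_c, pv_power_w, loads_connected):
    """One control step: returns (loads_connected, heater_power_w, charge_power_w)."""
    if state_of_charge <= MIN_STATE_OF_CHARGE:
        # Protect the battery: shed all loads so incoming PV power goes to recharge.
        loads_connected = False
    if cell_temp_c < COLD_LIMIT_C:
        # Too cold for the maximum charging rate: divert excess PV power to the heater.
        heater_power_w = 0.8 * pv_power_w
        charge_power_w = 0.2 * pv_power_w
    else:
        heater_power_w = 0.0
        charge_power_w = pv_power_w
    return loads_connected, heater_power_w, charge_power_w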
As described, the demands of a long-lived, high-energy mission can be met with a deployable photovoltaic panel. As an alternative, a moderate enlargement of the MASCOT-like box shape was also considered as an option for the AIM mission. It could provide the same daily average power level. Depending on which sides of the lander are enlarged, the immediately available photovoltaic power can be adjusted within the daily cycle. A flat shape with a top plate area similar to the deployed panel of MASCOT2 increases power generation around noon, while higher or wider sides increase power at sunrise and sunset (assuming a clear view to the horizon at the landing site). The increased volume, if provided by the carrier mission, can be used to accommodate additional instruments or a larger battery, also providing more robustness during the relocation phases. Depending on the antenna design, relocation for more extensive LFR tomography also becomes possible. It is thus possible to combine investigations of the interior and the surface mineralogy as
carried out by MASCOT. The mass increase is little more than the instrument's, i.e. the bus mass would increase by about 10% with the addition of one relatively large instrument. If the carrier mission provides still more mass allowance, a set of multiple MASCOT type landers based on a common infrastructure but carrying different instruments and individually optimized for these can also be considered [START_REF] Grundmann | Capabilities of GOSSAMER-1 derived Small Spacecraft Solar Sails carrying MASCOT-derived Nanolanders for In-Situ Surveying of NEAs[END_REF].
Design Methodologies for Lander Design Reuse
The "mother" mission of MASCOT, the HAYABUSA2 mission, has benefited greatly from its predecessor HAYABUSA. It reused main portions of the design and optimized its main weaknesses based on lessons learned, such as the antenna, the orientation control and engine as well as the sampling approach [START_REF] Tsuda | Flight status of robotic asteroid sample return mission HAYABUSA2[END_REF]. Other than this particular example, and except for the well-known and documented reuse of the Mars Express Flight Spare Hardware for the Venus Express mission [START_REF] Mccoy | Call for Ideas for the Re-Use of the Mars Express Platform -Platform Capabilities[END_REF][START_REF] Mccoy | The Venus express mission[END_REF], the MASCOT2 re-use exercise is the only known system level reuse of a previously flown deep space system in a new environment and with an almost completely new science case, as described above. The fostered and maximized re-use of an already very precisely defined system for a very different mission recreates the unusual situation of an extremely wide range of subsystem maturity levels, from concepts to already flown designs. The integration of new instruments like the LFR radar is one of such lowermaturity cases. New design methodologies based on Concurrent Engineering and Model Based Systems Engineering methods can enhance the redesign, instrument integration and system adaptation process and make it faster and more cost efficient [START_REF] Lange | Systematic reuse and platforming: Application examples for enhancing reuse with modelbased systems engineering methods in space systems development[END_REF][START_REF] Lange | A model-based systems engineering approach to design generic system platforms and manage system variants applied to mascot follow-on missions[END_REF][START_REF] Braukhane | Statistics and Evaluation of 30+ Concurrent Engineering Studies at DLR[END_REF]. In addition, the general use case of a small landing package piggy-backing on a larger main mission is very lucrative and widely applicable in the context of planetary defense and small body exploration, making the platform approach, already known from Earth orbiting missions, a feasible strategy. A strategically planned MASCOT-type lander platform with an ever-increasing portfolio of technology options will further enhance the applicability of the small lander concept to all kinds of missions. Several of the technologies specifically required to realize the radar mission scenario as described above fall into this category. Other technologies such as advanced landing subsystems and new mobility concepts are also interesting and currently under development [START_REF] Lange | Exploring Small Bodies: Nanolander and -spacecraft options derived from the Mobile Asteroid Surface Scout[END_REF].
Conclusions
Direct measurements are mandatory to get a deeper knowledge of the interior structure of NEOs. A radar package consisting of a monostatic high frequency radar and a bistatic low frequency radar is able to perform these direct measurements. Both radar systems provide a strong scientific return by the characterization of asteroid's internal structure and heterogeneity. Whereas the LFR provides a tomography of the deep interior structure, the HFR maps the shallow subsurface layering and connects the surface measurements to the internal structure. In addition to this main objective, the radars can support other instruments providing complementary data sets. The nanolander MASCOT2 demonstrates, by carrying the mobile part of the bistatic radar, its flexibility. It can carry instruments with a wide range of maturity levels using state of the art design methodologies. As shown, a moderate redesign allows for long-term radar runs in contrast to the original short-term operation scenario of MASCOT.
The presented radar package and the MASCOT2 lander have been developed at phase A/B1 level in the frame of ESA's AIM mission study, although the mission has not been confirmed and the next steps to establish such a mission are not yet clear. The modification of the MASCOT lander platform into a fixed but long-duration radar surface station demonstrates the large range of applications for small landing packages on small airless bodies [START_REF] Ulamec | Landing on Small Bodies: From the Rosetta Lander to MASCOT and beyond[END_REF]. The radar instrument package presented has a high maturity and is of major interest for planetary defense as well as for NEO science.
Figure 2: Module stack of the HFR system prototype: DC/DC Module, Digital Module with FPGA, microcontroller and signal level converters, Low Power Module with synthesizer, receiver and switches for calibration, and High Power Module with power amplifier and preamplifier assemblies.
Figure 3: 3D model (left) and prototype (right) of the HFR ultra-broadband dual polarized antenna.
Figure 4: Simulated antenna pattern of HFR antenna system (E-plane).
Figure 5: HFR mono-pass impulse response on Didymoon's surface map for a point target located at 20° latitude and 180° longitude (a). The impulse response power is shown by color mapping, in dB. The same impulse response presented on a sphere portion that represents the surface of Didymoon (b) and shown in 3D (c). Note that a clear ambiguity along the vertical axis remains in a mono pass. The color scale corresponds to a dynamic range of 100 dB and exaggerates signal distortions. This measurement is simulated along an orbit arc of 20°, considering an isotropic point target located on Didymoon's surface and taking into account propagation delay and geometrical losses. The simulation was done in the frequency domain using the instrument characteristics listed in this paper. A SAR processor, corresponding to a coherent summation after compensation of the propagation delay, processes the simulated measurements.
Figure 6: HFR impulse response with 30 passes for a point target located on Didymoon's surface at 30° latitude and 90° longitude. The HFR observation window is chosen so that it has a 30° incidence angle with the target. The color mapping, in dB, shows the impulse response power; (a) presents a view of the radial/along track plane while (b) presents a view of the radial/across track plane; (c) presents the same impulse response planes in 3D, including the tangent plane (across/along track). The color scale corresponds to a dynamic range of 100 dB. This dynamic range exaggerates signal distortions.
Figure 7:
Figure 8: Bistatic (left) and monostatic (right) radar configuration, artist view from CONSERT/Rosetta. From [1]. Credit: CGI/Rémy Rogez; shape model: Mattias Malmer CC BY SA 3.0; image source: ESA/Rosetta/NAVCAM, ESA/Rosetta/MPS.
Figure 9: Block diagram of the LFR instrument, orbiter (top), lander (bottom).
Figure 10: Lander synchronization: effect on the measured signal taking into account the periodicity of the calculated signal.
Figure 11: Lander antennas: V-shaped dipole and secondary dipole antenna. MASCOT2 accommodation.
Figure 12: Antenna in tubular boom technology, general architecture with basic subassemblies: (1) Structure (2) Tubular boom (3) Tubular boom guidance system (4) Drive and damping unit (5) Lock and release mechanism (6) Electrical connection.
Figure 13: LFR electronic box - housing, global view.
Figure 14: Block schematic of the LFR system architecture showing the electronic box including transmitter (Tx), receiver (Rx) and digital module.
Figure 15: Definition of the Didymoon reference system.
Figure 17: Simulated 3D LFR radiation pattern inside (lower hemisphere) and outside (upper hemisphere) the asteroid at 60 MHz in case of a flat surface, assuming a relative permittivity of 5.
Figure 18: Simulated antenna patterns of the MASCOT2 antenna system above ground. Left: Φ=0° (perpendicular to the y-axis); right: Φ=90° (perpendicular to the x-axis).
Figure 19: Detailed view of the MASCOT2 platform showing the accommodation of LFR, including E-box and antenna systems.
Table 1: Main characteristics and performance of the bistatic low frequency radar and the monostatic high frequency radar.

                        Bistatic Radar (LFR)                      Monostatic Radar (HFR)
                        Orbiter              Lander
Frequency (nominal)     50-70 MHz                                 300-800 MHz
Frequency (extended)    45-75 MHz                                 up to 3 GHz
Signal modulation       BPSK                                      Step frequency
Resolution              10-15 m (1D)                              1 m (3D)
Polarization            Circular (AIM)       Linear (Mascot)      Tx: 1 circular; Rx: OC and SC
Tx power                12 W                                      20 W
Pulse repetition        5 seconds                                 1 second (typical)
Sensitivity             Dynamic = 180 dB                          NEσ0 = -40 dB.m²/m²
Mass
  Electronics           920 g                920 g                830 g
  Antenna               470 g                230 + 100 g          1560 g
  Total w/o margin      1390 g               1250 g               2390 g
Power max / mean        50 W / 10 W          50 W / 10 W          137 W / 90 W
Typical data (Gbit)     1                    0.3                  300
Table 2: Main differences and commonalities of the proposed MASCOT2 lander.

Differing attribute             MASCOT                                            MASCOT2
Main Science Case               surface composition and physical properties      internal structure by radar tomography
                                mapper
Landing site                    restricted by thermal and communications         restricted by measurement requirements
                                reasons
Target body diameter            890 m                                             170 m
Rotation period                 7.6 h                                             11.9 h
Lifetime                        ~16 hours                                         >3 months
Deployment wrt S/C              sideways, 15° downwards                           not restricted
Communications interoperability synergy with Minerva landers                      with AIM ISL (Copins)
Lander mounting plane           15° angled "down"                                 parallel to the carrier sidewall
Storage                         inside panel, in a pocket                         outside panel, flush
Mobility                        1 DOF                                             2 DOF
Localization                    passive, by orbiter                               self-localization
Power                           primary battery only                              solar generator and rechargeable batteries
Thermal Control                 variable conductivity                             passive (MLI, heater)
Self-awareness                  basic                                             extended sensor suite
Communication                   VHF transceiver from JAXA                         S-band transceiver
Scientific Payload              MARA, MASCam, MasMAG, MicrOmega                   MARA, MASCam, LFR, DACC, (MAG)
Acknowledgement
Radar development has been supported by the CNES R&T program ("CONSERT Next Generation" study) and by ESA's General Studies Program (AIM Phase A). The High Frequency Radar inherits from WISDOM/ExoMars, funded by CNES and DLR. The Low Frequency Radar inherits from CONSERT/Rosetta, funded by CNES and DLR. The MASCOT2 study was funded and carried out with support of the DLR Bremen CEF team. |
00176086 | en | [
"spi.meca.biom",
"phys.meca.biom",
"sdv.ib.ima",
"spi.signal",
"info.info-ts"
] | 2024/03/05 22:32:13 | 2007 | https://hal.science/hal-00176086/file/Lobos_Claib07.pdf | Claudio Lobos
email: claudio.lobos@imag.fr
Marek Bucki
Yohan Payan
Nancy Hitschfeld
Techniques on mesh generation for the brain shift simulation
Keywords: Mesh Generation, Brain Shift, Finite Elements, Real-time Models
I. INTRODUCTION
Accurate localization of the target is essential to reduce morbidity during a brain tumor removal intervention. Image-guided neurosurgery nowadays faces an important issue for large skull openings: intra-operative changes that remain largely uncompensated. The causes of deformation can be grouped into:
• physical changes (dura opening, gravity, loss of cerebrospinal fluid, actions of the neurosurgeon, etc.) and
• physiological phenomena (swelling due to osmotic drugs, anesthetics, etc.).
As a consequence of this intra-operative brain shift, pre-operative images no longer correspond to reality. Therefore, the neuro-navigation system based on those images is strongly compromised in its ability to represent the current situation.
In order to face this problem, scientists have proposed to incorporate into existing image-guided neurosurgical systems, a module to compensate for brain deformations by updating the pre-operative images and planning according to intra-operative brain shape changes. This means that a strong modeling effort must be carried out during the design of the biomechanical model of the brain. The model must also be validated against clinical data.
The flow of our strategy can be divided into three main steps:
• The segmentation of MRI images to build a surface mesh of the brain with the tumor.
• The generation of a volume mesh optimized for real-time simulation.
• The creation of a model of the brain shift with Finite Elements and its updating with ultrasound images.
This paper deals with the second point: the selection of a meshing technique for this particular problem. Several algorithms and applications are analyzed and contrasted.
II. MESHING CONSTRAINTS
Various constraints and requirements have arisen on the path to achieving this surgical simulation. They can be summarized as follows:
• The speed of the FEM computation depends on the number of points the system has to deal with.
• A good representation of the tumor as well as the Opening Skull Point (OSP) and the path between them is needed. This path will from now on be referred to as the Region of Interest (RoI).
• Consideration of the brain ventricles is desirable.
• The algorithm should consider as input a surface mesh with the ventricles and the tumor.
• A good surface representation and element quality throughout the entire mesh are needed.
III. MESHING TECHNIQUES
A. Advancing Front
This is a technique that starts with a closed surface [START_REF] Frey | Delaunay tetrahedralization using an advancing front approach[END_REF][START_REF] Lobos | 3D NOffset mixed-element mesh generator approach[END_REF]. All the faces that describe the surface are treated as fronts and are expanded into the volume in order to achieve a final 3D representation. The selection of points to create the new faces encourages the reuse of existing points. Additional processing can be performed to improve the quality of the elements. There are two main drawbacks to this approach. The first one is that this technique is recommended when it is important to maintain the original input faces, which is not a specific constraint in our case. The second is that it would be necessary to use external libraries to produce a local or regional refinement, in our case to have a refined mesh in the RoI and a coarse one elsewhere. However, the control of internal regions can be easily achieved because inner surface subsets are also expanded. All this is shown in figure 1.
B. Mesh matching
Mesh matching is an algorithm that starts with a generic volume mesh and tries to match it to the specific surface [START_REF] Couteau | The mesh-matching algorithm: an automatic 3D mesh generator for finite element structures[END_REF]. The base volume is obtained from an interpolation of several sample models. To obtain a new mesh, in our case for the brain, the problem is reduced to finding a transformation function that is applied to the entire base mesh, thereby producing the final volume mesh for the current patient. This is shown in figure 2.
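As a minimal illustration of the idea of finding such a transformation function (and only as an illustration: the registration used in the mesh-matching reference is more elaborate than this), the following Python sketch fits an affine transform to a few hypothetical landmark correspondences and applies it to every node of a generic base mesh. All names and values below are made up for the example.

```python
# Illustrative sketch only (not the algorithm of the mesh-matching reference):
# fit an affine transform from a few corresponding landmarks on the base mesh
# and on the patient surface, then apply it to every node of the base volume mesh.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map (A, t) such that dst ~ src @ A.T + t."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    sol, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3) solution matrix
    return sol[:3].T, sol[3]

def transform_mesh(nodes, A, t):
    """Apply the fitted transform to all volume-mesh nodes."""
    return nodes @ A.T + t

# Hypothetical landmark correspondences (base mesh -> patient surface).
base_landmarks = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
patient_landmarks = base_landmarks * 1.1 + np.array([0.5, -0.2, 0.3])

A, t = fit_affine(base_landmarks, patient_landmarks)
base_mesh_nodes = np.random.rand(100, 3)               # stand-in for the generic brain mesh
patient_mesh_nodes = transform_mesh(base_mesh_nodes, A, t)
```

In practice, a non-rigid (elastic) transformation would be needed to capture patient-specific shape differences; the affine fit is only the simplest instance of the "transformation function" idea.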
The problem with this technique is that even though a good representation of the surface and a good quality of the mesh can be achieved, the result would not contain the tumor information, because the tumor position, and thus the RoI, change from one patient to another. This technique is recommended for, and has been successfully applied to, complex structures such as bones and maxillofacial models [START_REF] Luboz | Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes[END_REF].
A good adaptation of this technique to our problem would be to provide tools for local element refinement and with this produce a more refined mesh in the RoI. Then a subset of these elements would be labeled as the tumor, leaving the rest of the mesh untouched.
Fig. 2 The mesh matching algorithm: the segmented lines represent the base mesh that has to match the surface.
C. Regular Octree
The octree technique starts from the bounding box of the surface to mesh [START_REF] Schneiders | Octree-based hexahedral mesh generation[END_REF]. This basic cube or "octant" is split into eight new octants. Each octant is then iteratively split into eight new ones, unless it resides outside the input surface mesh, in which case it is removed from the list. The algorithm stops when a predefined maximum level of iterations is reached or when a condition of surface approximation is satisfied. This is shown in figure 3. The octree by itself does not consider a surface approximation algorithm once the split process is done.
Therefore it has to be combined with other techniques in order to produce a final mesh that represents the surface well. Two main approaches are considered:
• Marching cubes [START_REF] Lorensen | Marching Cubes: A High Resolution 3D Surface Construction Algorithm[END_REF]: this algorithm crops the cubes that lie within the surface and produces, in most cases, tetrahedra.
• Surface projection: this technique projects the points of those elements that intersect the surface onto it. The main problem is that this can produce degenerated elements unless only a minimal displacement is needed.
The problem with a regular octree mesh is that it contains a high number of points even in regions where no displacement is expected [START_REF] Hu | Intraoperative brain shift prediction using a 3D inhomogeneous patient-specific finite element model[END_REF]. Therefore, a non-optimal mesh would be the input for the FEM, producing unnecessary time consumption for the entire simulation.
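To make the split-and-discard logic of the regular octree concrete, here is a minimal Python sketch. The segmented brain surface is stood in for by a simple inside/outside predicate (a sphere), octants lying entirely outside the domain are dropped, and all the others are split down to a fixed depth; the sampling-based octant classification is a deliberate simplification, and every name and parameter is illustrative.

```python
# Minimal sketch of the regular-octree split described above. The segmented
# surface is stood in for by an is_inside predicate (here a sphere); octants
# completely outside the domain are discarded ("removed from the list"),
# all the others are split until a fixed maximum depth is reached.
import numpy as np

def is_inside(p, center=np.zeros(3), radius=1.0):
    return np.linalg.norm(p - center) <= radius

def octant_status(lo, hi, n=3):
    """Classify an octant by sampling an n x n x n grid of points inside it
    (a simplification: a fine surface feature could slip between samples)."""
    axes = [np.linspace(lo[d], hi[d], n) for d in range(3)]
    flags = [is_inside(np.array([x, y, z]))
             for x in axes[0] for y in axes[1] for z in axes[2]]
    if all(flags):
        return "inside"
    if not any(flags):
        return "outside"
    return "boundary"

def children(lo, hi):
    """Yield the eight child octants of an axis-aligned box."""
    mid = (lo + hi) / 2.0
    for corner in np.ndindex(2, 2, 2):
        c = np.array(corner)
        yield lo + c * (mid - lo), mid + c * (hi - mid)

def regular_octree(lo, hi, max_depth):
    leaves, stack = [], [(lo, hi, 0)]
    while stack:
        blo, bhi, depth = stack.pop()
        status = octant_status(blo, bhi)
        if status == "outside":
            continue                      # octant removed from the list
        if depth == max_depth:
            leaves.append((blo, bhi, status))
        else:
            stack.extend((clo, chi, depth + 1) for clo, chi in children(blo, bhi))
    return leaves

leaves = regular_octree(np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0]), max_depth=3)
```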
D. Delaunay
A Delaunay triangulation (or Delone triangulation) for a set P of points in the plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum angle of all the angles of the triangles in the triangulation. The triangulation was introduced by Delaunay in 1934 [START_REF] Delaunay | Sur la sphère vide[END_REF].
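The empty-circumcircle property can be checked directly with the classic in-circle determinant. The short Python sketch below does so in 2D for illustration; it is a brute-force check meant to clarify the definition, not an efficient construction algorithm.

```python
# Brute-force check of the empty-circumcircle property in 2D: the sign of the
# classic in-circle determinant tells whether point d lies strictly inside the
# circumcircle of the counter-clockwise triangle (a, b, c).
import numpy as np

def in_circumcircle(a, b, c, d):
    rows = [(p[0] - d[0], p[1] - d[1], (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2)
            for p in (a, b, c)]
    return np.linalg.det(np.array(rows)) > 0

def is_delaunay(points, triangles):
    """True if no point lies inside the circumcircle of any triangle (O(n*t), for illustration)."""
    for tri in triangles:
        a, b, c = (points[i] for i in tri)
        for j, d in enumerate(points):
            if j not in tri and in_circumcircle(a, b, c, d):
                return False
    return True

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(is_delaunay(pts, [(0, 1, 2), (1, 3, 2)]))  # True for this diagonal split of the square
```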
To evaluate this technique we used TetGen [START_REF] Tetgen | a Quality Tetrahedral Mesh Generator[END_REF]. This is an open-source application that generates tetrahedral meshes and Delaunay tetrahedralizations. The tetrahedral meshes are suitable for finite element and finite volume methods. TetGen can be used either as an executable program or as a library integrated into other applications. A mesh generated by TetGen is shown in figure 4. When used as a stand-alone program, a surface mesh must be provided as well as some information to identify regions and cavities. The output is a constrained Delaunay volume mesh that contains quality tetrahedral elements.
Because only one file that describes the surface (and inner regions) can be used as input, this mesh must contain the RoI within itself, i.e. several faces have to be joined in order to produce a closed region that will receive a higher level of refinement.
It is possible to not constrain the volume in the zones outside the RoI. In this case TetGen will produce elements as large as possible in those regions, while satisfying the Delaunay property as well as surface representation.
When used as a library, the situation changes. The decision of which element to split can be controlled by another class that defines specific criteria such as regional refinement. The algorithm would start from a global mesh that is not constrained by a maximum element volume. It would then detect every tetrahedron that resides in the RoI and refine those elements until a certain condition on point quantity is reached. With these modifications, all the initial constraints as defined in section II would be satisfied, producing an optimal patient-specific mesh. This approach has not been implemented; a sketch of the idea is given below.
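A hedged sketch of this library-style strategy (written against no particular library, and not TetGen's actual API) could look as follows: tetrahedra whose centroid falls inside the RoI are repeatedly split until a point budget is reached. The naive 1-to-4 centroid split is used only to keep the example short; a real implementation would rely on quality-aware Delaunay refinement instead.

```python
# Hedged sketch of the regional refinement strategy described above
# (illustrative only; this is NOT TetGen's API). Tetrahedra whose centroid
# falls inside the RoI are split until a point budget is reached.
import numpy as np

def in_roi(p, roi_center, roi_radius):
    return np.linalg.norm(p - roi_center) <= roi_radius

def refine_in_roi(points, tets, roi_center, roi_radius, max_points):
    points, tets = list(points), list(tets)
    while len(points) < max_points:
        centroid_of = lambda t: np.mean([points[v] for v in t], axis=0)
        target = next((i for i, t in enumerate(tets)
                       if in_roi(centroid_of(t), roi_center, roi_radius)), None)
        if target is None:        # no tetrahedron left inside the RoI
            break
        a, b, c, d = tets.pop(target)
        points.append(centroid_of((a, b, c, d)))
        m = len(points) - 1
        # replace the tetrahedron by four smaller ones sharing the new point
        tets += [(a, b, c, m), (a, b, d, m), (a, c, d, m), (b, c, d, m)]
    return np.array(points), tets

# Toy initial mesh: one tetrahedron, RoI centred near its centroid.
pts0 = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
pts, tets = refine_in_roi(pts0, [(0, 1, 2, 3)],
                          roi_center=np.array([0.25, 0.25, 0.25]),
                          roi_radius=0.3, max_points=50)
```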
E. Modified Octree
The octree structure is very convenient for refining only certain elements in the mesh. As explained before, it splits all the elements that are not completely outside the input domain. A minor conceptual modification would make a significant difference in obtaining an optimal mesh: split all the elements that are not completely outside the RoI. This creates a new category of element: an element that is outside the RoI but inside the input domain.
At this level the mesh is unsuitable for the FEM because it has some element faces that are split on one side and not on the other. In other words, there are faces with points inserted on their edges and even on their surface, as shown in figure 5. A mesh is said to be 1-irregular if each element face has a maximum of one point inserted on each edge and one face midpoint. The mesh produced by adding the refinement constraint at the RoI is not necessarily 1-irregular. To solve this problem, all the elements that are not 1-irregular are split until the entire mesh respects this property [START_REF] Hitschfeld | Generation of 3D mixed element meshes using a flexible refinement approach[END_REF].
It is important to produce a 1-irregular mesh because, through splitting patterns, those octants can be subdivided by adding different types of elements (such as tetrahedra, pyramids and prisms). This stage results in a conforming mixed-element mesh that has different levels of refinement.
The last step is to achieve a representation of the surface. This is done by projecting the points of the elements that are outside the input surface onto it. Better solutions, such as marching cubes, can be implemented with modifications in order to avoid problems with the junction of the other types of elements (tetrahedra, pyramids and prisms). The main motivation for changing this sub-process is that projected elements may not respect some aspect-ratio constraints, as can be seen in figure 6. This modified octree has been implemented with satisfactory results. However it can still evolve further by achieving a better surface representation. This technique also complies with the constraints mentioned in section II.
The stopping condition for the initial octree is not a certain level of surface approximation, but a global quantity of points. Once this quantity is reached, the only process that will increase the number of points is the 1-irregularity enforcement. After that, the management of the transition and the projection process will not increase the quantity of points.
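The following Python sketch illustrates the modified-octree idea on a toy example: octants are refined only when their centre lies inside the RoI, and a 2:1 depth balance between face-adjacent leaves is then enforced, which guarantees at most one hanging node per edge (i.e. a 1-irregular mesh). The pattern-based insertion of tetrahedra, pyramids and prisms and the surface projection step are left out; all names and parameters are illustrative.

```python
# Toy sketch of the modified octree: refine only octants whose centre lies in
# the RoI, then enforce a 2:1 depth balance between face-adjacent leaves (which
# implies a 1-irregular mesh). Pattern-based element insertion and surface
# projection are omitted; every parameter here is illustrative.
import numpy as np

def children(lo, hi):
    mid = (lo + hi) / 2.0
    for corner in np.ndindex(2, 2, 2):
        c = np.array(corner)
        yield lo + c * (mid - lo), mid + c * (hi - mid)

def face_adjacent(alo, ahi, blo, bhi, eps=1e-9):
    """True if the two boxes share a full face (abut along exactly one axis)."""
    touching = 0
    for d in range(3):
        if abs(ahi[d] - blo[d]) < eps or abs(bhi[d] - alo[d]) < eps:
            touching += 1
        elif ahi[d] < blo[d] or bhi[d] < alo[d]:
            return False                      # separated along this axis
    return touching == 1

def refine(leaves, needs_split):
    out = []
    for lo, hi, depth in leaves:
        if needs_split(lo, hi, depth):
            out += [(clo, chi, depth + 1) for clo, chi in children(lo, hi)]
        else:
            out.append((lo, hi, depth))
    return out

def modified_octree(lo, hi, in_roi, base_depth, roi_depth):
    leaves = [(lo, hi, 0)]
    for _ in range(base_depth):                   # coarse mesh everywhere
        leaves = refine(leaves, lambda *_: True)
    for _ in range(roi_depth - base_depth):       # extra refinement in the RoI only
        leaves = refine(leaves, lambda l, h, d: in_roi((l + h) / 2.0))
    changed = True                                # enforce the 2:1 balance
    while changed:
        changed = False
        for i, (ilo, ihi, idep) in enumerate(leaves):
            if any(face_adjacent(ilo, ihi, jlo, jhi) and jdep - idep >= 2
                   for jlo, jhi, jdep in leaves):
                leaves = leaves[:i] + [(clo, chi, idep + 1)
                                       for clo, chi in children(ilo, ihi)] + leaves[i + 1:]
                changed = True
                break
    return leaves

roi_center = np.array([0.25, 0.25, 0.25])
leaves = modified_octree(np.zeros(3), np.ones(3),
                         in_roi=lambda p: np.linalg.norm(p - roi_center) < 0.3,
                         base_depth=1, roi_depth=3)
```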
Fig. 1 The advancing front technique: top-left: the initial surface mesh with the expansion directions; top-right: one face (front) is expanded; bottom-left: another expansion using already inserted points; bottom-right: expansion of a cavity inner face using inserted points.
Fig. 3 Regular octree: a regular mesh that contains hexahedral elements. Only elements that intersect the surface or the cavity are shown, to better appreciate the latter. A real output mesh considers all the hexahedra that were erased here, all of them having the same size.
Fig. 4 One output mesh generated by TetGen.
Fig. 5 Octree transitions: a) RoI output, b) 1-irregular mesh and c) mixed-element valid mesh.
Fig. 6 One output mesh generated by the modified octree technique. The circle represents the tumor inside the RoI. Inner elements remain regular.
IV. DISCUSSION
We have presented several meshing techniques that confront the problem of brain shift. In particular, we have mentioned two techniques that can respect all the initial constraints of the problem (TetGen and the Modified Octree) and one technique that can be very useful when the time constraint is the most important (multi-resolution).
At this point we can summarize the future work as follows:
1. Adapt TetGen to the problem, using it as a library.
2. Improve the quality of the projected elements in the Modified Octree.
3. Compare the Modified Octree with TetGen in terms of point quantity and element quality.
V. ACKNOWLEDGMENT
This project has been financially supported by the ALFA IPECA project as well as FONDEF D04-I-1237 (Chile), and carried out in collaboration with Praxim-Medivision (France). The work of Nancy Hitschfeld is supported by Fondecyt project number 1061227.
Author: Claudio Lobos
Institute: TIMC-IMAG
Street: Faculté de médecine, pavillon Taillefer
City: La Tronche
Country: France
Email: Claudio.Lobos@imag.fr
Keywords: work, employment, working conditions, retirement, general health, mental health, depression, anxiety, chronic diseases, childhood, endogeneity, instrumental variables, matching, panel methods, difference-in-differences, France
The University gives neither approval nor disapproval to the opinions expressed in theses; these opinions must be considered as their authors' own.
As I write these words (a few days before submitting this manuscript), I fear I will not manage to express my gratitude to all the people who made this adventure possible and allowed it to unfold the way it did. I therefore ask the reader to be understanding, and to consider that any omission on my part is more the result of fatigue and nervousness than of ingratitude.
First of all, I would particularly like to thank my Ph.D. supervisor, Thomas Barnay, who accompanied, supported and guided me during these four years. I should point out that it is not out of respect for custom that I place these thanks to my supervisor at the very top of the list, but because it is thanks to his extreme kindness, his commitment and his human qualities that he was able to turn me into a young researcher, and I hope an honest one. Nothing that happened during these four years (and even a little before) would have been possible without the trust he placed in me, from the Master 2 onwards. When I did not (or no longer) believe in it, Thomas was always there to take the opposite view and move me forward. Thank you so much for all of that, as well as for the table tennis games! I would then like to thank Maarten Lindeboom and Judit Vall Castelló for agreeing to be members of the committee. It will be a pleasure and an honour to be meeting you at the Ph.D. Defence. More particularly, I thank Eve Caroli and Emmanuel Duguet, not only for doing me the honour of agreeing to be members of my jury, but also for taking part in my mock thesis defence. I sincerely hope that I have done justice to your work and your comments in a satisfactory way in the present version of the manuscript.
My Ph.D. took place, for most of the time, at the Érudite laboratory at Université Paris-Est Créteil, and it goes without saying that many of its members also contributed to this work. From my arrival, I was warmly welcomed by the long-standing Ph.D. students who, even if they are no longer at Upec today, are still very present in my memory and undoubtedly represent a certain golden age of the laboratory. I thank in particular Ali, Igor, Haithem, Issiaka and Majda for all the laughs and the (sometimes serious) discussions over lunch during the first stages of the thesis. More recently, it is with Sylvain, Redha, Naïla and Adrien that the good atmosphere continued.
Sylvain, the (political and societal) discussions on the terrace were a real pleasure. Redha, although our tastes in sports and cars may sometimes differ, having had you as a student and above all having you as a colleague helped me take my mind off things at the end of the thesis! Naïla, although we did not really have time to talk, I wish you all the success you deserve for your internship, and of course for your thesis if you too embark on the adventure. Adrien, it is true that unfortunately since the episode in Aussois we have had trouble meeting up, but the jokes about a certain well-known public figure still make me laugh today! Many of the laboratory's faculty members also lent me a hand in the completion of my thesis.
In particular, Pierre Blanchard (thank you again for your help in launching Chapter 2!), Thibault Brodaty and Emmanuel Duguet for more econometric questions, Ferhat Mihoubi (for your kindness and understanding, notably regarding conference matters) and Arnold Vialfont (the laughs on the terrace!) all made the thesis process easier. Of course, the participants of the Ph.D. (or Doctoral, nobody really knows anymore) Seminar in Health Economics (or SM(T/D)ES), namely Karine Constant, Patrick Domingues, François Legendre and Yann Videau, through their comments and their regular help all along the way, also greatly contributed to the scientific progress of the manuscript. The Érudite faculty members have always been available and caring towards me (sparing no effort!), and calling on them has always been pleasant and very enriching.
During my Ph.D., I had the opportunity to do a research stay at The Manchester Centre for Health Economics. Matt, Luke, Søren, Shing, Phil, Cheryl, Julius, Rachel, Pipa, Kate, Alex, Laura, Tom, Niall, Beth and the others, it was a blast! Thank you so much for being so welcoming, kind and helpful. Thanks to you, I was able to get very nice feedback on my work while having a lot of fun at the same time. I hope my rather shy and anxious nature was not too much of a pain to deal with. I really hope I will have the opportunity to see you guys again as soon as possible! Before starting the Ph.D., I spent six months as a research intern at the Bureau État de Santé de la Population (BESP) at the Drees, and it goes without saying that many people there too gave me a taste for research. First of all, Lucie Gonzalez, who gave me my chance and supervised me on site, then all the members of the BESP (notably Marc and Nicolas) and even of other offices (BDSRAM in particular) made this internship extremely interesting and very pleasant to experience! Even before that, it was thanks to an internship (in tenth grade!) at the Institut National de Recherche et de Sécurité that my calling for the Health and Work theme was born. I particularly thank Pierre Danière and Roger Pardonnet for agreeing to take me under their wing and for allowing me to discover so many things that opened my eyes. If I wrote a thesis on this theme, notably in terms of working conditions, it is thanks to the fire that they and the members of the Centre Interrégional de Mesures Physiques de l'Est (Cimpe) sparked in me on that occasion.
I also thank all the people I met during the thesis, at conferences, workshops or other events, for their feedback, comments and suggestions. One does not necessarily realise how much a quick exchange can help save an enormous amount of time on a delicate point! There would be too many people to cite, so I will settle for these somewhat general thanks… I must of course do justice to my friends. Although I generally give little news, they always stayed by my side (sometimes at a distance) and put up with me and my character for many years. Alexandre (best friend from the very beginning), Céline, Jean-Thomas, Willy and other friends from high school, Julie with whom I shared part of the journey from the Master 1 onwards… Although we do not have many opportunities to see each other, you have all remained very good friends. The "hard core", Armand, Mickael and Félicien: you never gave up, and you are thus part of the circle of the most unfailing and sincere friends one could have. Armand and Mickael, I dearly miss you and our endless discussions over (several) drinks. Félicien, between the nights out in Lille, the road trip in England and the rounds in Belgium, what good times! Dorian, we shared part of the journey together and I greatly appreciated and benefited from those moments of discussion with you. Sandrine, I feel that we were able to share a lot during these thesis years. You were always there to talk, to help me (and to have drinks). I enjoyed every moment spent with you, and going through the trials of the thesis was much easier thanks to you… I hope our paths will keep crossing, for as long as possible. Thank you all, and for everything! Finally, I will end by thanking my father and my mother who, at every moment, were there to support and help me, especially in difficult times. I owe them my success, and whatever good there may be in this work is owed to them. I therefore dedicate this thesis to them. I love you. To my father. To my mother.
Work evolution and health consequences
Moving work
The face of employment in Europe is changing. Stock-wise and on the extensive margin, employment rates in the EU28 reached 70.1% in 2015, nearing the pre-crisis levels of 2008 (Eurostat). These employment rates vary widely between countries (going from 54.9% in Greece to 80.5% in Sweden). While men's employment rates remained relatively stable between 2005 and 2015 (75.9%), women's saw a sizeable increase (60.0% in 2005, 64.3% in 2015) and, even though older workers' employment rate is still rather low (53.3%), it also went up considerably since 2005 (42.2%). Yet, an important education-related gradient still exists, as only 52.6% of the less educated population is employed, while employment rates amount to 82.7% among the more educated. The results for France are slightly below the average of developed countries, as 69.5% of the population aged 20-64 is in employment (73.2% of men, 66% of women), with a particularly weak level of employment among older workers (48.7%) in 2015. On the intensive margin, weekly working times in Europe have followed a slight and steady decreasing trend since 2005, going from 41.9 hours to 41.4 hours in 2015, with rather comparable amounts between countries. France ranks at 40.4 hours a week.
What is also noticeable is that workers' careers appear to be more and more fragmented.
While the proportion of workers employed on temporary contracts globally remained constant over the last decade in Europe (14.1% in total, with 13.8% of men and 14.5% of women, and 16.0% in France), resorting to part-time jobs has become more and more common. 17.5% of workers worked part-time in 2005, while almost one fifth of them did in 2015 (19.6%, and 18.4% in France). The sex differences are very important: in 2015, only 8.9% of men worked part-time, against 32.1% of women. Almost 4% of EU28 workers resort to a second job (from 0.5% in Bulgaria to 9.0% in Sweden, and 4.3% in France). At the same time, unemployment rates also increased in Europe, going from 7% of the active population in 2007 (before the crisis) to 9.4% in 2015, and ranging from 4.6% in Germany to 24.9% in Greece (10.4% in France). Long-term unemployment, defined as individuals actively seeking a job for at least a year, also drastically increased during this period, going from 3.0% in 2007 to 4.5% in 2015 (1.6% in Sweden, 18.2% in Greece and 4.3% in France).
Intensifying work
On top of these more fragmented career paths, European workers face growing pressures at work. Notably, [START_REF] Greenan | Has the quality of working life improved in the EU-15 between 1995 and 2005?[END_REF] indicate that, between 1995 and 2005, European employees faced a degradation of their working-life quality. There has been a growing interest in the literature for the health-related consequences of detrimental working conditions and their evolution. In a world where the development of new technologies, management methods, activity controls (quality standards, process rationalization, etc.) as well as contacts with the public confront employees with different and increased work pressures [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], the question of working conditions indeed becomes even more acute. While the physical strains of work have been studied for a long time, psychosocial risk factors have only been considered more recently. Notably, the seminal Job Demand-Job Control model of [START_REF] Karasek | Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign[END_REF] and its variations [START_REF] Johnson | Combined effects of job strain and social isolation on cardiovascular disease morbidity and mortality in a random sample of the Swedish male working population[END_REF][START_REF] Theorell | Current issues relating to psychosocial job strain and cardiovascular disease research[END_REF] introduced a theoretical approach for these more subjective strains. Other models later included the notion of reward as a modulator, with the Effort-Reward Imbalance model [START_REF] Siegrist | Adverse health effects of high-effort/low-reward conditions[END_REF]. Whatever the indicators retained for strenuous working conditions, their role on health status seems consensual [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF].
These exposures to detrimental working conditions too, beyond possible evolutions in workers' perceptions of their own conditions at work [START_REF] Gollac | Donner un sens aux données, l'exemple des enquêtes statistiques sur les conditions de travail (No. 3)[END_REF][START_REF] Gollac | Les conditions de travail[END_REF], have undergone several changes. While exposures to physical strains have slightly declined over the years, psychosocial strains have grown massively within the same time span. Exposures to physical risks as a whole have remained almost constant since 1991 (Eurofound, 2012). Some risks declined in magnitude, while others increased: tiring and painful positions (46% of the workforce) and repetitive hand or arm movements for instance (the most prevalent risk of all, with 63% of workers exposed). Men are the most exposed to these risks. At the same time, subjective measures of work intensity increased overall over the past 20 years. 62% of workers reported tight deadlines and 59% high-speed work, with workers having potentially fewer opportunities to alter the pace of their work. The level of one's control over his/her job also seems to evolve in a concerning way: 37% of workers report not being able to choose their method of work, 34% report not being able to change the order of their tasks and 30% not being able to change their speed of work, among other indicators (Eurofound, 2012). The situation in France also appeared to deteriorate between 2006 and 2010, gradually linking high levels of physical strains with low levels of job autonomy: increases in exposures to high work intensity, emotional demands, lack of autonomy, tensions and especially lack of recognition (as measured in the Santé et Itinéraire Professionnel 2006 and 2010 surveys by [START_REF] Fontaine | L'exposition des travailleurs aux risques psychosociaux a-t-elle augmenté pendant la crise économique de 2008 ?[END_REF]).
Everlasting work
These evolutions are all the more alarming given that we work longer than we used to, and that we are going to work even longer in the future. Three major factors explain this situation. First, we live longer. Eurostat projections for the evolution of life expectancy in Europe indicate that, between 2013 and 2060, our life expectancy at age 65 will increase by 4.7 years for men and 4.5 years for women (European Commission, 2014). This regularly increasing life expectancy comes, as a consequence, with an increase in the retirement/work-life imbalance, inducing financing issues.
Second, despite the objective set at the Stockholm European Council to achieve an employment rate of 50% for those aged 55-64 by 2010, the European average was still only 47.4% in 2011 [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF], and only reached 53.3% in 2015 (Eurostat 2016). These particularly low employment rates for senior workers can be explained by a number of factors (economic growth not producing enough new jobs, poor knowledge of existing retirement frameworks, unemployment insurance being too generous, insufficient training at work for older workers, etc.). Notably, even though workers may have the capacity to stay in employment longer [START_REF] García-Gómez | Health Capacity to Work at Older Ages: Evidence from Spain (No. w21973)[END_REF], these low rates can also be explained by the role of strenuous careers and degraded Health Capital (health status seen as a capital stock, producing life-time in good health - [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF]), increasing the risks of job loss or sick-leave take-up [START_REF] Blanchet | Aspiration à la retraite, santé et satisfaction au travail : une comparaison européenne[END_REF]. The obvious consequence is that potentially too few older workers contribute to the pension system in comparison to the number of recipients.
Hence, because of these first two points, the third factor is that pay-as-you-go systems are more often than not facing growing deficits. To counter this phenomenon, European governments have progressively raised retirement ages and/or increased the contribution period required to access full pension rights. In France, increases in the contribution period required to obtain full-rate pensions (laws of July 1993 and August 2003) were followed by a gradual increase of the retirement age, from 60 years old for the generation born before July 1st, 1951 to 62 years old for those born on or after January 1st, 1955 (law of November 2010). The aim of these reforms was to compensate for longer life spans, ensuring an intergenerational balance between working and retirement lives, allowing "fair treatment with regard to the duration of retirement and the amount of pensions" (Article L.111-2-1 of the French Social Security Code). As a result of these reforms, the relationship between working lives and retirement has remained relatively constant for generations born between 1943 and 1990 [START_REF] Aubert | Durée passée en carrière et durée de vie en retraite : quel partage des gains d'espérance de vie ? Document de Travail Insee[END_REF], inducing longer work lives.
Affordable work: what are the health consequences?
Leaving aside the possible exposures to detrimental conditions faced by individuals at work, being in employment has overall favourable effects on health status. Notably, being in employment, among various social roles (such as being in a relationship or being a parent), is found to be correlated with a lower prevalence of anxiety disorders and depressive episodes [START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF], beyond its obvious positive role on wealth and well-being. This link between health (especially mental health) and employment status is confirmed by more econometrically robust analyses, notably by [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. This relationship appears to differ depending on sex, as it seems stronger in men. This virtuous relationship between health status and employment is corroborated by another part of the literature, focusing on job loss. While being employed seems to protect one's health capital, being unemployed is associated with more prevalent mental health disorders, again especially in men [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF]. Losing one's job is logically also associated with poorer levels of well-being [START_REF] Clark | Lags And Leads in Life Satisfaction: a Test of the Baseline Hypothesis*[END_REF], even more so considering that the first consequences may be observed before the lay-off actually happens [START_REF] Caroli | Does job insecurity deteriorate health?: Does job insecurity deteriorate health?[END_REF]. In any case, massive and potentially recurring unemployment periods are notorious for their adverse effects on health status [START_REF] Böckerman | Unemployment and self-assessed health: evidence from panel data[END_REF][START_REF] Haan | Dynamics of health and labor market risks[END_REF][START_REF] Kalwij | Health and labour force participation of older people in Europe: what do objective health indicators add to the analysis?[END_REF]. Retirement also comes with likely negative health consequences [START_REF] Coe | Retirement effects on health in Europe[END_REF].
Nevertheless, even if health status seems to benefit from employment overall, exposures to detrimental conditions at work are a factor of health capital deterioration. In fact, close to a third of EU27 employees declare that work affects their health status. Among these, 25% declared a detrimental impact while only 7% reported a positive role (Eurofound, 2012). Thus, in a Eurofound (2005) report on health risks in relationship to physically demanding jobs, the results of two studies (one in Austria and the other in Switzerland) were used to identify the deleterious effects of exposures on health status. In Austria, 62% of retirements in the construction sector are explained by work-related disabilities. In Switzerland, significant disparities in mortality rates exist, depending on the activity sector. On French data, [START_REF] Platts | Physical occupational exposures and health expectancies in a French occupational cohort[END_REF] show that workers in the energy industry who have faced physically demanding working conditions have a shorter life expectancy. In addition, [START_REF] Goh | Exposure To Harmful Workplace Practices Could Account For Inequality In Life Spans Across Different Demographic Groups[END_REF] determine that 10% to 38% of the disparities in life expectancy between cohorts can be attributed to exposures to poor working conditions.
What are the options?
Because careers are more fragmented than they used to be (see Section 1.1), with at the same time increasing and more diversified pressures at work (Section 1.2), and because careers tend to be longer (Section 1.3), the health consequences are or will become even more noticeable (Section 1.4).
From the standpoint of policy-makers, all of this raises new challenges, the objective being to ensure that employment in general and the work life in particular remain sustainable (i.e. workers being able to remain in their job throughout their career). Many public policies hence target this objective. In Europe, the European Union is competent in dealing with Health and Safety matters, which in turn is one of the main fields of European policy.
The Treaty on the Functioning of the European Union allows the implementation, by means of directives, of minimum requirements regarding the "improvement of the working environment to protect workers' health and safety". Notably, employers are responsible for adapting the workplace to workers' needs in terms of equipment and production methods, as explicated in Directive 89/391/EEC [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF].
In France, the legislative approach is mostly based on a curative logic. As far as the consideration of work strains is concerned, a reform in 2003 explicitly introduced the notion of Pénibilité (work drudgery), through its Article 12 [START_REF] Struillou | Pénibilité et Retraite[END_REF]. This reform failed because of the difficulty of defining this concept and of determining responsibilities. A reform in 2010 followed, creating early retirement schemes related to work drudgery, with financial incentives. In 2013, 3,500 workers benefited from early retirement because of exposures to detrimental working conditions inducing permanent disabilities. In early 2014, a personal account for the prevention of work drudgery was elaborated, allowing workers to accumulate points related to different types of exposures during their career (focusing exclusively on physical strains). When reaching specific thresholds, workers are eligible for training in order to change jobs, for part-time work paid at full rate or for early retirement schemes. According to the Dares (Direction de l'animation de la recherche, des études et des statistiques - French ministry for Labour Affairs), 18.2% of employees could be affected by exposure to these factors (Sumer Survey 2010).
Whatever the scheme considered (account for work drudgery, dedicated early retirement schemes and/or compensation schemes for occupational accidents and illnesses), the curative logic of ex post compensation has for a long time prevailed almost exclusively. However, more recent plans highlight the importance of prevention in the relationship between health and work. In France, three successive Health and Work Plans (Plan Santé Travail) have been instigated since 2005, with the latest (Plan Santé Travail 2016-2020) emphasising primary prevention and work-life quality. The results of these successive plans are mixed. However, other strategies coexist, mostly focusing on reducing illness-induced inequalities on the labour market (see the Troisième Plan Cancer for an example on cancer patients), an easier insertion on the labour market of workers suffering from mental health disorders and greater support to help them remain in their job (Plan Psychiatrie et santé mentale 2011-2015), or any other handicap (notably a law in 1987, reinforced in 2005, binds employers from both public and private sectors to hire a minimum of 6% of disabled workers in their workforce) [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF].
Work-Health influences: the importance of the individual biography
On the side of theoretical and empirical research, the relationships between health, work and employment are particularly difficult to disentangle, because every part of the health and work cycles is linked with the others, and because their initial determinants happen very early in one's life (a summary of these interrelationships can be found in Figure I, which also highlights the specific interactions that will be studied in this Ph.D. Dissertation). First, studying such relationships is rather demanding in terms of available data. Few international surveys or administrative databases allow researchers to get information on professional paths, employment status, working conditions as well as health status and individual characteristics, while allowing temporal analyses. This scarcity of available data is even more pronounced when considering the French case. The need for temporal data (panel data, cohorts, etc.) is particularly important, as the relationships existing between health and professional paths are intertwined, with the weight of past experiences or shocks having potentially sizeable consequences on the decisions and on the condition of an individual at any given point in time.
Then, the first determinants of future health and professional cycles can be found as early as the childhood period. Beyond elements happening in utero (described in the latency approach - Barker, 1995), significant life events or health conditions happening during the early life of individuals are able to explain, at least partly, later outcomes for health and employment. For instance, poor health levels or the presence of a disability during childhood are found to induce detrimental consequences on mental health at older ages as well as the appearance of chronic diseases [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. The consequences are also noticeable on career paths.
Because healthier individuals are usually preferred at work, especially in demanding jobs, the initial health capital is bound to play a major role in employability levels, at least during the first part of one's career (see the Healthy Worker Effect) [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. Health status is not the only relevant determinant. Elements related to the socioeconomic background during childhood have also been the subject of several studies in the empirical literature. For instance, [START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF] demonstrated that one's environment during childhood impacts the likelihood of facing, later on, occupational accidents and disabilities. Health consequences can also be expected in individuals who shortened their initial studies [START_REF] Garrouste | The Lasting Health Impact of Leaving School in a Bad Economy: Britons in the 1970s Recession[END_REF]. Early conditions, when unaccounted for, may hence very well generate methodological difficulties when assessing the impact of work on health, notably because of selection effects.
These initial circumstances indeed carry consequences over to the next part of one's life: the professional career and contemporary health status. Individuals facing poor conditions during childhood are then potentially more exposed to harder circumstances during their work life, for instance lower levels of employability, while at the same time facing unemployment early on in the career is found to generate ill-health. Low initial levels of Human Capital (intended as the stock of knowledge, habits, social and personality attributes that contributes to the capacity for one to produce - [START_REF] Becker | Human capital: A theoretical and empirical analysis, with special reference to education[END_REF]), including health capital, impact all elements related to work and employment outcomes, ranging from increased exposures to certain types of detrimental working conditions (notably physical exposures among the lower-educated) to greater probabilities of being employed part-time or on temporary contracts and overall more fragmented careers. Because of that, the health status of these originally disadvantaged individuals is likely to deteriorate even further. It is also true that contemporary health determines current employment outcomes, causing particularly detrimental vicious circles and inducing reverse causality issues. During this professionally active part of one's life, other shocks may happen.
Illnesses, the death of a close relative or partner or marital separations, for instance, have a negative impact on health status [START_REF] Dalgard | Negative life events, social support and gender difference in depression: A multinational community survey with data from the ODIN study[END_REF][START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF]. Financial difficulties, whether current or past, are also often associated with the onset of common mental disorders [START_REF] Weich | Material Standard of Living, Social Class, and the Prevalence of the Common Mental Disorders in Great Britain[END_REF]. When these shocks are unobserved, disentangling the role of the career on health status from that of other shocks appears particularly tricky.
When considering the last part of one's career, from the retirement decision onwards, the accumulation of all these circumstances throughout an individual's life cycle reinforces potential selection effects [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. The decision to retire, because it is partly based on health status and the nature of the professional career, can possibly be massively altered, as much as later levels of human capital. Retirees who faced difficult situations at work in terms of employment or working conditions are more likely to be in worse health conditions than others [START_REF] Coe | Retirement effects on health in Europe[END_REF]. Hence, originally because of poor initial life conditions (in terms of health or socioeconomic status), individuals may face radically changed professional and health paths. Moreover, at any time, elements of health status, employment or working conditions can also positively or negatively influence the rest of the life cycle, bearing repercussions until its end.
Research questions
Health-Work causality: theoretical background
The theoretical relationships between work and health status can be analysed through the dual lens of health and labour economics.
The initial model of [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF] proposes an extension of the Human Capital theory developed by [START_REF] Becker | Human capital: A theoretical and empirical analysis, with special reference to education[END_REF] by introducing the concept of Health Capital. Each individual possesses a certain level of health capital at birth. Health status, originally regarded as exogenous in the "demand for medical care" model by Newhouse and Phelps (1974), is supposed to be endogenous and can be both demanded (through demands for care) and produced by consumers (concept of investment in health). Individuals decide on the level of health that maximizes their utility and make trade-offs between time spent in good and poor health. In a later model of the demand for health, health capital is seen as an element allowing the production of healthy time [START_REF] Grossman | The Human Capital Model of the Demand for Health (No. w7078[END_REF]. This model offers a possibility for intertemporal analysis to study health both in terms of level and of depreciation rate over the life cycle [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. While the depreciation rate of health capital mostly refers to a biological process, health care consumption, health investment and labour market characteristics also influence this rate. The time devoted to work can increase (in the case of demanding work) or decrease (in the case of a high-quality work life) the rate of depreciation of health capital. Notably, in the case of an individual facing a very demanding job, the depreciation rate of his/her health capital over the life cycle rises progressively, inducing an increasing price (or shadow price, as it is hardly measurable) of health, just as for the ageing process. It is particularly the case for less educated workers, who constitute a less efficient health-producing workforce [START_REF] Grossman | The Human Capital Model of the Demand for Health (No. w7078[END_REF]. Contradictory effects can then occur simultaneously, as work can also be beneficial to health status (in comparison to non-employment), but the drudgery induced by certain working conditions can accelerate its deterioration [START_REF] Strauss | Health, Nutrition and Economic Development[END_REF].
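One common, textbook-style way of writing down these health-capital dynamics (the notation below is ours and is only meant to fix ideas, not to reproduce the original papers) is:

```latex
% H_t is the health stock, I_t gross investment in health and \delta_t the
% depreciation rate, which detrimental working conditions WC_t may shift upwards.
\begin{align}
  H_{t+1} &= (1 - \delta_t)\, H_t + I_t, \\
  \delta_t &= \delta\big(t,\; WC_t\big), \qquad
  \frac{\partial \delta_t}{\partial WC_t} > 0 \ \text{for detrimental exposures.}
\end{align}
```

In this formulation, strenuous working conditions raise the depreciation rate and therefore lower the future health stock for a given investment path, which is the channel discussed in the next paragraph.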
In this context, exposure to past working conditions may partly explain the differential in measured health status. Notably, the differences in wages between equally productive individuals can be explained by differences in the difficulty of work-related tasks, meaning that workers with poorer working conditions are paid more than others in a perfectly competitive environment [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF]. In this framework, it is possible to imagine that health capital and wealth stock are substitutable, with workers hence using their health in exchange for income [START_REF] Muurinen | The economic analysis of inequalities in health[END_REF]. Individuals can therefore decide, depending on their utility function, to substitute part of their health capital for more remunerative work involving harmful exposures. However, despite the hypothesis retained by [START_REF] Muurinen | Demand for health[END_REF] in an extension of [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF], working conditions are probably not exogenous. Several selection effects may exist, both in entering the labour market and in the capacity to occupy and remain in strenuous jobs for longer periods, thereby discrediting the hypothesis of exogeneity. These effects refer to characteristics of both labour supply and demand. First, it can be assumed that the initial human capital (initial health status and level of education) of future workers will determine, in part, their entry conditions into the labour market but also their ability to "choose" a supposedly more or less strenuous job. Then, employers can also be the source of selection effects, based on criteria related to employees' health and their adaptability to demanding positions. Part of the empirical literature, relying notably on testing methods, testifies to the existence of discrimination towards disabled individuals, including discrimination in employment [START_REF] Bouvier | Les personnes ayant des problèmes de santé ou de handicap sont plus nombreuses que les autres à faire part de comportements stigmatisants[END_REF]. Thus, whether for health or for work, the hypothesis of exogeneity does not seem to be acceptable.
Health-Work causality: empirical resolution
If this exogeneity hypothesis is difficult to maintain in a theoretical analysis, it is even more so in an empirical framework.
First, selection biases are very common in the study of Health-Work relationships. For instance, one's health status may be determined by his/her former levels of human capital or past exposures to strenuous careers. Another example would be that the choice of a job is also made according to several characteristics, including constitutive elements of the initial human capital. Individuals may choose their job according to their own preferences, but also based on their education, health condition or childhood background. Thus, when unaccounted for, this endogenous selection may result in biased estimates in empirical studies. In particular, because healthier individuals may tend to prefer (self-selection) or to be preferred (discrimination) for more demanding jobs [START_REF] Barnay | The Impact of a Disability on Labour Market Status: A Comparison of the Public and Private Sectors[END_REF], researchers could face an overrepresentation of healthy yet exposed workers in their samples. In this case, the estimations are likely to be biased downwards because individuals who are both healthier and exposed to demanding jobs are overrepresented in the sample (inducing a Healthy Worker Effect - [START_REF] Haan | Dynamics of health and labor market risks[END_REF]). On the other hand, workers with lower levels of initial health capital may have fewer opportunities on the labour market and thus be restricted to the toughest jobs, leading in that case to an overrepresentation of initially unhealthy and exposed individuals, resulting in an upward bias of the estimates.
The Health-Work relationships are also more often than not plagued with reverse causality biases. The link between health status and employment is indeed bidirectional. When studying the role of a given health condition on one's capacity to be in employment, for instance, it is quite easily conceivable that employment status is also able to partly determine current health status. Many empirical studies face this particular issue (see [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] for an example on mental health). In particular, being unemployed may impair individuals' mental health [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. On the other hand, studying the role of employment on health status also suffers from this very same bias. In the literature, the estimation of the causal role of retirement on health status has long been plagued with reverse causality, as individuals with poorer levels of health capital tend to be the ones retiring earlier. Again, most recent empirical works acknowledged this possibility [START_REF] Coe | Retirement effects on health in Europe[END_REF].
The omission of variables leads to unobserved heterogeneity, which is also potentially a source of endogeneity when measuring such relationships. Some information is very rarely available in survey or administrative data, because of the difficulty of observing or quantifying it.
Among numerous others, family background or personality traits [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF], involvement and motivations [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF], risky health-related behaviours, subjective life expectancy, risk aversion preferences or disutility at work [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF] are mostly unobserved, and thus omitted in most studies. Yet these factors, remaining unobservable, may therefore act as confounders, or as endogeneity sources when correlated with both the error term and observable characteristics. These unobserved individual or time-dependent heterogeneity sources may hence result in biased estimations [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF].
Finally, measurement errors or declarative biases can also be highlighted. When working on sometimes sensitive data such as health-related matters or risky behaviours, as well as some difficult work situations, individuals may be inclined to alter their declarative behaviours. For instance, individuals may alter their health status declarations in order to rationalize their choices on the labour market in front of the interviewer [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. Also, non-participation in the labour market may be justified ex post by the declaration of a worse health status. [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] and [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF] showed that economic incentives are likely to distort health status declarations. There may also be declarative social heterogeneity in terms of health status, specifically related to sex and age [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. It is often argued that men have a tendency to under-declare their health conditions, while it is the contrary for women. Older individuals tend to consider their own health status relative to their age, hence often overestimating their health condition.
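To fix ideas on the reverse-causality problem discussed above, the following purely illustrative simulation (unrelated to the data and estimations used in this dissertation, with made-up parameters) shows how a naive OLS regression of employment on health is biased when the two variables cause each other, and how a two-stage least-squares estimate based on a valid instrument recovers the true structural effect:

```python
# Illustrative simulation only: health and employment cause each other, so a
# naive OLS regression of employment on health is biased, while 2SLS with a
# valid instrument Z (correlated with health, excluded from the employment
# equation) recovers the true effect. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, gamma_feedback = 100_000, 0.5, 0.4

z = rng.normal(size=n)      # instrument (e.g. an exogenous health shock)
u = rng.normal(size=n)      # unobservables in the health equation
e = rng.normal(size=n)      # unobservables in the employment equation

# Simultaneous system, solved in reduced form:
#   health     = z + gamma * employment + u
#   employment = beta * health + e
health = (z + u + gamma_feedback * e) / (1 - gamma_feedback * beta_true)
employment = beta_true * health + e

def ols_slope(y, x):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# First stage: predict health with the instrument; second stage: use the fit.
X1 = np.column_stack([np.ones(n), z])
health_hat = X1 @ np.linalg.lstsq(X1, health, rcond=None)[0]

print("true effect:", beta_true)
print("naive OLS  :", round(ols_slope(employment, health), 3))      # biased upwards
print("2SLS / IV  :", round(ols_slope(employment, health_hat), 3))  # close to 0.5
```

The same logic underlies the instrumental-variable, matching and difference-in-differences strategies listed in the keywords; the credibility of such estimates rests entirely on the validity of the chosen identification assumptions.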
Research questions and motivation
Do common mental health impairments (depression and anxiety) impact workers' ability to remain in employment (Chapter 1)? - Studies on the impact of mental health impairments on employment outcomes are numerous in the international empirical literature. This literature is diverse in its measurement of mental health: while many studies focus on severe mental disorders such as psychoses or schizophrenia [START_REF] Greve | Useful beautiful minds-An analysis of the relationship between schizophrenia and employment[END_REF], a growing part of this literature is based on more common, less disabling disorders such as stress, anxiety or depression. This empirical literature has been focusing in more recent years on handling the inherent biases linked to the endogeneity of mental health indicators as well as declarative biases [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF][START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] in the study of the capacity of individuals suffering from mental health problems to find a job or to sustain their productivity levels. In particular, the relationship between mental health and employment appears to be bidirectional [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF], and unobserved characteristics such as risk preferences, workers' involvement at work, personality traits, family background or risky behaviours are likely to induce biased estimates of the effect of mental health on employment [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. In the economics literature accounting for these biases, it is found that mental health impairments do impact individuals' capacity to find a job. [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF], [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF] and [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] all find that individuals suffering from common mental health disorders are less likely to be in employment than others. This effect is found to vary among different groups, according to age [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] and more importantly to sex, with mixed evidence: [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF] and [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] find a stronger effect on men's employment outcomes, while [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] find a stronger effect on women's.
Yet this literature, while mostly focusing on one's capacity to find a job, does not provide evidence on the role of mental health conditions among individuals already in employment, i.e. on their capacity to keep their job. The specific role of physical health status is also unaccounted for in most studies, although it may act as a confounding factor when analysing the specific effect of mental health on employment outcomes. Thus the first research question of this Ph.D. Dissertation will be to understand the role of common mental impairments in the ability to remain in employment.
Do varying levels of exposure to detrimental physical and psychosocial working conditions differently impact health status (Chapter 2)? - The role of working conditions in workers' health status has received considerable attention in the scientific literature, but less so in the economic literature because of the biases involved. First, the choice of a job by an individual is not made at random [START_REF] Cottini | Mental health and working conditions in Europe[END_REF], but the reasons for and consequences of this selection bias are potentially contradictory. Healthier individuals may indeed prefer or be preferred for more arduous jobs, but it is also possible that individuals with a lower initial health capital are restricted to the toughest jobs. Second, unobserved characteristics (individual preferences, risk aversion, shocks, crises) may also induce biased estimates [START_REF] Bassanini | Is Work Bad for Health? The Role of Constraint versus Choice[END_REF]. Because of the lack of panel data linking working conditions and health status indicators over long periods, few papers have actually dealt with these methodological difficulties. The economic literature generally finds strong links between exposure to detrimental working conditions and poorer health. Specifically, physical strains like heavy loads, night work and repetitive work [START_REF] Case | Broken Down by Work and Sex: How Our Health Declines[END_REF][START_REF] Choo | Wearing Out -The Decline of Health[END_REF][START_REF] Debrand | Working Conditions and Health of European Older Workers[END_REF][START_REF] Ose | Working conditions, compensation and absenteeism[END_REF], environmental exposures such as toxic or hazardous materials and extreme temperatures [START_REF] Datta Gupta | Work environment satisfaction and employee health: panel evidence from Denmark, France and Spain, 1994-2001[END_REF], and psychosocial risk factors like job strain and social isolation all impact a variety of physical and mental health indicators [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF][START_REF] Cottini | Mental health and working conditions in Europe[END_REF][START_REF] De Jonge | Job strain, effort-reward imbalance and employee well-being: a large-scale cross-sectional study[END_REF]. This average instantaneous effect of exposures has been decomposed by [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] to account for chronic exposures, and by the psychosocial literature more generally to account for simultaneous exposures. More often than not, however, this literature is plagued by selection biases into employment and by individual and temporal unobserved heterogeneity. In addition, no study accounts for the cumulative effects of strains due to both potentially simultaneous and chronic exposures, nor for possibly delayed effects on health status. The second research question is therefore dedicated to the heterogeneous influence of varying levels of exposure (in terms of chronic or simultaneous exposures) to detrimental physical and psychosocial working conditions on health status.
What is the effect of retirement on general and mental health status in France (Chapter 3)? - Much has been said about the role of retirement in health conditions at the international level [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. A large proportion of the studies in economics accounts for the endogeneity biases related to reverse causality (health status determines the decision to retire or not -García-Gómez, 2011- or the pace of this decision -[START_REF] Alavinia | Unemployment and retirement and ill-health: a crosssectional analysis across European countries[END_REF][START_REF] Jones | Sick of work or too sick to work? Evidence on selfreported health shocks and early retirement from the BHPS[END_REF]), for unobserved heterogeneity and for the specific role of ageing. The overall effect of retirement on health status differs greatly depending on the outcome chosen. While the decision to retire appears beneficial to self-assessed health status and to mental health indicators such as anxiety and depression [START_REF] Blake | Collateral effects of a pension reform in France[END_REF][START_REF] Coe | Retirement effects on health in Europe[END_REF][START_REF] Grip | Shattered Dreams: The Effects of Changing the Pension System Late in the Game*: MENTAL HEALTH EFFECTS OF A PENSION REFORM[END_REF][START_REF] Insler | The Health Consequences of Retirement[END_REF][START_REF] Neuman | Quit Your Job and Get Healthier? The Effect of Retirement on Health[END_REF], the contrary seems to hold for other dimensions of mental health, such as cognitive abilities [START_REF] Behncke | Does retirement trigger ill health?[END_REF][START_REF] Bonsang | Does retirement affect cognitive functioning[END_REF][START_REF] Dave | The effects of retirement on physical and mental health outcomes[END_REF][START_REF] Rohwedder | Mental Retirement[END_REF]. The reasons for the beneficial health effects of retirement have been studied more recently, notably by [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF], who shows that retirement has a positive effect on being a non-smoker and on a range of social and physical activities. Yet the literature struggles to accurately account for the nature of retirees' past professional careers, even though it appears as one of the most important determinants of both the decision to retire and health status [START_REF] Coe | Retirement effects on health in Europe[END_REF]. It is indeed very likely that individuals relieved from arduous jobs will experience the greatest improvements in their health condition after retirement. Generally speaking, single studies also rarely assess both potential sources of heterogeneity and mechanisms simultaneously. This is even more the case for France, where the literature on retirement and its impact on health status is very scarce. The third research question hence refers to the heterogeneous effect of retirement on general and mental health status in France.
Outline
My Ph.D. Dissertation relies on a French panel dataset: the French Health and Professional Path survey ("Santé et Itinéraire Professionnel" - Sip). This survey was designed jointly by the French Ministries in charge of Healthcare and Labour. The panel is composed of two waves (2006 and 2010). Two questionnaires are used: the first one is administered directly by an interviewer and investigates individual characteristics, health and employment statuses; the second one is self-administered and focuses on more sensitive information such as health-related risky behaviours (weight, alcohol and tobacco consumption). Overall, more than 13,000 individuals were interviewed in 2006, 11,000 of whom were interviewed again in 2010, making this panel survey representative of the French population. The main strength of this survey, on top of the wealth of individual data, is that it contains a lifegrid allowing the reconstruction of a biography of individuals' lives: childhood, education, health, career and working conditions as well as major life events, from the beginning of life to the date of the survey. This provides a rich description of health and professional histories, notably in terms of major work-related events.
Chapter 1 aims to measure, in 4,100 French workers aged 30-55 in 2006, the causal impact of self-assessed mental health in 2006 (in the form of anxiety disorders and depressive episodes) on employment status in 2010. In order to control for the endogeneity biases affecting mental health indicators, bivariate probit models, relying on childhood events and elements of social support as sources of exogeneity, are used to explain employment and mental health outcomes simultaneously. Specifications control for individual, employment, general health status, risky behaviours and professional characteristics. The results show that men suffering from at least one mental disorder (depression or anxiety) are up to 13 percentage points less likely to remain in employment. Such a relationship cannot be found in women after controlling for general health status. Anxiety disorders appear as the most detrimental to men's capacity to remain in employment, as does being exposed to both mental disorders at the same time rather than to only one.
Chapter 2 estimates the causal impact of exposures to detrimental working conditions on self-declarations of chronic diseases. Using a rebuilt retrospective lifelong panel of 6,700 French individuals and defining indicators of physical and psychosocial strains, a mixed econometric strategy relying on difference-in-differences and matching methods, taking into account selection biases as well as unobserved heterogeneity, is implemented. For both men and women, deleterious effects of both types of working conditions on the declaration of chronic diseases after exposure are found, with varying patterns of impact according to the strains' nature and magnitude. In exposed men and women, physical and psychosocial exposures each explain a share of the total number of chronic diseases declared.
Chapter 3 assesses the role of retirement on physical and mental health outcomes in 4,600 French individuals.
The issue of job retention for people with mental disorders appears to be essential for several reasons. It is established that overwork deteriorates both physical and mental health [START_REF] Bell | Work Hours Constraints and Health[END_REF]. Moreover, the intensity of work (high pace and lack of autonomy) and job insecurity expose employees to more arduous situations. In addition, part-time work, when not chosen, affects mental health [START_REF] Robone | Contractual conditions, working conditions and their impact on health and well-being[END_REF].
The relationship between mental health and employment has been widely documented in the literature, which establishes a two-way causality between the two. A precarious job or exposure to detrimental working conditions can affect mental health. Self-reported health indicators are also characterized by justification biases and measurement errors as well as reporting heterogeneity across social groups [START_REF] Akashi-Ronquest | Measuring the biases in selfreported disability status: evidence from aggregate data[END_REF][START_REF] Etilé | Income-related reporting heterogeneity in self-assessed health: evidence from France[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Mental health, when measured subjectively, is specifically associated with a measurement bias that calls for disentangling the links between physical and mental health. Just as for physical health status, selection effects are also at work, an individual with mental disorders being less often in employment. Mental health measurements are also potentially subject to a specific selection bias linked to the psychological inability to answer questionnaires.
Our goal is to establish the causal effect of mental health on job retention using French data.
This study is inspired by [START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF], who measure the impact of physical health and risky behaviours on leaving employment four years later. While many studies focus on the role of mental health in employability, few acknowledge its impact on workers' capacity to remain in their jobs. We also expand on the literature by addressing the endogeneity biases generated by reverse causality (the effect of employment on mental health).
Another addition is that we take into account the role of physical health status, which may very well act, when unaccounted for, as a confounding factor when analysing the specific effect of mental health on employment outcomes. To our knowledge, no French study has empirically measured the specific effect of mental health on job retention while addressing these biases.
The article is organised as follows. A literature review first presents the main empirical results linking mental health and employment status. We then present the database and the empirical strategy. A final section presents the results and concludes.
The links between mental health and employment
Mental health measurements
The economic literature establishing the role of mental health on employment mainly relies on two definitions of mental health. The first one focuses on severe mental disorders, such as psychoses [START_REF] Bartel | Some Economic and Demographic Consequences of Mental Illness[END_REF]. Notably, many studies evaluate the ability to enter the labour market for individuals with schizophrenia [START_REF] Greve | Useful beautiful minds-An analysis of the relationship between schizophrenia and employment[END_REF]. The second one is based on more common but less disabling disorders such as stress or depression. Often used to assess mental health, these disorders are observed using standardized measures and are presented in the form of scores. The Kessler Psychological Distress Scale (K-10) thus uses 10 questions about the last 30 days to evaluate individuals' overall mental state [START_REF] Dahal | An econometric assessment of the effect of mental illness on household spending behavior[END_REF][START_REF] Kessler | The effects of chronic medical conditions on work loss and work cutback[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. As in the K-10 questionnaire, the Short-Form General Health Survey (SF-36) evaluates mental health over the past four weeks with questions about how individuals feel (excitement, sadness, lack of energy, fatigue, ...) [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF]. Another similar score, the Center for Epidemiologic Studies Depression Scale (CES-D), has been applied to senior workers (aged 50-64), with more specific questions on isolation and self-esteem [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF].
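Such scores are simple aggregations of item responses. As a purely illustrative sketch in Stata (the Sip survey contains no such score, and the item names below are hypothetical), a K-10-type score can be built by summing ten items each coded from 1 to 5:
* Hypothetical construction of a K-10-type score from ten items coded 1 to 5
* (k10_q1 to k10_q10 are illustrative variable names, not Sip variables)
egen k10_score = rowtotal(k10_q1-k10_q10)   // ranges from 10 (no distress) to 50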
However, the risk of oversimplification linked to the aggregate nature of these scores has justified the construction of other indicators to better approximate a true mental health diagnosis. Indicators of generalized anxiety disorders and major depressive episodes have thus been used, allowing a finer analysis of mental health [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF]. They make it possible to identify the population suffering from these disorders and their symptoms (see Appendix 1 and Appendix 2). Despite their specificity, and without being perfect substitutes for a medical diagnosis, these indicators prove robust in detecting common mental disorders.
In addition, the subjective nature of health declarations in general, and of mental health in particular, makes it difficult to compare two apparently similar declarations [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF], notably because of reporting biases [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF] assess the importance of reporting biases in mental health and show that a latent health condition greatly contributes to reported mental health: two individuals may declare different mental health conditions depending on their general and physical health status. A person in poor general condition will indeed be more likely to report a degraded mental health status than a person in good general health. [START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF] confirm these results and show a strong correlation between physical and mental health, particularly among women.
1.2. The influence of mental health on employment: a short literature review
Methodological difficulties
While the measurement of mental health from declarative data is not trivial, the relationship between mental health and employment is also tainted by endogeneity biases associated with reverse causality and omitted variables. From a structural point of view, it is easy to conceive that if mental health and employment are observed simultaneously, the relationship will be bidirectional [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF][START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF]. In particular, being unemployed may impair individuals' mental health [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF].
The omission of variables leads to unobserved heterogeneity, which is also a potential source of endogeneity when measuring the impact of mental health on employment. Risk preferences [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF], workers' involvement at work and the ability to give satisfaction [START_REF] Nelson | Survival of the Fittest: Impact of Mental Illness on Employment Duration[END_REF], personality traits, family background [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF] and risky behaviours (smoking, alcohol and overweight) are related to mental health as much as to employment. Some of these factors remain unobservable in household surveys and therefore act as confounders. [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] conclude, from Australian data (pooled data from the National Health Survey - NHS) and multivariate probit methods, that tobacco consumption in men and women as well as overweight in women increase the risk of reporting mental disorders. These behaviours are also shown to have a specific effect on the labour market situation [START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF].
Finally, it is possible to highlight some justification biases. Individuals may alter their health status declarations in order to rationalize their choices on the labour market in front of the interviewer [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. For example, the non-participation to the labour market can be justified ex-post by the declaration of a worse health status. [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF] showed on Dutch panel data using fixed effects models, that economic incentives are likely to distort health status declarations. This still seems to be the case on Irish panel data and after controlling for unobserved heterogeneity [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF].
Effects of mental health on employment
To address these methodological issues, the empirical literature makes use of instrumental variables and panel data models, which handle unobserved heterogeneity through fixed effects and reverse causality through a time gap between the exogenous variables and the outcome.
Whatever the mental health indicator used, the various studies converge on a detrimental role of deteriorated mental health on employment outcomes. [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF] find, using bivariate probit models and two-stage least squares (2SLS) on cross-sectional data, that people suffering from mental disorders (major depressive episodes and generalized anxiety disorders) in the last 12 months are much less likely to be in employment than others at the time of the survey. They do not find a significant effect of these mental conditions on the number of weeks worked and days of sick leave among individuals in employment, after controlling for socioeconomic characteristics, chronic diseases and the area of residence in the U.S. territory. [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF] show, on cross-sectional data, using two-stage approaches (2SLS and bivariate probit) and Altonji, Elder and Taber modelling (AET - [START_REF] Altonji | Selection on Observed and Unobserved Variables: Assessing the Effectiveness of Catholic Schools[END_REF]) and taking into account unobserved heterogeneity, that these mental disorders appearing in the last 12 months reduce the likelihood of being in employment at the time of the survey by an average of 15%. An American study resorting to instrumental variable methods finds that most people with mental disorders are in employment, but that more pronounced symptoms reduce their participation in the labour market [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF].
Finally, simultaneous modelling on Taiwanese pooled data confirms that degraded mental health decreases the probability of working, while noting that the prevalence of these disorders is lower among workers, suggesting a protective effect of work on mental health [START_REF] Chang | Mental health and employment of the elderly in Taiwan: a simultaneous equation approach[END_REF]. [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] also confirm the reverse causality in this relationship, using instrumental variables in three waves of the European Working Conditions Survey (EWCS) and stressing the negative effects of poor working conditions on mental health.
These average effects are heterogeneous according to age and sex. [START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF] conducted stratified regressions on two age groups (the 18-49 and the 50-64 years-old) and find that mental health-related penalties on the labour market are greater for middle-aged workers than for older workers. Sex effects are also important. The role of mental disorders on employment seems stronger in men [START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. However, there is no consensus on this point in the literature: [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] show a stronger effect of mental health on women's employment, using Australian panel data (Household, Income and Labour Dynamics in Australia - HILDA) and several models, including bivariate probit and fixed effects models.
What instrument(s) for mental health?
It is necessary to identify an instrument whose influence on mental health is established in the empirical literature (1.3.1) without being correlated with the error term (1.3.2).
The determinants of mental health
Determinants and other factors related to mental health are numerous in the literature and can be classified into three categories: social determinants, major life events and work-related factors.
Social factors refer to the individual's role in society and to his or her social relationships. [START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF] identify three social roles correlated with a better mental health condition: the roles of partner, parent and worker. Being in a relationship is associated with a higher likelihood of reporting good mental health and a lower risk of depression and anxiety [START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF]. Holding both the parent and partner roles also seems linked to better mental health. Professional activity can slow the depreciation rate of one's mental health capital, as shown by a study on panel data taking into account the endogenous nature of the relationship between health and employment [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. In contrast, [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF] show that unemployment is often correlated with a worse mental health status among men and, to a lesser extent, among women. Combining these roles increases the chances of reporting a good mental health condition by 39% [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF].
Major life events also play a role in the determination of mental health. Unemployment, and even more so inactivity, at the beginning of professional life can induce the onset of depressive symptoms later on, as shown on U.S. panel data by [START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. Using a fixed effects framework on panel data, [START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF] establish that events such as illnesses or the death of a close relative or partner impair mental health. Moreover, marital separations and serious disputes within or outside the couple seem correlated with poorer mental health [START_REF] Dalgard | Negative life events, social support and gender difference in depression: A multinational community survey with data from the ODIN study[END_REF][START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF]. Past or present financial problems are also often associated with the occurrence of common mental disorders such as depression and anxiety [START_REF] Laaksonen | Explanations for gender differences in sickness absence: evidence from middle-aged municipal employees from Finland[END_REF][START_REF] Weich | Material Standard of Living, Social Class, and the Prevalence of the Common Mental Disorders in Great Britain[END_REF], as is the deterioration of physical health (especially in women) [START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. A poor health status or the presence of a disability during childhood also bears negative consequences for mental health at older ages and for the declaration of chronic diseases, regardless of the onset age [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF].
Work-related factors may also have an effect on mental health. Atypical labour contracts such as part-time jobs increase the occurrence of depressive symptoms in employees [START_REF] Santin | Depressive symptoms and atypical jobs in France, from the 2003 Decennial health survey[END_REF]. [START_REF] Bildt | Gender differences in the effects from working conditions on mental health: a 4-year follow-up[END_REF] show, using multivariate models, that exposure to detrimental working conditions can have a deleterious effect on mental health four years later, with sex-related differences: men appear most affected by changes in tasks and a lack of recognition at work, whereas in women the lack of training, motivation and support at work plays a specific role. Other sex-related factors associated with poorer mental health are found by [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF]: in men, the preponderance of work, contact with the public, repetitive tasks and the lack of cooperation at work; in women, early career starts and involuntary career interruptions.
Instruments in the literature and choices in our study
Among the diversity of explanatory factors for mental health, only some have been retained in the economic literature as valid and relevant instruments. [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF] used the death of a close friend in the twelve months preceding the survey as an instrument for mental health. [START_REF] Hamilton | Better Health With More Friends: The Role of Social Capital in Producing Health: BETTER HEALTH WITH MORE FRIENDS[END_REF] used stressful life events, the regularity of sport and a lagged mental health indicator, the latter also being used by [START_REF] Banerjee | Effects of Psychiatric Disorders on Labor Market Outcomes: A Latent Variable Approach Using Multiple Clinical Indicators: Psychiatric Disorders and Labor Market Outcomes[END_REF]. The psychological status of parents [START_REF] Ettner | The Impact of Psychiatric Disorders on Labor Market Outcomes (No. w5989[END_REF][START_REF] Marcotte | The labor market effects of mental illness The case of affective disorders[END_REF], that of children [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Ettner | The Impact of Psychiatric Disorders on Labor Market Outcomes (No. w5989[END_REF] and social support [START_REF] Alexandre | Labor Supply of Poor Residents in Metropolitan Miami, Florida: The Role of Depression and the Co-Morbid Effects of Substance Use[END_REF][START_REF] Hamilton | Better Health With More Friends: The Role of Social Capital in Producing Health: BETTER HEALTH WITH MORE FRIENDS[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF] were also frequently introduced.
These factors were privileged because they are valid determinants of mental health while meeting the exogeneity assumption, either because of their temporal distance from the other factors explaining employment or because of their absence of direct effects on employment. We build on this literature by choosing proxies of mental health during childhood (violence suffered during this period and having been raised by a single parent) and an indicator of psychological status and social support during adult life (marital breakdowns), with a different approach according to sex, as suggested by the literature. Doing so, we put some temporal distance between these events and employment status (events occurring during childhood are observed up to age 18 whereas our working sample includes only individuals aged 30 and older; marital ruptures occur before 2006), and there is a low probability of direct effects of these instruments on the 2010 employment status, since the characteristics of the professional route, employment at the time of the survey and risky behaviours are also controlled for.
Empirical analysis
The description of the general sample is presented in Table 29 (Appendix 6). Women report more frequent physical and mental health problems: anxiety disorders (7%), depressive episodes (8%), poor perceived health status (22%) and chronic illness (28%) are more widely reported by women than by men (resp. 4%, 3%, 18% and 25%). These response behaviours are frequently noted in the literature and testify, at least for some of them, to the presence of reporting biases (rather downwards for men and rather upwards for women), as shown notably by [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF] or by Shmueli in 2003. Conversely, risky behaviours are substantially more developed in men: this is the case for daily smoking (28% in men vs. 24% in women) and even more so for risky alcohol consumption (46% vs. 14%) and overweight (51% vs. 29%).
Figure II: Prevalence of health problems in the population in employment in 2006
Reading: 6% of men and 12% of women report having at least one mental disorder (GAD or MDE)
Health problems and job retention
82% of men in employment and suffering from at least one mental disorder in 2006 are still in employment in 2010, against 86% of women 1 . Anxiety disorders have the biggest influence:
79% of men are employed (vs. 88% of women). General health status indicators show fairly similar results for men and women. For risky behaviours, daily tobacco consumption showed no significant difference in employment rates between men and women while alcohol (93% vs. 90%) and overweight (93% vs. 89%) are associated with comparatively lower employment rates for women than for men (Figure III).
1 Given the small size of some of the subsamples, one must be cautious about the conclusions suggested by these figures.
Mental health and general health status
A strong correlation between general and mental health status is observed in the sample.
About 20% of men and women suffering from at least one mental disorder also report activity limitations, against 10% among individuals with no reported mental disorder (see Figure II). Nearly 50% of them report poor perceived health (vs. 20% overall). Chronic diseases (45% vs. 25%) and daily tobacco consumption (30% vs. 25%) are also more common among these individuals. 53% of men and 17% of women with mental disorders declare risky alcohol consumption, against 46% and 13% respectively in the overall sample. Finally, overweight is declared by 44% of men and 31% of women with mental disorders, against respectively 51% and 29% in the overall sample.
Econometric strategy
Univariate models
The econometric strategy is based on two steps in order to correct for individual heterogeneity and for the possibility of reverse causality.
In a first step, we estimate binomial univariate probit models to measure, among people in employment in 2006, the effect of mental health in 2006 on the likelihood of remaining in employment in 2010 (in employment vs. unemployed - dependent variable $E_{i,2010}$). Several specifications are tested and each of them is stratified by sex, due to strong gendered differences in mental health linked to social heterogeneity in declarations [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. We take a three-step strategy, gradually adding relevant groups of variables to the model in order to assess the robustness of the correlation between mental health in 2006 and employment in 2010 while progressively identifying confounders. The first, baseline specification (1) explains job retention by mental health status, controlling for a set of standard socioeconomic variables:
$$P(E_{i,2010} = 1) = \Phi\big(\beta_0 + \beta_1 MH_{i,2006} + X_i'\beta_2\big) \quad (1)$$
Mental health in 2006 ($MH_{i,2006}$) is represented by a binary variable equal to 1 when individual $i$ suffers from a generalized anxiety disorder, a major depressive episode, or both. Socio-economic variables are gathered in the vector $X_i$. They include age (in five-year bands from 30 to 55 years), marital status, presence of children, educational level, professional category, industry sector, type of employment (public, private or independent) and part-time work. Age plays a major role in the employability of individuals and in the reporting of mental disorders [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Current marital status and the presence of children in the household can also affect employability (especially in women) and reported mental health, since people in a relationship with children turn out to be in better health [START_REF] Artazcoz | Unemployment and mental health: understanding the interactions among gender, family roles, and social class[END_REF][START_REF] Plaisier | Work and family roles and the association with depressive and anxiety disorders: Differences between men and women[END_REF]. Work characteristics are also included [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF].
An intermediate specification (2) is then estimated with the addition of three variables from the European Mini-Module on individuals' general health status: their self-assessed health (a binary indicator distinguishing good from poor health), whether they suffer from a chronic disease and whether they are limited in their daily activities. These health status indicators are used in order to isolate the specific effect of depression and anxiety on the position on the labour market and to disentangle it from that of the latent general health status [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF]. This model also includes three variables of risky behaviours: being a daily smoker, a risky drinker or overweight. The objective of these variables is to determine the extent to which the role of mental health operates partly through risky behaviours [START_REF] Butterworth | Poor mental health influences risk and duration of unemployment: a prospective study[END_REF][START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF][START_REF] Lim | Lost productivity among full-time workers with mental disorders[END_REF]. Such behaviours are indeed known to affect the reporting of activity limitations [START_REF] Arterburn | Relationship between obesity, depression, and disability in middle-aged women[END_REF], employability [START_REF] Paraponaris | Obesity, weight status and employability: Empirical evidence from a French national survey[END_REF], the incidence of disease and premature mortality [START_REF] Teratani | Dose-response relationship between tobacco or alcohol consumption and the development of diabetes mellitus in Japanese male workers[END_REF] as well as work-related accidents [START_REF] Bourgkard | Association of physical job demands, smoking and alcohol abuse with subsequent premature mortality: a 9-year follow-up population-based study[END_REF][START_REF] Teratani | Dose-response relationship between tobacco or alcohol consumption and the development of diabetes mellitus in Japanese male workers[END_REF].
Finally, the last specification (3) adds two variables related to the professional route, reconstructed using retrospective information, which is likely to play a role both in the individual characteristics of 2006 and in the employment transitions observed between 2006 and 2010. The objective is to control our results for potentially unstable careers (state dependence), which lead to a greater fragility on the labour market [START_REF] Kelly | Determinants of mental health and well-being within rural and remote communities[END_REF][START_REF] Mossakowski | The Influence of Past Unemployment Duration on Symptoms of Depression Among Young Women and Men in the United States[END_REF]. These variables include the time spent in contracts of more than 5 years and the stability of the employment path, measured by the number of transitions between long-term jobs (over 5 years), short periods of employment, periods of unemployment of more than one year and periods of inactivity.
$$P(E_{i,2010} = 1) = \Phi\big(\beta_0 + \beta_1 MH_{i,2006} + X_i'\beta_2 + H_i'\beta_3 + P_i'\beta_4\big) \quad (3)$$
General health status variables and risky behaviours in 2006 are gathered in the vector $H_i$ and the control variables describing the professional route are included in the vector $P_i$. The relationship between the 2010 employment status and the 2006 mental health status is thus controlled for general health status, health-related risky behaviours and elements of the professional route.
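As a purely illustrative sketch, specification (3) can be estimated with Stata's probit command; the variable names below are hypothetical placeholders, not the actual Sip codes, and the list of controls simply follows the description above:
* Specification (3): univariate probit of job retention in 2010, stratified by sex
* emp_2010, mh_2006 and the control variables are illustrative names
foreach s in 1 2 {                                   // 1 = men, 2 = women
    probit emp_2010 mh_2006 ///
        i.age_band i.couple i.children i.educ i.occup i.sector i.public i.parttime ///
        i.sah_good i.chronic i.limited i.smoker i.risk_alcohol i.overweight ///
        c.share_longjobs i.unstable_path ///
        if sex == `s', vce(robust)
    margins, dydx(mh_2006)                           // average marginal effect of mental health
}
The margins command converts the probit coefficient into an average marginal effect on the employment probability, which (multiplied by 100) is how the percentage-point effects reported below can be read.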
However, as widely discussed in the literature, our mental health variable potentially suffers from endogeneity biases. Direct reverse causality is most likely ruled out, since there is a time gap between our measure of mental health (2006) and that of employment (2010), and since the nature of the past professional career (and, de facto, the 2006 employment status) is taken into account. However, some individual characteristics (unobserved individual heterogeneity) linked not only to employment but also to mental health are not included in our model, and the measurement of mental health is likely to be biased. We are thus in the presence of an endogenous mental health variable, due to omitted variables.
Handling endogeneity biases
In order to address this endogeneity issue, and as suggested by the literature dealing with biases related to mental health variables [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF], we set up a bivariate probit model estimated by maximum likelihood, which is broadly equivalent to the conventional linear two-stage approaches.
The two simultaneous equations to estimate can be written as follows:
$$E_{i,2010}^{*} = \beta_0 + \beta_1 MH_{i,2006} + X_i'\beta_2 + H_i'\beta_3 + P_i'\beta_4 + \varepsilon_i, \qquad E_{i,2010} = \mathbb{1}\big(E_{i,2010}^{*} > 0\big) \quad (4)$$
$$MH_{i,2006}^{*} = \gamma_0 + X_i'\gamma_1 + H_i'\gamma_2 + P_i'\gamma_3 + \nu_i, \qquad MH_{i,2006} = \mathbb{1}\big(MH_{i,2006}^{*} > 0\big) \quad (5)$$
where $\varepsilon_i$ and $\nu_i$ are the respective residuals of equations (4) and (5). Despite the inclusion of these control variables, it is likely that the residuals of these two equations are correlated, inducing $\rho = corr(\varepsilon_i, \nu_i) \neq 0$.
Several reasons can be stated. First, in the case of simultaneous observations of health status and employment outcomes, there is a high risk of reverse causality. In our case, to the extent that both are separated by several years, we limit this risk. However, it seems possible that there are unobserved factors that affect not only mental health condition but also the capacity to remain employed, such as individual preferences or personality traits. Notably, an unstable employment path before 2006 is one of the explanatory factors of the mental health of 2006 as well as of the employment status of 2010 (state dependence). Thus, only estimating equation ( 4) would result in omitting part of the actual model.
In such a case, a bivariate probit model is required in the presence of a binary outcome and a binary explanatory variable [START_REF] Lollivier | Économétrie avancée des variables qualitatives[END_REF]. A new specification (6) is therefore implemented, taking the form of a bivariate probit model using specification (3) as the main model and simultaneously explaining mental health by three identifying variables (vector $Z_i$):
$$MH_{i,2006}^{*} = \gamma_0 + Z_i'\gamma_Z + X_i'\gamma_1 + H_i'\gamma_2 + P_i'\gamma_3 + \nu_i \quad (6)$$
We assume that the error terms follow a bivariate normal distribution:
$$\begin{pmatrix} \varepsilon_i \\ \nu_i \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)$$
In theory, it is possible to estimate such a model without resorting to identifying variables (i.e. without an exclusion restriction). In the empirical literature, however, it is generally preferred to base the estimates on an exclusion restriction and to use identifying variables. The identifying variables used in this study are chosen in line with the literature on the determinants of mental health status and are taken from Sip's lifegrid: having been raised by a single parent, having suffered violence during childhood from relatives or at school, and having experienced several marital breakdowns. We differentiate our instruments by sex 2: for men, we retain having suffered violence and marital breakdowns; for women, having suffered violence and having been raised by a single parent.
With a binary endogenous mental health variable, there is no fully specialized test to assess the validity of our identifying variables. However, correlation tests have been conducted (presented in Table 32 and Table 33, Appendix 7) to determine whether they are likely to meet the validity and relevance assumptions. According to these limited tests, our three identifying variables are not likely to breach these assumptions. This intuition also tends to be confirmed by the estimates of $\rho$ and by the comparison of univariate and bivariate estimations for employment status (Table 1 and Table 2) and for mental health (Table 34, Appendix 7) (see section 3.2).
From a more theoretical standpoint, because we only consider individuals aged 30 or more in 2006, who have therefore already spent some time on the labour market, and because violence and the fact of being raised by a single parent relate to events occurring during childhood (before age 18), we are confident that these variables should not have a direct impact on the 2010 employment status (especially since the stability of career paths is accounted for and since only individuals in employment in 2006 are selected in our sample). On the other hand, marital breakdowns should not be specifically correlated with men's behaviour on the labour market 3.
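As a sketch of the estimation, and again with hypothetical variable names, specification (6) can be run with Stata's biprobit command, the identifying variables entering only the mental health equation:
* Control vectors (placeholders for the actual Sip variables)
local X "i.age_band i.couple i.children i.educ i.occup i.sector i.public i.parttime"
local H "i.sah_good i.chronic i.limited i.smoker i.risk_alcohol i.overweight"
local P "c.share_longjobs i.unstable_path"
* Specification (6) for men: identifying variables are childhood violence and marital breakdowns
biprobit (emp_2010 = mh_2006 `X' `H' `P') ///
         (mh_2006  = childhood_violence marital_breakups `X' `H' `P') ///
         if sex == 1, vce(robust)
* biprobit reports athrho and a test of rho = 0, i.e. of the residual correlation
For women, the second equation would instead include childhood violence and having been raised by a single parent as identifying variables.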
Results
A poor mental health condition decreases the likelihood to remain in employment
We test three specifications of the probability of being employed in 2010 among people employed in 2006 in order to decompose the effect of mental health in 2006 but also to try taking into account for confounding factors.
The baseline model presented in Table 1 for men and Table 2 for women (specification 1)
shows that men and women suffering from GAD and/or MDE in 2006 are less likely to remain in employment in 2010, after controlling for the individual and employment characteristics of 2006. The other determinants of employment, however, differ between men and women, in agreement with what other French studies have observed [START_REF] Barnay | Santé déclarée et cessation d'activité[END_REF]. In addition to mental health, the predictors of non-employment in women are age (over 45), the presence of children and working in the agricultural or industrial sectors.
2 Following the dedicated literature indicating strong sex-linked relationships in the determinants for mental health, we decided to differentiate our instruments for men and women. Initial estimations including all three instruments (available upon request) have still been conducted, indicating similar yet slightly less precise results. 3 The data management has been done using SAS 9.4. The econometric strategy is implemented in Stata 11 using respectively the "probit" and "biprobit" commands.
In specification 2, we include general health status (self-assessed health, chronic diseases and activity limitations) and risky behaviours (daily tobacco consumption, risky alcohol drinking and being overweight). This new specification allows the assessment of potential indirect effects of mental health, transiting through the latent health status [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF]. In the male population, the coefficient associated with mental health declines slightly but remains very significant. Activity limitations and daily tobacco consumption also play a role in job loss, independently of the effect of mental health. Being observed simultaneously, it is not possible to disentangle the causal relationships between general health, mental health and risky behaviours in this type of model, but the explicit inclusion of these variables tends to reduce social employment inequalities in our results. In the female population, the impact of health status on employment does not seem to go through mental health as we measure it, but rather through a poor general health status and activity limitations. Risky behaviours, however, appear to have no impact on job retention in women.
Information on the past professional career (in terms of employment security and stability) is added in a third specification. It allows us to control for the nature of the professional career, which influences both mental health and employment. While stable job trajectories (marked by long-term, more secure jobs) favour continued employment between 2006 and 2010, the deleterious effect of a poor mental health condition on employment is robust to this third specification in men. In women, employment stability does not contribute to explaining the transitions in employment between 2006 and 2010.
In line with the empirical literature, we find the most conventional determinants of labour market outcomes in our data. Age, the presence of children and part-time work among women, and the level of education and professional category among men, have a significant impact on the ability of individuals to remain in employment.
Mental health is found to be very significant in men but not in women, which again appears to be in line with the literature [START_REF] Chatterji | Psychiatric disorders and labor market outcomes: Evidence from the National Comorbidity Survey-Replication[END_REF][START_REF] Ojeda | Mental illness, nativity, gender and labor supply[END_REF][START_REF] Zhang | Chronic diseases and labour force participation in Australia[END_REF]. The study of [START_REF] Frijters | The effect of mental health on employment: evidence from australian panel data: effect of mental health on employment[END_REF], however, goes in the opposite direction, indicating a stronger effect in women, which could possibly be explained by the lack of controls for general health status in that study, while the links between physical and mental health are strong in women [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Leach | Gender differences in depression and anxiety across the adult lifespan: the role of psychosocial mediators[END_REF]. As an illustration, our regressions also find a significant effect of mental health in women when we do not take into account general health status (Table 2, specification 1). Being a daily smoker is shown to have important consequences on men's employment in 2010, in agreement with the literature [START_REF] Butterworth | Poor mental health influences risk and duration of unemployment: a prospective study[END_REF][START_REF] Jusot | Job loss from poor health, smoking and obesity: a national prospective survey in France[END_REF]. Alcohol and overweight do not play a significant role on employment in our regressions. Despite the decrease in the precision of the estimates for employment status, the use of identifying variables should enable the establishment of a causal relationship. The use of this type of model seems justified by the significance (for men) of the correlation coefficient $\rho$ between the residuals of the two simultaneous equations. In addition, the evolutions in the results between the univariate and bivariate models for employment and mental health (Table 1, Table 2 and Table 34, Appendix 7) also support this modelling choice.
Robustness checks
To assess the robustness of our results, we tested alternative specifications: a finer definition of mental health (differentiating MDE and GAD and taking into account their cumulative effects), other age groups5, and a shorter temporal distance between mental health and employment (it may indeed be questionable to measure the role of poor mental health on employment four years later).
MDE versus GAD
We first wanted to better understand the respective roles of MDE and GAD on job retention.
Table 4 shows the results when considering MDE alone (specification 1), GAD alone (specification 2) and a count of mental disorders (indicating whether an individual faced one or both mental disorders at once). This decomposition of mental health disorders does not change the results in the female population: even when women report suffering from both MDE and GAD, mental health problems do not significantly affect their employment trajectory. In men, GAD plays the major role in the inability to remain in employment, compared with MDE, and suffering from both mental disorders at once significantly deteriorates their labour market outcomes.
An employment indicator over the period 2007-2010
The measurement of the impact of mental health on employment outcomes is potentially subject to biases given the duration of the observation period. Career paths and mental health between 2006 and 2010 may have been significantly affected by economic conditions (notably the 2009 economic crisis), regardless of the mental health condition of 2006. To deal with this problem, we set up a more restrictive approach by considering individuals having been in employment at least 3 years between 2007 and 2010 (and not only in employment in 2010 precisely). The results are presented in Table 5.
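For illustration, such an indicator can be built from yearly employment dummies (the variable names are hypothetical; the actual construction relies on the Sip calendar variables):
* Alternative outcome: in employment at least 3 years out of 2007-2010
* emp_2007 to emp_2010 are illustrative yearly employment dummies
egen n_years_emp = rowtotal(emp_2007 emp_2008 emp_2009 emp_2010)
gen  emp_0710    = (n_years_emp >= 3)     // 1 if employed at least 3 of the 4 years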
Discussion and conclusion
This study demonstrates that a degraded mental health condition directly reduces the ability of men to remain in employment four years later, after controlling for socioeconomic characteristics, employment, general health status, risky behaviours and the nature of the past professional career. Our study confirms the importance of mental health when considering work and employment.
It appears appropriate to continue implementing public policies to support people with mental disorders from their entry into the labour market onwards, but also to extend them to common mental disorders such as depressive episodes and anxiety disorders, whose prevalence is high in France. We bring new elements with respect to sex differences in the impact of mental health, after controlling for general health status. In men, activity limitations and GAD play a specific and independent role in professional paths. In women, however, only general health indicators (perceived health and activity limitations) are capable of predicting future job situations. This differentiation between men and women is also confirmed in terms of the determinants of mental health, which is taken into account here by using different identifying variables according to sex. Consequently, accompanying measures for men at work could be helpful in keeping them on the labour market. The differences we find could also be explained, at least partly, by the fact that a man and a woman both declaring anxiety disorders or depressive episodes may depict two different realities. Notably, it is acknowledged that men tend to declare such issues when their troubles are already at a more advanced stage (in terms of intensity of symptoms) than women. Even though our indicators are relatively robust to false positives, this is not as much the case for false negatives (as explained in Appendix 5). It would also be interesting to determine the transmission channels of these differences. The distinction between GAD and MDE demonstrates the sensitivity of our results to the definition of mental health. As such, robustness checks using a mental health score, to better assess the nature and intensity of mental health degradations, would help to better assess its effect on employment. Yet no such score is available in the survey.
Introduction
In a context of changing and increasing work pressures [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], the question of working conditions has become even more acute. Notably, a law implemented in 2015 in France fits into this logic and either offers access to training programs in order to change jobs, or gives the most exposed workers an opportunity to retire earlier.
The relationship between employment, work and health status has received considerable attention in the scientific community, especially in fields such as epidemiology, sociology, management, psychology and ergonomics. From a theoretical standpoint in economics, the differences in wages between equally productive individuals can be explained by differences in the difficulty of work-related tasks, meaning workers with poorer working conditions are paid more than others in a perfectly competitive environment [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF]. In this framework, it is possible to imagine that health capital and wealth stock are substitutable, hence workers may use their health in exchange for income [START_REF] Muurinen | The economic analysis of inequalities in health[END_REF].
From an empirical point of view, the question of working conditions and their potential effects on health status becomes crucial in a general context of legal retirement age postponement being linked to increasing life expectancy and the need to maintain the financial equilibrium of the pension system. Prolonged exposures throughout one's whole career are indeed likely to prevent the most vulnerable from reaching further retirement ages, a fortiori in good health condition. However, this research area has received less attention because of important endogeneity problems such as reverse causality, endogenous selection and unobserved heterogeneity [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF] as well as the difficulty in fully embracing the diversity and magnitude of exposures. Nevertheless, a large majority of the studies agree that there is a deleterious effect on health status from detrimental working conditions.
In this paper, I examine the role of physical and psychosocial working conditions as well as their interactions when declaring chronic diseases. I expand on the aforementioned literature by two means. First, I rely on a sample of around 6,700 French male and female workers who participated in the French Health and Professional Path survey (Santé et Itinéraire Professionnel -Sip), for whom it is possible to use retrospective panel data for reconstructing their entire career from their entry into the labour market to the date of the survey. This allows me to resolve the inherent endogeneity in the relationship caused by selection biases and unobserved heterogeneity using a difference-in-differences methodology combined with matching methods. My second contribution arises from being able to establish and analyze the role of progressive and differentiated types of exposures and account for potentially delayed effects on health status. I believe such a work does not exist in the literature and that it provides useful insights for policy-making, particularly in regard to the importance of considering potentially varying degrees of exposures as well as the physical and psychosocial risk factors in a career-long perspective.
The paper first presents an overview of the economic literature (Section 1), the general framework of this study (Section 2), the data (Section 3) and empirical methodology (Section 4). Then, the results are presented, along with robustness checks and a discussion (Section 5, Section 6 and Section 7).
1. Literature
Global effect of work strain on health status
Unlike in fields such as epidemiology, working conditions and their impact on health status have not received much attention in the economic literature [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF]. Yet, this literature agrees on a deleterious average effect of work strain on workers' health capital. The numerous existing indicators used to assess this role usually classify the strains into two main categories: those related to physical or environmental burdens (expected to influence mostly physical health status) and psychosocial risk factors (supposed to play a major part in the deterioration of mental health).
Having a physically demanding job is known to impact self-rated health [START_REF] Debrand | Working Conditions and Health of European Older Workers[END_REF]. Notably, [START_REF] Case | Broken Down by Work and Sex: How Our Health Declines[END_REF] use multiple cross-sectional data to find that manual work significantly deteriorates self-assessed health status. This result is robust to the inclusion of classical socio-demographic characteristics such as education and it varies according to the levels of pay and skills involved. This was later confirmed by [START_REF] Choo | Wearing Out -The Decline of Health[END_REF], who also used cross-sectional data, controlling for chronic diseases and risky health behaviours. Using panel data, [START_REF] Ose | Working conditions, compensation and absenteeism[END_REF] also finds that working conditions have an influence on workers' health status. In a study on U.S. workers, the impact of facing detrimental environmental working conditions (weather, extreme temperatures or moisture) is found to specifically impact young workers' self-rated health status [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF].
This result, obtained on panel data using random effects ordered probits, accounts for initial health status. Datta Gupta and Kristensen (2008) use longitudinal data and cross-country comparisons to show that a favourable work environment and high job security lead to better health conditions, after controlling for unobserved heterogeneity.
Psychosocial risk factors have been studied more recently in the empirical literature [START_REF] Askenazy | Innovative Work Practices, Information Technologies, and Working Conditions: Evidence for France: Working Conditions in France[END_REF], even though their initial formulation in the psychological field is older [START_REF] Karasek | Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign[END_REF][START_REF] Theorell | Current issues relating to psychosocial job strain and cardiovascular disease research[END_REF]. Individuals in a situation of Job strain (i.e. exposed to high job demands and low decisional latitude) are found to suffer more frequently from coronary heart diseases [START_REF] Kuper | Job strain, job demands, decision latitude, and risk of coronary heart disease within the Whitehall II study[END_REF]. [START_REF] Johnson | Combined effects of job strain and social isolation on cardiovascular disease morbidity and mortality in a random sample of the Swedish male working population[END_REF] demonstrated that social isolation combined with Job strain correlates with cardiovascular diseases (Iso-strain situation). Mental health is also potentially impaired by such exposures. [START_REF] Laaksonen | Associations of psychosocial working conditions with self-rated general health and mental health among municipal employees[END_REF] show that stress at work, job demands, weak decision latitude, lack of fairness and support are related to poorer health status. [START_REF] Bildt | Gender differences in the effects from working conditions on mental health: a 4-year follow-up[END_REF] show that being exposed to various work stressors such as weak social support and lack of pride at work may be related to a worse mental health condition, while [START_REF] Cohidon | Working conditions and depressive symptoms in the 2003 decennial health survey: the role of the occupational category[END_REF] stress the role of being in contact with the public. Improving on this ground, part of the literature focuses on the role of rewards at work and how it might help in coping with demanding jobs [START_REF] Siegrist | Adverse health effects of high-effort/low-reward conditions[END_REF]. Notably, de Jonge et al. (2000) use a large-scale cross-sectional dataset to find effects of Job demands and Effort-Reward Imbalance on workers' well-being. [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] use three waves of European data on 15 countries. They take into account the endogeneity of working conditions related to selection on the labour market based on initial health status and find that job quality (in particular job demands) affects mental health.
The role of simultaneous and chronic exposures
Even though the economic literature on the topic of exposure to detrimental working conditions is scarce in regard to both simultaneous exposures (multiple exposures at once) and cumulative exposures (length of exposure to given strains), other fields such as epidemiology have demonstrated their importance in terms of work strains and their impact on health status [START_REF] Michie | Reducing work related psychological ill health and sickness absence: a systematic literature review[END_REF]. By its very nature, the literature that builds on Karasek's and Siegrist's frameworks already considers the combination of several psychosocial strains at once.
Biases
More often than not, the literature's assessment of the health-related consequences of exposures to working conditions is plagued with several methodological biases that can lead to potentially misleading results. First, the choice of a job is unlikely to be a random experience [START_REF] Cottini | Mental health and working conditions in Europe[END_REF], resulting in contradictory assumptions. In particular, healthier individuals may tend to prefer (self-selection) or to be preferred (discrimination) for more demanding jobs [START_REF] Barnay | The Impact of a Disability on Labour Market Status: A Comparison of the Public and Private Sectors[END_REF]. In this case, the estimations are likely to be biased downwards because individuals who are both healthier and exposed to demanding jobs are overrepresented in the sample (inducing a Healthy Worker Effect - [START_REF] Haan | Dynamics of health and labor market risks[END_REF]. Second, it is also reasonable to assume that workers with lesser health capital may have fewer opportunities in the labour market and thus be restricted to the toughest jobs, in which case an upward bias may result. Moreover, unobserved individual and temporal heterogeneities that are unaccounted for may also result in biased estimations [START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF]. Individual preferences and risk aversion behaviours as well as shocks, crises or other time-related events can cast doubt on the exogeneity hypothesis of working conditions [START_REF] Bassanini | Is Work Bad for Health? The Role of Constraint versus Choice[END_REF].
Due to a lack of panel data that includes detailed information on both work and health status over longer periods, few papers have actually succeeded in handling these biases. Notably, [START_REF] Cottini | Mental health and working conditions in Europe[END_REF] implemented an instrumental variable strategy on repeated cross-sectional data while relying on variations across countries in terms of workplace health and safety regulation, doing so in order to identify the causal effect of detrimental working conditions on mental health. In most cases, the difficulty in finding accurate and reliable instruments for working conditions means that selection biases and unobserved heterogeneity are either treated differently or avoided altogether when working on cross-sectional data.
2. General framework
The main objective of this study is to assess the role of varying levels of exposure to detrimental working conditions in declaring chronic diseases. To do so, I rely on a difference-in-differences framework which considers a chronic diseases baseline period, i.e., the initial number of chronic diseases before all possible exposures to work strains, and a follow-up period after a certain degree of exposure has been sustained (the latter being called the treatment). After labour market entry, employment and working conditions are observed and the treatment may take place. To allow for more homogeneity in terms of exposure and treatment dates, as well as to ensure that exposure periods cannot be too far separated from each other, I observe working conditions within a dedicated period (starting from the labour market entry year). In order to be treated, one must reach the treatment threshold within this observation period. Individuals not meeting this requirement are considered controls.
Minimum durations of work are also introduced: because individuals who do not participate in the labour market are likely to be very specific in terms of labour market and health characteristics, they are at risk of not being truly comparable to other workers [START_REF] Llena-Nozal | The effect of work on mental health: does occupation matter?[END_REF]. Indications: durations are given in years.
Reading: For the seventh threshold, an individual must reach 16 years of single exposure or 8 years of poly-exposure within the 24 years following labour market entry to be considered treated. Also, he/she must have worked at least 8 years within this period to be retained in the sample. His/her health status will be assessed by the mean number of yearly chronic diseases at baseline (the 2 years before labour market entry), and three more times (follow-up periods) after the end of the working conditions observation period. Source: Author.
Nine progressive exposure levels have been designed in order to assess potentially varying effects of increasing strains on declaring chronic diseases. In order to take into account the cumulative effects between strains, two types of exposure are considered (see first half of Table 6): single exposure (when an individual faced only one strain at a time in a given year) and poly-exposure (when an individual faced two or more strains simultaneously in a given year).
Then, the duration of exposure is accounted for by introducing varying minimum durations of exposure (thresholds). Empirically, this framework covers exposure thresholds ranging from 4 years of single exposure or 2 years of poly-exposure (first threshold) to, respectively, 20 and 10 years of exposure (ninth threshold), with a step of 2 years (resp. 1 year) from one threshold to the next for single (resp.
poly-) exposures. However, changing the treatment thresholds will, as a consequence, lead to other necessary changes in the framework, notably to the duration of the working conditions observation period and to the minimum duration at work within it (see second half of Table 6). More details about the choices made for these parameters can be found in Appendix 8.
Note that only the higher thresholds are presented in the rest of the paper (for simplification purposes), because the lower thresholds reveal no significant effect on chronic diseases from exposure to detrimental working conditions. Let us take the example of two fictive individuals in the seventh threshold sample to illustrate the framework. To be treated, the first individual needs to be exposed to at least 16 years of single exposures or 8 years of poly-exposures during the first 24 years after labour market entry. He also needs to have worked at least 8 years within this period to be retained in the sample. The second individual, in order to be in the control group, needs to have been exposed to less than 16 years of single exposures and less than 8 years of poly-exposures within the 24 years after labour market entry. He may or may not be exposed after the 24-year observation period but will, in any case, remain a member of the control group for the threshold level considered (the seventh in this example). Like the first individual, he needs to have worked at least 8 years within the observation period to remain in the sample. All in all, the only element separating the two is the fact that the first reached the exposure threshold within the working conditions observation period, when the second did not. In this study, I work with a reconstructed longitudinal retrospective dataset comprising more than 6,700 individuals, including their career and health-related data from childhood to the year of the survey. Thus, the final working sample is composed of around 3,500 men and 3,200 women, for whom complete information is available and who meet the specific inclusion criteria described in Section 2 (see also Appendix 8 for more details).
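To make the threshold-based treatment-assignment rule described above concrete, here is a minimal Python sketch; the function name, the example careers and the exact way a year is classified as single- or poly-exposed are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the treatment-assignment rule (hypothetical data).
# For a given threshold, an individual is treated if, within the working conditions
# observation window, he/she accumulates either enough years with exactly one strain
# (single exposure) or enough years with two or more simultaneous strains (poly-exposure).

def assign_treatment(yearly_strain_counts, single_years, poly_years, window):
    """yearly_strain_counts: number of strains faced each year since labour market entry."""
    observed = yearly_strain_counts[:window]
    n_single = sum(1 for c in observed if c == 1)
    n_poly = sum(1 for c in observed if c >= 2)
    return n_single >= single_years or n_poly >= poly_years

# Seventh threshold: 16 years of single exposure or 8 years of poly-exposure
# within the 24 years following labour market entry.
career_a = [1] * 17 + [0] * 7    # mostly single-exposed -> treated
career_b = [2] * 5 + [0] * 19    # briefly poly-exposed  -> control

print(assign_treatment(career_a, single_years=16, poly_years=8, window=24))  # True
print(assign_treatment(career_b, single_years=16, poly_years=8, window=24))  # False
```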
Data
Variables of interest
Working conditions: Definition of a treatment
Ten working conditions indicators, observed throughout the career, are grouped into three treatment types. The first one gathers the physical strains (such as a high physical load, repetitive work and exposure to hazardous materials). The second one forms the psychosocial risk factors, which include full skill usage, working under pressure, tensions with the public, reward, conciliation between work and family life
and relationships with colleagues. The third one represents the global exposure to both physical and psychosocial strains (and includes all ten working conditions indicators). For each indicator, individuals must declare whether they "Always", "Often", "Sometimes" or "Never" faced it during the period considered: I consider an individual to be exposed if he/she "Always" or "Often" declared facing these strains.
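As an illustration of this coding rule, the short sketch below counts, for a hypothetical year of declarations, how many indicators qualify as exposures; the indicator names and data structure are assumptions made for the example.

```python
# Sketch of the exposure-coding rule: an indicator counts as an exposure when it was
# faced "Always" or "Often" (hypothetical data structure).
EXPOSED_LEVELS = {"Always", "Often"}

def yearly_strain_count(responses):
    """responses: dict {indicator name: declared frequency} for one year of one individual."""
    return sum(1 for level in responses.values() if level in EXPOSED_LEVELS)

year = {"high physical load": "Often",
        "work under pressure": "Sometimes",
        "repetitive work": "Always"}
print(yearly_strain_count(year))  # 2 -> a poly-exposure year under the rule sketched earlier
```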
Chronic diseases
The indicator of health status is the annual number of chronic diseases: a chronic disease is understood in the Sip survey to be an illness that lasts or will last for a long time, or an illness that returns regularly. Allergies such as hay fever or the flu are not considered chronic diseases. This definition is broader than the French administrative definition, and it is self-declarative. This indicator is available from childhood to the date of the survey (2006).
Available chronic diseases include cardiovascular diseases, cancers, pulmonary problems, ENT disorders, digestive, mouth and teeth problems, bone and joint problems, endocrine, metabolic and ocular problems, nervous and mental illnesses, neurological problems, skin diseases and addictions. Table 7 gives a description of the sample used for the 7th threshold described above. I chose this specific threshold because it should give an adequate representation of the average of the studied population (as it is the middle point of the presented thresholds) and because it should not differ in non-treatment-related characteristics for the most part, the samples used for all thresholds being the same.
General descriptive statistics
The main conclusions of these descriptive statistics are, first, that the populations who are to be physically and globally treated in the future seem to be in a better initial health condition than their respective control groups. Such a difference cannot be found in the psychosocial sample. Second, no significant effect of the physical and global treatments is observed on subsequent numbers of chronic diseases. This is once again the opposite for the psychosocial subsample, which displays increasingly significant and negative differences in the number of chronic diseases between treated and control groups, thus revealing a potentially detrimental effect on health status from psychosocial exposures. However, because the structures of the treated and control groups are very heterogeneous in terms of observed characteristics, the differences in chronic diseases between the two groups for each period are likely to be unreliable.
Yet, for at least the physically and globally demanding jobs, there seem to be signs of a sizeable selection effect indicating that healthier individuals prefer or are preferred for these types of occupations.
In a similar fashion, Table 8 below gives more detailed information about the different components of the reconstructed indicators for working conditions and chronic diseases for the 7th threshold. The first half of the table gives the average number of years of exposure to the ten work strains used in this study. The second half of the table gives an overview of the 15 chronic disease families used and the average number of these faced by the sample. Note that these chronic disease statistics do not hold for a specific period of time, but rather account for the entire life of the sample up until the date of the survey. What can be learned from these descriptive statistics about working conditions is that the most common types of strains, in terms of mean number of years of exposure, are facing a high physical load, exposure to hazardous materials, repetitive work, work under pressure and the lack of recognition.
Important differences depending on the type of treatment can also be seen, quite logically: while exposures to a high physical burden, to hazardous materials and to repetitive work are predominant among the physically treated (in comparison to their control group), the lack of recognition and working under pressure are specific characteristics of the psychosocially exposed workers.
As can be seen from the second half of the table, the individuals of the seventh threshold faced differentiated types of chronic diseases during their lives. While addictions are on average rare, problems related to bones and joints are much more common.
Some expected differences between treated and control groups also appear (the physically treated declaring more bone/joint or pulmonary problems; the psychosocially treated more psychological issues). Yet some others are less intuitive (for instance, the physically treated group declares facing cancers less often than the control group). This is explained first by the fact that no specific period of time is targeted in these simple statistics: these cancers may therefore happen during childhood or during the working life but before an individual could reach the treatment threshold, in which case facing such issues early (relative to the treatment onset) most likely reduces the probability of being treated, especially in physical jobs.
4. Empirical analysis
4.1. Econometric strategy
The general framework of the difference-in-differences methodology is given by Equation 1 ([START_REF] Angrist | Mostly harmless econometrics: an empiricist's companion[END_REF]). The left-hand side member gives the observed performance difference between the treated and control groups. The first right-hand side member is the Average Treatment Effect on the Treated (ATT), and the far right-hand side member is the selection bias. The latter equals zero when the potential performance without treatment ($Y^0$) is the same whatever the group to which one belongs (independence assumption): $E[Y_i^0 \mid T_i = 1] = E[Y_i^0 \mid T_i = 0]$.
$$E[Y_i^1 \mid T_i = 1] - E[Y_i^0 \mid T_i = 0] = \underbrace{E[Y_i^1 - Y_i^0 \mid T_i = 1]}_{ATT} + \underbrace{E[Y_i^0 \mid T_i = 1] - E[Y_i^0 \mid T_i = 0]}_{\text{selection bias}} \qquad (1)$$
In practical terms, the estimation of the difference-in-differences for individual $i$ at times $t_0$ (baseline) and $t_1$ (follow-up) relies on the fixed-effects, heteroskedasticity-robust Within panel data estimator (see the footnote below) for the estimation of Equation 2, which explains the mean number of chronic diseases $CD_{it}$:

$$CD_{it} = \alpha + \beta\, P_t + \gamma\, T_i + \delta\, (T_i \times P_t) + \theta\, X_{it} + u_i + v_t + \varepsilon_{it} \qquad (2)$$

where $P_t$ is a dummy variable taking value 1 if the period considered is $t_1$; $T_i$ is a dummy variable for the treatment (taking value 1 when individual $i$ is part of the treated group); the cross variable $T_i \times P_t$ (the variable of interest) takes value 1 when individual $i$ is treated in $t_1$; $X_{it}$ is a vector of covariates; and $\beta$, $\gamma$, $\delta$ and $\theta$ are their respective coefficients. $u_i$, $v_t$ and $\varepsilon_{it}$, respectively, represent the individual and temporal unobserved heterogeneities and the error term. The main objective of a difference-in-differences framework is to get rid of both $u_i$ and $v_t$, as well as to account for the baseline situation ($CD_{it_0}$), which may differ between the two groups.

Footnote 12: It is also possible to estimate such a specification using the Ordinary Least Squares estimator and group-fixed unobserved heterogeneity terms. The results should be relatively close [START_REF] Givord | Méthodes économétriques pour l'évaluation de politiques publiques[END_REF], which has been tested and is the case in this study. Yet, panel data estimators appear to be the most stable because of the increased precision of the individual fixed effects in comparison to group-fixed effects, and thus have been preferred here.
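A minimal, self-contained sketch of estimating Equation 2 on a simulated two-period panel is given below; with only two periods, the within estimator reduces to regressing the first difference of the outcome on the treatment indicator. The simulated data, variable names and effect size are purely illustrative, and this is not the authors' Stata implementation (xtreg, fe).

```python
# Simulated two-period difference-in-differences, estimated by first-differencing
# (equivalent to the individual fixed-effects within estimator with two periods).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
ids = np.arange(n)
treated = rng.integers(0, 2, n)                      # treatment group indicator T_i
alpha_i = rng.normal(0, 0.5, n)                      # individual unobserved heterogeneity u_i
base = 0.5 + 0.3 * treated + alpha_i                 # baseline mean chronic diseases
follow = base + 0.2 + 0.25 * treated                 # true treatment effect delta = 0.25

panel = pd.DataFrame({
    "id": np.tile(ids, 2),
    "post": np.repeat([0, 1], n),
    "treated": np.tile(treated, 2),
    "cd": np.concatenate([base, follow]) + rng.normal(0, 0.3, 2 * n),
})
# Cross term of Equation 2; with two periods its coefficient equals the one obtained
# from the differenced regression estimated below.
panel["did"] = panel["post"] * panel["treated"]

diff = (panel[panel.post == 1].set_index("id")["cd"]
        - panel[panel.post == 0].set_index("id")["cd"]).rename("d_cd").reset_index()
diff["treated"] = treated

# Heteroskedasticity-robust standard errors; the coefficient on `treated` estimates delta.
print(smf.ols("d_cd ~ treated", data=diff).fit(cov_type="HC1").params["treated"])  # ~0.25
```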
In order to satisfy the independence assumption, i.e., to reduce the ex-ante differences between treated and control groups as much as possible and thus handle the selection bias existing in the sample, I perform a matching method prior to the difference-in-differences set-up using pre-treatment characteristics $Z_i$ related to health status and employment elements, so that $E[Y_i^0 \mid T_i = 1, Z_i] = E[Y_i^0 \mid T_i = 0, Z_i]$. A Coarsened Exact Matching method is implemented (CEM, [START_REF] Blackwell | cem: Coarsened Exact Matching in Stata[END_REF]). The main objective of this methodology is to allow the reduction of both univariate and global imbalances between treated and control groups according to several pre-treatment covariates [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]. CEM divides continuous variables into different subgroups based on common empirical support and can also regroup categorical variables into fewer, empirically coherent items. It then creates strata of individuals (treated or controls) sharing the same coarsened covariate values and matches them accordingly by assigning them weights (unmatched individuals are weighted zero; see footnote 13 below). This offers two main advantages compared to other matching methods. It helps in coping effectively with the curse of dimensionality by preserving sample sizes: coarsening variables in their areas of common empirical support ensures a decent number of possible counterfactuals for each treated observation in a given stratum, and therefore decreases the number of discarded observations due to the lack of matches. In addition, CEM reduces the model dependence of the results [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]. Yet, this matching method is still demanding in terms of sample size, and only pre-treatment variables (i.e. variables determined before the exposure to detrimental working conditions) must be chosen (see footnote 14 below).
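The stratification-and-weighting logic behind CEM can be illustrated with the following sketch; the bin edges, covariates and simulated data are hypothetical, and the actual analysis relies on the Stata cem package cited in the footnotes below.

```python
# Illustrative sketch of Coarsened Exact Matching: coarsen covariates into bins, form
# strata of identical coarsened profiles, discard strata lacking treated or control units,
# and reweight matched controls stratum by stratum.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "entry_year": rng.integers(1960, 1990, n),
    "education": rng.integers(0, 4, n),          # already categorical: kept as-is
})
df["entry_bin"] = pd.cut(df["entry_year"], bins=[1959, 1969, 1979, 1989])  # coarsening

m_t, m_c = (df.treated == 1).sum(), (df.treated == 0).sum()
df["cem_weight"] = 0.0                           # unmatched individuals keep weight 0

for _, stratum in df.groupby(["entry_bin", "education"], observed=True):
    n_t, n_c = (stratum.treated == 1).sum(), (stratum.treated == 0).sum()
    if n_t > 0 and n_c > 0:                      # the stratum is matched
        df.loc[stratum[stratum.treated == 1].index, "cem_weight"] = 1.0
        df.loc[stratum[stratum.treated == 0].index, "cem_weight"] = (m_c / m_t) * (n_t / n_c)

print(df.groupby("treated")["cem_weight"].describe())
```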
Footnote 13: The weight value for matched control individuals equals $(m_C / m_T)\,(m_T^s / m_C^s)$, with $m_T^s$ and $m_C^s$ representing the sample sizes for, respectively, the treated and control groups in stratum $s$, and $m_T$ and $m_C$ the total sample sizes for both groups (matched treated individuals are weighted one).
Footnote 14: The data management has been done using SAS 9.4. The econometric strategy is implemented in Stata 11 using, respectively, the Coarsened Exact Matching (CEM) package and the "xtreg" procedure. Some robustness checks have also been conducted using the Diff package and the "regress" procedure.

4.2. Matching variables and controls
Matching pre-treatment variables are chosen so that they are relevant in terms of health status and labour market status determination, in addition to helping cope with the (self-)selection bias (individuals sustaining high levels of exposure are bound to be particularly resilient or, in contrast, particularly deprived of better opportunities in the labour market).
Individuals are matched according to their: year of entry into the labour market (in order to get rid of temporal heterogeneity related to generation/conjuncture effects); gender [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]; education level (four levels: no education, primary or secondary, equivalent to bachelor degree and superior); health status before labour market entry (heavy health problems and handicaps) to have a better assessment of their initial health status and to cope with endogenous sorting in the labour market; and important events during childhood, aggregated into two dummy variables (on the one hand, heavy health problems of relatives, death of a relative, separation from one or more parent; on the other hand, violence suffered from relatives and violence at school or in the neighbourhood), as it is pretty clear that such childhood events may impact early outcomes in terms of health status [START_REF] Case | The lasting impact of childhood health and circumstance[END_REF][START_REF] Lindeboom | An econometric analysis of the mentalhealth effects of major events in the life of older individuals[END_REF]. Matching the samples on such variables is bound to reduce the initial heterogeneity existing between the treated and control groups, as well as to limit the selection bias into employment and into different degrees of exposure, as part of the individuals' resilience to work strains is accounted for notably by proxy variables for their initial health capital.
After reaching the treatment threshold, workers can still be exposed to varying levels of working conditions. This possibility of post-treatment exposures is accounted for by a control variable in the difference-in-differences models (taking the value 0 at baseline and, at follow-up, one of three increasing values depending on whether the individual has been hardly, a little or heavily exposed to detrimental work strains during this post-treatment period). Health habits are also controlled for in the difference-in-differences models by adding a variable indicating whether individuals, at any given time, are daily smokers or not. The idea behind this is that health-related behaviours (such as tobacco and alcohol consumption, being overweight and other health habits) are bound to be correlated with each other as well as with exposures to work strains and with the declaration of chronic diseases, all of which induce biased estimates when unaccounted for.
This variable takes the value 0 when an individual is not a daily smoker and the value 1 if he/she is, in either the baseline or follow-up periods.
Matched descriptive statistics
The naive results (descriptive statistics presented in Section 3.3 and the unmatched difference-in-differences results presented in Section 5.1) tend to confirm the possibility of a (self-)selection bias in the sample, suggesting that people are likely to choose their job while considering their own initial health status; in any case, the results justify an approach that takes this possibility into account. In order to minimize this selection process, a matching method is used prior to the difference-in-differences models.
Table 9 gives a description of the same seventh-threshold sample presented earlier (for comparison purposes), after CEM matching. The matching method succeeds in reducing the observed structural heterogeneity between the treated and control groups for every single pre-treatment covariate. Residual heterogeneity still exists, namely for the year of entry into the labour market and age, but it is minor and, in any case, statistically non-significant (a difference of less than a month in terms of labour market entry year and of approximately a quarter for age). It is also interesting to note that initial health status differences are also greatly reduced and that larger negative follow-up differences between treated and control groups can now be observed, making the hypothesis of a detrimental impact of working conditions on health status more credible.
5. Results
5.1. Naive results
The results for the unmatched, naive difference-in-differences models for the five presented thresholds are given in rows in Table 35, Table 36 and Table 37 (Appendix 9), and can be interpreted as differences between groups and periods in the mean numbers of chronic diseases. Despite not taking into account the possibility of endogenous selection in the sample nor differences in observable characteristics between the two groups' structures, these models do take care of unobserved, individual-fixed heterogeneity. As expected after considering the sample description given in Table 7, unmatched baseline differences (i.e.
differences in chronic diseases between treated and control populations before labour market entry) display statistically significant negative differences between the future physically treated and controls in men (Table 35). These differences cannot be witnessed in women or for the psychosocial treatment (Table 36). The possibility of endogenous sorting hence cannot be excluded. The positive follow-up differences (i.e. differences in the numbers of chronic conditions between treated and control populations after the treatment period, not accounting for initial health status) indicate that the treated population reported higher numbers of chronic diseases than the control group on average. Logically, these differences grow in magnitude as the exposure degree itself becomes higher.
Difference-in-differences results (i.e. the gap between treated and control populations, taking into account differences in initial health status) suggest a consistent effect of detrimental work strains on the declaration of chronic conditions, which increases progressively as exposures intensify. Physical strains appear to play a role in the declaration of chronic diseases from lower exposure thresholds, in both women and men, whereas the effects of psychosocial strains seem to require higher levels of exposure to become statistically significant; the same holds for the global treatment (Table 37). These effects do not turn out to be short term only, as the differences tend to grow bigger when considering later periods of time.
5.2. Main results
The results for the matched difference-in-differences models for the five thresholds are provided in Table 10, Table 11 and Table 12 below. These results, relying on matched samples, take care of the selection biases generated by endogenous sorting in the labour market and observed heterogeneity, as well as of unobserved individual-fixed and time-varying heterogeneities as a result of using difference-in-differences frameworks. Interpretation (Tables 10 to 12): ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups, respectively, before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e., the difference between follow-up and baseline differences). The mean chronic diseases column indicates the mean number of chronic diseases of the treated population in the health period considered. The N column gives the sample sizes for, respectively, the treated and total populations. The last column denotes the percentage of the initial sample that found a match for, respectively, the treated and control groups. It should be noted that around 90% of the initial sample is preserved after matching in the physical and psychosocial samples, and that at least 80% of the sample is preserved for the global treatment (because of the higher number of treated individuals). Matching the samples on pre-treatment variables consistently succeeds in reducing initial health status gaps between treated and control groups, to the point where none of them remain in the matched results.
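As a reading aid for these tables, the baseline, follow-up and diff.-in-diff. columns can be reproduced from simple (weighted) group means, as in the short sketch below; the column names and the toy panel are illustrative only.

```python
# Sketch of how the three headline columns relate to weighted group means
# (hypothetical columns: cd, treated, post, weight).
import pandas as pd

def weighted_mean(g, col="cd", w="weight"):
    return (g[col] * g[w]).sum() / g[w].sum()

def did_columns(panel):
    means = {(t, p): weighted_mean(panel[(panel.treated == t) & (panel.post == p)])
             for t in (0, 1) for p in (0, 1)}
    baseline_diff = means[(1, 0)] - means[(0, 0)]
    followup_diff = means[(1, 1)] - means[(0, 1)]
    return baseline_diff, followup_diff, followup_diff - baseline_diff

demo = pd.DataFrame({
    "treated": [1, 1, 0, 0, 1, 1, 0, 0],
    "post":    [0, 0, 0, 0, 1, 1, 1, 1],
    "cd":      [0.5, 0.7, 0.5, 0.7, 1.4, 1.6, 1.0, 1.2],
    "weight":  [1.0] * 8,
})
print(did_columns(demo))  # approximately (0.0, 0.4, 0.4): no baseline gap, +0.4 after treatment
```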
It appears that men are clearly much more exposed to detrimental working conditions than women, especially in physically demanding jobs, but also, to a lesser extent, regarding psychosocial risk factors; the gender gap is also marked for the global treatment. A clear impact of exposures to work strains on the declaration of chronic diseases can be observed in the difference-in-differences results (columns 5 and 6). Treated workers indeed seem to suffer from a quicker degradation trend in their health status than their respective control groups. This trend exists between levels of exposure (thresholds), but it is also suggested by the evolution of the number of chronic diseases across health status observation periods, even though these differences in means are unlikely to be statistically significant. This main result holds for all treatment types and for both genders, and it tends to demonstrate possible long-term effects of exposures rather than only short-term consequences.
In the physical sample, the first significant consequences in terms of health status degradation can be seen in women starting from the fifth threshold (i.e., after 12 years of single exposure or 6 years of simultaneous exposures), while this is the case much later in men, at the eighth threshold (i.e., after at least 18 years of single or 9 years of simultaneous exposure). The differences between treated and control groups in the mean number of chronic diseases then keep increasing up to the ninth threshold in both women and men. In order to have an idea of the meaning of these differences, it is possible to compare them to the mean number of chronic diseases in the treated population after the treatment occurred, given in column 7. In physically exposed women (resp. men), exposures to work strains may account for 20% to 25% of their chronic diseases (resp. a little more than 10%). Psychosocial strains have a more homogeneous initial impact on the declaration of chronic diseases, with sizeable health status consequences appearing at the sixth threshold in men (i.e., 14 years of single or 7 years of simultaneous exposure) and at the seventh threshold in women (i.e., 16 or 8 years of exposure). The differences then keep growing with the level of exposure in both sexes. Thus, in psychosocially exposed women (resp. men), approximately 21% of chronic diseases in the treated population can be explained by psychosocial strains (resp. 17%). For the global treatment, the effects of exposures appear from intermediate thresholds onwards and increase thereafter. According to the results for this global type of exposure, 20% (resp. 10% to 15%) of exposed women's (resp. men's) chronic diseases come from combined physical and psychosocial job strains.
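As a reading aid, these attributable shares are obtained by relating the difference-in-differences estimate to the mean number of chronic diseases of the treated group at follow-up (column 7); the numbers below are purely illustrative and are not taken from the tables.

$$\text{attributable share} = \frac{\hat{\delta}}{\overline{CD}^{\,\text{treated}}_{t_1}}, \qquad \text{e.g.} \quad \frac{0.30}{1.50} = 20\%.$$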
The effects of the global treatment appear weaker in terms of onset and intensity, which is most likely due to the fact that the exposure thresholds are easier to reach because of the greater number of working condition indicators considered. Nevertheless, even though women are less exposed than men to work strains, it seems that their health status is more impacted by them.
6. Robustness checks
Common trend assumption
In order to ensure that the results obtained using a difference-in-differences framework are robust, one needs to assess whether the treated and control groups share a common trend in terms of the number of chronic diseases before all possible exposures to detrimental working conditions, i.e., before labour market entry. The evolution of the mean number of chronic diseases is therefore plotted for the three matched samples at the seventh threshold. The first panel represents the baseline period and stops at the mean year of labour market entry for this sample. From all three graphs, one can see that both treated and control groups share the same trend in terms of a rise in chronic diseases. This is no longer the case after labour market entry. The common trend hypothesis therefore seems to be corroborated. It should be noted that the test results on unmatched samples (available upon request) are rather close, but they are not as convincing.
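A minimal sketch of such a graphical check on simulated pre-labour-market-entry data is given below; the parallel trends imposed in the simulation, the column names and the plotting choices are illustrative assumptions.

```python
# Sketch of a common-trend check: mean number of chronic diseases per calendar year,
# by future treatment status, over the baseline (pre-labour-market-entry) period.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
years = np.arange(1950, 1971)
demo = pd.DataFrame(
    [(g, y, 0.02 * (y - 1950) + rng.normal(0, 0.01)) for g in (0, 1) for y in years],
    columns=["treated", "year", "cd"],
)

trends = demo.groupby(["treated", "year"])["cd"].mean().unstack("treated")
trends.plot(marker="o")
plt.xlabel("Year")
plt.ylabel("Mean number of chronic diseases")
plt.title("Baseline period: treated vs control (illustrative)")
plt.show()
```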
Model dependency
I also test whether the results obtained using matched difference-in-differences could be obtained more easily by relying only on a matching method. Yet because CEM is not in itself an estimation method, I set up a simple, heteroskedasticity-robust specification that was estimated using Ordinary Least Squares on matched data with the same control variables (specification 3), followed by a comparison of the results with those obtained through difference-in-differences using specification 2 (Table 38, Appendix 11).
$$CD_{i t_1} = \alpha + \delta\, T_i + \theta\, X_{i t_1} + \varepsilon_i \qquad (3)$$
The results for all three samples indicate that, in terms of statistical significance, the detrimental impact of exposure to work strains on the number of chronic diseases is confirmed. This is not very surprising, as CEM has the particularity of reducing the model dependence of the results [START_REF] Iacus | Matching for Causal Inference Without Balance Checking[END_REF]. Yet, the amplitude of the effect is mostly a bit higher in OLS. This could be explained by the fact that these simple OLS regressions neither account for initial differences in terms of health status, nor do they take into account individual and temporal unobserved heterogeneities, when both these phenomena go in opposite directions. As a consequence, the difference-in-differences results are preferred here because of their increased stability and reliability. It should be noted that, logically, single exposures induce a weaker effect on the number of chronic diseases than poly-exposures. All the results still converge towards a positive and statistically significant effect of exposures on the declaration of chronic diseases. In addition, the differences in intensity that can be observed between individuals exposed to 16 years of single exposures and those exposed to 8 years of simultaneous exposures do not appear to be statistically significant.
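For completeness, specification (3) could be estimated as a weighted, heteroskedasticity-robust OLS on the matched follow-up observations, along the lines of the sketch below; the column names (including the CEM weight) are hypothetical and mirror the earlier sketches rather than the authors' Stata code.

```python
# Sketch of specification (3): OLS of follow-up chronic diseases on the treatment dummy
# and the same controls, weighted by the CEM weights, with robust standard errors.
import statsmodels.formula.api as smf

def matched_ols(df):
    follow = df[(df.post == 1) & (df.cem_weight > 0)]        # matched follow-up observations
    model = smf.wls("cd ~ treated + smoker + post_exposure",
                    data=follow, weights=follow["cem_weight"])
    return model.fit(cov_type="HC1")

# Usage (given a panel built as in the earlier sketches, with the extra columns added):
# print(matched_ols(panel).params["treated"])
```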
Health habits
Even though a part of the role that health habits play in the relationship between working conditions and health (possibly generating endogeneity issues) is accounted for by controlling for the evolutions in tobacco consumption in the difference-in-differences, other behaviours are not taken into account directly (because they cannot be reconstructed in a longitudinal fashion using Sip data), even if they are likely correlated with smoking habits.
Table 40 (Appendix 13) presents an exploratory analysis of the differences in wages and risky health-related behaviours in 2006 between treated and control groups for all three treatments. In unmatched samples, important differences can be observed in terms of monthly wage, regular physical activity, alcohol and tobacco consumption and being overweight. The treated group on average earns less and does less sport but has more health-related risky behaviours than the control group. In matched samples, no statistically significant difference remains between the two groups in 2006 except for wages. This indicates that the treatment effects presented here should not pick up specific effects of health-related behaviours, except possibly those related to health investments (as the control groups are generally richer than the treated groups).
Gender gap
Important gender differences appear to exist in terms of effects from a certain degree of exposure to detrimental working conditions. In order to try and explain these differences, an exploratory analysis specifically on year 2006 has been conducted in Appendix 14.
First, men and women may be employed in different activity sectors, the latter being characterized by different types of exposures to working conditions (Table 41). As expected, very large differences exist in the gender repartition as well as work strain types encountered within activity sectors. Thus, it is likely that men and women are not exposed to the same types of strains. Table 42 confirms this intuition and indicates that, for at least five out of ten working conditions indicators, a statistically significant difference exists between men and women in terms of repartition into strains.
As a consequence, the explanation for this gender gap in working conditions and health is most likely twofold. First, there might be declarative social heterogeneity between men and women: they may not experience an objectively comparable job situation in the same way, just as they may not experience an objectively comparable health condition in the same way [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Second, men and women may not be exposed to the exact same typology of working conditions within a certain treatment. Even though belonging to a specific treatment group ensures a quantitatively similar exposure (in terms of number of strains at a given time and in terms of lengths of exposure), it does not completely ensure that the types of strains are qualitatively equivalent, which in turn could explain part of the observed differences. Yet, this hypothesis should be partially relaxed by the use of two different treatment types (one handling physical demands and another psychosocial risk factors).
7. Discussion and conclusion
In this study, I use French retrospective panel data to highlight links that physical and psychosocial working conditions -separately and combined -have with chronic diseases in exposed males and females. Workers facing gradually increasing strains in terms of duration or simultaneity of exposure are more frequently coping with rising numbers of chronic diseases. Using combined difference-in-differences and matching methods, the empirical strategy helps to handle both (self-)selection in the labour market based on health status and other observable characteristics as well as unobserved individual and temporal heterogeneity.
Based on a career-long temporal horizon for exposures and health status observation periods, I find major differences in health conditions between treated and control groups, which are very likely the result of past exposures to work strains. To my knowledge, this is the first paper to work on both the simultaneous and cumulative effects of two distinct types of work strains and their combination with such a large temporal horizon, while acknowledging the inherent biases related to working conditions.
However, the paper suffers from several limitations. First, working with retrospective panel data and long periods of time leads to estimates being at risk of suffering from declaration biases. The individuals are rather old at the date of the survey, and their own declarations in terms of working and health conditions are therefore likely to be less precise (recall biases) or even biased (a posteriori justification or different conceptions according to different generations). Even if it is impossible to deal completely with such a bias, matching on entry year into the labour market (i.e., their generation) and on education (one of the deciding factors when it comes to memory biases) should help in reducing recall heterogeneity. Also, simple occupational information notably tends to be recalled rather accurately, even over longer periods [START_REF] Berney | Collecting retrospective data: Accuracy of recall after 50 years judged against historical records[END_REF]). Yet, justification biases most likely remain (for instance, ill individuals may declare they faced detrimental working conditions more easily because of their health condition), especially considering the declarative nature of the data.
Second, potential biases remain in the estimations. I work on exposures happening during the first half of the professional career (i.e., to relatively young workers), at a time when individuals are more resilient to these strains. This means that the impact found in this study would most likely be higher for an equivalent exposure level if an older population were targeted. I am also unable to completely account for possible positive healthcare investments in the treated population, because if the most exposed are also better paid (hedonic price theory, [START_REF] Rosen | Hedonic Prices and Implicit Markets: Product Differentiation in Pure Competition[END_REF] this wealth surplus could be used for relatively more health capital investments. Alternatively, the treated and control groups may have different health habits.
Hence it is possible that the mean results I find are once again biased. Yet, even though wealth-type variables are endogenous, this hypothesis has been tested empirically with an alternative specification in the study by [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] and they were found to be irrelevant. Also, health-related risky behaviours are at least partly accounted for by implementing a variable for tobacco consumption in the difference-in-differences model.
Another important point about potentially remaining biases in the estimations is that time-varying individual unobserved heterogeneity is still unaccounted for. For instance, a specific unobserved shock impacting both exposures to work strains and chronic diseases, with heterogeneous effects depending on individuals, cannot be accounted for (one can think, for example, of an economic crisis, which usually degrades average work quality and may also deteriorate individuals' health status; in this particular case, the estimations are at risk of being biased upwards). One must thus be careful concerning the causal interpretation of the results.
Third, because of the method I use and the sample sizes I am working with, it is not possible to clearly analyse the potential heterogeneity of the effect of working conditions on health status across demographic and socio-economic categories, even though this mean effect is known to vary [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF][START_REF] Muurinen | The economic analysis of inequalities in health[END_REF]. Fourth, part of the selection process into a certain level of exposure possibly remains. Considering that the sample is matched on elements of human and health capital and because I consider only homogeneous individuals present in the survey for at least 38 years (who worked at least 10 years and for whom post-treatment exposures are controlled for), I should have rather similar individuals in terms of resilience to detrimental working conditions, i.e., with similar initial abilities to sustain a certain level of severity of exposure. So, to some extent at least, the selection into a certain level of treatment is acknowledged. Yet, it is impossible to directly match the samples on whether or not they reached a certain level of treatment (because it is endogenous). Because of that, it is likely that some degree of selection remains (notably, only the "survivors" are caught in the data, which possibly induces downward-biased estimations). It should also be noted that part of the heterogeneity of the results between men and women can still be explained by declarative social heterogeneity regarding their working and health conditions as well as by qualitative differences in their exposures, both elements which cannot really be accounted for using such declarative data.
Finally, I use a wide definition of chronic conditions as an indicator for health status. This indicator does not allow for direct comparisons with the literature (commonly used indicators, such as self-assessed health status or activity limitations, are not available on a yearly basis).
Yet, I believe that it may represent a good proxy of general health status while at the same time being less subject to volatility in declarations compared to self-assessed health (i.e., more consistent).
These results justify more preventive measures being enacted early in individuals' careers, as it appears that major health degradations (represented by the onset of chronic conditions) tend to follow exposures that occur as early as the first half of the career. These preventive measures may first focus on workers in physically demanding jobs while also targeting workers facing psychosocial risk factors, the latter still being uncommon in public policies.
These targeted schemes may benefit both society in general (through higher levels of general well-being at work and reduced healthcare expenditures later in life) and firms (more productive workers and fewer sick leaves). It notably appears that postponing the legal age of retirement must be backed up by such preventive measures in order to avoid detrimental adverse health effects linked to workers being exposed longer, while also taking into account both types of working conditions (which is not the case in the 2015 French pension law).
Today, the human and financial costs of exposures to detrimental working conditions seem undervalued in comparison to the expected implementation cost of these preventive measures.
I thank the members of the Érudite research centre for their useful advice. Finally, I thank the two anonymous reviewers of the Health Economics journal.
Introduction
Traditional structural reforms for a pay-as-you-go pension system in deficit rely on lower pensions, higher contributions or increases in the retirement age. The latter was favoured through the indirect means of increases in the contribution period required to obtain a full-rate pension (Balladur 1993 and Fillon 2003 reforms) or through the direct increase in the legal age of retirement (Fillon 2010 reform), with a gradual transition from 60 to 62. However, the issue of funding pensions occults other specifics of the pension system that may play a role on health status and ultimately on the finances of the health insurance branch and the management of long-term care. Exposure to harsh working conditions and the impact of ill health on the employment of older workers, notably, are already well documented in France.
The effect of transitioning into retirement has not received the same attention in the French economic literature (besides [START_REF] Blake | Collateral effects of a pension reform in France[END_REF]. Retirement in France mostly remains an absorbing state (relatively few individuals combine retirement benefits with paid jobs). It can thus be seen in many cases as an irreversible shock.
The sharp transition into retirement can often affect perceived health status, but the nature of the causal relationship between retirement and health can also be bidirectional due to retirement endogeneity.
Before retirement, health status already appears as one of the most important non-monetary drivers of the trade-off between work and leisure in older workers [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Lindeboom | Health and Work of Older Workers[END_REF]. Although the nature of the relationship between health and employment appears obvious, studying causal impacts is complex [START_REF] Strauss | Health, Nutrition and Economic Development[END_REF]. The retirement decision may free individuals from a job strain situation. When examining the relationship between work and health, the former can indeed be beneficial to the latter, but the arduous nature of certain working conditions may also deteriorate health.
The retirement decision is indeed partly motivated by health status, healthier individuals tending to remain in employment. In contrast, a poor health condition reduces labour supply and causes early exit from the labour market. Many studies have highlighted the existence of a healthy worker effect, testifying to the selection on the labour market of the most resilient workers. A poor health status may speed up the retirement decision [START_REF] Alavinia | Unemployment and retirement and ill-health: a crosssectional analysis across European countries[END_REF][START_REF] Jones | Sick of work or too sick to work? Evidence on selfreported health shocks and early retirement from the BHPS[END_REF]: notably, [START_REF] Dwyer | Health problems as determinants of retirement: Are selfrated measures endogenous[END_REF] show that sick workers can bring their retirement plans forward by one or two years. Using the ECHP (European Community Household Panel), [START_REF] García-Gómez | Institutions, health shocks and labour market outcomes across Europe[END_REF] studies the effect of a health shock on employment in nine European countries. The results obtained from a matching method suggest that health shocks have a negative causal effect on the probability of being employed. People with health problems are more likely to leave employment and transit to situations of disability.
Moreover, it is difficult to isolate the health-related effects of retirement from those of the natural deterioration related to ageing, and many unobservable individual characteristics are also able to explain not only retirement decision behaviours but also health status indicators (subjective life expectancy, risk aversion or the disutility of labour supply). Finally, retirement, considered as a form of non-employment, may generate a feeling of lost social utility, which can lead to declining cognitive functions and a loss of self-esteem.
In this paper, we study the role of retirement on several physical and mental health status indicators. In order to take care of the inherent endogeneity biases, we set up an instrumental variable approach relying on discontinuities in the probability to retire generated by legal incentives at certain ages as a source of heterogeneity. Thanks to the Health and Professional
Path survey (Santé et Itinéraire Professionnel -Sip) dataset, we are able to control for a variety of covariates, including exposures to detrimental working conditions throughout the whole career. We also acknowledge the likely heterogeneity of the effect of retirement and the possible mechanisms explaining its effects on health status. To our knowledge, no study evaluates the effect of the retirement decision on the physical and mental health conditions of retirees, after taking into account biases associated with this relationship as well as exposures to working conditions and the nature of the entire professional career.
The paper is organized as follows. Section 1 is dedicated to an empirical literature review of the relationships between retirement and health status. Sections 2 and 3 describe the database, Section 4 presents the empirical strategy, Section 5 presents the results and Section 6 concludes.
Background and literature
French retirees enjoy a rather advantageous relative position compared with other similar countries. The retirement age is comparatively lower (62, while the standard is 65 in most other countries such as Japan, Sweden, the U.K., the U.S. or Germany). The share of public expenditure devoted to the pension system is 14%, with only Italy devoting a larger share of its wealth. The net replacement rate is 68%, which places France among the most generous countries, with Italy and Sweden. In contrast, the Anglo-Saxon countries relying on funded schemes have lower replacement rates, and the share of individual savings in retirement income is much higher than in countries with pay-as-you-go pension systems. This favourable position is confirmed by life expectancy indicators at 65 and by poverty levels.
The life expectancy of French men and women aged 65 or over is systematically higher than that observed in the other countries (except for Japanese women, who can expect to live 24 years compared to 23.6 years in France). The poverty rate among the elderly is the lowest of all the countries mentioned here (3.8% in France compared to 12.6% on average for the OECD).
Even though the issue of the links between health and work has many microeconomic and macroeconomic implications, the French economic literature is still relatively scarce compared to the number of international studies on the subject [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. The deterioration of health status first changes preferences for leisure and decreases individuals' work capacity or productivity. The [START_REF] Grossman | On the Concept of Health Capital and the Demand for Health[END_REF][START_REF] Grossman | The human capital model[END_REF] model indicates that each individual has a health capital that depreciates with age. Any health event affects the career path via potential stock effects (an instant exogenous shock), via the depreciation rate of this health capital and, more generally, via future investments in human capital (primary or secondary prevention in health). Disease can lead individuals to reallocate time between work and leisure. A deterioration in health therefore reduces labour supply. Conversely, poor working and employment conditions can affect health status and generate costs for the firm (related to absenteeism). Stressful work situations can also increase healthcare consumption and the number of sickness-related daily allowances.
The specific relationship between non-employment and health has received very little attention in France unlike in Europe [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]. In general, job loss is associated with a deterioration of well-being. Persistent unemployment and recurrent forms of non-employment have a deleterious effect on health, for example overweight and alcohol consumption [START_REF] Deb | The effect of job loss on overweight and drinking[END_REF]. Unemployment and inactivity, happening early in the professional life, can promote the onset of depressive symptoms thereafter, as shown by Mossakowski in 2009 on U.S. longitudinal data. Furthermore, job loss increases mortality [START_REF] Sullivan | Job Displacement and Mortality: An Analysis Using Administrative Data *[END_REF]. Finally, many studies agree on a negative effect of unemployment on health [START_REF] Böckerman | Unemployment and self-assessed health: evidence from panel data[END_REF][START_REF] Browning | Effect of job loss due to plant closure on mortality and hospitalization[END_REF]Eliason andStorrie, 2009a, 2009b;[START_REF] Kalwij | Health and labour force participation of older people in Europe: what do objective health indicators add to the analysis?[END_REF].
The effects of retirement on health status are not trivial. Two competing hypotheses can be advanced. Retirement can first free individuals from job strain situations and may improve their health condition in the short run. This virtuous circle will be sustainable provided that individuals have a capacity to invest in their health (income effect). Many international empirical studies show that retirement is beneficial to health status [START_REF] Blake | Collateral effects of a pension reform in France[END_REF][START_REF] Charles | Is Retirement Depressing?: Labor Force Inactivity and Psychological Well-Being in Later Life[END_REF][START_REF] Coe | Retirement effects on health in Europe[END_REF][START_REF] Grip | Shattered Dreams: The Effects of Changing the Pension System Late in the Game*: MENTAL HEALTH EFFECTS OF A PENSION REFORM[END_REF][START_REF] Insler | The Health Consequences of Retirement[END_REF][START_REF] Neuman | Quit Your Job and Get Healthier? The Effect of Retirement on Health[END_REF].
Coe and Zamarro (2011) measure the health effect of retirement and conclude that it decreases the likelihood of reporting poor perceived health (35%) after controlling for reverse causality. However, this effect is not observed with the two depression indicators. In the U.K., [START_REF] Bound | Estimating the Health Effects of Retirement[END_REF] found a positive but transitory health effect of retirement, only in men. The retirement decision can also generate a loss of social role [START_REF] Kim | Retirement transitions, gender, and psychological well-being: a life-course, ecological model[END_REF], a reduction of social capital and therefore a deterioration in mental health, strengthened in the case of a negative impact on the living standards. Other studies also reach opposite results
including mental health (cognitive abilities) [START_REF] Behncke | Does retirement trigger ill health?[END_REF][START_REF] Bonsang | Does retirement affect cognitive functioning[END_REF][START_REF] Dave | The effects of retirement on physical and mental health outcomes[END_REF][START_REF] Mazzonna | Aging, Cognitive Abilities and Retirement in Europe[END_REF][START_REF] Rohwedder | Mental Retirement[END_REF]. Overall, the positive effect of retirement on health status seems to prevail, except for cognitive abilities.
To our knowledge, only a few studies have tried to estimate the effect of transitioning into retirement on health in France; they show that the retirement decision improves physical health for non-qualified workers.
Data
In order to avoid an overly heterogeneous sample, we select individuals aged 50-69 in 2010 for whom all the information needed on pensions and health status is available.
Thus, we work on a sample of 4,610 individuals. 2,071 of them are retired.
Descriptive statistics
The general descriptive statistics for the 50-69 year-old sample are available in Table 13. The first four columns give information about the whole sample, the fifth column ( ) gives the number of individuals belonging to the category in each row, and the last three columns respectively give the averages in the retired and non-retired populations and the significance of the difference between the two.
The most important element to notice in these simple descriptive statistics is that retirees apparently systematically self-report a worse general health condition and a better mental health status than non-retirees. Obviously, these raw statistics do not account for other characteristics, notably the 8-year difference in average age between the two populations. Indeed, 38% of the retired population declare poor levels of self-assessed health (against 36% in the non-retired population), 50% a chronic disease (against 40%) and 26% being limited in daily activities (vs. 24%). The picture is different for mental health indicators, which indicate that the retired population suffers less from anxiety disorders (5%) and depressive episodes (6%) than the control group (resp. 8% and 9%). Exposure to harsh physical and psychosocial working conditions is much higher among retirees than among non-retirees, as it is likely that the last years of professional life are marked by greater exposures. Finally, retirees are more prone to having social activities such as associations, unions, religious or artistic activities (48% vs. 38%), have more physical activities (45% vs. 40%), are less often smokers (16% vs. 27%, most likely at least partly indicating a selection effect, the heaviest smokers having a shorter life expectancy) but are more often overweight (60% vs. 52%) than the rest of the population. Figure VI displays the proportion of retirees in the sample according to age: each point represents the proportion of retirees at a given age (starting from less than of retirees at age 50 to at age 69). Each 5-year category from age 50 to 69 has been considered and fitted separately in order to identify possible discontinuities in the growth of this proportion at specific ages. As expected in the French case, three retirement ages emerge as the most common, hence being the most effective cut points: ages 55 and 65, but mostly age 60, which corresponds to the legal threshold for a full-rate pension. Thus, while the proportion of pensioners is only about 45% of the sample at age 59, it amounts to more than 80% only a year later. Similar graphs specifically for men and women are available in Appendix 15 (Figure XI and Figure XII).
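As an illustration of how a figure such as Figure VI can be produced, the following minimal Stata sketch computes the share of retirees at each age and plots it with reference lines at the three candidate thresholds; the variable names (retired, age) are hypothetical placeholders for the indicators described above, not the survey's actual item codes.

    * Sketch: proportion of retirees by single year of age (hypothetical variable names)
    preserve
    collapse (mean) p_retired = retired, by(age)      // share of retirees at each age
    twoway (scatter p_retired age), xline(55 60 65) ///
        ytitle("Proportion of retirees") xtitle("Age")
    restore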
Empirical strategy
Biases
As evidenced in the literature, determining the effect of the retirement decision on retirees' health condition is not trivial. In fact, besides taking into account the natural deterioration rate of the health capital related to ageing, estimates are subject to biases due to the endogeneity of the relationship between health status and retirement. Thus, two major sources of endogeneity may be raised. The first is the existing two-way relationship between retirement and health status. In particular, the decision to retire taken by individuals depends on their initial health condition, leading to a health-related selection bias. The second is the unobserved factors influencing not only health status but also retirement. To the extent that individuals have different characteristics, notably in terms of subjective life expectancy, risk aversion preferences or disutility at work, then the estimates are at risk of being biased.
Identifying variables approach
Advantages
To address these methodological difficulties, we set up an identifying variable method, the objective being to determine the causal effect of the retirement decision on retirees' health condition. The identification strategy relies on the use of legal norms following which individuals undergo a change (the decision to retire) or not, norms therefore regarded as sources of exogeneity [START_REF] Coe | Retirement effects on health in Europe[END_REF]. The general idea of this method lies in the exploitation of discontinuities in the allocation of a treatment (the retirement decision) related to laws granting incentives to retire at a certain age. To the extent that a full-rate legal retirement age exists in France (60 years old for this study, before the implementation of the Fillon reform in 2010), we use this indicator as the identifying variable for the retirement process. However, it is noteworthy that age, and more importantly reaching a certain age, is not the only element predicting the retirement decision. Using a minimum age as a source of exogeneity, the instrumental variable method is relatively close to a Regression Discontinuity Design (RDD) on panel data, the major difference between instrumental variables and RDD being that the latter makes it possible to estimate different trends before and after reaching the threshold, which is not possible with a conventional instrumental variable method [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF]. Nevertheless, instrumental variables allow greater flexibility in estimation and do not focus exclusively on very short-term effects of retirement on health.
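For intuition only (the notation below is introduced for illustration and the actual estimates rely on the bivariate probit models described in the Estimation subsection), the identification logic can be summarised, in the linear case and under the usual relevance, exclusion and monotonicity assumptions, by the Wald ratio:

\[
\beta^{Wald} \;=\; \frac{\mathbb{E}\left[H_i \mid Age_i \geq 60\right] - \mathbb{E}\left[H_i \mid Age_i < 60\right]}{\mathbb{E}\left[R_i \mid Age_i \geq 60\right] - \mathbb{E}\left[R_i \mid Age_i < 60\right]}
\]

where H_i denotes a health outcome and R_i the retirement status: the jump in the health outcome at the threshold is rescaled by the jump in the probability of being retired.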
Hypotheses
The use of instrumental variable methods is based on two assumptions widely discussed in the literature. The first, called the relevance assumption, requires that the identifying variable be correlated with the endogenous variable. In our case, the identifying variable being the legal age of retirement at the full rate, it appears intrinsically relevant to explain the decision to retire.
The second, called the validity assumption, requires that the identifying variable be uncorrelated with the error term. To the extent that the legal age of retirement is decided at the state level and is not conditioned by health status, this hypothesis, although not directly testable, does not appear particularly worrying, especially considering that this empirical strategy is very widely used in the literature. It is also to be noted that reaching a particular age (for instance age 60) should not, in itself, generate a discontinuity in the age-related health status degradation trend.
Identifying variables
We consider, in the French context, three possible significant ages of retirement suggested by the legislation and by the data itself: age 55, 60 and 65. 55 is the first significant age inducing early retirements. Before the Fillon 2010 reform, age 60 is the legal age for a full pension and has the greatest discontinuity in the number of retirees. Finally, we also test age 65 to account for late retirement decisions. As evidenced in Figure VII below, 37% of retirees have done so precisely at age 60, 9% at 55 and 5% at age 65. Note that, for the rest of the paper, only the fact of being aged 60 and older will be used as an identifying variable except in some specific robustness checks.
Estimation
We consider first a simple specification relying on a binomial probit model, explaining health status in 2010 (the binary indicator H_{ij}, for health indicator j and individual i) by the self-declared retirement status (R_i), controlling the model by a vector of other explanatory variables (X_i):

H^*_{ij} = \alpha_j + \delta_j R_i + X_i'\beta_j + \varepsilon_{ij}, \qquad H_{ij} = \mathbf{1}\{H^*_{ij} > 0\} \qquad (1)

where H^*_{ij} is the latent health variable and \varepsilon_{ij} follows a standard normal distribution.
However, for the reasons mentioned above, this specification (1) does not appear satisfying enough to determine a causal effect of retirement on health status. This relationship is characterised by endogeneity biases related to reverse causality and unobserved heterogeneity.
Figure VII: Distribution of retirement ages

Formally, our identification strategy is then based on the fact that, even if reaching or exceeding a certain age does not fully determine the retirement status, it causes a discontinuity in the probability of being retired at that age. Therefore, in order to exploit this discontinuity, we also estimate the following equation (2):

R^*_i = \gamma + \theta Z^{a}_i + X_i'\lambda + u_i, \qquad R_i = \mathbf{1}\{R^*_i > 0\} \qquad (2)

The dummy variable Z^{a}_i takes the value 1 when individual i is at least a years old (a = 55, 60 or 65, depending on the threshold considered) and 0 otherwise.
Consequently, we simultaneously estimate the following system of two equations (3):

H^*_{ij} = \alpha_j + \delta_j R_i + X_i'\beta_j + \varepsilon_{ij}, \qquad H_{ij} = \mathbf{1}\{H^*_{ij} > 0\}
R^*_{i} = \gamma + \theta Z^{a}_i + X_i'\lambda + u_{i}, \qquad R_i = \mathbf{1}\{R^*_{i} > 0\} \qquad (3)
Empirically, to estimate this simultaneous two-equation system, we set up a bivariate probit model, estimated by maximum likelihood. The use of such models is justified by the fact that both the explained variable and the endogenous explanatory variable are binary indicators [START_REF] Lollivier | Économétrie avancée des variables qualitatives[END_REF]. This method is equivalent to conventional two-stage methods in the linear case. We simultaneously explain the probability of being retired and health status, the identifying variables Z^{a}_i allowing the model's identification. Bivariate probit models also assume that the error terms of the two equations follow a bivariate normal distribution, with a correlation \rho_j estimated jointly with the other parameters (4):

(\varepsilon_{ij}, u_i)' \sim \mathcal{N}\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho_j \\ \rho_j & 1 \end{pmatrix} \right) \qquad (4)

These identifying variables take the form of dummies, taking the value 1 if individual i is at least a years old and 0 otherwise, the threshold a depending on the legal retirement age considered. Taking the example of the full-rate age of retirement (60), the corresponding identifying variable takes the value 1 if individual i is aged 60 or over, and 0 otherwise (the other thresholds, 55 and 65, are defined in the same manner).
Regarding our variable of interest, we use a question specifying the current occupational status at the time of the 2010 survey, and build a dummy variable equal to 1 if the individual reported being retired or pre-retired at this date and 0 otherwise.
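As a purely illustrative sketch (the variable names are hypothetical and do not correspond to the survey's actual item codes; the occupational status variable is assumed to be a string here), the identifying dummies and the retirement indicator described above could be constructed as follows:

    * Identifying variables: 1 if at least a years old, 0 otherwise (hypothetical names)
    gen byte age55 = (age >= 55) if !missing(age)
    gen byte age60 = (age >= 60) if !missing(age)
    gen byte age65 = (age >= 65) if !missing(age)

    * Variable of interest: retired or pre-retired at the time of the 2010 survey
    gen byte retired = inlist(occup_status_2010, "retired", "pre-retired")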
We control all our results by sex, age and age squared (age plays an important role in determining health status, and this role is not necessarily linear throughout life), educational level in three dummies (more educated individuals are generally better protected in terms of health status than the less educated), having had at least one child, and activity sector (public, private or self-employed, when applicable), as it is likely that some sectors are more protective than others. Relying on the retrospective part of the data, we include an indicator for having spent the majority of the career in long-term jobs of more than 5 years and an indicator for career fragmentation (these are especially important because of their influence not only on health status but also on the age of retirement). We are also able to reconstruct, year by year, the professional path (including working conditions) of individuals from the end of their initial studies to the end of their career. Exposures to physical and psychosocial working conditions during the whole career (having been exposed 20 years to single strains or 10 years to multiple simultaneous strains of the same type) are thus accounted for. The hypothesis behind this is that individuals having faced such strains at work should be even more relieved by retirement, hence inducing heterogeneity in the effect of retirement on health status. (The data management has been done using SAS 9.4; the econometric strategy is implemented in Stata 11, using the "probit" and "biprobit" commands for the main results, as well as the "ivreg2" package for the linear probability models used as robustness checks.)
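To make the estimation step concrete, here is a minimal sketch of one of the bivariate probit models (here for activity limitations), in the spirit of the biprobit command mentioned above; all variable names are hypothetical placeholders for the controls just listed, and one such model is estimated per health indicator.

    * Bivariate probit (health equation and retirement equation) estimated jointly by ML
    * The age-60 threshold enters the retirement equation only (identifying variable)
    biprobit (limitations = retired male age age_sq educ_mid educ_high haschild    ///
                  public private longjobs fragmented expo_phys expo_psy)           ///
             (retired     = age60   male age age_sq educ_mid educ_high haschild    ///
                  public private longjobs fragmented expo_phys expo_psy),          ///
             vce(robust)

After estimation, the test of rho = 0 reported by biprobit indicates whether correcting for the endogeneity of retirement actually matters for the health equation.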
The potential mechanisms explaining the role of retirement on health status will be assessed by daily social activities (associations, volunteering, unions, political, religious or artistic activities), physical activity and health-related risky behaviours (tobacco, alcohol and BMI).
Results
Main results
Table 14 below presents the econometric results for the five health indicators first displaying naive univariate probit models and then bivariate probit models accounting for endogeneity biases using the legal age of retirement at full rate (60) as source of exogeneity. The models for the probability to be retired (first step) are available in Table 43 (Appendix 19).
Naive univariate models indicate, whatever the health indicator considered, no effect of retirement on health status whatsoever. Yet, many expected results can be found: the deleterious effect of ageing (except for chronic diseases and anxiety disorders), and a powerful protective effect of the level of education and of being self-employed. Having spent the majority of one's career in long-term jobs and having experienced a stable career path also play an important role. Exposures to detrimental working conditions during the whole career have an extremely strong influence on health, with higher impacts of physical constraints on perceived health status and activity limitations, and larger amplitudes of psychosocial risk factors on anxiety disorders and depressive episodes. Finally, being a man appears to be very protective when considering anxiety disorders and depressive episodes. In the auxiliary models (Table 43, Appendix 19), the expected sharp increase at age 60 in the probability of being retired can also be noted. However, being self-employed seems to greatly reduce the probability of being retired ( ). Finally, having been exposed to physical strains at work also appears to accelerate the retirement process ( ).
Comparing the results of the bivariate probit models with their univariate equivalents (the latter assuming no correlation between the residuals of the two models), the results are fairly consistent for all variables, but the role of retirement in the determination of health status changes dramatically between univariate and bivariate models.
Heterogeneity
This mean impact of retirement on health status is bound to be heterogeneous, notably according to sex (men and women have different types of career and declarative patterns), education levels (because of the protective role of education in terms of career and health outcomes) and more importantly past exposures to detrimental working conditions (retirement seen as a relief from possibly harmful jobs). We can therefore test these assumptions by seeking for heterogeneity in the effect by sex (Table 15 andTable 16), by education levels (Table 17 andTable 18) and possible past exposures to physically (Table 19 andTable 20) or psychosocially (Table 21 andTable 22) demanding jobs. The models have also been conducted on a subsample excluding civil servants (Appendix 20, Table 44 andTable 45). All the following models make use of the fact of being aged 60 or older as a source of exogeneity.
Sex
Because the determinants of men's and women's health status and career outcomes may differ, and because self-reported health suffers from social reporting heterogeneity [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF][START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF], it is first interesting to assess the possible heterogeneity of the effect of retirement on health status according to sex. The results are hence stratified by sex (results for men are presented in Table 15 and for women in Table 16 below). In the male population, retirement reduces the probability of declaring activity limitations, generalized anxiety disorders and major depressive episodes. No significant effect appears on self-assessed health and chronic diseases. Among women, retirement only seems favourable for GAD and MDE. In terms of magnitude, retirement decreases the probability of activity limitations and GAD by and of MDE by in men, while in women the decrease in GAD and MDE is of respectively and .
Education
We then stratify our sample according to the level of education: on the one hand, we consider individuals with a primary or secondary education level (Table 17) and, on the other hand, those who reached a level at least equivalent to the French baccalaureat (Table 18). It is to be noted that the sample sizes of the two populations are fairly different (resp. 3,045 and 1,497 individuals for the less and more educated). In the less educated population, retirement seems beneficial in terms of daily activity limitations ( on the probability to declare activity limitations), GAD ( ) and MDE ( ). In the more educated sample, the role of retirement is noticeable on chronic diseases ( ) and even more important for mental health (resp. and for GAD and MDE). Other changes in the determinants of health status are noticeable between these two populations: having been in long-term jobs, as well as physical and psychosocial working conditions during the career, exhibit massive impacts on health status in 2010 in the less educated population, while this is not as much the case in the more educated sample.
Past work strains
The beneficial effects of retirement on health status are often explained by the fact that retirement, seen as no longer working, is considered as a relief from jobs with hard working conditions. Here we test the hypothesis according to which retirement is even more beneficial to health if retirees were originally employed in harmful jobs. We stratify the sample according to high and low physical exposures (Table 19 and Table 20) and high and low psychosocial exposures (Table 21 and Table 22) during the whole career. Again, despite precision losses related to sample sizes, the most psychosocially exposed individuals during their career experience massive improvements in all aspects of their health status (resp. , , , and for self-assessed health, chronic diseases, activity limitations, GAD and MDE). In the less exposed individuals, only GAD ( ) and MDE ( ) are affected. The massive impacts in the psychosocial subgroup, specifically on self-assessed health and mental health indicators, can be explained by the relief from a very stressful work life. The impact on chronic diseases most likely reflects the role of retirement on long-term mental health deterioration.
Civil servants
Because civil servants (who are included in our sample) are likely to be specific in terms of retirement requirements, we test whether or not the results vary if we only consider individuals who are/were not civil servants (it is impossible to run the regressions on civil servants only, because of sample sizes). The results indicate no major changes, and the effect of retirement on health status is confirmed by these regressions (Appendix 20, Table 44 andTable 45).
Mechanisms
We investigate several possible reasons (mechanisms) why retirement appears to have such a positive impact on retirees' health. In section 5.3.1, we consider the effects of retirement on daily activities and then, in section 5.3.2, on health-related risky behaviours. All the following models make use of the fact of being aged 60 or older as a source of exogeneity.
Daily activities
Retirement has a positive role on the probability of having daily social activities as well as on the probability of having physical activities ( ), which is in line with the literature [START_REF] Eibich | Understanding the effect of retirement on health: Mechanisms and heterogeneity[END_REF] (Table 23). Even though it is not possible to say for sure that this causally explains why retirees have a better health condition, daily social activities and sport are bound to be correlated with better health status and well-being (Ho, 2016;[START_REF] Ku | Leisure-Time Physical Activity, Sedentary Behaviors and Subjective Well-Being in Older Adults: An Eight-Year Longitudinal Research[END_REF][START_REF] Sarma | The Effect of Leisure-Time Physical Activity on Obesity, Diabetes, High BP and Heart Disease Among Canadians: Evidence from 2000/2001 to 2005/2006: THE EFFECT OF LTPA ON OBESITY[END_REF].
Health-related risky behaviours
Retiring is also found to decrease the probability of smoking ( ), which is in line with a general health status improvement and makes sense because of the relief retirement generates from the stress of work life, for instance. Yet, most likely because of the increase in spare time, and despite the fact that retirees do sport more often, they are also more numerous to have a risky alcohol consumption ( ) and to be overweight ( ) (Table 24). These results are congruent with the literature, which notably shows that quitting smoking involves higher BMI levels [START_REF] Courtemanche | The Effect of Smoking on Obesity: Evidence from a Randomized Trial[END_REF], just like the fact of retiring [START_REF] Godard | Gaining weight through retirement? Results from the SHARE survey[END_REF].
Robustness checks
We first re-estimate the bivariate probit models, this time including the three age thresholds (55, 60 and 65) in the retirement equations. The main results are unchanged, and the auxiliary models show no effect of the 55-year threshold, while a strong effect can be found for the 60 and 65 thresholds, potentially rendering both useful as identifying variables (Appendix 21, Table 46 and Table 47).
We then put our results to the test of linear probability models (LPM), estimated by the generalized method of moments (GMM) with heteroscedasticity-robust standard errors, in order to take advantage of the possibility of using our two relevant identifying variables (the 60 and 65 year-old thresholds) in additional tests. This type of modelling allows for several specification tests, as well as for a better handling of unobserved heterogeneity [START_REF] Angrist | Mostly harmless econometrics: an empiricist's companion[END_REF]. It also allows relaxing the hypothesis that the residuals follow a bivariate normal distribution (which is the case of bivariate probits). The results of the models (Appendix 21, Table 48) are robust to LPM modelling. The same holds for the auxiliary retirement models, which are also stable (Appendix 21, Table 49). We performed Sargan-Hansen tests for over-identification, which show that the null hypothesis of correctly excluded instruments is never rejected in our case. Moreover, the Kleibergen-Paap test statistics are consistently well above the arbitrary critical value of 10, indicating that, unsurprisingly, our instruments are relevant to explain the retirement decision.
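A hedged sketch of the corresponding linear probability model, estimated by two-step GMM with the two relevant thresholds as excluded instruments, is given below (variable names again hypothetical); with the gmm2s and robust options, ivreg2 reports the Hansen J over-identification statistic and the Kleibergen-Paap weak-identification statistic referred to above.

    * LPM by two-step GMM; instruments: age-60 and age-65 thresholds
    ivreg2 limitations male age age_sq educ_mid educ_high haschild public private ///
            longjobs fragmented expo_phys expo_psy                                ///
            (retired = age60 age65), gmm2s robust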
Finally, we test whether the results hold when not controlling for several potentially endogenous covariates related to the professional career. The results appear robust to this new specification, indicating that the effect of retirement was not driven by endogenous relationships with such variables (Appendix 21, Table 50 and Table 51).
Discussion
This study measures the causal effect of retirement on health status by mobilizing an econometric strategy that takes into account the endogenous nature of the retirement-health relationship (via instrumental variables) and retrospective panel data on individual careers. We find that retirement has an average positive effect on activity limitations, GAD and MDE after controlling for reverse causality and unobserved heterogeneity. No significant effect can be found on self-assessed health and chronic diseases. This is also the case in the male population, while in women retirement benefits appear only on GAD and MDE and no effect is measured on physical health status. These results are particularly strong for the less educated and for the individuals most exposed to physical and psychosocial working conditions during their career, while also partly holding, to a lesser extent, for the rest of the population. We also find that this positive effect on health status might be explained by a greater ability for retirees to have more social and physical daily activities and by a lower tobacco consumption (even though we cannot be certain of the causal relationship between these mechanisms and health status in our study). Yet, retirees are also found to be significantly more at risk regarding alcohol consumption and overweight. To our knowledge, this is the first study to give insights on the average effect of retirement on the whole population in France and on the mechanisms which could explain its health effects, as well as describing heterogeneous impacts according to sex, education level and past exposures to two types of working conditions during the entire career, while addressing the endogeneity biases inherent to this type of study.
Yet, several limitations can be noted. As we do not rely on panel data per se, we do not have the possibility to account systematically for individual unobserved heterogeneity. Even though this should not matter because of our instrumental variables framework, panel data would have enabled RDD methods allowing the implementation of differentiated trends left and right of the thresholds, at the cost of temporal distance and sample sizes. Also, in the case of unobserved characteristics correlated with both the probability to be retired and health status, an endogeneity issue cannot be excluded, which can render our identification strategy doubtful in that respect. Another main limit lies in the fact that we cannot determine if the mean effect of retirement on health status differs according to the distance with the retirement shock. We do not know, because of our data, if this effect is majorly led by short-, mid-or long-run consequences, neither can we determine if the impact on health status happens right after retirement or in a lagged fashion. There are also several missing variables, such as the professional status before retirement and standards of living as well as elements related to retirement reforms. It is also to be noted that comparisons between stratified samples are complicated because the results hold on different samples.
Some perspectives also remain to be tested. An initial selection of the sample taking into account the fact that individuals have worked during their careers or even a selection of individuals who have worked after reaching 50 would probably grant a greater homogeneity in the sample. Finally, the potentiality of some individuals being impacted by pension reforms will be assessed and further robustness checks accounting for this possibility will be conducted if necessary.
General conclusion

1. Main results
Because of its temporal approach, the main findings of this Ph.D. Dissertation can be summed-up in terms of occupational and health cycles.
Starting from the beginning of the work life, this Ph.D. Dissertation finds that early exposures to detrimental working conditions are related to higher numbers of chronic diseases in exposed men and women (Chapter 2). Based on a career-long temporal horizon for both physical and psychosocial exposures and health status, major differences in health condition between the most and least exposed workers are indeed found. Workers facing gradually increasing strains, in terms of duration or simultaneity of exposure, more frequently cope with rising numbers of chronic diseases, whether physical or mental conditions. Even though these workers are supposedly more resilient to such strains, being exposed during the first part of their career, noticeable health status degradations are visible. Accounting for baseline characteristics, including important childhood events, this result is robust to selection processes into a job and to unobserved heterogeneity. In physically exposed men, around of chronic diseases can be explained by gradually increasing levels of exposure. Exposures to psychosocial strains account for of them. In women, increasing physical (resp. psychosocial) exposures explain between and (resp. ) of their number of chronic diseases after exposure. As a consequence, women (while not being the most exposed) are found to experience the most degrading effect of such exposures.
In parallel, workers may experience health shocks during their career, which can deteriorate their capacity to remain in their job. Notably, mental health conditions such as depressive episodes or anxiety disorders appear as strong explanatory factors of this capacity (Chapter 1). After accounting for socioeconomic characteristics, employment, general health status, risky behaviours and, most importantly, the professional career, suffering from common mental disorders induces a decrease of up to in the probability of remaining in employment four years later for men at work in 2006. In the female population, no such effect can be found, as general health status remains predominant in explaining their trajectory on the labour market. This result is in line with the literature on the employability of individuals facing mental health conditions in the general population, but provides insights into the capacity of ill workers to remain in employment. Considering depressive episodes and anxiety disorders separately suggests that the disabling nature of mental health goes through both indicators. In addition, the accumulation of mental disorders increases the risk of leaving employment during the period ( for men facing both disorders compared to for those only facing one of the two). These findings imply that individuals facing such impairments are more likely to experience more fragmented careers.
As a consequence, retirement's role on health status differs according to the nature of past circumstances, notably those related to initial human capital and job characteristics. Retirement is indeed found to be beneficial for individuals' physical and mental health status overall, with disparities depending notably on the nature of the career. Accounting for reverse causality and unobserved heterogeneity, retirement decreases the probability of declaring activity limitations ( ), anxiety disorders ( ) and depressive episodes ( ), while no significant effect can be found on self-assessed health and chronic diseases in men. In women, retirement benefits appear only on mental health outcomes (resp. in anxiety and in depression). Heterogeneity in this global effect is found, indicating a particularly strong relationship for the less educated and for the individuals most exposed to physical and psychosocial working conditions during their career, while also partly holding, to a lesser extent, for the rest of the population. As far as explanatory mechanisms go, a greater ability for retirees to have more social and physical daily activities ( ) and a lower tobacco consumption ( ) are likely to generate these positive health outcomes. Yet, retirees are also found to be significantly more at risk regarding alcohol consumption ( ) and overweight ( ).
Limitations and research perspectives
Every chapter of this dissertation relies on survey data. All chapters make use of the French panel data of the Santé et Itinéraire Professionnel survey (Sip). Moreover they all rely, at least partly, on retrospective information (i.e. information from the past gathered at the time of the survey, possibly much later). Thus, because of the nature of the data, biases in declarative behaviours and memory flaws cannot be excluded. It is indeed possible that, depending on some characteristics, individuals might answer a given question differently even
if the objective answer were the same [START_REF] Devaux | Hétérogénéité sociale de déclaration de l'état de santé et mesure des inégalités de santé[END_REF][START_REF] Shmueli | Socio-economic and demographic variation in health and in its measures: the issue of reporting heterogeneity[END_REF]. Apart from that, a posteriori justifications or rationalisations are also likely to generate misreporting and measurement errors [START_REF] Gannon | The influence of economic incentives on reported disability status[END_REF][START_REF] Lindeboom | Health and work of the elderly: subjective health measures, reporting errors and endogeneity in the relationship between health and work[END_REF]. Also, the indicators used in this dissertation are, for the most part, more or less subjective measures. Health status indicators like self-assessed health, chronic diseases, activity limitations, generalized anxiety disorders and major depressive episodes are all, to a certain extent, subjective measurements of health conditions. Yet, it is to be noted that these indicators appear to be reliable and valid to assess individuals' health status and are standard and widely used. Self-assessed health is notoriously correlated with life expectancy [START_REF] Idler | Self-Rated Health and Mortality: A Review of Twenty-Seven Community Studies[END_REF], and anxiety disorders and depressive episodes are consolidated measures coming from the Mini International Neuropsychiatric Interview (MINI). The working conditions indicators are also self-declared, but more subjective indicators better succeed in embracing the whole picture of work strains. Also, while it is understandable that the legislator seeks objectivity in a context of potential compensation, the subjective feelings behind objective strains appear much more relevant when trying to assess the role of these strains on health status.
Some research perspectives for Chapter 1 are possible. Results suggest very different types of impact of mental health on job retention. It would be interesting to be able to disentangle the mechanisms behind these differences. They may partly be explained by differences in social norms related to the perception of mental disorders and employability, as well as by differences in the severity of diseases. As is, it is not possible to assess such social norms or the severity of the disease. A mental health score would most likely allow for it, as well as providing a more stable indicator for mental health (as it is apparent that the amplitude of the results depends a lot on the retained definition of mental health). The results are also conditioned by the fact that the 2006-2010 period is particular in terms of economic conjuncture, asking the question of the external validity of the results. Obviously, clarifying the exact role of the economic crisis in the relationship we observe in this Chapter would allow for more detailed interpretations.
Chapter 2 may also benefit from some extensions. It would first be interesting to test potentially heterogeneous effects of working conditions on health, depending on the timing of exposure. If there is already a noticeable effect of early exposure, when individuals are more resilient to these strains, it is definitely possible that exposure of older workers would imply even greater health disparities. Yet, this hypothesis needs to be tested empirically. Another interesting topic would be to disentangle what is induced by exposures themselves from what is implied by health-related behaviours [START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF]. Exposed workers may have specific behaviours in terms of tobacco or alcohol consumption, for instance, or some specific features in terms of healthcare use that would be correlated with their exposures and health status. Finally, detailed work on the sources of heterogeneity in the effect, in terms of demographic and socioeconomic characteristics, seems important.
Research perspectives for Chapter 3 include specific work to determine whether the average effect of retirement on health status differs according to the distance from the retirement shock. Is the effect mainly driven by short-, mid- or long-run consequences? Does the impact on health status happen right after retirement or in a lagged fashion? Another question that will need to be answered is whether or not the effect of retirement on health status differs depending on the retirement profile, i.e. whether the individual retires early or late. It is indeed possible that the effect might be stronger in workers retiring early (because of more detrimental exposures to work strains during their career), or stronger in workers retiring late (notably because of longer exposures). This specific dilemma would be interesting to test.
Policy implications
Some recommendations can be suggested, based on this work.
First, because their incapacitating nature is lower than that of more severe mental health disorders, depressive episodes and anxiety disorders have generally received less attention from policy makers. Yet, these disorders are more widespread (6% of men and 12% of women suffer from at least one of these conditions in France, according to our data), and their detrimental role on the capacity of workers to remain in employment seems verified, at least in the male population. With the onset of depression or anxiety, the probability of male workers remaining in employment within a timespan of four years is significantly decreased. Hence, policies should account for such conditions and increase support for workers facing them in the workplace. Policies focusing on adapting the workplace to the needs of these ill workers and on making it easier for them to find a job are most likely the two most relevant kinds of frameworks that could help in reducing the role of their disease on their career outcomes; firms could also be encouraged to adapt work organisation and hierarchical practices in order to promote mental health. Then, because of the timing of exposure (usually starting early in the career) and considering the long-lasting detrimental effects on health status (onset of chronic diseases), a greater emphasis may need to be put on preventive measures, such as health and safety promotion at work and the design of a more health-preserving workplace, instead of curative frameworks. Overall, being able to better quantify the long-term health costs of strenuous jobs points to a necessary change from the currently dominant curative schemes to preventive measures starting from the very design of the workplace. The European Commission (1989) states that "work shall be adapted to individuals and not individuals to work", and has insisted since then on the concept of work sustainability (EU strategy 2007-2012, European Commission 2007).
In a context of a general push-back of legal retirement ages, due to deficits in pension systems induced, notably, by constant increases in life expectancy, the question of the role of retirement in the determination of health status is crucial. Chapter 3 demonstrates a clear positive impact of retirement on general and mental health, both for men and women, but with variations across sex, education and levels of exposure to detrimental working conditions. It appears that retirement bears even more beneficial effects for the less educated and for workers more exposed during their career, especially to psychosocial strains. Postponing retirement then seems all the more risky since, as it is, retirement in general appears as the main tool relieving workers from their potentially poor working conditions. In that sense, postponing legal retirement ages may not be successful in balancing pension systems, simply because these reforms have consequences in terms of health status at older ages, and also because exposed workers may not be able to remain at work until these higher thresholds (a hypothesis quite possibly at least partly verified by the existing low levels of employability of senior workers). Extensions of the contribution period or the reversibility of the retiree's status (increasingly desired in Europe in recent years - [START_REF] Barnay | Health, work and working conditions: a review of the European economic literature[END_REF]) should be accompanied by preventive measures against work strains during the career (which is in line with the conclusions of Chapters 2 and 3), or at least by differentiated retirement schemes depending on the nature and intensity of the entire work life of pensioners. Because retirement generally seems to promote healthier behaviours thanks to the increase in available free time, yet also suggests an increase in alcohol consumption and overweight, information campaigns and specific incentives targeted at retirees could be introduced.
Appendices

Appendix 1: Major Depressive Episodes (MDE)
The MDE are identified in two stages. First, two questions making use of filters are asked. An MDE is then identified in either of the following cases:
- a positive response is given to one of the two filter questions and at least four symptoms are listed;
- positive responses are given to both filter questions and at least three symptoms are listed.
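For illustration only, and under the reading of the criteria given above, the MDE indicator could be coded as follows (filter1, filter2 and nsympt are hypothetical names for the two filter questions and the number of listed symptoms):

    * MDE dummy following the two-stage algorithm described above (sketch)
    gen byte mde = ((filter1 == 1 | filter2 == 1) & nsympt >= 4) | ///
                   (filter1 == 1 & filter2 == 1  & nsympt >= 3)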
List of figures

Figure I: Summary of Work-Health relationships in the Ph.D. Dissertation
Figure II: Prevalence of health problems in the population in employment in 2006
Figure III: Employment rates in 2010 according to self-reported health status in 2006
Figure IV: General health status of anxious and/or depressed individuals in 2006
Figure V: Configuration of working conditions and chronic diseases periods
Figure VI: Proportion of retirees in the sample according to age
Figure VII: Distribution of retirement ages
Figure VIII: Common trend assumption test - Physical sample ( )
Figure IX: Common trend assumption test - Psychosocial sample ( )
Figure X: Common trend assumption test - Global sample ( )
Figure XI: Proportion of retirees in the male sample, according to age
Figure XII: Proportion of retirees in the female sample, according to age

List of tables

Table 1: Estimated probability of employment in 2010, male population
Table 2: Estimated probability of employment in 2010, female population
Table 3: Estimation of mental health in 2006
Table 4: Impact of mental health in 2006 on employment in 2010 according to various measures, men and women
Table 5: Estimated probability of employment (binary variable 2007-2010)
Table 6: Thresholds description
Table 7: Base sample description ( )
Table 8: Working conditions and chronic diseases description ( )
Table 9: Matched sample description ( )
Table 10: Matched difference-in-differences results ( to ), physical treatment
Table 11: Matched difference-in-differences results ( to ), psychosocial treatment
Table 12: Matched difference-in-differences results ( to ), global treatment
Table 13: General descriptive statistics
Table 14: Retirement and health status
Table 15: Heterogeneity analysis - Male population
Table 16: Heterogeneity analysis - Female population
… vs. attrition population according to mental health and employment status in 2006
Table 29: General descriptive statistics
Table 30: Employment status in 2006, according to mental health condition
Table 31: Mental health status in 2010 of individuals in employment and reporting mental health disorders in 2006
Table 32: Correlations of identifying variables (men)
Table 33: Correlations of identifying variables (women)
Table 34: Mental health estimations in 2006
Table 35: Unmatched difference-in-differences results ( to ), physical treatment
Table 36: Unmatched difference-in-differences results ( to ), psychosocial treatment
Table 37: Unmatched difference-in-differences results ( to ), global treatment
Table 38: Specification test - Matched Diff.-in-Diff. vs. Matched Ordinary Least Squares - Physical, psychosocial and global treatments ( ) - Matched
Table 39: Thresholds tests - Normal treatment vs. Single exposures only vs. Poly-exposures only - Physical, psychosocial and global treatments ( ) - Matched
Table 40: Wage and risky behaviours in 2006 - Unmatched and matched samples
Table 41: Gender and working conditions typologies, per activity sector in 2006
Table 42: Working conditions typology, by gender in 2006
Table 43: Auxiliary models of the probability of being retired
Table 44: Retirement and health status - No civil servants
Table 45: Auxiliary models of the probability of being retired - No civil servants
Table 46: Tests with three instruments (age 55, 60 and 65)
Figure I: Summary of Work-Health relationships in the Ph.D. Dissertation
2.1. The Santé et Itinéraire Professionnel survey
The Santé et Itinéraire Professionnel (Sip) survey used in this study provides access to a particularly detailed individual description. Besides the usual socioeconomic variables (age, sex, activity sector, professional category, educational level, marital status), specific items are provided about physical and mental health. The survey was conducted jointly by the French Ministries in charge of Healthcare and Labour and includes two waves (2006 and 2010), conducted on the same sample of people aged 20-74 living in private households in metropolitan France. The 2010 wave was granted an extension to better assess psychosocial risk factors. Two questionnaires are available: the first one is administered by an interviewer and accurately documents the individual and job characteristics and the current health status of the respondents. It also contains a biographical life grid used to reconstruct individual careers and life events: childhood, education, health, career changes, working conditions and significant life events. The second one is a self-administered questionnaire targeting risky health behaviours (weight, cigarette and alcohol consumption). It documents current or past tobacco and alcohol consumption (frequency, duration, etc.). A total of 13,648 people were interviewed in 2006, and 11,016 of them again in 2010. In this study, we focus on the people who responded to the survey both in 2006 and 2010, i.e. 11,016 people. We select individuals aged 30-55 years in employment in 2006 to avoid including students (see Appendix 3 and Appendix 4 for a discussion of the initial selection made on the sample in 2006 and a note on attrition between the two waves). The final sample thus consists of 4,133 individuals, including 2,004 men and 2,129 women.

2.2. Descriptive statistics
2.2.1. Health status of the employed population in 2006
To broadly understand mental health, we use major depressive episodes (MDE) and generalized anxiety disorder (GAD), taken from the Mini International Neuropsychiatric Interview (MINI), based on the Diagnostic and Statistical Manual of Mental disorders (DSM-IV). These indicators prove particularly robust in the Sip survey (see Appendix 5). Around 6% of men and 12% of women in employment in 2006 report having at least one mental disorder (Figure II).
services), belonging to the private or public sectors (vs. self-employed) and part time work. It is interesting to note that within this selected population (i.e. in employment in 2006), professional categories have no role on employment trajectory between 2006 and 2010. In men, being 50 and over in 2006, the lack of education, celibacy and professional category (blue collars are most likely to leave the labour market) are all significant factors of poor labour market performance. The only common denominator between men and women appears to be the role of mental health and age.
professional careers. A decrease of up to in the probability of remaining in employment 4 years later can be observed for men at work in 2006. In the female population, general health status remains predominant in explaining their trajectory on the labour market. Our results, in line with those of the literature, provide original perspectives on French data about the capacity of mentally impaired workers to keep their jobs. Considering MDE and GAD separately suggests that the disabling nature of mental health operates through both indicators. In addition, the accumulation of mental disorders (MDE and GAD) greatly increases the risk of leaving employment during the period ( for men facing both disorders compared to for those facing only one of the two). These results are also supported by specific estimations over the 2007-2010 period, which partly account for the events occurring between 2006 and 2010.
Psychiatry and Mental Health Plan 2011-2015 affirms the importance of job stress prevention and of measures enabling easier job retention and return to work for people with mental disorders. Following this first step, several extensions could be appropriate. First, an important weakness in our identification strategy remains possible. The identifying variables used may indeed be correlated with unobservable characteristics, such as instability or a lack of self-confidence, which are also related to outcomes on the labour market. This could cast doubt on the exogeneity assumption underlying the relationship. If such characteristics are components or consequences of our mental health indicators, this should not be problematic, as their effect would transit entirely through the latter. Yet we cannot exclude that at least part of the variance induced by these unobservable characteristics is directly related to employment, regardless of our mental health indicators. Our results demonstrate a different impact of mental health on job retention for men and women. This difference may partly result from selection related to mental health and employment in 2006, differing by sex. It can also be explained by differences in social norms related to the perception of mental disorders and employability, by differences in disease severity and by differentiated paths during the 2006-2010 period (as suggested by the health status trajectories for individuals in employment and ill in 2006 -see Table
Siegrist's models tend to study the results of combined exposures to several simultaneous work stressors (job strain and iso-strain).[START_REF] De Jonge | Job strain, effort-reward imbalance and employee well-being: a large-scale cross-sectional study[END_REF] show the independent and cumulative effects of both types of models. On the matter of cumulative exposures,[START_REF] Amick | Relationship of job strain and iso-strain to health status in a cohort of women in the United States[END_REF] demonstrate, based on longitudinal data, that chronic exposure to low job control is related to higher mortality in women. The study of[START_REF] Fletcher | Cumulative effects of job characteristics on health[END_REF] uses panel data and analyses the role of cumulative physical and environmental exposures over five years (from 1993 to 1997) while controlling for initial health status and health-related selection. This study is very likely the closest paper in the literature to the present one. They aggregate several physical and environmental working-conditions indicators into composite scores, which they then sum over five years. They find clear impacts of these indicators on both men and women, with variations depending on demographic subgroups. This work expands on that study, notably by considering exposures to both physical and psychosocial risk factors as well as by taking into account exposures occurring throughout the whole career (it is easy to imagine that larger health effects may occur in cases of longer exposures). I also include the possibility of accounting for simultaneous exposures.
Figure V: Configuration of working conditions and chronic diseases periods
3.1. The Santé et Itinéraire Professionnel (Sip) survey

I use data from the French Health and Professional Path survey (Santé et Itinéraire Professionnel - Sip), designed jointly by the statistical departments of the two French ministries in charge of Health and Labour. The panel is composed of two waves (2006 and 2010). Two questionnaires are proposed: the first is administered directly by an interviewer and investigates individual characteristics, health and employment statuses. It also contains a life grid, which allows reconstructing biographies of individuals' lives: childhood, education, health, career and working conditions, as well as major life events. The second is self-administered and focuses on more sensitive information such as health-related risky behaviours (weight, alcohol and tobacco consumption). Overall, more than 13,000 individuals were interviewed in 2006 and 11,000 in 2010, making this panel survey representative of the French population.

I make specific use of the biographic dimension of the 2006 survey by reconstructing workers' career and health events yearly. I am therefore able to know each individual's employment status, working conditions and chronic diseases every year from childhood to the date of the survey (2006). As far as work strains are concerned, the survey provides information about ten indicators of exposure, and the intensity of exposure to these work strains is also known. Individuals' health statuses are assessed through their declarations of chronic diseases, for which onset and end dates are available.
These individual annual indicators are used to assess exposure to detrimental work strains, and I group them into three relevant categories (a sketch of this yearly aggregation is given below). The first represents the physical load of work and includes night work, repetitive work, physical load and exposure to toxic materials.
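As an illustration of this yearly aggregation, the minimal sketch below (in Python, with purely hypothetical variable names, since the Sip variable names are not reproduced here) builds annual single- and poly-exposure flags for the physical category from a person-year panel and cumulates them over the career.

```python
import pandas as pd

# Hypothetical person-year panel reconstructed from the life grid;
# column names are illustrative, not the survey's actual variable names.
panel = pd.DataFrame({
    "person_id":  [1, 1, 1, 2, 2],
    "year":       [1980, 1981, 1982, 1990, 1991],
    "night_work": [1, 1, 0, 0, 0],
    "repetitive": [0, 1, 1, 0, 0],
    "phys_load":  [1, 1, 1, 0, 1],
    "toxic":      [0, 0, 0, 0, 0],
})

PHYSICAL = ["night_work", "repetitive", "phys_load", "toxic"]

# Number of physical strains reported for each year of the career
panel["n_phys"] = panel[PHYSICAL].sum(axis=1)

# Annual flags: at least one strain (single exposure) / two or more (poly-exposure)
panel["single_phys"] = (panel["n_phys"] >= 1).astype(int)
panel["poly_phys"] = (panel["n_phys"] >= 2).astype(int)

# Years of exposure cumulated over the observed career, per individual
career = panel.groupby("person_id")[["single_phys", "poly_phys"]].sum()
print(career)
```

The same construction can be repeated for the psychosocial and global categories by swapping the list of indicator columns.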
Field: Population aged 42-74 in 2006 and present from to . Matched (weighted) sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
Figure VIII, Figure IX and Figure X (Appendix 10), respectively, present the chronic disease trends for the treated and control groups in the matched physical, psychosocial and global samples.
6.3. Single vs. simultaneous exposures

I tested the relevance of the differentiation made between single and multiple exposures in the three working-condition treatments, i.e., the relevance of considering that a given number of years of single exposure is equivalent to half that number of years of poly-exposure (inspired by the French legislation - Sirugue et al., 2015). Table 39 (Appendix 12) presents several results. The first two columns indicate, for , the results obtained with a treatment considering 16 years of single exposures or 8 years of poly-exposures (which are the main results presented in this paper). The next two columns indicate the results when the treatment accounts only for 16 years of single exposures. The last two columns present the results for a treatment considering only 8 years of poly-exposures. A minimal sketch of this treatment rule and of the matched difference-in-differences computation is given below.
The Santé et Itinéraire Professionnel (Sip) survey used in this study provides access to particularly detailed individual descriptions. Besides the usual socioeconomic variables (age, sex, activity sector, professional category, educational level, marital status), specific items are provided about physical and mental health. The survey was designed jointly by the French Ministries in charge of Healthcare and Labour and includes two waves (2006 and 2010), conducted on the same sample of people aged 20-74 living in private households in metropolitan France. The 2010 wave was granted an extension to better assess psychosocial risk factors. Two questionnaires are available: the first is administered by an interviewer and accurately documents the individual and job characteristics and the current health status of the respondents. It also contains a biographical life grid used to reconstruct individual careers and life events: childhood, education, health, career changes, working conditions and significant life events. The second is a self-administered questionnaire targeting risky health behaviours (weight, cigarette and alcohol consumption), notably reporting current or past tobacco and alcohol consumption (frequency, duration, etc.). A total of 13,648 people were interviewed in 2006, and 11,016 of them again in 2010.

We make use of the biographic dimension of the 2006 survey by reconstructing workers' careers yearly. We are therefore able to know, for each individual, his/her employment status and working conditions every year from childhood to the date of the survey (2006). As far as work strains are concerned, the survey provides information about ten indicators of exposure: night work, repetitive work, physical load and exposure to toxic materials, full skill usage, work under pressure, tensions with the public, reward, conciliation between work and family life, and relationships with colleagues. The intensity of exposure to these work strains is also known. In our sample, we only retain individuals present in both the 2006 and 2010 waves, i.e. 11,016 individuals.
Figure VI shows the evolution of the proportion of retirees in the sample, depending on age.
Figure VI: Proportion of retirees in the sample according to age
Figure VII: Distribution of retirement ages
residuals and , i.e. . In addition, the residuals of this model are expected to follow a bi-normal distribution (a generic formulation of this bivariate model is sketched after the variable definitions below).

4.4. Variables

Five health status indicators are used in this study. In order to assess the effect of the retirement decision on general health, we use three indicators from the Mini European Health Module (see Appendix 16): self-assessed health status (dichotomized to oppose very good and good perceived health on the one hand and fair, bad and very bad on the other), chronic illnesses (binary) and limitations in daily activities (binary). We also use two mental health indicators: suffering from Generalised Anxiety Disorder (GAD) in the six previous months or from a Major Depressive Episode (MDE) over the past two weeks (see Appendix 17 and Appendix 18).
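As an illustration of the estimation framework referred to above (retirement and health equations with bi-normally distributed residuals), a standard recursive bivariate probit can be written as follows; the notation is generic and assumed, not taken from the original text.

```latex
% Generic recursive bivariate probit, shown only as an illustration of the
% framework sketched above; the notation is assumed, not the author's.
\begin{align*}
R_i^{*} &= X_i \beta_R + Z_i \gamma + u_i, & R_i &= \mathbf{1}\{R_i^{*} > 0\} \quad \text{(retirement)} \\
H_i^{*} &= X_i \beta_H + \delta R_i + v_i, & H_i &= \mathbf{1}\{H_i^{*} > 0\} \quad \text{(health outcome)} \\
(u_i, v_i) &\sim \mathcal{N}_2\!\left( 0,\ \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)
\end{align*}
```

Here Z_i stands for the excluded identifying variable (for instance an age-eligibility dummy), and rho captures the correlation between the unobservables of the two equations; rho = 0 brings the model back to the simple univariate probits.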
First, we test other retirement thresholds, as three different thresholds are potentially relevant in the French case: ages 55, 60 and 65 (see Figure VI as well as Figure XI and Figure XII in Appendix 15). A minimal sketch of the corresponding instrumented linear probability model is given below.
Diagnostic and Statistical Manual of Mental disorders (DSM-IV), and chronic diseases and activity limitations are, by definition, less subject to volatility in declarations than other indicators (because of their long-lasting and particularly disabling nature), even though they are self-declared. Working conditions are also subjective and self-declared in this dissertation, and hence do not really allow a detailed comparison with legislative frameworks, which are based on objective measures. However, these objective measures only cover physical strains and nothing else (simply because psychosocial risk factors are, by definition, subjective feelings).
- Over the past two weeks, have you felt particularly sad or depressed, mostly during the day, and this almost every day? Yes/No
- Over the past two weeks, have you had, almost all the time, the feeling of having no interest in anything, of having lost interest or pleasure in things that you usually like? Yes/No
Then, if one of the two filter questions receives a positive answer, a third question is asked in order to identify the specific symptoms: Over the past two weeks, when you felt depressed and/or uninterested in most things, have you experienced any of the following situations? (Check as soon as the answer is "yes"; several positive responses are possible.)
- Your appetite has changed significantly, or you have gained or lost weight without intending to (variation in the month of +/-5%)
- You had trouble sleeping nearly every night (falling asleep, night or early awakenings, sleeping too much)
- You were talking or moving more slowly than usual, or on the contrary you felt agitated and had trouble staying in place, nearly every day
- You felt tired almost all the time, without energy, almost every day
- You felt worthless or guilty, almost every day
- You had a hard time concentrating or making decisions, almost every day
- You had several dark thoughts (such as thinking it would be better to be dead), or you thought about hurting yourself
Using the responses, two algorithms are then implemented in accordance with the criteria of the Diagnostic and Statistical Manual (DSM-IV). An individual suffers from MDE if:
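The exact combination of criteria applied by the survey's algorithm is cut off in this excerpt, so the following minimal sketch only illustrates the general shape of such a screening rule; the requirement of at least one positive filter question plus three additional symptoms is an assumption made for illustration, not the Sip algorithm itself.

```python
# Illustrative sketch of a DSM-IV-style MDE screen; thresholds are assumed,
# not taken from the Sip algorithm (which is truncated in the text above).
def has_mde(sad_filter: bool, anhedonia_filter: bool, n_symptoms: int,
            min_symptoms: int = 3) -> bool:
    """Screen positive when a filter question is positive and enough symptoms are reported."""
    if not (sad_filter or anhedonia_filter):
        return False
    return n_symptoms >= min_symptoms

# Example: positive sadness filter and four of the listed symptoms -> screens positive
print(has_mde(sad_filter=True, anhedonia_filter=False, n_symptoms=4))  # True
```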
Figure X: Common trend assumption test - Global sample ( )
Table 17: Heterogeneity analysis - Low education attainment
Table 18: Heterogeneity analysis - High education attainment
Table 19: Heterogeneity analysis - Highly physically demanding career
Table 20: Heterogeneity analysis - Lowly physically demanding career
Table 21: Heterogeneity analysis - Highly psychosocially demanding career
Table 22: Heterogeneity analysis - Lowly psychosocially demanding career
Table 23: Mechanisms - The effect of retirement on daily activities
Table 24: Mechanisms - The effect of retirement on health-related risky behaviours
Table 25: Selection analysis - Population in employment vs. unemployed in 2006
Table 26: Selection analysis - Main characteristics of individuals reporting at least one mental disorder in 2006, according to their employment status in 2006
Table 27: Attrition analysis - panel population (interviewed in 2006 and 2010) vs. attrition population (interviewed in 2006 and not in 2010)
Table 28: Attrition analysis - panel population
Table 47: Auxiliary models of the probability of being retired (age 55, 60 and 65)
Table 48: Estimation of linear probability models (LPM) using the generalized method of moments (GMM) with two instruments (60 and 65)
Table 49: Auxiliary models of the probability of being retired - LPM (GMM)
Table 50: Retirement and health status - No endogenous covariates
Table 51: Auxiliary models of the probability of being retired - No endogenous covariates
Figure III: Employment rates in 2010 according to self-reported health status in 2006 [bar chart; categories: GAD, MDE, at least one mental disorder, activity limitations, poor general health, chronic disease, daily smoking, risky alcohol consumption, overweight; series: men, women]
descriptive statistics on mental disorders: GAD is reported by 88 men and 195 women, MDE by 91 and 236 respectively, and 150 men and 335 women declare suffering from at least one mental disorder.
Reading: 82% of men in employment and suffering from at least one mental disorder (GAD or MDE) in 2006 are still in employment in 2010, against 86% of women. Field: individuals aged 30-55 in employment in 2006. Source: Sip (2006), weighted and calibrated statistics.
Figure IV: General health status of anxious and/or depressed individuals in 2006 [bar chart; categories: GAD, MDE, at least one mental disorder, activity limitations, poor general health, chronic disease, daily smoking, risky alcohol consumption, overweight; series: overall population in employment in 2006, employment rate in 2010 (men), employment rate in 2010 (women)]
29%. It is interesting to note that men with at least one mental disorder are less likely to report being overweight (Figure IV).
Reading: 53% of men reporting mental disorders in 2006 also have risky alcohol consumption in 2006, against 17% of women. Field: individuals aged 30-55 in employment in 2006 who reported having at least one mental health disorder. Source: Sip (2006), weighted and calibrated statistics.
Table 1: Estimated probability of employment in 2010, male population
Univar. Probit (M1) Univar. Probit (M2) Univar. Probit (M3) Bivariate Probit (IV)
Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err.
Mental health in 2006
At least one mental disorder -.09*** .02 -.07*** .02 -.07*** .02
Mental health (instr.) in 2006
At least one mental disorder -.13** .05
Ind. characteristics in 2006
Age (ref.: 30-35 years-old)
-35-39 .02 .02 .01 .03 .01 .03 -.01 .03
-40-44 -.01 .02 -.03 .02 -.04 .03 -.03 .03
-45-49 -.02 .02 -.01 .03 -.03 .03 -.03 .03
-50-55 -.14*** .02 -.15*** .02 -.16*** .02 -.16*** .03
In a relationship (ref.: Single) .03** .01 .03** .01 .03** .01 .02 .02
Children (ref.: None) -.02 .02 -.01 .02 -.01 .02 -.02 .02
Education (ref.: French bac.)
-No diploma -.06** .02 -.05** .02 -.05* .03 -.06** .03
-Primary -.03 .02 -.01 .02 -.01 .02 -.01 .02
-Superior -.00 .02 -.00 .02 -.00 .02 .01 .02
Employment in 2006
Act. sector (ref.: Industrial)
-Agricultural -.03 .02 -.02 .03 -.02 .03 -.03 .03
-Services -.00 .01 .00 .01 .00 .01 .01 .02
Activity status (ref.: Private)
-Public sector .03* .02 .02 .02 .02 .02 .01 .02
-Self-employed .04 .03 .04 .03 .03 .03 .03 .04
Prof. cat. (ref.: Blue collar)
-Farmers .15*** .05 .12** .05 .12** .05 .12** .06
-Artisans .07** .04 .06* .04 .06* .04 .10** .04
-Managers .05** .02 .04** .02 .04** .02 .04* .02
-Intermediate .03* .02 .02 .02 .02 .02 .02 .02
-Employees .01 .02 .00 .02 -.00 .02 -.01 .02
Part time (ref.: Full-time) -.05 .03 -.04 .02 -.03 .03 -.01 .04
General health status in 2006
Poor perceived health status -.02 .02 -.02 .02 -.00 .02
Chronic diseases .00 .01 .00 .01 .00 .01
Activity limitations -.03* .02 -.03* .02 -.04** .02
Risky behaviours in 2006
Daily smoker -.04*** .01 -.04*** .01 -.05*** .01
Risky alcohol consumption -.00 .01 .00 .01 .01 .01
Overweight .01 .01 .01 .01 .01 .01
Professional route
Maj. of empl. in long jobs .03* .02 .02 .01
Stable career path .01 .01 .00 .01
Rho .22** .12
Hausman test 1.71
N 2004 2004 2004 1860
Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey, men aged 30-55 in employment in 2006.
Table 2: Estimated probability of employment in 2010, female population
The last column of Table 1 and Table 2 presents the results of the bivariate probit models, respectively for men and women. The results for the bivariate mental health models are summarized in Table 3 (complete results of univariate and bivariate probit models for mental health are available in Table 34, Appendix 7). For these results, note that, as explained in
Univar. Probit (M1) Univar. Probit (M2) Univar. Probit (M3) Bivariate Probit (IV)
Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err.
Mental health in 2006
At least one mental disorder -.05*** .01 -.02 .02 -.02 .02
Mental health (instr.) in 2006
At least one mental disorder -.02 .09
Ind. characteristics in 2006
Age (ref.: 30-35 years-old)
-35-39 .01 .02 .01 .02 .00 .02 .00 .02
-40-44 .01 .02 .01 .02 .00 .02 .00 .02
-45-49 -.04** .02 -.03 .02 -.04 .02 -.04 .02
-50-55 .10*** .02 -.10*** .02 -.10*** .02 -.10*** .02
In a relationship (ref.: Single) .00 .01 .01 .01 .01 .01 .01 .01
Children (ref.: None) -.08*** .02 -.07*** .02 -.07*** .02 -.07*** .02
Education (ref.: French bac.)
-No diploma -.03 .03 -.04 .03 -.04 .03 -.04 .03
-Primary -.02 .02 -.01 .02 -.01 .02 -.01 .02
-Superior .00 .02 -.00 .02 -.01 .02 -.01 .02
Employment in 2006
Act. sector (ref.: Industrial)
-Agricultural .04 .04 .04 .04 -04 .04 -.04 .04
-Services .05*** .02 .06*** .02 .06*** .02 .06*** .02
Activity status (ref.: Private)
-Public sector .01 .01 .02* .01 .02 .01 .02 .01
-Self-employed .07** .04 .06* .04 .06* .04 .06* .04
Prof. cat. (ref.: Blue collar)
-Farmers .02 .07 .01 .07 -.00 .07 -.00 .07
-Artisans -.02 .04 -.03 .05 -.03 .05 -.03 .05
-Managers .00 .03 -.01 .03 -.02 .03 -.02 .03
-Intermediate -.00 .02 -.01 .02 -.01 .02 -.01 .02
-Employees .01 .02 .00 .02 .00 .02 -.00 .02
Part time (ref.: Full-time) -.03** .01 -.03** .01 -.02* .01 -.02* .01
General health status in 2006
Poor perceived health status -.04** .02 -.03** .02 -.03* .02
Chronic diseases .00 .01 -.00 .01 -.00 .01
Activity limitations -.04** .02 -.04** .02 -.04* .02
Risky behaviours in 2006
Daily smoker -.01 .01 -.00 .01 -.00 .01
Risky alcohol consumption -.01 .02 -.01 .02 -.01 .02
Overweight -.02 .01 -.01 .01 -.01 .01
Professional route
Maj. of empl. in long jobs .02 .01 .02 .01
Stable career path .01 .01 .01 .01
Rho .02 .36
Hausman test .00
N 2129 2129 2129 1982
Table 3: Estimation of mental health in 2006
Men Women
Coeff. Std. err. Coeff. Std. err.
Identifying variables
Raised by a single parent - - .07*** .02
Suffered from violence during childhood .09** .05 .08*** .02
Experienced many marital breakdowns .03** .01 - -
After controlling for individual characteristics, employment, general health status and professional career. Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field:
Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006.
The mental health estimations in 2006 (Table 34 in Appendix 7) reinforce this hypothesis. In men, the causal effect of mental health in 2006 on employment in 2010 seems corroborated by the bivariate analysis, indicating a drop of in the probability of remaining at work. It is also possible to reaffirm the direct role of smoking on the likelihood of job loss. Mental health remains non-discriminative for women's employment. Ultimately, our main results are confirmed by the bivariate analysis and fall in line with the literature using the same methodologies. It should be noted that the Hausman tests are all non-significant, indicating that the identifying-variable frameworks might not be very different from the naive models, and hence are not strictly necessary.
Table 4: Impact of mental health in 2006 on employment in 2010 according to various measures, men and women
Men Women
Coeff. Std. err. Coeff. Std. err.
Instrumented mental health
Suffers from MDE -.08*** .02 -.01 .01
Suffers from GAD -.10*** .02 -.02 .02
Disorders counter
-One disorder -.05* .02 -.02 .02
-Two simultaneous disorders -.14*** .04 -.02 .03
After controlling for individual characteristics, employment, general health status and professional career.
Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey, men and women aged 30-55 in employment in 2006.
Table 5: Estimated probability of employment (binary variable 2007-2010)
Men Women
Coeff. Std. err. Coeff. Std. err.
Mental health in 2006
At least one mental disorder -.05*** .02 -.00 .02
After controlling for individual characteristics, employment, general health status and professional career.
Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006.
Table 6: Thresholds description
Treatment thresholds:
- Single exposure threshold: 4, 6, 8, 10, 12, 14, 16, 18
- Poly-exposure threshold: 2, 3, 4, 5, 6, 7, 8, 9
Periods definition:
- Working conditions observation period: 6, 9, 12, 15, 18, 21, 24, 27
- Minimum duration at work: 2, 3, 4, 5, 6, 7, 8, 9
Table 7: Base sample description ( )
Field: Population aged 42-74 in 2006 and present from to . 7th threshold. Unmatched sample.
Variable Mean Std. error Min Max Physical sample Treated Control Diff. Psychosocial sample Treated Control Diff. Global sample Treated Control Diff.
Treatment
Physical treatment .47 .50 0 1 - - - - - - - - -
Psychosocial treatment .44 .50 0 1 - - - - - - - - -
Global treatment .68 .47 0 1 - - - - - - - - -
Health status
Initial chronic diseases .12 .36 0 4.67 .10 .13 -.04*** .12 .11 .01 .11 .14 -.03**
First health period .63 .93 0 9.50 .65 .62 .03 .70 .58 .12*** .64 .61 .03
Second health period .72 .99 0 9.00 .73 .70 .03 .80 .65 .15*** .73 .69 .04
Third health period .82 1.07 0 9.00 .83 .82 .02 .91 .76 .15*** .83 .81 .03
Demography
Entry year at work 1963 8.65 1941 1977 1962 1965 -2.7*** 1963 1963 -0.37 1963 1965 -2.3***
Men .51 .50 0 1 .63 .41 .21*** .54 .49 .05*** .57 .39 .19***
Women .49 .50 0 1 .37 .59 -.21*** .46 .51 -.05*** .43 .61 -.19***
Age 59.67 7.67 42 74 60.20 59.20 .99*** 59.94 59.47 .47* 60.09 58.78 1.31***
No diploma .13 .33 0 1 .18 .08 .09*** .14 .11 .03** .15 .08 .07***
Inf. education .62 .48 0 1 .69 .57 .12*** .61 .64 -.03* .64 .58 .06***
Bachelor .12 .32 0 1 .07 .16 -.09*** .11 .12 -.01 .09 .17 -.07***
Sup. education .12 .32 0 1 .05 .18 -.13*** .12 .12 -.00 .10 .16 -.07***
Childhood
Problems with relatives .44 .50 0 1 .47 .40 .07*** .48 .41 .07*** .46 .39 .07***
Violence .09 .29 0 1 .10 .08 .02** .12 .07 .05*** .10 .06 .04***
Severe health problems .13 .33 0 1 .13 .12 .01 .14 .12 .02* .13 .12 .02
Physical post-exposure
None .57 .49 0 1 .26 .85 -.59*** .48 .65 -.17*** .43 .88 -.46***
Low .20 .40 0 1 .30 .11 .20*** .22 .18 .04*** .26 .07 .18***
High .23 .42 0 1 .44 .04 .39*** .30 .17 .13*** .32 .04 .28***
Psycho. post-exposure
None .57 .49 0 1 .48 .66 -.18*** .27 .81 -.53*** .44 .85 -.41***
Low .21 .43 0 1 .25 .18 .07*** .31 .14 .18*** .26 .09 .17***
High .22 .41 0 1 .27 .17 .11*** .41 .06 .35*** .29 .05 .24***
Global post-exposure
None .43 .50 0 1 .22 .62 -.39*** .23 .59 -.35*** .26 .80 -.55***
Low .18 .38 0 1 .19 .17 .03* .19 .17 .01 .22 .10 .12***
High .39 .49 0 1 .58 .21 .37*** .58 .24 .34*** .53 .10 .43***
Tobacco consumption
During initial health period .09 .29 0 1 .08 .10 -.03*** .10 .08 .02 .09 .10 -.01
During 1st health period .23 .42 0 1 .24 .22 .03** .23 .23 .01 .24 .21 .03**
During 2nd health period .22 .42 0 1 .23 .21 .02 .22 .22 -.00 .23 .20 .03*
During 3rd health period .21 .41 0 1 .22 .20 .02 .21 .21 -.00 .21 .19 .02
Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference
significant at the 10% level. Standard errors in italics. The average number of chronic diseases in the whole sample
before labour market entry is
. In the future physically treated population, this number is (which is significantly lower than for the future control group, i.e., at the 1% level). Such a difference at baseline in health statuses between future treated and control groups does not exist in the psychosocial sample. Field: Source: Santé et Itinéraire Professionnel survey
(Sip), wave 2006.
Table 8: Working conditions and chronic diseases description ( )
The physically treated population faced nearly years of physical burden, when their control group only faced one and a half. This difference of years is significant at the 1% level. For chronic diseases, the sample faced an average of cancer from the beginning of their lives to the date of the survey. Field: Population aged 42-74 in 2006 and present from to . 7th iteration. Unmatched sample.
Variable Mean Std. error Min Max Physical sample Treated Control Diff. Psychosocial sample Treated Control Diff. Global sample Treated Control Diff.
Working conditions
Night work 1.34 5.41 0 32 2.58 .23 2.35*** 1.88 .92 .95*** 1.87 .19 1.68***
Repetitive work 4.35 9.07 0 40 7.88 1.20 6.68*** 5.72 3.31 2.40*** 5.86 1.17 4.68***
High physical load 8.31 12.85 0 46 15.80 1.59 14.20*** 10.94 6.27 4.67*** 11.60 1.31 10.28***
Hazardous materials 4.60 9.99 0 41 8.87 .76 8.11*** 5.77 3.69 2.08*** 6.43 .69 5.74***
Lack of skill usage 1.50 4.80 0 25 1.86 1.17 .69*** 2.71 .56 2.15*** 1.92 .61 1.31***
Work under pressure 3.76 8.51 0 38 5.45 2.24 3.20*** 7.18 1.11 6.07*** 5.15 0.80 4.35***
Tension with public 1.24 5.01 0 29 1.52 .98 .53*** 2.46 .29 2.17*** 1.71 .22 1.49***
Lack of recognition 3.72 8.45 0 40 5.41 2.21 3.20*** 7.22 1.01 6.21*** 5.11 .77 4.34***
Work/private life imbalance 1.43 5.41 0 31 1.90 1.01 .89*** 2.82 .35 2.47*** 1.98 .26 1.72***
Tensions with colleagues 1.34 5.41 0 32 .37 .31 .06 .59 .14 .45*** .42 .16 .26***
Chronic diseases
Cardiovascular .38 .54 0 3 .38 .38 .01 .39 .37 .02 .38 .38 .01
Cancer .09 .35 0 3 .06 .11 -.04*** .08 .09 -.01 .07 .11 -.04**
Pulmonary .16 .43 0 4 .19 .13 .07*** .16 .16 .01 .17 .13 .05**
ENT .12 .41 0 3 .13 .11 .02 .13 .12 .01 .13 .12 .01
Digestive .16 .46 0 4 .17 .15 .02 .17 .15 .02 .17 .15 .02
Mouth/teeth .01 .08 0 2 .01 .01 .00 .01 .01 .00 .01 .00 .00
Bones/joints .42 .67 0 3 .49 .36 .12*** .44 .40 .04 .44 .39 .05*
Genital .08 .32 0 2 .08 .08 -.00 .08 .08 -.00 .08 .08 .00
Endocrine/metabolic .26 .52 0 2 .26 .27 -.00 .23 .28 -.04* .25 .29 -.04
Ocular .09 .33 0 3 .08 .10 -.02 .09 .09 .01 .08 .10 -.02
Psychological .19 .51 0 4 .18 .20 -.02 .24 .16 .08*** .20 .17 .03*
Neurological .07 .31 0 2 .07 .07 .01 .08 .07 .00 .07 .08 -.00
Skin .05 .26 0 1 .05 .05 -.00 .05 .05 .01 .05 .05 .00
Addiction .02 .17 0 2 .02 .02 -.00 .02 .02 .00 .02 .02 -.00
Other .14 .44 0 4 .14 .14 -.00 .12 .15 -.03* .14 .13 .01
Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. Standard errors in italics. The individuals present in the sample faced an average of years of exposure to a high physical load at work. Source: Santé et Itinéraire Professionnel survey
(Sip), wave 2006.
Table 9: Matched sample description ( )
Variable Physical sample Treated Control Diff. Psychosocial sample Treated Control Diff. Global sample Treated Control Diff.
Health status
Initial chronic diseases .08 .10 -.02 .10 .10 .00 .09 .12 -.02
First health period .63 .55 .07** .68 .54 .13*** .63 .56 .07**
Second health period .72 .63 .09*** .78 .62 .16*** .72 .63 .08**
Third health period .82 .72 .10*** .89 .72 .17*** .83 .74 .09**
Demography
Entry year at work 1962 1962 -.08 1963 1963 .01 1963 1963 -.04
Men .63 .63 0 .54 .54 0 .56 .56 0
Women .37 .37 0 .46 .46 0 .44 .44 0
Age 60.02 60.31 -.28 59.82 59.61 .21 59.59 59.64 -.05
No diploma .15 .15 0 .13 .13 0 .11 .11 0
Inf. education .72 .72 0 .65 .65 0 .70 .70 0
Bachelor .06 .06 0 .10 .10 0 .09 .09 0
Sup. education .05 .05 0 .11 .11 0 .10 .10 0
Childhood
Problems with relatives .45 .45 0 .46 .46 0 .41 .41 0
Violence .07 .07 0 .07 .07 0 .04 .04 0
Severe health problems .10 .10 0 .10 .10 0 .09 .09 0
Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference
significant at the 10% level. After matching, there is no significant difference between the future treated and control
groups in terms of initial mean number of chronic diseases for physical, psychosocial and global samples. Field: Population aged 42-74 in 2006 and present from to . 7 th threshold. Matched (weighted) sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
Table 10: Matched difference-in-differences results ( to ), physical treatment
Treatment / Sex Baseline Diff. (Coeff., Std. Err.) Follow-up Diff. (Coeff., Std. Err.) Diff.-in-Diff. (Coeff., Std. Err.) Mean chronic diseases in treat. N (treat./tot.) % matched (treat./contr.)
: being exposed to at least 12 years of single exposures or 6 years of multiple exposures
Men
First health period .012 .069 .036 .065 .488
Second health period -.024 .020 .012 .050 .036 .068 .500 1908/3212
Third health period Women .024 .066 .048 .047 .562 90% / 88%
First health period .086 .056 .100* .052 .439
Second health period -.014 .019 .087 .058 .101** .043 .496 1226/3044
Third health period .097* .051 .111** .048 .522
: being exposed to at least 14 years of single exposures or 7 years of multiple exposures
Men
First health period .016 .072 .038 .070 .497
Second health period -.022 .019 .017 .074 .039 .073 .561 1890/3196
Third health period Women .024 .076 .046 .072 .620 90% / 88%
First health period .134*** .055 .148** .058 .597
Second health period -.014 .020 .142** .060 .156*** .053 .653 1162/3036
Third health period .155** .067 .169** .066 .762
: being exposed to at least 16 years of single exposures or 8 years of multiple exposures
Men
First health period .024 .075 .047 .074 .607
Second health period -.023 .017 .032 .076 .055 .075 .681 1890/3226
Third health period Women .066 .078 .089 .077 .815 91% / 88%
First health period .178*** .068 .185*** .064 .769
Second health period -.007 .018 .192*** .073 .199*** .069 .862 1128/3042
Third health period .196** .081 .203*** .076 .959
: being exposed to at least 18 years of single exposures or 9 years of multiple exposures
Men
First health period .063 .069 .076* .052 .736
Second health period -.013 .017 .84 .070 .097** .054 .833 1820/3224
Third health period Women .87 .076 .100** .055 .946 92% / 87%
First health period .193*** .072 .193** .079 .904
Second health period -.000 .019 .210*** .078 .210*** .074 .970 1064/3022
Third health period .221** .083 .221*** .068 1.044
: being exposed to at least 20 years of single exposures or 10 years of multiple exposures
Men
First health period .80 .064 .087** .051 .764
Second health period -.007 .016 .110* .066 .117** .060 .871 1694/3232
Third health period Women .113* .070 .120*** .060 .986 92% / 86%
First health period .225*** .075 .228*** .082 .909
Second health period -.003 .019 .229*** .086 .232*** .077 .961 970/2976
Third health period .246*** .081 .249*** .070 1.045
Table 11: Matched difference-in-differences results ( to ), psychosocial treatment
Treatment / Sex Baseline Diff. (Coeff., Std. Err.) Follow-up Diff. (Coeff., Std. Err.) Diff.-in-Diff. (Coeff., Std. Err.) Mean chronic diseases in treat. N (treat./tot.) % matched (treat./contr.)
: being exposed to at least 12 years of single exposures or 6 years of multiple exposures
Men
First health period .018 .039 .016 .035 .357
Second health period .014 .016 .046 .041 .032 .037 .408 1560/3318
Third health period Women .045 .045 .031 .042 .432 89% / 93%
First health period .037 .053 .040 .048 .380
Second health period -.003 .024 .053 .054 .056 .046 .419 1354/3068
Third health period .064 .056 .067 .044 .445
: being exposed to at least 14 years of single exposures or 7 years of multiple exposures
Men
First health period .089* .043 .080** .040 .464
Second health period .009 .016 .090* .046 .081** .040 .521 1534/3288
Third health period Women .139*** .047 .130*** .045 .632 90% / 91%
First health period .035 .053 .047 .051 .516
Second health period -.012 .024 .053 .058 .065 .045 .569 1310/3072
Third health period .055 .062 .067 .056 .660
: being exposed to at least 16 years of single exposures or 8 years of multiple exposures
Men
First health period .117** .049 .112** .046 .613
Second health period .005 .016 .118** .056 .113** .056 .664 1496/3320
Third health period Women .139** .066 .134** .067 .734 90% / 93%
First health period .151*** .059 .156*** .055 .743
Second health period -.005 .023 .155*** .065 .160*** .063 .867 1272/3142
Third health period .157** .072 .172*** .061 .969
: being exposed to at least 18 years of single exposures or 9 years of multiple exposures
Men
First health period .123** .050 .111** .047 .671
Second health period .012 .017 .131** .067 .119** .048 .696 1410/3290
Third health period Women .161** .069 .149** .069 .830 91% / 92%
First health period .179*** .065 .181** .079 .881
Second health period -.002 .023 .204*** .072 .206*** .068 .963 1192/3106
Third health period .218*** .081 .220*** .061 1.058
: being exposed to at least 20 years of single exposures or 10 years of multiple exposures
Men
First health period .127*** .053 .116** .052 .714
Second health period .011 .017 .133** .073 .122** .050 .730 1274/3272
Third health period Women .154*** .074 .143*** .053 .861 91% / 91%
First health period .206*** .066 .209*** .078 .917
Second health period -.003 .023 .222*** .072 .225*** .067 1.015 1110/3098
Third health period .230*** .081 .233*** .061 1.125
Table 12: Matched difference-in-differences results ( to ), global treatment
Treatment / Sex Baseline Diff. (Coeff., Std. Err.) Follow-up Diff. (Coeff., Std. Err.) Diff.-in-Diff. (Coeff., Std. Err.) Mean chronic diseases in treat. N (treat./tot.) % matched (treat./contr.)
: being exposed to at least 12 years of single exposures or 6 years of multiple exposures
Men
First health period -.003 .067 .023 .066 .391
Second health period -.026 .022 -.003 .070 .023 .069 .401 2256/3002
Third health period Women .017 .053 .043 .049 .434 82% / 94%
First health period .024 .056 .025 .051 .386
Second health period -.001 .023 .032 .054 .033 .047 .438 1850/3018
Third health period .034 .056 .035 .049 .473
: being exposed to at least 14 years of single exposures or 7 years of multiple exposures
Men
First health period -.019 .073 .013 .073 .431
Second health period -.032 .021 -.010 .074 .022 .075 .491 2192/2962
Third health period Women .025 .076 .057 .076 .589 80% / 94%
First health period .067 .057 .076 .054 .527
Second health period -.009 .021 .078 .054 .087 .050 .586 1734/2978
Third health period .089 .063 .098* .056 .688
: being exposed to at least 16 years of single exposures or 8 years of multiple exposures
Men
First health period .018 .038 .049 .067 .588
Second health period -.031 .020 .038 .070 .069 .069 .671 2160/2978
Third health period Women .049 .074 .80 .073 .804 81% / 94%
First health period .143*** .071 .148*** .067 .740
Second health period -.005 .020 .157*** .058 .162*** .054 .859 1710/3010
Third health period .167*** .063 .173*** .059 .972
: being exposed to at least 18 years of single exposures or 9 years of multiple exposures
Men
First health period .058 .066 .080 .064 .703
Second health period -.022 .019 .065 .071 .087 .069 .772 2126/3024
Third health period Women .114 .074 .136* .073 .934 82% / 94%
First health period .138* .083 .139* .081 .840
Second health period -.001 .019 .170** .071 .171** .068 .936 1652/3034
Third health period .180*** .064 .181*** .061 1.044
: being exposed to at least 20 years of single exposures or 10 years of multiple exposures
Men
First health period .097 .063 .100* .055 .724
Second health period -.003 .017 .099 .067 .102* .056 .777 2146/3172
Third health period Women .113* .071 .116* .068 .925 86% / 93%
First health period .191** .077 .190** .075 .885
Second health period .001 .019 .206*** .061 .205*** .058 .992 1586/3072
Third health period .210*** .067 .209*** .064 1.095
Table 13: General descriptive statistics
Variable Mean Std. error Min. Max. N Mean retirees Mean non-retirees Diff.
Retirement
Retired .42 .49 0 1 2071 - - -
Aged 55 or more .74 .44 0 1 3629 .98 .55 -.44***
Aged 60 or more .45 .50 0 1 2235 .90 .13 -.77***
Aged 65 or more .18 .38 0 1 876 .40 .01 -.39***
Health status
Poor perceived health .37 .48 0 1 1802 .38 .36 -.02*
Chronic diseases .45 .50 0 1 2200 .50 .40 -.10***
Activity limitations .25 .43 0 1 1219 .26 .24 -.02*
Anxiety disorder .07 .25 0 1 321 .05 .08 .02***
Depressive episode .08 .27 0 1 380 .06 .09 .03***
Demographics
Men .46 .50 0 1 2254 .51 .42 -.08***
Age 58.79 .40 50 69 4932 63.47 55.40 -8.06***
No education .09 .28 0 1 421 .08 .09 .01
Primary/secondary .56 .50 0 1 2782 .62 .52 -.09***
Equivalent to French BAC .14 .34 0 1 679 .12 .15 .04***
Superior .19 .40 0 1 957 .17 .21 .04***
One or more children .91 .29 0 1 4466 .91 .90 -.01
Employment
Public sector .18 .39 0 1 898 .12 .23 .11***
Private sector .36 .48 0 1 1772 .20 .47 .26***
Self-employed .07 .26 0 1 348 .04 .10 .06***
Career in long-term jobs .79 .41 0 1 3881 .84 .75 -.10***
Stable career .59 .49 0 1 2887 .53 .62 .10***
Poor physical working cond. .22 .41 0 1 1010 .29 .17 -.12***
Poor psychosocial working cond. .16 .37 0 1 731 .20 .13 -.07***
Mechanisms
Daily social activities .42 .49 0 1 2088 .48 .38 -.10***
Sport .42 .49 0 1 2063 .45 .40 -.05***
Tobacco consumption .22 .42 0 1 1034 .16 .27 .11***
Risky alcohol consumption .24 .42 0 1 1085 .25 .23 -.02
Overweight .56 .50 0 1 2540 .60 .52 -.09***
Note: ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Reading: 38% of retirees report poor perceived health, against 36% of non-retirees. This difference of -2 percentage points is significant at the 10% level.
Field: Santé et Itinéraire Professionnel survey, individuals aged 50-69 in 2010.
Table 14: Retirement and health status
When taking into account the endogenous nature of the retirement decision (i.e. reverse causality between health conditions and retirement, as well as omitted variables related to these two dimensions), the results are radically changed. Retirement indeed appears to have a fairly strong negative effect on the probability of reporting activity limitations (
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired .00 .02 -.07 .05 .04 .02 -.02 .05 .00 .02 -.09** .04 -.02 .01 -.11*** .03 -.01 .01 -.10*** .03
Demographics
Men .00 .00 -.00 -.00 .02 .02 -.04*** -.04*** -.03*** -.03***
(ref.: women) .01 .01 .02 .02 .01 .01 .01 .01 .01 .01
Age .06** .03 .06** .03 .03 .03 .02 .03 .07*** .03 .07*** .03 .03 .02 .03* .02 .03** .02 .04*** .02
Age² -.01** .00 -.01* .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.01* .00 -.00 .00 -.01** .00 -.01* .01
Children -.03 -.03 -.03 -.03 .01 .01 .03* .03* .03* .03*
(ref.: none) .02 .02 .03 .03 .02 .02 .01 .02 .02 .02
Education
< BAC -.11*** -.11*** -.03 -.03 -.04* -.04* -.02 -.02 -.04*** -.04***
(ref.: no dipl.) .02 .02 .03 .03 .02 .02 .01 .01 .01 .01
= BAC -.14*** -.14*** -.03 -.03 -.04 -.04 -.01 -.00 -.04** -.03**
(ref.: no dipl.) .02 .03 .03 .03 .03 .03 .01 .02 .01 .02
> BAC -.26*** -.26*** -.08*** -.08** -.09*** -.09*** -.03** -.04** -.07*** -.07***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02
Employment
Public sector -.02 -.02 -.01 -.01 -.05** -.05** .01 .01 .01 .01
(ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Self-employed -.07** -.08*** -.04 -.05 -.05* -.06** -.02 -.04** -.04* -.05**
(ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Long-term jobs -.12*** -.11*** -.08*** -.08*** -.10*** -.09*** -.02** -.01 -.04*** -.03***
(ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Stable career -.02 -.02 -.01 -.01 -.02* -.02* .00 .01 -.01* -.01
(ref.: unstable) .01 .01 .02 .02 .01 .01 .01 .01 .01 .01
Physical strains .11*** .02 .12*** .02 .07*** .02 .07*** .02 .09*** .02 .10*** .02 .02*** .01 .03*** .01 .02* .01 .02** .01
Psycho. strains .07*** .02 .07*** .02 .06*** .02 .06*** .02 .04** .02 .04** .02 .03*** .01 .04*** .01 .04*** .01 .04*** .01
Rho .14 .09 .10 .08 .21** .08 .47*** .10 .41*** .12
Hausman test 16 2.33 1.71 6.75*** 10.13*** 10.13***
N 4610
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
Table 15: Heterogeneity analysis - Male population
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired -.06 .03 -.08 .07 .01 .03 -.04 .07 -.04 .03 -.11* .06 -.02 .01 -.11*** .04 -.02 .02 -.13*** .05
Demographics
Age .13*** .04 .14*** .04 .08* .05 .08* .05 .09** .04 .10** .04 -.00 .02 .02 .03 .02 .02 .06* .03
Age² -.01*** -.01*** .00 .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 .00 .00 -.00 .00 -.00 .00 -.01* .00
Children -.00 -.00 -.05 -.05 .03 .03 .02 .03 .01 .01
(ref.: none) .03 .03 .04 .03 .03 .03 .02 .02 .02 .02
Education
< BAC -.10*** -.09*** .02 .02 -.03 -.03 -.01 -.01 -.04*** -.04***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .01 .01 .01
= BAC -.22*** -.22*** -.01 -.01 -.08** -.08** -.03 -.04* -.05*** -.06***
(ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02
> BAC -.27*** -.27*** -.04 -.04 -.13*** -.14*** -.02 -.03 -.06*** -.07***
(ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02
Employment
Public sector -.07** -.07** -.05 -.05 -.06*** -.10*** -.01 -.01 -.00 -.01
(ref.: private) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02
Self-employed -.11*** -.11*** -.07* -.08* -.06* -.07** .01 -.01 -.02 -.04
(ref.: private) .04 .04 .04 .04 .03 .04 .02 .02 .02 .02
Long-term jobs -.15*** -.15*** -.12*** -.12*** -.10*** -.09*** -.02* -.02 -.05*** -.04**
(ref.: short term) .04 .04 .04 .04 .03 .03 .01 .02 .01 .02
Stable career -.03 -.03 -.02 -.02 -0.4** -.04** -.00 .00 -.01 -.00
(ref.: unstable) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Physical strains .09*** .02 .09*** .02 -.04 .03 .04 .03 -.07*** .02 .07*** .02 .02** .01 .03** .01 .01 .01 .02 .01
Psycho. strains .07*** .03 .07*** .03 .07** .03 .08** .03 -.04 .02 -.04 .02 .02* .01 .02* .01 .04*** .01 .04*** .01
Rho .05 .13 .09 .11 .17 .12 .60*** .15 .61*** .17
Hausman test .10 .63 1.81 5.40*** 5.76***
N 2140
Table 16: Heterogeneity analysis - Female population
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired .04 .03 -.08 .07 .04 .03 -.03 .07 -.04 .03 -.06 .06 -.01 .02 -.06** .04 -.01 .02 -.09** .04
Demographics
Age .01 .04 -.00 .04 -.01 .04 -.02 .04 .06* .04 .05 .04 .05** .02 .04* .03 .05* .03 .04 .03
Age² -.00 .00 .00 .00 .00 .00 .00 .00 -.01* .00 -.00 .00 -.01** .00 -.00 .00 -.01* .00 -.00 .00
Children -.04 -.04 .01 .01 .01 -.01 .03 .04 .06** .06**
(ref.: none) .03 .03 .04 .04 .03 .03 .02 .02 .03 .03
Education
< BAC -.13*** -.13*** -.09** -.09** -.05 -.04 -.03 -.02 -.03* -.03
(ref.: no dipl.) .03 .03 .04 .04 .03 .03 .02 .02 .02 .02
= BAC -.09** -.09** -.06 -.06 -.01 -.01 .01 .01 -.02 -.01
(ref.: no dipl.) .04 .04 .04 .04 .05 .04 .02 .02 .02 .02
> BAC -.27*** -.27*** -.12*** -.12*** -.07** -.07* -.04* -.04* -.06*** -.06***
(ref.: no dipl.) .04 .04 .04 .04 .03 .03 .02 .02 .02 .02
Employment
Public sector .01 .01 .02 .01 -.02 -.02 .02 .02 .01 .01
(ref.: private) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02
Self-employed -.01 -.03 -.00 -.01 -.04 -.05 -.09** -.10** -.06* -.07**
(ref.: private) .05 .05 .05 .05 .04 .04 .04 .04 .04 .04
Long-term jobs -.12*** -.10*** -.07*** -.06*** -.11*** -.10*** -.02* -.01 -.04*** -.03**
(ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Stable career -.02 -.02 -.01 -.00 -.01 -.01 .01 .01 -.02 -.01
(ref.: unstable) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Physical strains .13*** .03 .14*** .03 -.10*** ..03 .10*** .03 .11*** .02 -11*** .02 .02 .02 .03 .02 .02 02 .03 .02
Psycho. strains .07** .03 .07** .03 .05* .03 .05* .03 -.03 .02 -.04 .02 .05*** .02 .05*** .02 .04** .02 .04** .02
Rho .22** .12 .13 .11 .20* .12 .34** .14 .30* .15
Hausman test 3.60 1.13 .15 2.08 5.33***
N 2470
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Women aged 50-69 in 2010.
Table 17: Heterogeneity analysis - Low education attainment
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired -.01 .03 -.08 .06 .03 .03 .02 .06 -.01 .03 -.13** .05 -.02 .01 -.08** .03 -.01 .02 -.07** .04
Demographics
Men .04** .05** .02 .02 .05*** .05*** -.03*** -.03*** -.03** -.02**
(ref.: women) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Age .05 .04 .05 .04 .02 .04 .03 .04 .07** .03 .08** .03 .02 .02 .02 .02 .04* .02 .04* .02
Age² -.00 .00 -.00 .00 -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.00 .00 -.00 .00 -.01* .00 -.01 .01
Children -.01 -.01 -.04 -.04 .03 .03 .03 .03* .03 .04*
(ref.: none) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Employment
Public sector -.02 -.02 .01 .01 -.07*** -.07*** .01 .01 .01 .01
(ref.: private) .03 .03 .03 .03 .03 .03 .01 01 .01 .02
Self-employed -.08** -.09** -.03 -.03 -.03 -.04 -.01 -.02 -.02 -.03
(ref.: private) .04 .04 .04 .04 .04 .04 .02 .02 .02 .03
Long-term jobs -.15*** -.15*** -.12*** -.12*** -.13*** -.12*** -.03*** -.03** -.06*** -.06***
(ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Stable career -.03* -.03* -.01 -.01 -.02 -.01 .00 .01 -.02 -.01
(ref.: unstable) .02 .02 .02 .02 .02 .02 .00 .01 .01 .01
Physical strains .13*** .02 .13*** .02 .05** .02 .05** .02 .10*** .02 .10*** .02 .03*** .01 .03*** .01 .03*** .01 .03*** .01
Psycho. strains .08*** .02 .08*** .02 .07*** .03 .07*** .03 .02 .02 .02 .02 .03** .01 .03** .01 .04*** .01 .04*** .01
Rho .12 .10 .03 .09 .25** .10 .32** .14 .31** .13
Hausman test 1.81 .04 9.00*** 4.50*** 3.00
N 3045
Table 18: Heterogeneity analysis - High education attainment
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired .02 .03 -.03 .09 .01 .04 -.15* .09 .04 .03 .03 .08 -.01 .02 -.14*** .05 -.03 .02 -.22*** .06
Demographics
Men -.06** -.06** -.03 -.03 -.04* -.04* -.07*** -.07*** -.04*** -.05***
(ref.: women) .02 .02 .03 .03 .02 0.2 .02 .02 .01 .02
Age .06 .05 .05 .05 .01 .05 -.01 .06 .05 .04 .05 .05 .04 .03 .04 .03 .02 .03 .01 .03
Age² -.00 .00 -.00 .00 -.00 .00 .00 .00 -.00 .00 -.00 .00 -.01* .00 -.00 .00 -.00 .00 .00 .00
Children -.04 -.04 -.01 -.01 .00 .00 .03 .03 .02 .03
(ref.: none) .04 .04 .04 .04 .03 .03 .02 .03 .02 .03
Employment
Public sector -.05* -.05* -.05* -.05* -.03 -.03 -.01 -.01 -.01 -.01
(ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Self-employed -.05 -.06 -.07 -.10* -.04 -.06 -.03 -.07** -.06** -.12***
(ref.: private) .04 .05 .05 .05 .03 .04 .03 .03 .03 .04
Long-term jobs -.09*** -.09*** -.00 .02 -.05** -.04 -.00 .02 -.02 .00
(ref.: short term) .03 .03 .04 .04 .02 .03 .02 .02 .02 .02
Stable career -.01 -.01 -.03 -.03 -.05** -.05** -.00 -.00 .01 -.01
(ref.: unstable) .02 .02 .03 .03 .02 .02 .01 .01 .01 .02
Physical strains .07* .04 .08* .04 .15*** .05 .17*** .05 .06* .04 .07* .04 -.01 .02 .01 .03 -.03 .03 -.01 .03
Psycho. Strains .06* .03 .06* .03 .05 .04 .05 .04 .08** .03 .08** .03 .06*** .02 .06*** .02 .04** .02 .05** .02
Rho .10 .17 .28* .15 .02 .17 .57*** .15 .77*** .14
Hausman test .35 3.94** .02 8.05*** 11.28***
N 1565
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field : Santé et Itinéraire Professionnel survey. High-educated individuals aged 50-69 in 2010.
Table 19: Heterogeneity analysis - Highly physically demanding career
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired -.08 .05 -.08 .05 -.10* .05 -.13* .08 -.09* .05 -.15* .09 -.08*** .03 -.17** .08 -.04 .03 -.11** .06
Demographics
Men -.02 -.02 -.02 -.02 -.01 -.01 -.04** -.04** -.03* -.03*
(ref.: women) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Age .12* .07 .09 .07 .13* .07 .14** .07 .11* .06 .12 .07 -.04 .04 -.02 .04 .01 .04 .02 .04
Age² -.01* .00 -.00 .00 -.01 .00 -.01 .00 -.01* .00 -.01* .01 .00 .00 .00 .00 -.00 .00 -.00 .00
Children -.01 -.00 -.02 -.02 .06 .06 .01 .01 .01 .01
(ref.: none) .06 .06 .06 .06 .05 .05 .03 .03 .03 .04
Education
< BAC -.07 -.07 .02 .02 -.00 .00 -.02 -.02 -.03 -.03
(ref.: no dipl.) .04 .04 .04 .04 .04 .04 .02 .02 .02 .02
= BAC -.17** -.18** .03 .13* -.01 -.01 -.05 -.04 -.10** -.09**
(ref.: no dipl.) .07 .07 .08 .07 .07 .07 .04 .04 .05 .05
> BAC -.30*** -.30*** .03 .03 -.12 -.12 -.05 -.05 -.13** -.13**
(ref.: no dipl.) .08 .08 .08 .08 .08 .08 .05 .05 .06 .06
Employment
Public sector .03 .03 .01 .01 -.13** -.13** .03 .03 .05 .05
(ref.: private) .06 .06 .06 .06 .06 .06 .03 .03 .03 .03
Self-employed -.05 -.04 -.16* -.16* -.02 -.03 -.01 -.02 .02 .01
(ref.: private) .08 .08 .08 .08 .08 .08 .05 .05 .05 .05
Long-term jobs -.10** -.11** -.10** -.10** -.12*** -.11*** -.04 -.03 -.06** -.05**
(ref.: short term) .05 .05 .05 .05 .04 .04 .02 .02 .02 .02
Stable career -.01 -.02 -.05 -.04 .01 -.01 -.03 -.04* -.00 .00
(ref.: unstable) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Rho -.20 .16 .06 .17 .13 .17 .41* .25 .31* .17
Hausman test .00 .23 .64 1.47 1.81
N 1010
Table 20: Heterogeneity analysis - Lowly physically demanding career
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired .02 .02 -.07 .06 .07*** .03 .02 .06 .03 .02 -.07 .05 .00 .01 -.08*** .03 -.00 .01 -.09** .04
Demographics
Men .00 .01 .00 .00 .02 .02 -.05*** -.05*** -.03*** -.03***
(ref.: women) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01
Age .05 .03 .04 .03 .00 .03 .00 .04 .06** .03 .06* .03 .05*** .02 .05** .02 .05** .02 .04** .02
Age² -.00 .00 -.00 .00 .00 .00 .00 .00 -.01** .00 -.01* .00 -.01*** .00 -.01** .01 -.01** .00 -.01** .00
Children -.03 -.03 -.03 -.03 .00 .00 .03* .03* .03* .04*
(ref.: none) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02
Education
< BAC -.14*** -.13*** -.06* -.05* -.06** -.05** -.00 -.01 -.04*** -.04***
(ref.: no dipl.) .03 .03 .03 .03 .02 .02 .02 .01 .01 .01
= BAC -.15*** -.14*** -.07* -.07* -.05* -.05* .00 .00 -.03* -.03*
(ref.: no dipl.) .03 .03 .04 .04 .03 .03 .02 .02 .02 .02
> BAC -.27*** -.27*** -.10*** -.10*** -.10*** -.10*** -.03* -.03* -.06*** -.06***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03 .02 .02 .02 .01
Employment
Public sector -.03 -.03 -.02 -.02 -.04* -.04* .00 .00 -.00 -.00
(ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Self-employed -.07** -.09*** -.03 -.04 -.05* -.06** -.03 -.04** -.05** -.06***
(ref.: private) .03 .03 .03 .03 .03 .02 .02 .02 .02 .0
Long-term jobs -.12*** -.10*** -.08*** -.07*** -.09*** -.08*** -.01 -.01 -.03*** -.02**
(ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Stable career -.03* -.03 -.00 -.00 -.03** -.03** -.01 -.00 -.02* -.02*
(ref.: unstable) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01
Rho .26** .10 .08 .09 .23** .10 .43*** .12 .39** .15
Hausman test 2.53 .93 4.76*** 8.00*** 5.40***
N 3600
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Individuals who faced a lowly physically demanding career, aged 50-69 in
2010.
Despite the loss in accuracy of the estimations due to a significantly lower sample size, individuals who faced a physically strenuous career clearly experience the most positive effects of retiring on their health condition, as all indicators but self-assessed health status are affected (resp. , , and decreases in the probability of declaring chronic diseases, activity limitations, GAD and MDE). When it comes to individuals with lower levels of physical exposure, only mental health is improved ( and for GAD and MDE).
Table 21: Heterogeneity analysis - Highly psychosocially demanding career
Variable Poor SAH (Probit, Biprobit) Chronic diseases (Probit, Biprobit) Activity limitations (Probit, Biprobit) GAD (Probit, Biprobit) MDE (Probit, Biprobit)
Retired -.12** .05 -.21** .11 -.15*** .05 -.35*** .12 -.11** .05 -.19* .12 -.04 .03 -.34*** .10 -.02 .04 -.23** .09
Demographics
Men .01 .01 .02 .02 .02 .02 -.06*** -.06** -.03 -.03
(ref.: women) .04 .04 .04 .04 .03 .03 .02 .02 .03 .02
Age .24*** .08 .26*** .08 .23*** .08 .25*** .08 .24*** .07 .25*** .08 .01 .05 .11 .08 .10* .06 .16** .07
Age² -.01*** -.01*** -.01*** .00 .00 .00 -.01*** .00 -.01*** .00 -.01*** .00 -.00 .00 -.00 .00 -.01* .00 -.01** .00
Children -.01 -.01 .03 .03 -.02 -.02 .05 .04 .02 -.02
(ref.: none) .06 .06 .07 .06 .06 .06 .05 .05 .05 .05
Education
< BAC -.09 -.08 -.02 .01 -.00 .01 .02 .07 .01 .02
(ref.: no dipl.) .06 .06 .06 .06 .05 .06 .04 .05 .04 .04
= BAC -.19*** -.18** .01 .03 -.00 .01 .05 .08* -.01 .01
(ref.: no dipl.) .09 .07 .07 .07 .07 .07 .04 .05 .05 .05
> BAC -.32*** -.31*** -.09 -.08 -.06 -.05 -.00 .00 -.06 -.05
(ref.: no dipl.) .07 .07 .07 .07 .07 .07 .04 .05 .05 .05
Employment
Public sector -.05 -.06 -.11* -.14** -.20*** -.21*** .00 -.04 -.03 -.07
(ref.: private) .06 .06 .06 .06 .06 .06 .04 .04 .04 .04
Self-employed -.09 -.07 -.14 -.14 -.01 -.01 .02 .03 .01 -.00
(ref.: private) .10 .10 .10 .10 .09 .09 .06 .06 .03 .03
Long-term jobs -.11** -.10* -.09 -.06 -.10** -.09* -.03 -.01 -.06* -.04
(ref.: short term) .05 .05 .05 .05 .05 .05 .03 .03 .03 .03
Stable career .01 .02 -.04 -.03 .03 -.03 -.05 -.07*** .01 .02
(ref.: unstable) .04 .04 .04 .04 .04 .04 .02 .02 .03 .03
Rho .16 .21 .38* .21 .16 .23 .93*** .20 .70** .23
Hausman test .84 3.36 .54 9.89*** 6.78***
N 731
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Individuals who faced a highly psychosocially demanding career, aged 50-69
in 2010.
Table 22 : Heterogeneity analysis -Lowly psychosocially demanding career
Variable | Poor SAH (Probit / Biprobit) | Chronic diseases (Probit / Biprobit) | Activity limitations (Probit / Biprobit) | GAD (Probit / Biprobit) | MDE (Probit / Biprobit)
Retired .03 .02 -.08 .05 .07*** .02 .03 .06 .03 .02 -.09* .05 -.01 .01 -.08*** .03 -.01 .01 -.09*** .03
Demographics
Men .01 .02 -.00 .00 -.03* .03** -.04*** -.04*** -.03*** -.03***
(ref.: women) .02 .02 .02 .02 .01 .01 .01 .01 .01 .01
Age .03 .03 .03 .03 .00 .00 -.00 .03 .05* .03 .04 .03 .03* .02 .03 .02 .02 .02 .02 .02
Age² -.00 .00 -.00 .00 .00 .00 -.00 .00 -.00 .00 -.00 .00 -.01* .00 -.00 .00 -.00 .00 -.00 .00
Children -.03 -.03 -.03 -.03 .02 .02 .02 .02 .03* .03**
(ref.: none) .03 .03 .03 .03 .02 .02 .01 .02 .02 .02
Education
< BAC -.13*** -.12*** -.04 -.04 -.05** -.05** -.03** -.03** -.05*** -.05***
(ref.: no dipl.) .03 .03 .03 .03 .02 .02 .01 .01 .01 .01
= BAC -.16*** -.16*** -.05* -.05 -.07** -.07** -.02 -.02 -.05*** -.02***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02
> BAC -.29*** -.29*** -.09*** -.09*** -.13*** -.13*** -.05*** -.05*** -.07*** -.08***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03 .01 .02 .02 .02
Employment
Public sector -.03 -.02 -.01 -.01 -.03* -.03* .01 .01 .01 .01
(ref.: private) .02 .02 .02 .02 .02 .02 .01 .01 .01 .01
Self-employed -.07** -.09*** -.03 -.04 -.05* -.07** -.03 -.04** -.02 -.03*
(ref.: private) .03 .03 .03 .03 .03 .03 .02 .02 .02 .02
Long-term jobs -.11*** -.10*** -.09*** -.08*** -.10*** -.08*** -.02** -.01 -.04*** -.03***
(ref.: short term) .02 .02 .02 .02 .02 .02 .01 .01 .01 .011
Stable career -.03** -.03* -.00 -.01 -.04** -.03** -.01 -.00 -.02** -.02*
(ref.: unstable) .02 .02 .02 .01 .01 .01 .01 .01 .01 .01
Rho .20** .09 .07 .09 .26*** .09 .39*** .12 .36** .14
Hausman test 5.76*** .50 6.86*** 6.13*** 8.00***
N 3879
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Individuals who faced a lowly psychosocially demanding career, aged 50-69
in 2010.
Table 24 : Mechanisms -The effect of retirement on health-related risky behaviours
Variable | Tobacco (Probit / Biprobit) | Alcohol (Probit / Biprobit) | Overweight (Probit / Biprobit)
Retired -.04** .02 -.08** .04 .04** .02 .08** .04 .05** .02 .12** .05
Demographics
Men .08*** .09*** .26*** .26*** .19*** .19***
(ref.: women) .01 .01 .01 .01 .01 .01
Age .01 .03 .00 .03 .05** .03 .05** .03 .05* .03 .06* .03
Age² -.00 .00 -.00 .00 -.01** .00 -.01** .00 -.00 .00 -.01* .00
Children -.01 -.01 .00 .00 .01 .01
(ref.: none) .02 .02 .02 .02 .03 .03
Education
< BAC -.03 -.03 .04 .04 -.01 -.01
(ref.: no dipl.) .02 .02 .02 .02 .03 .03
= BAC -.02 -.02 .04 .03 -.07** -.07**
(ref.: no dipl.) .03 .03 .03 .03 .03 .03
> BAC -.06** -.06** .04 .03 -.15*** -.15***
(ref.: no dipl.) .03 .03 .03 .03 .03 .03
Employment
Public sector .00 .00 -.01 -.01 -.04* -.04*
(ref.: private) .02 .02 .02 .02 .02 .02
Self-employed -.01 -.01 .02 .03 -.02 -.01
(ref.: private) .03 .03 .02 .02 .03 .03
Long-term jobs -.05*** -.05** -.03* -.04** -.02 -.03
(ref.: short term) .02 .02 .02 .02 .02 .02
Stable career -.02 -.01 .01 .01 .01 .01
(ref.: unstable) .01 .01 .01 .01 .02 .02
Physical strains .03** .02 .04** .02 -.00 .02 -.00 .02 .07*** .02 .07*** .02
Psycho. strains .02 .02 .02 .02 .00 .02 .00 .02 -.02 .02 -.02 .02
Rho .07 .10 -.09 .09 -.13 .08
Hausman test 1.33 1.33 2.33
N 4610
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%.
Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
(which is partly suggested by the Plan Psychiatrie et santé mentale -2015 in France [START_REF] García-Gómez | Institutions, health shocks and labour market outcomes across Europe[END_REF]). In the long run, positive results can be expected from these frameworks, with increased productivity in the workplace, greater career stability and an improved health condition for workers, likely to result in decreased healthcare expenditures at the state level (mental health-related expenditures currently represent around 3 to 4% of GDP because of decreased productivity, increased sick leaves and unemployment, according to the International Labour Organisation). Current work intensification and increased pressures on employees are both likely to make this problem even more topical in the coming years. At the European level, a European Pact for Mental Health and Well-being was established in 2008; it promotes mental health and well-being at work as well as the need to help people suffering from mental health disorders to return to the labour market.
Chapter 2 suggests that long exposures to detrimental physical and psychosocial working conditions can have a long-term impact on health status, through increased numbers of chronic diseases. The first significant increase being found after less than 10 years of exposure implies that work strains are relevant to health degradation from the very beginning of workers' careers. The results also suggest that psychosocial risk factors are very important in the determination of workers' health. While the Compte Pénibilité in France makes a step in the right direction by allowing workers exposed to objectively measured physical strains to follow training, to work part-time or to retire early, this study advocates that workers' feelings about their working conditions are close to equivalent in magnitude in their effects on health, and thus that psychosocial strains should not be excluded from public policies even if they are intrinsically harder to quantify. At the European level, the European Pact for Mental Health and Well-being also focuses on improving work
Reading: 24.2% of workers declaring at least one mental disorder in 2006 report suffering from activity limitations against 51.5% in the unemployed population in 2006. Field: Santé et Itinéraire Professionnel survey, individuals reporting at least one mental disorder and aged 30-55 in 2006. Weighted and calibrated statistics.
Unemployed
population in
2006
Mental Health, 2006
At least one mental disorder 5,9 22,2 11,6 21,0
No mental disorder 94,1 77,8 88,4 79,0
MDE 3,4 16,7 8,3 16,4
No MDE 96,6 83,3 91,7 83,6
GAD 3,5 13,2 6,6 13,1
No GAD 96,5 86,8 93,4 86,9
Individual characteristics, 2006
30-34 17,3 11,6 16,0 15,9
35-39 21,7 10,9 20,2 15,1
40-44 20,2 16,4 19,9 16,4
45-49 20,1 19,6 21,4 18,5
50-55 20,8 41,5 22,5 34,1
In a relationship 82,1 55,0 77,6 71,5
Single 17,9 45,0 22,4 28,5
At least one child 12,2 5,1 8,3 6,1
No child 87,8 94,9 91,7 93,9
No diploma 8,0 15,1 6,7 15,3
Primary 45,8 53,6 39,1 45,8
Equivalent to French baccalaureat 18,2 14,2 19,1 17,2
Superior 26,3 16,1 33,3 18,5
Job characteristics, 2006 Agricultural sector Industrial sector Services sector Private sector Public sector Self-employed Poor perceived health No chronic disease Chronic disease No activity limitation Activity limitations 9,0 21,0 70,0 66,7 19,1 10,9 47,2 52,8 56,6 43,4 75,8 24,2 3,1 9,1 87,7 58,9 29,1 6,6 27,1 72,9 39,1 60,9 48,5 51,5
Farmer Risky behaviours, 2006 4,7 1,2
Artisans Daily smoker 7,0 31,7 4,3 42,9
Manager Not a daily smoker 16,4 68,3 11,1 57,1
Intermediate Drinker at risk 24,1 29,2 22,2 29,6
Employee Not a drinker at risk 12,7 70,8 45,1 70,4
Blue collar Overweight 29,8 34,8 9,2 48,3
Part-time job Normal weight or underweight 3,0 65,2 30,7 51,7
Full time job Professional route 97,0 69,3
Majority of employment in long jobs General Health, 2006 Good perceived health Most of the professional route out of job 82,1 48,9 73,9 26,1 77,8 29,0 71,0 61,2
Poor perceived health Stable career path 17,9 51,1 66,7 22,2 44,0 38,8
No chronic disease Unstable career path 75,3 56,6 33,3 71,9 56,0 60,3
Chronic disease 24,7 43,4 28,1 39,7
No activity limitation 90,7 59,8 88,5 75,1
Activity limitations 9,3 40,2 11,5 24,9
Risky behaviours, 2006
Daily smoker 27,5 47,8 23,6 24,5
Not a daily smoker 72,5 52,2 76,4 75,5
Drinker at risk 46,2 42,2 13,6 13,1
Not a drinker at risk 53,8 57,8 86,4 86,9
Overweight 51,3 46,7 28,5 41,6
Normal weight or underweight 48,7 53,3 71,5 58,4
Professional route
Table 27 : Attrition analysis -panel population (interviewed in 2006 and 2010) vs. attrition population (interviewed in 2006 and not in 2010)
Field: Santé et Itinéraire Professionnel survey, employed individuals aged 30-55 in 2006. Weighted and calibrated statistics.
Men (%) Women (%)
Panel pop. Attrition pop. Panel pop. Attrition pop.
Mental Health, 2006
At least one mental disorder 5,9 5,9 11,6 13,5
No mental disorder 94,1 94,1 88,4 86,5
MDE 3,4 4,4 8,3 9,0
No MDE 96,6 95,2 91,7 91,0
GAD 3,5 3,7 6,6 6,9
No GAD 96,5 96,3 93,4 93,1
Individual characteristics, 2006
30-34 17,3 18,9 16,0 15,3
35-39 21,7 21,5 20,2 23,5
40-44 20,2 21,3 19,9 21,6
45-49 20,1 17,8 21,4 18,6
50-55 20,8 20,5 22,5 21,0
In a relationship 82,1 71,7 77,6 61,8
Single 17,9 28,3 22,4 38,2
At least one child 12,2 23,8 8,3 18,4
No child 87,8 86,2 91,7 81,6
No diploma 8,0 8,0 6,7 7,8
Primary 45,8 46,7 39,1 40,4
Equivalent to French bac. 18,2 14,8 19,1 21,0
Superior 26,3 29,1 33,3 29,4
Job characteristics, 2006
Agricultural sector 9,0 4,8 3,1 3,5
Industrial sector 21,0 16,6 9,1 8,2
Services sector 70,0 78,6 87,7 88,3
Private sector 66,7 65,2 58,9 60,2
Public sector 19,1 20,7 29,1 28,4
Self-employed 10,9 10,0 6,6 5,9
Farmer 4,7 1,4 1,2 1,2
Artisans 7,0 9,6 4,3 4,3
Manager 16,4 16,8 11,1 12,0
Intermediate 24,1 20,7 22,2 22,9
Employee 12,7 12,9 45,1 44,7
Blue collar 29,8 32,4 9,2 8,0
Part-time job 3,0 4,1 30,7 25,1
Full time job 97,0 95,9 69,3 75,0
General Health, 2006
Good perceived health 82,1 79,7 77,8 74,7
Poor perceived health 17,9 20,3 22,2 25,3
No chronic disease 75,3 79,0 71,9 73,5
Chronic disease 24,7 21,1 28,1 26,5
No activity limitation 90,7 88,5 88,5 88,2
Activity limitations 9,3 11,5 11,5 11,8
Risky behaviours, 2006
Daily smoker 27,5 34,9 23,6 30,1
Not a daily smoker 72,5 65,1 76,4 69,9
Drinker at risk 46,2 44,0 13,6 14,1
Not a drinker at risk 53,8 36,0 86,4 85,9
Overweight 51,3 48,6 28,5 21,3
Normal weight or underweight 48,7 51,4 71,5 78,7
Professional route
Maj. of empl. in long jobs 83,5 69,9 71,7 69,4
Most of the prof. route out of job 16,5 30,1 28,3 30,6
Stable career path 74,3 76,0 68,9 67,6
Unstable career path 25,7 24,0 31,1 32,5
Table 28 : Attrition Analysis -panel population vs. attrition population according to mental health and employment status in 2006
Interpretation: Among individuals declaring in 2006 having at least one mental disorder, 18.6% were not re-interviewed in 2010, and 81.4% were. Among individuals not reporting any mental disorders in 2006, 16.9% were not re-interviewed. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics.
Attrition (%) Panel (%)
Table 29 : General descriptive statistics
Men (%): Prevalence, Employment probability (2010) | Women (%): Prevalence, Employment probability (2010)
Field:
Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics.
Table 30 : Employment status in 2006, according to mental health condition
Reading: 68.6% of men with at least one mental disorder in 2006 are employed at the same date, against 64.5% of women in the same situation. Field: Santé et Itinéraire Professionnel survey, individuals aged 30-55 in 2006. Weighted and calibrated statistics.
Men (%) Women (%)
Employed Unemployed Employed Unemployed
Mental Health, 2006
At least one mental disorder 68,6 31,4 64,5 35,5
No mental disorder 90,9 9,1 77,0 23,0
Table 34 : Mental Health estimations in 2006
Uniprobit (Men) Biprobit(Men) Uniprobit (Women) Biprobit (Women)
Coeff. Std. err. Coeff. Std. err. Coeff. Std. err. Coeff. Std. err.
Ident. variables (men)
Violence during childhood .08** .04 .09** .05
Many marital breakdowns .02** .01 .03** .01
Ident. variables (women)
Violence during childhood .08*** .03 .07*** .02
Raised by a single parent .07*** .02 .08*** .02
Ind. characteristics, 2006
Age (ref.: 30-35 years-old)
-35-39 .05** .02 .05** .02 -.03 .03 -.03 .03
-40-44 .01 .02 .01 .02 .02 .02 .02 .02
-45-49 .02 .02 .02 .02 .00 .03 .00 .03
-50-55 .02 .02 .02 .02 .01 .03 .01 .03
In a relationship (ref.: Single) -.05*** .01 -.05*** .01 -.03** .01 -.03** .01
Children (ref: None) .02 .02 .03 .02 .01 .03 .02 .03
Education (ref.: French bac.)
-No diploma -.02 .03 -.02 .03 -.03 .04 -.03 .04
-Primary .00 .02 -.00 .01 .01 .02 .01 .02
-Superior -.00 .02 -.01 .02 .00 .02 .00 .02
Employment in 2006
Act. sector (ref.: Industrial)
-Agricultural .01 .03 .01 .02 -.03 .05 -.02 .05
-Services .02 .01 .02 .01 -.03 .02 -.03 .02
Activity status (ref.: Private)
-Public sector -.00 .01 -.01 .01 -.04** .02 -.03** .02
-Self-employed .05** .02 .04* .02 -.04 .04 -.04 .04
Prof. cat. (ref.: Blue collar)
-Farmers -.08* .05 -.08* .05 .05 .07 .05 .07
-Artisans -.02 .03 -.02 .03 .07 .05 .07 .05
-Managers .02 .02 .02 .02 .01 .03 .00 .03
-Intermediate -.00 .01 -.00 .01 -.01 .03 -.01 .03
-Employees -.03 .02 -.03 .02 .01 .02 .01 .02
Part time (ref.: Full-time) -.03 .03 -.03 .03 .02* .01 .02 .01
General health status in 2006
Poor perceived health status .09*** .01 .09*** .01 .14*** .02 .14*** .02
Chronic diseases .00 .01 .00 .01 .02 .02 .02 .02
Activity limitations .01 .02 .01 .02 .03* .02 .03 .02
Risky behaviours in 2006
Daily smoker .00 .01 .01 .01 .02 .02 .03 .02
Risky alcohol consumption .01 .01 .01 .01 .03 .02 .03 .02
Overweight -.01 .02 -.01 .01 -.02 .02 .02 .02
Professional route
Maj. of empl. in long jobs -.00 .02 .00 .02 -.01 .02 -.00 .02
Stable career path -.01 .01 -.01 .01 .01 .02 .01 .02
N 1876 1860 2143 1982
Table 36 : Unmatched difference-in-differences results ( to ), psychosocial treatment
Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N
Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.)
: being
exposed to at least 12 years of single exposures or 6 years of multiple exposures Men
First health period .018 .035 . 004 .031 .316
Second health period .014 .015 .034 .037 .020 .033 .371 1734/3586
Third health period .035 .040 .021 .037 .396
Women
First health period .090* .048 .058 .043 .445
Second health period .032 .020 .098** .049 .066 .040 .497 1554/3426
Third health period .102** .052 .070 .044 .522
:
being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men
First health period .086* .039 .080** .037 .442
Second health period .006 .015 .094** .041 .088** .039 .513 1690/3586
Third health period .141*** .045 .135*** .043 .641
Women
First health period .091* .050 .066 .044 .567
Second health period .025 .020 .102* .053 .077 .031 .600 1480/3426
Third health period .105** .057 .080 .048 .674
:
being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men
First health period .101** .045 .097** .043 .613
Second health period .004 .015 .132*** .047 .128*** .045 .713 1644/3586
Third health period .154*** .050 .150*** .048 .806
Women
First health period .134** .063 .107* .061 .769
Second health period .027 .020 .147** .069 .120** .050 .876 1410/3426
Third health period .160*** .057 .133** .055 .974
:
being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men
First health period .126*** .049 .116** .046 .700
Second health period .010 .016 .154*** .050 .144*** .048 .785 1574/3586
Third health period .186*** .054 .176*** .052 .918
Women
First health period .165*** .060 .145** .066 .928
Second health period .020 .020 .194*** .065 .174*** .059 1.021 1318/3426
Third health period .209*** .071 .189*** .054 1.115
:
being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men
Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups respectively before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e. the difference between follow-up and baseline differences). Field:Population aged 42-74 in 2006 and present from to . Unmatched sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
First health period .122*** .050 .111** .047 .704
Second health period .011 .016 .154*** .052 .143*** .049 .796 1412/3586
Third health period .181*** .056 .170*** .053 .923
Women
First health period .196*** .062 .182*** .068 .944
Second health period .014 .020 .219*** .066 .205*** .061 1.049 1208/3426
Third health period .224*** .073 .210*** .056 1.148
Table 37 : Unmatched difference-in-differences results ( to ), global treatment
Treatment Baseline Diff. Follow-up Diff. Diff.-in-Diff. Mean chronic N
Sex Coeff. Std. Err. Coeff. Std. Err. Coeff. Std. Err. diseases in treat. (treat./tot.)
: being
exposed to at least 12 years of single exposures or 6 years of multiple exposures Men
First health period -.012 .050 .027 .047 .390
Second health period -.039** .02 -.007 .035 .032 .041 .434 2796/3586
Third health period -.006 .047 .033 .044 .464
Women
First health period .045 .051 .036 .044 .427
Second health period .007 .02 .051 .046 .044 .038 .481 2190/3426
Third health period .052 .048 .045 .041 .517
:
being exposed to at least 14 years of single exposures or 7 years of multiple exposures Men
First health period .000 .048 .041 .045 .470
Second health period -.041** .019 .017 .050 .058 .048 .538 2770/3586
Third health period .031 .053 .072 .051 .643
Women
First health period .075 .051 .073* .043 .569
Second health period .002 .020 .082* .047 .080** .039 .614 2100/3426
Third health period .091* .055 .089** .041 .705
:
being exposed to at least 16 years of single exposures or 8 years of multiple exposures Men
First health period .035 .053 .078 .050 .644
Second health period -.043** .019 .058 .055 .101* .053 .729 2720/3586
Third health period .088 .057 .131** .056 .849
Women
First health period .101* .064 .100* .058 .764
Second health period .001 .020 .120** .053 .121** .047 .862 2046/3426
Third health period .125** .058 .124** .052 .971
:
being exposed to at least 18 years of single exposures or 9 years of multiple exposures Men
First health period .085 .056 .122** .053 .749
Second health period -.037** .018 .094* .057 .131** .055 .823 2638/3586
Third health period .132** .061 .169*** .059 .977
Women
First health period .106* .067 .109* .062 .869
Second health period -.003 .020 .125** .061 .128** .055 .972 1960/3426
Third health period .133*** .055 .136*** .050 1.063
:
being exposed to at least 20 years of single exposures or 10 years of multiple exposures Men
Interpretation: ***: significant at the 1% level, **: significant at the 5% level, *: significant at the 10% level. Standard errors in italics. The baseline and follow-up columns show the results for the first differences between the treated and control groups respectively before and after the treatment. The diff.-in-diff. column shows the results for the second differences (i.e. the difference between follow-up and baseline differences). Field:Population aged 42-74 in 2006 and present from to . Unmatched sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
First health period .071 .054 .096* .052 .746
Second health period -.025 .017 .076 .056 .101* .054 .817 2502/3586
Third health period .103* .060 .128** .058 .965
Women
First health period .140** .067 .146** .063 .897
Second health period -.006 .020 .157*** .060 .163*** .056 1.007 1826/3426
Third health period .157*** .055 .163*** .050 1.101
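As a reminder of how the baseline, follow-up and diff.-in-diff. columns of Tables 36 and 37 are constructed, the unmatched double difference compares treated and control means before and after treatment. The display below uses our own notation and is the textbook construction, not the exact specification of the chapter (which also relies on matching in later sections):
$$\widehat{\mathrm{DiD}} = \left(\bar{Y}^{\,\mathrm{treated}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{control}}_{\mathrm{post}}\right) - \left(\bar{Y}^{\,\mathrm{treated}}_{\mathrm{pre}} - \bar{Y}^{\,\mathrm{control}}_{\mathrm{pre}}\right).$$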
Table 42 : Working conditions typology, by gender in 2006
Interpretation: ***: difference significant at the 1% level, **: difference significant at the 5% level, *: difference significant at the 10% level. 70% of night workers are men and 30% are women. The difference in proportions is significant at the 1% level. Field: General Santé et Itinéraire Professionnel survey sample. Source: Santé et Itinéraire Professionnel survey (Sip), wave 2006.
Variable Men (%) Gender Women (%) Difference Men/Women (Chi² test)
Working conditions
Night work 70.36 29.64 ***
Repetitive work 49.90 50.10
Heavy load 51.10 48.90 ***
Hazardous materials 61.85 38.15 ***
Cannot use skills 46.29 53.71
Work under pressure 52.25 47.75 ***
Tensions with public 44.02 55.98 ***
Lack of recognition 47.16 52.84
Cannot conciliate private and work lives 49.21 50.79
Bad relationships with colleagues 47.83 52.17
Abstract - Parcours Professionnel et de Santé (Career Paths and Health). The aim of this thesis is to disentangle some of the many interrelations between work, employment and health status, mostly from a longitudinal perspective. Establishing causal relationships between these three dynamics is not easy, insofar as numerous statistical biases generally affect the estimations, in particular selection biases as well as the three classical sources of endogeneity. The first chapter of this thesis studies the effect of mental health on workers' ability to remain in employment. The second chapter explores possible sources of heterogeneity in the role of working conditions on health, by focusing on the effects of early-career exposures of varying intensity and nature on chronic diseases. Finally, the third chapter deals with the end of the career and the retirement decision. The French panel data of the Santé et Itinéraire Professionnel (Sip) survey, which counts more than 13,000 individuals, are used in this thesis. Several methodologies are implemented in this work in order to take endogeneity biases into account, notably instrumental-variable methods as well as public policy evaluation methods (matching and difference-in-differences). The results confirm that employment, health and work are closely linked, with, respectively, proven consequences of health shocks on the professional trajectory and, conversely, a predominant role of work on health.
Keywords: work; employment; working conditions; retirement; general health; mental health; depression; anxiety; chronic diseases; childhood; endogeneity; instrumental variables; matching; panel data methods; difference-in-differences; France.
The Hausman statistic has been calculated as follows: , followed by a Chi² test.
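For reference, the textbook Hausman-type contrast between an estimator β̂₁ that is consistent under both hypotheses and an estimator β̂₂ that is efficient under the null can be written as follows; whether this exact form was used here cannot be inferred from this extract:
$$H = (\hat{\beta}_1 - \hat{\beta}_2)^{\top}\left[\widehat{\mathrm{Var}}(\hat{\beta}_1) - \widehat{\mathrm{Var}}(\hat{\beta}_2)\right]^{-1}(\hat{\beta}_1 - \hat{\beta}_2),$$
which is then compared with a Chi² critical value.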
Reading: Marginal effects, standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field:Santé et Itinéraire Professionnel survey, women aged 30-55 in employment in 2006.
Sensitivity tests were performed by estimating the models on the 25-50, 30-50 and 25-55 years-old groups. These tests, not presented here, confirm our results in all cases.
In the male population suffering from at least one mental disorder in 2006, 68.6% are employed against 90.9% in the nonaffected population. Among women, the proportions were 64.5%
and 77.0% respectively (Table30).
Directorate for Research, Studies, Assessment and Statistics (Drees) -Ministry of Health.
Directorate for Research, Studies and Statistics (Dares) -Ministry of Labour.
For a technical note on attrition management and data calibration in the Sip survey, see De Riccardis (2012).
The Hausman test has been calculated as follows: , followed by a Chi² test.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field:Santé et Itinéraire Professionnel survey. Men aged 50-69 in 2010.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field :Santé et Itinéraire Professionnel survey. Low-educated individuals aged 50-69 in 2010.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals who faced a highly physically demanding career, aged 50-69 in 2010.
12,2 19,7 20,6 22,3 25,2 72,3 27,7 12,2 87,8 5,2 49,3 18,1 26,3 19,8 16,5 15,2 15,6 32,9 59,1 40,9 8,1 91,9 18,2 47,9 13,7 14,6 General Health, 2006 Good perceived health
Reading: ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field:Santé et Itinéraire Professionnel survey, individuals aged 30-55 in employment in 2006.
Reading: Marginal effects. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
Reading: Coefficients. Standard errors in italics. ***: significant at 1%, **: significant at 5%, *: significant at 10%. Field: Santé et Itinéraire Professionnel survey. Individuals aged 50-69 in 2010.
Acknowledgements (mostly in French)
Acknowledgements
The authors wish to thank for their comments on an earlier version of this article, Thibault Brodaty (Érudite, Upec), Laetitia Challe (Érudite, Upem), Roméo Fontaine (Leg, University of Burgundy), Yannick L'Horty (Érudite, Upem), Aurélia Tison (Aix Marseille University) and Yann Videau (Érudite, Upec). The authors also wish to thank Caroline Berchet (Ined), Marc Collet (Drees), Lucie Gonzalez (Haut Conseil de la Famille), Sandrine Juin (Érudite, Upec and Ined) and Nicolas de Riccardis (Drees) for responding to the requests on an earlier version. The authors obviously remain solely responsible for inaccuracies or limitations of their work.
Acknowledgements
The author would like to especially thank Thomas Barnay (Upec Érudite) for his constant help and advice about this study. I thank Pierre Blanchard (Upec Érudite), Emmanuelle Cambois (Ined), Eve Caroli (LEDa-LEGOS, Paris-Dauphine University) and Emmanuel Duguet (Upec Érudite) for their technical help. I am thankful to Søren Rud Kristensen (Manchester Centre for Health Economics), Renaud Legal (Drees), Maarten Lindeboom (VU University Amsterdam), Luke Munford (Manchester Centre for Health Economics), Catherine Pollak (Drees) and Matthew Sutton (Manchester Centre for Health Economics) for reviewing earlier versions of the paper. I also wish to thank Patrick Domingues (Upec Érudite), Sandrine Juin (Ined, Upec Érudite), François Legendre, Dorian Verboux and Yann Videau (Upec Érudite).
Acknowledgements
The authors would like to thank Pierre Blanchard (Upec Érudite), Eve Caroli (LEDa-LEGOS, Paris-Dauphine University), Emmanuel Duguet (Upec Érudite), Sandrine Juin (Ined, Upec Érudite), François Legendre and Yann Videau (Upec Érudite) for their useful advice. They also thank Annaig-Charlotte Pédrant (IREGE, Savoie Mont Blanc University) and Pierre-Jean Messe (GAINS, Le Mans University) for discussing the paper during a conference.
The French version of this chapter has been published as: Barnay T. and Defebvre É. (2016): « L'influence de la santé mentale sur le maintien en emploi », Économie et Statistique.
Chapter II: Work strains and chronic diseases
HARDER, BETTER, FASTER... YET STRONGER? WORKING CONDITIONS AND SELF-DECLARATION OF CHRONIC DISEASES
Chapter III: Health status after retirement
RETIRED, AT LAST? THE ROLE OF RETIREMENT ON HEALTH STATUS IN FRANCE
This chapter is co-written with Thomas BARNAY (Paris-Est University).
Appendix 2: Generalized Anxiety Disorder (GAD)
Daily activities
GADs are identified using a similar filter-question system.
Three questions are asked:
-Over the past six months, have you felt like you were too much concerned about this and that, have you felt overly concerned, worried, anxious about life's everyday problems, at work/at school, at home or about your relatives? Yes/No
In case of positive answer:
For a person to be classified as suffering from generalized anxiety disorder, he/she must respond positively to the three filter questions and then report at least three of the six symptoms described later. This protocol is consistent with that used by the DSM-IV.
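The screening rule described above can be summarised programmatically; the sketch below is only an illustration of the logic (three positive filter answers, then at least three of the six symptoms), with variable names of our own, and is not the survey's actual coding script:

```python
def meets_gad_criteria(filter_answers, symptoms):
    """filter_answers: the three yes/no filter questions; symptoms: the six symptom items."""
    return all(filter_answers) and sum(symptoms) >= 3

# Illustrative call: all three filters positive, three symptoms reported
print(meets_gad_criteria([True, True, True], [1, 0, 1, 1, 0, 0]))  # True
```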
Appendix 3: Initial selection of the sample in 2006
This study does not claim to measure the impact of mental health on employment but tries to establish the causal effect of mental health on job retention. The unemployed population in 2006 is therefore discarded, even though their reported prevalence of anxiety disorders and depressive episodes is far superior to those in employment (22% vs. 6% in men and 21% vs.
12% in women; see Table 25 and Table 26, Appendix 6). In addition, such a study working on the whole sample (including the unemployed) would suffer from significant methodological biases (reverse causality and direct simultaneity). A
Appendix 4: Attrition between the two waves
Attrition between the 2006 and 2010 waves can induce the selection of a population with specific characteristics. There are no significant differences in the demographic, socioeconomic and health characteristics of our sample between respondents and non-respondents to the 2010 survey on the basis of their first-wave characteristics (see Table 27 and Table 28, Appendix 6). However, differences in the response rate to the 2010 survey exist according to perceived health status, activity limitations, the declaration of major depressive episodes and the declaration of motion or sleep disorders [START_REF] De Riccardis | Traitements de la non-réponse et calages pour l'enquête santé et itinéraire professionnel de[END_REF].
While the questionnaire on mental disorders makes full use of the nomenclature proposed by the Mini, it has no diagnostic value. It can rather be seen as a diagnostic interview conducted by an interviewer, based on all the symptoms described by the DSM-IV and Cim-10, and it must not lead to a medical diagnosis [START_REF] Bahu | Le choix d'indicateurs de santé : l'exemple de l'enquête SIP[END_REF]. However, according to the results of a qualitative post-survey interview about some of the indicators used in the Sip survey, including health indicators [START_REF] Guiho-Bailly | Rapport subjectif au travail : sens des trajets professionnels et construction de la santé -Rapport final[END_REF], the over-reporting phenomenon (false positives) of mental disorders in the survey is not widespread, while in contrast under-reporting (false negatives) may occur more often. In the study of the impact of mental health on job retention, this would lead to an underestimation of the effect of mental health.
Appendix 6: Descriptive statistics
Appendix 8: Detailed description of the parameters
The nine thresholds are designed according to increasing levels of exposures to detrimental working conditions: a 2-year step for single exposures from one threshold to another. Polyexposure durations are half that of single ones, based on the requirements of the 2015 French law requiring that past professional exposures to detrimental working conditions be taken into account in pension calculations (in which simultaneous strains count twice as much as single exposures - [START_REF] Sirugue | Compte personnel de prévention de la pénibilité : propositions pour un dispositif plus simple, plus sécurisé et mieux articulé avec la prévention[END_REF]. The durations of the observation periods for working conditions are set arbitrarily to allow some time for reaching the treatment thresholds: It represents three halves of the maximum duration of exposure needed to be treated, i.e., three halves of the single exposure threshold). This way, individuals are able to reach the treatment even though their exposure years are not necessarily a continuum. The minimum duration at work during the observation period is set as the minimum exposure threshold to be treated, i.e., it equals the poly-exposure threshold. As individuals not meeting this minimum requirement are not in capacity to reach the treatment (because the bare minimum to do so is to work and be exposed enough to reach the poly exposure threshold), they are dropped from the analysis for comparability purposes. The length of observation periods for chronic diseases is set to two years in order to avoid choosing overly specific singletons (some specific isolated years may not perfectly reflect individuals' health condition) while preserving sample sizes (because the longer the intervals, the greater the sample size losses).
The estimations are performed on these nine thresholds using the same sample of individuals:
I keep only individuals existing in all nine of them for comparison purposes. The sample is thus based on the most demanding threshold, . This means that, in this setup, individuals must be observed for a minimal duration of 38 years (2 years before labour market entry for baseline health status, plus 30 years of observation and 6 years of follow-up health status periods, as well as a minimum of 10 years in the labour market -see Figure V). In other words, with the date of the survey being 2006, this means that the retained individuals (6,700) are those who entered the labour market before 1970 (and existing in the dataset before 1968), inducing heavily reduced sample sizes in comparison to the 13,000 starting individuals.
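Since a year of simultaneous strains counts twice as much as a year of single exposure, one natural way to encode the treatment rule is as a weighted exposure score; the sketch below uses our own naming and is only an illustration of the rule, not the estimation code of the chapter (whether mixed exposure histories are combined exactly this way is an assumption on our part):

```python
def is_treated(single_years, poly_years, single_threshold):
    """Treatment indicator: a year of simultaneous strains counts as two years of single strain."""
    return single_years + 2 * poly_years >= single_threshold

# Example for the first threshold (12 years of single exposure or 6 years of poly-exposure)
print(is_treated(single_years=8, poly_years=3, single_threshold=12))  # True: 8 + 2*3 >= 12
```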
Appendix 9: Naive unmatched difference-in-differences models Appendix 11: Specification test
Appendix 13: Exploratory analysis on health habits
Appendix 14: Exploratory analysis on gender-gaps
Appendix 16: The Mini European Health Module
The Mini European health module is intended to give a uniform measure of health status in European countries by asking a series of three questions apprehending perceived health, the existence of chronic diseases and activity limitations.
It is based on Blaxter's model (1989) which identifies three semantic approaches to health:
-The subjective model based on the overall perception of the individual, "How is your overall health? Very Good/Good/Average/Bad/Very bad"; -The medical model, based on disease reporting, "Do you currently have one or more chronic disease(s)? Yes/No"; -The functional model which identifies difficulties in performing frequent activities: "Are you limited for six months because of a health problem in activities people usually do? Yes/No".
Appendix 17: Major Depressive Episodes (MDE)
MDEs are identified in two stages. First, two filter questions are asked:
- Over the past two weeks, have you felt particularly sad, depressed, mostly during the day, and this almost every day? Yes/No
- Over the past two weeks, have you had, almost all the time, the feeling of having no interest in anything, of having lost interest or pleasure in things that you usually like? Yes/No
Then, if one of the two filter questions receives a positive response, a third question is asked in order to identify the specific symptoms: Over the past two weeks, when you felt depressed and/or uninterested in most things, have you experienced any of the following situations? Check as soon as the answer is "yes"; several positive responses are possible. For a person to suffer from generalized anxiety disorder, he/she must respond positively to the three filter questions, then three out of six symptoms described later. This protocol is consistent with that used by the DSM-IV.
Appendix 20: Civil servants
Appendix 19: Main auxiliary models
01760968 | en | [ "spi", "spi.signal" ] | 2024/03/05 22:32:13 | 2019 | https://hal.science/hal-01760968/file/manuscript_FK.pdf | Fangchen Feng
Matthieu Kowalski
Underdetermined Reverberant Blind Source Separation: Sparse Approaches for Multiplicative and Convolutive Narrowband Approximation
We consider the problem of blind source separation for underdetermined convolutive mixtures. Based on the multiplicative narrowband approximation in the time-frequency domain with the help of Short-Time-Fourier-Transform (STFT) and the sparse representation of the source signals, we formulate the separation problem into an optimization framework. This framework is then generalized based on the recently investigated convolutive narrowband approximation and the statistics of the room impulse response. Algorithms with convergence proof are then employed to solve the proposed optimization problems. The evaluation of the proposed frameworks and algorithms for synthesized and live recorded mixtures are illustrated. The proposed approaches are also tested for mixtures with input noise. Numerical evaluations show the advantages of the proposed methods.
Fangchen Feng is with Laboratoire Astroparticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, Sorbonne Paris Cité, 75205, Paris, France (email: fangchen.feng@apc.in2p3.fr). Matthieu Kowalski is with Laboratoire des signaux et systèmes, CNRS, CentraleSupélec, Université Paris-Sud, Université Paris-Saclay, 91192, Gif-sur-Yvette, France (email: matthieu.kowalski@l2s.centralesupelec.fr).
I. INTRODUCTION
A. Time model
Blind source separation (BSS) recovers source signals from a number of observed mixtures without knowing the mixing system. Separation of the mixed sounds has several applications in the analysis, editing, and manipulation of audio data [START_REF] Comon | Handbook of Blind Source Separation: Independent component analysis and applications[END_REF]. In the real-world scenario, convolutive mixture model is considered to take the room echo and the reverberation effect into account:
$$x_m(t) = \sum_{n=1}^{N} a_{mn}(t) * s_n(t) + n_m(t), \qquad (1)$$
where $s_n$ is the $n$-th source and $x_m$ is the $m$-th mixture. $N$ and $M$ are the numbers of sources and microphones, respectively. $a_{mn}(t)$ is the room impulse response (RIR) from the $n$-th source to the $m$-th microphone, and $n_m(t)$ is the additive white Gaussian noise at the $m$-th microphone. We also denote by $s^{img}_{mn}(t) = a_{mn}(t) * s_n(t)$ the image of the $n$-th source at the $m$-th microphone.
B. Multiplicative narrowband approximation
The source separation for convolutive mixtures is usually tackled in the time-frequency domain with the help of the STFT (Short-Time Fourier Transform) [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF]. With the narrowband assumption, the separation can be performed in each frequency band [START_REF] Kellermann | Wideband algorithms versus narrowband algorithms for adaptive filtering in the DFT domain[END_REF]. Because of the permutation ambiguity in each frequency band, the separation is then followed by a permutation alignment step to regroup the estimated frequency components that belong to the same source [START_REF] Sawada | Measuring dependence of binwise separated signals for permutation alignment in frequency-domain bss[END_REF]. In this paper, we concentrate on the separation step.
The multiplicative narrowband approximation [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF] deals with the convolutive mixtures in each frequency using the complex-valued multiplication in the following vector form:
$$\tilde{x}(f,\tau) = \sum_{n=1}^{N} \tilde{a}_n(f)\,\tilde{s}_n(f,\tau) + \tilde{n}(f,\tau), \qquad (2)$$
where $\tilde{x}(f,\tau) = [\tilde{x}_1(f,\tau), \dots, \tilde{x}_M(f,\tau)]^T$ and $\tilde{s}_n(f,\tau)$ are respectively the analysis STFT coefficients of the observations and of the $n$-th source signal, $\tilde{a}_n(f) = [\tilde{a}_{1n}(f), \dots, \tilde{a}_{Mn}(f)]^T$ is a vector that contains the Fourier transform of the RIRs associated with the $n$-th source, and $\tilde{n}(f,\tau) = [\tilde{n}_1(f,\tau), \dots, \tilde{n}_M(f,\tau)]^T$ contains not only the analysis STFT coefficients of the noise, but also the error term due to the approximation.
The formulation [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF] approximates the convolutive mixtures by an instantaneous mixture in each frequency. This approximation therefore largely reduces the complexity of the problem and is valid when the RIR length is less than the STFT window length. The sparsity assumption is widely used for the source separation problem [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF], [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF], [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF]. Based on the model [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF] and by supposing that only one source is active or dominant in each time-frequency bin $(f,\tau)$, the authors of [START_REF] Winter | MAP-based underdetermined blind source separation of convolutive mixtures by hierarchical clustering and 1 -norm minimization[END_REF] proposed to estimate the mixing matrix in each frequency by clustering, and then to estimate the sources in a maximum a posteriori (MAP) sense. This method was further improved in [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], where the authors proposed to use a soft masking technique to perform the separation. The idea is to classify each time-frequency bin of the observation $\tilde{x}(f,\tau)$ into $N$ classes, where $N$ is the number of sources. Based on a complex-valued Gaussian generative model for the source signals, they inferred a bin-wise a posteriori probability $P(C_n \mid \tilde{x}(f,\tau))$, which represents the probability that the vector $\tilde{x}(f,\tau)$ belongs to the $n$-th class $C_n$. This method obtains good separation results for speech signals, but only in low-reverberation scenarios [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. The performance of these methods is limited by the multiplicative approximation, whose approximation error increases rapidly as the reverberation time becomes long [START_REF] Kowalski | Beyond the narrowband approximation: Wideband convex methods for under-determined reverberant audio source separation[END_REF]. Moreover, the disjointness of the sources in the time-frequency domain is not realistic [START_REF] Yilmaz | Blind separation of speech mixtures via time-frequency masking[END_REF].
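To fix ideas, the narrowband model (2) can be simulated directly in the STFT domain; the following sketch is not taken from the cited works (shapes, variable names and the synthetic data are ours) and simply builds observations as one complex mixing matrix per frequency applied to sparse source coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 3            # microphones, sources (underdetermined: N > M)
F, T = 257, 100        # frequency bins, time frames

# Synthetic (sparse) STFT coefficients of the sources
S = rng.standard_normal((N, F, T)) + 1j * rng.standard_normal((N, F, T))
S *= rng.random((N, F, T)) < 0.3          # crude sparsity in the T-F plane

# One complex mixing matrix per frequency (Fourier transforms of the RIRs)
A = rng.standard_normal((F, M, N)) + 1j * rng.standard_normal((F, M, N))
A /= np.linalg.norm(A, axis=1, keepdims=True)      # unit-norm columns

# Multiplicative narrowband model: x(f, t) = A(f) @ s(f, t) + noise
X = np.einsum('fmn,nft->mft', A, S)
X += 0.01 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))
print(X.shape)                                      # (M, F, T)
```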
C. Beyond the multiplicative narrowband model
A generalization of the multiplicative approximation is proposed in [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] by considering the spatial covariance matrix of the source signals. By modeling the sources STFT coefficients as a phase-invariant multivariate distribution, the authors inferred that the covariance matrix of the STFT coefficients of the $n$-th source image $\tilde{s}^{img}_n = [\tilde{s}^{img}_{1n}, \tilde{s}^{img}_{2n}, \dots, \tilde{s}^{img}_{Mn}]^T$ can be factorized as:
$$R_{\tilde{s}^{img}_n}(f,\tau) = v_n(f,\tau)\, R_n(f), \qquad (3)$$
where $v_n(f,\tau)$ are scalar time-varying variances of the $n$-th source at different frequencies and $R_n(f)$ are time-invariant spatial covariance matrices encoding the source spatial position and spatial spread [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF]. The multiplicative approximation forces the spatial covariance matrix to be of rank one, and the authors of [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] exploited a generalization by assuming that the spatial covariance matrix is of full rank; they showed that the new assumption better models the mixing process because of the increased flexibility. However, as we show in this paper by experiments, the separation performance of this full-rank model is still limited in strong reverberation scenarios. Moreover, since neither the bin-wise method [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF] nor the full-rank approach [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] takes the error term $\tilde{n}$ into consideration, they are sensitive to additional noise.
Recently, the authors of [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF] investigated the convolutive narrowband approximation for oracle source separation of convolutive mixtures (the mixing system is known). They showed that the convolutive approximation better suits the original mixing process, especially in strong reverberation scenarios. In this paper, we investigate the convolutive narrowband approximation as a generalization of the multiplicative approximation in the fully blind setting (the mixing system is unknown).
The contribution of the paper is threefold. First, based on the multiplicative narrowband approximation, we formulate the separation in each frequency as an optimization problem with an $\ell_1$-norm penalty to exploit sparsity. The proposed optimization formulation is then generalized based on the statistics of the RIR [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF] and the convolutive narrowband approximation model [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]. Finally, we propose to solve the obtained optimization problems with the PALM (Proximal Alternating Linearized Minimization) [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] and BC-VMFB (Block Coordinate Variable Metric Forward-Backward) [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] algorithms, which have convergence guarantees.
The rest of the article is organized as follows. We propose the optimization framework based on the multiplicative approximation with an $\ell_1$-norm penalty and present the corresponding algorithm in Section II. The optimization framework is then generalized in Section III based on the statistics of the RIR and the convolutive approximation; the associated algorithm is also presented. We compare the separation performance achieved by the proposed approaches to that of the state-of-the-art in various experimental settings in Section IV. Finally, Section V concludes the paper.
II. THE MULTIPLICATIVE NARROWBAND APPROXIMATION
We first rewrite the formulation (2) with matrix notations by concatenating the time samples and source indexes. In each frequency f , we have:
$$\tilde{X}_f = \tilde{A}_f \tilde{S}_f + \tilde{N}_f, \qquad (4)$$
where $\tilde{X}_f \in \mathbb{C}^{M \times L_T}$ is the matrix of the analysis STFT coefficients of the observations at the given frequency $f$, $\tilde{A}_f \in \mathbb{C}^{M \times N}$ is the mixing matrix at frequency $f$, $\tilde{S}_f \in \mathbb{C}^{N \times L_T}$ is the matrix of the analysis STFT coefficients of the sources at frequency $f$, and $\tilde{N}_f \in \mathbb{C}^{M \times L_T}$ is the noise term, which also contains the approximation error. In these notations, $L_T$ is the number of time samples in the time-frequency domain.
The target of the separation is to estimate $\tilde{A}_f$ and $\tilde{S}_f$ from the observations $\tilde{X}_f$. However, according to the definition of the analysis STFT coefficients, the estimated $\tilde{S}_f$ has to lie in the image of the STFT operator (see [START_REF] Balazs | Adapted and adaptive linear time-frequency representations: a synthesis point of view[END_REF] for more details). To avoid this additional constraint, we propose to replace the analysis STFT coefficients $\tilde{S}_f$ by the synthesis STFT coefficients $\alpha_f \in \mathbb{C}^{N \times L_T}$, which leads to:
$$\tilde{X}_f = \tilde{A}_f \alpha_f + \tilde{N}_f. \qquad (5)$$
In the following, we also denote by $\alpha_{f,n}$ the $n$-th source component (row) of $\alpha_f$ and by $\alpha_{f,n}(\tau)$ the scalar element at position $\tau$ in $\alpha_{f,n}$.
A. Formulation of the optimization
Based on the model (5), we propose to formulate the separation as the following optimization problem:
$$\min_{\tilde{A}_f, \alpha_f} \frac{1}{2}\left\|\tilde{X}_f - \tilde{A}_f \alpha_f\right\|_F^2 + \lambda \left\|\alpha_f\right\|_1 + \imath_C(\tilde{A}_f), \qquad (6)$$
where $\|\cdot\|_F$ denotes the Frobenius norm and $\|\cdot\|_1$ is the $\ell_1$ norm of the matrix, i.e., the sum of the absolute values of all its elements. $\imath_C(\tilde{A}_f)$ is an indicator function used to avoid the trivial solution caused by the scaling ambiguity between $\tilde{A}_f$ and $\alpha_f$:
$$\imath_C(\tilde{A}_f) = \begin{cases} 0 & \text{if } \|\tilde{a}_{f,n}\| = 1,\ n = 1, 2, \dots, N, \\ +\infty & \text{otherwise,} \end{cases} \qquad (7)$$
with $\tilde{a}_{f,n}$ the $n$-th column of $\tilde{A}_f$. $\lambda$ is the hyperparameter which balances the data term $\frac{1}{2}\|\tilde{X}_f - \tilde{A}_f \alpha_f\|_F^2$ and the penalty term $\|\alpha_f\|_1$.
For instantaneous mixtures, the formulation (6) was first proposed in [START_REF] Zibulevsky | Blind source separation by sparse decomposition in a signal dictionary[END_REF] and recently investigated in [START_REF] Feng | A unified approach for blind source separation using sparsity and decorrelation[END_REF]. Compared to the masking technique of separation [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF], the $\ell_1$-norm term exploits only sparsity, which is more realistic than the disjointness assumption for speech signals. Moreover, the Lagrangian form with the data term $\frac{1}{2}\|\tilde{X}_f - \tilde{A}_f \alpha_f\|_F^2$ allows us to take the noise/approximation error into consideration.
B. Algorithm: N-Regu
The optimization problem (6) is non-convex and contains a non-differentiable term. In this paper, we propose to solve the problem by applying the BC-VMFB (block coordinate variable metric forward-backward) algorithm [START_REF] Chouzenoux | Variable metric forwardbackward algorithm for minimizing the sum of a differentiable function and a convex function[END_REF]. This algorithm relies on the proximal operator [START_REF] Combettes | Proximal splitting methods in signal processing[END_REF] given in the next definition.
Definition 1. Let $\psi$ be a proper lower semicontinuous function. The proximal operator associated with $\psi$ is defined as:
$$\mathrm{prox}_{\psi}(x) := \operatorname*{argmin}_{y}\ \psi(y) + \frac{1}{2}\|y - x\|_F^2. \qquad (8)$$
When the function $\psi(y) = \lambda\|y\|_1$, the proximal operator becomes the entry-wise soft thresholding presented in the next proposition.
Proposition 2. Let $\alpha \in \mathbb{C}^{N \times L_T}$. Then $\hat{\alpha} = \mathrm{prox}_{\lambda\|\cdot\|_1}(\alpha) := \mathcal{S}_{\lambda}(\alpha)$ is given entrywise by soft-thresholding:
$$\hat{\alpha}_i = \frac{\alpha_i}{|\alpha_i|}\left(|\alpha_i| - \lambda\right)_+, \qquad (9)$$
where $(a)_+ = \max(0, a)$.
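A direct implementation of the complex soft-thresholding (9) could look as follows (an illustrative sketch; the function name and the numerical safeguard against division by zero are ours):

```python
import numpy as np

def soft_threshold(alpha, lam):
    """Entry-wise complex soft-thresholding, i.e. prox of lam * ||.||_1 (eq. 9)."""
    mag = np.abs(alpha)
    # Keep the phase alpha/|alpha| and shrink the magnitude by lam, clipped at 0.
    return alpha * np.maximum(mag - lam, 0.0) / np.maximum(mag, np.finfo(float).tiny)
```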
When the function $\psi$ in Definition 1 is the indicator function $\imath_C$, the proximal operator reduces to the projection operator presented in Proposition 3.
Proposition 3. Let $\tilde{A} \in \mathbb{C}^{M \times N}$. Then $\hat{A} = \mathrm{prox}_{\imath_C}(\tilde{A}) := \mathcal{P}_C(\tilde{A})$ is given by the column-wise normalization projection:
$$\hat{a}_n = \frac{\tilde{a}_n}{\|\tilde{a}_n\|}, \qquad n = 1, 2, \dots, N. \qquad (10)$$
With the above proximal operators, we present the algorithm derived from BC-VMFB in Algorithm 1. We denote the data term by $Q(\alpha_f, \tilde{A}_f) = \frac{1}{2}\|\tilde{X}_f - \tilde{A}_f \alpha_f\|_F^2$. $L^{(j)} = \|\tilde{A}_f^{(j)H} \tilde{A}_f^{(j)}\|_2$ is the Lipschitz constant of $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{A}_f^{(j)})$, with $\|\cdot\|_2$ denoting the spectral norm of a matrix. Details of the derivation of this algorithm and the convergence study are given in Appendix VI-A. In the following, this algorithm is referred to as N-Regu (Narrowband optimization with regularization).
Algorithm 1: N-Regu
Initialisation: $\alpha_f^{(1)} \in \mathbb{C}^{N \times L_T}$, $\tilde{A}_f^{(1)} \in \mathbb{C}^{M \times N}$, $L^{(1)} = \|\tilde{A}_f^{(1)H} \tilde{A}_f^{(1)}\|_2$, $j = 1$;
repeat
  $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{A}_f^{(j)}) = -\tilde{A}_f^{(j)H}\left(\tilde{X}_f - \tilde{A}_f^{(j)} \alpha_f^{(j)}\right)$;
  $\alpha_f^{(j+1)} = \mathcal{S}_{\lambda/L^{(j)}}\left(\alpha_f^{(j)} - \frac{1}{L^{(j)}} \nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{A}_f^{(j)})\right)$;
  $\tilde{A}_f^{(j+1)} = \mathcal{P}_C\left(\tilde{X}_f \alpha_f^{(j+1)H}\right)$;
  $L^{(j+1)} = \|\tilde{A}_f^{(j+1)H} \tilde{A}_f^{(j+1)}\|_2$;
  $j = j + 1$;
until convergence;
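For illustration, a minimal NumPy transcription of Algorithm 1 for a single frequency bin is sketched below (our own simplified version: random initialization, a fixed number of iterations instead of a convergence test, and no permutation or scaling post-processing):

```python
import numpy as np

def project_columns(A):
    """P_C of eq. (10): normalize each column of A to unit norm."""
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A / np.maximum(norms, np.finfo(float).tiny)

def soft_threshold(alpha, lam):
    mag = np.abs(alpha)
    return alpha * np.maximum(mag - lam, 0.0) / np.maximum(mag, np.finfo(float).tiny)

def n_regu(X_f, N, lam=0.1, n_iter=200, seed=0):
    """Sketch of Algorithm 1 for one frequency bin; X_f is an (M, L_T) complex array."""
    rng = np.random.default_rng(seed)
    M, L_T = X_f.shape
    A = project_columns(rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    alpha = np.zeros((N, L_T), dtype=complex)
    for _ in range(n_iter):
        L = max(np.linalg.norm(A.conj().T @ A, 2), 1e-12)   # Lipschitz constant of the data term
        grad = -A.conj().T @ (X_f - A @ alpha)              # gradient step on alpha
        alpha = soft_threshold(alpha - grad / L, lam / L)   # proximal (soft-thresholding) step
        A = project_columns(X_f @ alpha.conj().T)           # mixing-matrix update of Algorithm 1
    return A, alpha
```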
III. THE CONVOLUTIVE NARROWBAND APPROXIMATION
A. Convolutive approximation
Theoretically, the multiplicative narrowband approximation (2) is valid only when the RIR length is less than the STFT window length. In practice, this condition is rarely verified because the STFT window length is limited to ensure the local stationarity of audio signals [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]. To avoid this limitation, the convolutive narrowband approximation was proposed in [START_REF] Avargel | System identification in the short-time fourier transform domain with crossband filtering[END_REF], [START_REF] Talmon | Relative transfer function identification using convolutive transfer function approximation[END_REF]:
$$\tilde{x}(f,\tau) = \sum_{n=1}^{N} \sum_{l=1}^{L} \tilde{h}_n(f, l)\, \tilde{s}_n(f, \tau - l), \qquad (11)$$
where $\tilde{h}_n = [\tilde{h}_{1n}, \dots, \tilde{h}_{Mn}]^T$ is the vector that contains the impulse responses in the time-frequency domain associated with the $n$-th source, and $L$ is the length of the convolution kernel in the time-frequency domain.
The convolutive approximation (11) is a generalization of the multiplicative approximation (2), as it considers the information diffusion along the time index. When the kernel length $L = 1$, it reduces to the multiplicative approximation. The convolution kernel in the time-frequency domain $\tilde{h}_{mn}(f,\tau)$ is linked to the RIR in the time domain $a_{mn}(t)$ by [START_REF] Li | Audio source separation based on convolutive transfer function and frequency-domain Lasso optimization[END_REF]:
$$\tilde{h}_{mn}(f,\tau) = \left[a_{mn}(t) * \zeta_f(t)\right]\Big|_{t = \tau k_0}, \qquad (12)$$
which represents the convolution with respect to the time index $t$, evaluated with a resolution of the STFT frame step $k_0$, with:
$$\zeta_f(t) = e^{2\pi i f t / L_F} \sum_{j} \varphi(j)\, \phi(t + j), \qquad (13)$$
where $L_F$ is the number of frequency bands, and $\varphi(j)$ and $\phi(j)$ denote respectively the analysis and synthesis STFT windows. With matrix notations, for each frequency $f$, the convolutive approximation (11) can be written as:
$$\tilde{X}_f = \tilde{H}_f \star \tilde{S}_f + \tilde{N}_f, \qquad (14)$$
where $\tilde{H}_f \in \mathbb{C}^{M \times N \times L}$ is the mixing system formed by concatenating the impulse responses of length $L$. In the following, we also denote by $\tilde{h}_{f,mn}$ the vector that represents the impulse response at position $(m,n)$ in $\tilde{H}_f$, and by $\tilde{h}_{f,mn}(\tau)$ the scalar element at position $(m,n,\tau)$. The operator $\star$ denotes the convolutive mixing process [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF].
Compared to the original mixing process in the time domain (1), the convolutive approximation (14) largely reduces the length of the convolution kernel, thus making the estimation of both the mixing system and the source signals practically feasible.
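To make the operator concrete, the convolutive narrowband model (14) amounts to a short FIR filtering along the frame index in each frequency band; a possible (naive, loop-based) NumPy sketch with our own array conventions is:

```python
import numpy as np

def convolutive_mix(H, S):
    """Convolutive narrowband model (14): X(m, f, :) = sum_n h_{f,mn} * s_n(f, :).

    H: (F, M, N, L) sub-band filters, S: (N, F, T) source STFT coefficients.
    Returns X with shape (M, F, T). Shapes and names are our own conventions.
    """
    F, M, N, L = H.shape
    T = S.shape[2]
    X = np.zeros((M, F, T), dtype=complex)
    for f in range(F):
        for m in range(M):
            for n in range(N):
                # FIR filtering along the frame index, truncated to T frames
                X[m, f] += np.convolve(S[n, f], H[f, m, n], mode='full')[:T]
    return X
```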
B. Proposed optimization approach
a) Basic extension of the multiplicative model: Once again, to circumvent the additional constraint brought by the analysis coefficients of the sources, we replace the analysis STFT coefficients $\tilde{S}_f$ by the synthesis coefficients $\alpha_f$, which leads to:
$$\tilde{X}_f = \tilde{H}_f \star \alpha_f + \tilde{N}_f. \qquad (15)$$
Based on (15), we generalize (6) by replacing the multiplicative operator by the convolutive mixing operator:
$$\min_{\tilde{H}_f, \alpha_f} \frac{1}{2}\left\|\tilde{X}_f - \tilde{H}_f \star \alpha_f\right\|_F^2 + \lambda \left\|\alpha_f\right\|_1 + \imath_C^{\mathrm{Conv}}(\tilde{H}_f), \qquad (16)$$
where $\imath_C^{\mathrm{Conv}}(\tilde{H}_f)$ is the normalisation constraint used to avoid trivial solutions:
$$\imath_C^{\mathrm{Conv}}(\tilde{H}_f) = \begin{cases} 0 & \text{if } \sum_{m,\tau} |\tilde{h}_{f,mn}(\tau)|^2 = 1,\ n = 1, \dots, N, \\ +\infty & \text{otherwise.} \end{cases} \qquad (17)$$
b) Regularization for the convolution kernel: In [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF], the authors consider the problem of estimating the RIRs supposing that the mixtures and the sources are known. They formulated the estimation problem as an optimization problem and proposed a differentiable penalty for the mixing system in the time domain:
$$\sum_{m,n,t} \frac{|a_{mn}(t)|^2}{2\rho^2(t)}, \qquad (18)$$
where $\rho(t)$ denotes the amplitude envelope of the RIR, which depends on the reverberation time $RT_{60}$:
$$\rho(t) = \sigma\, 10^{-3t/RT_{60}}, \qquad (19)$$
with $\sigma$ being a scaling factor. The penalty (18) is designed to enforce an exponential decrease of the RIR, which is consistent with the acoustic statistics of the RIR [START_REF] Kuttruff | Room acoustics[END_REF].
As the convolutive kernel in the time-frequency domain is linked to the RIR in the time domain by (12), in this paper we consider a penalty of the same form in the time-frequency domain:
$$\mathcal{P}(\tilde{H}_f) = \sum_{m,n,\tau} \frac{|\tilde{h}_{f,mn}(\tau)|^2}{2\rho^2(\tau)}, \qquad (20)$$
where $\rho(\tau)$ contains the decreasing coefficients in the time-frequency domain, which depend on $\rho(t)$ and on the STFT transform.
Other forms of penalty are also proposed in [START_REF] Benichoux | Convex regularizations for the simultaneous recording of room impulse responses[END_REF]. However, their adaptation to the time-frequency domain is not straightforward.
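As an illustration, the decay profile (19) and the penalty (20) can be evaluated as follows; this is only a sketch under our own simplification that ρ(τ) is obtained by sampling the time-domain envelope at the frame instants, whereas the paper derives it from ρ(t) through the STFT:

```python
import numpy as np

def decay_envelope(L, hop, fs, rt60, sigma=1.0):
    """Exponential decay rho of eq. (19), sampled at the frame instants tau*hop/fs."""
    t = np.arange(L) * hop / fs
    return sigma * 10.0 ** (-3.0 * t / rt60)

def penalty(H_f, rho):
    """P(H_f) of eq. (20): sum over m, n, tau of |h|^2 / (2 rho(tau)^2)."""
    return np.sum(np.abs(H_f) ** 2 / (2.0 * rho ** 2))

# Example with M=2, N=3 and L=16 sub-band taps (arbitrary values)
rho = decay_envelope(L=16, hop=512, fs=16000, rt60=0.25)
H_f = np.random.randn(2, 3, 16) + 1j * np.random.randn(2, 3, 16)
print(penalty(H_f, rho))
```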
c) Final optimization problem: With the above penalty term, the formulation (16) can be improved as:
$$\min_{\tilde{H}_f, \alpha_f} \frac{1}{2}\left\|\tilde{X}_f - \tilde{H}_f \star \alpha_f\right\|_F^2 + \lambda \left\|\alpha_f\right\|_1 + \mathcal{P}(\tilde{H}_f) + \imath_C^{\mathrm{Conv}}(\tilde{H}_f). \qquad (21)$$
C. Algorithm: C-PALM
We propose to use the Proximal Alternating Linearized Minimization (PALM) algorithm [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] to solve the problem. The derived algorithm is presented in Algorithm 2, and one can refer to Appendix VI-C for details on the derivation and the convergence study. We refer to this algorithm as C-PALM (Convolutive PALM) in the following. We denote:
$$Q(\alpha_f, \tilde{H}_f) = \frac{1}{2}\|\tilde{X}_f - \tilde{H}_f \star \alpha_f\|_F^2 + \mathcal{P}(\tilde{H}_f)$$
and the gradient of P( Hf ) is given coordinate-wise by:
$$\left[\nabla_{\tilde{H}_f}\mathcal{P}(\tilde{H}_f)\right]_{f,mn\tau} = \frac{\tilde{h}_{f,mn}(\tau)}{\rho^4(\tau)}. \qquad (22)$$
In Algorithm 2, $\tilde{H}_f^H$ and $\alpha_f^H$ define respectively the adjoint operators of the convolutive mixture with respect to the convolution kernel and to the sources. Details of the derivation of these adjoint operators are given in Appendix VI-B. $L_{\alpha_f}^{(j)}$ and $L_{\tilde{H}_f}^{(j)}$ are respectively the Lipschitz constants of $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{H}_f^{(j)})$ and $\nabla_{\tilde{H}_f} Q(\alpha_f^{(j+1)}, \tilde{H}_f^{(j)})$. $L_{\alpha_f}^{(j)}$ can be computed with the power iteration algorithm [START_REF] Kowalski | Beyond the narrowband approximation: Wideband convex methods for under-determined reverberant audio source separation[END_REF] shown in Algorithm 3, and $L_{\tilde{H}_f}^{(j)}$ can be approximately estimated thanks to the next proposition.
Algorithm 2: C-PALM
Initialization: $\alpha_f^{(1)} \in \mathbb{C}^{N\times L_T}$, $\tilde{H}_f^{(1)} \in \mathbb{C}^{M\times N}$, $j = 1$
repeat
  $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{H}_f^{(j)}) = -\tilde{H}_f^{(j)H} \star \big(\tilde{X}_f - \tilde{H}_f^{(j)} \star \alpha_f^{(j)}\big)$
  $\alpha_f^{(j+1)} = \mathcal{S}_{\lambda/L_{\alpha_f}^{(j)}}\Big(\alpha_f^{(j)} - \tfrac{1}{L_{\alpha_f}^{(j)}}\, \nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{H}_f^{(j)})\Big)$
  $\nabla_{\tilde{H}_f} Q(\alpha_f^{(j+1)}, \tilde{H}_f^{(j)}) = -\big(\tilde{X}_f - \tilde{H}_f^{(j)} \star \alpha_f^{(j+1)}\big) \star \alpha_f^{(j+1)H} + \nabla_{\tilde{H}_f}\mathcal{P}(\tilde{H}_f^{(j)})$
  $\tilde{H}_f^{(j+1)} = P_{\imath_C^{\mathrm{Conv}}}\Big(\tilde{H}_f^{(j)} - \tfrac{1}{L_{\tilde{H}_f}^{(j)}}\, \nabla_{\tilde{H}_f} Q(\alpha_f^{(j+1)}, \tilde{H}_f^{(j)})\Big)$
  Update $L_{\alpha_f}^{(j)}$ and $L_{\tilde{H}_f}^{(j)}$
  $j = j + 1$
until convergence

Algorithm 3: Power iteration for the calculation of $L_{\alpha_f}$
Initialization: $v_f \in \mathbb{C}^{N\times L_T}$
repeat
  $W = \tilde{H}_f^H \star \tilde{H}_f \star v_f$
  $L_{\alpha_f} = \|W\|_\infty$
  $v_f = W / L_{\alpha_f}$
until convergence
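The NumPy sketch below illustrates the structure of one C-PALM-style iteration for a single frequency bin: a gradient step with soft-thresholding on the synthesis coefficients, then a gradient step with projection onto the unit-norm constraint for the filters, with the step size for the coefficients obtained by a few power iterations. It is a simplified sketch under assumed shapes and a crude step-size rule, not a faithful reimplementation of Algorithm 2.

```python
import numpy as np

def mix(H, alpha, T):
    # H: (M, N, L) filters, alpha: (N, T) synthesis coefficients -> X: (M, T)
    M, N, L = H.shape
    X = np.zeros((M, T), dtype=complex)
    for m in range(M):
        for n in range(N):
            X[m] += np.convolve(H[m, n], alpha[n])[:T]
    return X

def adj_alpha(H, X):
    # adjoint of alpha -> mix(H, alpha): correlate each channel with the conjugate filters
    M, N, L = H.shape
    T = X.shape[1]
    G = np.zeros((N, T), dtype=complex)
    for n in range(N):
        for m in range(M):
            G[n] += np.convolve(X[m], np.conj(H[m, n][::-1]))[L - 1:L - 1 + T]
    return G

def adj_H(X, alpha, L):
    # adjoint of H -> mix(H, alpha): correlate the residual with the conjugate sources
    M, T = X.shape
    N = alpha.shape[0]
    G = np.zeros((M, N, L), dtype=complex)
    for m in range(M):
        for n in range(N):
            G[m, n] = np.convolve(X[m], np.conj(alpha[n][::-1]))[T - 1:T - 1 + L]
    return G

def soft(z, thr):
    # complex soft-thresholding, the proximal operator of the l1 norm
    mag = np.maximum(np.abs(z), 1e-12)
    return z * np.maximum(1.0 - thr / mag, 0.0)

def cpalm_sketch(X, N, L, lam, rho, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    M, T = X.shape
    alpha = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))
    H = rng.standard_normal((M, N, L)) + 1j * rng.standard_normal((M, N, L))
    v = rng.standard_normal((N, T)) + 0j          # power-iteration vector (Algorithm 3)
    for _ in range(n_iter):
        for _ in range(5):                        # a few power iterations for the step size
            w = adj_alpha(H, mix(H, v, T))
            L_alpha = np.max(np.abs(w)) + 1e-12
            v = w / L_alpha
        # alpha-step: gradient of the data fit, then soft-thresholding
        grad_a = -adj_alpha(H, X - mix(H, alpha, T))
        alpha = soft(alpha - grad_a / L_alpha, lam / L_alpha)
        # H-step: gradient of data fit + decay penalty, crude step size, then projection
        L_H = L_alpha + np.max(1.0 / rho ** 4)
        grad_H = -adj_H(X - mix(H, alpha, T), alpha, L) + H / rho ** 4
        H = H - grad_H / L_H
        for n in range(N):
            H[:, n, :] /= np.linalg.norm(H[:, n, :]) + 1e-12   # constraint (17)
    return H, alpha
```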
Proposition 4. If we suppose that the source components $\alpha_{f,1}, \alpha_{f,2}, \dots, \alpha_{f,N}$ are mutually independent and $L \ll L_T$, then $L_{\tilde{H}_f}$, the Lipschitz constant of $\nabla_{\tilde{H}_f} Q(\alpha_f, \tilde{H}_f)$, can be calculated as:
$$L_{\tilde{H}_f} = \max_n \big(L_{f,n}\big) + \max_\tau \Big(\frac{1}{\rho^8(\tau)}\Big), \qquad (23)$$
where
$$L_{f,n} = \|\Gamma_{f,n}\|, \quad \text{with} \quad \Gamma_{f,n} = \begin{bmatrix} \gamma_{f,n}(0) & \gamma_{f,n}(1) & \cdots & \gamma_{f,n}(L-1) \\ \gamma_{f,n}(-1) & \gamma_{f,n}(0) & \cdots & \gamma_{f,n}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{f,n}(1-L) & \gamma_{f,n}(2-L) & \cdots & \gamma_{f,n}(0) \end{bmatrix}, \qquad (24)$$
and γ f,n (τ ) is the empirical autocorrelation function of α f,n :
$$\gamma_{f,n}(\tau) = \sum_{\ell=1}^{L_T-1} \alpha_{f,n}(\ell+\tau)\,\alpha^*_{f,n}(\ell). \qquad (25)$$
Proof. The proof is postponed to Appendix VI-D.
Although the independence assumption mentioned in Proposition 4 may appear strong, it is well suited to audio signals, as it is the basic hypothesis of FDICA (frequency-domain independent component analysis) [START_REF] Sawada | A robust and precise method for solving the permutation problem of frequency-domain blind source separation[END_REF] used for the source separation of determined convolutive mixtures. Although we do not have any guarantee of independence in the proposed algorithm, the experiments show that good performance is obtained.
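A small sketch of Proposition 4's estimate: build the empirical autocorrelation Toeplitz matrix of (24)-(25) for each source and combine its spectral norm with the penalty term as in (23). The SciPy call and the variable names are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import toeplitz

def lipschitz_H(alpha, rho):
    """Estimate L_{H_f} as in (23): max_n ||Gamma_{f,n}|| + max_tau 1/rho(tau)^8.
    alpha: (N, L_T) complex synthesis coefficients, rho: (L,) penalty decay coefficients."""
    L = len(rho)
    blocks = []
    for a in alpha:
        T = len(a)
        # empirical autocorrelation gamma(tau) = sum_l a(l+tau) conj(a(l)), cf. (25)
        gamma = np.array([np.vdot(a[:T - tau], a[tau:]) for tau in range(L)])
        Gamma = toeplitz(np.conj(gamma), gamma)     # Hermitian Toeplitz matrix of (24)
        blocks.append(np.linalg.norm(Gamma, 2))     # spectral norm
    return max(blocks) + np.max(1.0 / rho ** 8)

# toy usage
rng = np.random.default_rng(0)
alpha = rng.standard_normal((3, 256)) + 1j * rng.standard_normal((3, 256))
print(lipschitz_H(alpha, rho=np.array([1.75, 1.73, 1.72])))
```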
Finally, we must stress that the BC-VMFB algorithm is not suitable for the convolutive formulation, as it relies on the second derivative of $Q(\alpha_f, \tilde{H}_f)$ w.r.t. $\tilde{H}_f$, which does not necessarily simplify the algorithm.
IV. EXPERIMENTS
A. Permutation alignment methods
For the proposed approaches, we use the existing permutation alignment methods. For N-Regu, we compare the approach based on TDOA (Time Difference Of Arrival) used in Full-rank method [START_REF] Duong | Under-determined reverberant audio source separation using a full-rank spatial covariance model[END_REF] and the approach based on interfrequency correlation used in the Bin-wise approach [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. For the inter-frequency correlation permutation, we use the power ratio [START_REF] Sawada | Measuring dependence of binwise separated signals for permutation alignment in frequency-domain bss[END_REF] of the estimated source to present the source activity. For C-PALM, as the TDOA permutation is not adapted, we use only the correlation permutation.
For the proposed approaches (N-Regu and C-PALM) and the reference algorithms (Bin-wise and Full-rank), we also designed an oracle permutation alignment method. In each frequency bin, we look for the permutation that maximizes the correlation between the estimated and the original sources. This oracle alignment is designed to exhibit the best possible permutation, in order to compare the separation approaches themselves rather than the choice made for solving the permutation problem.
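A possible sketch of such an oracle alignment is given below: for each frequency bin, it selects the permutation that maximizes the summed correlation between estimated and reference sources. The brute-force search over permutations is an illustrative choice, only practical for small N, and is not necessarily how the experiments were implemented.

```python
import numpy as np
from itertools import permutations

def oracle_permutation(est, ref):
    """est, ref: (F, N, T) arrays of STFT coefficients (estimated / true source images).
    Returns est with, in every frequency bin, the source ordering that maximizes the
    summed correlation of magnitudes with the reference sources."""
    F, N, T = est.shape
    aligned = np.empty_like(est)
    for f in range(F):
        E = np.abs(est[f]) - np.abs(est[f]).mean(axis=1, keepdims=True)
        R = np.abs(ref[f]) - np.abs(ref[f]).mean(axis=1, keepdims=True)
        C = (E @ R.T) / (np.linalg.norm(E, axis=1)[:, None]
                         * np.linalg.norm(R, axis=1)[None, :] + 1e-12)
        best = max(permutations(range(N)),
                   key=lambda p: sum(C[p[n], n] for n in range(N)))
        aligned[f] = est[f][list(best)]
    return aligned
```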
B. Experimental setting
We first evaluated the proposed approaches with 10 sets of synthesized stereo mixtures (M = 2) containing three speech sources (N = 3), male and female, with different nationalities. The mixtures are sampled at 11 kHz and truncated to 6 s. The room impulse responses were simulated with the toolbox [START_REF] Lehmann | Prediction of energy decay in room impulse responses simulated with an image-source model[END_REF]. The distance between the two microphones is 4 cm. The reverberation time is set to 50 ms, 130 ms, 250 ms and 400 ms. Fig. 1 illustrates the room configuration. For each mixing situation, the mean values of the evaluation results over the 10 sets of mixtures are shown.
We then evaluated the algorithm C-PALM with the live recorded speech mixtures from the dataset SiSEC2011 [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF]. Music mixtures are avoided because the instrumental sources are often synchronized to each other and this situation is difficult for the permutation alignment based on inter-frequency correlation [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. An effective alternative way is to employ nonnegative matrix factorization [START_REF] Feng | Sparsity and low-rank amplitude based blind source separation[END_REF]. The parameters of STFT for the synthesized and live recorded mixtures are summarized in Table I. The STFT window length (and window shift) for synthesized mixtures are chosen to preserve local stationarity of audio sources without bringing too much computational costs. The parameters for the live recorded mixtures are the same as the reported reference algorithm Bin-wise [START_REF] Sawada | Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[END_REF]. The separation performance is evaluated with the signal to distortion ratio (SDR), signal to interference ratio (SIR), source image to spatial distortion ratio (ISR) and signal to artifact ratio (SAR) [START_REF] Vincent | First stereo audio source separation evaluation campaign: data, algorithms and results[END_REF]. The SDR reveals the overall quality of each estimated source. SIR indicates the crosstalk from other sources. ISR measures the amount of spatial distortion and SAR is related to the amount of musical noise.
N-Regu is initialized with Gaussian random signals. C-PALM is initialized with the result of N-Regu after 1000 iterations. This choice of initialization for C-PALM compensates for the flexibility of the convolutive model (and thus the number of local minima in (21)) without incurring too much computational cost. We use the stopping criterion $\|\alpha_f^{(j+1)} - \alpha_f^{(j)}\|_F < 10^{-4}$ for both algorithms.
C. Tuning the parameters
For the proposed methods, we test several pre-defined values of the hyperparameter λ and select the one which corresponds to the best SDR. Even though such a way of choosing this hyperparameter is not possible in real applications, this evaluation offers a "fair" comparison with the state-of-the-art approaches and gives some empirical suggestions for choosing this parameter in practice. We implement the continuation trick, also known as warm start or fixed-point continuation [START_REF] Hale | Fixed-point continuation for 1minimization: Methodology and convergence[END_REF], for a fixed value of λ: we start the algorithms with a large value of λ and iteratively decrease λ to the desired value.
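A minimal sketch of this continuation strategy, assuming a generic `solver(mixture, lam, init)` interface; the geometric schedule and the number of stages are illustrative choices, not values reported in the paper.

```python
def lambda_continuation(solver, X, lam_target, n_stages=5, factor=10.0):
    """Warm start: solve with a large lambda first, then geometrically decrease it
    down to lam_target, reusing each solution as the next initialization."""
    lam = lam_target * factor ** (n_stages - 1)
    sol = None
    for _ in range(n_stages):
        sol = solver(X, lam, sol)     # assumed interface: solver(mixture, lam, init)
        lam = max(lam / factor, lam_target)
    return sol
```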
It is also important to mention that the hyperparameter λ should theoretically be different for each frequency, since the sparsity level of the source signals in each frequency can be very different (for speech signals, the high-frequency components are usually sparser than the low-frequency components). Therefore, a different λ should in principle be determined for each frequency. However, in this paper we used a single λ for all the frequencies, and the experiments show that this simplified choice can achieve acceptable results if we perform a whitening pre-processing for each frequency.
For C-PALM, as the reverberation time is unknown in the blind setting, we pre-define the length of the convolution kernel in the time-frequency domain as L = 3, as well as the penalty parameter $\rho(\tau) = [1.75, 1.73, 1.72]^T$. Although these parameters should vary with the reverberation time, we show in the following that the proposed pre-defined values work well in different strong reverberation conditions.
D. Synthesized mixtures without noise
We first evaluate the algorithms with synthesized mixtures in the noiseless case as a function of the reverberation time RT 60 . The results are shown in Fig. 2.
For RT60 = 50 ms, it is clear that the Full-rank method performs the best in terms of all four indicators. Its good performance is due to the fact that the full-rank spatial covariance model suits the convolutive mixtures better than the multiplicative approximation, and to the fact that the TDOA permutation alignment performs relatively well in low reverberation scenarios. N-Regu outperforms Bin-wise only in SDR and SAR. This is because N-Regu achieves a better data fit than the masking-based Bin-wise method, while Bin-wise obtains time-frequency-disjoint sources which have lower inter-source interference. C-PALM is dominated by the other methods in SDR, SIR and ISR. We believe this is because the pre-defined penalty parameter ρ(τ) does not fit the low reverberation scenario. The advantages of C-PALM can be seen in stronger reverberation scenarios (especially RT60 = 130, 250 ms), where C-PALM outperforms the other methods in SDR and SIR. For RT60 = 400 ms, all the presented algorithms have similar performance, while C-PALM performs slightly better in SIR. Comparing the two permutation methods used for N-Regu, the TDOA permutation performs better than the inter-frequency correlation permutation in SDR, SIR and SAR.
Fig. 3 compares the presented algorithms with oracle permutation alignment. For RT 60 = 50 ms, once again, Full-rank has the best performance in all four indicators. This confirms the advantages of the full rank spatial covariance model. In high reverberation conditions, C-PALM performs better than others in SDR and SIR. In particular, for RT 60 = 130, 250 ms, C-PALM outperforms Full-rank by more than 1 dB in SDR and outperforms Bin-wise by about 1.2 dB in SIR. N-Regu performs slightly better than Bin-wise in SDR for all reverberation conditions.
The above observations show the better data fit brought by the optimization framework used in N-Regu (and C-PALM) and confirm the advantages of the convolutive narrowband approximation used in C-PALM for high reverberation conditions (especially RT60 = 130, 250 ms). Fig. 4 illustrates the performance of the presented algorithms as a function of the sparsity level¹ of the estimated synthesis coefficients of the sources for RT60 = 130 ms. As the sparsity level is directly linked to the hyperparameter λ in the proposed algorithms, this comparison offers some guidance for choosing this hyperparameter. The Full-rank method does not exploit sparsity, and thus has a sparsity level of 0%. As the number of sources is N = 3, the sparsity level of the masking-based Bin-wise method is 66.6%.
C-PALM performs better than N-Regu in terms of SDR, SIR and SAR when the sparsity level is less than 60% and its best performance is achieved when the sparsity level is around 40%. For N-Regu, in terms of SDR, SAR and ISR, the best performance is achieved with the least sparse result.
E. Synthesized mixtures with noise
In this subsection, we evaluate the proposed methods with synthesized mixtures with additive white Gaussian noise. The noise of different energy is added which leads to different input SNR. Fig. 5 reports the separation performance as a function of input SNR with the reverberation time fixed to RT 60 = 130 ms.
It is clear that N-Regu with TDOA permutation outperforms the other methods in terms of SDR and SIR. In particular, it performs better than the others by about 1 dB in SIR for all the input SNRs tested. C-PALM outperforms the state-of-the-art approaches only in SDR. We believe this is due to the fact that the degrees of freedom of the convolutive narrowband approximation used in C-PALM make it sensitive to input noise. Another reason is that the inter-frequency-correlation-based permutation could be sensitive to input noise. The latter conjecture is supported by the observation that, in terms of SDR and SIR, the gap between N-Regu with TDOA permutation and with correlation permutation increases as the input noise becomes stronger. Further evidence can be found in the comparisons between the presented algorithms with oracle permutation alignment in Fig. 6.
In Fig. 6, in terms of SDR and SIR, it is clear that the gap between N-Regu and C-PALM decreases as the input noise gets stronger. This remark confirms that the separation step of C-PALM is sensitive to input noise. Moreover, in terms of SIR, C-PALM with oracle permutation performs consistently better than N-Regu with oracle permutation, while C-PALM with correlation permutation is dominated by N-Regu with TDOA permutation by about 1 dB (Fig. 5). This observation shows that the performance of C-PALM for noisy mixtures could be largely improved if a better permutation alignment method were developed. Fig. 7 reports the separation performance as a function of the sparsity level of the estimated synthesis coefficients of the sources. RT60 = 130 ms and the input SNR is 15 dB. The results of the Full-rank and Bin-wise methods are also shown.
In terms of SDR and SIR, N-Regu with TDOA permutation consistently outperforms the other methods and achieves its best performance when the sparsity level is about 78%. Compared to Bin-wise method, this observation coincides with the intuition that, for noisy mixtures, the coefficients of the noise in the observations should be discarded to achieve better separation. C-PALM achieves its best performance in terms of SDR and SIR when the sparsity level is about 75%.
Fig. 8 illustrates the results of separation as a function of the reverberation time for a fixed input SNR (SNR=15 dB).
We can see that N-Regu with TDOA permutation has the best performance in terms of SDR.
F. Synthesized mixtures with different source positions
In this subsection, we test the robustness of the proposed algorithms w.r.t. the source positions. The same room setting as shown in Fig. 1 is used. Fig. 9 illustrates the four tested settings of source positions. In these experiments, the reverberation time is fixed to RT60 = 130 ms and no noise is added to the mixtures. Fig. 10 shows the separation performance.
It is clear that in terms of SDR, SIR and ISR, all the presented algorithms have the worst performance in setting 3.
This remark shows that having two sources close to each other and one source relatively far away (setting 3) can be a more difficult situation for blind source separation than having three sources close to each other (setting 4). C-PALM has the best performance in terms of SDR, SIR and ISR for all the settings. This observation shows that C-PALM (and the pre-defined penalty parameter) is robust to the source positions.
G. Live recorded mixtures without noise
This subsection reports the separation results of C-PALM for the publicly available benchmark data of SiSEC2011 [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF]. We used the speech signals (male3, female3, male4 and female4) from the first development data (dev1.zip) in the "Under-determined speech and music mixtures" data sets. Table II shows the separation results. For C-PALM, we chose the hyperparameter λ such that the sparsity level of the estimated coefficients of the sources is about 20% and 60% for RT60 = 130, 250 ms respectively. Compared to the performances reported in [START_REF] Araki | The 2011 signal separation evaluation campaign (sisec2011):-audio source separation[END_REF], C-PALM obtains relatively good separation results, especially when the number of sources is N = 3.
H. Computational time
We conclude the experiment section by reporting the computational time of the presented algorithms for the synthesized mixtures in Table III. C-PALM has a relatively high computational cost, mainly because of the convolution operators involved in each iteration of the algorithm.
V. CONCLUSION
In this paper, we have developed several approaches for the blind source separation of underdetermined convolutive mixtures. Based on the sparsity assumption on the source signals and on the statistics of the room impulse response, we developed N-Regu, built on the multiplicative narrowband approximation, and C-PALM, built on the convolutive narrowband approximation. The numerical evaluations show the advantages of C-PALM for noiseless mixtures in strong reverberation scenarios. The experiments also show the good performance of N-Regu for noisy mixtures. The penalty parameter ρ(τ) in C-PALM has to be predefined, which makes C-PALM less suitable for low reverberation conditions. Future work will concentrate on the estimation of ρ(τ). In this paper, we used inter-frequency correlation permutation alignment for C-PALM. It would be interesting to exploit a TDOA-based permutation method for the convolutive narrowband approximation to improve C-PALM.
VI. APPENDIX
A. Derivation of N-Regu
We consider the following optimization problem:
$$\min_{\tilde{A}_f,\alpha_f}\; \frac{1}{2}\|\tilde{X}_f - \tilde{A}_f\alpha_f\|_F^2 + \frac{\mu}{2}\|\tilde{A}_f\|_F^2 + \lambda\|\alpha_f\|_1 + \imath_C(\tilde{A}_f). \qquad (26)$$
This optimization is equivalent to the problem (6): the indicator function $\imath_C(\tilde{A}_f)$ forces the normalization of each column of $\tilde{A}_f$, therefore the term $\frac{\mu}{2}\|\tilde{A}_f\|_F^2$ is a constant and does not change the minimizer. The reason for adding this term is purely algorithmic. We then solve the optimization (26) with BC-VMFB [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF].
Consider the general optimization problem
$$\min_{x,y}\; F(x) + Q(x,y) + G(y), \qquad (27)$$
where F (x) and G(y) are lower semicontinuous functions, Q(x, y) is a smooth function with Lipschitz gradient on any bounded set. BC-VMFB uses the following update rules to solve (27):
$$x^{(j+1)} = \operatorname*{argmin}_x\; F(x) + \langle x - x^{(j)}, \nabla_x Q(x^{(j)}, y^{(j)})\rangle + \frac{t^{1,(j)}}{2}\|x - x^{(j)}\|^2_{U^{1,(j)}}, \qquad (28)$$
$$y^{(j+1)} = \operatorname*{argmin}_y\; G(y) + \langle y - y^{(j)}, \nabla_y Q(x^{(j+1)}, y^{(j)})\rangle + \frac{t^{2,(j)}}{2}\|y - y^{(j)}\|^2_{U^{2,(j)}}, \qquad (29)$$
where U 1,(j) and U 2,(j) are positive definite matrices. x 2 U denotes the variable metric norm:
$$\|x\|^2_U = \langle x, Ux\rangle. \qquad (30)$$
With the variable metric norm, the proximal operator (8) can be generalized as:
$$\operatorname{prox}_{U,\psi}(x) := \operatorname*{argmin}_y\; \psi(y) + \frac{1}{2}\|y - x\|^2_U. \qquad (31)$$
Then (28) and (29) can be rewritten as follows:
$$x^{(j+1)} = \operatorname{prox}_{U^{1,(j)},\,F/t^{1,(j)}}\Big(x^{(j)} - \frac{1}{t^{1,(j)}}\big(U^{1,(j)}\big)^{-1}\nabla_x Q(x^{(j)}, y^{(j)})\Big), \qquad (32)$$
$$y^{(j+1)} = \operatorname{prox}_{U^{2,(j)},\,G/t^{2,(j)}}\Big(y^{(j)} - \frac{1}{t^{2,(j)}}\big(U^{2,(j)}\big)^{-1}\nabla_y Q(x^{(j+1)}, y^{(j)})\Big). \qquad (33)$$
It is shown in [START_REF] Chouzenoux | A block coordinate variable metric forward-backward algorithm[END_REF] that the sequence generated by the above update rules converges to a critical point of the problem (27).
For the problem (26), we make the following substitutions:
$$F(\alpha_f) = \lambda\|\alpha_f\|_1, \quad Q(\alpha_f, \tilde{A}_f) = \frac{1}{2}\|\tilde{X}_f - \tilde{A}_f\alpha_f\|_F^2 + \frac{\mu}{2}\|\tilde{A}_f\|_F^2, \quad G(\tilde{A}_f) = \imath_C(\tilde{A}_f). \qquad (34)$$
Denoting by $L^{(j)}$ the Lipschitz constant of $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{A}_f^{(j)})$, we have chosen:
$$U^{1,(j)} = L^{(j)} I, \quad U^{2,(j)} = \frac{\partial^2 Q(\tilde{A}_f, \alpha_f^{(j+1)})}{\partial^2 \tilde{A}_f} = \alpha_f^{(j+1)}\alpha_f^{(j+1)H} + \mu I, \quad t^{1,(j)} = t^{2,(j)} = 1. \qquad (35)$$
The update step of the mixing matrix can be written as:
$$\tilde{A}_f^{(j+1/2)} = \tilde{X}_f\,\alpha_f^{(j+1)H}\big(\alpha_f^{(j+1)}\alpha_f^{(j+1)H} + \mu I\big)^{-1}, \qquad \tilde{A}_f^{(j+1)} = \operatorname{prox}_{U^{2,(j)},\,\imath_C}\big(\tilde{A}_f^{(j+1/2)}\big). \qquad (36)$$
As the choice of the parameter µ does not change the minimizer of (26), by choosing µ sufficiently large, the update step of $\tilde{A}_f$ becomes:
$$\tilde{A}_f^{(j+1/2)} = P_C\big(\tilde{X}_f\,\alpha_f^{(j+1)H}\big). \qquad (37)$$
We obtain the N-Regu as shown in Algorithm 1.
B. Convolutive mixing operator and its adjoint operators
Given a signal s ∈ C T , and a convolution kernel h ∈ C L , the convolution can be written under the matrix form:
x = Hs = Sh , (38)
H ∈ C T ×T and S ∈ C T ×L being the corresponding circulant matrices.
The convolutive mixing operator can then be represented by
$$\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} & \cdots & H_{1N} \\ H_{21} & H_{22} & \cdots & H_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ H_{M1} & H_{M2} & \cdots & H_{MN} \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_N \end{bmatrix}, \qquad (39)$$
where s 1 , s 2 , . . . , s N ∈ C T are N source signals and x 1 , x 2 , . . . , x M ∈ C T are M observations. H mn is the convolution matrix from the n-th source to the m-th microphone. Thanks to these notations, the adjoint operator of convolutive mixing with respect to the mixing system is a linear operator C M ×T → C N ×T and can be represented by the following matrix multiplication:
$$\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_N \end{bmatrix} = \begin{bmatrix} H_{11}^H & H_{21}^H & \cdots & H_{M1}^H \\ H_{12}^H & H_{22}^H & \cdots & H_{M2}^H \\ \vdots & \vdots & \ddots & \vdots \\ H_{1N}^H & H_{2N}^H & \cdots & H_{MN}^H \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{bmatrix}. \qquad (40)$$
In order to coincide with the previous notations in [START_REF] Balazs | Adapted and adaptive linear time-frequency representations: a synthesis point of view[END_REF], we denote the above formulation as:
S = H H X. (41)
The adjoint operator of the convolutive mixture with respect to the sources can then be written as:
H = X S H , (42)
with
h mn = S H n x m . (43)
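The conventions behind (38)-(43) can be checked numerically. The sketch below builds the truncated convolutive mixing operator and its adjoint with respect to the sources, and verifies the adjoint identity ⟨mix(H, S), X⟩ = ⟨S, adj(H, X)⟩ on random real-valued data (so conjugation is trivial). Shapes, names and the truncation choice are illustrative assumptions.

```python
import numpy as np

def mix_time(H, S):
    # H: (M, N, Lh) filters, S: (N, T) sources -> X: (M, T), convolutions truncated to T
    M, N, Lh = H.shape
    T = S.shape[1]
    X = np.zeros((M, T))
    for m in range(M):
        for n in range(N):
            X[m] += np.convolve(H[m, n], S[n])[:T]
    return X

def adj_sources(H, X):
    # adjoint with respect to the sources, i.e. the map X -> S of (40)-(41)
    M, N, Lh = H.shape
    T = X.shape[1]
    S = np.zeros((N, T))
    for n in range(N):
        for m in range(M):
            S[n] += np.convolve(X[m], H[m, n][::-1])[Lh - 1:Lh - 1 + T]
    return S

# adjoint identity <mix(H,S), X> == <S, adj(H,X)> on random data
rng = np.random.default_rng(1)
H = rng.standard_normal((2, 3, 8))
S = rng.standard_normal((3, 64))
X = rng.standard_normal((2, 64))
print(np.allclose(np.sum(mix_time(H, S) * X), np.sum(S * adj_sources(H, X))))
```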
C. Derivation of C-PALM
The PALM algorithm [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] is designed to solve the nonconvex optimization problem in the general form (27) by the following update rules:
$$x^{(j+1)} = \operatorname*{argmin}_x\; F(x) + \langle x - x^{(j)}, \nabla_x Q(x^{(j)}, y^{(j)})\rangle + \frac{t^{1,(j)}}{2}\|x - x^{(j)}\|^2_2, \qquad (44)$$
$$y^{(j+1)} = \operatorname*{argmin}_y\; G(y) + \langle y - y^{(j)}, \nabla_y Q(x^{(j+1)}, y^{(j)})\rangle + \frac{t^{2,(j)}}{2}\|y - y^{(j)}\|^2_2, \qquad (45)$$
where j is the iteration index and $t^{1,(j)}$ and $t^{2,(j)}$ are two step parameters.
It is shown in [START_REF] Bolte | Proximal alternating linearized minimization for nonconvex and nonsmooth problems[END_REF] that the sequence generated by the above update rules converges to a critical point of the problem (27).
From the general optimization (27), we do the following substitutions:
$$F(\alpha_f) = \lambda\|\alpha_f\|_1, \quad Q(\alpha_f, \tilde{H}_f) = \frac{1}{2}\|\tilde{X}_f - \tilde{H}_f \star \alpha_f\|_F^2 + \mathcal{P}(\tilde{H}_f), \quad G(\tilde{H}_f) = \imath_C^{\mathrm{Conv}}(\tilde{H}_f), \qquad (46)$$
and the particular choices
$$t^{1,(j)} = L^{1,(j)}, \qquad t^{2,(j)} = L^{2,(j)}, \qquad (47)$$
where $L^{1,(j)}$ and $L^{2,(j)}$ are respectively the Lipschitz constants of $\nabla_{\alpha_f} Q(\alpha_f^{(j)}, \tilde{H}_f^{(j)})$ and $\nabla_{\tilde{H}_f} Q(\alpha_f^{(j+1)}, \tilde{H}_f^{(j)})$. We obtain the C-PALM algorithm presented in Algorithm 2.
D. Calculation of the Lipschitz constant in C-PALM
We present the calculation of the Lipschitz constant of the function $I(\tilde{H}_f) := \tilde{H}_f \star \alpha_f \star \alpha_f^H + \nabla_{\tilde{H}_f}\mathcal{P}(\tilde{H}_f) = \hat{H}_f + \nabla_{\tilde{H}_f}\mathcal{P}(\tilde{H}_f)$.
Let Ψ n denotes the circulant matrix associated with α n . If the synthesis coefficients of different sources are independent, we have E [Ψ i Ψ j ] = 0, for i = j.
Then, using similar notations as in Appendix VI-B, one can write Ĥ as:
ĥmn = Ψ H n Ψ n hm , (48)
Finally, Proposition 4 comes from the definition of the Lipschitz constant.
Fig. 1. Room configuration for synthesized mixtures.
Fig. 3. Separation performance of different algorithms with oracle permutation alignment as a function of the reverberation time RT60 in the noiseless case.
Fig. 4. Separation performance of different algorithms as a function of the sparsity level in the noiseless case. RT60 = 130 ms.
Fig. 6. Separation performance of different algorithms with oracle permutation alignment as a function of the input SNR for RT60 = 130 ms.
Fig. 8. Separation performance of different algorithms as a function of the reverberation time RT60 with input SNR = 15 dB.
Fig. 9. Different settings of source positions for synthesized mixtures without input noise.
Fig. 10. Separation performance of different algorithms for different source positions in the noiseless case. RT60 = 130 ms.
TABLE I
EXPERIMENTAL CONDITIONS

                              synthesized              live recorded
Number of microphones         M = 2                    M = 2
Number of sources             N = 3                    N = 3, 4
Duration of signals           6 s                      10 s
Reverberation time (RT60)     50, 130, 250, 400 ms     130, 250 ms
Sample rate                   11 kHz                   16 kHz
Microphone distance           4 cm                     5 cm, 1 m
STFT window type              Hann                     Hann
STFT window length            512 (46.5 ms)            2048 (128 ms)
STFT window shift             256 (23.3 ms)            512 (32 ms)
Fig. 2. Separation performance as a function of the reverberation time RT60 in the noiseless case (panels: SDR, SIR, ISR and SAR in dB versus RT60 in ms; compared methods: N-Regu with TDOA permutation, N-Regu with correlation permutation, C-PALM with correlation permutation, Bin-wise, Full-rank; the oracle-permutation variants of these methods appear in the corresponding panels of Fig. 3).
TABLE II
SEPARATION RESULTS OF C-PALM FOR LIVE RECORDED MIXTURES FROM SISEC2011 (SDR/SIR/ISR/SAR IN DB)

                     RT60 = 130 ms                                                RT60 = 250 ms
microphone space     5 cm                            1 m                          5 cm                          1 m
male3                7.65 / 11.38 / 12.10 / 10.65    7.53 / 11.27 / 11.77 / 10.58  5.20 / 7.67 / 9.01 / 8.62     4.98 / 10.62 / 6.67 / 7.04
female3              6.69 / 9.81 / 10.90 / 10.52     9.77 / 14.49 / 14.13 / 13.02  5.29 / 9.16 / 7.77 / 8.75     7.34 / 11.22 / 10.97 / 11.02
male4                3.25 / 4.65 / 6.09 / 6.01       2.34 / 2.15 / 5.16 / 5.47     2.10 / 1.79 / 4.63 / 5.49     3.08 / 4.22 / 6.00 / 6.11
female4              2.36 / 2.05 / 5.37 / 6.53       3.66 / 6.05 / 6.80 / 7.15     2.39 / 2.20 / 5.27 / 6.51     3.12 / 4.51 / 6.07 / 6.84
¹In this paper, the sparsity level is the percentage of zero elements in a vector or matrix. A higher sparsity level means a sparser vector or matrix.
Hedy Attouch
email: hedy.attouch@univ-montp2.fr
Alexandre Cabot
email: alexandre.cabot@u-bourgogne.fr
CONVERGENCE OF DAMPED INERTIAL DYNAMICS GOVERNED BY REGULARIZED MAXIMALLY MONOTONE OPERATORS
Keywords: asymptotic stabilization, damped inertial dynamics, Lyapunov analysis, maximally monotone operators, time-dependent viscosity, Yosida regularization
AMS subject classification: 37N40, 46N10, 49M30, 65K05, 65K10, 90C25
In this last paper, the authors considered the case γ(t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve, the results already present in the literature.
Introduction
Throughout this paper, $H$ is a real Hilbert space endowed with the scalar product $\langle\cdot,\cdot\rangle$ and the corresponding norm $\|\cdot\|$. Let $A : H \to 2^{H}$ be a maximally monotone operator. Given continuous functions $\gamma : [t_0, +\infty[ \to \mathbb{R}_+$ and $\lambda : [t_0, +\infty[ \to \mathbb{R}_+^*$, where $t_0$ is a fixed real number, we consider the second-order evolution equation
$$(\mathrm{RIMS})_{\gamma,\lambda}\qquad \ddot{x}(t) + \gamma(t)\dot{x}(t) + A_{\lambda(t)}(x(t)) = 0, \quad t \geq t_0,$$
where
$$A_\lambda = \frac{1}{\lambda}\left(I - (I + \lambda A)^{-1}\right)$$
is the Yosida regularization of A of index λ > 0 (see Appendix A.1 for its main properties). The terminology (RIMS) γ,λ is a shorthand for "Regularized Inertial Monotone System" with parameters γ, λ.
Thanks to the Lipschitz continuity properties of the Yosida approximation, this system falls within the framework of the Cauchy-Lipschitz theorem, which makes it a well-posed system for arbitrary Cauchy data. The above system involves two time-dependent positive parameters: the damping parameter γ(t), and the Yosida regularization parameter λ(t). We shall see that, under a suitable tuning of the parameters γ(t) and λ(t), the trajectories of (RIMS) γ,λ converge to solutions of the monotone inclusion 0 ∈ A(x).
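For readers who want to experiment numerically, here is a minimal sketch of the Yosida regularization computed through the resolvent, using A = ∂|·| on the real line as an assumed example (its resolvent is the soft-thresholding map); the function names are illustrative.

```python
import numpy as np

def yosida(resolvent, lam, x):
    # A_lambda(x) = (x - (I + lam A)^{-1} x) / lam, with the resolvent supplied as a function
    return (x - resolvent(lam, x)) / lam

def resolvent_abs(lam, x):
    # resolvent of A = subdifferential of |.| : the soft-thresholding map
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.linspace(-2.0, 2.0, 9)
print(yosida(resolvent_abs, 0.5, x))   # equals np.clip(x / 0.5, -1, 1), the Yosida approximation of sign
```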
Indeed, the design of rapidly convergent dynamics and algorithms to solve monotone inclusions is a difficult problem of fundamental importance in many domains: optimization, equilibrium theory, economics and game theory, partial differential equations, statistics, among other subjects. Trajectories of
(RIMS) γ,λ do so in a robust manner. Indeed, when A is the subdifferential of a closed convex proper function Φ : H → R ∪ {+∞}, we will obtain rates of convergence of the values, which are comparable to the accelerated method of Nesterov. With this respect, as a main advantage of our approach, we can handle nonsmooth functions Φ.
1.1. Introducing the dynamics. The (RIMS) γ,λ system is a natural development of some recent studies concerning rapid inertial dynamics for convex optimization and monotone equilibrium problems. We will rely heavily on the techniques developed in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] concerning the general damping coefficient γ(t), and in [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] concerning the general Yosida regularization parameter λ(t).
1.1.1. General damping coefficient γ(t). Some simple observations lead to the introduction of quantities that play a central role in our analysis. Taking A = 0, then A λ = 0, and (RIMS) γ,λ boils down to the linear differential equation ẍ(t) + γ(t) ẋ(t) = 0.
Let us multiply this equality by the integrating factor $p(t) = e^{\int_{t_0}^t \gamma(\tau)d\tau}$, so that it becomes $\frac{d}{dt}\big(p(t)\dot{x}(t)\big) = 0$. Integrating, the resulting trajectory converges as soon as
$$(H_0)\qquad \int_{t_0}^{+\infty} \frac{ds}{p(s)} < +\infty.$$
Throughout the paper, we always assume that condition $(H_0)$ is satisfied. For $s \geq t_0$, we then define the quantity $\Gamma(s)$ by
$$\Gamma(s) = p(s)\int_s^{+\infty} \frac{du}{p(u)}.$$
The function $s \mapsto \Gamma(s)$ plays a key role in the asymptotic behavior of the trajectories of $(\mathrm{RIMS})_{\gamma,\lambda}$. This was brought to light by the authors in the potential case, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] (no regularization process was used in this work). The theorem below gathers the main results obtained in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] for a gradient operator A = ∇Φ. It highlights the basic assumptions on the function γ(t) which give rates of convergence of the values.
Theorem (Attouch and Cabot [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]). Let Φ : H → R be a convex function of class C¹ such that argmin Φ ≠ ∅. Let us assume that γ : [t₀, +∞[→ R₊ is a continuous function satisfying:
(i) $\int_{t_0}^{+\infty} \frac{ds}{p(s)} < +\infty$; (ii) there exist $t_1 \geq t_0$ and $m < \frac{3}{2}$ such that $\gamma(t)\Gamma(t) \leq m$ for every $t \geq t_1$; (iii) $\int_{t_0}^{+\infty} \Gamma(s)\, ds = +\infty$.
Then every solution trajectory $x : [t_0, +\infty[ \to H$ of
$$(\mathrm{IGS})_\gamma\qquad \ddot{x}(t) + \gamma(t)\dot{x}(t) + \nabla\Phi(x(t)) = 0$$
converges weakly toward some $x^* \in \operatorname{argmin}\Phi$, and satisfies the following rates of convergence:
$$\Phi(x(t)) - \min_H \Phi = o\left(\frac{1}{\int_{t_0}^t \Gamma(s)\,ds}\right) \quad\text{and}\quad \|\dot{x}(t)\|^2 = o\left(\frac{1}{\int_{t_0}^t \Gamma(s)\,ds}\right) \quad \text{as } t \to +\infty.$$
The (IGS) γ system was previously studied by Cabot, Engler and Gadat [START_REF] Cabot | On the long time behavior of second order differential equations with asymptotically small dissipation[END_REF][START_REF] Cabot | Second order differential equations with asymptotically small dissipation and piecewise flat potentials[END_REF] in the case of a vanishing damping coefficient γ(t) and for a possibly nonconvex potential Φ. The importance of the dynamics (IGS) γ in the case γ(t) = α/t (α > 1) was highlighted by Su, Boyd and Candés in [START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF]. They showed that taking α = 3 gives a continuous version of the accelerated gradient method of Nesterov. The corresponding rate of convergence for the values is at most of order O(1/t 2 ) as t → +∞. Let us show how this result can be obtained as a consequence of the above general theorem. Indeed, taking γ(t) = α/t gives after some elementary computation
$$\Gamma(t) = \left(\frac{t}{t_0}\right)^{\alpha} \int_t^{+\infty} \left(\frac{t_0}{\tau}\right)^{\alpha} d\tau = t^{\alpha}\left[\frac{\tau^{-\alpha+1}}{-\alpha+1}\right]_{t}^{+\infty} = \frac{t}{\alpha-1}.$$
Then, the condition $\gamma(t)\Gamma(t) \leq m$ with $m < \frac{3}{2}$ is equivalent to $\alpha > 3$. As a consequence, for $\gamma(t) = \alpha/t$ and $\alpha > 3$, we obtain the convergence of the trajectories of $(\mathrm{IGS})_\gamma$ and the rates of convergence
$$\Phi(x(t)) - \min_H \Phi = o\left(\frac{1}{t^2}\right) \quad\text{and}\quad \|\dot{x}(t)\| = o\left(\frac{1}{t}\right) \quad \text{as } t \to +\infty.$$
This result was first established in [START_REF] Attouch | Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping[END_REF] and [START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF]. Because of its importance, a rich literature has been devoted to the algorithmic versions of these results, see [START_REF] Attouch | Fast convergence of inertial dynamics and algorithms with asymptotic vanishing damping[END_REF][START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF][START_REF] Beck | A fast iterative shrinkage-thresholding algorithm for linear inverse problems[END_REF][START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF][START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF] and the references therein.
The above theorem relies on energetic arguments that are not available in the general framework of monotone operators. It ensues that the expected results in this context are weaker than in the potential case, and require different techniques. This is where the Yosida regularization comes into play.
1.1.2. General regularization parameter λ(t). Our approach is in line with Attouch and Peypouquet [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] who studied the system (RIMS) γ,λ with a general maximally monotone operator, and in the particular case γ(t) = α/t (the importance of this system has been stressed just above). This approach can be traced back to Álvarez-Attouch [START_REF] Álvarez | The heavy ball with friction dynamical system for convex constrained minimization problems, Optimization[END_REF] and Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF] who studied the equation
ẍ(t) + γ ẋ(t) + A(x(t)) = 0,
where A is a cocoercive operator. Several variants of the above equation were considered by Bot and Csetnek (see [START_REF] Bot | Second order forward-backward dynamical systems for monotone inclusion problems[END_REF] for the case of a time-dependent coefficient γ(t), and [START_REF] Bot | Approaching monotone inclusion problems via second order dynamical systems with linear and anisotropic damping[END_REF] for a linear anisotropic damping). Cocoercivity plays an important role, not only to ensure the existence of solutions, but also in analyzing their long-term behavior. Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF] proved the weak convergence of the trajectories to zeros of A if the cocoercivity parameter λ and the damping coefficient γ satisfy the condition λγ 2 > 1.
Taking into account that for λ > 0, the operator A λ is λ-cocoercive and that A -1 λ (0) = A -1 (0) (see Appendix A.1), we immediately deduce that, under the condition λγ 2 > 1, each trajectory of ẍ(t) + γ ẋ(t) + A λ (x(t)) = 0 converges weakly to a zero of A. In the quest for a faster convergence, in the case γ(t) = α/t, Attouch-Peypouquet introduced a time-dependent regularizing parameter λ(•) satisfying
λ(t) × α 2 t 2 > 1
for t ≥ t 0 . So doing, in the case of a general maximal monotone operator, they were able to prove the asymptotic convergence of the trajectories to zeros of A. Our approach will consist in extending these results to the case of a general damping coefficient γ(t), taking advantage of the techniques developed in the above mentioned papers [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF] and [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF].
1.2. Organization of the paper. The paper is divided into three parts. Part A concerns a general maximally monotone operator A. We show that a suitable tuning of the damping parameter and of the Yosida regularization parameter, gives the weak convergence of the trajectories. Then, we specialize our results to some important cases, including the case of the continuous version of the Nesterov method, that is, γ(t) = α t . In part B, we examine the ergodic convergence properties of the trajectories. In part C, we consider the case where A is the subdifferential of a closed convex proper function Φ : H → R ∪ {+∞}. In this case, we will obtain rates of convergence of the values. In the Appendix we have collected several lemmas related to Yosida's approximation, to Moreau's envelopes and to the study of scalar differential inequalities that play a central role in the Lyapunov analysis of our system.
PART A: DYNAMICS FOR A GENERAL MAXIMALLY MONOTONE OPERATOR
In this part, A : H → 2 H is a general maximally monotone operator such that zerA = ∅, and t 0 is a fixed real number.
Convergence results
Let us first establish the existence and uniqueness of a global solution to the Cauchy problem associated with equation (RIMS) γ,λ .
Proposition 2.1. For any $(x_0, v_0) \in H \times H$, there exists a unique global solution $x \in C^2([t_0, +\infty[, H)$ of equation $(\mathrm{RIMS})_{\gamma,\lambda}$ satisfying the initial conditions $x(t_0) = x_0$ and $\dot{x}(t_0) = v_0$.
Proof. The argument is standard and consists in writing (RIMS) γ,λ as a first-order system in H × H. By setting
$$X(t) = \begin{pmatrix} x(t) \\ \dot{x}(t) \end{pmatrix} \quad\text{and}\quad F(t, u, v) = \begin{pmatrix} v \\ -\gamma(t)v - A_{\lambda(t)}(u) \end{pmatrix},$$
equation $(\mathrm{RIMS})_{\gamma,\lambda}$ amounts to the first-order differential system $\dot{X}(t) = F(t, X(t))$. Owing to the Lipschitz continuity properties of the Yosida approximation, the conclusion then follows from the Cauchy-Lipschitz theorem.
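A minimal numerical illustration of this first-order reformulation, assuming A = ∂|·| on the real line (whose Yosida approximation has the closed form used below) and a tuning of the form γ(t) = α/t, λ(t) = βt^r as discussed later in the paper; the parameter values and integrator settings are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A_lam(lam, x):
    # Yosida approximation of A = subdifferential of |.| on the real line
    return np.clip(x / lam, -1.0, 1.0)

def rims_rhs(t, X, alpha=4.0, beta=1.0, r=2.0):
    # first-order form of x'' + (alpha/t) x' + A_{beta t^r}(x) = 0
    x, v = X
    return [v, -(alpha / t) * v - A_lam(beta * t ** r, x)]

sol = solve_ivp(rims_rhs, (1.0, 200.0), [2.0, 0.0], max_step=0.05)
print(sol.y[0, -1], sol.y[1, -1])   # position and velocity at the final time
```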
(ii) every weak sequential limit point of x(t), as t → +∞, belongs to S. Then x(t) converges weakly as t → +∞ to a point in S.
We associate to the continuous function
γ : [t 0 , +∞[→ R + the function p : [t 0 , +∞[→ R * + given by p(t) = e t t 0 γ(τ ) dτ for every t ≥ t 0 . Under assumption (H 0 ), the function Γ : [t 0 , +∞[→ R *
+ is then defined by Γ(s) = +∞ s du p(u) p(s) for every s ≥ t 0 . Besides the function Γ, to analyze the asymptotic behavior of the trajectory of the system (RIMS) γ,λ we will also use the quantity Γ(s, t), which is defined by, for any s, t p(s). Suppose that there exists ε ∈]0, 1[ such that for t large enough,
(H 1 ) (1 -ε)λ(t)γ(t) ≥ 1 + d dt (λ(t)γ(t)) Γ(t).
Then for any global solution x(.) of (RIMS) γ,λ , we have
(i) +∞ t0 λ(s)γ(s) ẋ(s)
It ensues that ḧ(t) + γ(t) ḣ(t) = ẋ(t) 2 + ẍ(t) + γ(t) ẋ(t), x(t) -z = ẋ(t) 2 -A λ(t) (x(t)), x(t) -z . (4)
Since z ∈ zerA = zerA λ(t) , we have A λ(t) (z) = 0. We then deduce from the λ(t)-cocoercivity of
A λ(t) that A λ(t) (x(t)), x(t) -z ≥ λ(t) A λ(t) (x(t)) 2 , whence (5) ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 -λ(t) A λ(t) (x(t)) 2 .
Writing that A λ(t) (x(t)) = -ẍ(t) -γ(t) ẋ(t), we have
λ(t) A λ(t) (x(t)) 2 = λ(t) ẍ(t) + γ(t) ẋ(t) 2 = λ(t) ẍ(t) 2 + λ(t)γ(t) 2 ẋ(t) 2 + 2 λ(t)γ(t) ẍ(t), ẋ(t) ≥ λ(t)γ(t) 2 ẋ(t) 2 + λ(t)γ(t) d dt ẋ(t) 2 = λ(t)γ(t) 2 - d dt (λ(t)γ(t)) ẋ(t) 2 + d dt (λ(t)γ(t) ẋ(t) 2 ).
In view of (5), we infer that
ḧ(t) + γ(t) ḣ(t) ≤ -λ(t)γ(t) 2 - d dt (λ(t)γ(t)) -1 ẋ(t) 2 - d dt (λ(t)γ(t) ẋ(t) 2 ). Let's use Lemma B.1 (i) with g(t) = -λ(t)γ(t) 2 -d dt (λ(t)γ(t)) -1 ẋ(t) 2 -d dt (λ(t)γ(t) ẋ(t) 2 ). Set- ting k(t) := h(t 0 ) + ḣ(t 0 ) t t0 du p(u)
, we obtain for every t ≥ t 0 ,
h(t) ≤ k(t) - t t0 Γ(s, t) λ(s)γ(s) 2 - d ds (λ(s)γ(s)) -1 ẋ(s) 2 + d ds (λ(s)γ(s) ẋ(s) 2 ) ds = k(t) - t t0 Γ(s, t) λ(s)γ(s) 2 - d ds (λ(s)γ(s)) -1 ẋ(s) 2 ds -Γ(s, t)λ(s)γ(s) ẋ(s) 2 t t0 + t t0 d ds Γ(s, t) λ(s)γ(s) ẋ(s) 2 ds.
Let us observe that Γ(t, t) = 0 and that
d ds Γ(s, t) = d ds t s du p(u) p(s) = -1 + γ(s)Γ(s, t).
Then it follows from the above inequality that
h(t) ≤ k(t) - t t0 λ(s)γ(s) -Γ(s, t) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds +Γ(t 0 , t)λ(t 0 )γ(t 0 ) ẋ(t 0 ) 2 .
Since Γ(t 0 , t) ≤ Γ(t 0 ) and h(t) ≥ 0, we deduce that (6)
t t0 λ(s)γ(s) -Γ(s, t) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds ≤ C 1 ,
with
C 1 := h(t 0 ) + | ḣ(t 0 )| +∞ t0 du p(u) + Γ(t 0 )λ(t 0 )γ(t 0 ) ẋ(t 0 ) 2 . Now observe that Γ(s, t) 1 + d ds (λ(s)γ(s)) ≤ Γ(s, t) 1 + d ds (λ(s)γ(s)) ≤ Γ(s) 1 + d ds (λ(s)γ(s)) .
We then infer from (6) that
t t0 λ(s)γ(s) -Γ(s) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds ≤ C 1 .
By assumption, inequality (H 1 ) holds true for t large enough, say t ≥ t 1 . It ensues that for t ≥ t 1 ,
t t1 ελ(s)γ(s) ẋ(s) 2 ds ≤ C 1 -C 2 ,
with
C 2 = t1 t0 λ(s)γ(s) -Γ(s) 1 + d ds (λ(s)γ(s)) ẋ(s) 2 ds.
Taking the limit as t → +∞, we find
+∞ t1 λ(s)γ(s) ẋ(s) 2 ds ≤ 1 ε (C 1 -C 2 ) < +∞.
By using again (H 1 ), we deduce that +∞ t1 Γ(s) ẋ(s) 2 ds < +∞.
(ii) Let us come back to inequality [START_REF] Attouch | Asymptotic behavior of coupled dynamical systems with multiscale aspects[END_REF]. Using Lemma B.1 (i) with g(t) = ẋ(t) 2 -λ(t) A λ(t) (x(t)) 2 , we obtain for every t ≥ t 0 ,
h(t) ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s, t) ẋ(s) 2 -λ(s) A λ(s) (x(s)) 2 ds.
Since h(t) ≥ 0 and Γ(s, t) ≤ Γ(s), we deduce that
t t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 Γ(s) ẋ(s) 2 ds.
Recalling from (i) that +∞ t0
Γ(s) ẋ(s) 2 ds < +∞, we infer that for every t ≥ t 0 ,
t t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 ,
where we have set
C 3 := h(t 0 ) + | ḣ(t 0 )| +∞ t0 du p(u) + +∞ t0 Γ(s) ẋ(s) 2 ds.
Since Γ(s, t) = 0 for s ≥ t, this yields in turn
+∞ t0 Γ(s, t)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 .
Letting t tend to +∞, the monotone convergence theorem then implies that
+∞ t0 Γ(s)λ(s) A λ(s) (x(s)) 2 ds ≤ C 3 < +∞.
(iii) From inequality (5), we derive that
ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 on [t 0 , +∞[. Recall from (i) that +∞ t0
Γ(s) ẋ(s) 2 ds < +∞. Applying Lemma B.1 (ii) with g(t) = ẋ(t) 2 , we infer that lim t→+∞ h(t) exists. Thus, we have obtained that lim t→+∞ x(t) -z exists for every z ∈ zerA, whence in particular the boundedness of the trajectory x(•).
(iv) Using that the operator A λ(t) is 1 λ(t) -Lipschitz continuous and that A λ(t) (z) = 0, we obtain that This proves the first inequality of (iv). For the second one, take the norm of each member of the equality ẍ(t) = -γ(t) ẋ(t) -A λ(t) (x(t)). The triangle inequality yields
(7) A λ(t) (x(t)) ≤ 1 λ(t) x(t) -z ≤ C 4 λ(t
ẍ(t) ≤ γ(t) ẋ(t) + A λ(t) (x(t)) .
The announced majorization of ẍ(t) then follows from ( 7) and ( 8).
(v) Recall the estimate of (ii) that we write as ( 9)
+∞ t0 Γ(s) λ(s) u(s) 2 ds < +∞, with the function u : [t 0 , +∞[→ H defined by u(t) = λ(t)A λ(t) (x(t))
. By applying [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF]Lemma A.4] with γ = λ(t), δ = λ(s), x = x(t) and y = x(s) with s, t ≥ t 0 , we find
λ(t)A λ(t) (x(t)) -λ(s)A λ(s) (x(s)) ≤ 2 x(t) -x(s) + 2 x(t) -z |λ(t) -λ(s)| λ(t) .
This shows that the map t → λ(t)A λ(t) (x(t)) is locally Lipschitz continuous, hence almost everywhere differentiable on [t 0 , +∞[. Dividing by t -s with t = s, and letting s tend to t, we infer that
u(t) = d dt (λ(t)A λ(t) (x(t))) ≤ 2 ẋ(t) + 2 x(t) -z | λ(t)| λ(t) ,
for almost every t ≥ t 0 . In view of (8), we deduce that for almost every t large enough,
u(t) ≤ 2 C 5 p(t) t t0 p(s) λ(s) ds + 2 C 4 | λ(t)| λ(t) , with C 4 = sup t≥t0 x(t) -z < +∞.
Recalling the assumption (H 2 ), we obtain the existence of C 6 ≥ 0 such that for almost every t large enough
u(t) ≤ C 6 Γ(t) λ(t) .
Then we have
d dt u(t) 3 ≤ 3 u(t) u(t) 2 ≤ 3 C 6 Γ(t) λ(t) u(t) 2 .
Taking account of estimate [START_REF] Baillon | Une remarque sur le comportement asymptotique des semigroupes non linéaires[END_REF], this shows that
d dt u(t) 3 + ∈ L 1 (t 0 , +∞).
From a classical result, this implies that lim t→+∞ u(t) 3 exists, which entails in turn that lim t→+∞ u(t) exists. Using again the estimate ( 9), together with the assumption (H 3 ), we immediately conclude that lim t→+∞ u(t) = 0. (vi) To prove the weak convergence of x(t) as t → +∞, we use the Opial lemma with S = zerA. Item (iii) shows the first condition of the Opial lemma. For the second one, let t n → +∞ be such that x(t n ) x weakly as n → +∞. By (v), we have lim n→+∞ λ(t n )A λ(tn) (x(t n )) = 0 strongly in H. Since the function λ is minorized by some positive constant on [t 0 , +∞[, we also have lim n→+∞ A λ(tn) (x(t n )) = 0 strongly in H. Passing to the limit in
A λ(tn) (x(t n )) ∈ A x(t n ) -λ(t n )A λ(tn) (x(t n )) ,
and invoking the graph-closedness of the maximally monotone operator A for the weak-strong topology in H × H, we find 0 ∈ A(x). This shows that x ∈ zerA, which completes the proof.
(vii) Let us now assume that +∞ t0 Γ(s) λ(s) ds < +∞. Recalling inequality [START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF], we deduce that
+∞ t0 Γ(s) A λ(s) (x(s)) ds < +∞.
By applying Lemma B.2 with F (t) = -A λ(t) (x(t)), we obtain that +∞ t0 ẋ(s) ds < +∞, and hence x(t) converges strongly as t → +∞ toward some x ∞ ∈ H.
Remark 2.4. When +∞ t0 Γ(s)
λ(s) ds < +∞, the trajectories of (RIMS) γ,λ have a finite length, and hence are strongly convergent. However, the limit point is not a zero of the operator A in general.
Let us now particularize Theorem 2.3 to the case of a constant parameter λ > 0. In this case, the operator arising in equation (RIMS) γ,λ is constant and equal to the λ-cocoercive operator A λ . On the other hand, it is well-known that every λ-cocoercive operator B : H → H can be viewed as the Yosida regularization A λ of some maximally monotone operator A : H → 2 H , see [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.20]. This leads to the following statement.
Corollary 2.5. Let λ > 0 and let B : H → H be a λ-cocoercive operator such that zerB ≠ ∅. Let γ : [t₀, +∞[→ R₊ be a continuous function, and suppose that there exists ε ∈ ]0, 1[ such that for s large enough,
$$(1-\varepsilon)\lambda\gamma(s) \geq \big(1 + \lambda|\dot{\gamma}(s)|\big)\,\Gamma(s). \qquad (10)$$
Then for any global solution x(.) of
ẍ(t) + γ(t) ẋ(t) + B(x(t)) = 0, t ≥ t 0 ,
we have
(i) +∞ t0
γ(s) ẋ(s) 2 ds < +∞, and as a consequence
+∞ t0 Γ(s) ẋ(s) 2 ds < +∞. (ii) +∞ t0 Γ(s) B(x(s)) 2 ds < +∞.
(iii) For any z ∈ zerB, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
(iv) There exists C ≥ 0 such that for t large enough,
ẋ(t) ≤ C ∆(t) and ẍ(t) ≤ C γ(t)∆(t) + C.
Assuming that +∞ t0 Γ(s) ds = +∞, and that ∆(t) = O(Γ(t)) as t → +∞, the following holds
(v) lim t→+∞ B(x(t)) = 0. (vi) There exists x ∞ ∈ zerB such that x(t) x ∞ weakly in H as t → +∞.
Finally assume that +∞ t0 Γ(s) ds < +∞. Then we obtain
(vii) +∞ t0
ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H.
Assume now that the function γ is constant, say γ(t) ≡ γ > 0. In this case, it is easy to check that ( 11)
Γ(t) ∼ 1 γ and ∆(t) ∼ 1 γ as t → +∞, see Proposition 3.1.
As a consequence of Corollary 2.5, we then obtain the following result that was originally discovered by Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF].
Corollary 2.6 (Attouch-Maingé [START_REF] Attouch | Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects[END_REF]). Let λ > 0 and let B : H → H be a λ-cocoercive operator such that zerB ≠ ∅. Let γ > 0 be such that λγ² > 1. Then for any global solution x(.) of
(12) ẍ(t) + γ ẋ(t) + B(x(t)) = 0, t ≥ t 0 ,
we have
(i) +∞ t0 ẋ(s) 2 ds < +∞. (ii) +∞ t0 B(x(s)) 2 ds < +∞.
(iii) For any z ∈ zerB, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
(iv) lim t→+∞ ẋ(t) = 0 and lim t→+∞ ẍ(t) = 0. (v) lim t→+∞ B(x(t)) = 0. (vi) There exists x ∞ ∈ zerB such that x(t)
x ∞ weakly in H as t → +∞.
Proof. Since γ(t) ≡ γ > 0, we have the equivalences (11) as t → +∞. It ensues that condition (10) is guaranteed by λγ² > 1. All points are then obvious consequences of Corollary 2.5, except for (iv). Corollary 2.5 (iv) shows that the acceleration ẍ is bounded on [t₀, +∞[. Taking account of (i), we deduce classically that lim_{t→+∞} ẋ(t) = 0. In view of equation (12) and the fact that lim_{t→+∞} B(x(t)) = 0 by (v), we conclude that lim_{t→+∞} ẍ(t) = 0.
Application to particular classes of functions γ and λ
We now look at special classes of functions γ and λ, for which we are able to estimate precisely the quantities +∞ t ds p(s) and t t0 p(s) λ(s) ds as t → +∞. This consists of the differentiable functions γ, λ
: [t₀, +∞[→ ℝ₊* satisfying
$$\lim_{t\to+\infty} \frac{\dot{\gamma}(t)}{\gamma(t)^2} = -c \quad\text{and}\quad \lim_{t\to+\infty} \frac{\frac{d}{dt}(\lambda(t)\gamma(t))}{\lambda(t)\gamma(t)^2} = -c', \qquad (13)$$
for some c ∈ [0, 1[ and c′ > −1. Some properties of the functions γ satisfying the first condition above were studied by Attouch-Cabot [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF], in connection with the asymptotic behavior of the inertial gradient system (IGS)_γ. The next proposition extends some of these properties. We now show that the key condition (H₁) of Theorem 2.3 takes a simple form for functions γ and λ satisfying conditions (13).
Proposition 3.2. Let γ, λ : [t₀, +∞[→ ℝ₊* be two differentiable functions satisfying conditions (13) for some c ∈ [0, 1[ and c′ ∈ ]−1, 1[ such that |c′| < 1 − c. Then condition (H₁) is equivalent to
$$\liminf_{t\to+\infty} \lambda(t)\gamma(t)^2 > \frac{1}{1-c-|c'|}. \qquad (15)$$
Proof. The inequality arising in condition (H 1 ) can be rewritten as
(16) (1 -ε)λ(t) γ(t) Γ(t) - d dt (λ(t)γ(t)) ≥ 1.
The assumption lim t→+∞ γ(t)
γ(t) 2 = -c implies that Γ(t) ∼ 1 (1-c) γ(t) as t → +∞, see Proposition 3.1 (i). It ensues that (17) λ(t) γ(t) Γ(t) = (1 -c)λ(t)γ(t) 2 + o(λ(t)γ(t) 2 ) as t → +∞.
On the other hand, we deduce from the second condition of ( 13) that
(18) d dt (λ(t)γ(t)) = |c |λ(t)γ(t) 2 + o(λ(t)γ(t) 2 ) as t → +∞.
In view of ( 17) and ( 18), inequality [START_REF] Brézis | Nonlinear ergodic theorems[END_REF] amounts to
λ(t)γ(t) 2 [(1 -ε)(1 -c) -|c | + o(1)] ≥ 1 as t → +∞. Therefore condition (H 1 ) is equivalent to the existence of ε ∈]0, 1 -c -|c |[ such that λ(t)γ(t) 2 [1 -c -|c | -ε ] ≥ 1,
for t large enough. This last condition is equivalent to [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF], which ends the proof.
Corollary 3.3. Let A : H → 2^H be a maximally monotone operator such that zerA ≠ ∅, and let γ, λ : [t₀, +∞[→ ℝ₊* be two differentiable functions satisfying conditions (13) for some c ∈ [0, 1[ and c′ ∈ ]−1, 1[ such that |c′| < 1 − c. Assume moreover that
$$\liminf_{t\to+\infty} \lambda(t)\gamma(t)^2 > \frac{1}{1-c-|c'|}.$$
Then for any global solution x(.) of (RIMS) γ,λ , we have
(i) +∞ t0 λ(s)γ(s) ẋ(s) 2 ds < +∞. (ii) +∞ t0 λ(s) γ(s) A λ(s) (x(s)) 2 ds < +∞.
(iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
(iv) ẋ(t) = O 1 λ(t)γ(t)
and
ẍ(t) = O 1 λ(t)
as t → +∞.
Assuming that
+∞ t0 1 λ(s)γ(s) ds = +∞ and that | λ(t)| = O 1 γ(t)
as t → +∞, the following holds
(v) lim t→+∞ λ(t)A λ(t) (x(t)) = 0. (vi)
Γ(t) ∼ 1 (1 -c) γ(t) and 1 p(t) t t0 p(s) λ(s) ds ∼ 1 (1 + c )λ(t)γ(t) as t → +∞.
It ensues that the first condition of (H 2 ) is automatically satisfied, while the second one is given by
| λ(t)| = O 1 γ(t)
as t → +∞. Condition (H 3 ) is implied by the assumption +∞ t0 1 λ(s)γ(s) ds = +∞. Items (i)-(vii) follow immediately from the corresponding points in Theorem 2.3.
Let us now particularize to the case γ(t) = α t q and λ(t) = β t r , for some α, β > 0, q ≥ -1 and r ∈ R.
Corollary 3.4. Let A : H → 2^H be a maximally monotone operator such that zerA ≠ ∅. Assume that γ(t) = α t^q and λ(t) = β t^r for every t ≥ t₀ > 0. Suppose that (q, r) ∈ ]−1, +∞[×ℝ is such that 2q + r ≥ 0, and that (α, β) ∈ ℝ₊* × ℝ₊* satisfies α²β > 1 if 2q + r = 0 (no condition if 2q + r > 0). Then for any global solution x(.) of (RIMS)_{γ,λ}, we have
(i) +∞ t0 s q+r ẋ(s) 2 ds < +∞. (ii) +∞ t0 s r-q A λ(s) (x(s)) 2 ds < +∞.
(iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
(iv) ẋ(t) = O 1 t q+r and ẍ(t) = O 1 t r as t → +∞. Assuming that q + r ≤ 1, the following holds (v) lim t→+∞ t r A λ(t) (x(t)) = 0. (vi) If r ≥ 0, there exists x ∞ ∈ zerA such that x(t)
x ∞ weakly in H as t → +∞. Finally assume that q + r > 1. Then we obtain (vii) +∞ t0 ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H.
Proof. Since q > -1, the first (resp. second) condition of ( 13) is satisfied with c = 0 (resp. c = 0). On the other hand, we have λ(t)γ(t
) 2 = α 2 β t 2q+r , hence lim t→+∞ λ(t)γ(t) 2 = +∞ if 2q + r > 0 α 2 β if 2q + r = 0.
It ensues that the condition lim inf t→+∞ λ(t)γ(t) 2 > 1 is guaranteed by the hypotheses of Corollary 3.4. Conditions When q = r = 0, the functions γ and λ are constant: γ(t) ≡ α > 0 and λ(t) ≡ β > 0. We then recover the result of [6, Theorem 2.1] with the key condition α 2 β > 1. To finish, let us consider the case q = -1, thus leading to a damping parameter of the form γ(t) = α t . This case was recently studied by Attouch and Peypouquet [START_REF] Attouch | Convergence of inertial dynamics and proximal algorithms governed by maximal monotone operators[END_REF] in the framework of Nesterov's accelerated methods.
Corollary 3.5. Let A : H → 2^H be a maximally monotone operator such that zerA ≠ ∅. Let r ≥ 2, α > r and β ∈ ℝ₊* be such that β > 1/(α(α−r)) if r = 2 (no condition on β if r > 2). Assume that γ(t) = α/t and λ(t) = β t^r for every t ≥ t₀ > 0. Then for any global solution x(.) of (RIMS)_{γ,λ}, we have
(i) +∞ t0 s r-1 ẋ(s) 2 ds < +∞. (ii) +∞ t0 s r+1 A λ(s) (x(s)) 2 ds < +∞.
(iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
(iv) ẋ(t) = O 1 t r-1 and ẍ(t) = O 1 t r as t → +∞.
Assuming that r = 2, the following holds (v) lim t→+∞ t 2 A λ(t) (x(t)) = 0. (vi) There exists x ∞ ∈ zerA such that x(t)
x ∞ weakly in H as t → +∞. Finally assume that r > 2. Then we obtain (vii) +∞ t0 ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H.
Proof. The first (resp. second) condition of ( 13) is satisfied with c = 1 α (resp. c = 1-r α ). Since r ≥ 2 and α > r, we have
c ∈]0, 1/2[ and |c | = r -1 α < α -1 α = 1 -c.
On the other hand, observe that λ(t)γ(t
) 2 = α 2 β t r-2 , hence lim t→+∞ λ(t)γ(t) 2 = +∞ if r > 2 α 2 β if r = 2. Condition lim inf t→+∞ λ(t)γ(t) 2 > 1 1-c-|c | is automatically satisfied if r > 2, while it amounts to α 2 β > 1 1 -1 α -r-1 α = α α -r ⇐⇒ β > 1 α(α -r) if r = 2.
Items (i)-(vii) follow immediately from the corresponding points in Corollary 3.3.
Taking r = 2 in the previous corollary, we recover the result of [8, Theorem 2.1] as a particular case.
PART B: ERGODIC CONVERGENCE RESULTS
Let A : H → 2 H be a maximally monotone operator. The trajectories associated to the semigroup of contractions generated by A are known to converge weakly in average toward some zero of A, cf. the seminal paper by Brezis and Baillon [START_REF] Baillon | Une remarque sur le comportement asymptotique des semigroupes non linéaires[END_REF]. Our purpose in this part of the paper is to study the ergodic convergence of the solutions of the system (RIMS) γ,λ . When the regularizing parameter λ(•) is minorized by some positive constant, it is established in part A that the trajectories of (RIMS) γ,λ do converge weakly toward a zero of A, see Theorem 2.3 (vi). Our objective is to show that weak ergodic convergence can be expected when the regularization parameter λ(t) tends toward 0 as t → +∞. The key ingredient is the use of some suitable ergodic variant of the Opial lemma.
4. Weak ergodic convergence of the trajectories
4.1. Ergodic variants of Opial's lemma. Ergodic versions of the Opial lemma were derived by Brézis-Browder [START_REF] Brézis | Nonlinear ergodic theorems[END_REF] and Passty [START_REF] Passty | Ergodic convergence to a zero of the sum of monotone operators in Hilbert space[END_REF]. Given a measurable kernel Λ : [t₀, +∞[×[t₀, +∞[→ ℝ₊, we associate to a continuous map x the averaged trajectory
$$\bar{x}(t) = \int_{t_0}^{+\infty} \Lambda(s,t)\, x(s)\, ds. \qquad (21)$$
Lemma B.4 in the appendix shows that the map x̄ is well-defined and bounded, and that convergence of x(t) as t → +∞ implies convergence of x̄(t) toward the same limit (Cesàro property). The extension of the Opial lemma to a general averaging process satisfying (19) and (20) is given hereafter. This result was established in [START_REF] Attouch | Asymptotic behavior of coupled dynamical systems with multiscale aspects[END_REF] for the particular case corresponding to Λ(s, t) = 1/t if s ≤ t and Λ(s, t) = 0 if s > t.
Proposition 4.1. Let S be a nonempty subset of H and let x : [t₀, +∞[→ H be a continuous map, supposed to be bounded on [t₀, +∞[. Let Λ : [t₀, +∞[×[t₀, +∞[→ ℝ₊ be a measurable function satisfying (19) and (20), and let x̄ : [t₀, +∞[→ H be the averaged trajectory defined by (21). Assume that
(i) for every z ∈ S, lim_{t→+∞} ‖x(t) − z‖ exists;
(ii) every weak sequential limit point of x̄(t), as t → +∞, belongs to S.
Then x̄(t) converges weakly as t → +∞ to a point in S.
Proof. From Lemma B.4 (i), the map x̄ is bounded, therefore it is enough to establish the uniqueness of weak limit points. Let (x̄(tₙ)) and (x̄(tₘ)) be two weakly converging subsequences satisfying respectively x̄(tₙ) ⇀ x₁ as n → +∞ and x̄(tₘ) ⇀ x₂ as m → +∞. From (ii), the weak limit points x₁ and x₂ belong to S. In view of (i), we deduce that lim_{t→+∞} ‖x(t) − x₁‖² and lim_{t→+∞} ‖x(t) − x₂‖² exist. Writing that
‖x(t) − x₁‖² − ‖x(t) − x₂‖² = 2 ⟨x(t) − (x₁ + x₂)/2, x₂ − x₁⟩,
we infer that lim_{t→+∞} ⟨x(t), x₂ − x₁⟩ exists. Observe that
⟨x̄(t), x₂ − x₁⟩ = ⟨∫_{t₀}^{+∞} Λ(s, t) x(s) ds, x₂ − x₁⟩ = ∫_{t₀}^{+∞} Λ(s, t) ⟨x(s), x₂ − x₁⟩ ds.
By applying Lemma B.4 (ii) to the real-valued map t → ⟨x(t), x₂ − x₁⟩, we deduce that lim_{t→+∞} ⟨x̄(t), x₂ − x₁⟩ exists. This implies that
lim_{n→+∞} ⟨x̄(tₙ), x₂ − x₁⟩ = lim_{m→+∞} ⟨x̄(tₘ), x₂ − x₁⟩, which entails that ⟨x₁, x₂ − x₁⟩ = ⟨x₂, x₂ − x₁⟩.
We conclude that ‖x₂ − x₁‖² = 0, which ends the proof.
Applying Proposition 4.1 to the averaging kernel
(22) Λ(s, t) = Γ(s, t) / ∫_{t₀}^{t} Γ(u, t) du if s ≤ t, and Λ(s, t) = 0 if s > t,
we obtain the following statement.
Corollary 4.2. Assume that γ satisfies (H₀) and that ∫_{t₀}^{+∞} Γ(s) ds = +∞. Let S be a nonempty subset of H and let x : [t₀, +∞[→ H be a continuous map, supposed to be bounded on [t₀, +∞[, and let x̄ : [t₀, +∞[→ H be the averaged trajectory associated with the kernel (22). Assume that
(i) for every z ∈ S, lim_{t→+∞} ‖x(t) − z‖ exists;
(ii) every weak sequential limit point of x̄(t), as t → +∞, belongs to S.
Then x̄(t) converges weakly as t → +∞ to a point in S.
Proof. Just check that conditions (19) and (20) of Proposition 4.1 are satisfied for the function Λ : [t₀, +∞[×[t₀, +∞[→ ℝ₊ given by (22). Property (19) clearly holds true. Observe that for every T ≥ t₀,
∫_{t₀}^{T} Λ(s, t) ds = ∫_{t₀}^{T} Γ(s, t) ds / ∫_{t₀}^{t} Γ(u, t) du ≤ ∫_{t₀}^{T} Γ(s) ds / ∫_{t₀}^{t} Γ(u, t) du.
The quantity ∫_{t₀}^{T} Γ(s) ds is finite and independent of t. On the other hand, from the assumption ∫_{t₀}^{+∞} Γ(s) ds = +∞ we deduce that lim_{t→+∞} ∫_{t₀}^{t} Γ(u, t) du = +∞, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. We deduce from the above inequality that lim_{t→+∞} ∫_{t₀}^{T} Λ(s, t) ds = 0, hence property (20) is satisfied. It ensues that Proposition 4.1 can be applied, which ends the proof.
4.2. Ergodic convergence of the trajectories. To each solution x(·) of (RIMS)_{γ,λ}, we associate the averaged trajectory x̄(·) defined by
x̄(t) = (1 / ∫_{t₀}^{t} Γ(s, t) ds) ∫_{t₀}^{t} Γ(s, t) x(s) ds.
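As a simple sanity check of assumptions (19)–(20), consider the uniform averaging kernel below; this is only an illustration, not the kernel built from Γ(s, t) that is used in Corollary 4.2.
% A minimal kernel satisfying (19)-(20): uniform averaging on [t_0, t].
\[
\Lambda(s,t)=\frac{1}{t-t_{0}}\ \text{ if } t_{0}\le s\le t,\qquad \Lambda(s,t)=0\ \text{ if } s>t,
\]
\[
\int_{t_{0}}^{+\infty}\Lambda(s,t)\,ds=1,\qquad \int_{t_{0}}^{T}\Lambda(s,t)\,ds=\frac{T-t_{0}}{t-t_{0}}\longrightarrow 0\ \text{ as } t\to+\infty,
\]
so the associated average \(\bar x(t)=\frac{1}{t-t_{0}}\int_{t_{0}}^{t}x(s)\,ds\) is the usual Cesàro mean.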
We show that, under suitable conditions, every averaged trajectory x̄(·) converges weakly as t → +∞ toward some zero of the operator A; this is the object of Theorem 4.3. For the proof, suppose that x̄(tₙ) ⇀ x∞ weakly in H as n → +∞ for some x∞ ∈ H and some sequence tₙ → +∞. Let us fix (z, q) ∈ gphA and define the function h : [t₀, +∞[→ ℝ₊ by h(t) = ½‖x(t) − z‖². Since q ∈ A(z) and A_{λ(t)}(x(t)) ∈ A(x(t) − λ(t)A_{λ(t)}(x(t))), the monotonicity of A implies that ⟨x(t) − λ(t)A_{λ(t)}(x(t)) − z, A_{λ(t)}(x(t)) − q⟩ ≥ 0, hence
⟨x(t) − z, A_{λ(t)}(x(t))⟩ ≥ λ(t) ‖A_{λ(t)}(x(t))‖² + ⟨x(t) − λ(t)A_{λ(t)}(x(t)) − z, q⟩ ≥ ⟨x(t) − λ(t)A_{λ(t)}(x(t)) − z, q⟩.
Recalling equality (4), we obtain for every t ≥ t₀,
ḧ(t) + γ(t) ḣ(t) ≤ ‖ẋ(t)‖² − ⟨x(t) − λ(t)A_{λ(t)}(x(t)) − z, q⟩.
Using Lemma B.1 (i) with g(t) = ‖ẋ(t)‖² − ⟨x(t) − λ(t)A_{λ(t)}(x(t)) − z, q⟩, we obtain for every t ≥ t₀,
h(t) ≤ h(t₀) + ḣ(t₀) ∫_{t₀}^{t} du/p(u) + ∫_{t₀}^{t} Γ(s, t) [ ‖ẋ(s)‖² − ⟨x(s) − λ(s)A_{λ(s)}(x(s)) − z, q⟩ ] ds.
Since h(t) ≥ 0 and Γ(s, t) ≤ Γ(s), we deduce that
∫_{t₀}^{t} Γ(s, t) ⟨x(s) − λ(s)A_{λ(s)}(x(s)) − z, q⟩ ds ≤ h(t₀) + ḣ(t₀) ∫_{t₀}^{t} du/p(u) + ∫_{t₀}^{t} Γ(s) ‖ẋ(s)‖² ds.
Recalling the assumption ∫_{t₀}^{+∞} du/p(u) < +∞ and the estimate ∫_{t₀}^{+∞} Γ(s) ‖ẋ(s)‖² ds < +∞ (see Theorem 2.3 (i)), we infer that for every t ≥ t₀,
(23) ∫_{t₀}^{t} Γ(s, t) ⟨x(s) − λ(s)A_{λ(s)}(x(s)) − z, q⟩ ds ≤ C,
where we have set
C := h(t₀) + |ḣ(t₀)| ∫_{t₀}^{+∞} du/p(u) + ∫_{t₀}^{+∞} Γ(s) ‖ẋ(s)‖² ds.
It ensues that
∫_{t₀}^{t} Γ(s, t) ⟨x(s) − z, q⟩ ds ≤ C + ∫_{t₀}^{t} Γ(s, t) ⟨λ(s)A_{λ(s)}(x(s)), q⟩ ds ≤ C + ‖q‖ ∫_{t₀}^{t} Γ(s, t) λ(s) ‖A_{λ(s)}(x(s))‖ ds.
This can be rewritten as
⟨∫_{t₀}^{t} Γ(s, t)(x(s) − z) ds, q⟩ ≤ C + ‖q‖ ∫_{t₀}^{t} Γ(s, t) λ(s) ‖A_{λ(s)}(x(s))‖ ds.
Dividing by ∫_{t₀}^{t} Γ(s, t) ds, we find
(24) ⟨x̄(t) − z, q⟩ ≤ C / ∫_{t₀}^{t} Γ(s, t) ds + (‖q‖ / ∫_{t₀}^{t} Γ(s, t) ds) ∫_{t₀}^{t} Γ(s, t) λ(s) ‖A_{λ(s)}(x(s))‖ ds.
The assumption ∫_{t₀}^{+∞} Γ(s) ds = +∞ implies that lim_{t→+∞} ∫_{t₀}^{t} Γ(s, t) ds = +∞, see [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. On the other hand, we have lim_{t→+∞} λ(t)A_{λ(t)}(x(t)) = 0 by Theorem 2.3 (v). From the Cesàro property, we infer that
(1 / ∫_{t₀}^{t} Γ(s, t) ds) ∫_{t₀}^{t} Γ(s, t) λ(s) ‖A_{λ(s)}(x(s))‖ ds → 0 as t → +∞,
see Lemma B.4 (ii). Taking the limit as t → +∞ in inequality (24), we then obtain lim sup_{t→+∞} ⟨x̄(t) − z, q⟩ ≤ 0.
Recall that the sequence (tₙ) is such that x̄(tₙ) ⇀ x∞ weakly in H as n → +∞, hence ⟨x̄(tₙ) − z, q⟩ → ⟨x∞ − z, q⟩ as n → +∞. From what precedes, we deduce that ⟨x∞ − z, q⟩ ≤ 0 for every (z, q) ∈ gphA. Since the operator A is maximally monotone, we infer that 0 ∈ A(x∞). We have proved that x∞ ∈ zerA, which shows that condition (ii) of Corollary 4.2 is satisfied.
Let us now consider the alternate averaged trajectory x̂ defined by x̂(t) = (1 / ∫_{t₀}^{t} Γ(s) ds) ∫_{t₀}^{t} Γ(s) x(s) ds. In view of assumption (25), we then obtain (26). By applying Lemma B.5, we infer that lim_{t→+∞} ‖x̂(t) − x̄(t)‖ = 0. On the other hand, Theorem 4.3 shows that there exists x∞ ∈ zerA such that x̄(t) ⇀ x∞ weakly in H as t → +∞. We then conclude that x̂(t) ⇀ x∞ weakly in H as t → +∞. Now assume that the function Γ̃ : [t₀, +∞[→ ℝ₊ is such that Γ̃(s) ∼ Γ(s) as s → +∞; the same conclusion then holds for the averaged trajectory built from Γ̃ in place of Γ.
(a) lim inf_{t→+∞} λ(t)γ(t)² > 1; (b) |λ̇(t)| = O(1/γ(t)) as t → +∞; (c) ∫_{t₀}^{+∞} ds/(λ(s)γ(s)) = +∞; (d) ∫_{t₀}^{+∞} ds/γ(s) = +∞.
Then, for any global solution x(·) of (RIMS)_{γ,λ}, the averaged trajectory of Corollary 4.5 converges weakly toward some x∞ ∈ zerA. Let us now particularize to the case γ(t) = α t^q and λ(t) = β t^r, for some α, β > 0, q ∈ ]−1, 1] and r ∈ ℝ.
Corollary 4.6. Let A : H → 2^H be a maximally monotone operator such that zerA ≠ ∅. Assume that γ(t) = α t^q and λ(t) = β t^r for every t ≥ t₀ > 0. Let (q, r) ∈ ]−1, 1] × ℝ be such that q + r ≤ 1 and 2q + r ≥ 0, and let (α, β) ∈ ℝ*₊ × ℝ*₊ be such that α²β > 1 if 2q + r = 0 (no condition if 2q + r > 0). Then for any global solution x(·) of (RIMS)_{γ,λ}, there exists x∞ ∈ zerA such that
(1 / ∫_{t₀}^{t} s^{−q} ds) ∫_{t₀}^{t} s^{−q} x(s) ds ⇀ x∞ weakly in H as t → +∞.
Proof. The conditions of (27) are guaranteed by q > −1. Since λ(t)γ(t)² = α²β t^{2q+r}, condition (a) of Corollary 4.5 follows from 2q + r ≥ 0, together with α²β > 1 when 2q + r = 0. Conditions (b) and (c) amount respectively to q + r ≤ 1, which holds true by assumption. The condition ∫_{t₀}^{+∞} ds/γ(s) = +∞ is implied by q ≤ 1. Then just apply Corollary 4.5.
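For instance (the values below are illustrative only), the pair q = 1/2, r = 0 fulfills the constraints of Corollary 4.6 for any α, β > 0:
% Exponent check for Corollary 4.6 with q = 1/2, r = 0.
\[
q+r=\tfrac12\le 1,\qquad 2q+r=1>0\ \ (\text{hence no condition on }\alpha^{2}\beta),
\]
\[
\gamma(t)=\alpha\sqrt{t},\qquad \lambda(t)=\beta,\qquad \frac{1}{\int_{t_{0}}^{t}s^{-1/2}\,ds}\int_{t_{0}}^{t}\frac{x(s)}{\sqrt{s}}\,ds\ \rightharpoonup\ x_{\infty}\ \text{ weakly in } H .
\]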
PART C: THE SUBDIFFERENTIAL CASE
Let us particularize our study to the case A = ∂Φ, where Φ : H → ℝ ∪ {+∞} is a convex lower semicontinuous proper function. Then A_λ = ∇Φ_λ is equal to the gradient of Φ_λ : H → ℝ, which is the Moreau envelope of Φ of index λ > 0. Let us recall that, for all x ∈ H,
(30) Φ_λ(x) = inf_{ξ∈H} { Φ(ξ) + (1/(2λ)) ‖x − ξ‖² }.
In this case, we will study the rate of convergence of the values, when the time t goes to +∞, of the trajectories of the second-order differential equation
(RIGS) γ,λ ẍ(t) + γ(t) ẋ(t) + ∇Φ λ(t) (x(t)) = 0,
called the Regularized Inertial Gradient System with parameters γ, λ. As a main feature, the above system involves two time-dependent positive parameters: the Moreau regularization parameter λ(t), and the damping parameter γ(t). System (RIGS) γ,λ comes as a natural development of several recent studies concerning fast inertial dynamics and algorithms for convex optimization. Indeed, when Φ is a smooth convex function, it was highlighted that the fact of taking a vanishing damping coefficient γ(t) in system (IGS) γ ẍ(t) + γ(t) ẋ(t) + ∇Φ(x(t)) = 0, is a key property for obtaining fast optimization methods. Precisely Su, Boyd and Candès [START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF] showed that, in the particular case γ(t) = 3 t , (IGS) γ is a continuous version of the fast gradient method initiated by Nesterov [START_REF] Nesterov | A method of solving a convex programming problem with convergence rate O(1/k 2 )[END_REF], with Φ(x(t)) -min H Φ = O( 1 t 2 ) in the worst case. Attouch and Peypouquet [START_REF] Attouch | The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1 k 2[END_REF] and May [START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF] have improved this result by showing that Φ(x(t)) -min
H Φ = o( 1 t 2 ) for γ(t) = α t with α > 3.
Recently, in the case of a general damping function γ(•), the study of the speed of convergence of trajectories of (IGS) γ was developed by Attouch-Cabot in [START_REF] Attouch | Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity[END_REF]. Note that a main advantage of (RIGS) γ,λ over (IGS) γ is that Φ is just assumed to be lower semicontinuous (not necessarily smooth). In line with these results, by jointly adjusting the tuning of the two parameters in (RIGS) γ,λ , we will obtain fast convergence results for the values.
Convergence rates and weak convergence of the trajectories
The following assumptions and notations will be needed throughout this section:
Φ : H → ℝ ∪ {+∞} convex, lower semicontinuous, proper, bounded from below, with argmin Φ ≠ ∅;
γ : [t₀, +∞[→ ℝ₊ continuous, with t₀ ∈ ℝ;
λ : [t₀, +∞[→ ℝ*₊ continuously differentiable, nondecreasing;
x : [t₀, +∞[→ H the solution to (RIGS)_{γ,λ}, with initial conditions x(t₀) = x₀, ẋ(t₀) = v₀;
ξ(t) = prox_{λ(t)Φ}(x(t)) for t ≥ t₀.
5.1. Preliminaries on Moreau envelopes. For classical facts about the Moreau envelopes we refer the reader to [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF][START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF][START_REF] Parikh | Proximal algorithms[END_REF][START_REF] Peypouquet | Convex optimization in normed spaces: theory, methods and examples[END_REF]. We point out the following properties that will be useful in the sequel:
(i) λ ∈]0, +∞[ → Φ λ (x) is nonincreasing for all x ∈ H; (ii) inf H Φ = inf H Φ λ for all λ > 0;
(iii) argmin Φ = argmin Φ λ for all λ > 0.
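A standard one-dimensional example illustrating properties (i)–(iii) is Φ(x) = |x| on H = ℝ, whose Moreau envelope is the Huber function; the computation below is a direct check.
% Moreau envelope and proximal mapping of Φ(x) = |x| (Huber function).
\[
\Phi_{\lambda}(x)=\begin{cases}\dfrac{x^{2}}{2\lambda} & \text{if } |x|\le\lambda,\\[4pt] |x|-\dfrac{\lambda}{2} & \text{if } |x|>\lambda,\end{cases}
\qquad
\operatorname{prox}_{\lambda\Phi}(x)=\operatorname{sign}(x)\max(|x|-\lambda,0).
\]
Indeed inf Φ_λ = inf Φ = 0, argmin Φ_λ = argmin Φ = {0}, and for each fixed x the map λ ↦ Φ_λ(x) is nonincreasing.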
It turns out that it is convenient to consider the Moreau envelope as a function of the two variables x ∈ H and λ ∈]0, +∞[. Its differentiability properties with respect to (x, λ) play a crucial role in our analysis. a. Let us first recall some classical facts concerning the differentiability properties with respect to x of the Moreau envelope x → Φ λ (x). The infimum in ( 30) is achieved at a unique point
(31) prox λΦ (x) = argmin ξ∈H Φ(ξ) + 1 2λ x -ξ 2 , which gives Φ λ (x) = Φ(prox λΦ (x)) + 1 2λ x -prox λΦ (x) 2 .
Writing the optimality condition for (31), we get
prox λΦ (x) + λ∂Φ (prox λΦ (x)) x, that is prox λΦ (x) = (I + λ∂Φ) -1 (x).
Thus, prox_{λΦ} is the resolvent of index λ > 0 of the maximally monotone operator ∂Φ. As a consequence, the mapping prox_{λΦ} : H → H is firmly nonexpansive. For any λ > 0, the function x → Φ_λ(x) is continuously differentiable, with
∇Φ λ (x) = 1 λ (x -prox λΦ (x)) .
Equivalently
∇Φ λ = 1 λ I -(I + λ∂Φ) -1 = (∂Φ) λ
which is the Yosida approximation of the maximally monotone operator ∂Φ. As such, ∇Φ λ is Lipschitz continuous, with Lipschitz constant 1 λ , and Φ λ ∈ C 1,1 (H). b. A less known result is the C 1 -regularity of the function λ → Φ λ (x), for any x ∈ H. Its derivative is given by
(32) d/dλ Φ_λ(x) = −½ ‖∇Φ_λ(x)‖².
This result is known as the Lax-Hopf formula for the above first-order Hamilton-Jacobi equation, see [2, Remark 3.32; Lemma 3.27], and [START_REF] Imbert | Convex Analysis techniques for Hopf-Lax formulae in Hamilton-Jacobi equations[END_REF]. A proof is given in Lemma A.1 for the convenience of the reader. As a consequence of the semi-group property satisfied by the orbits of the autonomous evolution equation (32), for any x ∈ H, λ > 0 and µ > 0,
(Φ λ ) µ (x) = Φ (λ+µ) (x). (33)
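A quick verification of (32) and (33) on the model case Φ = ½‖·‖² (the computation below is only an illustration of the two formulas):
% Verifying (32) and (33) for Φ(x) = ½‖x‖².
\[
\operatorname{prox}_{\lambda\Phi}(x)=\frac{x}{1+\lambda},\qquad \Phi_{\lambda}(x)=\frac{\|x\|^{2}}{2(1+\lambda)},\qquad \nabla\Phi_{\lambda}(x)=\frac{x}{1+\lambda},
\]
\[
\frac{d}{d\lambda}\Phi_{\lambda}(x)=-\frac{\|x\|^{2}}{2(1+\lambda)^{2}}=-\frac12\,\|\nabla\Phi_{\lambda}(x)\|^{2},
\qquad
(\Phi_{\lambda})_{\mu}(x)=\frac{\|x\|^{2}}{2(1+\lambda+\mu)}=\Phi_{\lambda+\mu}(x).
\]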
5.2. Preliminary estimates. Let us introduce functions W , h z , of constant use in this section.
Global energy.
The global energy of the system W : [t₀, +∞[→ ℝ₊ is given by
W(t) = ½ ‖ẋ(t)‖² + Φ_{λ(t)}(x(t)) − min_H Φ.
Since inf_H Φ = inf_H Φ_λ, we have W ≥ 0. From (RIGS)_{γ,λ} and property (32), we immediately obtain the following equality
(34) Ẇ(t) = −γ(t) ‖ẋ(t)‖² − (λ̇(t)/2) ‖∇Φ_{λ(t)}(x(t))‖².
As a direct consequence of (34), we obtain the following results.
Proposition 5.1. The function W is nonincreasing, and hence W∞ := lim_{t→+∞} W(t) exists. In addition,
sup_{t≥t₀} ‖ẋ(t)‖ < +∞, ∫_{t₀}^{+∞} γ(t) ‖ẋ(t)‖² dt < +∞ and ∫_{t₀}^{+∞} λ̇(t) ‖∇Φ_{λ(t)}(x(t))‖² dt < +∞.
Proof. From (34), and λ nondecreasing, we deduce that Ẇ(t) ≤ 0. Hence, W is nonincreasing. Since W is nonnegative, W∞ := lim_{t→+∞} W(t) exists. After integrating (34) from t₀ to t, we get
W(t) − W(t₀) + ∫_{t₀}^{t} γ(s) ‖ẋ(s)‖² ds + ½ ∫_{t₀}^{t} λ̇(s) ‖∇Φ_{λ(s)}(x(s))‖² ds ≤ 0.
By definition of W, and using again that inf_H Φ = inf_H Φ_λ, it follows that
½ ‖ẋ(t)‖² + ∫_{t₀}^{t} γ(s) ‖ẋ(s)‖² ds + ½ ∫_{t₀}^{t} λ̇(s) ‖∇Φ_{λ(s)}(x(s))‖² ds ≤ W(t₀).
This being true for any t ≥ t₀, we get the conclusion.
Φ) -Γ(t) ∇Φ λ(t) (x(t)), x(t) -x = 2Γ(t) Γ(t)(Φ λ(t) (x(t)) -min H Φ) -Γ(t) ∇Φ λ(t) (x(t)), x(t) -x .
In the above calculation, we have neglected the term -Γ(t) 2 λ(t) 2 ∇Φ λ(t) (x(t)) 2 which is less or equal than zero, because λ(•) is a nondecreasing function. To obtain the last equality, we have used again the equality -Γ(t)γ(t) + Γ(t) + 1 = 0. Let us now use the convexity of Φ λ(t) and equality (37) to obtain
Ė(t) ≤ -(Γ(t) -2Γ(t) Γ(t)) (Φ λ(t) (x(t)) -min H Φ) = -Γ(t)(3 -2γ(t)Γ(t)) (Φ λ(t) (x(t)) -min H Φ).
When (K 1 ) is satisfied, we have 3 -2γ(t)Γ(t) ≥ 0. Since Γ(t) and Φ λ(t) (x(t)) -min H Φ are nonnegative, we deduce that Ė(t) ≤ 0. (i) For every t ≥ t 1 , we have 2 , we obtain that lim t→+∞ h(t) exists. This shows the first point of the Opial lemma. Let us now verify the second point. Let x(t k ) converge weakly to x ∞ as k → +∞. Point (i) implies that ξ(t k ) also converges weakly to x ∞ as k → +∞. Since the function Φ is convex and lower semicontinuous, it is semicontinuous for the weak topology, hence satisfies
Φ λ(t) (x(t)) -min H Φ ≤ E(t 1 ) Γ(t)
[t 0 , +∞[→ R + defined by g(t) = ẋ(t)
Φ(x ∞ ) ≤ lim inf t→+∞ Φ(ξ(t k )) = lim t→+∞ Φ(ξ(t)) = min H Φ,
cf. the last point of Theorem 5.6. It ensues that x ∞ ∈ argmin Φ, which establishes the second point of the Opial lemma, and ends the proof. This will immediately give our result, since, by the derivation chain rule,
d/dλ Φ_λ(x) = d/dλ [ (1/λ) · λΦ_λ(x) ] = (1/λ) Φ(J_λ(x)) − (1/λ²) · λΦ_λ(x) = −(1/λ) [ Φ_λ(x) − Φ(J_λ(x)) ] = −(1/(2λ²)) ‖x − J_λ(x)‖² = −½ ‖∇Φ_λ(x)‖².
To obtain (47), take two values λ 1 and λ 2 of the parameter λ, and compare the corresponding values of the function λ → λΦ λ (x). By the formulation (46) of λΦ λ (x) as an infimal value, we have
λ₁Φ_{λ₁}(x) − λ₂Φ_{λ₂}(x) ≤ [ λ₁Φ(J_{λ₂}(x)) + ½ ‖x − J_{λ₂}(x)‖² ] − [ λ₂Φ(J_{λ₂}(x)) + ½ ‖x − J_{λ₂}(x)‖² ] = (λ₁ − λ₂) Φ(J_{λ₂}(x)).
Exchanging the roles of λ 1 and λ 2 , we obtain
(λ 1 -λ 2 )Φ(J λ1 (x)) ≤ λ 1 Φ λ1 (x) -λ 2 Φ λ2 (x) ≤ (λ 1 -λ 2 )Φ(J λ2 (x)).
Then note that the mapping λ → Φ(J λ (x)) is continuous. This follows from (46) and the continuity of the mappings λ → Φ λ (x) and λ → J λ (x). Indeed, these mappings are locally Lipschitz continuous. This is a direct consequence of the resolvent equations (33), see [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.28] for further details. Then divide the above formula by λ 1 -λ 2 (successively examining the two cases λ 1 < λ 2 , then λ 2 < λ 1 ). Letting λ 1 → λ 2 , and using the continuity of λ → Φ(J λ (x)) gives the differentiability of the mapping λ → λΦ λ (x), and formula (47). Then, writing Φ λ (x) = 1 λ (λΦ λ (x)), and applying the derivation chain rule gives (45). The continuity of λ → ∇Φ λ (x) gives the continuous differentiability of λ → Φ λ (x).
Appendix B. Some auxiliary results
In this section, we present some auxiliary lemmas that are used throughout the paper. The following result allows us to establish some majorization and also the convergence as t → +∞ of a real-valued function satisfying some differential inequality. Γ(s) w(s) ds = 0.
Lemma
The conclusion follows from the two above relations.
Given a Banach space (X, ‖·‖) and a bounded map x : [t₀, +∞[→ X, the next lemma gives basic properties of the averaged trajectory x̄ defined by (21).
Lemma B.4. Let (X, ‖·‖) be a Banach space, Λ : [t₀, +∞[×[t₀, +∞[→ ℝ₊ a measurable function satisfying (19), and x : [t₀, +∞[→ X a bounded map. Then we have
(i) For every t ≥ t₀, the vector x̄(t) = ∫_{t₀}^{+∞} Λ(s, t) x(s) ds is well-defined. The map x̄ is bounded and sup_{t≥t₀} ‖x̄(t)‖ ≤ sup_{t≥t₀} ‖x(t)‖.
(ii) Assume moreover that the function Λ satisfies (20). If lim_{t→+∞} x(t) = x∞ for some x∞ ∈ X, then lim_{t→+∞} x̄(t) = x∞.
Proof. (i) Let us set M = sup t≥t0 x(t) < +∞. In view of [START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF], observe that for every t ≥ t 0 , Λ(s, t) x(s) ds, and hence x(t) ≤ M in view of (54). (ii) Assume that lim t→+∞ x(t) = x ∞ for some x ∞ ∈ X . Observe that for every t ≥ t 0 ,
‖x̄(t) − x∞‖ = ‖ ∫_{t₀}^{+∞} Λ(s, t) (x(s) − x∞) ds ‖ (by using (19))
(55) ≤ ∫_{t₀}^{+∞} Λ(s, t) ‖x(s) − x∞‖ ds.
Fix ε > 0 and let T ≥ t₀ be such that ‖x(t) − x∞‖ ≤ ε for every t ≥ T. From (55), we obtain
‖x̄(t) − x∞‖ ≤ sup_{t∈[t₀,T]} ‖x(t) − x∞‖ ∫_{t₀}^{T} Λ(s, t) ds + ε ∫_{T}^{+∞} Λ(s, t) ds
Corollary 2 . 5 .
25 Let λ > 0 and let B : H → H be a λ-cocoercive operator such that zerB = ∅. Given a differentiable function γ : [t 0 , +∞[→ R + satisfying (H 0 ), let Γ, ∆ : [t 0 , +∞[→ R + be the functions respectively defined by Γ(s) = p(s) u) du . Assume that there exists ε ∈]0, 1[ such that for s large enough,
)γ(s) = +∞ and | λ(t)| = O (1/γ(t)) as t → +∞ amount respectively to q + r ≤ 1. Items (i)-(vii) are immediate consequences of the corresponding points in Corollary 3.3.
Corollary 5 . 4 .
54 Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ) and (K 1 ).
+∞ t0 Λ
t0 (s, t) x(s) ds ≤ M +∞ t0 Λ(s, t) ds = M.Since X is complete, we classically deduce that the integral +∞ t0 Λ(s, t) x(s) ds is convergent. From the definition of x(t), we then have x(t) ≤ +∞ t0
) = e We obtain p(t) ẋ(t) = ẋ(t 0 ) for every t ≥ t 0 . By integrating again, we findx(t) = x(t 0 ) +
t t 0 γ(τ ) dτ
and integrate on [t 0 , t]. t t0 ds p(s) ẋ(t 0 ).
+∞ t0 ds p(s) < +∞.
It ensues immediately that the trajectory x(.) converges if and only if ẋ(t 0 ) = 0 or (H 0 )
Proposition 2.1. Let A : H → 2 H be a maximally monotone operator, and let γ : [t 0 , +∞[→ R + and λ : [t 0 , +∞[→ R
* + be continuous functions. Then, for any x 0 ∈ H, v 0 ∈ H, there exists a unique global solution
+ be the function defined by Γ(s) =
monotone
convergence theorem then implies that
t +∞
(3) lim t→+∞ t0 Γ(s, t) ds = lim t→+∞ t0 Γ(s, t) ds =
+∞ du
s p(u)
∈ [t 0 , +∞[,
(2) Γ(s, t) = s t du p(u) p(s) if s ≤ t, and Γ(s, t) = 0 if s > t.
For each s ∈ [t 0 , +∞[, the quantity Γ(s, t) tends increasingly toward Γ(s) as t → +∞. The +∞ t0 Γ(s) ds, since Γ(s, t) = 0 for s ≥ t. Let us state the main result of this section. Theorem 2.3. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ : [t 0 , +∞[→ R + and λ : [t 0 , +∞[→ R * + be differentiable functions. Assuming (H 0 ), let Γ : [t 0 , +∞[→ R
2 ds < +∞, and as a consequence
+∞
Γ(s) ẋ(s) 2 ds < +∞.
t0
(ii)
(iv) There exists a positive constant C such that for t large enough,
ẋ(t) ≤ C p(t) t t0 p(s) λ(s) ds and ẍ(t) ≤ C γ(t) p(t) t t0 p(s) λ(s) ds + C λ(t) .
Assuming that
(H 2 ) λ(t) p(t) t t0 p(s) λ(s) ds = O(Γ(t)) and | λ(t)| = O(Γ(t)) as t → +∞,
(H 3 ) +∞ t0 Γ(s) λ(s) ds = +∞,
the following holds
(v) lim t→+∞ λ(t)A λ(t) (x(t)) = 0.
(vi) If λ(•) is minorized by some positive constant on [t 0 , +∞[, then there exists x ∞ ∈ zerA such that
x(t) x ∞ weakly in H as t → +∞.
Finally assume that (H 3 ) is not satisfied, i.e. +∞ t0 Γ(s) λ(s) ds < +∞. Then we obtain
+∞
(vii) ẋ(s) ds < +∞, and hence x(•) converges strongly toward some x ∞ ∈ H.
t0
Proof. (i) Let z ∈ zerA, and let us set h(t) = 1 2 x(t) -z 2 for every t ≥ t 0 . By differentiating, we find
for every t ≥ t 0 ,
ḣ(t) = ẋ(t), x(t) -z and ḧ(t) = ẋ(t) 2 + ẍ(t), x(t) -z .
+∞ t0 λ(s)Γ(s) A λ(s) (x(s)) 2 ds < +∞.
(iii) For any z ∈ zerA, lim t→+∞ x(t) -z exists, and hence x(•) is bounded.
Combining Theorem 2.3 and Propositions 3.1 and 3.2, we obtain the following result.
Corollary 3.3. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ, λ : [t 0 , +∞[→ R * + be two differentiable functions satisfying conditions (13) for some c ∈ [0, 1[ and c
in a discrete setting. In order to give a continuous ergodic version, let us consider a measurable function Λ : [t 0 , +∞[×[t 0 , +∞[→ R + satisfying the following assumptions To each bounded map x : [t 0 , +∞[→ H, we associate the averaged map x : [t 0 , +∞[→ H by
+∞
(19) Λ(s, t) ds = 1 for every t ≥ t 0 ,
t0
T
(20) lim t→+∞ t0 Λ(s, t) ds = 0 for every T ≥ t 0 .
+∞
(21) x(t) =
t0
of Proposition 4.1 are satisfied for the function Λ : [t 0 , +∞[×[t 0 , +∞[→ R + given by[START_REF] May | Asymptotic for a second order evolution equation with convex potential and vanishing damping term[END_REF]. Property[START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF] clearly holds true. Observe that for every T ≥ t 0 , ds is finite and independent of t. On the other hand, from the assumption
T t0 Λ(s, t) ds = T t0 Γ(s, t) ds t t0 Γ(u, t) du ≤ T t0 Γ(s) ds t0 Γ(u, t) du t .
The quantity t0 Γ(s) +∞ T
t0
Theorem 4.3. Let A : H → 2^H be a maximally monotone operator such that zerA ≠ ∅ and let λ : [t₀, +∞[→ ℝ*₊ be a differentiable function. Suppose that the differentiable function γ : [t₀, +∞[→ ℝ₊ satisfies (H₀). For s, t ≥ t₀, let Γ(s) and Γ(s, t) be the quantities respectively defined by (1) and (2). Assume that conditions (H₁)-(H₂)-(H₃) hold, together with ∫_{t₀}^{+∞} Γ(s) ds = +∞. Then for any global solution x(·) of (RIMS)_{γ,λ}, there exists x∞ ∈ zerA such that
x̄(t) = (1 / ∫_{t₀}^{t} Γ(s, t) ds) ∫_{t₀}^{t} Γ(s, t) x(s) ds ⇀ x∞ weakly in H as t → +∞.
Proof. We apply Corollary 4.2 with S = zerA. Condition (i) of Corollary 4.2 is realized in view of Theorem 2.3 (iii). Let us now assume that there exist x∞ ∈ H and a sequence (tₙ) such that tₙ → +∞ and x̄(tₙ) ⇀
Then for any global solution x(.) of (RIMS) γ,λ , there exists x ∞ ∈ zerA such that The latter result still holds true if the function Γ in the above quotient is replaced with a function Γ : [t 0 , +∞[→ R + such that Γ(s) ∼ Γ(s) as s → +∞.Proof. We are going to show that lim t→+∞ x(t) -x(t) = 0, where x is the averaged trajectory of Theorem 4.3. For that purpose, we use Lemma B.5 with the functions Λ 1 , Λ 2 : [t 0 , +∞[×[t 0 , +∞[→ R +
x(t) = x respectively defined by t 1 t t0 Γ(s) ds t0 Γ(s)x(s) ds
Λ 1 (s, t) = Γ(s, t) t0 Γ(u, t) du t , Λ 2 (s, t) = Γ(s) t0 Γ(u) du t , if s ≤ t,
and Λ 1 (s, t) = Λ 2 (s, t) = 0 if s > t. The functions Λ 1 and Λ 2 clearly satisfy property (19). Let us now
check that
+∞
(26) lim t→+∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds = 0.
t t0 Γ(u) du + Γ(s, t) -Γ(s) t t0 Γ(u) du
and hence
|Λ +∞ t ds p(s) t0 Γ(s) ds t t0 p(s) ds t .
Theorem 4.4. Under the hypotheses of Theorem 4.3, assume moreover that
(25) t +∞ ds p(s) t t0 p(s) ds = o t t0 Γ(s) ds as t → +∞.
s) ds t t0 Γ(s)x(s) ds, for every t ≥ t 0 . The next result gives sufficient conditions that ensure the weak convergence of x(t) as t → +∞ toward a zero of A. ∞ weakly in H as t → +∞.
For s ≤ t, we have
Λ 1 (s, t) -Λ 2 (s, t) = Γ(s, t) t t0 Γ(u, t) du t t0 (Γ(u) -Γ(u, t)) du 1 (s, t) -Λ 2 (s, t)| ≤ Γ(s, t) t t0 Γ(u, t) du t t0 |Γ(u) -Γ(u, t)| du t t0 Γ(u) du + |Γ(s, t) -Γ(s)| t t0 Γ(u) du .
By integrating on [t 0 , t], we find t t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 |Γ(s, t) -Γ(s)| ds t t0 Γ(s) ds . Recalling that Λ 1 (s, t) = Λ 2 (s, t) = 0 for s > t, this implies that +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 |Γ(s, t) -Γ(s)| ds t t0 Γ(s) ds . From the expression of Γ(s) and Γ(s, t), see (1) and (2), we immediately deduce that +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t)| ds ≤ 2
as s → +∞. Let us denote by Λ 2 the function defined by (s, t) = 0 if s > t. The corresponding averaged trajectory is denoted by x. By arguing as above, we obtain that
and Λ 2 +∞ t0 | Λ 2 (s, t) -Λ 2 (s, t)| ds ≤ 2 t t0 | Γ(s) -Γ(s)| ds t0 Γ(s) ds t .
Then, using the estimate
t t
| Γ(s) -Γ(s)| ds = o Γ(s) ds as t → +∞,
t0 t0
we deduce that +∞
| Λ 2 (s, t) -Λ 2 (s, t)| ds -→ 0 as t → +∞.
t0
In view of Lemma B.5, this implies that lim t→+∞ x(t) -x(t) = 0, which ends the proof.
Let us now apply Theorem 4.4 to the class of differentiable functions γ, λ : [t 0 , +∞[→ R * + satisfying
(27) lim t→+∞ γ(t) γ(t) 2 = 0 and lim t→+∞ d dt (λ(t)γ(t)) λ(t)γ(t) 2 = 0.
Λ 2 (s, t) = Γ(s) t0 Γ(u) du t if s ≤ t,
Corollary 4.5. Let A : H → 2 H be a maximally monotone operator such that zerA = ∅. Let γ, λ : [t 0 , +∞[→ R * + be two differentiable functions satisfying conditions
[START_REF] Peypouquet | Convex optimization in normed spaces: theory, methods and examples[END_REF]
. Assume that (a) lim inf t→+∞
as t → +∞. It ensues that the first condition of (H 2 ) is automatically satisfied, while the second one is given by (b). Condition (H 3 ) is implied by the assumption (c). In the same way, condition ds = +∞ is guaranteed by the assumption (d). It remains to establish condition (25) of Theorem 4.4. By applying Proposition 3.1 (ii) with λ(t) ≡ 1 and c = 0, we obtain +∞, the above result holds true with the function 1/γ in place of Γ, see the last assertion of Theorem 4.4.
It ensues that condition (25) amounts to 1 γ(t) 2 = o t t0 Γ(s) ds as t → +∞, which is in turn equivalent
to
(29) 1 γ(t) 2 = o t t0 ds γ(s) as t → +∞.
Since lim t→+∞ γ(t)/γ(t) 2 = 0, we have -γ(t)/γ(t) 3 = o(1/γ(t)) as t → +∞. By integrating on [t 0 , t], we
obtain
1 2γ(t) 2 t t0 = t t0 - γ(s) γ(s) 3 ds = o t t0 ds γ(s) as t → +∞,
because +∞ t0 ds γ(s) = +∞ by assumption. It ensues that condition (29) is fulfilled, hence all the hypotheses
of Theorem 4.4 are satisfied. We deduce that there exists x ∞ ∈ zerA such that
1 t0 Γ(s) ds t t t0 Γ(s)x(s) ds x ∞ weakly in H as t → +∞.
Since Γ(t) ∼ 1/γ(t) as t →
t 1 γ(s) ds x +∞ t t0 t t0 x(s) γ(s) ds ds p(s) ∼ 1 p(t)γ(t) and t t0 p(s) λ(s) ds ∼ p(t) λ(t)γ(t) as t → +∞,
thus implying that Γ(t) ∼ 1 γ(t) +∞ t0 Γ(s) t t0 p(s) ds ∼ p(t) γ(t) as t → +∞.
In view of the first equivalence of (28), we infer that
t +∞ ds p(s) t t0 p(s) ds ∼ 1 γ(t) 2 as t → +∞.
∞ weakly in H as t → +∞.
Proof. Let us check that the assumptions of Theorem 4.4 are satisfied. Assumption (H 0 ) is verified in view of Proposition 3.1 (i) applied with c = 0. Since lim inf t→+∞ λ(t)γ(t) 2 > 1, condition (H 1 ) holds true by Proposition 3.2 used with c = c = 0. On the other hand, Proposition 3.1 shows that
[START_REF] Su | A differential equation for modeling Nesterov's accelerated gradient method: theory and insights[END_REF]
5.2.2. Anchor. Given z ∈ H, we define h z : [t 0 , +∞[→ R by Lemma 5.2. For each z ∈ H and all t ≥ t 0 , we have ḧz (t) + γ(t) ḣz (t) + x(t) -z, ∇Φ λ(t) (x(t)) = ẋ(t) 2 Rate of convergence of the values. Let x : [t 0 , +∞[→ H be a solution of (RIGS) γ,λ . Let us fix x ∈ argmin Φ, and set h= h x , that is, h : [t 0 , +∞[→ R + satisfies h(t) = 1 2 x(t) -x 2 . We define the function p : [t 0 , +∞[→ R + by p(t) = eThe following rate of convergence analysis is based on the decreasing properties of the function E, that will serve us as a Lyapunov function. Proposition 5.3 (Decay of E). Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ). The energy function E : [t 0 , +∞[→ R + satisfies for every t ≥ t 0 ,
Proof. By differentiating the function E, as expressed in (38), we obtain
h z (t) = Ė(t) = Γ(t) 2 Ẇ (t) + 2Γ(t) Γ(t)W (t) + (1 + Γ(t)) ḣ(t) + Γ(t) ḧ(t). 1 x(t) -z 2 . 2 We have the following: Taking into account the expression of W and Ẇ , along with equalities (35) and (37), we obtain
Ė(t) = Γ(t) 2 Ẇ (t) + 2Γ(t) Γ(t)W (t) + Γ(t)( ḧ(t) + γ(t) ḣ(t))
(35) (36) In particular, if z ∈ argmin Φ, then = -Γ(t) 2 γ(t) ẋ(t) 2 + λ(t) 2 ḧz (t) + γ(t) ḣz (t) + Φ λ(t) (x(t)) -Φ λ(t) (z) ≤ ẋ(t) 2 . ∇Φ λ(t) (x(t)) 2 + 2Γ(t) Γ(t) 1 2 ẋ(t) 2 + Φ λ(t) (x(t)) -min H +Γ(t) ẋ(t) 2 -∇Φ λ(t) (x(t)), x(t) -x Φ
≤ ḧz (t) + γ(t) ḣz (t) ≤ ẋ(t) 2 . ẋ(t) 2 Γ(t)(-Γ(t)γ(t) + Γ(t) + 1) + 2Γ(t) Γ(t)(Φ λ(t) (x(t)) -min H
Proof. First observe that
ḣz (t) = x(t) -z, ẋ(t) and ḧz (t) = x(t) -z, ẍ(t) + ẋ(t) 2 .
By (RIGS) γ,λ and the convexity of Φ λ(t) , it ensues that
ḧz (t) + γ(t) ḣz (t) = ẋ(t) 2 + x(t) -z, -∇Φ λ(t) (x(t)) ≤ ẋ(t) 2 + Φ λ(t) (z) -Φ λ(t) (x(t)),
which is precisely (35)-(36). The last statement follows from the fact that argmin Φ λ = argmin Φ for all
λ > 0.
5.3. t t 0 γ(τ ) dτ . Under the assumption
(H 0 ) +∞ t0 ds p(s) < +∞,
the function Γ : [t 0 , +∞[→ R + is defined by Γ(t) = p(t) +∞ t ds p(s) . Clearly, the function Γ is of class C 1
and satisfies
(37) Γ(t) = γ(t)Γ(t) -1, t ≥ t 0 .
Let us define the function E : [t 0 , +∞[→ R by
(38) E(t) = Γ(t) 2 W (t) + h(t) + Γ(t) ḣ(t)
= Γ(t) 2 1 2 ẋ(t) 2 + Φ λ(t) (x(t)) -min H Φ + 1 2 x(t) -x 2 + Γ(t) ẋ(t), x(t) -x
(39) = Γ(t) 2 Φ λ(t) (x(t)) -min H Φ + 1 2 x(t) -x + Γ(t) ẋ(t) 2 .
(40) Ė(t) + Γ(t) (3 -2γ(t)Γ(t)) Φ λ(t) (x(t)) -min H Φ ≤ 0.
Under the assumption
(K 1 ) There exists t 1 ≥ t 0 such that γ(t)Γ(t) ≤ 3/2 for every t ≥ t 1 ,
then we have Ė(t) ≤ 0 for every t ≥ t 1 .
There exist t 1 ≥ t 0 and m < 3/2 such that γ(t)Γ(t) ≤ m for every t ≥ t 1 .Proof. (i) From Proposition 5.3, the function E is nonincreasing on [t 1 , +∞[. It ensues that E(t) ≤ E(t 1 ) for every t ≥ t 1 . Taking into account the expression (39), we deduce that for every t ≥ t 1 , Proposition 5.5. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ) and (K + 1 ). Then, we have Let θ : [t 0 , +∞[→ R + be a differentiable test function, and let t 1 ≥ t 0 be given by the assumption (K + 1 ). Let us multiply the inequality (42) by θ(t) and integrate on [t 1 , t] Theorem 5.6. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ), (K + 1 ), along with(K 2 )In particular, we obtain lim t→+∞ Φ(ξ(t)) = min H Φ, and lim t→+∞ ẋ(t) = 0.
Since E(t) ≥ 0 and γ(t)Γ(t) ≤ m for every t ≥ t 1 , this implies that
t +∞
(3 -2m) t1 Γ(s) (Φ λ(s) (x(s)) -min t0 H Γ(s) ds = +∞. Φ) ds ≤ E(t 1 ).
The inequality (41) is obtained by letting t tend toward infinity. Let x(.) be a solution of (RIGS) γ,λ . Then we have
1/2
(i) As a consequence, setting ξ(t) = prox λ(t)Φ (x(t)), we have +∞ t0 +∞ Φ λ(t) (x(t)) -min H Φ = o 1 t t0 Γ(s) ds and ẋ(t) = o Γ(t) ẋ(t) 2 dt < +∞, and hence t0 Γ(t) W (t) dt < +∞; t0 Γ(s) ds 1 t as t → +∞.
(ii) Proof. By (34) and λ nondecreasing we have +∞ t0 γ(t) t t0 Γ(s) ds ẋ(t) 2 dt < +∞. (44) Φ(ξ(t)) -min H Φ = o 1 t t0 Γ(s) ds and x(t) -ξ(t) = o λ(t) t0 Γ(s) ds t 1/2 as t → +∞.
Ẇ (t) ≤ -γ(t) ẋ(t) 2 . Proof. From Proposition 5.5 (i), we have (42) +∞ t0 Γ(t) W (t) dt < +∞. On the other hand, the energy
function W is nonincreasing by Proposition 5.1. By applying Lemma B.3 in the Appendix, we obtain
t t1 The announced estimates follow immediately. θ(s) Ẇ (s) ds + that W (t) = o t t0 Γ(s) ds t t1 θ(s)γ(s) ẋ(s) 2 ds ≤ 0. 1 as t → +∞. Integrating by parts yields
t t
(43) θ(t)W (t) + θ(s)γ(s) ẋ(s) 2 ds ≤ θ(t 1 )W (t 1 ) + θ(s)W (s) ds.
t1 t1
Using the expression of W and rearranging the terms, we find
t t
θ(t)W (t) + t1 2 . θ(s)γ(s) -θ(s)/2 ẋ(s) 2 ds ≤ θ(t 1 )W (t 1 ) + t1 θ(s)(Φ λ(s) (x(s)) -min
As a consequence, setting ξ(t) = prox λ(t)Φ (x(t)), we have
Φ(ξ(t)) -min Φ ≤ E(t 1 ) Γ(t) 2 and x(t) -ξ(t) 2 ≤ 2λ(t) Γ(t) 2 E(t 1 ).
(ii) Assume moreover that λ(t) t Proof. (i) Since sup t≥t0 t 0
(K + 1 )
Then we have (41) satisfies the following differential inequality +∞ Γ(t) (Φ λ(t) (x(t)) -min H ḧ(t) + γ(t) ḣ(t) ≤ ẋ(t) 2 . Φ) dt ≤ E(t 1 ) 3 -2m t1 From Proposition 5.5 (i), we have +∞ t0 Γ(s) ẋ(s) < +∞.
Under (K + 1 ), we have the estimate +∞ t1 Γ(s)(Φ λ(s) (x(s)) -min H Φ) ds < +∞, see Corollary 5.4 (ii). The
Γ(t) 2 (Φ λ(t) (x(t)) -min H announced estimates follow immediately. Φ) ≤ E(t 1 ) and (ii) Take now θ(t) = t t0 Γ(s) ds. Recalling that W (t) ≥ 0, inequality (43) then implies that for every 1 x(t) -x + Γ(t) ẋ(t) 2 ≤ E(t 1 ). 2 t ≥ t 1 , The first assertion follows immediately. t s t1 t (ii) Now assume (K + 1 ). By integrating (40) on [t 1 , t], we find γ(s) Γ(u) du ẋ(s) 2 ds ≤ Γ(s) ds W (t 1 ) + Γ(s)W (s) ds.
t1 t0 t t0 t1
E(t) + It suffices then to recall that t1 Φ λ(s) (x(s)) -min H +∞ t1 Γ(s)W (s) ds < +∞ under hypothesis (K + Φ Γ(s)(3 -2γ(s)Γ(s)) ds ≤ E(t 1 ). 1 ), see point (i).
H Φ) ds.
(i) Choosing θ(t) = Γ(t)
2
, the above equality gives for every t ≥ t 1 ,
Γ(t) 2 W (t) + t t1 Γ(s)[Γ(s)γ(s) -Γ(s)] ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + 2 t t1 Γ(s) Γ(s)(Φ λ(s) (x(s)) -min H Φ) ds.
Recalling that Γ = γΓ -1, we deduce that
Γ(t) 2 W (t) + t t1 Γ(s) ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + 2 t t1 Γ(s)(γ(s)Γ(s) -1)(Φ λ(s) (x(s)) -min H Φ) ds.
By assumption (K + 1 ), we have γ(t)Γ(t) ≤ 3/2 for every t ≥ t 1 . Since W (t) ≥ 0, it ensues that
t t1 Γ(s) ẋ(s) 2 ds ≤ Γ(t 1 ) 2 W (t 1 ) + t t1 Γ(s)(Φ λ(s) (x(s)) -min H Φ)
ds. Theorem 5.7. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying (H 0 ), (K + 1 ), and (K 2 ). Suppose that λ : [t 0 , +∞[→ R * + is nondecreasing and satisfies sup t≥t0 λ(t) t t0 Γ(s) ds < +∞.
Then, for every solution x(.) of (RIGS) γ,λ the following properties hold:
(i) lim t→+∞ ξ(t) -x(t) = 0, where ξ(t) = prox λ(t)Φ (x(t));
(ii) x(t) converges weakly as t → +∞ toward some x * ∈ argmin Φ.
Γ(s) ds < +∞, the second estimate of (44) implies that lim t→+∞ ξ(t) -x(t) = 0.
(ii) We apply the Opial lemma, see Lemma 2.2. Let us fix x ∈ argmin Φ, and show that lim t→+∞ x(t)-x exists. For that purpose, let us set h(t) = 1 2 x(t) -x 2 . Recall from Lemma 5.2 that the function h 2 ds < +∞. By applying Lemma B.1 with g :
B.1. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying ) dτ . Let g : [t 0 , +∞[→ R be a continuous function. Assume that h : [t 0 , +∞[→ R + is a function of class C 2 satisfying . Then the nonnegative part ḣ+ of ḣ belongs to L 1 (t 0 , +∞), and hence lim t→+∞ h(t) exists. Proof. (i) Let us multiply each member of inequality (48) by p(t) = e ) dτ and integrate on [t 0 , t]. By integrating again on [t 0 , t], we find h(t) ≤ h(t 0 ) + ḣ(t 0 ) |g(s)| ds < +∞. We easily deduce from (50) that for every t ≥ t 0 , < +∞ by assumption, we deduce from (51) and (52) that ḣ+ ∈ L 1 (t 0 , +∞). Hence lim t→+∞ h(t) exists. Let us now state a vector-valued version of Lemma B.1. Lemma B.2. Let γ : [t 0 , +∞[→ R + be a continuous function satisfying < +∞, where the function p is defined by p(t) = e ) dτ . Let F : [t 0 , +∞[→ H be a measurable map such that +∞ t0 Γ(t) F (t) dt < +∞. Assume that x : [t 0 , +∞[→ H is a map of class C 2 satisfying ) dτ and integrate on [t 0 , t]. We obtain for every t ≥ t 0 , By integrating and applying Fubini theorem as in the proof of Lemma B.1, we find F (s) ds < +∞. The strong convergence of x(t) as t → +∞ follows immediately. Owing to the next lemma, we can estimate the rate of convergence of a function w : [t 0 , +∞[→ R + supposed to be nonincreasing and summable with respect to a weight function Γ. Lemma B.3. Let Γ : [t 0 , +∞[→ R + be a measurable function such that +∞ t0 Γ(t) dt = +∞. Assume that w : [t 0 , +∞[→ R + is nonincreasing and satisfies Proof. Let F : [t 0 , +∞[→ R + be the function defined by F (t) = t t0 Γ(s) ds. It follows from the hypothesis +∞ t0 Γ(s) ds = +∞ that the function F is an increasing bijection from [t 0 , +∞[ onto [0, +∞[. For every t ≥ t 0 , let us set α(t) = F -1 ( 1 2 F (t)). By definition, we have
+∞ t0 p(s)g(s) ds. ds p(s) < +∞, where the +∞ t ds p(s) t t 0 γ(τ We obtain function p is defined by p(t) = e t t 0 γ(τ (48) ḧ(t) + γ(t) ḣ(t) ≤ g(t) on [t 0 , +∞[. (i) For every t ≥ t 0 , we have (49) h(t) ≤ h(t 0 ) + ḣ(t 0 ) t t0 du p(u) + t t0 t s du p(u) (ii) Assume that +∞ t0 t t0 du p(u) + t t0 1 p(u) u t0 p(s) g(s) ds du. From Fubini theorem, we have t t0 1 p(u) u t0 p(s) g(s) ds du = t t0 t s du p(u) p(s)g(s) ds, and the inequality (49) follows immediately. (ii) Let us now assume that +∞ t0 Γ(s) (51) ḣ+ (t) ≤ | ḣ(t 0 )| 1 p(t) + 1 p(t) t t0 p(s) |g(s)| ds. By applying Fubini theorem, we find +∞ t0 1 p(t) t t0 p(s) |g(s)| ds dt = +∞ t0 +∞ s dt p(t) p(s) |g(s)| ds = +∞ t0 Γ(s) |g(s)| ds < +∞. (52) Since +∞ t0 dt p(t) +∞ t0 ds p(s) t t 0 γ(τ (53) t t 0 γ(τ ẋ(t) = ẋ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) F (s) ds. Taking the norm of each member, we deduce that ẋ(t) ≤ ẋ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) F (s) ds. +∞ t0 ẋ(t) dt ≤ ẋ(t 0 ) +∞ t0 dt t0 p(t) + +∞ t0 Γ(t)w(t) dt < +∞. Then we have w(t) = o 1 t t0 Γ(s) ds as t → +∞. α(t) t0 Γ(s) ds = 1 2 t t0 Γ(s) ds, hence t α(t) Γ(s) ds = 1 2 t t0 Γ(s) ds. Recalling that the function w is nonincreasing, we obtain t α(t) Γ(s) w(s) ds ≥ w(t) t α(t) Γ(s) ds = 1 2 w(t) t t0 Γ(s) ds. By assumption, we have +∞ t0 Γ(s)w(s) ds < +∞. Since lim t→+∞ α(t) = +∞, we deduce that t Γ(s) +∞ lim t→+∞ α(t)
(50) ḣ(t) ≤ ḣ(t 0 ) 1 p(t) + 1 p(t) t t0 p(s) g(s) ds.
Γ(s) |g(s)| ds < +∞, where Γ : [t 0 , +∞[→ R + is given by Γ(t) = p(t) ẍ(t) + γ(t) ẋ(t) = F (t) on [t 0 , +∞[.
Then ẋ ∈ L 1 (t 0 , +∞), and hence x(t) converges strongly as t → +∞.
Proof. Let us multiply (53) by p(t) = e
with M = sup t≥t0 x(t) -x ∞ < +∞. Taking the upper limit as t → +∞, we deduce from property (20) that lim sup Since this is true for every ε > 0, we conclude that lim t→+∞ x(t) -x ∞ = 0.Lemma B.5. Let (X , . ) be a Banach space, and let x : [t 0 , +∞[→ X be a continuous map, supposed to be bounded on [t 0 , +∞[. Let Λ 1 , Λ 2 : [t 0 , +∞[×[t 0 , +∞[→ R + be measurable functions satisfying[START_REF] Chambolle | On the convergence of the iterates of the Fast Iterative Shrinkage/Thresholding Algorithm[END_REF]. (s, t) -Λ 2 (s, t)| ds = 0.Let us consider the averaged trajectories x 1 , x 2 : [t 0 , +∞[→ X defined byx 1 (t) =Then we have lim t→+∞ x 1 (t) -x 2 (t) = 0.Proof. Let M ≥ 0 be such that x(t) ≤ M for every t ≥ t 0 . Observe thatx 1 (t) -x 2 (t) =
Assume that
+∞
(56) lim t→+∞ |Λ 1 +∞ t0 +∞
Λ 1 (s, t) x(s) ds and x 2 (t) = Λ 2 (s, t) x(s) ds.
t0 t0
+∞
Λ(s, t) ds
T
T
≤ M Λ(s, t) ds + ε,
t0
Λ(s, t) ds
+ ε t→+∞ x(t) -x ∞ ≤ ε. +∞ t0 (Λ 1 (s, t) -Λ 2 (s, t))x(s) ds ≤ +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t) | x(s) ds ≤ M +∞ t0 |Λ 1 (s, t) -Λ 2 (s, t))| ds -→ 0 as t → +∞,
in view of (56).
Appendix A. Yosida regularization and Moreau envelopes A.1. Yosida regularization of an operator A. A set-valued mapping A from H to H assigns to each x ∈ H a set A(x) ⊂ H, hence it is a mapping from H to 2 H . Every set-valued mappping A : H → 2 H can be identified with its graph defined by
The set {x ∈ H : 0 ∈ A(x)} of the zeros of A is denoted by zerA. An operator A : H → 2 H is said to be monotone if for any (x, u), (y, v) ∈ gphA, one has y -x, v -u ≥ 0. It is maximally monotone if there exists no monotone operator whose graph strictly contains gphA. If a single-valued operator A : H → H is continuous and monotone, then it is maximally monotone, cf. [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF]Proposition 2.4].
Given a maximally monotone operator A and λ > 0, the resolvent of A with index λ and the Yosida regularization of A with parameter λ are defined by
respectively. The operator J λA : H → H is nonexpansive and eveywhere defined (indeed it is firmly non-expansive). Moreover, A λ is λ-cocoercive: for all x, y ∈ H we have
This property immediately implies that A λ : H → H is 1 λ -Lipschitz continuous. Another property that proves useful is the resolvent equation (see, for example, [15, Proposition 2.6] or [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF]Proposition 23.6])
which is valid for any λ, µ > 0. This property allows to compute simply the resolvent of A λ by
for any λ, µ > 0. Also note that for any x ∈ H, and any λ > 0
Finally, for any λ > 0, A and A λ have the same solution set S := A -1 λ (0) = A -1 (0). For a detailed presentation of the properties of the maximally monotone operators and the Yosida approximation, the reader can consult [START_REF] Bauschke | Convex Analysis and Monotone Operator Theory in Hilbert spaces[END_REF] or [START_REF] Brézis | Opérateurs maximaux monotones dans les espaces de Hilbert et équations d'évolution[END_REF]. A.2. Differentiability properties of the Moreau envelopes. Lemma A.1. For each x ∈ H, the real-valued function λ → Φ λ (x) is continuously differentiable on ]0, +∞[, with
Proof. By definition of Φ λ , we have
where the infimum in the above expression is achieved at J_λ(x) := (I + λ∂Φ)^{−1}(x). Let us prove that
(47) d/dλ [ λΦ_λ(x) ] = Φ(J_λ(x)).
Eduardo Abi Jaber
email: abijaber@ceremade.dauphine.fr
Omar El Euch
email: omar.el-euch@polytechnique.edu
Multi-factor approximation of rough volatility models
Keywords: Rough volatility models, rough Heston models, stochastic Volterra equations, affine Volterra processes, fractional Riccati equations, limit theorems
Rough volatility models are very appealing because of their remarkable fit of both historical and implied volatilities. However, due to the non-Markovian and non-semimartingale nature of the volatility process, there is no simple way to simulate efficiently such models, which makes risk management of derivatives an intricate task. In this paper, we design tractable multi-factor stochastic volatility models approximating rough volatility models and enjoying a Markovian structure. Furthermore, we apply our procedure to the specific case of the rough Heston model. This in turn enables us to derive a numerical method for solving fractional Riccati equations appearing in the characteristic function of the log-price in this setting.
Introduction
Empirical studies of a very wide range of assets volatility time-series in [START_REF] Gatheral | Volatility is rough[END_REF] have shown that the dynamics of the log-volatility are close to that of a fractional Brownian motion W H with a small Hurst parameter H of order 0.1. Recall that a fractional Brownian motion W H can be built from a two-sided Brownian motion thanks to the Mandelbrot-van Ness representation
W^H_t = (1/Γ(H + 1/2)) ∫₀^t (t − s)^{H−1/2} dW_s + (1/Γ(H + 1/2)) ∫_{−∞}^0 [ (t − s)^{H−1/2} − (−s)^{H−1/2} ] dW_s.
The fractional kernel (t -s) H-1 2 is behind the H -ε Hölder regularity of the volatility for any ε > 0. For small values of the Hurst parameter H, as observed empirically, stochastic volatility models involving the fractional kernel are called rough volatility models.
Aside from modeling historical volatility dynamics, rough volatility models reproduce accurately with very few parameters the behavior of the implied volatility surface, see [START_REF] Bayer | Pricing under rough volatility[END_REF][START_REF] Euch | Roughening Heston[END_REF], especially the at-the-money skew, see [START_REF] Fukasawa | Asymptotic analysis for stochastic volatility: Martingale expansion[END_REF]. Moreover, microstructural foundations of rough volatility are studied in [START_REF] Euch | The microstructural foundations of rough volatility and leverage effect[END_REF][START_REF] Jaisson | Rough fractional diffusions as scaling limits of nearly unstable heavy tailed hawkes processes[END_REF].
In this paper, we are interested in a class of rough volatility models where the dynamics of the asset price S and its stochastic variance V are given by
dS_t = S_t √(V_t) dW_t, S₀ > 0, (1.1)
V_t = V₀ + (1/Γ(H + 1/2)) ∫₀^t (t − u)^{H−1/2} (θ(u) − λV_u) du + (1/Γ(H + 1/2)) ∫₀^t (t − u)^{H−1/2} σ(V_u) dB_u, (1.2)
for all t ∈ [0, T ], on some filtered probability space (Ω, F, F, P). Here T is a positive time horizon, the parameters λ and V 0 are non-negative, H ∈ (0, 1/2) is the Hurst parameter, σ is a continuous function and W = ρB + 1 -ρ 2 B ⊥ with (B, B ⊥ ) a two-dimensional F-Brownian motion and ρ ∈ [-1, 1]. Moreover, θ is a deterministic mean reversion level allowed to be time-dependent to fit the market forward variance curve (E[V t ]) t≤T as explained in Section 2 and in [START_REF] Euch | Perfect hedging in rough Heston models[END_REF]. Under some general assumptions, we establish in Section 2 the existence of a weak non-negative solution to the fractional stochastic integral equation in (1.2) exhibiting H -ε Hölder regularity for any ε > 0. Hence, this class of models is a natural rough extension of classical stochastic volatility models where the fractional kernel is introduced in the drift and stochastic part of the variance process V . Indeed, when H = 1/2, we recover classical stochastic volatility models where the variance process is a standard diffusion.
Despite the fit to the historical and implied volatility, some difficulties are encountered in practice for the simulation of rough volatility models and for pricing and hedging derivatives with them. In fact, due to the introduction of the fractional kernel, we lose the Markovian and semimartingale structure. In order to overcome theses difficulties, we approximate these models by simpler ones that we can use in practice.
In [START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | The microstructural foundations of rough volatility and leverage effect[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF], the rough Heston model (which corresponds to the case of σ(x) = ν √ x) is built as a limit of microscopic Hawkes-based price models. This allowed the understanding of the microstructural foundations of rough volatility and also led to the formula of the characteristic function of the log-price. Hence, the Hawkes approximation enabled us to solve the pricing and hedging under the rough Heston model. However, this approach is specific to the rough Heston case and can not be extended to an arbitrary rough volatility model of the form (1.1)-(1.2).
Inspired by the works of [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Carmona | Approximation of some Gaussian processes[END_REF][START_REF] Harms | Affine representations of fractional processes with applications in mathematical finance[END_REF][START_REF] Muravlëv | Representation of fractal Brownian motion in terms of an infinitedimensional Ornstein-Uhlenbeck process[END_REF], we provide a natural Markovian approximation for the class of rough volatility models (1.1)- (1.2). The main idea is to write the fractional kernel
K(t) = t^{H−1/2}/Γ(H + 1/2) as a Laplace transform of a positive measure μ:
K(t) = ∫₀^{∞} e^{−γt} μ(dγ); μ(dγ) = (γ^{−H−1/2}/(Γ(H + 1/2)Γ(1/2 − H))) dγ. (1.3)
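Identity (1.3) can be checked directly from the definition of the Gamma function; the short computation below (with a = 1/2 − H > 0) is included for the reader's convenience.
% Verification of the Laplace-transform representation (1.3).
\[
\int_{0}^{\infty}e^{-\gamma t}\,\frac{\gamma^{-H-\frac12}}{\Gamma(H+\frac12)\,\Gamma(\frac12-H)}\,d\gamma
=\frac{\Gamma(\frac12-H)\,t^{H-\frac12}}{\Gamma(H+\frac12)\,\Gamma(\frac12-H)}
=\frac{t^{H-\frac12}}{\Gamma(H+\frac12)}=K(t),
\]
using \(\int_{0}^{\infty}e^{-\gamma t}\gamma^{a-1}\,d\gamma=\Gamma(a)\,t^{-a}\) with \(a=\frac12-H\).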
We then approximate µ by a finite sum of Dirac measures µ n = n i=1 c n i δ γ n i with positive weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n , for n ≥ 1. This in turn yields an approxi-mation of the fractional kernel by a sequence of smoothed kernels (K n ) n≥1 given by
K n (t) = n i=1 c n i e -γ n i t , n ≥ 1.
This leads to a multi-factor stochastic volatility model (S n , V n ) = (S n t , V n t ) t≤T , which is Markovian with respect to the spot price and n variance factors (V n,i ) 1≤i≤n and is defined as follows
dS^n_t = S^n_t √(V^n_t) dW_t, V^n_t = g_n(t) + Σ_{i=1}^{n} c^n_i V^{n,i}_t, (1.4)
where dV n,i t = (-γ n i V n,i t -λV n t )dt + σ(V n t )dB t , and g n (t) = V 0 + t 0 K n (t -s)θ(s)ds with the initial conditions S n 0 = S 0 and V n,i 0 = 0. Note that the factors (V n,i ) 1≤i≤n share the same dynamics except that they mean revert at different speeds (γ n i ) 1≤i≤n . Relying on existence results of stochastic Volterra equations in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF], we provide in Theorem 3.1 the strong existence and uniqueness of the model (S n , V n ), under some general conditions. Thus the approximation (1.4) is uniquely well-defined. We can therefore deal with simulation, pricing and hedging problems under these multi-factor models by using standard methods developed for stochastic volatility models. Theorem 3.5, which is the main result of this paper, establishes the convergence of the multifactor approximation sequence (S n , V n ) n≥1 to the rough volatility model (S, V ) in (1.1)-(1.2) when the number of factors n goes to infinity, under a suitable choice of the weights and mean reversions (c n i , γ n i ) 1≤i≤n . This convergence is obtained from a general result about stability of stochastic Volterra equations derived in Section 3. [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF].
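To make the multi-factor dynamics (1.4) concrete, here is a minimal Euler-type simulation sketch in Python. It is an illustration only: the equidistant binning of the measure μ used for (c^n_i, γ^n_i) and the truncation of its tail are one crude possible choice (not the optimized choice of weights studied later in the paper), σ(v) = ν√v and a constant curve θ(t) ≡ θ₀ are taken for simplicity, and the variance is floored at zero to keep the square root well defined.

import numpy as np
from math import gamma as G

def kernel_weights(n, H, eta_max=50.0):
    # Illustrative (c_i, gamma_i): bin the measure mu of (1.3) over an equidistant
    # grid of (0, eta_max]; the tail of mu beyond eta_max is simply dropped.
    edges = np.linspace(0.0, eta_max, n + 1)
    cst = 1.0 / (G(H + 0.5) * G(0.5 - H) * (0.5 - H))
    c = cst * (edges[1:] ** (0.5 - H) - edges[:-1] ** (0.5 - H))  # c_i = mu((eta_{i-1}, eta_i])
    gam = 0.5 * (edges[:-1] + edges[1:])                          # midpoints as mean reversions
    return c, gam

def simulate_multifactor(c, gam, lam=0.3, rho=-0.7, nu=0.3, V0=0.02,
                         S0=1.0, theta0=0.02, T=1.0, n_steps=500, seed=0):
    # Euler scheme for (1.4) with sigma(v) = nu*sqrt(v) and theta(t) = theta0.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    V_fact = np.zeros(len(c))              # factors V^{n,i}, all started at 0
    S = S0

    def g_n(t):                            # g_n(t) = V0 + int_0^t K_n(t-s) theta0 ds
        return V0 + theta0 * np.sum(c * (1.0 - np.exp(-gam * t)) / gam)

    for k in range(n_steps):
        t = k * dt
        V = max(g_n(t) + c @ V_fact, 0.0)  # current variance V^n_t, floored at zero
        dB = np.sqrt(dt) * rng.standard_normal()
        dW = rho * dB + np.sqrt(1.0 - rho**2) * np.sqrt(dt) * rng.standard_normal()
        # all factors share the same Brownian increment dB, only the speeds differ
        V_fact += (-gam * V_fact - lam * V) * dt + nu * np.sqrt(V) * dB
        S *= np.exp(np.sqrt(V) * dW - 0.5 * V * dt)   # log-Euler step for dS = S sqrt(V) dW
    return S

c, gam = kernel_weights(n=20, H=0.1)
print(simulate_multifactor(c, gam))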
In [START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF], the characteristic function of the log-price for the specific case of the rough Heston model is obtained in terms of a solution of a fractional Riccati equation. We highlight in Section 4.1 that the corresponding multi-factor approximation (1.4) inherits a similar affine structure as in the rough Heston model. More precisely, it displays the same characteristic function formula involving a n-dimensional classical Riccati equation instead of the fractional one. This suggests solving numerically the fractional Riccati equation by approximating it through a n-dimensional classical Riccati equation with large n, see Theorem 4.1. In Section 4.2, we discuss the accuracy and complexity of this numerical method and compare it to the Adams scheme, see [START_REF] Diethelm | A predictor-corrector approach for the numerical solution of fractional differential equations[END_REF][START_REF] Diethelm | Detailed error analysis for a fractional Adams method[END_REF][START_REF] Diethelm | The fracpece subroutine for the numerical solution of differential equations of fractional order[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF].
The paper is organized as follows. In Section 2, we define the class of rough volatility models (1.1)-(1.2) and discuss the existence of such models. Then, in Section 3, we build a sequence of multi-factor stochastic volatility models of the form of (1.4) and show its convergence to a rough volatility model. By applying this approximation to the specific case of the rough Heston model, we obtain a numerical method for computing solutions of fractional Riccati equations that is discussed in Section 4. Finally, some proofs are relegated to Section 5 and some useful technical results are given in an Appendix.
A definition of rough volatility models
We provide in this section the precise definition of rough volatility models given by (1.1)-(1.2). We discuss the existence of such models and more precisely of a non-negative solution of the fractional stochastic integral equation (1.2). The existence of an unconstrained weak solution V = (V t ) t≤T is guaranteed by Corollary B.2 in the Appendix when σ is a continuous function with linear growth and θ satisfies the condition
∀ε > 0, ∃C ε > 0; ∀u ∈ (0, T ] |θ(u)| ≤ C ε u -1 2 -ε . (2.1)
Furthermore, the paths of V are Hölder continuous of any order strictly less than H and sup
t∈[0,T ] E[|V t | p ] < ∞, p > 0. (2.2)
Moreover using Theorem B.4 together with Remarks B.5 and B.6 in the Appendix 1 , the existence of a non-negative continuous process V satisfying (1.2) is obtained under the additional conditions of non-negativity of V 0 and θ and σ(0) = 0. We can therefore introduce the following class of rough volatility models.
Definition 2.1. (Rough volatility models) We define a rough volatility model by any
ℝ × ℝ₊-valued continuous process (S, V) = (S_t, V_t)_{t≤T} satisfying
dS_t = S_t √(V_t) dW_t,
V_t = V₀ + (1/Γ(H + 1/2)) ∫₀^t (t − u)^{H−1/2} (θ(u) − λV_u) du + (1/Γ(H + 1/2)) ∫₀^t (t − u)^{H−1/2} σ(V_u) dB_u,
on a filtered probability space (Ω, F, F, P) with non-negative initial conditions (S₀, V₀). Here T is a positive time horizon, the parameter λ is non-negative, H ∈ (0, 1/2) is the Hurst parameter and W = ρB + √(1 − ρ²) B⊥ with (B, B⊥) a two-dimensional F-Brownian motion and ρ ∈ [−1, 1]. Moreover, to guarantee the existence of such a model, σ : ℝ → ℝ is assumed continuous with linear growth such that σ(0) = 0 and θ : [0, T] → ℝ is a deterministic non-negative function satisfying (2.1).
As done in [START_REF] Euch | Perfect hedging in rough Heston models[END_REF], we allow the mean reversion level θ to be time dependent in order to be consistent with the market forward variance curve. More precisely, the following result shows that the mean reversion level θ can be written as a functional of the forward variance curve (E[V t ]) t≤T .
Proposition 2.2. Let (S, V ) be a rough volatility model given by Definition 2.1. Then, (E[V t ]) t≤T is linked to θ by the following formula
E[V_t] = V₀ + ∫₀^t (t − s)^{α−1} E_α(−λ(t − s)^α) θ(s) ds, t ∈ [0, T], (2.3)
where α = H + 1/2 and E_α(x) = Σ_{k≥0} x^k / Γ(α(k+1)) is the Mittag-Leffler function. Moreover, (E[V_t])_{t≤T} admits a fractional derivative² of order α at each time t ∈ (0, T] and
θ(t) = D^α(E[V_·] − V₀)_t + λ E[V_t], t ∈ (0, T]. (2.4)
¹ Theorem B.4 is used here with the fractional kernel K(t) = t^{H−1/2}/Γ(H + 1/2), together with b(x) = −λx and g(t) = V₀ + ∫₀^t K(t − u) θ(u) du.
² Recall that the fractional derivative of order α ∈ (0, 1) of a function f is given by (d/dt) ∫₀^t ((t − s)^{−α}/Γ(1 − α)) f(s) ds whenever this expression is well defined.
Proof. Thanks to (2.2) together with Fubini theorem, t → E[V t ] solves the following fractional linear integral equation
E[V_t] = V₀ + (1/Γ(H + 1/2)) ∫₀^t (t − s)^{H−1/2} (θ(s) − λ E[V_s]) ds, t ∈ [0, T], (2.5)
yielding (2.3) by Theorem A.3 and Remark A.5 in the Appendix. Finally, (2.4) is obviously obtained from (2.5).
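As an illustration of formula (2.3), take a constant mean-reversion level θ(s) ≡ θ₀ (a special case chosen here only to obtain a closed-form expression). Integrating the series defining E_α term by term gives
% Forward variance curve for a constant mean-reversion level θ₀.
\[
\mathbb{E}[V_t]=V_0+\theta_0\int_0^t s^{\alpha-1}E_{\alpha}(-\lambda s^{\alpha})\,ds
=V_0+\theta_0\sum_{k\ge0}\frac{(-\lambda)^{k}\,t^{\alpha(k+1)}}{\Gamma(\alpha(k+1)+1)}
=V_0+\frac{\theta_0}{\lambda}\Bigl(1-\sum_{k\ge0}\frac{(-\lambda t^{\alpha})^{k}}{\Gamma(\alpha k+1)}\Bigr),
\]
where the last series is the classical one-parameter Mittag-Leffler function (not the E_α of Proposition 2.2); in particular E[V_t] → V₀ + θ₀/λ as t → +∞ when λ > 0.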
Finally, note that uniqueness of the fractional stochastic integral equation (1.2) is a difficult problem. Adapting the proof in [START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], we can prove pathwise uniqueness when σ is η-Hölder continuous with η ∈ (1/(1+2H), 1]. This result does not cover the square-root case, i.e. σ(x) = ν √ x, for which weak uniqueness has been established in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF].
Multi-factor approximation of rough volatility models
Thanks to the small Hölder regularity of the variance process, models of Definition 2.1 are able to reproduce the rough behavior of the volatility observed in a wide range of assets. However, the fractional kernel forces the variance process to leave both the semimartingale and Markovian worlds, which makes numerical approximation procedures a difficult and challenging task in practice. The aim of this section is to construct a tractable and satisfactory Markovian approximation of any rough volatility model $(S,V)$ of Definition 2.1. Because $S$ is entirely determined by $\big(\int_0^\cdot V_s\, ds, \int_0^\cdot \sqrt{V_s}\, dW_s\big)$, it suffices to construct a suitable approximation of the variance process $V$. This is done by smoothing the fractional kernel. More precisely, denoting by $K(t) = \frac{t^{H-\frac{1}{2}}}{\Gamma(H+1/2)}$, the fractional stochastic integral equation (1.2) reads
$$V_t = V_0 + \int_0^t K(t-s)\Big( \big(\theta(s) - \lambda V_s\big)\, ds + \sigma(V_s)\, dB_s \Big),$$
which is a stochastic Volterra equation. Approximating the fractional kernel $K$ by a sequence of smooth kernels $(K^n)_{n\ge 1}$, one would expect the convergence of the corresponding sequence of stochastic Volterra equations
$$V^n_t = V_0 + \int_0^t K^n(t-s)\Big( \big(\theta(s) - \lambda V^n_s\big)\, ds + \sigma(V^n_s)\, dB_s \Big), \quad n \ge 1,$$
to the fractional one.
The argument of this section runs as follows. First, exploiting the identity (1.3), we construct a family of potential candidates for $(K^n, V^n)_{n\ge 1}$ in Section 3.1 such that $V^n$ enjoys a Markovian structure. Second, we provide convergence conditions of $(K^n)_{n\ge 1}$ to $K$ in $L^2([0,T],\mathbb{R})$ in Section 3.2. Finally, the approximation result for the rough volatility model $(S,V)$ is established in Section 3.3, relying on an abstract stability result of stochastic Volterra equations postponed to Section 3.4 for the sake of exposition.
Construction of the approximation
In [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Harms | Affine representations of fractional processes with applications in mathematical finance[END_REF][START_REF] Muravlëv | Representation of fractal Brownian motion in terms of an infinitedimensional Ornstein-Uhlenbeck process[END_REF], a Markovian representation of the fractional Brownian motion of Riemann-Liouville type is provided by writing the fractional kernel $K(t) = \frac{t^{H-\frac{1}{2}}}{\Gamma(H+1/2)}$ as a Laplace transform of a non-negative measure $\mu$ as in (1.3). This representation is extended in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF] for the Volterra square-root process. Adopting the same approach, we establish a similar representation for any solution of the fractional stochastic integral equation (1.2) in terms of an infinite-dimensional system of processes sharing the same Brownian motion and mean reverting at different speeds. Indeed, by using the linear growth of $\sigma$ together with the stochastic Fubini theorem, see [START_REF] Veraar | The stochastic Fubini theorem revisited[END_REF], we obtain that
$$V_t = g(t) + \int_0^\infty V^\gamma_t\, \mu(d\gamma), \quad t \in [0,T],$$
with
$$dV^\gamma_t = \big(-\gamma V^\gamma_t - \lambda V_t\big)\, dt + \sigma(V_t)\, dB_t, \quad V^\gamma_0 = 0, \quad \gamma \ge 0,$$
and
$$g(t) = V_0 + \int_0^t K(t-s)\theta(s)\, ds. \qquad (3.1)$$
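For intuition, the identity (1.3) represents the fractional kernel as $K(t) = \int_0^\infty e^{-\gamma t}\mu(d\gamma)$. The sketch below (an addition, not from the paper) checks this numerically, assuming the explicit density $\mu(d\gamma) = \gamma^{-H-1/2}\, d\gamma / \big(\Gamma(H+1/2)\Gamma(1/2-H)\big)$, which is the normalization implicitly used in the estimates of Section 3.2.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

H = 0.1  # Hurst parameter, as in the numerical section below

def K(t):
    """Fractional kernel t^(H-1/2) / Gamma(H+1/2)."""
    return t ** (H - 0.5) / gamma(H + 0.5)

def K_laplace(t):
    """K(t) recovered as int_0^inf exp(-g t) mu(dg), with the assumed density of mu."""
    c_mu = 1.0 / (gamma(H + 0.5) * gamma(0.5 - H))
    f = lambda g: np.exp(-g * t) * g ** (-H - 0.5)
    val = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]   # split at 1 to tame the endpoint singularity
    return c_mu * val

for t in (0.1, 0.5, 1.0):
    print(t, K(t), K_laplace(t))
```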
Inspired by [START_REF] Carmona | Fractional Brownian motion and the Markov property[END_REF][START_REF] Carmona | Approximation of some Gaussian processes[END_REF], we approximate the measure $\mu$ by a weighted sum of Dirac measures
$$\mu^n = \sum_{i=1}^n c^n_i\, \delta_{\gamma^n_i}, \quad n \ge 1,$$
leading to the following approximation $V^n = (V^n_t)_{t\le T}$ of the variance process $V$:
$$V^n_t = g^n(t) + \sum_{i=1}^n c^n_i V^{n,i}_t, \quad t \in [0,T], \qquad (3.2)$$
$$dV^{n,i}_t = \big(-\gamma^n_i V^{n,i}_t - \lambda V^n_t\big)\, dt + \sigma(V^n_t)\, dB_t, \quad V^{n,i}_0 = 0,$$
where
$$g^n(t) = V_0 + \int_0^t K^n(t-u)\theta(u)\, du, \qquad (3.3)$$
and
$$K^n(t) = \sum_{i=1}^n c^n_i e^{-\gamma^n_i t}. \qquad (3.4)$$
The choice of the positive weights $(c^n_i)_{1\le i\le n}$ and mean reversions $(\gamma^n_i)_{1\le i\le n}$, which is crucial for the accuracy of the approximation, is studied in Section 3.2 below. Before proving the convergence of $(V^n)_{n\ge 1}$, we shall first discuss the existence and uniqueness of such processes. This is done by rewriting the stochastic equation (3.2) as a stochastic Volterra equation of the form
$$V^n_t = g^n(t) + \int_0^t K^n(t-s)\Big( -\lambda V^n_s\, ds + \sigma(V^n_s)\, dB_s \Big), \quad t \in [0,T]. \qquad (3.5)$$
The existence of a continuous non-negative weak solution $V^n$ is ensured by Theorem B.4 together with Remarks B.5 and B.6 in the Appendix^3, because $\theta$ and $V_0$ are non-negative and $\sigma(0) = 0$. Moreover, pathwise uniqueness of solutions to (3.5) follows by adapting the standard arguments of [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF], provided a suitable Hölder continuity of $\sigma$, see Proposition B.3 in the Appendix. Note that this extension is made possible by the smoothness of the kernel $K^n$. For instance, this approach fails for the fractional kernel because of its singularity at zero. This leads us to the following result, which establishes the strong existence and uniqueness of a non-negative solution of (3.5) and equivalently of (3.2).
Theorem 3.1. Assume that θ : [0, T ] → R is a deterministic non-negative function satisfying (2.1) and that σ : R → R is η-Hölder continuous with σ(0) = 0 and η ∈ [1/2, 1]. Then, there exists a unique strong non-negative solution V n = (V n t ) t≤T to the stochastic Volterra equation (3.5) for each n ≥ 1.
Due to the uniqueness of (3.2), we obtain that $V^n$ is a Markovian process with respect to the $n$ state variables $(V^{n,i})_{1\le i\le n}$, which we call the factors of $V^n$. Moreover, $V^n$ being non-negative, it can model a variance process. This leads to the following definition of multi-factor stochastic volatility models.

Definition 3.2 (Multi-factor stochastic volatility models). We define the sequence of multi-factor stochastic volatility models $(S^n, V^n) = (S^n_t, V^n_t)_{t\le T}$ as the unique $\mathbb{R}\times\mathbb{R}_+$-valued strong solution of
$$dS^n_t = S^n_t \sqrt{V^n_t}\, dW_t, \qquad V^n_t = g^n(t) + \sum_{i=1}^n c^n_i V^{n,i}_t,$$
with
$$dV^{n,i}_t = \big(-\gamma^n_i V^{n,i}_t - \lambda V^n_t\big)\, dt + \sigma(V^n_t)\, dB_t, \quad V^{n,i}_0 = 0, \quad S^n_0 = S_0 > 0,$$
on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{P}, \mathbb{F})$, where $\mathbb{F}$ is the canonical filtration of a two-dimensional Brownian motion $(W, W^\perp)$ and $B = \rho W + \sqrt{1-\rho^2}\, W^\perp$ with $\rho \in [-1,1]$. Here, the weights $(c^n_i)_{1\le i\le n}$ and mean reversions $(\gamma^n_i)_{1\le i\le n}$ are positive, $\sigma : \mathbb{R} \to \mathbb{R}$ is $\eta$-Hölder continuous such that $\sigma(0) = 0$, $\eta \in [1/2,1]$, and $g^n$ is given by (3.3), that is
$$g^n(t) = V_0 + \int_0^t K^n(t-s)\theta(s)\, ds,$$
with a non-negative initial variance $V_0$, a kernel $K^n$ defined as in (3.4) and a non-negative deterministic function $\theta : [0,T] \to \mathbb{R}$ satisfying (2.1).
Note that the strong existence and uniqueness of (S n , V n ) follows from Theorem 3.1. This model is Markovian with n + 1 state variables which are the spot price S n and the factors of the variance process V n,i for i ∈ {1, . . . , n}.
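To make the Markovian structure concrete, here is a minimal simulation sketch of $(S^n, V^n)$ in the square-root case $\sigma(x) = \nu\sqrt{x}$ with constant $\theta$. It is an illustrative explicit Euler scheme with the variance floored at zero — a pragmatic discretization choice of ours, not a scheme proposed or analyzed in the paper — and it assumes the weights $c^n_i$ and mean reversions $\gamma^n_i$ are supplied (e.g. chosen as in Section 3.2).

```python
import numpy as np

def simulate_multifactor(c, gam, S0, V0, theta0, lam, nu, rho, T, n_steps, seed=0):
    """Illustrative explicit Euler simulation of (S^n, V^n) of Definition 3.2 with sigma(x) = nu*sqrt(x)
    and constant theta. The variance is floored at zero, a pragmatic choice not taken from the paper."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # g^n(t) = V0 + theta0 * sum_i c_i (1 - exp(-gam_i t)) / gam_i  for constant theta
    g = V0 + theta0 * np.sum(c[:, None] * (1 - np.exp(-gam[:, None] * t)) / gam[:, None], axis=0)
    S = np.empty(n_steps + 1); V = np.empty(n_steps + 1)
    S[0], V[0] = S0, g[0]
    U = np.zeros(len(c))                               # factors V^{n,i}, all started at 0
    for k in range(n_steps):
        dW, dWp = rng.normal(0.0, np.sqrt(dt), 2)
        dB = rho * dW + np.sqrt(1 - rho ** 2) * dWp    # B = rho W + sqrt(1-rho^2) W^perp
        sqV = np.sqrt(max(V[k], 0.0))
        U = U + (-gam * U - lam * V[k]) * dt + nu * sqV * dB
        V[k + 1] = max(g[k + 1] + np.dot(c, U), 0.0)
        S[k + 1] = S[k] * np.exp(sqV * dW - 0.5 * V[k] * dt)   # log-Euler step for dS = S sqrt(V) dW
    return t, S, V
```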
An approximation of the fractional kernel
Relying on (3.5), we can see the process $V^n$ as an approximation of $V$, solution of (1.2), obtained by smoothing the fractional kernel $K(t) = \frac{t^{H-\frac{1}{2}}}{\Gamma(H+1/2)}$ into $K^n(t) = \sum_{i=1}^n c^n_i e^{-\gamma^n_i t}$. Intuitively, we need to choose $K^n$ close to $K$ when $n$ goes to infinity, so that $(V^n)_{n\ge 1}$ converges to $V$. Inspired by [START_REF] Carmona | Approximation of some Gaussian processes[END_REF], we give in this section a condition on the weights $(c^n_i)_{1\le i\le n}$ and mean reversion terms $0 < \gamma^n_1 < \dots < \gamma^n_n$ so that the convergence
$$\|K^n - K\|_{2,T} \to 0$$
holds as $n$ goes to infinity, where $\|\cdot\|_{2,T}$ is the usual $L^2([0,T],\mathbb{R})$ norm. Let $(\eta^n_i)_{0\le i\le n}$ be auxiliary mean reversion terms such that $\eta^n_0 = 0$ and $\eta^n_{i-1} \le \gamma^n_i \le \eta^n_i$ for $i \in \{1,\dots,n\}$. Writing $K$ as the Laplace transform of $\mu$ as in (1.3), we obtain that
$$\|K^n - K\|_{2,T} \le \int_{\eta^n_n}^\infty \big\|e^{-\gamma(\cdot)}\big\|_{2,T}\, \mu(d\gamma) + \sum_{i=1}^n J^n_i, \quad \text{with } J^n_i = \Big\| c^n_i e^{-\gamma^n_i(\cdot)} - \int_{\eta^n_{i-1}}^{\eta^n_i} e^{-\gamma(\cdot)}\, \mu(d\gamma) \Big\|_{2,T}.$$
We start by dealing with the first term,
$$\int_{\eta^n_n}^\infty \big\|e^{-\gamma(\cdot)}\big\|_{2,T}\, \mu(d\gamma) = \int_{\eta^n_n}^\infty \sqrt{\frac{1-e^{-2\gamma T}}{2\gamma}}\, \mu(d\gamma) \le \frac{1}{H\Gamma(H+1/2)\Gamma(1/2-H)\sqrt{2}}\, (\eta^n_n)^{-H}.$$
Moreover, by choosing
$$c^n_i = \int_{\eta^n_{i-1}}^{\eta^n_i} \mu(d\gamma), \qquad \gamma^n_i = \frac{1}{c^n_i}\int_{\eta^n_{i-1}}^{\eta^n_i} \gamma\, \mu(d\gamma), \quad i \in \{1,\dots,n\}, \qquad (3.6)$$
and using the Taylor-Lagrange inequality up to the second order, we obtain
$$\Big| c^n_i e^{-\gamma^n_i t} - \int_{\eta^n_{i-1}}^{\eta^n_i} e^{-\gamma t}\, \mu(d\gamma) \Big| \le \frac{t^2}{2}\int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma - \gamma^n_i)^2\, \mu(d\gamma), \quad t \in [0,T]. \qquad (3.7)$$
Therefore,
$$\sum_{i=1}^n J^n_i \le \frac{T^{5/2}}{2\sqrt{5}} \sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma^n_i - \gamma)^2\, \mu(d\gamma).$$
This leads to the following inequality
$$\|K^n - K\|_{2,T} \le f^{(2)}_n\big((\eta^n_i)_{0\le i\le n}\big),$$
where $f^{(2)}_n$ is a function of the auxiliary mean reversions defined by
$$f^{(2)}_n\big((\eta^n_i)_{1\le i\le n}\big) = \frac{T^{\frac{5}{2}}}{2\sqrt{5}} \sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma - \gamma^n_i)^2\, \mu(d\gamma) + \frac{1}{H\Gamma(H+1/2)\Gamma(1/2-H)\sqrt{2}}\, (\eta^n_n)^{-H}. \qquad (3.8)$$
Hence, we obtain the convergence of K n to the fractional kernel under the following choice of weights and mean reversions.
Assumption 3.1. We assume that the weights and mean reversions are given by (3.6) such that $\eta^n_0 = 0 < \eta^n_1 < \dots < \eta^n_n$ and
$$\eta^n_n \to \infty, \qquad \sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma^n_i - \gamma)^2\, \mu(d\gamma) \to 0, \qquad (3.9)$$
as n goes to infinity.
Proposition 3.3. Fix $(c^n_i)_{1\le i\le n}$ and $(\gamma^n_i)_{1\le i\le n}$ as in Assumption 3.1 and $K^n$ given by (3.4), for all $n \ge 1$. Then, $(K^n)_{n\ge 1}$ converges in $L^2([0,T],\mathbb{R})$ to the fractional kernel $K(t) = \frac{t^{H-1/2}}{\Gamma(H+1/2)}$ as $n$ goes to infinity.
There exist several choices of auxiliary factors such that condition (3.9) is met. For instance, assume that $\eta^n_i = i\pi_n$ for each $i \in \{0,\dots,n\}$ with $\pi_n > 0$. It follows from
$$\sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma - \gamma^n_i)^2\, \mu(d\gamma) \le \pi_n^2 \int_0^{\eta^n_n} \mu(d\gamma) = \frac{1}{(1/2-H)\Gamma(H+1/2)\Gamma(1/2-H)}\, \pi_n^{\frac{5}{2}-H} n^{\frac{1}{2}-H},$$
that (3.9) is satisfied for
$$\eta^n_n = n\pi_n \to \infty, \qquad \pi_n^{\frac{5}{2}-H} n^{\frac{1}{2}-H} \to 0,$$
as $n$ tends to infinity. In this case,
$$\|K^n - K\|_{2,T} \le \frac{1}{H\Gamma(H+1/2)\Gamma(1/2-H)\sqrt{2}} \left( (\eta^n_n)^{-H} + \frac{H T^{\frac{5}{2}}}{\sqrt{10}\,(1/2-H)}\, \pi_n^2\, (\eta^n_n)^{\frac{1}{2}-H} \right).$$
This upper bound is minimal for
$$\pi_n = \frac{n^{-\frac{1}{5}}}{T} \left( \frac{\sqrt{10}\,(1-2H)}{5-2H} \right)^{\frac{2}{5}}, \qquad (3.10)$$
and
$$\|K^n - K\|_{2,T} \le C_H\, n^{-\frac{4H}{5}},$$
where $C_H$ is a positive constant that can be computed explicitly and depends only on the Hurst parameter $H \in (0,1/2)$.

Remark 3.4. Note that the kernel approximation in Proposition 3.3 can be easily extended to any kernel of the form
$$K(t) = \int_0^\infty e^{-\gamma t}\, \mu(d\gamma),$$
where $\mu$ is a non-negative measure such that
$$\int_0^\infty (1 \wedge \gamma^{-1/2})\, \mu(d\gamma) < \infty.$$
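As a concrete illustration of the recipe (3.6) with the uniform auxiliary grid $\eta^n_i = i\pi_n$ and $\pi_n$ from (3.10), the sketch below computes the weights and mean reversions in closed form — assuming, as above, the explicit density of $\mu$, under which the integrals in (3.6) reduce to power integrals — and then estimates $\|K^n - K\|_{2,T}$ by quadrature. Function names and discretization choices are ours.

```python
import numpy as np
from scipy.special import gamma

def multifactor_weights(n, H, T):
    """Weights c_i^n and mean reversions gamma_i^n from (3.6) with eta_i^n = i*pi_n, pi_n as in (3.10).
    Assumes mu(dg) = g^(-H-1/2) dg / (Gamma(H+1/2) Gamma(1/2-H)), so the integrals are power integrals."""
    pi_n = n ** (-0.2) / T * (np.sqrt(10) * (1 - 2 * H) / (5 - 2 * H)) ** 0.4
    eta = pi_n * np.arange(n + 1)
    c_mu = 1.0 / (gamma(H + 0.5) * gamma(0.5 - H))
    p0, p1 = 0.5 - H, 1.5 - H
    m0 = c_mu * (eta[1:] ** p0 - eta[:-1] ** p0) / p0   # int_{eta_{i-1}}^{eta_i} mu(dg)
    m1 = c_mu * (eta[1:] ** p1 - eta[:-1] ** p1) / p1   # int_{eta_{i-1}}^{eta_i} g mu(dg)
    return m0, m1 / m0                                  # (c_i^n, gamma_i^n)

def l2_kernel_error(n, H=0.1, T=1.0, n_grid=4000):
    """Midpoint-rule estimate of ||K^n - K||_{2,T}."""
    c, gam = multifactor_weights(n, H, T)
    t = (np.arange(n_grid) + 0.5) * T / n_grid
    K = t ** (H - 0.5) / gamma(H + 0.5)
    Kn = (c[None, :] * np.exp(-gam[None, :] * t[:, None])).sum(axis=1)
    return np.sqrt(((Kn - K) ** 2).mean() * T)

print([round(l2_kernel_error(n), 4) for n in (10, 50, 200)])
```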
Convergence result
We assume now that the weights and mean reversions of the multi-factor stochastic volatility model $(S^n, V^n)$ satisfy Assumption 3.1. Thanks to Proposition 3.3, the smoothed kernel $K^n$ is close to the fractional one for large $n$. Because $V^n$ satisfies the stochastic Volterra equation (3.5), $V^n$ has to be close to $V$, and thus, by passing to the limit, $(S^n, V^n)_{n\ge 1}$ should converge to the rough volatility model $(S,V)$ of Definition 2.1 as $n$ becomes large. This is the object of the next theorem, which is the main result of this paper.

Theorem 3.5. Let $(S^n, V^n)_{n\ge 1}$ be a sequence of multi-factor stochastic volatility models given by Definition 3.2. Then, under Assumption 3.1, the family $(S^n, V^n)_{n\ge 1}$ is tight for the uniform topology and any limit point $(S,V)$ is a rough volatility model given by Definition 2.1.

Theorem 3.5 states the convergence in law of $(S^n, V^n)_{n\ge 1}$ whenever the fractional stochastic integral equation (1.2) admits a unique weak solution. In order to prove Theorem 3.5, whose proof is given in Section 5.2 below, a more general stability result for $d$-dimensional stochastic Volterra equations is established in the next subsection.
Stability of stochastic Volterra equations
As mentioned above, Theorem 3.5 relies on the study of the stability of more general $d$-dimensional stochastic Volterra equations of the form
$$X_t = g(t) + \int_0^t K(t-s) b(X_s)\, ds + \int_0^t K(t-s) \sigma(X_s)\, dW_s, \quad t \in [0,T], \qquad (3.11)$$
where $b : \mathbb{R}^d \to \mathbb{R}^d$, $\sigma : \mathbb{R}^d \to \mathbb{R}^{d\times m}$ are continuous and satisfy the linear growth condition, $K \in L^2([0,T],\mathbb{R}^{d\times d})$ admits a resolvent of the first kind $L$, see Appendix A.2, and $W$ is an $m$-dimensional Brownian motion on some filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$. From Proposition B.1 in the Appendix, $g : [0,T] \to \mathbb{R}^d$ and $K \in L^2([0,T],\mathbb{R}^{d\times d})$ should satisfy Assumption B.1, that is
$$|g(t+h) - g(t)|^2 + \int_0^h |K(s)|^2\, ds + \int_0^{T-h} |K(h+s) - K(s)|^2\, ds \le C h^{2\gamma}, \qquad (3.12)$$
for any $t, h \ge 0$ with $t+h \le T$ and for some positive constants $C$ and $\gamma$, to guarantee the weak existence of a continuous solution $X$ of (3.11).
More precisely, we consider a sequence $X^n = (X^n_t)_{t\le T}$ of continuous weak solutions to the stochastic Volterra equation (3.11) with a kernel $K^n \in L^2([0,T],\mathbb{R}^{d\times d})$ admitting a resolvent of the first kind, on some filtered probability space $(\Omega^n, \mathcal{F}^n, \mathbb{F}^n, \mathbb{P}^n)$,
$$X^n_t = g^n(t) + \int_0^t K^n(t-s) b(X^n_s)\, ds + \int_0^t K^n(t-s) \sigma(X^n_s)\, dW^n_s, \quad t \in [0,T],$$
with $g^n : [0,T] \to \mathbb{R}^d$ and $K^n$ satisfying (3.12) for every $n \ge 1$. The stability of (3.11) means the convergence in law of the family of solutions $(X^n)_{n\ge 1}$ to a limiting process $X$ which is a solution of (3.11), when $(K^n, g^n)$ is close to $(K, g)$ for large $n$.
This convergence is established by verifying first the Kolmogorov tightness criterion for the sequence (X n ) n≥1 . It is obtained when g n and K n satisfy (3.12) uniformly in n in the following sense.
Assumption 3.2. There exist positive constants $\gamma$ and $C$ such that
$$\sup_{n\ge 1} \left( |g^n(t+h) - g^n(t)|^2 + \int_0^h |K^n(s)|^2\, ds + \int_0^{T-h} |K^n(h+s) - K^n(s)|^2\, ds \right) \le C h^{2\gamma},$$
for any $t, h \ge 0$ with $t + h \le T$.
The following result, whose proof is postponed to Section 5.1 below, states the convergence of (X n ) n≥1 to a solution of (3.11).
Theorem 3.6. Assume that
$$\int_0^T |K(s) - K^n(s)|^2\, ds \longrightarrow 0, \qquad g^n(t) \longrightarrow g(t),$$
for any $t \in [0,T]$ as $n$ goes to infinity. Then, under Assumption 3.2, the sequence $(X^n)_{n\ge 1}$ is tight for the uniform topology and any limit point $X$ is a solution of the stochastic Volterra equation (3.11).
The particular case of the rough Heston model
The rough Heston model introduced in [START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF] is a particular case of the class of rough volatility models of Definition 2.1, with $\sigma(x) = \nu\sqrt{x}$ for some positive parameter $\nu$, that is
$$dS_t = S_t \sqrt{V_t}\, dW_t, \quad S_0 > 0, \qquad V_t = g(t) + \int_0^t K(t-s)\Big( -\lambda V_s\, ds + \nu\sqrt{V_s}\, dB_s \Big),$$
where $K(t) = \frac{t^{H-\frac{1}{2}}}{\Gamma(H+1/2)}$ denotes the fractional kernel and $g$ is given by (3.1). Aside from reproducing accurately the historical and implied volatility, the rough Heston model displays a closed formula for the characteristic function of the log-price in terms of a solution to a fractional Riccati equation, allowing for fast pricing and calibration, see [START_REF] Euch | Roughening Heston[END_REF]. More precisely, it is shown in [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF][START_REF] Euch | Perfect hedging in rough Heston models[END_REF] that
$$L(t,z) = \mathbb{E}\Big[ \exp\big( z \log(S_t/S_0) \big) \Big] \text{ is given by } \exp\left( \int_0^t F(z, \psi(t-s,z))\, g(s)\, ds \right), \qquad (4.1)$$
where $\psi(\cdot,z)$ is the unique continuous solution of the fractional Riccati equation
$$\psi(t,z) = \int_0^t K(t-s) F(z, \psi(s,z))\, ds, \quad t \in [0,T], \qquad (4.2)$$
with
$$F(z,x) = \frac{1}{2}(z^2 - z) + (\rho\nu z - \lambda)x + \frac{\nu^2}{2} x^2,$$
and $z \in \mathbb{C}$ such that $\Re(z) \in [0,1]$. Unlike the classical case $H = 1/2$, (4.2) does not admit an explicit solution. However, it can be solved numerically, for instance through the Adams scheme developed in [START_REF] Diethelm | A predictor-corrector approach for the numerical solution of fractional differential equations[END_REF][START_REF] Diethelm | Detailed error analysis for a fractional Adams method[END_REF][START_REF] Diethelm | The fracpece subroutine for the numerical solution of differential equations of fractional order[END_REF][START_REF] Euch | The characteristic function of rough Heston models[END_REF]. In this section, we show that the multi-factor approximation applied to the rough Heston model gives rise to another natural numerical scheme for solving the fractional Riccati equation. Furthermore, we will establish the convergence of this scheme with explicit errors.
Multi-factor scheme for the fractional Riccati equation
We consider the multi-factor approximation $(S^n, V^n)$ of Definition 3.2 with $\sigma(x) = \nu\sqrt{x}$, where the number of factors $n$ is large, that is
$$dS^n_t = S^n_t \sqrt{V^n_t}\, dW_t, \qquad V^n_t = g^n(t) + \sum_{i=1}^n c^n_i V^{n,i}_t,$$
with
$$dV^{n,i}_t = \big(-\gamma^n_i V^{n,i}_t - \lambda V^n_t\big)\, dt + \nu\sqrt{V^n_t}\, dB_t, \quad V^{n,i}_0 = 0, \quad S^n_0 = S_0.$$
Recall that $g^n$ is given by (3.3) and converges pointwise to $g$ as $n$ goes to infinity, see Lemma 5.1. We write the dynamics of $(S^n, V^n)$ in terms of a Volterra Heston model with the smoothed kernel $K^n$ given by (3.4) as follows:
$$dS^n_t = S^n_t \sqrt{V^n_t}\, dW_t, \qquad V^n_t = g^n(t) - \int_0^t K^n(t-s)\lambda V^n_s\, ds + \int_0^t K^n(t-s)\nu\sqrt{V^n_s}\, dB_s.$$
In [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF], the characteristic function formula of the log-price (4.1) is extended to the general class of Volterra Heston models. In particular,
$$L^n(t,z) = \mathbb{E}\Big[ \exp\big( z \log(S^n_t/S_0) \big) \Big] \text{ is given by } \exp\left( \int_0^t F(z, \psi^n(t-s,z))\, g^n(s)\, ds \right), \qquad (4.3)$$
where $\psi^n(\cdot,z)$ is the unique continuous solution of the Riccati Volterra equation
$$\psi^n(t,z) = \int_0^t K^n(t-s) F(z, \psi^n(s,z))\, ds, \quad t \in [0,T], \qquad (4.4)$$
for each $z \in \mathbb{C}$ with $\Re(z) \in [0,1]$.
Thanks to the weak uniqueness of the rough Heston model, established in several works [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], and to Theorem 3.5, $(S^n, V^n)_{n\ge 1}$ converges in law for the uniform topology to $(S,V)$ when $n$ tends to infinity. In particular, $L^n(t,z)$ converges pointwise to $L(t,z)$. Therefore, we expect $\psi^n(\cdot,z)$ to be close to the solution of the fractional Riccati equation (4.2). This is the object of the next theorem, whose proof is reported to Section 5.3 below.

Theorem 4.1. There exists a positive constant $C$ such that, for any $a \in [0,1]$, $b \in \mathbb{R}$ and $n \ge 1$,
$$\sup_{t\in[0,T]} |\psi^n(t, a+ib) - \psi(t, a+ib)| \le C(1 + b^4) \int_0^T |K^n(s) - K(s)|\, ds,$$
where $\psi(\cdot, a+ib)$ (resp. $\psi^n(\cdot, a+ib)$) denotes the unique continuous solution of the Riccati Volterra equation (4.2) (resp. (4.4)).

Relying on the $L^1$-convergence of $(K^n)_{n\ge 1}$ to $K$ under Assumption 3.1, see Proposition 3.3, we have the uniform convergence of $(\psi^n(\cdot,z))_{n\ge 1}$ to $\psi(\cdot,z)$ on $[0,T]$. Hence, Theorem 4.1 suggests a new numerical method for the computation of the fractional Riccati solution (4.2), where an explicit error is given. Indeed, set
$$\psi^{n,i}(t,z) = \int_0^t e^{-\gamma^n_i (t-s)} F(z, \psi^n(s,z))\, ds, \quad i \in \{1,\dots,n\}.$$
Then,
$$\psi^n(t,z) = \sum_{i=1}^n c^n_i\, \psi^{n,i}(t,z),$$
and $(\psi^{n,i}(\cdot,z))_{1\le i\le n}$ solves the following $n$-dimensional system of ordinary Riccati equations:
$$\partial_t \psi^{n,i}(t,z) = -\gamma^n_i \psi^{n,i}(t,z) + F(z, \psi^n(t,z)), \quad \psi^{n,i}(0,z) = 0, \quad i \in \{1,\dots,n\}. \qquad (4.5)$$
Hence, (4.5) can be solved numerically by usual finite difference methods, leading to $\psi^n(\cdot,z)$ as an approximation of the fractional Riccati solution.
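The following sketch spells out one such finite difference method: a plain explicit Euler discretization of the system (4.5). It is only an illustrative stand-in; the paper does not prescribe a particular time-stepping scheme, and the function name is ours.

```python
import numpy as np

def riccati_multifactor(z, c, gam, lam, rho, nu, T, n_steps):
    """Explicit-Euler sketch for the n-dimensional Riccati ODE system (4.5).
    Returns psi^n(t_k, z) on the uniform grid t_k = k*T/n_steps."""
    dt = T / n_steps
    F = lambda x: 0.5 * (z * z - z) + (rho * nu * z - lam) * x + 0.5 * nu ** 2 * x * x
    psi_i = np.zeros(len(c), dtype=complex)          # psi^{n,i}(t_k, z), all starting from 0
    psi_path = np.zeros(n_steps + 1, dtype=complex)  # psi^n(t_k, z) = sum_i c_i psi^{n,i}(t_k, z)
    for k in range(n_steps):
        psi_i = psi_i + dt * (-gam * psi_i + F(psi_path[k]))
        psi_path[k + 1] = np.dot(c, psi_i)
    return psi_path

# Example with the parameters of Section 4.2 (weights, e.g., from the sketch after Remark 3.4):
# c, gam = multifactor_weights(50, H=0.1, T=1.0)
# psi_n = riccati_multifactor(0.5 + 1j, c, gam, lam=0.3, rho=-0.7, nu=0.3, T=1.0, n_steps=200)
```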
Numerical illustrations
In this section, we consider a rough Heston model with the following parameters
λ = 0.3, ρ = -0.7, ν = 0.3, H = 0.1, V 0 = 0.02, θ ≡ 0.02.
We discuss the accuracy of the multi-factor approximation sequence (S n , V n ) n≥1 as well as the corresponding Riccati Volterra solution (ψ n (•, z)) n≥1 , for different choices of the weights (c n i ) 1≤i≤n and mean reversions (γ n i ) 1≤i≤n . This is achieved by first computing, for different number of factors n, the implied volatility σ n (k, T ) of maturity T and log-moneyness k by a Fourier inversion of the characteristic function formula (4.3), see [START_REF] Carr | Option valuation using the fast Fourier transform[END_REF][START_REF] Lewis | A simple option formula for general jump-diffusion and other exponential lévy processes[END_REF] for instance. In a second step, we compare σ n (k, T ) to the implied volatility σ(k, T ) of the rough Heston model. We also compare the Riccati Volterra solution ψ n (T, z) to the fractional one ψ(T, z).
Note that the Riccati Volterra solution $\psi^n(\cdot,z)$ is computed by solving numerically the $n$-dimensional Riccati equation (4.5) with a classical finite difference scheme. The complexity of such a scheme is $O(n \times n_{\Delta t})$, where $n_{\Delta t}$ is the number of time steps of the scheme, while the complexity of the Adams scheme used for the computation of $\psi(\cdot,z)$ is $O(n_{\Delta t}^2)$. In the following numerical illustrations, we fix $n_{\Delta t} = 200$.
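For completeness, the next sketch indicates how the pricing step can be assembled: $L^n(T,z)$ is obtained from $\psi^n$ via (4.3) with constant $\theta$, and the call price follows from a Lewis-type Fourier inversion of the kind used in Section 5.5. The quadrature truncation, step sizes and helper names are our own choices; implied volatilities are then recovered by the usual Black-Scholes inversion (not shown).

```python
import numpy as np

def char_fun_Ln(z, psi_grid, c, gam, V0, theta0, lam, rho, nu, T):
    """L^n(T,z) via (4.3) with constant theta: exp( int_0^T F(z, psi^n(T-s,z)) g_n(s) ds ).
    psi_grid holds psi^n(t_k, z) on a uniform grid of [0,T], e.g. from riccati_multifactor above."""
    t = np.linspace(0.0, T, len(psi_grid))
    g = V0 + theta0 * np.sum(c[:, None] * (1 - np.exp(-gam[:, None] * t)) / gam[:, None], axis=0)
    psi_rev = psi_grid[::-1]                       # psi^n(T - s, z) on the s-grid
    F = 0.5 * (z * z - z) + (rho * nu * z - lam) * psi_rev + 0.5 * nu ** 2 * psi_rev ** 2
    return np.exp(np.trapz(F * g, t))

def call_price_lewis(k, L_half_ib, S0=1.0, b_max=200.0, n_b=4001):
    """Lewis-type inversion as used in Section 5.5:
    C(k,T) = S0 - (S0 e^{k/2} / 2pi) Re int_R e^{-ibk} L(T, 1/2+ib) / (b^2 + 1/4) db,
    where L_half_ib(b) returns L(T, 1/2 + ib)."""
    b = np.linspace(-b_max, b_max, n_b)
    vals = np.exp(-1j * b * k) * np.array([L_half_ib(bi) for bi in b]) / (b ** 2 + 0.25)
    return S0 - S0 * np.exp(0.5 * k) / (2 * np.pi) * np.real(np.trapz(vals, b))
```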
In order to guarantee the convergence, the weights and mean reversions have to satisfy Assumption 3.1 and in particular they should be of the form (3.6) in terms of auxiliary mean reversions $(\eta^n_i)_{0\le i\le n}$ satisfying (3.9). For instance, one can fix
$$\eta^n_i = i\pi_n, \quad i \in \{0,\dots,n\}, \qquad (4.6)$$
where $\pi_n$ is defined by (3.10), as previously done in Section 3.2. For this particular choice, Figure 1 shows a decrease of the relative error $\frac{\psi^n(T,ib) - \psi(T,ib)}{\psi(T,ib)}$ towards zero for different values of $b$. We also observe in Figure 2 below that the implied volatility $\sigma_n(k,T)$ of the multi-factor approximation is close to $\sigma(k,T)$ for a number of factors $n \ge 20$. Notice that the approximation is more accurate around the money. In order to obtain a more accurate convergence, we can minimize the upper bound $f^{(2)}_n\big((\eta^n_i)_{0\le i\le n}\big)$ of $\|K^n - K\|_{2,T}$ defined in (3.8). Hence, we choose $(\eta^n_i)_{0\le i\le n}$ to be a solution of the constrained minimization problem
$$\inf_{(\eta^n_i)_i \in \mathcal{E}_n} f^{(2)}_n\big((\eta^n_i)_{0\le i\le n}\big), \qquad (4.7)$$
where
$$\mathcal{E}_n = \big\{ (\eta^n_i)_{0\le i\le n};\ 0 = \eta^n_0 < \eta^n_1 < \dots < \eta^n_n \big\}.$$
We notice from Figure 3 that the relative error $\big|\frac{\psi^n(T,ib) - \psi(T,ib)}{\psi(T,ib)}\big|$ is smaller under the choice of factors (4.7). Indeed, the Volterra approximation $\psi^n(T,ib)$ is now closer to the fractional Riccati solution $\psi(T,ib)$, especially for a small number of factors. However, when $n$ is large, the accuracy of the approximation seems to be close to the one under (4.6). For instance, when $n = 500$, the relative error is around 1% under both (4.6) and (4.7). In the same way, we observe in Figure 4 that the accuracy of the implied volatility approximation $\sigma_n(k,T)$ is more satisfactory under (4.7), especially for a small number of factors. Theorem 4.1 states that the convergence of $\psi^n(\cdot,z)$ actually depends on the $L^1([0,T],\mathbb{R})$-error between $K^n$ and $K$. Similarly to the computations of Section 3.2, we may show that
$$\int_0^T |K^n(s) - K(s)|\, ds \le f^{(1)}_n\big((\eta^n_i)_{0\le i\le n}\big),$$
where
$$f^{(1)}_n\big((\eta^n_i)_{0\le i\le n}\big) = \frac{T^3}{6} \sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma - \gamma^n_i)^2\, \mu(d\gamma) + \frac{1}{\Gamma(H+3/2)\Gamma(1/2-H)}\, (\eta^n_n)^{-H-\frac{1}{2}}.$$
This leads to choosing $(\eta^n_i)_{0\le i\le n}$ as a solution of the constrained minimization problem
$$\inf_{(\eta^n_i)_i \in \mathcal{E}_n} f^{(1)}_n\big((\eta^n_i)_{0\le i\le n}\big). \qquad (4.8)$$
It is easy to show that such auxiliary mean reversions $(\eta^n_i)_{0\le i\le n}$ satisfy (3.9) and thus Assumption 3.1 is met. Figures 5 and 6 exhibit similar results to the ones in Figures 3 and 4 corresponding to the choice of factors (4.7). In fact, we notice in practice that the solution of the minimization problem (4.7) is close to the one of (4.8).
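In practice, (4.7) and (4.8) are finite-dimensional constrained minimizations and can be handled by any standard optimizer. The sketch below treats (4.8), again assuming the explicit density of $\mu$ so that $f^{(1)}_n$ is available in closed form, and enforces the ordering constraint of $\mathcal{E}_n$ by an exponential reparametrization; this is one possible implementation, practical for moderate $n$, and not necessarily the one used for the figures.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize

def f1(eta, H, T):
    """f_n^(1) from the bound above; cell moments of mu computed from its explicit density."""
    c_mu = 1.0 / (gamma(H + 0.5) * gamma(0.5 - H))
    m = [c_mu * (eta[1:] ** p - eta[:-1] ** p) / p for p in (0.5 - H, 1.5 - H, 2.5 - H)]
    quad_term = np.sum(m[2] - m[1] ** 2 / m[0])   # sum_i int (g - gamma_i)^2 mu(dg), gamma_i as in (3.6)
    return T ** 3 / 6 * quad_term + eta[-1] ** (-H - 0.5) / (gamma(H + 1.5) * gamma(0.5 - H))

def optimal_eta(n, H=0.1, T=1.0):
    """Approximate solution of (4.8); eta_i = cumsum(exp(x_i)) enforces 0 = eta_0 < ... < eta_n.
    Nelder-Mead keeps the sketch dependency-free but is only practical for moderate n."""
    pi_n = n ** (-0.2) / T * (np.sqrt(10) * (1 - 2 * H) / (5 - 2 * H)) ** 0.4
    x0 = np.full(n, np.log(pi_n))                 # start from the uniform grid (4.6)
    obj = lambda x: f1(np.concatenate(([0.0], np.cumsum(np.exp(x)))), H, T)
    res = minimize(obj, x0, method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-12})
    return np.concatenate(([0.0], np.cumsum(np.exp(res.x))))
```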
Upper bound for call prices error
Using a Fourier transform method, we can also provide an error bound between the price of the call $C^n(k,T) = \mathbb{E}[(S^n_T - S_0 e^k)_+]$ in the multi-factor model and the price of the same call $C(k,T) = \mathbb{E}[(S_T - S_0 e^k)_+]$ in the rough Heston model. However, for technical reasons, this bound is obtained for a modification of the multi-factor approximation $(S^n, V^n)_{n\ge 1}$ of Definition 3.2, where the function $g^n$ given initially by (3.3) is updated into
$$g^n(t) = \int_0^t K^n(t-s)\left( \frac{V_0\, s^{-H-\frac{1}{2}}}{\Gamma(1/2-H)} + \theta(s) \right) ds, \qquad (4.9)$$
where $K^n$ is the smoothed approximation (3.4) of the fractional kernel. Note that the strong existence and uniqueness of $V^n$ is still directly obtained from Proposition B.3, and its non-negativity from Theorem B.4 together with Remarks B.5 and B.6 in the Appendix^4. Although for $g^n$ satisfying (4.9) the sequence $(V^n)_{n\ge 1}$ cannot be tight^5, the corresponding spot price $(S^n)_{n\ge 1}$ converges, as shown in the following proposition.

Proposition 4.2. Let $(S^n, V^n)_{n\ge 1}$ be a sequence of multi-factor Heston models as in Definition 3.2 with $\sigma(x) = \nu\sqrt{x}$ and $g^n$ given by (4.9). Then, under Assumption 3.1, $(S^n, \int_0^\cdot V^n_s\, ds)_{n\ge 1}$ converges in law for the uniform topology to $(S, \int_0^\cdot V_s\, ds)$, where $(S,V)$ is a rough Heston model as in Definition 2.1 with $\sigma(x) = \nu\sqrt{x}$.

Note that the characteristic function formula (4.3) still holds. Using Theorem 4.1 together with a Fourier transform method, we obtain an explicit error for the call prices. We refer to Section 5.5 below for the proof.

Proposition 4.3. Let $C(k,T)$ be the price of the call in the rough Heston model with maturity $T > 0$ and log-moneyness $k \in \mathbb{R}$. We denote by $C^n(k,T)$ the price of the call in the multi-factor Heston model of Definition 3.2 such that $g^n$ is given by (4.9). If $|\rho| < 1$, then there exists a positive constant $c > 0$ such that
$$|C(k,T) - C^n(k,T)| \le c \int_0^T |K(s) - K^n(s)|\, ds, \quad n \ge 1.$$
Proofs
In this section, we use the convolution notations together with the resolvent definitions of Appendix A. We denote by c any positive constant independent of the variables t, h and n and that may vary from line to line. For any h ∈ R, we will use the notation ∆ h to denote the semigroup operator of right shifts defined by ∆ h f : t → f (h + t) for any function f . We first prove Theorem 3.6, which is the building block of Theorem 3.5. Then, we turn to the proofs of the results contained in Section 4, which concern the particular case of the rough Heston model.
Proof of Theorem 3.6
Tightness of $(X^n)_{n\ge 1}$: We first show that, for any $p \ge 2$,
$$\sup_{n\ge 1} \sup_{t\le T} \mathbb{E}[|X^n_t|^p] < \infty. \qquad (5.1)$$
Thanks to Proposition B.1, we already have
$$\sup_{t\le T} \mathbb{E}[|X^n_t|^p] < \infty. \qquad (5.2)$$
Using the linear growth of $(b,\sigma)$ and (5.2) together with the Jensen and BDG inequalities, we get
$$\mathbb{E}[|X^n_t|^p] \le c \left( \sup_{t\le T} |g^n(t)|^p + \Big( \int_0^T |K^n(s)|^2\, ds \Big)^{\frac{p}{2}-1} \int_0^t |K^n(t-s)|^2 \big(1 + \mathbb{E}[|X^n_s|^p]\big)\, ds \right).$$
Relying on Assumption 3.2 and the convergence of $(g^n(0), \int_0^T |K^n(s)|^2\, ds)_{n\ge 1}$, $\sup_{t\le T}|g^n(t)|^p$ and $\int_0^T |K^n(s)|^2\, ds$ are uniformly bounded in $n$. This leads to
$$\mathbb{E}[|X^n_t|^p] \le c \left( 1 + \int_0^t |K^n(t-s)|^2\, \mathbb{E}[|X^n_s|^p]\, ds \right).$$
By the Grönwall type inequality in Lemma A.4 in the Appendix, we deduce that
$$\mathbb{E}[|X^n_t|^p] \le c \left( 1 + \int_0^t E^n_c(s)\, ds \right) \le c \left( 1 + \int_0^T E^n_c(s)\, ds \right),$$
where $E^n_c \in L^1([0,T],\mathbb{R})$ is the canonical resolvent of $|K^n|^2$ with parameter $c$, defined in Appendix A.3, and the last inequality follows from the fact that $\int_0^\cdot E^n_c(s)\, ds$ is non-decreasing by Corollary C.2. The convergence of $|K^n|^2$ to $|K|^2$ in $L^1([0,T],\mathbb{R})$ implies the convergence of $E^n_c$ to the canonical resolvent of $|K|^2$ with parameter $c$ in $L^1([0,T],\mathbb{R})$, see [16, Theorem 2.3.1]. Thus, $\int_0^T E^n_c(s)\, ds$ is uniformly bounded in $n$, yielding (5.1).

We now show that $(X^n)_{n\ge 1}$ exhibits the Kolmogorov tightness criterion. In fact, using again the linear growth of $(b,\sigma)$ and (5.1) together with the Jensen and BDG inequalities, we obtain, for any $p \ge 2$ and $t, h \ge 0$ such that $t + h \le T$,
$$\mathbb{E}[|X^n_{t+h} - X^n_t|^p] \le c \left( |g^n(t+h) - g^n(t)|^p + \Big( \int_0^{T-h} |K^n(h+s) - K^n(s)|^2\, ds \Big)^{p/2} + \Big( \int_0^h |K^n(s)|^2\, ds \Big)^{p/2} \right).$$
Hence, Assumption 3.2 leads to
$$\mathbb{E}[|X^n_{t+h} - X^n_t|^p] \le c h^{p\gamma},$$
and therefore to the tightness of $(X^n)_{n\ge 1}$ for the uniform topology.
Convergence of (X n ) n≥1 : Let M n t = t 0 σ(X n s )dW n s . As M n t = t 0 σσ * (X n s )ds, ( M n ) n≥1 is tight and consequently we get the tightness of (M n ) n≥1 from [18, Theorem VI-4.13]. Let (X, M ) = (X t , M t ) t≤T be a possible limit point of (X n , M n ) n≥1 . Thanks to [18, Theorem VI-6.26], M is a local martingale and necessarily
M t = t 0 σσ * (X s )ds, t ∈ [0, T ].
Moreover, setting $Y^n_t = \int_0^t b(X^n_s)\, ds + M^n_t$, the associativity property (A.1) in the Appendix yields
$$(L * X^n)_t = (L * g^n)(t) + \big(L * ((K^n - K) * dY^n)\big)_t + Y^n_t, \qquad (5.3)$$
where L is the resolvent of the first kind of K defined in Appendix A.2. By the Skorokhod representation theorem, we construct a probability space supporting a sequence of copies of (X n , M n ) n≥1 that converges uniformly on [0, T ], along a subsequence, to a copy of (X, M ) almost surely, as n goes to infinity. We maintain the same notations for these copies. Hence, we have sup
t∈[0,T ] |X n t -X t | → 0, sup t∈[0,T ] |M n t -M t | → 0,
almost surely, as n goes to infinity. Relying on the continuity and linear growth of b together with the dominated convergence theorem, it is easy to obtain for any t ∈ [0, T ]
(L * X n ) t → (L * X) t , t 0 b(X n s )ds → t 0 b(X s )ds,
almost surely as n goes to infinity. Moreover for each t ∈ [0, T ]
(L * g n )(t) → (L * g)(t),
by the uniform boundedness of g n in n and t and the dominated convergence theorem. Finally thanks to the Jensen inequality,
E[| (L * ((K n -K) * dY n )) t | 2 ] ≤ c sup t≤T E[| ((K n -K) * dY n ) t | 2 ].
From (5.1) and the linear growth of (b, σ), we deduce
sup t≤T E[| ((K n -K) * dY n ) t | 2 ] ≤ c T 0 |K n (s) -K(s)| 2 ds,
which goes to zero when n is large. Consequently, we send n to infinity in (5.3) and obtain the following almost surely equality, for each t ∈ [0, T ],
(L * X) t = (L * g)(t) + t 0 b(X s )ds + M t . (5.4)
Recall also that M =
• 0 σσ * (X s )ds. Hence, by [23, Theorem V-3.9], there exists a mdimensional Brownian motion W such that
M t = t 0 σ(X s )dW s , t ∈ [0, T ].
The processes in (5.4) being continuous, we deduce that, almost surely,
(L * X) t = (L * g)(t) + t 0 b(X s )ds + t 0 σ(X s )dW s , t ∈ [0, T ].
We convolve by K and use the associativity property (A.1) in the Appendix to get that, almost surely,
t 0 X s ds = t 0 g(s)ds + t 0 s 0 K(s -u)(b(X u )du + σ(X u )dW u ) ds, t ∈ [0, T ].
Finally it is easy to see that the processes above are differentiable and we conclude that X is solution of the stochastic Volterra equation (3.11), by taking the derivative.
Proof of Theorem 3.5
Theorem 3.5 is easily obtained once we prove the tightness of $(V^n)_{n\ge 1}$ for the uniform topology and that any limit point $V$ is a solution of the fractional stochastic integral equation (1.2). This is a direct consequence of Theorem 3.6, by setting $d = m = 1$, $g$ and $g^n$ respectively as in (3.1) and (3.3), $b(x) = -\lambda x$, $K$ being the fractional kernel and $K^n(t) = \sum_{i=1}^n c^n_i e^{-\gamma^n_i t}$ its smoothed approximation. Under Assumption 3.1, $(K^n)_{n\ge 1}$ converges in $L^2([0,T],\mathbb{R})$ to the fractional kernel, see Proposition 3.3. Hence, it is left to show the pointwise convergence of $(g^n)_{n\ge 1}$ to $g$ on $[0,T]$ and that $(K^n, g^n)_{n\ge 1}$ satisfies Assumption 3.2.

Lemma 5.1 (Convergence of $g^n$). Define $g^n : [0,T] \to \mathbb{R}$ and $g : [0,T] \to \mathbb{R}$ respectively by (3.3) and (3.1), such that $\theta : [0,T] \to \mathbb{R}$ satisfies (2.1). Under Assumption 3.1, we have for any $t \in [0,T]$
$$g^n(t) \to g(t),$$
as $n$ tends to infinity.
Proof. Because θ satisfies (2.1), it is enough to show that for each t ∈ [0, T ]
$$\int_0^t (t-s)^{-\frac{1}{2}-\varepsilon} |K^n(s) - K(s)|\, ds \qquad (5.5)$$
converges to zero as $n$ goes to infinity, for some $\varepsilon > 0$ and $K^n$ given by (3.4). Using the representation of $K$ as the Laplace transform of $\mu$ as in (1.3), we obtain that (5.5) is bounded by
$$\int_0^t (t-s)^{-\frac{1}{2}-\varepsilon} \int_{\eta^n_n}^\infty e^{-\gamma s}\, \mu(d\gamma)\, ds + \sum_{i=1}^n \int_0^t (t-s)^{-\frac{1}{2}-\varepsilon} \Big| c^n_i e^{-\gamma^n_i s} - \int_{\eta^n_{i-1}}^{\eta^n_i} e^{-\gamma s}\, \mu(d\gamma) \Big|\, ds. \qquad (5.6)$$
The first term in (5.6) converges to zero for large $n$ by the dominated convergence theorem, because $\eta^n_n$ tends to infinity, see Assumption 3.1. Using the Taylor-Lagrange inequality (3.7), the second term in (5.6) is dominated by
$$\frac{1}{2}\int_0^t (t-s)^{-\frac{1}{2}-\varepsilon} s^2\, ds\ \sum_{i=1}^n \int_{\eta^n_{i-1}}^{\eta^n_i} (\gamma - \gamma^n_i)^2\, \mu(d\gamma),$$
which goes to zero thanks to Assumption 3.1.
Lemma 5.2 (K n satisfying Assumption 3.2). Under Assumption 3.1, there exists C > 0 such that, for any t, h ≥ 0 with t + h ≤ T ,
sup n≥1 T -h 0 |K n (h + s) -K n (s)| 2 ds + h 0 |K n (s)| 2 ds ≤ Ch 2H ,
where K n is defined by (3.4).
Proof. We start by proving that for any t, h ≥ 0 with t
+ h ≤ T h 0 |K n (s)| 2 ds ≤ ch 2H .
(5.7)
In fact we know that this inequality is satisfied for
K(t) = t H-1 2 Γ(H+1/2) . Thus it is enough to prove K n -K 2,h ≤ ch H ,
where • 2,h stands for the usual L 2 ([0, h], R) norm. Relying on the Laplace transform representation of K given by (1.3), we obtain
K n -K 2,h ≤ ∞ η n n e -γ(•) 2,h µ(dγ) + n i=1 J n i,h ,
where
J n i,h = c n i e -γ n i (•) - η n i η n i-1
e -γ(•) µ(dγ) 2,h . We start by bounding the first term,
∞ η n n e -γ(•) 2,h µ(dγ) ≤ ∞ 0 1 -e -2γh 2γ µ(dγ) = h H Γ(H + 1/2)Γ(1/2 -H) √ 2 ∞ 0 1 -e -2γ γ γ -H-1 2 dγ.
As in Section 3.2, we use the Taylor-Lagrange inequality (3.7) to get
n i=1 J n i,h ≤ 1 2 √ 5 h 5 2 n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ).
Using the boundedness of
n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) n≥1 from Assumption 3.
1, we deduce (5.7). We now prove
T -h 0 |K n (h + s) -K n (s)| 2 ds ≤ ch 2H .
(5.8)
In the same way, it is enough to show
(∆ h K n -∆ h K) -(K n -K) 2,T -h ≤ ch H ,
Similarly to the previous computations, we get
(∆ h K n -∆ h K) -(K n -K) 2,T -h ≤ ∞ η n n e -γ(•) -e -γ(h+•) 2,T -h µ(dγ) + n i=1 J n i,h , with J n i,h = c n i (e -γ n i (•) -e -γ n i (h+•) ) - η n i η n i-1 (e -γ(•) -e -γ(h+•) )µ(dγ) 2,T -h . Notice that ∞ η n n e -γ(•) -e -γ(h+•) 2,T -h µ(dγ) = ∞ η n n (1 -e -γh ) 1 -e -2γ(T -h) 2γ µ(dγ) ≤ c ∞ 0 (1 -e -γh )γ -H-1 dγ ≤ ch H .
Moreover, fix h, t > 0 and set χ(γ) = e -γt -e -γ(t+h) . The second derivative reads χ (γ) = h t 2 γe -γt 1 -e -γh γh -he -γ(t+h) -2te -γ(t+h) , γ > 0.
(5.9)
Because x → xe -x and x → 1-e -x
x are bounded functions on (0, ∞), there exists C > 0 independent of t, h ∈ [0, T ] such that
|χ (γ)| ≤ Ch, γ > 0.
The Taylor-Lagrange formula, up to the second order, leads to
|c n i (e -γ n i t -e -γ n i (t+h) ) - η n i η n i-1 (e -γt -e -γ(t+h) )µ(dγ)| ≤ C 2 h η n i η n i-1 (γ -γ n i ) 2 µ(dγ). Thus n i=1 J n i,h ≤ C 2 h n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ).
Finally, (5.8) follows from the boundedness of
n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) n≥1 due to Assump- tion 3.1. Lemma 5.3 (g n satisfying Assumption 3.2). Define g n : [0, T ] → R by (3.3) such that θ : [0, T ] → R satisfies (2.1). Under Assumption 3.1, for each ε > 0, there exists C ε > 0 such that for any t, h ≥ 0 with t + h ≤ T sup n≥1 |g n (t) -g n (t + h)| ≤ C ε h H-ε .
Proof. Because θ satisfies (2.1), it is enough to prove that, for each fixed ε > 0, there exists
C > 0 such that sup n≥1 h 0 (h -s) -1 2 -ε |K n (s)|ds ≤ Ch H-ε , (5.10)
and
sup n≥1 t 0 (t -s) -1 2 -ε |K n (s) -K n (h + s)|ds ≤ Ch H-ε , (5.11)
for any t, h ≥ 0 with t + h ≤ T . (5.10) being satisfied for the fractional kernel, it is enough to establish
h 0 (h -s) -1 2 -ε |K n (s) -K(s)|ds ≤ ch H-ε .
In the proof of Lemma 5.1, it is shown that
h 0 (h -s) -1 2 -ε |K n (s) -K(s)|ds is bounded by (5.6), that is h 0 (h -s) -1 2 -ε ∞ η n n e -γs µ(dγ)ds + n i=1 h 0 (h -s) -1 2 -ε |c n i e -γ n i s - η n i η n i-1 e -γs µ(dγ)|ds.
The first term is dominated by
h 0 (h -s) -1 2 -ε ∞ 0 e -γs µ(dγ)ds = h H-ε B(1/2 -ε, H + 1/2) B(1/2 -H, H + 1/2) ,
where B is the usual Beta function. Moreover thanks to (3.7) and Assumption 3.1, we get
n i=1 h 0 (h -s) -1 2 -ε |c n i e -γ n i s - η n i η n i-1
e -γs µ(dγ)|ds ≤ ch
(t -s) -1 2 -ε |(K n (s) -∆ h K n (s)) -(K(s) -∆ h K(s))| ds ≤ ch H-ε .
By similar computations as previously and using (5.9), we get that
t 0 (t -s) -1 2 -ε |(K n (s) -∆ h K n (s)) -(K(s) -∆ h K(s))| ds is dominated by c t 0 (t -s) 1 2 -ε ∞ η n n (1 -e -γh )e -γs µ(dγ)ds + h n i=1 η n i η n i-1 (γ -γ n i ) 2 µ(dγ) .
The first term being bounded by
t 0 (t -s) 1 2 -ε ∞ 0 (1 -e -γh )e -γs µ(dγ)ds = t 0 (t -s) 1 2 -ε (K(s) -K(h + s))ds ≤ ch H-ε ,
Assumption 3.1 leads to (5.11).
Proof of Theorem 4.1
Uniform boundedness : We start by showing the uniform boundedness of the unique continuous solutions (ψ n (•, a + ib)) n≥1 of (4.4).
Proposition 5.4. For a fixed $T > 0$, there exists $C > 0$ such that
$$\sup_{n\ge 1} \sup_{t\in[0,T]} |\psi^n(t, a+ib)| \le C(1 + b^2),$$
for any a ∈ [0, 1] and b ∈ R.
Proof. Let z = a + ib and start by noticing that (ψ n (•, z)) is non-positive because it solves the following linear Volterra equation with continuous coefficients
χ = K n * f + ρν (z) -λ + ν 2 2 (ψ n (•, z)) χ , where f = 1 2 a 2 -a -(1 -ρ 2 )b 2 - 1 2 (ρb + νψ n (•, z)) 2
is continuous non-positive, see Theorem C.1. In the same way (ψ(•, z)) is also non-positive. Moreover, observe that ψ n (•, z) solves the following linear Volterra equation with continuous coefficients
χ = K n * 1 2 (z 2 -z) + (ρνz -λ + ν 2 2 ψ n (•, z))χ , and
ρνz -λ + ν 2 2 ψ n (•, z) ≤ ν -λ. Therefore, Corollary C.4 leads to sup t∈[0,T ] |ψ n (t, z)| ≤ 1 2 |z 2 -z| T 0 E n ν-λ (s)ds,
where
E n ν-λ denotes the canonical resolvent of K n with parameter ν -λ, see Appendix A.3. This resolvent converges in L 1 ([0, T ], R) because K n converges in L 1 ([0, T ], R) to K, see [16, Theorem 2.3.1]. Hence, ( T 0 E n ν-λ (s)ds) n≥1 is bounded, which ends the proof.
End of the proof of Theorem 4.1 : Set z = a + ib and recall that
$$\psi^n(\cdot,z) = K^n * F(z, \psi^n(\cdot,z)), \qquad \psi(\cdot,z) = K * F(z, \psi(\cdot,z)),$$
with $F(z,x) = \frac{1}{2}(z^2 - z) + (\rho\nu z - \lambda)x + \frac{\nu^2}{2}x^2$. Hence, for $t \in [0,T]$,
$$\psi(t,z) - \psi^n(t,z) = h^n(t,z) + \Big( K * \big( F(z, \psi(\cdot,z)) - F(z, \psi^n(\cdot,z)) \big) \Big)(t), \quad \text{with } h^n(\cdot,z) = (K^n - K) * F(z, \psi^n(\cdot,z)).$$
Thanks to Proposition 5.4, we get the existence of a positive constant $C$ such that
$$|h^n(t, a+ib)| \le C(1 + b^4) \int_0^T |K^n(s) - K(s)|\, ds, \qquad (5.12)$$
for any b ∈ R and a ∈ [0, 1]. Moreover notice that (ψ -ψ n -h n )(•, z) is solution of the following linear Volterra equation with continuous coefficients
χ = K * ρνz -λ + ν 2 2 (ψ + ψ n )(•, z) (χ + h n (•, z)) ,
and remark that the real part of ρνz -λ + ν 2 2 (ψ + ψ n )(•, z) is dominated by ν -λ because (ψ(•, z)) and (ψ n (•, z)) are non-positive. An application of Corollary C.4 together with (5.12) ends the proof.
Proof of Proposition 4.2
We consider for each n ≥ 1, (S n , V n ) defined by the multi-factor Heston model in Definition 3.2 with σ(x) = ν √ x.
Tightness of (
• 0 V n s ds,
• 0
V n s dW s ,
• 0
V n s dB s ) n≥1 : Because the process
• 0 V n s ds is non- decreasing, it is enough to show that sup n≥1 E[ T 0 V n t dt] < ∞, (5.13)
to obtain its tightness for the uniform topology. Recalling that sup t∈
[0,T ] E[V n t ] < ∞ from Proposition B.1 in the Appendix, we get E t 0 V n s dB s = 0,
and then by Fubini theorem
E[V n t ] = g n (t) + n i=1 c n i E[V n,i t ],
with
E[V n,i t ] = t 0 (-γ n i E[V n,i s ] -λE[V n s ])ds.
Thus t → E[V n t ] solves the following linear Volterra equation
χ(t) = t 0 K n (t -s) -λχ(s) + θ(s) + V 0 s -H-1 2 Γ(1/2 -H) ds,
with K n given by (3.4). Theorem A.3 in the Appendix leads to
E[V n t ] = t 0 E n λ (t -s) θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) ds,
and then by Fubini theorem again
t 0 E[V n s ]ds = t 0 t-s 0 E n λ (u)du θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) ds,
where E n λ is the canonical resolvent of K n with parameter λ, defined in Appendix A.3. Because (K n ) n≥1 converges to the fractional kernel K in L 1 ([0, T ], R), we obtain the convergence of E n λ in L 1 ([0, T ], R) to the canonical resolvent of K with parameter λ, see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.1]. In particular thanks to Corollary C.2 in the Appendix, t 0 E n λ (s)ds is uniformly bounded in t ∈ [0, T ] and n ≥ 1. This leads to (5.13) and then to the tightness of (
• 0 V n s ds,
• 0
V n s dW s ,
• 0
V n s dB s ) n≥1 by [START_REF] Jacod | Limit theorems for stochastic processes[END_REF].
Convergence of (S n ,
• 0 V n s ds) n≥1 : We set M n,1 t = t 0 V n s dW s and M n,2 t = t 0
V n s dB s . Denote by (X, M 1 , M 2 ) a limit in law for the uniform topology of a subsequence of the tight family (
• 0 V n s ds, M n,1 , M n,2 )
n≥1 . An application of stochastic Fubini theorem, see [START_REF] Veraar | The stochastic Fubini theorem revisited[END_REF], yields
t 0 V n s ds = t 0 t-s 0 (K n (u) -K(u))dudY n s + t 0 K(t -s)Y n s ds, t ∈ [0, T ], (5.14)
where
Y n t = t 0 (s -H-1 2 V 0 Γ(1/2-H) +θ(s)-λV n s )ds+νM n,2 t . Because (Y n ) n≥1 converges in law for the uniform topology to Y = (Y t ) t≤T given by Y t = t 0 (s -H-1 2 V 0 Γ( 1 2 -H) + θ(s))ds -λX t + νM 2
t , we also get the convergence of (
• 0 K(• -s)Y n s ds) n≥1 to • 0 K(• -s)Y s ds. Moreover, for any t ∈ [0, T ], t 0 t-s 0 (K n (u) -K(u))du s -H-1 2 V 0 Γ( 1 2 -H) + θ(s) -λV n s ds is bounded by t 0 |K n (s) -K(s)|ds t 0 (s -H-1 2 V 0 Γ( 1 2 -H) + θ(s))ds + λ t 0 V n s ds ,
which converges in law for the uniform topology to zero thanks to the convergence of (
• 0 V n s ds) n≥1 together with Proposition 3.3. Finally, E t 0 t-s 0 (K n (u) -K(u))dudM n,2 s 2 ≤ c T 0 (K n (s) -K(s)) 2 dsE t 0 V n s ds ,
which goes to zero thanks to (5.13) and Proposition 3.3. Hence, by passing to the limit in (5.14), we obtain
X t = t 0 K(t -s)Y s ds,
for any t ∈ [0, T ], almost surely. The processes being continuous, the equality holds on [0, T ]. Then, by the stochastic Fubini theorem, we deduce that X = • 0 V s ds, where V is a continuous process defined by
V t = t 0 K(t -s)dY s = V 0 + t 0 K(t -s)(θ(s) -λV s )ds + ν t 0 K(t -s)dM 2 s .
Furthermore because (M n,1 , M n,2 ) is a martingale with bracket
• 0 V n s ds 1 ρ ρ 1 ,
[18, Theorem VI-6.26] implies that (M 1 , M 2 ) is a local martingale with the following bracket
• 0 V s ds 1 ρ ρ 1 .
By [23, Theorem V-3.9], there exists a two-dimensional Brownian motion ( W , B) with d W , B t = ρdt such that
M 1 t = t 0 V s d W s , M 2 t = t 0 V s d B s , t ∈ [0, T ].
In particular V is solution of the fractional stochastic integral equation in Definition 2.1 with
σ(x) = ν √ x. Because S n = exp(M n,1 -1 2 • 0 V n s ds),
we deduce the convergence of (S n ,
• 0 V n s ds) n≥1 to the limit point (S,
• 0 V s ds) that displays the rough-Heston dynamics of Definition 2.1. The uniqueness of such dynamics, see [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF][START_REF] Mytnik | Uniqueness for Volterra-type stochastic integral equations[END_REF], enables us to conclude that (S n , V n ) n≥1 admits a unique limit point and hence converges to the rough Heston dynamics.
Proof of Proposition 4.3
We use the Lewis Fourier inversion method, see [START_REF] Lewis | A simple option formula for general jump-diffusion and other exponential lévy processes[END_REF], to write
C n (k, T ) -C(k, T ) = S 0 e k 2 2π b∈R e -ibk b 2 + 1 4 L(T, 1 2 + ib) -L n (T, 1 2 + ib) db.
Hence,
|C n (k, T ) -C(k, T )| ≤ S 0 e k 2 2π b∈R 1 b 2 + 1 4 L(T, 1 2 + ib) -L n (T, 1 2 + ib) db.
(5.15)
Because L(T, z) and L n (T, z) satisfy respectively the formulas (4.1) and (4.3) with g and g n given by
g(t) = t 0 K(t-s) V 0 s -H-1 2 Γ(1/2 -H) +θ(s) ds, g n (t) = t 0 K n (t-s) V 0 s -H-1 2 Γ(1/2 -H) +θ(s) ds,
and ψ(•, z) and ψ n (•, z) solve respectively (4.2) and (4.4), we use the Fubini theorem to deduce that
L(T, z) = exp T 0 ψ(T -s, z) V 0 s -H-1 2 Γ(1/2 -H) + θ(s) ds , (5.16)
and
L n (T, z) = exp T 0 ψ n (T -s, z) V 0 s -H-1 2 Γ(1/2 -H) + θ(s) ds , (5.17)
with z = 1/2 + ib. Therefore, relying on the local Lipschitz property of the exponential function, it suffices to find an upper bound for (ψ n (•, z)) in order to get an error for the price of the call from (5.15). This is the object of the next paragraph.
Upper bound of (ψ n (•, z)) : We denote by φ n η (•, b) the unique continuous function satisfying the following Riccati Volterra equation
φ n η (•, b) = K n * -b + ηφ n η (•, b) + ν 2 2 φ n η (•, b) 2 ,
with b ≥ 0 and η, ν ∈ R.
Proposition 5.5. Fix b 0 , t 0 ≥ 0 and η ∈ R. The functions b → φ n η (t 0 , b) and t → φ n η (t, b 0 ) are non-increasing on R + . Furthermore
φ n η (t, b) ≤ 1 -1 + 2bν 2 ( t 0 E n η (s)ds) 2 ν 2 t 0 E n η (s)ds , t > 0,
where E n η is the canonical resolvent of K n with parameter η defined in Appendix A.
∆ h φ n η (b 0 , t) = ∆ t K n * F (φ n η (•, b 0 )) (h) + K n * F (∆ h φ n η (•, b 0 )) (t) with F (b, x) = -b + ηx + ν 2 2 x 2 . Notice that t → -∆ t K n * F (φ n η (•, b 0 )) (h) ∈ G K , defined in Appendix C, thanks to Theorem C.1. φ n η (•, b) -∆ h φ n η (•, b
) being solution of the following linear Volterra integral equation with continuous coefficients,
x(t) = -∆ t K n * F (b, φ n η (•, b 0 )) (h) + K n * η + ν 2 2 (φ n η (•, b) + ∆ h φ n η (•, b)) x (
φ n η (t, b) = t 0 E n η (t -s)(-b + ν 2 2 φ n η (s, b) 2 ) ≤ t 0 E n η (s)ds -b + ν 2 2 φ n η (t, b) 2 .
We end the proof by solving this inequality of second order in φ n η (t, b) and using that φ n η is non-positive. Notice that t 0 E n η (s)ds > 0 for each t > 0, see Corollary C.2.
Corollary 5.6. Fix a ∈ [0, 1]. We have, for any t ∈ (0, T ] and b ∈ R,
sup n≥1 (ψ n (t, a + ib)) ≤ 1 -1 + (a -a 2 + (1 -ρ 2 )b 2 )ν 2 m(t) 2 ν 2 m(t)
where m(t) = inf n≥1 t 0 E n ρνa-λ (s)ds > 0 for all t ∈ (0, T ] and E n η is the canonical resolvent of K n with parameter η defined in Appendix A.
χ = K * 1 2 (ρb + ν (ψ n (•, a + ib))) 2 + ρνa -λ + ν 2 2 ( (ψ n (•, a + ib)) + φ η (•, r)) χ ,
we use Theorem C.1 together with Proposition 5.5 to get, for all t ∈ [0, T ] and b ∈ R,
(ψ n (t, a + ib)) ≤ 1 -1 + 2rν 2 ( t 0 E n η (s)ds) 2 ν 2 t 0 E n η (s)ds . ( 5.18)
Moreover for any t ∈ [0, T ],
t 0 E n η (s)ds converges as n goes to infinity to t 0 E η (s)ds because K n converges to K in L 1 ([0, T ], R), see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.1], where E η denotes the canonical resolvent of K with parameter η. Therefore, m(t) = inf n≥1 t 0 E n η (s)ds > 0, for all t ∈ (0, T ], because t 0 E η (s)ds > 0 and t 0 E n η (s)ds > 0 for all n ≥ 1, see Corollary C.2. Finally we end the proof by using (5.18) together with the fact that x → 1-
√ 1+2rν 2 x 2 ν 2 x
is non-increasing on (0, ∞).
End of the proof of Proposition 4.3 : Assume that |ρ| < 1 and fix a = 1/2. By dominated convergence theorem,
T 0 1 -1 + (a -a 2 + (1 -ρ 2 )b 2 )ν 2 m(T -s) 2 ν 2 m(T -s) (θ(s) + V 0 s -H-1 2 Γ( 1 2 -H) )ds is equivalent to -|b| 1 -ρ 2 ν T 0 (θ(s) + V 0 s -H-1 2 Γ(
A.4. Let x, f ∈ L 1 loc (R + , R) such that x(t) ≤ (λK * x)(t) + f (t), t ≥ 0, a.e.
Then,
x(t) ≤ f (t) + (λE λ * f )(t), t ≥ 0, a.e.
Note that the definition of the resolvent of the second kind and canonical resolvent can be extended for matrix-valued kernels. In that case, Theorem A.3 still holds.
Remark A.5. The canonical resolvent of the fractional kernel
K(t) = t H-1 2 Γ(H+1/2) with param- eter λ is given by t α-1 E α (-λt α ),
where E α (x) =
B Some existence results for stochastic Volterra equations
We collect in this Appendix existence results for general stochastic Volterra equations as introduced in [START_REF] Jaber | Affine Volterra processes[END_REF]. We refer to [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF][START_REF] Jaber | Affine Volterra processes[END_REF] for the proofs. We fix T > 0 and consider the ddimensional stochastic Volterra equation
X t = g(t) + t 0 K(t -s)b(X s )ds + t 0 K(t -s)σ(X s )dB s , t ∈ [0, T ], (B.1)
where b : R d → R d , σ : R d → R d×m are continuous functions with linear growth, K ∈ L 2 ([0, T ], R d×d ) is a kernel admitting a resolvent of the first kind L, g : [0, T ] → R d is a continuous function and B is a m-dimensional Brownian motion on a filtered probability space (Ω, F, F, P). In order to prove the weak existence of continuous solutions to (B.1), the following regularity assumption is needed.
Assumption B.1. There exists γ > 0 and C > 0 such that for any t, h ≥ 0 with t + h ≤ T ,
|g(t + h) -g(t)| 2 + h 0 |K(s)| 2 ds + T -h 0 |K(h + s) -K(s)| 2 ds ≤ Ch 2γ .
The following existence result can be found in [1, Theorem A.1].

Proposition B.1. Under Assumption B.1, the stochastic Volterra equation (B.1) admits a weak continuous solution $X = (X_t)_{t\le T}$. Moreover, $X$ satisfies
$$\sup_{t\in[0,T]} \mathbb{E}[|X_t|^p] < \infty, \quad p > 0, \qquad (B.2)$$
and admits Hölder continuous paths on [0, T ] of any order strictly less than γ.
In particular, for the fractional kernel, Proposition B.1 yields the following result.
Corollary B.2. Fix H ∈ (0, 1/2) and θ : [0, T ] → R satisfying ∀ε > 0, ∃C ε > 0; ∀u ∈ (0, T ] |θ(u)| ≤ C ε u -1 2 -ε .
The fractional stochastic integral equation
X t = X 0 + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 (θ(u) + b(X u ))du + 1 Γ(H + 1/2) t 0 (t -u) H-1 2 σ(X u )dB u ,
admits a weak continuous solution X = (X t ) t≤T for any X 0 ∈ R. Moreover X satisfies (B.2) and admits Hölder continuous paths on [0, T ] of any order strictly less than H.
Proof. It is enough to notice that the fractional stochastic integral equation is a particular case of (B.1)
with d = m = 1, K(t) = t H-1 2
Γ(H+1/2) the fractional kernel, which admits a resolvent of the first kind, see Section A.2, and
g(t) = X 0 + 1 Γ(1/2 + H) t 0 (t -u) H-1/2 θ(u)du.
As t → t 1/2+ε θ(t) is bounded on [0, T ], we may show that g is H -ε Hölder continuous for any ε > 0. Hence, Assumption B.1 is satisfied and the claimed result is directly obtained from Proposition B.1.
We now establish the strong existence and uniqueness of (B.1) in the particular case of smooth kernels. This is done by extending the Yamada-Watanabe pathwise uniqueness proof in [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF].
Proposition B.3. Fix m = d = 1 and assume that g is Hölder continuous, K ∈ C 1 ([0, T ], R) admitting a resolvent of the first kind and that there exists C > 0 and η ∈ [1/2, 1] such that for any x, y ∈ R,
|b(x) -b(y)| ≤ C|x -y|, |σ(x) -σ(y)| ≤ C|x -y| η .
Then, the stochastic Volterra equation (B.1) admits a unique strong continuous solution.
Proof. We start by noticing that, K being smooth, it satisfies Assumption B.1. Hence, the existence of a weak continuous solution to (B.1) follows from Proposition B.1. It is therefore enough to show the pathwise uniqueness. We may proceed similarly to [START_REF] Yamada | On the uniqueness of solutions of stochastic differential equations[END_REF] by considering a
0 = 1, a k-1 > a k for k ≥ 1 with a k-1 a k x -2η dx = k and ϕ k ∈ C 2 (R, R) such that ϕ k (x) = ϕ k (-x), ϕ k (0) = 0 and for x > 0 • ϕ k (x) = 0 for x ≤ a k , ϕ k (x) = 1 for x ≥ a k-1 and ϕ k (x) ∈ [0, 1] for a k < x < a k-1 . • ϕ k (x) ∈ [0, 2 k x -2η ] for a k < x < a k-1 .
Let X 1 and X 2 be two solutions of (B.1) driven by the same Brownian motion B. Notice that, thanks to the smoothness of K, X i -g are semimartingales and for i = 1, 2
d(X i t -g(t)) = K(0)dY i t + (K * dY i ) t dt, with Y i t = t 0 b(X i s )ds + t 0 σ(X i s )dB s . Using Itô's formula, we write ϕ k (X 2 t -X 1 t ) = I 1 t + I 2 t + I 3 t ,
where
I 1 t = K(0) t 0 ϕ k (X 2 s -X 1 s )d(Y 1 s -Y 2 s ), I 2 t = t 0 ϕ k (X 2 s -X 1 s )(K * d(Y 1 -Y 2 )) s ds, I 3 t = K(0) 2 2 t 0 ϕ k (X 2 s -X 1 s )(σ(X 2 s ) -σ(X 1 s )) 2 ds. Recalling that sup t≤T E[(X i t ) 2 ] < ∞ for i = 1, 2 from Proposition B.1, we obtain that E[I 1 t ] ≤ E[K(0) t 0 |b(X 2 s ) -b(X 1 s )|ds] ≤ c t 0 E[|X 2 s -X 1 s |]ds, and
E[I 2 t ] ≤ c t 0 E[(|K | * |b(X 2 ) -b(X 1 )|) s ]ds ≤ c t 0 E[|X 2 s -X 1 s |]
E[I 3 t ] ≤ c k , which goes to zero when k is large. Moreover E[ϕ k (X 2 t -X 1 t )] converges to E[|X 2 t -X 1 t |]
when k tends to infinity, thanks to the monotone convergence theorem. Thus, we pass to the limit and obtain
E[|X 2 t -X 1 t |] ≤ c t 0 E[|X 2 s -X 1 s |]ds. Grönwall's lemma leads to E[|X 2 t -X 1 t |] = 0 yielding the claimed pathwise uniqueness.
Under additional conditions on g and K one can obtain the existence of non-negative solutions to (B.1) in the case of d = m = 1. As in [2, Theorem 3.5], the following assumption is needed.
Assumption B.2. We assume that K ∈ L 2 ([0, T ], R) is non-negative, non-increasing and continuous on (0, T ]. We also assume that its resolvent of the first kind L is non-negative and non-increasing in the sense that 0 ≤ L([s, s + t]) ≤ L([0, t]) for all s, t ≥ 0 with s + t ≤ T . admits a unique continuous solution χ. Furthermore if g ∈ G K and w is non-negative, then χ is non-negative and ∆ t 0 χ = g t 0 + K * (∆ t 0 z∆ t 0 χ + ∆ t 0 w) with g t 0 (t) = ∆ t 0 g(t) + (∆ t K * (zχ + w))(t 0 ) ∈ G K , for all for t 0 , t ≥ 0.
Proof. The existence and uniqueness of such solution in χ ∈ L 1 loc (R + , R) is obtained from [2, Lemma C.1]. Because χ is solution of (C.1), it is enough to show the local boundedness of χ to get its continuity. This follows from Grönwall's Lemma A.4 applied on the following inequality |χ(t)| ≤ g ∞,T + (K * ( z ∞,T |χ|(.) + w ∞,T )) (t), for any t ∈ [0, T ] and for a fixed T > 0.
We assume now that g ∈ G K and w is non-negative. The fact that g t 0 ∈ G K , for t 0 ≥ 0, is proved by adapting the computations of the proof of [1, Theorem 3.1] with ν = 0 provided that χ is non-negative. In order to establish the non-negativity of χ, we introduce, for each ε > 0, χ ε as the unique continuous solution of
χ ε = g + K * (zχ ε + w + ε) . (C.2)
It is enough to prove that χ ε is non-negative, for every ε > 0, and that (χ ε ) ε>0 converges uniformly on every compact to χ as ε goes to zero.
Positivity of χ ε : It is easy to see that χ ε is non-negative on a neighborhood of zero because, for small t, χ ε (t) = g(t) + (z(0)g(0) + w(0) + ε)
t 0 K(s)ds + o( t 0 K(s)ds),
as χ, z and w are continuous functions. Hence, t 0 = inf{t > 0; χ ε (t) < 0} is positive. If we assume that t 0 < ∞, we get χ ε (t 0 ) = 0 by continuity of χ ε . χ ε being the solution of (C.2), we have ∆ t 0 χ ε = g t 0 ,ε + K * (∆ t 0 z∆ t 0 χ ε + ∆ t 0 w + ε), with g t 0 ,ε (t) = ∆ t 0 g(t)+(∆ t K * (zχ ε +w +ε))(t 0 ). Then, by using Lemma A.1 with F = ∆ t K, we obtain
g t 0 ,ε (t) = ∆ t 0 g(t) -(d(∆ t K * L) * g)(t 0 ) -(∆ t K * L)(0)g(t 0 ) + (d(∆ t K * L) * χ ε )(t 0 ) + (∆ t K * L)(0)χ ε (t 0 ),
which is continuous and non-negative, because g ∈ G K and ∆ t K * L is non-decreasing for any t ≥ 0, see Remark A.2. Hence, in the same way, ∆ t 0 χ ε is non-negative on a neighborhood of zero. Thus t 0 = ∞, which means that χ ε is non-negative.
Uniform convergence of χ ε : We use the following inequality
|χ -χ ε |(t) ≤ (K * ( z ∞,T |χ -χ ε | + ε)) (t), t ∈ [0, T ],
together with the Gronwall Lemma A.4 to show the uniform convergence on [0, T ] of χ ε to χ as ε goes to zero. In particular, χ is also non-negative.
Corollary C.2. Let K ∈ L 2 loc (R + , R) satisfying Assumption B.2 and define E λ as the canonical resolvent of K with parameter λ ∈ R -{0}. Then, t → t 0 E λ (s)ds is non-negative and non-decreasing on R + . Furthermore t 0 E λ (s)ds is positive, if K does not vanish on [0, t] Proof. The non-negativity of χ =
• 0 E λ (s)ds is obtained from Theorem C.1 and from the fact that χ is solution of the following linear Volterra equation χ = K * (λχ + 1), by Theorem A.3. For fixed t 0 > 0, ∆ t 0 χ satisfies ∆ t 0 χ = g t 0 + K * (λ∆ t 0 χ + 1), with g t 0 (t) = ∆ t K * (λ∆ t 0 χ + 1) (t 0 ) ∈ G K , see Theorem C.1. It follows that ∆ t 0 χ -χ solves x = g t 0 + K * (λx).
Hence, another application of Theorem C.1 yields that χ ≤ ∆ t 0 χ, proving that t → As done in the proof of Theorem C.1, ψ ε converges uniformly on every compact to ψ as ε goes to zero. Thus, it is enough to show that, for every ε > 0 and t ≥ 0,
|h(t)| ≤ ψ ε (t).
for small t, thanks to the continuity of z, w, h, φ h , φ ψε and ψ ε . In both cases, we obtain that |h| ≤ ψ ε on a neighborhood of t 0 . Therefore t 0 = ∞ and for any t ≥ 0
|h(t)| ≤ ψ ε (t).
The following result is a direct consequence of Theorems C.
Figure 1: The relative error $\frac{\psi^n(T,ib)-\psi(T,ib)}{\psi(T,ib)}$ as a function of $b$ under (4.6) and for different numbers of factors $n$, with $T = 1$.
Figure 2: Implied volatility $\sigma_n(k,T)$ as a function of the log-moneyness $k$ under (4.6) and for different numbers of factors $n$, with $T = 1$.
Figure 3: The relative error $\frac{\psi^n(T,ib)-\psi(T,ib)}{\psi(T,ib)}$ as a function of $b$ under (4.7) and for different numbers of factors $n$, with $T = 1$.
Figure 4: Implied volatility $\sigma_n(k,T)$ as a function of the log-moneyness $k$ under (4.7) and for different numbers of factors $n$, with $T = 1$.
Figure 5: The relative error $\frac{\psi^n(T,ib)-\psi(T,ib)}{\psi(T,ib)}$ as a function of $b$ under (4.8) and for different numbers of factors $n$, with $T = 1$.
Figure 6: Implied volatility $\sigma_n(k,T)$ as a function of the log-moneyness $k$ under (4.8) and for different numbers of factors $n$, with $T = 1$.
5 2
5 -ε , yielding (5.10). Similarly, we obtain(5.11) by showing that t 0
3 .
3 Proof. The claimed monotonicity of b → φ n η (t 0 , b) is directly obtained from Theorem C.1. Consider now h, b 0 > 0. It is easy to see that ∆ h φ n η (•, b 0 ) solves the following Volterra equation
t), we deduce its non-negativity using again Theorem C.1. Thus, t ∈ R + → φ n η (t, b 0 ) is nonincreasing and consequently sup s∈[0,t] |φ η (s, b)| = |φ n η (t, b 0 )| as φ n η (0, b) = 0. Hence, Theorem A.3 leads to
3 .
3 Proof. Let r = a -a 2 + (1 -ρ 2 )b 2 and η = ρνa -λ. φ n η (•, r) -(ψ n (•, a + ib))being solution of the following linear Volterra equation with continuous coefficients
k+1)) is the Mittag-Leffler function and α = H + 1/2 for H ∈ (0, 1/2).
ds, because b is Lipschitz continuous and K is bounded on [0, T ]. Finally by definition of ϕ k and the η-Hölder continuity of σ, we have
Theorem C. 1 .
1 Let K ∈ L 2 loc (R + , R) satisfying Assumption B.2 and g, z, w : R + → R be continuous functions. The linear Volterra equation χ = g + K * (zχ + w) (C.1)
t 0 E
0 λ (s)ds is non-decreasing. We now provide a version of Theorem C.1 for complex valued solutions. Theorem C.3. Let z, w : R + → C be continuous functions and h 0 ∈ C. The following linear Volterra equation h = h 0 + K * (zh + w) admits unique continuous solution h : R + → C such that |h(t)| ≤ ψ(t), t ≥ 0, where ψ : R + → R is the unique continuous solution of ψ = |h 0 | + K * ( (z)ψ + |w|). Proof. The existence and uniqueness of a continuous solution is obtained in the same way as in the proof of Theorem C.1. Consider now, for each ε > 0, ψ ε the unique continuous solution of ψ ε = |h 0 | + K * ( (z)ψ + |w| + ε).
1 and C. 3 .Corollary C. 4 . 0 E
340 Let h 0 ∈ C and z, w : R + → C be continuous functions such that (z) ≤ λ for some λ ∈ R. We define h : R + → C as the unique continuous solution ofh = h 0 + K * (zh + w).Then, for any t ∈ [0, T ],|h(t)| ≤ |h 0 | + ( w ∞,T + λ|h 0 |) T 0 E λ (s)ds,where E λ is the canonical resolvent of K with parameter λ.Proof. From Theorem C.3, we obtain that |h| ≤ ψ 1 , where ψ 1 is the unique continuous solution ofψ 1 = |h 0 | + K * ( (z)ψ 1 + |w|).Moreover define ψ 2 as the unique continuous solution ofψ 2 = |h 0 | + K * (λψ 2 + w ∞,T ).Then,ψ 2 -ψ 1 solves χ = K * (λχ + f ), with f = (λ -(z))ψ 1 + w ∞,T -w, which is a non-negative function on [0, T ]. Theorem C.1 now yields |h| ≤ ψ 1 ≤ ψ 2 .Finally, the claimed bound follows by noticing that, for t ∈ [0, T ],ψ 2 (t) = |h 0 | + ( w ∞,T + λ|h 0 |) t λ (s)ds, by Theorem A.3 and that • 0 E λ (s)ds is non-decreasing by Corollary C.2.
Theorem A.3. Let f ∈ L¹_loc(R₊, R). The integral equation x = f + λK ∗ x admits a unique solution x ∈ L¹_loc(R₊, R) given by x = f + λE_λ ∗ f. When K and λ are positive, E_λ is also positive, see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF] Proposition 9.8.1]. In that case, we have a Grönwall-type inequality given by [16, Lemma 9.8.2].
as b tends to infinity. Hence, thanks to Corollary 5.6, there exists C > 0 such that for any b ∈ R,
sup_{n≥1} ℜ(ψ^n(t, a + ib)) ≤ C(1 − |b|) for all t ∈ [0, T]. (5.19)
Recalling that, for all z_1, z_2 ∈ C such that ℜ(z_1), ℜ(z_2) ≤ c, |e^{z_1} − e^{z_2}| ≤ e^c |z_1 − z_2|, we obtain
|L^n(a+ib, T) − L(a+ib, T)| ≤ e^{C(1−|b|)} sup_{t∈[0,T]} |ψ^n(t, a+ib) − ψ(t, a+ib)| ∫_0^T (θ(s) + V_0 s^{−H−1/2}/Γ(1/2 − H)) ds,
from (5.16), (5.17) and (5.19). We deduce Proposition 4.3 thanks to (5.15) and Theorem 4.1
together with the fact that ∫_R ((b⁴ + 1)/(b² + 1)) e^{C(1−|b|)} db < ∞.
Proposition B.1. Under Assumption B.1, the stochastic Volterra equation (B.1) admits a weak continuous solution X = (X t ) t≤T . Moreover X satisfies
sup_{t∈[0,T]} E[|X_t|^p] < ∞ for any p > 0, see [1, Theorem A.1].
Theorem B.4 is used here with the smoothed kernel K^n given by (3.4) together with b(x) = −λx and g defined as in (3.1).
Note that Theorem B.4 is used here for the smoothed kernel K^n, b(x) = −λx and g^n defined by (4.9).
In fact, V^n_0 = 0 while V_0 may be positive.
Acknowledgments
We thank Bruno Bouchard, Christa Cuchiero, Philipp Harms and Mathieu Rosenbaum for many interesting discussions. Omar El Euch is thankful for the support of the Research Initiative "Modélisation des marchés actions, obligations et dérivés", financed by HSBC France, under the aegis of the Europlace Institute of Finance.
Appendix A Stochastic convolutions and resolvents
We recall in this Appendix the framework and notations introduced in [START_REF] Jaber | Affine Volterra processes[END_REF].
A.1 Convolution notation
For a measurable function K on R + and a measure L on R + of locally bounded variation, the convolutions K * L and L * K are defined by
(K ∗ L)(t) = ∫_{[0,t]} K(t − s) L(ds) and (L ∗ K)(t) = ∫_{[0,t]} L(ds) K(t − s), whenever these expressions are well-defined. If F is a function on R₊, we write K ∗ F = K ∗ (F dt), that is (K ∗ F)(t) = ∫_0^t K(t − s) F(s) ds.
We can show that L ∗ F is almost everywhere well-defined and belongs to L^p_loc(R₊, R) whenever F ∈ L^p_loc(R₊, R). For a continuous semimartingale M, the stochastic convolution K ∗ dM is defined by (K ∗ dM)_t = ∫_0^t K(t − s) dM_s. Finally, from Lemma 2.4 in [2] together with the Kolmogorov continuity theorem, we can show that there exists a unique version of (K ∗ dM_t)_{t≥0} that is continuous whenever b and σ are locally bounded. In this paper, we will always work with this continuous version.
Note that the convolution notation can easily be extended to matrix-valued K and L. In this case, the associativity properties stated above still hold.
A.2 Resolvent of the first kind
We define the resolvent of the first kind of a d × d-matrix-valued kernel K as the R^{d×d}-valued measure L on R₊ of locally bounded variation such that
K ∗ L = L ∗ K ≡ id,
where id stands for the identity matrix, see [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Definition 5.5.1]. The resolvent of the first kind does not always exist. In the case of the fractional kernel K(t) = t^{H−1/2}/Γ(H+1/2), the resolvent of the first kind exists and is given by
L(dt) = (t^{−H−1/2}/Γ(1/2 − H)) dt,
for any H ∈ (0, 1/2). If K is non-negative, non-increasing and not identically equal to zero on R₊, the existence of a resolvent of the first kind is guaranteed by [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 5.5.5].
The following result shown in [2, Lemma 2.6], is stated here for d = 1 but is true for any dimension d ≥ 1.
) such that F * L is right-continuous and of locally bounded variation one has
Here, df denotes the measure such that f(t) = f(0) + ∫_{[0,t]} df(s) for all t ≥ 0, for any right-continuous function f of locally bounded variation on R₊.
Remark A.2. The previous lemma will be used with F = Δ_h K for h > 0. In particular, Δ_h K ∗ L is of locally bounded variation.
A.3 Resolvent of the second kind
We consider a kernel K ∈ L¹_loc(R₊, R) and define the resolvent of the second kind of K as the unique function R_K ∈ L¹_loc(R₊, R) satisfying R_K + K ∗ R_K = R_K + R_K ∗ K = K.
For λ ∈ R, we define the canonical resolvent of K with parameter λ as the unique solution E_λ ∈ L¹_loc(R₊, R) of E_λ = K + λ K ∗ E_λ.
This means that E_λ = −R_{−λK}/λ when λ ≠ 0, and E_0 = K. The existence and uniqueness of R_K and E_λ are ensured by [16, Theorem 2.3.1] together with the continuity of
We recall [START_REF] Gripenberg | Encyclopedia of Mathematics and its Applications[END_REF]Theorem 2.3.5] regarding the existence and uniqueness of a solution of linear Volterra integral equations in L 1 loc (R + , R).
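As a concrete illustration of these objects, the following minimal Python sketch computes the canonical resolvent numerically for the fractional kernel. It assumes the defining relation E_λ = K + λ K ∗ E_λ recalled above (consistent with E_λ = −R_{−λK}/λ and Theorem A.3); the value H = 0.1, the horizon T = 1, the grid size, and the piecewise-constant discretization are illustrative choices, not part of the theory. The checks at the end mirror the positivity of E_λ for K, λ > 0 and the monotonicity of t ↦ ∫_0^t E_λ(s) ds used in the proofs above.

```python
import numpy as np
from math import gamma

H = 0.1                 # illustrative Hurst-type parameter
alpha = H + 0.5         # fractional kernel K(t) = t^(alpha-1) / Gamma(alpha)

def canonical_resolvent(lam, T=1.0, n=500):
    """Numerically solve E_lam = K + lam * (K * E_lam) on a uniform grid.

    E_lam is treated as piecewise constant on each sub-interval and the
    fractional kernel is integrated exactly there, so the singularity of K
    at 0 is never evaluated.  Returns the grid points t_i and E_lam(t_i).
    """
    dt = T / n
    t = dt * np.arange(1, n + 1)
    K_t = t ** (alpha - 1.0) / gamma(alpha)
    E = np.zeros(n)
    for i in range(1, n + 1):            # solve at t_i = i*dt
        conv = 0.0
        for j in range(i - 1):           # sub-interval [j*dt, (j+1)*dt), value E[j]
            w = ((t[i-1] - j*dt) ** alpha - (t[i-1] - (j+1)*dt) ** alpha) / gamma(alpha + 1.0)
            conv += w * E[j]
        w_last = dt ** alpha / gamma(alpha + 1.0)   # weight of the last sub-interval
        E[i-1] = (K_t[i-1] + lam * conv) / (1.0 - lam * w_last)
    return t, E

t, E = canonical_resolvent(lam=2.0)
I = np.cumsum(E) * (t[1] - t[0])         # approximates int_0^t E_lam(s) ds
assert np.all(E > 0.0)                   # E_lam > 0 when K and lam are positive
assert np.all(np.diff(I) >= -1e-12)      # t -> int_0^t E_lam(s) ds is non-decreasing
print(f"int_0^T E_lam(s) ds ~ {I[-1]:.4f}")
```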
In [START_REF] Jaber | Markovian structure of the Volterra Heston model[END_REF], the proof of [2, Theorem 3.5] is adapted to prove the existence of a non-negative solution for a wide class of admissible input curves g satisfying 6
We therefore define the following set of admissible input curves G_K.
Remark B.5. Note that any locally square-integrable completely monotone kernel 7 that is not identically zero satisfies Assumption B.2, see [START_REF] Jaber | Affine Volterra processes[END_REF]Example 3.6]. In particular, this is the case for
• the fractional kernel K(t) = t^{H−1/2}/Γ(H + 1/2), with H ∈ (0, 1/2);
• any weighted sum of exponentials K(t) = Σ_{i=1}^n c_i e^{−γ_i t} such that c_i, γ_i ≥ 0 for all i ∈ {1, . . . , n} and c_i > 0 for some i (see the numerical sketch below).
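Kernels of the second type are what underlie multi-factor approximations of the fractional kernel. As a rough illustration only (not the specific scheme defined by (3.4)), the following Python sketch fits a weighted sum of exponentials with non-negative coefficients to the fractional kernel on a grid away from its singularity; the grid of decay rates and the fitting interval are arbitrary choices made for the demonstration.

```python
import numpy as np
from math import gamma
from scipy.optimize import nnls

H = 0.1
alpha = H + 0.5

# Target: fractional kernel on a grid away from its singularity at 0.
t = np.linspace(0.01, 1.0, 200)
K = t ** (alpha - 1.0) / gamma(alpha)

# Fixed, geometrically spaced decay rates gamma_i (an illustrative choice);
# the weights c_i >= 0 come from non-negative least squares, so the fitted
# kernel sum_i c_i * exp(-gamma_i * t) is itself completely monotone.
rates = np.geomspace(0.1, 1e3, 20)
A = np.exp(-np.outer(t, rates))
c, _ = nnls(A, K)

K_fit = A @ c
rel_err = np.max(np.abs(K_fit - K) / K)
print(f"{np.count_nonzero(c)} active factors, max relative error {rel_err:.2%} on [0.01, 1]")
```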
Remark B.6. Theorem B.4 will be used with functions g of the following form
where ξ is a non-negative measure of locally bounded variation and c is a non-negative constant. In that case, we may show that (B.3) is satisfied, under Assumption B.2.
C Linear Volterra equation with continuous coefficients
In this section, we consider K ∈ L 2 loc (R + , R) satisfying Assumption B.2 with T = ∞ and recall the definition of G K , that is
We denote by ‖·‖_{∞,T} the usual uniform norm on [0, T], for each T > 0.
6 Under Assumption B.2 one can show that Δ_h K ∗ L is non-increasing and right-continuous thanks to Remark A.2, so that the associated measure d(Δ_h K ∗ L) is well-defined.
7 A kernel K ∈ L²_loc(R₊, R) is said to be completely monotone if it is infinitely differentiable on (0, ∞) and (−1)^j K^(j)(t) ≥ 0 for any t > 0 and j ≥ 0.
We start by showing the inequality in a neighborhood of zero. Because z, h, w and ψ ε are continuous, we get, taking h 0 = 0,
for small t. Hence, |h| ≤ ψ ε on a neighborhood of zero. This result still holds when h 0 is not zero. Indeed in that case, it is easy to show that for t going to zero,
and
As |h 0 | is now positive, we conclude that |h| ≤ ψ ε on a neighborhood of zero by the Cauchy-Schwarz inequality.
Hence, t 0 = inf{t > 0; ψ ε (t) < |h(t)|} is positive. If we assume that t 0 < ∞, we would get that |h(t 0 )| = ψ ε (t 0 ) by continuity of h and ψ ε . Moreover,
An application of Lemma A.1 with F = ∆ t K for t > 0, yields
Relying on the fact that d(∆ t K * L) is a non-negative measure and ∆ t K * L ≤ 1, by Remark A.2, together with the fact that |h(s)| ≤ ψ ε (s) for s ≤ t 0 , we get that |φ h (t)| ≤ φ ψε (t). We now notice that in the case h(t 0 ) = 0, we have |
01761092 | en | [ "chim.theo", "phys.phys.phys-chem-ph", "phys.phys.phys-comp-ph" ] | 2024/03/05 22:32:13 | 2018 | https://hal.sorbonne-universite.fr/hal-01761092/file/RSDH.pdf | Cairedine Kalai
Julien Toulouse
A general range-separated double-hybrid density-functional theory
A range-separated double-hybrid (RSDH) scheme which generalizes the usual range-separated hybrids and double hybrids is developed. This scheme consistently uses a two-parameter Coulomb-attenuating-method (CAM)-like decomposition of the electron-electron interaction for both exchange and correlation in order to combine Hartree-Fock exchange and second-order Møller-Plesset (MP2) correlation with a density functional. The RSDH scheme relies on an exact theory which is presented in some detail. Several semi-local approximations are developed for the short-range exchange-correlation density functional involved in this scheme. After finding optimal values for the two parameters of the CAM-like decomposition, the RSDH scheme is shown to have a relatively small basis dependence and to provide atomization energies, reaction barrier heights, and weak intermolecular interactions globally more accurate than or comparable to those of range-separated MP2 or standard MP2. The RSDH scheme represents a new family of double hybrids with minimal empiricism which could be useful for general chemical applications.
I. INTRODUCTION
Over the past two decades, density-functional theory (DFT) [1] within the Kohn-Sham (KS) scheme [2] has been a method of choice to study ground-state properties of electronic systems. KS DFT is formally exact, but it involves the so-called exchange-correlation energy functional whose explicit form in terms of the electron density is still unknown. Hence, families of approximations to this quantity have been developed: semi-local approximations (local-density approximation (LDA), generalized-gradient approximations (GGAs) and meta-GGAs), hybrid approximations, and approximations depending on virtual orbitals (see, e.g., Ref. 3
for a recent review).
This last family of approximations includes approaches combining semi-local density-functional approximations (DFAs) with Hartree-Fock (HF) exchange and second-order Møller-Plesset (MP2) correlation, either based on a range separation of the electron-electron interaction or a linear separation. In the range-separated hybrid (RSH) variant, the Coulomb electron-electron interaction w ee (r 12 ) = 1/r 12 is decomposed as [START_REF] Savin | Recent Developments of Modern Density Functional Theory[END_REF][START_REF] Toulouse | [END_REF] w ee (r 12 ) = w lr,µ ee (r 12 ) + w sr,µ ee (r 12 ),
where w lr,µ ee (r 12 ) = erf(µr)/r 12 is a long-range interaction (written with the error function erf) and w sr,µ ee (r 12 ) = erfc(µr)/r 12 is the complementary short-range interaction (written with the complementary error function erfc), the decomposition being controlled by the parameter µ (0 ≤ µ < ∞). HF exchange and MP2 correlation can then be used for the long-range part of the energy, while a semi-local exchange-correlation DFA is used for * Electronic address: toulouse@lct.jussieu.fr the complementary short-range part, resulting in a method that is denoted by RSH+MP2 [6]. Among the main advantages of such an approach are the explicit description of van der Waals dispersion interactions via the long-range MP2 part (see, e.g., Ref. 7) and the fast (exponential) convergence of the long-range MP2 correlation energy with respect to the size of the one-electron basis set [8]. On the other hand, the short-range exchangecorrelation DFAs used still exhibit significant errors, such as self-interaction errors [9], limiting the accuracy for the calculations of atomization energies or non-covalent electrostatic interactions for example.
Similarly, the double-hybrid (DH) variant [10] (see Ref. 11 for a review) for combining MP2 and a semi-local DFA can be considered as corresponding to a linear separation of the Coulomb electronelectron interaction [12] w ee (r 12 ) = λw ee (r 12 ) + (1 -λ)w ee (r 12 ), (2) where λ is a parameter (0 ≤ λ ≤ 1). If HF exchange and MP2 correlation is used for the part of the energy associated with the interaction λw ee (r 12 ) and a semi-local exchange-correlation DFA is used for the complementary part, then a one-parameter version of the DH approximations is obtained [12]. One of the main advantages of the DH approximations is their quite efficient reduction of the self-interaction error [13] thanks to their large fraction of HF exchange (λ ≈ 0.5 or more). On the other hand, they inherit (a fraction of) the slow (polynomial) basis convergence of standard MP2 [14], and they are insufficiently accurate for the description of van der Waals dispersion interactions and need the addition of semiempirical dispersion corrections [15]. In this work, we consider range-separated double-hybrid (RSDH) [START_REF]The term "range-separated double hybrid (RSDH)" has already been used in Ref. 92[END_REF] approximations which combine the two above-mentioned approaches, based on the following decomposition of the Coulomb electron-electron interaction w ee (r 12 ) = w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) +(1 -λ)w sr,µ ee (r 12 ),
where, again, the energy corresponding to the first part of the interaction (in square brackets) is calculated with HF exchange and MP2 correlation, and the complementary part is treated by a semi-local exchange-correlation DFA. The expected features of such an approach are the explicit description of van der Waals dispersion interactions through the long-range part, and reduced self-interaction errors in the short-range part (and thus improved calculations of properties such as atomization energies).
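As a small numerical illustration of the decompositions in Eqs. (1) and (3), the following Python sketch checks that the erf/erfc split recombines into the full Coulomb interaction and that the wave-function-treated part w_lr^µ + λ w_sr^µ plus the DFA-treated part (1 − λ) w_sr^µ does as well. The values µ = 0.5 and λ = 0.6 are merely illustrative (of the order of the values optimized later in this work).

```python
import numpy as np
from scipy.special import erf, erfc

def w_lr(r12, mu):
    """Long-range part erf(mu*r12)/r12 of the Coulomb interaction."""
    return erf(mu * r12) / r12

def w_sr(r12, mu):
    """Short-range complement erfc(mu*r12)/r12."""
    return erfc(mu * r12) / r12

r12 = np.linspace(0.05, 10.0, 500)
mu, lam = 0.5, 0.6   # illustrative range-separation parameter and short-range fraction

# Eq. (1): the two ranges recombine into the full Coulomb interaction 1/r12.
assert np.allclose(w_lr(r12, mu) + w_sr(r12, mu), 1.0 / r12)

# Eq. (3): the part treated by HF exchange + MP2 correlation plus the part
# treated by the short-range density functional also recombine into 1/r12.
wf_part  = w_lr(r12, mu) + lam * w_sr(r12, mu)
dft_part = (1.0 - lam) * w_sr(r12, mu)
assert np.allclose(wf_part + dft_part, 1.0 / r12)

# The long-range part is finite at coalescence: erf(mu*r)/r -> 2*mu/sqrt(pi) as r -> 0.
print(w_lr(np.array([1e-8]), mu)[0], 2 * mu / np.sqrt(np.pi))
```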
The decomposition of Eq. ( 3) is in fact a special case of the decomposition used in the Coulombattenuating method (CAM) [START_REF] Yanai | [END_REF] w ee (r 12 ) = (α + β)w lr,µ ee (r 12 ) + αw sr,µ ee (r 12 ) + (1 -α -β)w lr,µ ee (r 12 ) + (1 -α)w sr,µ ee (r 12 ) , (4) with the parameters α + β = 1 and α = λ. The choice α+β = 1 is known to be appropriate for Rydberg and charge-transfer excitation energies [18] and for reaction-barrier heights [19]. We also expect it to be appropriate for the description of longrange van der Waals dispersion interactions. It should be noted that the CAM decomposition has been introduced in Ref. 17 at the exchange level only, i.e. for combining HF exchange with a semilocal exchange DFA without modifying the semilocal correlation DFA. Only recently, Cornaton and Fromager [20] (see also Ref. 21) pointed out the possibility of a CAM double-hybrid approximation but remarked that the inclusion of a fraction of short-range electron-electron interaction in the MP2 part would limit the basis convergence, and preferred to develop an alternative approach which uses only the perturbation expansion of a longrange interacting wave function. Despite the expected slower basis convergence of DH approximations based on the decomposition in Eq. (3) [or in Eq. ( 4)] in comparison to the RSH+MP2 method based on the decomposition in Eq. ( 1), we still believe it worthwhile to explore this kind of DH approximations in light of the above-mentioned expected advantages. In fact, we will show that the basis convergence of the RSDH approximations is relatively fast, owing to the inclusion of a modest fraction of short-range MP2 correlation.
The decomposition in Eq. ( 3) has been used several times at the exchange level [22][23][24][25][26][27][28][29][30][31][32][33]. A few DH approximations including either long-range exchange or long-range correlation terms have been proposed. The ωB97X-2 approximation [34] adds a full-range MP2 correlation term to a hybrid approximation including long-range HF exchange. The B2-P3LYP approximation [35] and the lrc-XYG3 approximation [36] add a long-range MP2 correlation correction to a standard DH approximation including full-range HF exchange. Only in Ref. 37 the decomposition in Eq. ( 3) was consistently used at the exchange and correlation level, combining a pair coupled-cluster doubles approximation with a semi-local exchangecorrelation DFA, in the goal of describing static correlation. However, the formulation of the exact theory based on the decomposition in Eq. ( 3), as well as the performance of the MP2 and semi-local DFAs in this context, have not been explored. This is what we undertake in the present work.
The paper is organized as follows. In Section II, the theory underlying the RSDH approximations is presented, and approximations for the corresponding short-range correlation density functional are developed. Computational details are given in Section III. In Section IV, we give and discuss the results, concerning the optimization of the parameters µ and λ on small sets of atomization energies (AE6 set) and reaction barrier heights (BH6 set), the study of the basis convergence, and the tests on large sets of atomization energies (AE49 set), reaction barrier heights (DBH24 set), and weak intermolecular interactions (S22 set). Section V contains conclusions and future work prospects. Finally, the Appendix contains the derivation of the uniform coordinate scaling relation and the Coulomb/high-density and shortrange/low-density limits of the short-range correlation density functional involved in this work. Unless otherwise specified, Hartree atomic units are tacitly assumed throughout this work.
II. RANGE-SEPARATED DOUBLE-HYBRID DENSITY-FUNCTIONAL THEORY
A. Exact theory
The derivation of the RSDH density-functional theory starts from the universal density functional [38],
F [n] = min Ψ→n Ψ| T + Ŵee |Ψ , ( 5
)
where T is the kinetic-energy operator, Ŵee the Coulomb electron-electron repulsion operator, and the minimization is done over normalized antisymmetric multideterminant wave functions Ψ giving a fixed density n. The universal density functional is then decomposed as
F [n] = F µ,λ [n] + Ēsr,µ,λ Hxc [n], (6)
where F µ,λ [n] is defined as
F µ,λ [n] = min Ψ→n Ψ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee |Ψ . (7)
In Eq. ( 7), Ŵ lr,µ ee is the long-range electron-electron repulsion operator and λ Ŵ sr,µ ee is the short-range electron-electron repulsion operator scaled by the constant λ, with expressions:
Ŵ lr,µ ee = 1 2 w lr,µ ee (r 12 )n 2 (r 1 , r 2 )dr 1 dr 2 , (8) Ŵ sr,µ ee = 1 2 w sr,µ ee (r 12 )n 2 (r 1 , r 2 )dr 1 dr 2 , (9)
where n2 (r 1 , r 2 ) = n(r 1 )n(r 2 ) -δ(r 1r 2 )n(r 1 ) is the pair-density operator, written with the density operator n(r). Equation ( 6) defines the complement short-range Hartree-exchange-correlation density functional Ēsr,µ,λ Hxc [n] depending on the two parameters µ and λ. It can itself be decomposed as Ēsr,µ,λ
Hxc [n] = E sr,µ,λ H [n] + Ēsr,µ,λ xc [n], (10)
where E sr,µ,λ H
[n] is the short-range Hartree contribution,
E sr,µ,λ H [n] = (1 -λ) × 1 2 w sr,µ ee (r 12 )n(r 1 )n(r 2 )dr 1 dr 2 , (11)
and Ēsr,µ,λ xc
[n] is the short-range exchangecorrelation contribution.
The exact ground-state electronic energy of a N -electron system in the external nuclei-electron potential v ne (r) can be expressed as
E = min n→N F [n] + v ne (r)n(r)dr = min n→N F µ,λ [n] + Ēsr,µ,λ Hxc [n] + v ne (r)n(r)dr = min Ψ→N Ψ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne |Ψ + Ēsr,µ,λ Hxc [n Ψ ] , (12)
where n → N refers to N -representable densities, Ψ → N refers to N -electron normalized antisymmetric multideterminant wave functions, and n Ψ denotes the density coming from Ψ, i.e. n Ψ (r) = Ψ| n(r) |Ψ . In Eq. ( 12), the last line was obtained by using the expression of F µ,λ [n] in Eq. ( 7), introducing the nuclei-electron potential operator Vne = v ne (r)n(r)dr, and recomposing the two-step minimization into a single one, i.e. min n→N min Ψ→n = min Ψ→N . The minimizing wave function Ψ µ,λ in Eq. ( 12) satisfies the corresponding Euler-Lagrange equation, leading to the Schrödinger-like equation
T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne + V sr,µ,λ Hxc [n Ψ µ,λ ] |Ψ µ,λ = E µ,λ |Ψ µ,λ , (13)
where E µ,λ is the Lagrange multiplier associated with the normalization constraint of the wave function. In Eq. ( 13), V sr,µ,λ Hxc [n] = v sr,µ,λ Hxc (r)n(r)dr is the complement short-range Hartree-exchangecorrelation potential operator with v sr,µ,λ Hxc (r) = δ Ēsr,µ,λ Hxc [n]/δn(r). Equation ( 13) defines an effective Hamiltonian Ĥµ,λ = T + Vne + Ŵ lr,µ ee +λ Ŵ sr,µ ee + V sr,µ,λ Hxc [n Ψ µ,λ ] that must be solved iteratively for its ground-state multideterminant wave function Ψ µ,λ which gives the exact ground-state density and the exact ground-state energy via Eq. ( 12), independently of µ and λ.
We have therefore defined an exact formalism combining a wave-function calculation with a density functional. This formalism encompasses several important special cases:
• µ = 0 and λ = 0. In Eq. ( 12), the electron-electron operator vanishes, Ŵ lr,µ=0 ee + 0 × Ŵ sr,µ=0 ee = 0, and the density functional reduces to the KS Hartree-exchange-correlation density functional, Ēsr,µ=0,λ=0
Hxc
[n] = E Hxc [n], so that we recover standard KS DFT
E = min Φ→N Φ| T + Vne |Φ + E Hxc [n Φ ] , ( 14
)
where Φ is a single-determinant wave function.
• µ → ∞ or λ = 1. In Eq. ( 12), the electronelectron operator reduces to the Coulomb interaction Ŵ lr,µ→∞ ee + λ Ŵ sr,µ→∞ ee = Ŵee or Ŵ lr,µ ee + 1 × Ŵ sr,µ ee = Ŵee , and the density functional vanishes, Ēsr,µ→∞,λ
Hxc
[n] = 0 or Ēsr,µ,λ=1
Hxc
[n] = 0, so that we recover standard wave-function theory
E = min Ψ→N Ψ| T + Ŵee + Vne |Ψ . ( 15
)
• 0 < µ < ∞ and λ = 0. In Eq. ( 12), the electron-electron operator reduces to the longrange interaction Ŵ lr,µ ee + 0 × Ŵ sr,µ ee = Ŵ lr,µ ee , and the density functional reduces to the usual shortrange density functional, Ēsr,µ,λ=0
Hxc
[n] = Ēsr,µ Hxc [n], so that we recover range-separated DFT [START_REF] Savin | Recent Developments of Modern Density Functional Theory[END_REF][START_REF] Toulouse | [END_REF]
E = min Ψ→N Ψ| T + Ŵ lr,µ ee + Vne |Ψ + Ēsr,µ Hxc [n Ψ ] . (16)
• µ = 0 and 0 < λ < 1. In Eq. ( 12), the electronelectron operator reduces to the scaled Coulomb interaction Ŵ lr,µ=0 ee + λ Ŵ sr,µ=0 ee = λ Ŵee , and the density functional reduces to the λ-complement density functional, Ēsr,µ=0,λ
Hxc
[n] = Ēλ Hxc [n], so that we recover the multideterminant extension of KS DFT based on the linear decomposition of the electron-electron interaction [12,39]
E = min Ψ→N Ψ| T + λ Ŵee + Vne |Ψ + Ēλ Hxc [n Ψ ] . (17)
B. Single-determinant approximation
As a first step, we introduce a single-determinant approximation in Eq. ( 12),
E µ,λ 0 = min Φ→N Φ| T + Ŵ lr,µ ee + λ Ŵ sr,µ ee + Vne |Φ + Ēsr,µ,λ Hxc [n Φ ] , (18)
where the search is over N -electron normalized single-determinant wave functions. The minimizing single determinant Φ µ,λ is given by the HF-or KS-like equation
T + Vne + V lr,µ Hx,HF [Φ µ,λ ] + λ V sr,µ Hx,HF [Φ µ,λ ] + V sr,µ,λ Hxc [n Φ µ,λ ] |Φ µ,λ = E µ,λ 0 |Φ µ,λ , (19)
where V lr,µ Hx,HF [Φ µ,λ ] and V sr,µ Hx,HF [Φ µ,λ ] are the longrange and short-range HF potential operators constructed with the single determinant Φ µ,λ , and E µ,λ 0 is the Lagrange multiplier associated with the normalization constraint. Equation ( 19) must be solved self-consistently for its single-determinant ground-state wave function Φ µ,λ . Note that, due to the single-determinant approximation, the density n Φ µ,λ is not the exact ground-state density and the energy in Eq. ( 18) is not the exact ground-state energy and depends on the parameters µ and λ. It can be rewritten in the form
E µ,λ 0 = Φ µ,λ | T + Vne |Φ µ,λ + E H [n Φ µ,λ ] +E lr,µ x,HF [Φ µ,λ ] + λE sr,µ x,HF [Φ µ,λ ] + Ēsr,µ,λ xc [n Φ µ,λ ],( 20
)
where E H [n] = (1/2) w ee (r 12 )n(r 1 )n(r 2 )dr 1 dr 2 is the standard Hartree energy with the Coulomb electron-electron interaction, and E lr,µ x,HF and E sr,µ
x,HF are the long-range and short-range HF exchange energies. For µ = 0 and λ = 0, we recover standard KS DFT, while for µ → ∞ or λ = 1 we recover standard HF theory. For intermediate values of µ and λ, this scheme is very similar to the approximations of Refs. 22-33, except that the part of correlation associated with the interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) is missing in Eq. (20). The addition of this correlation is done in a second step with MP2 perturbation theory.
C. Second-order Møller-Plesset perturbation theory
A rigorous non-linear Rayleigh-Schrödinger perturbation theory starting from the singledeterminant reference of Section II B can be developed, similarly to what was done for the RSH+MP2 method in Refs. 6, 40, 41 and for the one-parameter DH approximations in Ref. 12. This is done by introducing a perturbation strength parameter ǫ and defining the energy expression:
E µ,λ,ǫ = min Ψ→N Ψ| T + Vne + V lr,µ Hx,HF [Φ µ,λ ] +λ V sr,µ Hx,HF [Φ µ,λ ] + ǫ Ŵµ,λ |Ψ + Ēsr,µ,λ Hxc [n Ψ ] ,( 21
)
where the search is over N -electron normalized antisymmetric multideterminant wave functions, and Ŵµ,λ is a Møller-Plesset-type perturbation operator
Ŵµ,λ = Ŵ lr,µ ee + λ Ŵ sr,µ ee -V lr,µ Hx,HF [Φ µ,λ ] -λ V sr,µ Hx,HF [Φ µ,λ ].( 22
)
The minimizing wave function Ψ µ,λ,ǫ in Eq. ( 21) is given by the corresponding Euler-Lagrange equation:
T + Vne + V lr,µ Hx,HF [Φ µ,λ ] + λ V sr,µ Hx,HF [Φ µ,λ ] + ǫ Ŵµ,λ + V sr,µ,λ Hxc [n Ψ µ,λ,ǫ ] |Ψ µ,λ,ǫ = E µ,λ,ǫ |Ψ µ,λ,ǫ . (23)
For ǫ = 0, Eq. ( 23) reduces to the singledeterminant reference of Eq. ( 19), i.e. Ψ µ,λ,ǫ=0 = Φ µ,λ and E µ,λ,ǫ=0 = E µ,λ 0 . For ǫ = 1, Eq. ( 23) reduces to Eq. ( 13), i.e. Ψ µ,λ,ǫ=1 = Ψ µ,λ and E µ,λ,ǫ=1 = E µ,λ , and Eq. ( 21) reduces to Eq. ( 12), i.e. we recover the physical energy E µ,λ,ǫ=1 = E, independently of µ and λ. The perturbation theory is then obtained by expanding these quantities in (k) . Following the same steps as in Ref. 6, we find the zeroth-order energy,
ǫ around ǫ = 0: E µ,λ,ǫ = ∞ k=0 ǫ k E µ,λ,(k) , Ψ µ,λ,ǫ = ∞ k=0 ǫ k Ψ µ,λ,(k) , and E µ,λ,ǫ = ∞ k=0 ǫ k E µ,λ,
E µ,λ,(0) = Φ µ,λ | T + Vne + V lr,µ Hx,HF [Φ µ,λ ] +λ V sr,µ Hx,HF [Φ µ,λ ] |Φ µ,λ + Ēsr,µ,λ Hxc [n Φ µ,λ ], (24)
and the first-order energy correction,
E µ,λ,(1) = Φ µ,λ | Ŵµ,λ |Φ µ,λ , (25)
so that the zeroth+first order energy gives back the energy of the single-determinant reference in Eq. ( 20),
E µ,λ,(0) + E µ,λ,(1) = E µ,λ 0 . (26)
The second-order energy correction involves only double-excited determinants Φ µ,λ ij→ab (of energy E µ,λ 0,ij→ab ) and takes the form a MP2-like correlation energy, assuming a non-degenerate ground state in Eq. ( 19),
E µ,λ,(2) = E µ,λ c,MP2 = - occ i<j vir a<b Φ µ,λ ij→ab | Ŵµ,λ |Φ µ,λ 2 E µ,λ 0,ij→ab -E µ,λ 0 = - occ i<j vir a<b ij| ŵlr,µ ee + λ ŵsr,µ ee |ab -ij| ŵlr,µ ee + λ ŵsr,µ ee |ba 2 ε a + ε b -ε i -ε j , (27)
where i and j refer to occupied spin orbitals and a and b refer to virtual spin orbitals obtained from Eq. ( 19), ε k are the associated orbital energies, and ij| ŵlr,µ ee + λ ŵsr,µ ee |ab are the twoelectron integrals corresponding to the interaction w lr,µ ee (r 12 )+λw sr,µ ee (r 12 ). Note that the orbitals and orbital energies implicitly depend on µ and λ. Just like in standard Møller-Plesset perturbation theory, there is a Brillouin theorem making the singleexcitation term vanish (see Ref. 6). Also, contrary to the approach of Refs. 20, 21, the second-order energy correction does not involve any contribution from the second-order correction to the density. The total RSDH energy is finally
E µ,λ RSDH = E µ,λ 0 + E µ,λ c,MP2 . (28)
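For readers who prefer code to spin-orbital algebra, the following Python sketch spells out the sum in Eq. (27). It is not the implementation used in this work: the array eri_mod is assumed to contain the two-electron integrals of the modified interaction w_lr^µ + λ w_sr^µ in the RSDH spin-orbital basis (physicists' notation), and the random numbers at the bottom serve only to make the snippet executable, not to be physical.

```python
import numpy as np

def rsdh_mp2_correlation(eri_mod, eps, n_occ):
    """MP2-like correlation energy of Eq. (27) in a spin-orbital basis.

    eri_mod[p, q, r, s] is assumed to hold <pq| w_lr^mu + lam*w_sr^mu |rs>
    over the RSDH orbitals, and eps the corresponding orbital energies.
    """
    n = len(eps)
    e_c = 0.0
    for i in range(n_occ):
        for j in range(i + 1, n_occ):
            for a in range(n_occ, n):
                for b in range(a + 1, n):
                    num = eri_mod[i, j, a, b] - eri_mod[i, j, b, a]   # antisymmetrized element
                    e_c -= num ** 2 / (eps[a] + eps[b] - eps[i] - eps[j])
    return e_c

# Tiny fake example: 4 occupied / 4 virtual spin orbitals with random "integrals".
rng = np.random.default_rng(0)
n, n_occ = 8, 4
eri = rng.normal(scale=1e-2, size=(n, n, n, n))
eps = np.concatenate([np.sort(rng.uniform(-1.0, -0.3, n_occ)),
                      np.sort(rng.uniform(0.2, 1.5, n - n_occ))])
print(rsdh_mp2_correlation(eri, eps, n_occ))
```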
It is instructive to decompose the correlation energy in Eq. ( 27) as
E µ,λ c,MP2 = E lr,µ c,MP2 + λE lr-sr,µ c,MP2 + λ 2 E sr,µ c,MP2 , (29)
with a pure long-range contribution,
E lr,µ c,MP2 = - occ i<j vir a<b ij| ŵlr,µ ee |ab -ij| ŵlr,µ ee |ba 2 ε a + ε b -ε i -ε j , (30)
a pure short-range contribution,
E sr,µ c,MP2 = - occ i<j vir a<b | ij| ŵsr,µ ee |ab -ij| ŵsr,µ ee |ba | 2 ε a + ε b -ε i -ε j , (31)
and a mixed long-range/short-range contribution,
E lr-sr,µ c,MP2 = - occ i<j vir a<b ij| ŵlr,µ ee |ab -ij| ŵlr,µ ee |ba ( ab| ŵsr,µ ee |ij -ba| ŵsr,µ ee |ij ) ε a + ε b -ε i -ε j + c.c., (32)
where c.c. stands for the complex conjugate. The exchange-correlation energy in the RSDH approximation is thus
E µ,λ xc,RSDH = E lr,µ x,HF + λE sr,µ x,HF + E lr,µ c,MP2 +λE lr-sr,µ c,MP2 + λ 2 E sr,µ c,MP2 + Ēsr,µ,λ xc [n]. (33)
It remains to develop approximations for the complement short-range exchange-correlation density functional Ēsr,µ,λ xc
[n], which is done in Section II D.
D. Complement short-range exchange-correlation density functional
Decomposition into exchange and correlation
The complement short-range exchangecorrelation density functional Ēsr,µ,λ xc
[n] can be decomposed into exchange and correlation contributions,
Ēsr,µ,λ xc [n] = E sr,µ,λ x [n] + Ēsr,µ,λ c [n], (34)
where the exchange part is defined with the KS single determinant Φ[n] and is linear with respect to λ,
E sr,µ,λ x [n] = Φ[n]| (1 -λ) Ŵ sr,µ ee |Φ[n] -E sr,µ,λ H [n] = (1 -λ)E sr,µ x [n], (35)
where
E sr,µ x [n] = E sr,µ,λ=0
x
[n] is the usual shortrange exchange density functional, as already introduced, e.g., in Ref. 5). Several (semi-)local approximations have been proposed for E sr,µ
x [n] (see, e.g., Refs. [START_REF] Savin | Recent Developments of Modern Density Functional Theory[END_REF][START_REF] Toulouse | [END_REF][42][43][44][45][46][47][48][49]. By contrast, the complement short-range correlation density functional Ēsr,µ,λ c
[n] cannot be exactly expressed in terms of the short-range correlation density functional Ēsr,µ c
[n] = Ēsr,µ,λ=0 c
[n] of Ref. 5 for which several (semi-)local approximations have been proposed [START_REF] Toulouse | [END_REF][45][46][47][48][49][50]. Note that in the approach of Ref. 20 the complement density functional was defined using the pure long-range interacting wave function Ψ µ = Ψ µ,λ=1 and it was possible, using uniform coordinate scaling relations, to find an exact expression for it in terms of previously studied density functionals. This is not the case in the present approach because the complement density functional is defined using the wave function Ψ µ,λ obtained with both long-range and short-range interactions. As explained in the Appendix, uniform coordinate scaling relations do not allow one to obtain an exact expression for Ēsr,µ,λ c
[n] in terms of previously studied density functionals. Therefore, the difficulty lies in developing approximations for Ēsr,µ,λ c
[n]. For this, we first give the exact expression of Ēsr,µ,λ c
[n] in the Coulomb limit µ → 0 (and the related high-density limit) and in the shortrange limit µ → ∞ (and the related low-density limit).
Expression of Ēsr,µ,λ c
[n] in the Coulomb limit µ → 0 and in the high-density limit
The complement short-range correlation density functional Ēsr,µ,λ c
[n] can be written as
Ēsr,µ,λ c [n] = E c [n] -E µ,λ c [n], (36)
where E c [n] is the standard KS correlation density functional and E µ,λ c [n] is the correlation density functional associated with the interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ). For µ = 0, the density functional
E µ=0,λ c [n] = E λ c [n]
corresponds to the correlation functional associated with the scaled Coulomb interaction λw ee (r 12 ), which can be exactly expressed as 51,[START_REF] Levy | Density Functional Theory[END_REF] where n 1/λ (r) = (1/λ 3 )n(r/λ) is the density with coordinates uniformly scaled by 1/λ. Therefore, for µ = 0, the complement short-range correlation density functional is
E λ c [n] = λ 2 E c [n 1/λ ] [
Ēsr,µ=0,λ c [n] = E c [n] -λ 2 E c [n 1/λ ], (37)
which is the correlation functional used in the density-scaled one-parameter double-hybrid (DS1DH) scheme of Sharkas et al. [12]. For a KS system with a non-degenerate ground state, we have in the λ → 0 limit:
E c [n 1/λ ] = E GL2 c [n] + O(λ) where E GL2 c
[n] is the second-order Görling-Levy (GL2) correlation energy [START_REF] Görling | [END_REF]. Therefore, in this case, Ēsr,µ=0,λ c
[n] has a quadratic dependence in λ near λ = 0. In practice with GGA functionals, it has been found that the density scaling in Eq. ( 37) can sometimes be advantageously neglected, i.e.
E c [n 1/λ ] ≈ E c [n] [12, 39], giving Ēsr,µ=0,λ c [n] ≈ (1 -λ 2 )E c [n]. ( 38
)
Even if we do not plan to apply the RSDH scheme with µ = 0, the condition in Eq. ( 37) is in fact relevant for an arbitrary value of µ in the high-density limit, i.e. n γ (r) = γ 3 n(γr) with γ → ∞, since in this limit the short-range interaction becomes equivalent to the Coulomb interaction in the complement short-range correlation density functional: lim γ→∞ Ēsr,µ,λ
c [n γ ] = lim γ→∞ Ēsr,µ=0,λ c [n γ ] (see Appendix).
In fact, for a KS system with a non-degenerate ground state, the approximate condition in Eq. ( 38) is sufficient to recover the exact high-density limit for an arbitrary value of µ which is (see Appendix)
lim γ→∞ Ēsr,µ,λ c [n γ ] = (1 -λ 2 )E GL2 c [n]. ( 39
)
3. Expression of Ēsr,µ,λ c
[n] in the short-range limit µ → ∞ and the low-density limit
The leading term in the asymptotic expansion of Ēsr,µ,λ
c [n] as µ → ∞ is (see Appendix) Ēsr,µ,λ c [n] = (1 -λ) π 2µ 2 n 2,c [n](r, r)dr + O 1 µ 3 , (40)
where n 2,c [n](r, r) is the correlation part of the on-top pair density for the Coulomb interaction. We thus see that, for µ → ∞, Ēsr,µ,λ c
[n] is linear with respect to λ. In fact, since the asymptotic expansion of the usual short-range correlation functional is Ēsr,µ
c [n] = π/(2µ 2 ) n 2,c [n](r, r)dr + O(1/µ 3 ) [5], we can write for µ → ∞, Ēsr,µ,λ c [n] = (1 -λ) Ēsr,µ c [n] + O 1 µ 3 . ( 41
)
The low-density limit, i.e. n γ (r) = γ 3 n(γr) with γ → 0, is closely related to the limit µ → ∞ (see Appendix)
Ēsr,µ,λ c [n γ ] ∼ γ→0 γ 3 (1 -λ)π 2µ 2 n 1/γ 2,c [n](r, r)dr ∼ γ→0 γ 3 (1 -λ)π 4µ 2 -n(r) 2 + m(r) 2 dr, (42)
in which appears n
1/γ 2,c
[n](r, r), the on-top pair density for the scaled Coulomb interaction (1/γ)w ee (r 12 ), and its strong-interaction limit lim γ→0 n [54] where m(r) is the spin magnetization. Thus, in the low-density limit, contrary to the usual KS correlation functional E c [n] which goes to zero linearly in γ [51], times the complicated nonlocal strictlycorrelated electron functional [55], the complement short-range correlation functional Ēsr,µ,λ c
1/γ 2,c [n](r, r) = -n(r) 2 /2 + m(r) 2 /2
[n] goes to zero like γ 3 and becomes a simple local functional of n(r) and m(r). We now propose several simple approximations for Ēsr,µ,λ c
[n]. On the one hand, Eq. ( 38) suggests the approximation
Ēsr,µ,λ c,approx1 [n] = (1 -λ 2 ) Ēsr,µ c [n], (43)
which is correctly quadratic in λ at µ = 0 but is not linear in λ for µ → ∞. On the other hand, Eq. ( 41) suggests the approximation
Ēsr,µ,λ c,approx2 [n] = (1 -λ) Ēsr,µ c [n], (44)
which is correctly linear in λ for µ → ∞ but not quadratic in λ at µ = 0. However, it is possible to impose simultaneously the two limiting behaviors for µ = 0 and µ → ∞ with the following approximation
Ēsr,µ,λ c,approx3 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ √ λ c
[n], (45) which reduces to Eq. ( 38) for µ = 0 and satisfies Eq. ( 40) for µ → ∞. Another possibility, proposed in Ref. 37, is
Ēsr,µ,λ c,approx4 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ/λ c [n 1/λ ], (46)
which correctly reduces to Eq. ( 37) for µ = 0. For µ → ∞, its asymptotic expansion is
Ēsr,µ,λ c,approx4 [n] = π 2µ 2 n 2,c [n](r, r)dr -λ 4 π 2µ 2 n 2,c [n 1/λ ](r, r)dr + O 1 µ 3 , (47)
i.e. it does not satisfy Eq. (40). Contrary to what was suggested in Ref. 37, Eq. ( 46) is not exact but only an approximation. However, using the scaling relation on the system-averaged on-top pair density [54]
n 2,c [n γ ](r, r)dr = γ 3 n 1/γ 2,c [n](r, r)dr, (48)
it can be seen that, in the low-density limit γ → 0, Eq. ( 47) correctly reduces to Eq. ( 42). In Ref. 37, the authors propose to neglect the scaling of the density in Eq. ( 46) leading to
Ēsr,µ,λ c,approx5 [n] = Ēsr,µ c [n] -λ 2 Ēsr,µ/λ c [n], (49)
which reduces to Eq. ( 38) for µ = 0, but which has also a wrong λ-dependence for large µ
Ēsr,µ,λ c,approx5 [n] = (1 -λ 4 ) π 2µ 2 n 2,c [n](r, r)dr +O 1 µ 3 , (50)
and no longer satisfies the low-density limit.
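The λ- and µ-scalings that distinguish approximations 1 to 5 [Eqs. (43)-(46) and (49)] are summarized in the following Python sketch for a uniform density, for which scaling the density by 1/λ amounts to scaling r_s by λ. The function ec_sr_toy is only a qualitative stand-in for the short-range LDA correlation energy per particle (a real calculation would use the parametrization of Ref. 50); the sketch is meant to make the limiting behaviors at λ = 0 and λ = 1 explicit, not to produce quantitative numbers.

```python
import numpy as np

def ec_sr_toy(rs, mu):
    """Toy stand-in for the complement short-range correlation energy per
    particle: finite at mu = 0 and decaying roughly like 1/mu^2 at large mu.
    Not the LDA parametrization of Ref. 50."""
    ec_coulomb = -0.05 / (1.0 + 0.5 * rs)
    return ec_coulomb / (1.0 + (mu * rs) ** 2)

def ec_bar_sr(rs, mu, lam, variant):
    """Approximations 1-5 for the complement short-range correlation energy
    per particle of a uniform gas; n -> n_{1/lam} corresponds to rs -> lam*rs."""
    e = ec_sr_toy
    if lam == 0.0:
        return e(rs, mu)                      # all variants reduce to Ec^{sr,mu}
    if variant == 1:
        return (1.0 - lam**2) * e(rs, mu)     # Eq. (43)
    if variant == 2:
        return (1.0 - lam) * e(rs, mu)        # Eq. (44)
    if variant == 3:
        return e(rs, mu) - lam**2 * e(rs, mu * np.sqrt(lam))   # Eq. (45)
    if variant == 4:
        return e(rs, mu) - lam**2 * e(lam * rs, mu / lam)      # Eq. (46)
    if variant == 5:
        return e(rs, mu) - lam**2 * e(rs, mu / lam)            # Eq. (49)
    raise ValueError(variant)

rs, mu = 2.0, 0.5
for v in range(1, 6):
    print(v, [round(ec_bar_sr(rs, mu, lam, v), 5) for lam in (0.0, 0.5, 1.0)])
# All variants vanish at lam = 1 and reduce to the usual short-range functional at lam = 0.
```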
Another strategy is to start from the decomposition of the MP2-like correlation energy in Eq. ( 29) which suggests the following approximation for the complement short-range correlation functional
Ēsr,µ,λ c,approx6 [n] = (1 -λ)E lr-sr,µ c [n] +(1 -λ 2 )E sr,µ c [n], (51)
where E lr-sr,µ c
[n] = Ēsr,µ c
[n]-E sr,µ c
[n] is the mixed long-range/short-range correlation functional [56,57] and E sr,µ c
[n] is the pure short-range correlation functional associated with the short-range interaction w sr,µ ee (r 12 ) [56,57]. An LDA functional has been constructed for E sr,µ c
[n] [58]. Since 40) or (41). One can also enforce the exact condition at µ = 0, Eq. (38), by introducing a scaling of the density
Ēsr,µ,λ c,approx7 [n] = (1 -λ)E lr-sr,µ c [n] + E sr,µ c [n] -λ 2 E sr,µ/λ c [n 1/λ ]. ( 52
)
5. Assessment of the approximations for Ēsr,µ,λ c
[n] on the uniform-electron gas
We now test the approximations for the complement short-range correlation functional Ēsr,µ,λ c
[n] introduced in Sec. II D 4 on the spin-unpolarized uniform-electron gas.
As a reference, for several values of the Wigner-Seitz radius r s = (3/(4πn)) 1/3 and the parameters µ and λ, we have calculated the complement shortrange correlation energy per particle as
εsr,µ,λ c,unif (r s ) = ε c,unif (r s ) -ε µ,λ c,unif (r s ), (53)
where ε c,unif (r s ) is the correlation energy per particle of the uniform-electron gas with the Coulomb electron-electron w ee (r 12 ) and ε µ,λ c,unif (r s ) is the correlation energy per particle of an uniform-electron gas with the modified electron-electron w lr,µ ee (r 12 )+ λw sr,µ ee (r 12 ). We used what is known today as the direct random-phase approximation + secondorder screened exchange (dRPA+SOSEX) method (an approximation to coupled-cluster doubles) [59,60] as introduced for the uniform-electron gas by Freeman [61] and extended for modified electronelectron interactions in Refs. 4, 45, and which is known to give reasonably accurate correlation energies per particle of the spin-unpolarized electron gas (error less than 1 millihartree for r s < 10). We note that these calculations would allow us to construct a complement short-range LDA correlation functional, but we refrain from doing that since we prefer to avoid having to do a complicated fit of εsr,µ,λ c (r s ) with respect to r s , µ, and λ. Moreover, this would only give a spin-independent LDA functional. We thus use these uniform-electron gas calculations only to test the approximations of Sec. II D 4.
For several values of r s , µ, and λ, we have calculated the complement short-range correlation energy per particle corresponding to the approximations 1 to 7 using the LDA approximation for Ēsr,µ c
[n] from Ref. 50 (for approximations 1 to 7), as well as the LDA approximation for E sr,µ c
[n] from Ref. 58 (for approximations 6 and 7), and the errors with respect to the dRPA+SOSEX results are reported in Fig. 1. Note that the accuracy of the dRPA+SOSEX reference decreases as r s increases, the error being of the order of 1 millihartree for r s = 10, which explains why the curves on the third graph of Fig. 1 appear shifted with respect to zero at large r s .
By construction, all the approximations become exact for λ = 0 (and trivially for λ = 1 or in the µ → ∞ limit since the complement short-range correlation energy goes to zero in these cases). For intermediate values of λ and finite values of µ, all the approximations, except approximation 2, tend to give too negative correlation energies. As it could have been expected, approximation 2, which is the only one incorrectly linear in λ at µ = 0, gives quite a large error (of the order of 0.01 hartree or more) for small µ, intermediate λ, and small r s (it in fact diverges in the high-density limit r s → 0), but the error goes rapidly to zero as µ increases, reflecting the fact that this approximation has the correct leading term of the asymptotic expansion for µ → ∞. On the contrary, approximation 1, being quadratic in λ, gives a smaller error (less than 0.005 hartree) for small µ but the error goes slower to zero as µ increases. Approximation 3 combines the advantages of approximations 1 and 2: it gives a small error for small µ which goes rapidly to zero as µ increases. Approximation 4, which contains the scaling of the den- Error on the complement short-range correlation energy per particle (hartree) FIG. 1: Error on the complement short-range correlation energy per particle εsr,µ,λ c,unif (rs) of the uniformelectron gas obtained with approximations 1 to 7 of Sec. II D 4 with respect to the dRPA+SOSEX results. sity, is exact for µ = 0, and gives a small error (at most about 0.003 hartree) for intermediate values of µ, but the error does not go rapidly to zero as µ increases. Again, this reflects the fact that this approximation does not give the correct leading term of the asymptotic expansion for µ → ∞ for arbitrary λ and r s . This confirms that Eq. ( 46) does not give the exact complement short-range correlation functional, contrary to what was thought in Ref. 37. A nice feature however of approximation 4 is that it becomes exact in the high-density limit r s → 0 of the uniform-electron gas (the scaling of the density at µ = 0 is needed to obtain the correct high-density limit in this zero-gap system). Approximation 5, obtained from approximation 4 by neglecting the scaling of the density in the correlation functional, and used in Ref. 37, gives quite large errors for the uniform-electron gas, approaching 0.01 hartree. Approximations 6 and 7 are quite good. They both have the correct leading term of the asymptotic expansion for µ → ∞, but approximation 7 has the additional advantage of having also the correct µ → 0 or r s → 0 limit. Approximation 7 is our best approximation, with a maximal error of about 1 millihartree.
Unfortunately, approximations 6 and 7 involve the pure short-range correlation functional E sr,µ c
[n], for which we currently have only a spinunpolarized LDA approximation [58]. For this reason, we do not consider these approximations in the following for molecular calculations. We will limit ourselves to approximations 1 to 5 which only involve the complement short-range correlation functional Ēsr,µ c
[n], for which we have spindependent GGAs [START_REF] Toulouse | [END_REF][46][47][48][49].
III. COMPUTATIONAL DETAILS
The RSDH scheme has been implemented in a development version of the MOLPRO 2015 program [START_REF] Werner | version 2015.1, a package of ab initio programs[END_REF]. The calculation is done in two steps: first a self-consistent-field calculation is perform according to Eqs. ( 18)-( 20), and then the MP2-like correlation energy in Eq. ( 27) is evaluated with the previously calculated orbitals. The λ-dependent complement short-range exchange functional is calculated according to Eq. ( 35) and the approximations 1 to 5 [see Eqs. ( 43)-( 49)] have been implemented for the complement short-range correlation functional, using the short-range Perdew-Becke-Ernzerhof (PBE) exchange and correlation functionals of Ref. 48 for E sr,µ
x [n] and Ēsr,µ c
[n]. The RSDH scheme was applied on the AE6 and BH6 sets [START_REF] Lynch | [END_REF], as a first assessment of the approximations on molecular systems and in order to determine the optimal parameters µ and λ. The AE6 set is a small representative benchmark of six atomization energies consisting of SiH [64] at the geometries optimized by quadratic configuration interaction with single and double excitations with the modified Gaussian-3 basis set (QCISD/MG3) [65]. The reference values for the atomization energies and barrier heights are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Refs. 66, 67. For each approximation, we have first varied µ and λ between 0 and 1 by steps of 0.1 to optimize the parameters on each set. We have then refined the search by steps of 0.02 to find the common optimal parameters on the two sets combined.
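The two-step parameter scan described above can be summarized by the following Python sketch. The mae callable standing for the combined AE6+BH6 mean absolute error, the ±0.1 refinement window around the coarse optimum, and the quadratic surrogate used in the example are all assumptions made for illustration; the actual optimization simply recomputes the benchmark errors at every grid point.

```python
import numpy as np

def grid_search(mae, coarse=0.1, fine=0.02):
    """Coarse 0.1-step scan of (mu, lambda) over [0,1]x[0,1], then a 0.02-step
    refinement around the coarse optimum.  `mae` returns the error to minimize."""
    def scan(mus, lams):
        grid = [(m, l) for m in mus for l in lams]
        return min(grid, key=lambda p: mae(*p))
    m0, l0 = scan(np.arange(0.0, 1.0 + 1e-9, coarse),
                  np.arange(0.0, 1.0 + 1e-9, coarse))
    mus  = np.arange(max(0.0, m0 - coarse), min(1.0, m0 + coarse) + 1e-9, fine)
    lams = np.arange(max(0.0, l0 - coarse), min(1.0, l0 + coarse) + 1e-9, fine)
    return scan(mus, lams)

# Illustrative smooth surrogate with a minimum near (0.5, 0.6); a real run would
# recompute the AE6 and BH6 errors at every grid point instead.
mae_model = lambda mu, lam: 2.0 + 8.0 * (mu - 0.5) ** 2 + 5.0 * (lam - 0.6) ** 2
print(grid_search(mae_model))
```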
RSDH scheme was then tested on the AE49 set of 49 atomization energies [68] (consisting of the G2-1 set [69,70] stripped of the six molecules containing Li, Be, and Na [71]) and on the DBH24/08 set [72,73] of 24 forward and reverse reaction barrier heights. These calculations were performed with the cc-pVQZ basis set, with MP2(full)/6-31G* geometries for the AE49 set, and with the aug-cc-pVQZ basis set [74] with QCISD/MG3 geometries for the DBH24/08 set. The reference values for the AE49 set are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Ref. 75, and the reference values for the DBH24/08 set are the zeropoint exclusive values from Ref. 73.
Finally, the RSDH scheme was tested on the S22 set of 22 weakly interacting molecular complexes [77]. These calculations were performed with the aug-cc-pVDZ and aug-cc-pVTZ basis sets and the counterpoise correction, using the geometries from Ref. 77 and the complete-basis-set (CBS)-extrapolated CCSD(T) reference interaction energies from Ref. 78. The local MP2 approach [79] is used on the largest systems in the S22 set.
Core electrons are kept frozen in all our MP2 calculations. Spin-restricted calculations are performed for all the closed-shell systems, and spinunrestricted calculations for all the open-shell systems.
As statistical measures of goodness of the different methods, we compute mean absolute errors (MAEs), mean errors (MEs), root mean square deviations (RMSDs), mean absolute percentage errors (MA%E), and maximal and minimal errors.
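For completeness, these statistical measures can be computed as in the following minimal Python sketch (the example numbers are made up and are not data from this work).

```python
import numpy as np

def error_stats(calc, ref):
    """MAE, ME, RMSD, MA%E, and max/min signed errors for a set of values."""
    calc, ref = np.asarray(calc, float), np.asarray(ref, float)
    err = calc - ref
    return {
        "MAE":  np.mean(np.abs(err)),
        "ME":   np.mean(err),
        "RMSD": np.sqrt(np.mean(err ** 2)),
        "MA%E": 100.0 * np.mean(np.abs(err / ref)),
        "Max":  err.max(),
        "Min":  err.min(),
    }

# Example with made-up atomization energies (kcal/mol).
print(error_stats([81.1, 190.7, 175.2], [83.9, 189.7, 180.6]))
```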
IV. RESULTS AND DISCUSSION
A. Optimization of the parameters on the AE6 and BH6 sets
We start by applying the RSDH scheme on the small AE6 and BH6 sets, and determining optimal values for the parameters µ and λ. Figure 2 shows the MAEs for these two sets obtained with approximations 1 to 5 of Sec. II D 4 as a function of λ for µ = 0.5 and µ = 0.6. We choose to show plots for only these two values of µ, since they are close the optimal value of µ for RSH+MP2 [12,80] and also for RSDH with all the approximations except approximation 2. This last approximation is anyhow of little value for thermochemistry since it gives large MAEs on the AE6 set for intermediate values of λ, which must be related to the incorrect linear dependence in λ of this approximation in the limit µ → 0 or the high-density limit. We thus only discuss next the other four approximations.
Let us start by analyzing the results for the AE6 set. For the approximations 1, 3, 4, and 5, we can always find an intermediate value of λ giving a smaller MAE than the two limiting cases λ = 0 (corresponding to RSH+MP2) and λ = 1 (corresponding to standard MP2). Among these four approximations, the approximations 1 and 5 are the least effective to reduce the MAE in comparison to RSH+MP2 and MP2, which may be connected to the fact that these two approximations are both incorrect in the low-density limit. The approximations 3 and 4, which become identical in the high-and low-density limits (for systems with non-zero gaps), are the two best approximations, giving minimal MAEs of 2.2 and 2.3 kcal/mol, respectively, at the optimal parameter values (µ, λ) = (0.5, 0.6) and (0.6, 0.65), respectively.
Let us consider now the results for the BH6 set. Each MAE curve displays a marked minimum at an intermediate value of λ, at which the corresponding approximation is globally more accurate than both RSH+MP2 and MP2. All the approximations perform rather similarly, giving minimal MAEs of about 1 kcal/mol. In fact, for µ = 0.5 and µ = 0.6, the approximations 3 and 4 give essentially identical MAEs for all λ. The optimal parameter values for these two approximations are (µ, λ) = (0.5, 0.5), i.e. relatively close to the optimal values found for the AE6 set.
For each of our two best approximations 3 and 4, we also determine optimal values of µ and λ that minimize the total MAE of the combined AE6 + BH6 set, and which could be used for general chemical applications. For the approximation 3, the optimal parameter values are (µ, λ) = (0.46, 0.58), giving a total MAE of 1.68 kcal/mol. For the approximation 4, the optimal parameter values are (µ, λ) = (0.62, 0.60), giving a total MAE of 1.98 kcal/mol. In the following, we further assess the approximations 3 and 4 with these optimal parameters.
B. Assessment on the AE49 and DBH24/08 sets of atomization energies and reaction barrier heights
We assess now the RSDH scheme with the approximations 3 and 4, evaluated with the previously determined optimal parameters (µ, λ), on the larger AE49 and DBH24/08 sets of atomization energies and reaction barrier heights. The results are reported in Tables I and II, and compared with other methods corresponding to limiting cases of the RSDH scheme: DS1DH [12] (with the PBE exchange-correlation functional [76]) corresponding to the µ = 0 limit of the RSDH scheme with TABLE I: Atomization energies (in kcal/mol) of the AE49 set calculated by DS1DH (with the PBE exchangecorrelation functional [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The calculations were carried out using the cc-pVQZ basis set at MP2(full)/6-31G* geometries and with parameters (µ, λ) optimized on the AE6+BH6 combined set. The reference values are the non-relativistic FC-CCSD(T)/cc-pVQZ-F12 values of Ref. 75 approximation 4, RSH+MP2 [6] corresponding to the λ = 0 limit of the RSDH scheme, and standard MP2 corresponding to the µ → ∞ or λ = 1 limit of the RSDH scheme. On the AE49 set, the two RSDH approximations (3 and 4) give very similar results. With a MAE of 4.3 kcal/mol and a RMSD of about 5.1 kcal/mol, they provide an overall improvement over both RSH+MP2 and standard MP2 which give MAEs larger by about 1 kcal/mol and RMSDs larger by about 2 kcal/mol. It turns out that the DS1DH approximation gives a smaller MAE of 3.2 kcal/mol than the two RSDH approximations, but a similar RMSD of 5.0 kcal/mol. On the DBH24/08 set, the two RSDH approximations give less similar but still comparable results with MAEs of 1.9 and 2.7 kcal/mol for approximations 3 and 4, respectively. This is a big improvement over standard MP2 which gives a MAE of 6.2 kcal/mol, but similar to the accuracy of RSH+MP2 which gives a MAE of 2.0 kcal/mol. Again, the smallest MAE of 1.5 kcal/mol is obtained with the the DS1DH approximation.
The fact that the DS1DH approximation ap- pears to be globally more accurate that the RSDH approximations on these larger sets but not on the small AE6 and BH6 sets points to a limited representativeness of the latter small sets, and suggests that there may be room for improvement by optimizing the parameters on larger sets.
C. Assessment of the basis convergence
We study now the basis convergence of the RSDH scheme. Figure 3 shows the convergence of the total energy of He, Ne, N 2 , and H 2 O with respect to the cardinal number X for a series of Dunning basis sets cc-pVXZ (X = 2, 3, 4, 5), calculated with MP2, RSH+MP2, and RSDH with approximations 3 and 4 (with the parameters (µ, λ) optimized on the AE6+BH6 combined set).
The results for MP2 and RSH+MP2 are in agreement with what is already known. MP2 has a slow basis convergence, with the error on the total energy decreasing as a third-power law, ∆E MP2 ∼ A X -3 [81,82], due to the difficulty of describing the short-range part of the correlation hole near the electron-electron cusp. RSH+MP2 has a fast basis convergence, with the error decreasing as an exponential law, ∆E RSH+MP2 ∼ B e -βX [8], since it involves only the long-range MP2 correlation energy.
Unsurprisingly, the RSDH scheme displays a ba-sis convergence which is intermediate between that of MP2 and RSH+MP2. What should be remarked is that, for a given basis, the RSDH basis error is closer to the RSH+MP2 basis error than to the MP2 basis error. The basis dependence of RSDH is thus only moderately affected by the presence of short-range MP2 correlation. This can be understood by the fact that RSDH contains only a modest fraction λ 2 ≈ 0.35 of the pure short-range MP2 correlation energy E sr,µ c,MP2 [see Eq. ( 29)], which should have a third-power-law convergence, while the pure long-range correlation energy E lr,µ c,MP2 and the mixed long-range/short-range correlation energy E lr-sr,µ c,MP2 both should have an exponential-law convergence. We thus expect the RSDH error to decrease as ∆E RSDH ∼ λ 2 A X -3 + B e -βX , with constants A, B, β a priori different from the ones introduced for MP2 and RSH+MP2. The results of Figure 3 are in fact in agreement with such a basis dependence with similar constants A, B, β for MP2, RSH+MP2, and RSDH.
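The expected composite behavior ΔE_RSDH ∼ λ²A X⁻³ + B e^{−βX} can be checked with a small fit of the type sketched below in Python; the energy errors entered there are illustrative placeholders, not the values plotted in Figure 3, and serve only to show how the power-law and exponential components would be separated.

```python
import numpy as np
from scipy.optimize import curve_fit

# Cardinal numbers X of cc-pVXZ (X = 2,...,5) and *illustrative* total-energy
# errors (hartree) with respect to an estimated complete-basis-set limit.
X = np.array([2.0, 3.0, 4.0, 5.0])
err_mp2 = np.array([4.0e-2, 1.2e-2, 5.0e-3, 2.6e-3])     # roughly A * X**-3
err_rsh = np.array([6.0e-3, 1.5e-3, 3.7e-4, 9.2e-5])     # roughly B * exp(-beta*X)
lam = 0.6

(A,), _ = curve_fit(lambda x, A: A * x**-3, X, err_mp2)
(B, beta), _ = curve_fit(lambda x, B, beta: B * np.exp(-beta * x), X, err_rsh, p0=[0.1, 1.0])

# Expected RSDH error: only the fraction lam**2 of pure short-range MP2 correlation
# keeps the slow power-law tail; the rest converges exponentially.
err_rsdh_model = lam**2 * A * X**-3 + B * np.exp(-beta * X)
print(f"A = {A:.3f}, B = {B:.3f}, beta = {beta:.2f}")
print("predicted RSDH-type errors:", np.round(err_rsdh_model, 5))
```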
D. Assessment on the S22 set of intermolecular interactions
We finally test the RSDH scheme on weak intermolecular interactions. Table III reports the interaction energies for the 22 molecular dimers of the S22 set calculated by RSH+MP2, RSDH (with approximations 3 and 4), and MP2, using the augcc-pVDZ and aug-cc-pVTZ basis sets. We also report DS1DH results, but since this method is quite inaccurate for dispersion interactions we only did calculations with the aug-cc-pVDZ basis set for a rough comparison. Again, the basis dependence of RSDH is intermediate between the small basis dependence of RSH+MP2 and the larger basis dependence of standard MP2. The basis convergence study in Section IV C suggests that the RSDH results with the aug-cc-pVTZ basis set are not far from the CBS limit.
The two approximations (3 and 4) used in the RSDH scheme give overall similar results, which may be rationalized by the fact low-density regions primarily contribute to these intermolecular interaction energies and the approximations 3 and 4 become identical in the low-density limit. For hydrogen-bonded complexes, RSDH with the augcc-pVTZ basis set gives a MA%E of about 3-4%, similar to standard MP2 but in clear improvement over RSH+MP2 which tends to give too negative interaction energies. Presumably, this is so because the explicit wave-function treatment of the short-range interaction λw sr,µ ee (r 12 ) makes RSDH accurately describe of the short-range component of the intermolecular interaction, while still cor-rectly describe the long-range component. For complexes with a predominant dispersion contribution, RSDH with the aug-cc-pVTZ basis set gives too negative interaction energies by about 30 %, similar to both MP2 and RSH+MP2. Notably, DS1DH gives much too negative interaction energies for the largest and most polarizable systems, leading to a MA%E of more than 100 % with augcc-pVDZ basis set. This can be explained by the fact that the reduced amount of HF exchange at long range in DS1DH leads to smaller HOMO-LUMO gaps in these systems in comparison with RSH+MP2 and RSDH, causing a overlarge MP2 contribution. For mixed complexes, RSDH with the aug-cc-pVTZ basis set gives a MA%E of about 14-15 %, which is a bit worse than MP2 but slightly better than RSH+MP2. Again, DS1DH tends to give significantly too negative interaction energies for the largest dimers.
Overall, for weak intermolecular interactions, RSDH thus provides a big improvement over DS1DH, a small improvement over RSH+MP2, and is quite similar to standard MP2. [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The parameters (µ, λ) used are the ones optimized on the AE6+BH6 combined set, except for the RSH+MP2 values which are taken from Ref. 83 in which µ = 0.50 was used. The basis sets used are aVDZ and aVTZ which refer to aug-cc-pVDZ and aug-cc-pVTZ, respectively, and the counterpoise correction is applied. The values in italics were obtained using the local MP2 approach, the ones with an asterisk (*) were obtained in Ref. 84 with the density-fitting approximation, and the ones with a dagger ( †) were obtained with the approximation: EaVTZ(RSDH approx4) ≈ EaVDZ(RSDH approx4) + EaVTZ(RSDH approx3) -EaVDZ(RSDH approx3). The geometries of the complexes are taken from Ref. 77 and the reference interaction energies are taken as the CCSD(T)/CBS estimates of Ref. 78
V. CONCLUSION
We have studied a wave-function/DFT hybrid approach based on a CAM-like decomposition of the electron-electron interaction in which a correlated wave-function calculation associated with the two-parameter interaction w lr,µ ee (r 12 ) + λw sr,µ ee (r 12 ) is combined with a complement short-range density functional. Specifically, we considered the case of MP2 perturbation theory for the wave-function part and obtained a scheme that we named RSDH. This RSDH scheme is a generalization of the usual one-parameter DHs (corresponding to the special case µ = 0) and the range-separated MP2/DFT hybrid known as RSH+MP2 (corresponding to the special case λ = 0). It allows one to have both 100% HF exchange and MP2 correlation at long interelectronic distances and fractions of HF exchange and MP2 correlation at short interelectronic distances. We have also proposed a number of approximations for the complement short-range exchange-correlation density functional, based on the limits µ = 0 and µ → ∞, and showed their relevance on the uniform-electron gas with the corresponding electron-electron interaction, in particular in the high-and low-density limits.
The RSDH scheme with complement shortrange DFAs constructed from a short-range version of the PBE functional has then been applied on small sets of atomization energies (AE6 set) and reaction barrier heights (BH6 set) in order to find optimal values for the parameters µ and λ. It turns out that the optimal values of these parameters for RSDH, µ ≈ 0.5 -0.6 and λ ≈ 0.6, are very similar to the usual optimal values found separately for RSH+MP2 and one-parameter DHs. With these values of the parameters, RSDH has a relatively fast convergence with respect to the size of the one-electron basis, which can be explained by the fact that its contains only a modest fraction λ 2 ≈ 0.35 of pure short-range MP2 correlation. We have tested the RSDH scheme with the two best complement short-range DFAs (re-ferred to as approximations 3 and 4) on large sets of atomization energies (AE49 set), reaction barrier heights (DBH24 set), and weak intermolecular interactions (S22 set). The results show that the RSDH scheme is either globally more accurate or comparable to RSH+MP2 and standard MP2. If we had to recommend a computational method for general chemical applications among the methods tested in this work, it would be RSDH with approximation 3 with parameters (µ, λ) = (0.46, 0.58).
There is, however, much room for improvement and extension. The parameters µ and λ could be optimized on larger training sets. More accurate complement short-range DFAs should be constructed. The MP2 correlation term could be replaced by random-phase approximations, which would more accurately describe dispersion interactions [59,83], or by multireference perturbation theory [85], which would capture static correlation effects. The RSDH scheme could be extended to linear-response theory for calculating excitation energies or molecular properties, e.g. by generalizing the methods of Refs. 86-89.
4. Approximations for Ē_c^{sr,µ,λ}[n]

Since E_c^{lr-sr,µ=0}[n] = 0 and E_c^{sr,µ=0}[n] = E_c[n], the approximation in Eq. (51) reduces to Eq. (38) for µ = 0. For µ → ∞, since E_c^{sr,µ}[n] decays faster than 1/µ², i.e. E_c^{sr,µ}[n] = O(1/µ³) [58], E_c^{lr-sr,µ}[n] and Ē_c^{sr,µ}[n] have the same leading term in the large-µ expansion, i.e. E_c^{lr-sr,µ}[n] = Ē_c^{sr,µ}[n] + O(1/µ³), and thus the approximation in Eq. (51) satisfies Eq. (
FIG. 2: MAEs for the AE6 and BH6 sets obtained with the RSDH scheme using approximations 1 to 5 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48) as a function of λ for µ = 0.5 and µ = 0.6. The basis set used is cc-pVQZ.
… SiH 4 , S 2 , SiO, C 3 H 4 (propyne), C 2 H 2 O 2 (glyoxal), and C 4 H 8 (cyclobutane). The BH6 set is a small representative benchmark of forward and reverse hydrogen transfer barrier heights of three reactions, OH + CH 4 → CH 3 + H 2 O, H + OH → O + H 2 , and H + H 2 S → HS + H 2 . All the calculations for the AE6 and BH6 sets were performed with the Dunning cc-pVQZ basis set.
Molecule DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference
(µ,λ) = (0,0.70) (0.58,0) (0.46,0.58) (0.62,0.60)
CH 81.13 78.38 79.93 79.58 79.68 83.87
CH 2 ( 3 B 1 ) 190.68 190.19 190.42 190.32 188.70 189.74
CH 2 ( 1 A 1 ) 175.20 170.26 173.36 173.24 174.45 180.62
CH 3 305.32 302.91 304.43 304.34 303.36 306.59
CH 4 415.79 410.84 414.23 414.45 414.83 418.87
NH 81.39 81.09 80.28 79.49 78.57 82.79
NH 2 179.12 177.12 177.03 176.22 176.65 181.96
NH 3 293.24 288.76 290.33 290.02 293.11 297.07
OH 105.26 104.49 104.02 103.81 105.78 106.96
OH 2 229.78 225.48 226.96 227.21 233.83 232.56
FH 140.11 137.20 138.16 138.35 144.17 141.51
SiH 2 ( 1 A 1 ) 146.66 143.21 146.22 146.15 145.90 153.68
SiH 2 ( 3 B 1 ) 130.48 133.05 130.56 130.39 128.93 133.26
SiH 3 222.07 220.05 222.39 222.21 220.51 228.08
SiH 4 315.08 311.89 315.80 315.81 314.27 324.59
PH 2 148.62 146.37 147.60 146.98 144.95 153.97
PH 3 233.35 229.18 232.06 231.69 230.24 241.47
SH 2 178.66 174.18 177.29 177.66 178.55 183.30
ClH 105.39 101.63 104.43 104.90 106.53 107.20
HCCH 406.75 399.05 403.84 405.17 409.58 402.76
H 2 CCH 2 561.56 554.55 559.03 559.63 561.38 561.34
H 3 CCH 3 708.35 701.94 706.59 706.98 707.15 710.20
CN 178.63 172.93 174.03 172.54 168.84 180.06
HCN 315.27 305.21 310.17 310.79 319.26 311.52
CO 262.93 254.60 258.48 258.64 269.29 258.88
HCO 283.19 277.00 278.81 278.50 285.79 278.28
H 2 CO 375.40 367.85 370.90 370.96 379.19 373.21
H 3 COH 510.48 505.00 507.31 507.41 513.32 511.83
N 2 229.78 218.09 223.07 222.86 234.80 227.44
H 2 NNH 2 433.52 428.92 428.93 427.80 432.46 436.70
NO 156.24 151.17 151.16 149.94 156.94 152.19
O 2 126.21 119.73 120.29 119.49 128.55 120.54
HOOH 267.02 259.77 260.59 260.11 272.51 268.65
F 2 38.66 31.64 32.30 31.17 42.18 38.75
CO 2 400.49 390.46 393.18 393.30 409.33 388.59
Si 2 71.64 67.21 69.78 70.72 70.56 73.41
P 2 112.91 107.27 111.26 112.87 113.59 115.95
S 2 104.29 100.90 102.71 103.56 103.67 103.11
Cl 2 58.97 54.19 57.31 57.94 60.43 59.07
SiO 192.77 185.82 189.17 189.93 200.09 192.36
SC 172.35 163.07 164.01 170.43 175.16 170.98
SO 127.74 122.46 123.89 123.77 129.29 125.80
ClO 62.96 60.81 59.82 58.55 59.69 64.53
ClF 62.43 57.94 58.98 58.50 65.20 62.57
Si 2 H 6 521.08 517.07 522.83 522.88 519.17 535.47
CH 3 Cl 393.93 388.23 392.44 393.14 394.57 394.52
CH 3 SH 470.26 463.90 468.46 469.10 469.94 473.49
HOCl 164.83 158.46 160.70 160.75 168.50 165.79
SO 2 260.22 244.46 250.91 251.27 270.72 259.77
MAE 3.19 5.49 4.31 4.31 5.37
ME -1.18 -6.32 -3.98 -3.97 -0.24
RMSD 4.98 7.41 5.13 5.18 6.75
Min error -14.39 -18.40 -12.64 -12.59 -16.30
Max error 11.90 1.87 4.59 4.71 20.74
TABLE II: Forward (F) and reverse (R) reaction barrier heights (in kcal/mol) of the DBH24/08 set calculated by DS1DH (with the PBE exchange-correlation functional [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The calculations were carried out using the aug-cc-pVQZ basis set at QCISD/MG3 geometries and with parameters (µ, λ) optimized on the AE6+BH6 combined set. The reference values are taken from Ref. 73.
Reaction DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference
(µ,λ) = (0,0.70) (0.58,0) (0.46,0.58) (0.62,0.60)
F/R F/R F/R F/R F/R F/R
Heavy-atom transfer
H + N 2 O → OH + N 2 21.64/75.80 19.34/77.14 22.76/80.39 25.01/82.80 35.94/89.26 17.13/82.47
H + ClH → HCl + H 18.51/18.51 19.77/19.77 20.23/20.23 20.99/20.99 22.79/22.79 18.00/18.00
CH 3 + FCl → CH 3 F + Cl 7.54/60.77 8.21/63.59 9.79/64.81 11.25/66.64 19.74/74.29 6.75/60.00
Nucleophilic substitution
Cl − • • • CH 3 Cl → ClCH 3 • • • Cl − 12.45/12.45 15.40/15.40 14.36/14.36 9.90/9.90 14.64/14.64 13.41/13.41
F − • • • CH 3 Cl → FCH 3 • • • Cl − 2.83/27.93 4.72/31.46 3.99/30.52 4.27/30.67 4.59/28.88 3.44/29.42
OH − + CH 3 F → HOCH 3 + F − -3.11/16.76 -1.59/21.56 -1.92/19.23 -1.53/19.56 -1.75/17.86 -2.44/17.66
Unimolecular and association
H + N 2 → HN 2 16.36/10.27 14.03/13.09 17.00/11.57 18.75/11.45 27.60/8.06 14.36/10.61
H + C 2 H 4 → CH 3 CH 2 4.15/44.13 2.70/45.76 4.34/45.49 5.40/45.89 9.32/46.54 1.72/41.75
HCN → HNC 49.13/33.01 48.52/34.81 49.07/33.59 50.05/33.95 34.46/52.09 48.07/32.82
Hydrogen transfer
OH + CH 4 → CH 3 + H 2 O 4.54/19.33 6.03/19.75 6.53/20.38 7.33/21.35 7.66/25.01 6.70/19.60
H + OH → O + H 2 12.22/11.02 13.44/10.00 13.49/12.64 14.47/13.94 17.56/15.58 10.70/13.10
H + H 2 S → H 2 + HS 4.04/15.09 4.73/15.35 5.00/15.81 5.46/15.96 6.42/16.36 3.60/17.30
MAE 1.52 2.01 1.85 2.65 6.17
ME -0.09 1.06 1.50 1.95 4.70
RMSD 2.09 2.36 2.30 3.26 8.56
Min error -6.67 -5.33 -2.08 -3.51 -13.61
Max error 4.51 4.01 5.63 7.88 19.61
TABLE III: Interaction energies (in kcal/mol) for the complexes of the S22 set calculated by DS1DH (with the PBE exchange-correlation functional [76]), RSH+MP2, RSDH with approximations 3 and 4 of Sec. II D 4 (with the short-range PBE exchange-correlation functional of Ref. 48), and MP2. The parameters (µ, λ) used are the ones optimized on the AE6+BH6 combined set, except for the RSH+MP2 values which are taken from Ref. 83 in which µ = 0.50 was used. The basis sets used are aVDZ and aVTZ, which refer to aug-cc-pVDZ and aug-cc-pVTZ, respectively, and the counterpoise correction is applied. The values in italics were obtained using the local MP2 approach, the ones with an asterisk (*) were obtained in Ref. 84 with the density-fitting approximation, and the ones with a dagger (†) were obtained with the approximation: E_aVTZ(RSDH approx4) ≈ E_aVDZ(RSDH approx4) + E_aVTZ(RSDH approx3) − E_aVDZ(RSDH approx3). The geometries of the complexes are taken from Ref. 77 and the reference interaction energies are taken as the CCSD(T)/CBS estimates of Ref. 78. The MP2 values are also taken from Ref. 78.
Complex DS1DH RSH+MP2 RSDH approx3 RSDH approx4 MP2 Reference
(µ,λ) = (0,0.70) (0.50,0) (0.46,0.58) (0.62,0.60)
aVDZ aVDZ aVTZ aVDZ aVTZ aVDZ aVTZ aVDZ aVTZ
Hydrogen-bonded complexes
Ammonia dimer -2.70 -3.13 -3.25 -3.00 -3.18 -2.94 -3.16 -2.68 -2.99 -3.17
Water dimer -4.63 -5.34 -5.45 -5.03 -5.19 -4.93 -5.12 -4.36 -4.69 -5.02
Formic acid dimer -17.28 -21.20 -21.57 -19.31 -20.14 -18.86 -19.80 -15.99 -17.55 -18.80
Formamide dimer -14.63 -17.44 -17.64 -16.30 -16.81 -15.98 -16.60 -13.95 -15.03 -16.12
Uracil dimer C 2h -18.86 -22.62 -22.82 * -20.52 -21.77 -20.53 -21.78 † -18.41 -19.60 -20.69
2-pyridoxine/2-aminopyridine -18.65 -18.86 -18.60 * -17.43 -17.93 -17.04 -17.55 † -15.56 -16.64 -17.00
Adenine/thymine WC -17.52 -18.26 -18.12 * -16.47 -17.28 -16.23 -17.04 † -14.71 -15.80 -16.74
MAE 1.16 1.34 1.42 0.26 0.68 0.23 0.51 1.70 0.75
ME 0.46 -1.33 -1.42 -0.07 -0.68 0.22 -0.50 1.70 0.75
RMSD 1.28 1.56 1.66 0.29 0.81 0.29 0.64 1.88 0.85
MA%E 9.00 8.36 9.03 2.04 4.14 2.35 3.01 12.63 5.62
Complexes with predominant dispersion contribution
Methane dimer -0.25 -0.46 -0.48 -0.42 -0.47 -0.42 -0.47 -0.39 -0.46 -0.53
Ethene dimer -0.84 -1.45 -1.55 -1.38 -1.68 -1.33 -1.55 -1.18 -1.46 -1.50
Benzene/methane -0.87 -1.62 -1.71 -1.56 -1.70 -1.56 -1.63 -1.47 -1.71 -1.45
Benzene dimer C 2h -7.21 -4.08 -4.24 * -3.52 -3.78 -4.14 -4.40 † -4.25 -4.70 -2.62
Pyrazine dimer -8.97 -5.97 -6.04 * -6.50 -6.21 -6.02 -5.73 † -6.00 -6.55 -4.20
Uracil dimer C 2 -13.31 -11.76 -11.95 * -12.70 -11.42 -10.77 -9.49 † -9.80 -10.63 -9.74
Indole/benzene -17.26 -6.95 -6.96 * -8.83 -6.97 -9.25 -7.39 † -7.13 -7.74 -4.59
Adenine/thymine stack -20.84 -15.11 -14.71 * -14.28 -14.56 -14.25 -14.53 † -13.24 -14.26 -11.66
MAE 4.54 1.42 1.43 1.67 1.33 1.50 1.19 1.01 1.43
ME -4.16 -1.39 -1.42 -1.61 -1.31 -1.43 -1.11 -0.90 -1.40
RMSD 6.14 1.83 1.80 2.23 1.67 2.10 1.65 1.37 1.85
MA%E 102.09 28.48 29.60 34.02 28.31 34.42 27.45 27.96 33.65
Mixed complexes
Ethene/ethyne -1.28 -1.62 -1.68 -1.57 -1.68 -1.43 -1.67 -1.39 -1.58 -1.51
Benzene/water -2.66 -3.49 -3.68 -3.33 -3.55 -3.29 -3.53 -2.98 -3.35 -3.29
Benzene/ammonia -1.70 -2.49 -2.63 -2.39 -2.58 -2.38 -2.59 -2.21 -2.52 -2.32
Benzene/hydrogen cyanide -3.86 -5.31 -5.38 -4.93 -5.26 -4.89 -5.26 -4.37 -4.92 -4.55
Benzene dimer C 2v -4.57 -3.33 -3.49 * -3.26 -3.47 -3.26 -3.47 † -3.09 -3.46 -2.71
Indole/benzene T-shaped -11.71 -6.55 -6.85 * -7.49 -6.50 -7.91 -6.92 † -6.10 -6.71 -5.62
Phenol dimer -8.05 -8.05 -8.09 * -6.89 -7.57 -7.15 -7.83 † -6.79 -7.36 -7.09
MAE 1.58 0.54 0.67 0.45 0.50 0.48 0.60 0.27 0.40
ME -0.97 -0.54 -0.67 -0.40 -0.50 -0.46 -0.60 0.02 -0.40
RMSD 2.31 0.76 0.86 0.75 0.57 0.90 0.71 0.50 0.69
MA%E 38.01 12.34 17.40 10.43 13.78 11.03 15.28 7.55 10.58
Total MAE 2.52 1.11 1.18 0.83 0.86 0.77 0.75 0.99 0.89
Total ME -1.67 -1.09 -1.18 -0.73 -0.85 -0.60 -0.75 0.22 -0.40
Total RMSD 4.03 1.45 1.49 1.42 1.15 1.37 1.13 1.35 1.25
Total MA%E 52.08 16.95 19.06 16.34 16.00 16.77 15.80 16.59 17.36
Acknowledgements
We thank Bastien Mussard for help with the MOLPRO software.
We also thank Labex MiChem for providing PhD financial support for C. Kalai.
Here, we generalize the uniform coordinate scaling relation, known for the KS correlation functional E_c[n] [51,[START_REF] Levy | Density Functional Theory[END_REF]90] and for the complement short-range correlation functional Ē_c^{sr,µ}[n] [56,91], to the λ-dependent complement short-range correlation functional Ē_c^{sr,µ,λ}[n]. We first define the universal density functional, for arbitrary parameters µ ≥ 0, λ ≥ 0, and ξ ≥ 0,
which is a simple generalization of the universal functional F µ,λ [n] in Eq. ( 7) such that
The minimizing wave function in Eq. (A.1) will be denoted by Ψ [n] defined by, for N electrons,
where γ > 0 is a scaling factor. The wave function Ψ µ/γ,λ/γ,ξ/γ γ
[n] yields the scaled density n γ (r) = γ 3 n(γr) and minimizes Ψ| T + ξ Ŵ lr,µ ee
where the right-hand side is minimal by definition of Ψ µ/γ,λ/γ,ξ/γ [n]. Therefore, we conclude that
and
Consequently, the corresponding correlation functional,
with the KS single-determinant wave function Φ[n] = Ψ µ=0,λ=0,ξ [n], satisfies the same scaling relation
Similarly, the associated short-range complement correlation functional,
Applying this relation for ξ = 1 gives the scaling relation for Ēsr,µ,λ
from which we see that the high-density limit γ → ∞ is related to the Coulomb limit µ → 0 and the low-density limit γ → 0 is related to the short-range limit µ → ∞ of Ēsr,µ,λ c
[n]. Note that by applying Eq. (A.9) with λ = 0 and γ = ξ we obtain the short-range complement correlation functional associated with the interaction ξw lr,µ ee in terms of the short-range complement correlation functional associated with the interaction w lr,µ/ξ ee , i.e. Ēsr,µ,0,ξ
as already explained in Ref. 91. Also, by applying Eq. (A.9) with ξ = 1 and γ = λ we obtain the short-range complement correlation functional associated with the interaction w lr,µ ee + λw sr,µ ee in terms of the short-range complement correlation functional associated with the interaction (1/λ)w We first give the limit of Ēsr,µ,λ,ξ c
[n] as µ → 0. Starting from Eq. (A.8) and noting that E µ=0,λ,ξ
where we have used the well-known relation, 51,[START_REF] Levy | Density Functional Theory[END_REF] [a special case of Eq. (A.7)]. In particular, for ξ = 1 we obtain the limit of Ēsr,µ,λ
(A.12)
We can now derive the high-density limit of Ēsr,µ,λ c
[n] using the scaling relation in Eq. (A.10) and the limit µ → 0 in Eq. (A.11)
where we have used
[n] [START_REF] Görling | [END_REF] assuming a KS system with a non-degenerate ground state.
3. Short-range limit and low-density limit of Ēsr,µ,λ
We first derive the leading term of the asymptotic expansion of Ēsr,µ,λ,ξ c
[n] as µ → ∞. Taking the derivative with respect to λ of Eq. (A.6), and using the Hellmann-Feynman theorem which states that the derivative of Ψ µ,λ,ξ [n] does not contribute, we obtain:
where Using now the asymptotic expansion of the short-range interaction [START_REF] Toulouse | [END_REF],
we obtain the leading term of the asymptotic expansion of Ēsr,µ,λ,ξ
where
[n](r, r) is the correlation part of the on-top pair density associated with the scaled Coulomb interaction ξw ee (r 12 ). For the special case ξ = 1, we obtain the leading term of the asymptotic expansion of Ēsr,µ,λ
where n 2,c [n](r, r) is the correlation part of the on-top pair density associated with the Coulomb interaction.
We can now derive the low-density limit of Ēsr,µ,λ c
[n] using the scaling relation in Eq. (A.10) and the asymptotic expansion as µ → ∞ in Eq. (A.17 where we have used the strong-interaction limit of the on-top pair density, lim γ→0 n 1/γ 2,c [n](r, r) = -n(r) 2 /2 + m(r) 2 /2 = -2n ↑ (r)n ↓ (r) [54] where m(r) is the spin magnetization and n σ (r) are the spin densities (σ =↑, ↓). |
01758912 | en | ["info.info-ni"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01758912v2/file/report.pdf
email: celine.comte@nokia.com
Dynamic Load Balancing with Tokens *
Efficiently exploiting the resources of data centers is a complex task that requires efficient and reliable load balancing and resource allocation algorithms. The former are in charge of assigning jobs to servers upon their arrival in the system, while the latter are responsible for sharing server resources between their assigned jobs. These algorithms should take account of various constraints, such as data locality, that restrict the feasible job assignments. In this paper, we propose a token-based mechanism that efficiently balances load between servers without requiring any knowledge on job arrival rates and server capacities. Assuming a balanced fair sharing of the server resources, we show that the resulting dynamic load balancing is insensitive to the job size distribution. Its performance is compared to that obtained under the best static load balancing and in an ideal system that would constantly optimize the resource utilization.
Introduction
The success of cloud services encourages operators to scale out their data centers and optimize the resource utilization. The current trend consists in virtualizing applications instead of running them on dedicated physical resources [START_REF] Barroso | The Datacenter As a Computer: An Introduction to the Design of Warehouse-Scale Machines[END_REF]. Each server may then process several applications in parallel and each application may be distributed among several servers. Better understanding the dynamics of such server pools is a prerequisite for developing load balancing and resource allocation policies that fully exploit this new degree of flexibility.
Some recent works have tackled this problem from the point of view of queueing theory [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF]. Their common feature is the adoption of a bipartite graph that translates practical constraints such as data locality into compatibility relations between jobs and servers. These models apply in various systems such as computer clusters, where the shared resource is the CPU [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF], and content delivery networks, where the shared resource is the server upload bandwidth [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF]. However, these pool models do not consider simultaneously the impact of complex load balancing and resource allocation policies. The model of [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF] lays emphasis on dynamic load balancing, assuming neither server multitasking nor job parallelism. The bipartite graph describes the initial compatibilities of incoming jobs, each of them being eventually assigned to a
(Figure 1 — job types with arrival rates ν 1 , ν 2 , assigned to job classes 1, 2, which are served by servers with capacities µ 1 , µ 2 , µ 3 .)
Figure 1: A compatibility graph between types, classes and servers. Two consecutive servers can be pooled to process jobs in parallel. Thus there are two classes, one for servers 1 and 2 and another for servers 2 and 3. Type-1 jobs can be assigned to any class, while type-2 jobs can only be assigned to the latter. This restriction may result from data locality constraints for instance.
single server. On the other hand, [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] focus on the problem of resource allocation, assuming a static load balancing that assigns incoming jobs to classes at random, independently of the system state. The class of a job in the system identifies the set of servers that can be pooled to process it in parallel. The corresponding bipartite graph, connecting classes to servers, restricts the set of feasible resource allocations.
In this paper, we introduce a tripartite graph that explicitly differentiates the compatibilities of an incoming job from its actual assignment by the load balancer. This new model allows us to study the joint effect of load balancing and resource allocation. A toy example is shown in Figure 1. Each incoming job has a type that defines its compatibilities; these may reflect its parallelization degree or locality constraints, for instance. Depending on the system state, the load balancer matches the job with a compatible class that subsequently determines its assigned servers. The upper part of our graph, which puts constraints on load balancing, corresponds to the bipartite graph of [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF]; the lower part, which restricts the resource allocation, corresponds to the bipartite graph of [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF].
We use this new framework to study load balancing and resource allocation policies that are insensitive, in the sense that they make the system performance independent of fine-grained traffic characteristics. This property is highly desirable as it allows service providers to dimension their infrastructure based on average traffic predictions only. It has been extensively studied in the queueing literature [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF][START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF][START_REF] Bonald | Insensitive load balancing[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF]. In particular, insensitive load balancing policies were introduced in [START_REF] Bonald | Insensitive load balancing[END_REF] in a generic queueing model, assuming an arbitrary insensitive allocation of the resources. These load balancing policies were defined as a generalization of the static load balancing described above, where the assignment probabilities of jobs to classes depend on both the job type and the system state, and are chosen to preserve insensitivity.
Our main contribution is an algorithm based on tokens that enforces such an insensitive load balancing without performing randomized assignments. More precisely, this is a deterministic implementation of an insensitive load balancing that adapts dynamically to the system state, under an arbitrary compatibility graph. The principle is as follows. The assignments are regulated through a bucket containing a fixed number of tokens of each class. An incoming job seizes the longest available token among those that identify a compatible class, and is blocked if it does not find any. The rationale behind this algorithm is to use the release order of tokens as an information on the relative load of their servers: a token that has been available for a long time without being seized is likely to identify a server set that is less loaded than others. As we will see, our algorithm mirrors the first-come, first-served (FCFS) service discipline proposed in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] to implement balanced fairness, which was defined in [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF] as the most efficient insensitive resource allocation.
The closest existing algorithm we know is assign longest idle server (ALIS), introduced in reference [START_REF] Adan | A loss system with skill-based servers under assign to longest idle server policy[END_REF] cited above. This work focuses on server pools without job parallel processing nor server multitasking. Hence, ALIS can be seen as a special case of our algorithm where each class identifies a server with a single token. The algorithm we propose is also related to the blocking version of Join-Idle-Queue [START_REF] Lu | Join-Idle-Queue: A novel load balancing algorithm for dynamically scalable web services[END_REF] studied in [START_REF] Van Der Boor | Load balancing in large-scale systems with multiple dispatchers[END_REF]. More precisely, we could easily generalize our algorithm to server pools with several load balancers, each with their own bucket. The corresponding queueing model, still tractable using known results on networks of quasireversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF], extends that of [START_REF] Van Der Boor | Load balancing in large-scale systems with multiple dispatchers[END_REF].
Organization of the paper Section 2 recalls known facts about resource allocation in server pools. We describe a standard pool model based on a bipartite compatibility graph and explain how to apply balanced fairness in this model. Section 3 contains our main contributions. We describe our pool model based on a tripartite graph and introduce a new token-based insensitive load balancing mechanism. Numerical results are presented in Section 4.
Resource allocation
We first recall the model considered in [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF][START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] to study the problem of resource allocation in server pools. This model will be extended in Section 3 to integrate dynamic load balancing.
Model
We consider a pool of S servers. There are N job classes and we let I = {1, . . . , N } denote the set of class indices. For now, each incoming job is assigned to a compatible class at random, independently of the system state. For each i ∈ I, the resulting arrival process of jobs assigned to class i is assumed to be Poisson with a rate λ i > 0 that may depend on the job arrival rates, compatibilities and assignment probabilities. The number of jobs of class i in the system is limited by i , for each i ∈ I, so that a new job is blocked if its assigned class is already full. Job sizes are independent and exponentially distributed with unit mean. Each job leaves the system immediately after service completion.
The class of a job defines the set of servers that can be pooled to process it. Specifically, for each i ∈ I, a job of class i can be served in parallel by any subset of servers within the non-empty set S i ⊂ {1, . . . , S}. This defines a bipartite compatibility graph between classes and servers, where there is an edge between a class and a server if the jobs of this class can be processed by this server. Figure 2 shows a toy example.
(Figure 2 — job classes with arrival rates λ 1 , λ 2 served by servers with capacities µ 1 , µ 2 , µ 3 .)
Figure 2: A compatibility graph between classes and servers. Servers 1 and 3 are dedicated, while server 2 can serve both classes. The server sets associated with classes 1 and 2 are S 1 = {1, 2} and S 2 = {2, 3}, respectively.
When a job is in service on several servers, its service rate is the sum of the rates allocated by each server to this job. For each s = 1, . . . , S, the capacity of server s is denoted by µ s > 0.
We can then define a function µ on the power set of I as follows: for each A ⊂ I,
µ(A) = ∑_{s ∈ ∪_{i∈A} S_i} µ_s
denotes the aggregate capacity of the servers that can process at least one class in A, i.e., the maximum rate at which jobs of these classes can be served. µ is a submodular, non-decreasing set function [START_REF] Fujishige | Submodular Functions and Optimization[END_REF]. It is said to be normalized because µ(∅) = 0.
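As an illustration of this definition, the set function µ can be evaluated directly from the class-server compatibility graph. The sketch below is ours, using the configuration of Figure 2 and hypothetical unit capacities; the inequalities printed for each non-empty A are exactly the constraints of the capacity set Σ introduced in the next subsection.

```python
# Sketch: the set function mu(A) for the toy configuration of Figure 2 (hypothetical capacities).
from itertools import chain, combinations

servers = {1: 1.0, 2: 1.0, 3: 1.0}      # server -> capacity mu_s (made-up values)
S = {1: {1, 2}, 2: {2, 3}}              # class -> compatible servers S_i

def mu(A):
    """Aggregate capacity of the servers able to process at least one class of A."""
    reachable = set().union(*(S[i] for i in A)) if A else set()
    return sum(servers[s] for s in reachable)

classes = sorted(S)
subsets = chain.from_iterable(combinations(classes, r) for r in range(1, len(classes) + 1))
for A in subsets:
    print(f"sum of phi_i over {set(A)} <= {mu(set(A))}")
```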
Balanced fairness
We first recall the definition of balanced fairness [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF], which was initially applied to server pools in [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF]. Like processor sharing (PS) policy, balanced fairness assumes that the capacity of each server can be divided continuously between its jobs. It is further assumed that the resource allocation only depends on the number of jobs of each class in the system; in particular, all jobs of the same class receive service at the same rate.
The system state is described by the vector x = (x i : i ∈ I) of numbers of jobs of each class in the system. The state space is X = {x ∈ N^N : x ≤ ℓ}, where ℓ = (ℓ i : i ∈ I) is the vector of per-class constraints and the comparison ≤ is taken componentwise. For each i ∈ I, we let φ i (x) denote the total service rate allocated to class-i jobs in state x. It is assumed to be nonzero if and only if x i > 0, in which case each job of class i receives service at rate φ i (x)/x i .
Queueing model Since all jobs of the same class receive service at the same rate, we can describe the evolution of the system with a network of N PS queues with state-dependent service capacities. For each i ∈ I, queue i contains jobs of class i; the arrival rate at this queue is λ i and its service capacity is φ i (x) when the network state is x. An example is shown in Figure 3 for the configuration of Figure 2.
(Figure 3 — a network of two PS queues with arrival rates λ 1 , λ 2 , states x 1 = 3, x 2 = 2 and service capacities φ 1 (x), φ 2 (x).)
Capacity set
The compatibilities between classes and servers restrict the set of feasible resource allocations. Specifically, the vector (φ i (x) : i ∈ I) of per-class service rates belongs to the following capacity set in any state x ∈ X :
Σ = { φ ∈ R_+^N : ∑_{i∈A} φ i ≤ µ(A), ∀A ⊂ I }.
As observed in [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF], the properties satisfied by µ guarantee that Σ is a polymatroid [START_REF] Fujishige | Submodular Functions and Optimization[END_REF].
Balance function It was shown in [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] that the resource allocation is insensitive if and only if there is a balance function Φ defined on X such that Φ(0) = 1 and
φ i (x) = Φ(x − e i ) / Φ(x), ∀x ∈ X , ∀i ∈ I(x), (1)
where e i is the N -dimensional vector with 1 in component i and 0 elsewhere and I(x) = {i ∈ I : x i > 0} is the set of active classes in state x. Under this condition, the network of PS queues defined above is a Whittle network [START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. The insensitive resource allocations that respect the capacity constraints of the system are characterized by a balance function Φ such that, for all x ∈ X \ {0},
Φ(x) ≥ (1/µ(A)) ∑_{i∈A} Φ(x − e i ), ∀A ⊂ I(x), A ≠ ∅.
Recursively maximizing the overall service rate in the system is then equivalent to minimizing Φ by choosing
Φ(x) = max_{A⊂I(x), A≠∅} (1/µ(A)) ∑_{i∈A} Φ(x − e i ), ∀x ∈ X \ {0}.
The resource allocation defined by this balance function is called balanced fairness.
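Since the state space is finite, this recursion can be evaluated by memoization, which directly yields the balanced-fair rates φ_i(x) = Φ(x − e_i)/Φ(x). The following sketch is ours; it uses the configuration of Figure 2 with hypothetical unit capacities and exploits the simplification stated just below in Eq. (2), namely that the max reduces to the set of active classes.

```python
# Sketch: balanced fairness via the balance function Phi (memoized recursion).
from functools import lru_cache

servers = {1: 1.0, 2: 1.0, 3: 1.0}      # hypothetical capacities
S = {1: {1, 2}, 2: {2, 3}}              # class -> compatible servers (Figure 2)
classes = sorted(S)

def mu(A):
    return sum(servers[s] for s in set().union(*(S[i] for i in A))) if A else 0.0

def minus(x, i):
    """State x minus one job of class i (x - e_i)."""
    return tuple(v - (c == i) for c, v in zip(classes, x))

@lru_cache(maxsize=None)
def Phi(x):
    """Balance function of balanced fairness; the max over subsets reduces to I(x)."""
    if all(v == 0 for v in x):
        return 1.0
    active = {i for i, v in zip(classes, x) if v > 0}
    return sum(Phi(minus(x, i)) for i in active) / mu(active)

def phi(x):
    """Per-class service rates phi_i(x) = Phi(x - e_i) / Phi(x)."""
    return {i: (Phi(minus(x, i)) / Phi(x) if v > 0 else 0.0) for i, v in zip(classes, x)}

print(phi((3, 2)))   # the state of Figure 3; the rates sum to mu({1, 2})
```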
It was shown in [START_REF] Shah | High-Performance Centralized Content Delivery Infrastructure: Models and Asymptotics[END_REF] that balanced fairness is Pareto-efficient in polymatroid capacity sets, meaning that the total service rate ∑_{i∈I(x)} φ i (x) is always equal to the aggregate capacity µ(I(x)) of the servers that can process at least one active class. By (1), this is equivalent to

Φ(x) = (1/µ(I(x))) ∑_{i∈I(x)} Φ(x − e i ), ∀x ∈ X \ {0}. (2)
Stationary distribution The Markov process defined by the system state x is reversible, with stationary distribution

π(x) = π(0) Φ(x) ∏_{i∈I} λ_i^{x_i} , ∀x ∈ X . (3)
By insensitivity, the system state has the same stationary distribution if the job sizes within each class are only i.i.d., as long as the traffic intensity of class i (defined as the average quantity of work brought by jobs of this class per unit of time) is λ i , for each i ∈ I. A proof of this result is given in [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] for Cox distributions, which form a dense subset within the set of distributions of nonnegative random variables.
Job scheduling
We now describe the sequential implementation of balanced fairness that was proposed in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF]. This will lay the foundations for the results of Section 3.
We still assume that a job can be distributed among several servers, but we relax the assumption that servers can process several jobs at the same time. Instead, each server processes its jobs sequentially in FCFS order. When a job arrives, it enters in service on every idle server within its assignment, if any, so that its service rate is the sum of the capacities of these servers. When the service of a job is complete, it leaves the system immediately and its servers are reallocated to the first job they can serve in the queue. Note that this sequential implementation also makes sense in a model where jobs are replicated over several servers instead of being processed in parallel. For more details, we refer the reader to [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF] where the model with redundant requests was introduced.
Since the arrival order of jobs impacts the rate allocation, we need to detail the system state. We consider the sequence c = (c 1 , . . . , c n ) ∈ I * , where n is the number of jobs in the system and c p is the class of the p-th oldest job, for each p = 1, . . . , n. ∅ denotes the empty state, with n = 0. The vector of numbers of jobs of each class in the system, corresponding to the state introduced in §2.2, is denoted by |c| = (|c| i : i ∈ I) ∈ X . It does not define a Markov process in general. We let I(c) = I(|c|) denote the set of active classes in state c. The state space of this detailed system state is C = {c ∈ I * : |c| ≤ }.
Queueing model Each job is in service on all the servers that were assigned this job but not those that arrived earlier. For each p = 1, . . . , n, the service rate of the job in position p is thus given by
∑_{s ∈ S_{c_p} \ ∪_{q=1}^{p−1} S_{c_q}} µ_s = µ(I(c 1 , . . . , c p )) − µ(I(c 1 , . . . , c p−1 )),
with the convention that (c 1 , . . . , c p-1 ) = ∅ if p = 1. The service rate of a job is independent of the jobs arrived later in the system. Additionally, the total service rate µ(I(c)) is independent of the arrival order of jobs. The corresponding queueing model is an order-independent (OI) queue [START_REF] Berezner | Order independent loss queues[END_REF][START_REF] Krzesinski | Order independent queues[END_REF]. An example is shown in Figure 4 for the configuration of Figure 2.
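The per-job rates of the OI queue are simply the successive increments of µ along the sequence c, as the next sketch (ours, with the hypothetical unit capacities of the previous examples) illustrates on the state c = (1, 1, 2, 1, 2) of Figure 4.

```python
# Sketch: FCFS per-job service rates mu(I(c_1..c_p)) - mu(I(c_1..c_{p-1})).
servers = {1: 1.0, 2: 1.0, 3: 1.0}      # hypothetical capacities
S = {1: {1, 2}, 2: {2, 3}}              # class -> compatible servers (Figure 2)

def mu(A):
    return sum(servers[s] for s in set().union(*(S[i] for i in A))) if A else 0.0

def per_job_rates(c):
    """Service rate of each job of the sequence c (oldest first)."""
    rates, seen = [], set()
    for cls in c:
        before = mu(seen)
        seen = seen | {cls}
        rates.append(mu(seen) - before)
    return rates

c = (1, 1, 2, 1, 2)
print(per_job_rates(c))                   # only the oldest job of each class gets a nonzero rate
print(sum(per_job_rates(c)), mu(set(c)))  # the total service rate is mu(I(c))
```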
(Figure 4 — an OI queue with arrivals λ 1 , λ 2 , in state c = (1, 1, 2, 1, 2), served at total rate µ(I(c)).)
Stationary distribution
The Markov process defined by the system state c is irreducible. The results of [START_REF] Krzesinski | Order independent queues[END_REF] show that this process is quasi-reversible, with stationary distribution
π(c) = π(∅) Φ(c) ∏_{i∈I} λ_i^{|c|_i} , ∀c ∈ C, (4)
where Φ is defined recursively on C by Φ(∅) = 1 and
Φ(c) = (1/µ(I(c))) Φ(c 1 , . . . , c n−1 ), ∀c ∈ C \ {∅}. (5)
We now go back to the aggregate state x giving the number of jobs of each class in the system. With a slight abuse of notation, we let
π(x) = ∑_{c:|c|=x} π(c) and Φ(x) = ∑_{c:|c|=x} Φ(c), ∀x ∈ X .
As observed in [START_REF] Krzesinski | Order independent queues[END_REF][START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF], it follows from (4) that
π(x) = π(∅) ∑_{c:|c|=x} Φ(c) ∏_{i∈I} λ_i^{x_i} = π(0) Φ(x) ∏_{i∈I} λ_i^{x_i}
in any state x. Using (5), we can show that Φ satisfies (2) with the initial condition Φ(0) = Φ(∅) = 1. Hence, the stationary distribution of the aggregate system state x is exactly that obtained in §2.2 under balanced fairness. It was also shown in [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] that the average per-class resource allocation resulting from FCFS service discipline is balanced fairness. In other words, we have
φ i (x) = ∑_{c:|c|=x} (π(c)/π(x)) µ i (c), ∀x ∈ X , ∀i ∈ I(x),
where φ i (x) is the total service rate allocated to class-i jobs in state x under balanced fairness, given by ( 1), and µ i (c) denotes the service rate received by the first job of class i in state c under FCFS service discipline:
µ i (c) = ∑_{p=1,…,n : c_p=i} ( µ(I(c 1 , . . . , c p )) − µ(I(c 1 , . . . , c p−1 )) ).
Observe that, by ( 3) and ( 4), the rate equality simplifies to
φ i (x) = ∑_{c:|c|=x} (Φ(c)/Φ(x)) µ i (c), ∀x ∈ X , ∀i ∈ I(x). (6)
We will use this last equality later.
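On small examples, equality (6) can be verified numerically by enumerating all orderings c with |c| = x. The sketch below is ours; it uses the configuration of Figure 2 with hypothetical unit capacities and compares, for x = (2, 1), the Φ-weighted average of µ_i(c) with the balanced-fair rate Φ(x − e_i)/Φ(x).

```python
# Sketch: numerical check of Eq. (6) -- averaging FCFS rates gives back balanced fairness.
from functools import lru_cache
from itertools import permutations

servers = {1: 1.0, 2: 1.0, 3: 1.0}
S = {1: {1, 2}, 2: {2, 3}}
classes = sorted(S)

def mu(A):
    return sum(servers[s] for s in set().union(*(S[i] for i in A))) if A else 0.0

def Phi_seq(c):
    """Balance function of the OI queue, computed from the recursion (5)."""
    value, prefix = 1.0, []
    for cls in c:
        prefix.append(cls)
        value /= mu(set(prefix))
    return value

def mu_i(c, i):
    """Total rate received by class-i jobs in state c under FCFS scheduling."""
    total, seen = 0.0, set()
    for cls in c:
        before = mu(seen)
        seen = seen | {cls}
        if cls == i:
            total += mu(seen) - before
    return total

@lru_cache(maxsize=None)
def Phi(x):
    if all(v == 0 for v in x):
        return 1.0
    active = {i for i, v in zip(classes, x) if v > 0}
    return sum(Phi(tuple(v - (c == i) for c, v in zip(classes, x))) for i in active) / mu(active)

x = (2, 1)
orderings = set(permutations([1] * x[0] + [2] * x[1]))
for i in classes:
    balanced = Phi(tuple(v - (c == i) for c, v in zip(classes, x))) / Phi(x)
    averaged = sum(Phi_seq(c) * mu_i(c, i) for c in orderings) / Phi(x)   # Eq. (6)
    print(i, round(balanced, 6), round(averaged, 6))   # class 1: 12/7, class 2: 9/7
```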
As it is, the FCFS service discipline is very sensitive to the job size distribution. [START_REF] Bonald | Balanced fair resource sharing in computer clusters[END_REF] mitigates this sensitivity by frequently interrupting jobs and moving them to the end of the queue, in the same way as round-robin scheduling algorithm in the single-server case. In the queueing model, these interruptions and resumptions are represented approximately by random routing, which leaves the stationary distribution unchanged by quasi-reversibility [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF][START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. If the interruptions are frequent enough, then all jobs of a class tend to receive the same service rate on average, which is that obtained under balanced fairness. In particular, performance becomes approximately insensitive to the job size distribution within each class.
Load balancing
The previous section has considered the problem of resource sharing. We now focus on dynamic load balancing, using the fact that each job may be a priori compatible with several classes and assigned to one of them upon arrival. We first extend the model of §2.1 to add this new degree of flexibility.
Model
We again consider a pool of S servers. There are N job classes and we let I = {1, . . . , N } denote the set of class indices. The compatibilities between job classes and servers are described by a bipartite graph, as explained in §2.1. Additionally, we assume that the arrivals are divided into K types, so that the jobs of each type enter the system according to an independent Poisson process. Job sizes are independent and exponentially distributed with unit mean. Each job leaves the system immediately after service completion.
The type of a job defines the set of classes it can be assigned to. This assignment is performed instantaneously upon the job arrival, according to some decision rule that will be detailed later.
For each i ∈ I, we let K i ⊂ {1, . . . , K} denote the non-empty set of job types that can be assigned to class i. This defines a bipartite compatibility graph between types and classes, where there is an edge between a type and a class if the jobs of this type can be assigned to this class. Overall, the compatibilities are described by a tripartite graph between types, classes, and servers. Figure 1 shows a toy example.
For each k = 1, . . . , K, the arrival rate of type-k jobs in the system is denoted by ν k > 0. We can then define a function ν on the power set of I as follows: for each A ⊂ I,
ν(A) = ∑_{k ∈ ∪_{i∈A} K_i} ν_k
denotes the aggregate arrival rate of the types that can be assigned to at least one class in A. ν satisfies the submodularity, monotonicity and normalization properties satisfied by the function µ of §2.1.
Randomized load balancing
We now express the insensitive load balancing of [START_REF] Bonald | Insensitive load balancing[END_REF] in our new server pool model. This extends the static load balancing considered earlier. Incoming jobs are assigned to classes at random, and the assignment probabilities depend not only on the job type but also on the system state. As in §2.2, we assume that the capacity of each server can be divided continuously between its jobs. The resources are allocated by applying balanced fairness in the capacity set defined by the bipartite compatibility graph between job classes and servers.
Open queueing model We first recall the queueing model considered in [START_REF] Bonald | Insensitive load balancing[END_REF] to describe the randomized load balancing. As in §2.2, jobs are gathered by class in PS queues with statedependent service capacities given by (1). Hence, the type of a job is forgotten once it is assigned to a class.
Similarly, we record the job arrivals depending on the class they are assigned to, regardless of their type before the assignment. The Poisson arrival assumption ensures that, given the system state, the time before the next arrival at each class is exponentially distributed and independent of the arrivals at other classes. The rates of these arrivals result from the load balancing. We write them as functions of the vector y = ℓ − x of numbers of available positions at each class. Specifically, λ i (y) denotes the arrival rate of jobs assigned to class i when there are y j available positions in class j, for each j ∈ I. The system can thus be modeled by a network of N PS queues with state-dependent arrival rates, as shown in Figure 5a.

(Figure 5 — (a) a network of two PS queues with states x 1 = 3, x 2 = 2, service capacities φ 1 (x), φ 2 (x) and arrival rates λ 1 (ℓ − x), λ 2 (ℓ − x); (b) a closed queueing system consisting of two Whittle networks, in which class-1 and class-2 tokens alternate between a network of available tokens (states y 1 = 1, y 2 = 2, service capacities λ 1 (y), λ 2 (y)) and a network of tokens held by jobs in service (states x 1 = 3, x 2 = 2, service capacities φ 1 (x), φ 2 (x)).)
Closed queueing model
We introduce a second queueing model that describes the system dynamics differently. It will later simplify the study of the insensitive load balancing by drawing a parallel with the resource allocation of §2.2.
Our alternative model stems from the following observation: since we impose limits on the number of jobs of each class, we can indifferently assume that the arrivals are limited by the intermediary of buckets containing tokens. Specifically, for each i ∈ I, the assignments to class i are controlled through a bucket filled with i tokens. A job that is assigned to class i removes a token from this bucket and holds it until its service is complete. The assignments to a class are suspended when the bucket of this class is empty, and they are resumed when a token of this class is released.
Each token is either held by a job in service or waiting to be seized by an incoming job. We consider a closed queueing model that reflects this alternation: a first network of N queues contains tokens held by jobs in service, as before, and a second network of N queues contains available tokens. For each i ∈ I, a token of class i alternates between the queues indexed by i in the two networks. This is illustrated in Figure 5b.
The state of the network containing tokens held by jobs in service is x. The queues in this network apply PS service discipline and their service capacities are given by (1). The state of the network containing available tokens is y = ℓ − x. For each i ∈ I, the service of a token at queue i in this network is triggered by the arrival of a job assigned to class i. The service capacity of this queue is thus equal to λ i (y) in state y. Since all tokens of the same class are exchangeable, we can assume indifferently that we pick one of them at random, so that the service discipline of the queue is PS.
Capacity set
The compatibilities between job types and classes restrict the set of feasible load balancings. Specifically, the vector (λ i (y) : i ∈ I) of per-class arrival rates belongs to the following capacity set in any state y ∈ X :
Γ = { λ ∈ R_+^N : ∑_{i∈A} λ i ≤ ν(A), ∀A ⊂ I }.
The properties satisfied by ν guarantee that Γ is a polymatroid.
Balance function Our token-based reformulation allows us to interpret dynamic load balancing as a problem of resource allocation in the network of queues containing available tokens. This will allow us to apply the results of §2.2.
It was shown in [START_REF] Bonald | Insensitive load balancing[END_REF] that the load balancing is insensitive if and only if there is a balance function Λ defined on X such that Λ(0) = 1, and
λ i (y) = Λ(y − e i ) / Λ(y), ∀y ∈ X , ∀i ∈ I(y). (7)
Under this condition, the network of PS queues containing available tokens is a Whittle network. The Pareto-efficiency of balanced fairness in polymatroid capacity sets can be understood as follows in terms of load balancing. We consider the balance function Λ defined recursively on X by Λ(0) = 1 and
Λ(y) = (1/ν(I(y))) ∑_{i∈I(y)} Λ(y − e i ), ∀y ∈ X \ {0}. (8)
Then Λ defines a load balancing that belongs to the capacity set Γ in each state y. By [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF], this load balancing satisfies ∑_{i∈I(y)} λ i (y) = ν(I(y)), ∀y ∈ X , meaning that an incoming job is accepted whenever it is compatible with at least one available token.
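Computationally, this load balancing is obtained exactly like balanced fairness, with ν in place of µ. The sketch below is ours; it uses the type-class compatibilities of Figure 1 with hypothetical arrival rates, and checks that the per-class rates λ_i(y) sum to ν(I(y)) as stated above.

```python
# Sketch: insensitive load balancing rates lambda_i(y) = Lambda(y - e_i) / Lambda(y).
from functools import lru_cache

nu_type = {1: 1.0, 2: 4.0}      # hypothetical type arrival rates nu_k
K = {1: {1}, 2: {1, 2}}         # class -> compatible job types (Figure 1: K_1 = {1}, K_2 = {1, 2})
classes = sorted(K)

def nu(A):
    return sum(nu_type[k] for k in set().union(*(K[i] for i in A))) if A else 0.0

@lru_cache(maxsize=None)
def Lam(y):
    """Balance function of the insensitive load balancing, Eq. (8)."""
    if all(v == 0 for v in y):
        return 1.0
    active = {i for i, v in zip(classes, y) if v > 0}
    return sum(Lam(tuple(v - (c == i) for c, v in zip(classes, y))) for i in active) / nu(active)

def lam(y):
    """Per-class assignment rates when y_i tokens of class i are available."""
    return {i: (Lam(tuple(v - (c == i) for c, v in zip(classes, y))) / Lam(y) if v > 0 else 0.0)
            for i, v in zip(classes, y)}

rates = lam((1, 2))                  # one class-1 token and two class-2 tokens available
print(rates, sum(rates.values()))    # the rates sum to nu({1, 2}) = 5.0
```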
Stationary distribution
The Markov process defined by the system state x is reversible, with stationary distribution
π(x) = 1 G Φ(x)Λ( -x), ∀x ∈ X , (9)
where G is a normalization constant. Note that we could symmetrically give the stationary distribution of the Markov process defined by the vector y = -x of numbers of available tokens. As mentioned earlier, the insensitivity of balanced fairness is preserved by the load balancing.
Deterministic token mechanism
Our closed queueing model reveals that the randomized load balancing is dual to the balanced fair resource allocation. This allows us to propose a new deterministic load balancing algorithm that mirrors the FCFS service discipline of §2.3. This algorithm can be combined indifferently with balanced fairness or with the sequential FCFS scheduling; in both cases, we show that it implements the load balancing defined by [START_REF] Bonald | Insensitive bandwidth sharing in data networks[END_REF]. All available tokens are now sorted in order of release in a single bucket. The longest available tokens are in front. An incoming job scans the bucket from beginning to end and seizes the first compatible token; it is blocked if it does not find any. For now, we assume that the server resources are allocated to the accepted jobs by applying the FCFS service discipline of §2.3. When the service of a job is complete, its token is released and added to the end of the bucket.
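The bucket operations themselves are easy to write down. The following sketch is ours, not code from the paper: it implements only the assignment layer (seize the longest-available compatible token, return the token to the end of the bucket on departure), with the type-class compatibilities of Figure 1 and hypothetical token counts; service completions are assumed to be signalled by the scheduler, and the initial ordering of the bucket is arbitrary.

```python
# Sketch: deterministic token-based load balancing (seize the first compatible token).
from collections import deque

K = {1: {1}, 2: {1, 2}}              # class -> compatible job types (Figure 1)
tokens_per_class = {1: 4, 2: 4}      # hypothetical numbers of tokens l_i

# The bucket holds the available tokens in release order (front = longest available).
bucket = deque(cls for cls in sorted(K) for _ in range(tokens_per_class[cls]))

def arrival(job_type):
    """Assign an incoming job: scan the bucket and seize the first compatible token."""
    for position, cls in enumerate(bucket):
        if job_type in K[cls]:
            del bucket[position]
            return cls               # class to which the job is assigned
    return None                      # blocked: no compatible token is available

def departure(cls):
    """A job assigned to class cls completes: its token joins the end of the bucket."""
    bucket.append(cls)

print(arrival(2), arrival(1), list(bucket))
```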
We describe the system state with a couple (c, t) retaining both the arrival order of jobs and the release order of tokens. Specifically, c = (c 1 , . . . , c n ) ∈ C is the sequence of classes of (tokens held by) jobs in service, as before, and t = (t 1 , . . . , t m ) ∈ C is the sequence of classes of available tokens, ordered by release, so that t 1 is the class of the longest available token. Given the total number of tokens of each class in the system, any feasible state satisfies |c| + |t| = ℓ.
Queueing model Depending on its position in the bucket, each available token is seized by any incoming job whose type is compatible with this token but not with the tokens released earlier. For each p = 1, . . . , m, the token in position p is thus seized at rate
∑_{k ∈ K_{t_p} \ ∪_{q=1}^{p−1} K_{t_q}} ν_k = ν(I(t 1 , . . . , t p )) − ν(I(t 1 , . . . , t p−1 )).
The seizing rate of a token is independent of the tokens released later. Additionally, the total rate at which available tokens are seized is ν(I(y)), independently of their release order. The bucket can thus be modeled by an OI queue, where the service of a token is triggered by the arrival of a job that seizes this token.
The evolution of the sequence of tokens held by jobs in service also defines an OI queue, with the same dynamics as in §2.3. Overall, the system can be modeled by a closed tandem network of two OI queues, as shown in Figure 6.
(Figure 6 — an OI queue of jobs in service, in state c = (1, 1, 2, 1, 2) served at rate µ(I(c)), in tandem with an OI queue of available tokens, in state t = (1, 2, 2) served at rate ν(I(t)).)
Figure 6: A closed tandem network of two OI queues associated with the server pool of Figure 1. At most ℓ 1 = ℓ 2 = 4 jobs can be assigned to each class. The state is (c, t), with c = (1, 1, 2, 1, 2) and t = (1, 2, 2). The corresponding aggregate state is that of the network of Figure 5. An incoming job of type 1 would seize the available token in first position (of class 1), while an incoming job of type 2 would seize the available token in second position (of class 2).
Stationary distribution Assuming S i ≠ S j or K i ≠ K j for each pair {i, j} ⊂ I of classes, the Markov process defined by the detailed state (c, t) is irreducible. The proof is provided in the appendix. Known results on networks of quasi-reversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF] then show that this process is quasi-reversible, with stationary distribution
π(c, t) = (1/G) Φ(c) Λ(t), ∀c, t ∈ C : |c| + |t| = ℓ,
where Φ is defined by the recursion (5) and the initial step Φ(∅) = 1, as in §2.3; similarly, Λ is defined recursively on C by Λ(∅) = 1 and
Λ(t) = (1/ν(I(t))) Λ(t 1 , . . . , t m−1 ), ∀t ∈ C \ {∅}.
We go back to the aggregate state x giving the number of tokens of each class held by jobs in service. With a slight abuse of notation, we define its stationary distribution by
π(x) = ∑_{c:|c|=x} ∑_{t:|t|=ℓ−x} π(c, t), ∀x ∈ X . (10)
As in §2.3, we can show that we have
π(x) = (1/G) Φ(x) Λ(ℓ − x), ∀x ∈ X ,
where the functions Φ and Λ are defined on X by
Φ(x) = ∑_{c:|c|=x} Φ(c) and Λ(y) = ∑_{t:|t|=y} Λ(t), ∀x, y ∈ X ,
respectively. These functions Φ and Λ satisfy the recursions ( 2) and ( 8), respectively, with the initial conditions Φ(0) = Λ(0) = 1. Hence, the aggregate stationary distribution of the system state x is exactly that obtained in §3.2 by combining the randomized load balancing with balanced fairness. Also, using the definition of Λ, we can rewrite (6) as follows: for each x ∈ X and i ∈ I(x),
φ i (x) = ∑_{c:|c|=x} [ (1/G) Φ(c) ∑_{t:|t|=ℓ−x} Λ(t) / ( (1/G) Φ(x) Λ(ℓ − x) ) ] µ i (c) = ∑_{c:|c|=x} ∑_{t:|t|=ℓ−x} (π(c, t)/π(x)) µ i (c).
Hence, the average per-class service rates are still as defined by balanced fairness. By symmetry, it follows that the average per-class arrival rates, ignoring the release order of tokens, are as defined by the randomized load balancing. Specifically, for each y ∈ X and i ∈ I(y), we have
λ i (y) = ∑_{c:|c|=ℓ−y} ∑_{t:|t|=y} (π(c, t)/π(ℓ − y)) ν i (t),
where λ i (y) is the arrival rate of jobs assigned to class i in state y under the randomized load balancing, given by ( 7), and ν i (t) denotes the rate at which the first available token of class i is seized under the deterministic load balancing:
ν i (t) = ∑_{p=1,…,m : t_p=i} ( ν(I(t 1 , . . . , t p )) − ν(I(t 1 , . . . , t p−1 )) ).
As in §2.3, the stationary distribution of the system state is unchanged by the addition of random routing, as long as the average traffic intensity of each class remains constant. Hence we can again reach some approximate insensitivity to the job size distribution within each class by enforcing frequent job interruptions and resumptions.
Application with balanced fairness As announced earlier, we can also combine our token-based load balancing algorithm with balanced fairness. The assignment of jobs to classes is still regulated by a single bucket containing available tokens, sorted in release order, but the resources are now allocated according to balanced fairness. The corresponding queueing model consists of an OI queue and a Whittle network, as represented in Figure 7. The intermediary state (x, t), retaining the release order of available tokens but not the arrival order of jobs, defines a Markov process. Its stationary distribution follows from known results on networks of quasi-reversible queues [START_REF] Kelly | Reversibility and Stochastic Networks[END_REF]:
(Figure 7 — an OI queue of available class-1 and class-2 tokens, in state t = (1, 2, 2) served at rate ν(I(t)), coupled with a Whittle network of tokens held by jobs in service, in states x 1 = 3, x 2 = 2 with service capacities φ 1 (x), φ 2 (x).)
π(x, t) = (1/G) Φ(x) Λ(t), ∀x ∈ X , ∀t ∈ C : x + |t| = ℓ.
We can show as before that the average per-class arrival rates, ignoring the release order of tokens, are as defined by the dynamic load balancing of §3.2.
The insensitivity of balanced fairness to the job size distribution within each class is again preserved. The proof of [START_REF] Bonald | Insensitivity in processor-sharing networks[END_REF] for Cox distributions extends directly. Note that this does not imply that performance is insensitive to the job size distribution within each type. Indeed, if two job types with different size distributions can be assigned to the same class, then the distribution of the job sizes within this class may be correlated to the system state upon their arrival. This point will be assessed by simulation in Section 4.
Observe that our token-based mechanism can be applied to balance the load between the queues of an arbitrary Whittle network, as represented in Figure 7, independently of the system considered. Examples of such systems are given in [START_REF] Bonald | Insensitive load balancing[END_REF].
Numerical results
We finally consider two examples that give insights on the performance of our token-based algorithm. We especially make a comparison with the static load balancing of Section 2 and assess the insensitivity to the job size distribution within each type. We refer the reader to [START_REF] Jonckheere | Asymptotics of insensitive load balancing and blocking phases[END_REF] for a large-scale analysis in homogeneous pools with a single job type, along with a comparison with other (non-insensitive) standard policies.
Performance metrics for Poisson arrival processes and exponentially distributed sizes with unit mean follow from [START_REF] Gardner | Reducing latency via redundant requests: Exact analysis[END_REF]. By insensitivity, these also give the performance when job sizes within each class are i.i.d., as long as the traffic intensity is unchanged. We resort to simulations to evaluate performance when the job size distribution is type-dependent.
Performance is measured by the job blocking probability and the resource occupancy. For each k = 1, . . . , K, we let
β k = (1/G) ∑_{x ≤ ℓ : x_i = ℓ_i , ∀i∈I : k∈K_i} Φ(x) Λ(ℓ − x)
denote the probability that a job of type k is blocked upon arrival. The equality follows from PASTA property [START_REF] Serfozo | Introduction to Stochastic Networks[END_REF]. Symmetrically, for each s = 1, . . . , S, we let
ψ s = (1/G) ∑_{x ≤ ℓ : x_i = 0, ∀i∈I : s∈S_i} Φ(x) Λ(ℓ − x)
denote the probability that server s is idle. These quantities are related by the conservation equation
∑_{k=1}^{K} ν k (1 − β k ) = ∑_{s=1}^{S} µ s (1 − ψ s ). (11)
We define respectively the average blocking probability and the average resource occupancy by
β = ( ∑_{k=1}^{K} ν k β k ) / ( ∑_{k=1}^{K} ν k ) and η = ( ∑_{s=1}^{S} µ s (1 − ψ s ) ) / ( ∑_{s=1}^{S} µ s ).
There is a simple relation between β and η. Indeed, if we let ρ = ( ∑_{k=1}^{K} ν k )/( ∑_{s=1}^{S} µ s ) denote the total load in the system, then we can rewrite (11) as ρ(1 − β) = η.
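These quantities can be computed by direct summation over the truncated state space, reusing the recursions for Φ and Λ. The sketch below is ours and entirely hypothetical in its numerical values (two classes, two types, small token counts); it also prints both sides of the conservation equation (11) as a sanity check.

```python
# Sketch: blocking probabilities and resource occupancy from pi(x) proportional to Phi(x) Lambda(l - x).
from functools import lru_cache
from itertools import product

servers = {1: 1.0, 2: 1.0, 3: 1.0}      # hypothetical capacities
S = {1: {1, 2}, 2: {2, 3}}              # class -> compatible servers
nu_type = {1: 1.0, 2: 4.0}              # hypothetical type arrival rates
K = {1: {1}, 2: {1, 2}}                 # class -> compatible types
limits = {1: 4, 2: 4}                   # tokens per class (l_i)
classes, types = sorted(S), sorted(nu_type)

def mu(A):
    return sum(servers[s] for s in set().union(*(S[i] for i in A))) if A else 0.0

def nu(A):
    return sum(nu_type[k] for k in set().union(*(K[i] for i in A))) if A else 0.0

def balance(rate):
    @lru_cache(maxsize=None)
    def f(x):
        if all(v == 0 for v in x):
            return 1.0
        active = {i for i, v in zip(classes, x) if v > 0}
        return sum(f(tuple(v - (c == i) for c, v in zip(classes, x))) for i in active) / rate(active)
    return f

Phi, Lam = balance(mu), balance(nu)
l = tuple(limits[i] for i in classes)
X = list(product(*(range(li + 1) for li in l)))
w = {x: Phi(x) * Lam(tuple(li - xi for li, xi in zip(l, x))) for x in X}
G = sum(w.values())

# Blocking probability of type k: every class compatible with k is full (PASTA).
beta = {k: sum(w[x] for x in X
               if all(x[classes.index(i)] == limits[i] for i in classes if k in K[i])) / G
        for k in types}
# Probability that server s is idle: no class compatible with s is active.
psi = {s: sum(w[x] for x in X
              if all(x[classes.index(i)] == 0 for i in classes if s in S[i])) / G
       for s in servers}
eta = sum(servers[s] * (1 - psi[s]) for s in servers) / sum(servers.values())

print(beta, eta)
print(sum(nu_type[k] * (1 - beta[k]) for k in types),     # left-hand side of (11)
      sum(servers[s] * (1 - psi[s]) for s in servers))    # right-hand side of (11)
```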
As expected, minimizing the average blocking probability is equivalent to maximizing the average resource occupancy. It is however convenient to look at both metrics in parallel. As we will see, when the system is underloaded, jobs are almost never blocked and it is easier to describe the (almost linear) evolution of the resource occupancy. On the contrary, when the system is overloaded, resources tend to be maximally occupied and it is more interesting to focus on the blocking probability.
Observe that any stable server pool satisfies the conservation equation (11). In particular, the average blocking probability β in a stable system cannot be less than 1 − 1/ρ when ρ > 1. A similar argument applied to each job type imposes that
β k ≥ max( 0, 1 − (1/ν k ) ∑_{s ∈ ∪_{i : k∈K_i} S_i} µ s ), (12)
for each k = 1, . . . , K.
A single job type
We first consider a pool of S = 10 servers with a single type of jobs (K = 1), as shown in Figure 8. Each class identifies a unique server and each job can be assigned to any class. Half of the servers have a unit capacity µ and the other half have capacity 4µ. Each server has ℓ = 6 tokens and applies PS policy to its jobs. We do not look at the insensitivity to the job size distribution in this case, as there is a single job type.
(Figure 8 — jobs arriving at rate ν, routed to servers with capacity µ and servers with capacity 4µ.)
Figure 8: A server pool with a single job type. Classes are omitted because each of them corresponds to a single server.
Comparison We compare the performance of our algorithm with that of the static load balancing of Section 2, where each job is assigned to a server at random, independently of the system state, and blocked if its assigned server is already full. We consider two variants, best static and uniform static, where the assignment probabilities are proportional to the server capacities and uniform, respectively. Ideal refers to the lowest average blocking probability that complies with the system stability. According to (11), it is 0 when ρ ≤ 1 and 1 − 1/ρ when ρ > 1. One can think of it as the performance in an ideal server pool where resources would be constantly optimally utilized. The results are shown in Figure 9. The performance gain of our algorithm compared to the static policies is maximal near the critical load ρ = 1, which is also the area where the delta with ideal is maximal. Elsewhere, all load balancing policies have a comparable performance. Our intuition is as follows: when the system is underloaded, servers are often available and the blocking probability is low anyway; when the system is overloaded, resources are congested and the blocking probability is high whichever scheme is utilized. Observe that the performance under uniform static deteriorates faster, even when ρ < 1, because the servers with the lowest capacity, concentrating half of the arrivals with only 1/5-th of the service capacity, are congested whenever ρ > 2/5.
Asymptotics when the number of tokens increases We now focus on the impact of the number of tokens on the performance of the dynamic load balancing. A direct calculation shows that the average blocking probability decreases with the number of tokens per server, and tends to ideal as ℓ → +∞. Intuitively, having many tokens gives a long run feedback on the server loads without blocking arrivals more than necessary (to preserve stability). The results are shown in Figure 10. We observe that the convergence to the asymptotic ideal is quite fast. The largest gain is obtained with small values of ℓ and the performance is already close to the optimal with ℓ = 10 tokens per server. Hence, we can reach a low blocking probability even when the number of tokens is limited, for instance to guarantee a minimum service rate per job or respect multitasking constraints on the servers.
Several job types
We now consider a pool of S = 6 servers, all with the same unit capacity µ, as shown in Figure 11. As before, there is no parallel processing. Each class identifies a unique server that applies PS policy to its jobs and has ℓ = 6 tokens. There are two job types with different arrival rates and compatibilities. Type-1 jobs have a unit arrival rate ν and can be assigned to any of the first four servers. Type-2 jobs arrive at rate 4ν and can be assigned to any of the last four servers. Thus only two servers can be accessed by both types. Note that heterogeneity now lies in the job arrival rates and not in the server capacities.
Figure 11: A server pool with two job types (arrival rates ν and 4ν; six servers with unit capacity µ).
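A minimal encoding of this configuration, with hypothetical names, could look as follows; only the compatibility sets matter for the discussion below.

```python
# Pool of Figure 11 (hypothetical encoding): 6 unit-capacity servers, one class
# per server, ell = 6 tokens each; only the compatibility sets differ per type.
POOL = {
    "capacities": [1.0] * 6,
    "tokens_per_server": 6,
    "compatibility": {
        1: {1, 2, 3, 4},   # type-1 jobs, arrival rate nu
        2: {3, 4, 5, 6},   # type-2 jobs, arrival rate 4 * nu
    },
}
shared = POOL["compatibility"][1] & POOL["compatibility"][2]
print(sorted(shared))   # [3, 4]: the only two servers reachable by both types
```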
Comparison We again consider two variants of the static load balancing: best static, in which the assignment probabilities are chosen so as to homogenize the arrival rates at the servers as far as possible, and uniform static, in which the assignment probabilities are uniform. Note that best static assumes that the arrival rates of the job types are known, while uniform static does not. As before, ideal refers to the lowest average blocking probability that complies with the system stability. The results are shown in Figure 12.
Regardless of the policy, the slope of the resource occupancy breaks down near the critical load ρ = 5/6. The reason is that the last four servers support at least 4/5 of the arrivals with only 2/3 of the service capacity, so that their effective load is (6/5)ρ. It follows from (12) that the average blocking probability in a stable system cannot be less than (4/5)(1 - (5/6)/ρ) when ρ ≥ 5/6. Under ideal, the slope of the resource occupancy breaks down again at ρ = 5/3. This is the point where the first two servers cannot support the load of type-1 jobs by themselves anymore.
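The two bounds stated here are easy to evaluate numerically. The sketch below implements only these bounds (assuming ρ denotes the overall load, as in the previous example); it does not compute the exact ideal curve of Figure 12.

```python
def blocking_lower_bound(rho):
    """Lower bounds on the average blocking for the pool of Figure 11: overall
    stability gives 1 - 1/rho, and the last four servers (at least 4/5 of the
    arrivals, 2/3 of the capacity) give (4/5) * (1 - (5/6)/rho)."""
    overall = 1.0 - 1.0 / rho
    last_four = 0.8 * (1.0 - (5.0 / 6.0) / rho)
    return max(0.0, overall, last_four)

for rho in (0.5, 5 / 6, 1.0, 5 / 3, 2.0):
    print(round(rho, 3), round(blocking_lower_bound(rho), 4))
```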
Otherwise, most of the observations of §4.1 are still valid. The performance gain of the dynamic load balancing compared to best static is maximal near the first critical load ρ = 5/6. Its gap with ideal is maximal near ρ = 5/6 and ρ = 5/3. Elsewhere, all schemes have a similar performance, except for uniform static, which deteriorates faster.
Overall, these numerical results show that our dynamic load balancing algorithm often outperforms best static and is close to ideal. The configurations (not shown here) where this was not the case involved very small pools, with job arrival rates and compatibilities opposite to the server capacities. Our intuition is that our algorithm performs better when the pool size or the number of tokens allows for some diversity in the assignments.
(In)sensitivity We finally evaluate the sensitivity of our algorithm to the job size distribution within each type. Figure 13 shows the results. Lines give the performance when job sizes are exponentially distributed with unit mean, as before. Marks, obtained by simulation, give the performance when the job size distribution within each type is hyperexponential: 1/3 of type-1 jobs have an exponentially distributed size with mean 2 and the other 2/3 have an exponentially distributed size with mean 1/2; similarly, 1/6 of type-2 jobs have an exponentially distributed size with mean 5 and the other 5/6 have an exponentially distributed size with mean 1/5. The similarity of the exact and simulation results suggests that insensitivity is preserved even when the job size distribution is type-dependent. Further evaluations, involving other job size distributions, would be necessary to conclude.
Also observe that the blocking probability of type-1 jobs increases near the load ρ = 5/3, which is half the upper bound ρ = 10/3 given by [START_REF] Krzesinski | Order independent queues[END_REF]. This suggests that the dynamic load balancing compensates for the overload caused by type-2 jobs by rejecting more jobs of type 1.
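For reference, a sampler for the type-dependent hyperexponential sizes used for the simulation marks of Figure 13 could be written as follows; both mixtures have unit mean, as in the text, and the function name is ours.

```python
import random

def job_size(job_type, rng=random):
    """Type-dependent hyperexponential size; mixture parameters from the text, mean 1 for both types."""
    if job_type == 1:
        mean = 2.0 if rng.random() < 1 / 3 else 0.5
    else:
        mean = 5.0 if rng.random() < 1 / 6 else 0.2
    return rng.expovariate(1.0 / mean)

for t in (1, 2):
    samples = [job_size(t) for _ in range(200_000)]
    print(t, round(sum(samples) / len(samples), 3))   # both close to 1.0
```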
Conclusion
We have introduced a new server pool model that explicitly distinguishes the compatibilities of a job from its actual assignment by the load balancer. Expressing the results of [START_REF] Bonald | Insensitive load balancing[END_REF] in this new model has allowed us to see the problem of load balancing in a new light. We have derived a deterministic, token-based implementation of a dynamic load balancing policy that preserves the insensitivity of balanced fairness to the job size distribution within each class. Numerical results have assessed the performance of this algorithm.
In future work, we would like to evaluate the performance of our algorithm in broader classes of server pools. We are also interested in proving its insensitivity to the job size distribution within each type.
Figure 14: A technically interesting toy configuration (two job types with arrival rates ν1 and ν2; three servers with capacities µ1, µ2, µ3). We have K2 = K3 and S3 ⊂ S2, so that class-2 tokens can overtake class-3 tokens in the queue of tokens held by jobs in service but not in the queue of available tokens. On the other hand, K1 ⊂ K2 and S1 = S2, so that class-2 tokens can overtake class-1 tokens in the queue of available tokens but not in the queue of tokens held by jobs in service. In none of the queues can class-2 tokens overtake tokens of classes 1 and 3 at once.
It is tempting to consider more sophisticated transitions, for instance where a token overtakes several other tokens at once. Unfortunately, our assumptions do not guarantee that such transitions can occur with a nonzero probability. An example is shown in Figure 14. The two operations circular shift and overtaking will prove to be sufficient. We first combine them to show the following intermediate result:
• From any feasible state, we can reach the state where all class-N tokens are gathered at some selected position in one of the two queues while the position of the other tokens is unchanged.
We finally prove the irreducibility result by induction on the number N of classes. As announced, the proof is constructive: it gives a series of transitions leading from any state to any other state. The induction step can be decomposed into two parts:
• By repeatedly moving class-N tokens to a position where they do not prevent other tokens from overtaking each other, we can order the tokens of classes 1 to N - 1 as if class-N tokens were absent. The induction assumption ensures that we can perform this reordering.
• Once the tokens of classes 1 to N -1 are well ordered, class-N tokens can be positioned among them.
We now detail the steps of the proof one after the other.
Circular shift Because of the positive service rate assumption, a token at the head of either of the two queues has a nonzero probability of completing service and moving to the end of the other queue. We refer to such a transition as a circular shift. Now let (c, t) ∈ S and (c′, t′) ∈ S, with c = (c1, . . . , cn), t = (t1, . . . , tm), c′ = (c′1, . . . , c′n′) and t′ = (t′1, . . . , t′m′). Assume that the sequence (c′1, . . . , c′n′, t′1, . . . , t′m′) is a circular shift of the sequence (c1, . . . , cn, t1, . . . , tm). Then we can reach state (c′, t′) from state (c, t) by applying as many circular shifts as necessary. An example is shown in Figure 15 for the configuration of Figure 14. All states that are circular shifts of each other can therefore communicate.
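As an illustration, a state can be encoded as a pair of token queues and a circular shift as the move of a head-of-queue token to the tail of the other queue; the encoding and the token classes below are chosen arbitrarily.

```python
from collections import deque

def circular_shift(state, queue_index):
    """The token at the head of queue `queue_index` completes service and joins the
    tail of the other queue. A state is a pair of queues of token class labels."""
    q_from = deque(state[queue_index])
    q_to = deque(state[1 - queue_index])
    q_to.append(q_from.popleft())
    return (q_from, q_to) if queue_index == 0 else (q_to, q_from)

# Token classes chosen arbitrarily for illustration.
state = (deque([1, 2, 3]), deque([4, 2, 3, 4]))
print(circular_shift(state, 0))   # (deque([2, 3]), deque([4, 2, 3, 4, 1]))
```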
Overtaking We say that a token in second position of one of the two queues overtakes its predecessor if it completes service first. Such a transition allows us to exchange the positions of these two tokens, therefore escaping circular shifts to access other states. Can such a transition occur with a nonzero probability? It depends on the classes of the tokens in second and first positions, denoted by i and j respectively. The token in second position can overtake its predecessor if it receives a nonzero service rate. In the queue of tokens held by jobs in service, this means that there is at least one server that can process class-i jobs but not class-j jobs, that is S i ⊄ S j. In the queue of available tokens, this means that there is at least one job type that can seize class-i tokens but not class-j tokens, that is K i ⊄ K j. Since states that are circular shifts of each other can communicate, the queue where the overtaking actually occurs does not matter.
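These two conditions are simple set tests. The sketch below checks them for the toy configuration of Figure 14, using the compatibility sets given in the caption of Figure 16; the function and variable names are ours.

```python
# Per-class compatibility sets of the toy configuration of Figure 14
# (values taken from the caption of Figure 16).
TYPE_SETS = {1: {1}, 2: {1, 2}, 3: {1, 2}, 4: {2}}     # K_i: job types that can seize class-i tokens
SERVER_SETS = {1: {1, 2}, 2: {1, 2}, 3: {2}, 4: {3}}   # S_i: servers that can process class-i jobs

def can_overtake(i, j):
    """Class-i token directly behind a class-j token: overtaking is possible in the
    queue of tokens held by jobs in service iff S_i is not a subset of S_j, and in
    the queue of available tokens iff K_i is not a subset of K_j."""
    return {
        "service queue": not SERVER_SETS[i] <= SERVER_SETS[j],
        "available-token queue": not TYPE_SETS[i] <= TYPE_SETS[j],
    }

print(can_overtake(2, 3))  # {'service queue': True, 'available-token queue': False}
print(can_overtake(2, 1))  # {'service queue': False, 'available-token queue': True}
```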
The separability assumption ensures that, for each pair of classes, the tokens of at least one of the two classes can overtake the tokens of the other class, in at least one of the two queues. We now show a stronger result: by reindexing classes if necessary, we can work on the assumption that class-i tokens can overtake the tokens of classes 1 to i -1 in at least one of the two queues (possibly not the same), for each i = 2, . . . , N .
We first use the inclusion relation on the power set of {1, . . . , K} to order the type sets K i for i ∈ I. Specifically, we consider a topological ordering of these sets induced by their Hasse diagram, so that a given type set is not a subset of any type set with a lower index. An example is shown in Figure 16a. The tokens of a class with a given type set can thus overtake (in the first queue) the tokens of all classes with a lower type set index. Only classes with the same type set are not dissociated.
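One concrete way to produce such an ordering is to sort the classes by the cardinality of their type sets, since a strict subset always has strictly fewer elements; this is only one valid linear extension of the inclusion order, and ties (identical type sets) are broken by the server-set order described next.

```python
def topological_order(sets_by_class):
    """One valid linear extension of the inclusion order: sort classes by the size of
    their sets, so that no set is a strict subset of a set with a lower index."""
    return sorted(sets_by_class, key=lambda i: (len(sets_by_class[i]), sorted(sets_by_class[i])))

TYPE_SETS = {1: {1}, 2: {1, 2}, 3: {1, 2}, 4: {2}}   # values from the caption of Figure 16a
print(topological_order(TYPE_SETS))   # [1, 4, 2, 3], matching the ordering (K1, K4, K2, K3)
```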
Symmetrically, we use the inclusion relation on the power set of {1, . . . , S} to order the server sets S i for i ∈ I. We consider a topological ordering of these sets induced by their Hasse diagram, so that a given server set is not a subset of any server set with a lower index, as illustrated in Figure 16b. The tokens of a class with a given server set can thus overtake (in the second queue) the tokens of all classes with a lower server set index. Thanks to the separability assumption, if two classes are not dissociated by their type sets, then they are dissociated by their server sets. This allows us to define a permutation of the classes as follows: first, we order classes by increasing type set order, and then, we order the classes that have the same type set by increasing server set order. The separability assumption ensures that all classes are eventually sorted. The tokens of a given class can overtake the tokens of all classes with a lower index, either in the queue of available tokens or in the queue of tokens held by jobs in service (or both). Moving class-N tokens Using the two operations circular shift and overtaking, we show that, from any given state, we can reach the state where all class-N tokens are gathered at some selected position in one of the two queues, while the position of the other tokens is unchanged. We proceed by moving class-N tokens one after the other, starting with the token that is closest to the destination (in number of tokens to overtake) and finishing with the one that is furthest. Consider the class-N token that is closest to the destination but not well positioned yet (if any). This token can move to the destination by overtaking its predecessors one after the other. Indeed, the token that precedes our class-N token has a class between 1 and N -1, so that our class-N token can overtake it in (at least) one of the two queues. By applying many circular shifts if necessary, we can reach the state where this overtaking can occur. Once this state is reached, our class-N token can then overtake its predecessor, therefore arriving one step closer to the destination. We reiterate this operation until our class-N token is well positioned.
For example, consider the state of Figure 15a and assume that we want to move all tokens of class 2 between the two tokens of classes 1 and 3 that are closest to each other. One of the class-2 tokens is already in the correct position. Let us consider the next class-2 token, initially positioned between tokens of classes 3 and 4. We first apply circular shifts to reach the state depicted in Figure 15b. In this state, there is a nonzero probability that our class-2 token overtakes the class-3 token, which would bring our class-2 token directly in the correct position.
Proof by induction We finally prove the stated irreducibility result by induction on the number N of classes. For N = 1, applying circular shifts is enough to show the irreducibility because all tokens are exchangeable. We now give the induction step.
Let N > 1. Assume that the Markov process defined by the state of any tandem network with N - 1 classes that satisfies the positive service rate and separability assumptions is irreducible. Now consider a tandem network with N classes that also satisfies these assumptions. We have shown that, starting from any feasible state, we can move class-N tokens to a position where they do not prevent other tokens from overtaking each other. In particular, to reach a state from another one, we can first focus on ordering the tokens of classes 1 to N - 1, as if class-N tokens were absent. This is equivalent to ordering tokens in a tandem network with N - 1 classes that satisfies the positive service rate and separability assumptions. This reordering is feasible by the induction assumption. Once it is performed, we can move class-N tokens into a correct position, by applying the same type of transitions as in the previous paragraph.
Figure 3: An open Whittle network of N = 2 queues associated with the server pool of Figure 2.
Figure 4: An OI queue with N = 2 job classes associated with the server pool of Figure 2. The job of class 1 at the head of the queue is in service on servers 1 and 2. The third job, of class 2, is in service on server 3. Aggregating the state c yields the state x of the Whittle network of Figure 3.
Figure 5: Alternative representations of a Whittle network associated with the server pool of Figure 1; (a) an open Whittle network with state-dependent arrival rates. At most ℓ1 = ℓ2 = 4 jobs can be assigned to each class.
Figure 7: A closed queueing system, consisting of an OI queue and a Whittle network, associated with the server pool of Figure 1. At most ℓ1 = ℓ2 = 4 jobs can be assigned to each class.
Figure 9: Performance of the dynamic load balancing in the pool of Figure 8. Average blocking probability (bottom plot) and resource occupancy (top plot).
Figure 10: Impact of the number of tokens on the average blocking probability under the dynamic load balancing in the pool of Figure 8.
Figure 12: Performance of the dynamic load balancing in the pool of Figure 11. Average blocking probability (bottom plot) and resource occupancy (top plot).
Figure 13: Blocking probability under the dynamic load balancing in the server pool of Figure 11, with either exponentially distributed job sizes (line plots) or hyperexponentially distributed sizes (marks). Each simulation point is the average of 100 independent runs, each built up of 10^6 jumps after a warm-up period of 10^6 jumps. The corresponding 95% confidence interval, not shown on the figure, does not exceed ±0.001 around the point.
Figure 15: Circular shift. Sequence of transitions to reach state (b) from state (a): all tokens complete service in the first queue; all tokens before that of class 3 complete service in the second queue; the first two tokens complete service in the first queue.
Figure 16: A possible ordering of the classes of Figure 14 is 1, 4, 3, 2. (a) Hasse diagram of the type sets; (K1 = {1}, K4 = {2}, K2 = K3 = {1, 2}) is a possible topological ordering. (b) Hasse diagram of the server sets; (S3 = {2}, S4 = {3}, S1 = S2 = {1, 2}) is a possible topological ordering.
Appendix: Proof of the irreducibility
We prove the irreducibility of the Markov process defined by the state (c, t) of a tandem network of two OI queues, as described in §3.3. Throughout the proof, we will simply refer to such a network as a tandem network, implicitly meaning that it is as described in §3.3.
Assumptions
We first recall and name the two main assumptions that we use in the proof.
• Positive service rate. For each i ∈ I, K i ≠ ∅ and S i ≠ ∅.
• Separability. For each pair {i, j} ⊂ I, either S i ≠ S j or K i ≠ K j (or both).
Result statement
The Markov process defined by the state of the tandem network is irreducible on the state space S = {(c, t) ∈ C² : |c| + |t| = ℓ} comprising all states with ℓ i tokens of class i, for each i ∈ I.
Outline of the proof We provide a constructive proof that exhibits a series of transitions leading from any feasible state to any feasible state with a nonzero probability. We first describe two types of transitions and specify the states where they can occur with a nonzero probability.
• Circular shift: service completion of a token at the head of a queue. This transition is always possible thanks to the positive service rate assumption. Consequently, states that are circular shifts of each other can communicate. We will therefore focus on ordering tokens relative to each other, keeping in mind that we can eventually apply circular shifts to move them in the correct queue.
• Overtaking: service completion of a token that is in second position of a queue, before its predecessor completes service. Such a transition has the effect of swapping the order of these two tokens. By reindexing classes if necessary, we can work on the assumption that class-i tokens can overtake the tokens of classes 1 to i -1 in (at least) one of the two queues, for each i = 2, . . . , N . The proof of this statement relies on the separability assumption. |
01761224 | en | [
"shs.litt"
] | 2024/03/05 22:32:13 | 1990 | https://hal.science/cel-01761224/file/FITZGERALD.pdf | Paul F S Carmignani
Fitzgerald
TENDER IS THE NIGHT F S Fitzgerald
des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
The stay at Princeton reinforced his sense of being "déclassé": "I was one of the poorest boys in a rich boys'school". Life on the campus followed the same pattern as before: little academic work, a lot of extracurricular activities (including petting and necking!) and a steady output of librettos for musicals and various literary material including the 1st draft of This Side of Paradise, his 1 st novel. F. suffered at Princeton one of the great blows of his life: his failure to become a member of the most prestigious clubs: "I was always trying to be one of them! [...] I wanted to belong to what every other bastard belonged to: the greatest club in history. And I was barred from that too" ("The Triangle Club"). If F. was foiled in his social aims, he was more successful in his cultural ambitions: he became acquainted with J. Peale Bishop E. Wilson, "his intellectual conscience", and Father Fay, a man of taste and cultivation a Catholic with a catholic mind who exerted a great intellectual influence on F. (possible connection with Diver's priestlike attitude in Tender; I'll revert to that point) and read the great classics. He met Ginevra King, a beautiful and wealthy girl from Chicago it was a romantic love-affair but F. was turned down by her parents who had better hopes for their daughter. Nevertheless, F. was to make her the ideal girl of his generation to the point of having all her letters bound in a volume. In 1917, he left Princeton: "I retired not on my profits, but on my liabilities and crept home to St Paul to finish a novel". In the words of a critic, K. Eble: "The Princeton failure was one of the number of experiences which helped create the pattern to be found in F's fiction: success coming out of abysmal failure or failure following hard upon success". On leaving P., he enlisted in the Army, was commissioned 2 nd Lieutenant but never went overseas and never had any combat experience: 2 nd juvenile regret war heroism (connection with the novel: visit to battlefields under the guidance of Dick, who behaves as if he had been there, pp. 67-69 and the character of "Barban", the thunderbolt of war. See also p. 215 "Tommy B. was a ruler, Tommy was a hero"). The same year, he met Zelda Sayre, a Southern Belle, but F. came up against the same opposition as before; he did not have enough money to win the girl's parents'approval; so he went to New York in search of a job for Zelda was not overanxious to "marry into a life of poverty and struggle". In 1919, F. worked in an advertising agency; quit to rewrite novel and staked all his hopes on its publication: "I have so many things dependent upon its successincluding of course a girl not that I expect it to make me a fortune but it will have a psychological effect on me and all my surroundings". An amusing detail: at the time of publication, telegrams to Zelda consisted largely of reports of book-sales. Married Zelda in 1920 F's early success (This Side of Paradise) gave him the things his romantic self most admired: fame, money and a glamourous girl.
THE GROWTH OF THE LEGEND
With the publication of This Side of Paradise (sold 40,000 copies in less than a year), F. had become a hero to his generation, "a kind of king of our American youth"; Zelda was seen as "a barbarian princess from the South"; both made a strong impression on people and caused a stir wherever they went. The Fitzgeralds, as a critic put it, "were two kinds of brilliance unable to outdazzle the other; two innocents living by the infinite promise of American advertising, two restless temperaments craving both excitement and repose". (Obvious parallel between this description and the Diver couple). Z. played a prominent part in F's life. She admired her husband, but was also jealous of his literary success:
"She almost certainly served to accentuate his tendency to waste himself in fruitless endeavours; she stimulated a certain fatuity of attitude in him and accentuated the split between a taste for popular success and the obligations imposed upon him by his literary talent. […] she speeded up the pace of his existence and the rhythm of his activity" (Perosa).
The kind of life they led meant making money hand over fist, so F. set to work, all the more earnestly as he was convinced that money in large quantities was the proper reward of virtue. He averaged $16,000 to 17,000 a year during his career as a writer but he was in constant debt and had to write himself out of his financial plight, which led to the composition of numberless short pieces of little or no value. Made 1st trip to Europe in 1920; met J. Joyce in Paris; disappointment. Back to St Paul for the birth of Frances Scott F. in 1921. The next year was marked by a burst of creativity: published The Beautiful and Damned, Tales of the Jazz Age, and The Vegetable, a play which was a failure.
1923: made the acquaintance of Ring Lardner, the model for Abe North and the symbol for the man of genius dissipating his energy and wasting his talent.
1924: With the illusion of reducing expenses the family went to live in France: trips to St Raphaël, Rome, Capri and Paris. F. finished Gatsby but his serenity was marred by Z's restlessness and by her affair with a French aviator, Edouard Josanne. F. was deeply hurt: "I knew that something had happened that could never be repaired" (Obvious parallel with Tender).
1925: Summer in Paris, described as "one of 1,000 parties and no work". Meets E. Hemingway: a model (cf. "Beware of Ernest", written when F. was influenced by his stylistic devices) and also "I talk with the authority of failure, Ernest with the authority of success". F. began to be drunk for periods of a week or ten days: "Parties are a form of suicide. I love them, but the old Catholic in me secretly disapproves." Began Tender... 1926: 3rd collection of stories, All the Sad Young Men; the title betrays a different attitude to his former subject matter.
1927: Stay in Hollywood; gathered new material in preparation for the chapter of Rosemary Hoyt. Failure: his script was rejected. Left for Delaware. Meanwhile Z. was getting more and more restless and showing signs of hysteria (as a friend of theirs put it: "Z. was a case not a person"). Z. gets dancing obsession and wants to become a ballet dancer.
1928: New trip abroad to Paris for Z. to take dancing lessons. 1929: 2nd long stay; F. continues to drink heavily, creates disturbances and disappoints his friends. Was even arrested once.
1930. Trip to Montreux to have Z. examined: diagnosis of schizophrenia. F. remembered with special horror that "Christmas in Switzerland". Z. suffers from nervous eczema (like the woman painter in the lunatic asylum in Tender, see p. 201-2). While in hospital, Z. composed in six weeks Save me the Waltz, a thinly fictionalized account of the Fs' lives; F. revised the manuscript and made important changes when some allusions were too obvious; he changed the original name of the hero (Amory Blaine, cf. This Side...) to David Knight. Hence, a possible connection with the title of the novel he was at the time working on: Tender is the Knight.
1931: Death of F's father (see corresponding episode in Tender: F used in Tender an unpublished paper that he wrote on the death of his father). In September, F. returned permanently to the USA; to Hollywood for Metro-Goldwyn-Mayer.
1933: Tender is completed. Z's 3rd collapse. Confined to a sanitarium and was to be in and out of similar institutions through the rest of her life. F. tried to commit suicide; his mental state contributed to making Z's recovery more difficult, but as Mizener puts it "at bottom Z's trouble was something that was happening to them both".
1934: April 12th, publication of Tender, which had a limited success while F's private life was getting more and more pathetic and lonely: "the dream of the writer was wrecked and so was his dream of eternal love". 1935-37: Period of "Crack Up", increasing alcoholism and physical illnesses. F., having apparently exhausted all literary material, turned to introspection and laid his heart bare; defined exactly his own "malaise" and analyzed his own decay.
June 1937: signed contract with MGM at $1,000 a week. Met Sheilah Graham, and this affair had the effect of putting a little order into his life and of enabling him to work on The Last Tycoon. So, "the last three years of his life were marked by a reawakening of his creative forces and by his last desperate struggle with the specter of decadence. He worked with enthusiasm at various film scripts, minimized his social life and succeeded in abstaining from alcohol" (Perosa).
November 1940: 1st heart attack; December 21st: 2nd and fatal one. Death. His books were proscribed and, as he had not died a good Catholic, the Bishop refused to allow him to be buried in hallowed ground.
1948: March 10th, Z. dies in a fire at Highland Sanitarium.
Difficult to find an appropriate conclusion after going over such a dramatic and tragic life; the most striking characteristic, however, is "a sharply divided life": F. was "a man of self-discipline at war with the man of self-indulgence" (Lehan), a situation made even more complex by the rôle Z. played in his life.
She hindered his ambitions because, he said, "she wanted to work too much for her and not enough for my dream". "[...] I had spent most of my ressources, spiritual and material, on her, but I struggled on for 5 years till my health collapsed, and all I cared about was drink and forgetting" (which points to the very theme of "expenditure" explored in Tender). A final ironical touch, however, to this portrait: F's life testifies to the truth of L. Fiedler's opinion that "Among us, nothing succeeds like failure" since F. is now considered as one of the literary greats and sales of his books have averaged 500,000 copies a year since he died in 1941.
SOCIAL AND HISTORICAL BACKGROUND
F. often assumed the rôle of "a commenter on the American scene". Well, what was the American scene like during the period depicted by Tender? The novel spans the period known as "the Jazz Age" i.e. from the riots on May Day, 1919, to the crash of the stock market in 1929. That period was marked by the impact of W.W.I, a conflict which ushered in a new era in the history of the USA. Until the outbreak of the war, the USA was a provincial nation with a naive and somewhat parochial outlook; the conflict changed all that: la guerre apparaît soudain comme le viol d'une conscience collective exceptionnellement paisible, satisfaite et pour tout dire innocente... Pour tous les jeunes Américains, ce fut un appel individuel à l'expérience. Avril 1917 sonne le glas d'une certaine innocence typiquement américaine [...] l'Amérique rentrait dans l'Histoire. (M. Gresset)
The typical representative of that post-war period was "the lost generation" (a phrase coined by G. Stein), a war-wounded generation hurt in its nerves and bereft of all illusions concerning society and human nature after witnessing the atrocities committed on European battlefields. It was also suspicious of the spurious values of a society that had turned into "Vanity Fair". The Jazz Age was also a time of great upheaval in social mores: dans la société déboussolée et faussement libérée des années vingt, [la sexualité] commença à devenir une manière de prurit endémique, une obsession nourrie d'impuissance et de stérilité, ce que D.H. Lawrence allait appeler "le sexe dans la tête". [...] Les rôles traditionnels tendent à s'invertir: les hommes se féminisent, les femmes se masculinisent et les jeunes filles sont des garçonnes. On voit poindre là une "crise des différences" déjà manifeste dans la littérature du XIX e siècle, et dont témoigne, notamment dans le roman français (Balzac, Stendhal, Gautier, Flaubert) la multiplication des personnages masculins dévirilisés et des personnages féminins virils, sans parler des cas-limites androgynes et castrats dans le bas-romantisme et la littérature de la Décadence" (A. Bleikasten).
As a historian put it: "soft femininity was out and figures were becoming almost boyish" and said another "the quest of slenderness, the vogue of short skirts, the juvenile effect of the long waist.
[…] all were signs […] that the woman of this decade worshiped not merely youth but unripened youth". Two typical figures of the period were the "flapper" (la garçonne) and the "sheik" (the young lady-killer, Rudolf Valentino style). All these elements went into the composition of Tender which, as we shall see, was aptly defined as "a portrait of society's disintegration" in the troubled post-war years.
LITERARY BACKGROUND
A short examination of F's literary consciousness and evolution as a writer. A just evaluation, both aesthetic and critical of the F. case is all the more difficult as he never attempted any comprehensive definition of his art and technique; apart from a few observations in his private correspondence, we have precious little to go on. His beginnings as a writer were marked by the dominant influence of such realistic or even naturalistic authors as H. G. Wells and Th. Dreiser or Frank Norris who insisted on a life-like and documentary representation of reality. But pretty soon, F. shifted his allegiance and acknowledged a debt to H. James, and J Conrad. When James's The Art of the Novel was published in 1934 F. immediately read the book; in this collection of prefaces, James put forward his arguments in favour of the "selective" type of novel as opposed to the "discursive" type advocated by Wells. This controversy dominated the literary stage in the early XX th century; Wells wanted to "have all life within the scope of the novel" and maintained that nothing was "irrelevant" in a novel i.e the novel was to be well-documented; he insisted that characterization rather than action should be the center of the novel, and claimed the author had a right to be "intrusive" since the novel was a vehicle for problem discussion. James on the contrary proposed "selection" as the preferred alternative to "saturation", the extract from life as a substitute for the slice of life. For him the true test of the artist was his tact of omission; moreover the novelist must have a "centre of interest", a "controlling idea" or a "pointed intention". This is of course a gross oversimplification of what was at stake in this controversy, but it enables us to place F. within the wider context of a particular literary tradition. Thus F. took sides with the advocates of "the novel démeublé " (as Willa Cather put it) i.e. presenting a scene by suggestion rather than by enumeration: "Whatever is felt upon the page without being specifically named there-that, one might say, is created" (W. Cather). Another preface played a decisive rôle in F.'s evolution as a writer: Conrad's preface to The Nigger of the Narcissus: "I keep hinking of Conrad's Nigger of the Narcissus' preface and I believe that the important thing about a work of fiction is that the essential reaction shall be profound and enduring". (F. in a letter). From Conrad F. borrowed several principles and devices: first of all he subscribed to Conrad's definition of the function of the artist which is "by the power of the written word to make you hear, to make you feel [...] to make you see". F. also learnt from Conrad that "the purpose of a work of fiction is to appeal to the lingering after-effects in the reader's mind". He also adopted the Conradian motif of "the dying fall" i.e. "la fin en lent decrescendo", which, in contrast to the dramatic ending, is a gradual letting down or tapering off, and the device known as "chronological muddlement" i.e.
arranging narratives not as a chronological sequence of events, but as a series of gradual discoveries made by the narrator. Instead of going straight forward, from beginning to end, in order to gradually disclose the true nature of a particular character, a novel should first get the character in "with a strong impression, and then work backwards and forwards over his past" (F. Madox Ford). This has farreaching implications: when a novel has a straight unbroken narrative order, it usually means that the author and his readers share a certain confidence about the nature of moral and material reality. Their narrative world is orderly: chaos is elsewhere and unthreatening. But when we get books in which the narrative order has broken up, melted and regrouped into scattered fragments, when we find gaps and leaps in the time sequence, then we have moved into the modern age, when the author and his public are doubtful about the nature of the moral and material worlds. Conrad's dislocated narrative method "working backwards and forwards" reflects a conviction that the world is more like a "damaged kaleidoscope" than an "orderly panorama". Since we are dealing with technique, let me mention another device, which F. borrowed from James this time and made good use of in Tender: "the hour-glass situation" i.e. a form of reversed symmetry: A turns into B, while B turns into A or a strong character gradually deteriorates while a weak character becomes stronger etc., a situation perfectly illustrated in Tender Is The Night, a novel of deterioration. F. combined all these borrowings into a unique technique which enabled him, in his own words, to aim at "truth, or rather the equivalent of the truth, the attempt at honesty of imagination". To attain this goal he developed "a hard, colorful prose style" not unlike Hemingway's. In an independent way, F. recreated certain stylistic features typical of Hemingway: the hardness and precision of diction, the taste for the essential and the concrete, the predominance of the dialogue, the directness of statement, a refinement of language disguised as simplicity. However, F. himself was aware of the strong influence H's style exerted over him: "remember to avoid Hemingway" or "Beware of Hemingway" are warnings one can read in his manuscripts. He was also threatened by a certain facility: "I honestly believed that with no effort on my part I was a sort of magician with words..." Hence, no doubt, the tremendous amount of second rate work he turned out to pay off his debts. To achieve his hard colorful prose style, F. used verbs whenever possible ("all fine prose is based on the verbs carrying the sentences. They make sentences move"); he strove for naturalness ("People don't begin all sentences with and, but, for and if, do they? They simply break a thought in midparagraph...") and often resorted to the "dramatic method" i.e. what the characters do tells us what they are; and what they are shows us what they can do (cf. James: "What is character but the determination of incident?" "What is incident but the illustration of character ?"):
The dramatic method is the method of direct presentation, and aims to give the readers the sense of being present, here and now, in the scene of action... Description is dispensed with by the physical stage setting. Exposition and characterization are both conveyed through the dialogue and action of the characters (J. W. Beach).
F. laid great stress upon the writer's need of self-conscious craft: "The necessity of the artist in every generation has been to give his work permanence in every way by a safe shaping and a constant pruning, lest he be confused with the journalistic material that has attracted lesser men". F. was not like Th. Wolfe or W. Faulkner a "putter-inner" (Faulkner said "I am trying to say it all in one sentence, between one Cap and one period") i.e. never tried to pile words upon words in an attempt to say everything, on the contrary, he was a "leaver-outer", he worked on the principle of selection and tried to achieve some sort of "magic suggestiveness" which did not preclude him from proclaiming his faith in the ideal of a hard and robust type of artistic achievement as witness the quotation opening his last novel: "Tout passe. L'art robuste Seul a l'éternité". (Gautier, Émaux et camées).
2) AN INTRODUCTION TO TENDER IS THE NIGHT
It took F. nine years, from 1925 to 1934 to compose his most ambitious work: he stated that he wanted to write "something new in form, idea, structure, the model of the age that Joyce and Stein are searching for, that Conrad did not find". As we have seen, it was a difficult time in F.'s life when both Zelda and himself were beginning to crack up: F. was confronted with moral, sentimental and financial difficulties. So, it is obvious that the composition of the novel was both a reply to Zelda's Save me the Waltz and a form of therapy, a writing cure. There were 18 stages of composition, 3 different versions and 3 different reading publics since Tender Is The Night was first published as a serial (in Scribner's Magazine from January to April 1934); it came out in book form on April 12th 1934 and there was a revised edition in 1951. Tender Is The Night sold 13,000 copies only in the first year of publication and F's morale dropped lower than ever. Needless to say, I am not going to embark upon an analysis of the different stages of composition and various versions of the novel; it would be a tedious and unrewarding task. Suffice it to say that the first version, entitled The Melarky Case, related the story of Francis Melarky, a technician from Hollywood, who has a love affair on the Riviera and eventually kills his mother in a fit of rage. This version bore several titles: Our Type, The World's Fair, or The Boy who Killed his Mother. F. put it aside and sketched another plan for a new draft, The Drunkard's Holiday or Doctor Diver's Holiday, that was closer to the novel as we know it today since it purported to "Show a man who is a natural idealist, a spoiled priest, giving in for various causes to the ideas of the haute Bourgeoisie, and in his rise to the top of the social world losing his idealism, his talent and turning to drink and dis-sipation...". Somewhere else we read that "The Drunkard's Holiday will be a novel of our time showing the breakup of a fine personality. Unlike The Beautiful and Damned the break-up will be caused not by flabbiness but really tragic forces such as the inner conflicts of the idealist and the compromises forced upon him by circumstances". Such were the immediate predecessors of Tender Is The Night which integrated many aspects and elements from the earlier versions, but the final version is but a distant cousin to the original one. The action depicted in the novel spans 10 years: from 1919, Dick's second stay in Zurich, to 1929 when he leaves for the States. The novel is a "triple-decker": the first book covers the summer of 1925; the first half of the second is a retrospection bringing us back to 1919 then to the years 1919-1925. The 2 nd half (starting from chapter XI) picks up the narrative thread where it had been broken in the first book, i.e. 1925 to describe the lives of the Divers from autumn to Xmas, then skips a year and a half to give a detailed account of a few weeks. The 3rd book just follows from there and takes place between the summer of 1928 and july 1929. Except for two brief passages where Nicole speaks in her own voice, it is a 3 rd -person account by an omniscient narrator relayed by a character-narrator who is used as a reflector; we get 3 different points of view: Rosemary's in Book I, Dick's in Book II, Nicole in Book III. Dick gradually becomes "a diminishing figure" disappearing from the novel as from Nicole's life. 
This, as you can see, was an application of the Conradian principle of "chronological muddlement" but such an arrangement of the narrative material had a drawback of which F. was painfully conscious: "its great fault is that the true beginning -the young psychiatrist in Switzerland -is tucked away in the middle of the book" (F.). Morever, the reader is often under the impression that Rosemary is the centre and focus of the story and it takes him almost half the book to realize the deceptiveness of such a beginning: the real protagonists are Dick and Nicole. To remedy such faults, F. proposed to reorganize the structure of the novel, which he did in 1939 (it was published in 1953). There is no question that the novel in this revised form is a more straightforward story but in the process it loses much of its charm and mystery; Dick's fate becomes too obvious, too predictable whereas the earlier version, in spite of being "a loose and baggy monster", started brilliantly with a strong impression and a sense of expectancy if not mystery that held the reader in suspense. But the debate over the merits or demerits of each version is still raging among critics, so we won't take sides.
The subject of Tender Is The Night is a sort of transmuted biography which was always F's subject: it is the story of Dick's (and F.'s) emotional bankruptcy. Now I'd like to deal with four elements which, though external to the narrative proper, are to be reckoned with in any discussion of the novel: the title, the subtitle, the reference to Keats's poem and the dedication. F.'s novel is placed under the patronage of Keats as witness the title and the extract from "Ode to a Nightingale" which F. could never read through without tears in his eyes. It is however useless to seek a point by point parallelism between the structure of Tender Is The Night and that of the poem; the resemblance bears only on the mood and some of the motifs. "The Ode" is a dramatized contrasting of actuality and the world of imagination; it also evinces a desire for reason's utter dissolution, a longing for a state of eternality as opposed to man's painful awareness of his subjection to tem-porality. Thus seen against such background, the title bears a vague hint of dissolution and death, a foreboding of the protagonist's gradual sinking into darkness and oblivion. In the novel, there are also echoes of another Keatsian motif: that of "La Belle Dame Sans Merci".
The subtitle, "A Romance", reminds us that American fiction is traditionally categorized into the novel proper and the romance. We owe N. Hawthorne this fundamental distinction; the main difference between those two forms is the way in which they view reality. The novel renders reality closely and in comprehensive detail; it attaches great importance to character, psychology, and strains after verisimilitude. Romance is free from the ordinary novelistic requirements of verisimilitude; it shows a tendency to plunge into the underside of consciousness and often expresses dark and complex truths unavailable to realism. In the Introduction to The House of the Seven Gables (1851), N. Hawthorne defined the field of action of romance as being the borderland of the human mind where the actual and the imaginary intermingle. The distinction is still valid and may account, as some critics have argued, notably R. Chase in The American Novel and its Tradition, for the original and characteristic form of the American novel which Chase calls "romance-novel" to highlight its hybrid nature ("Since the earliest days, the American novel, in its most original and characteristic form, has worked out its destiny and defined itself by incorporating an element of romance"). Of course, this is not the only meaning of the word "romance"; it also refers to "a medieval narrative [...] treating of heroic, fantastic, or supernatural events, often in the form of an allegory" (Random Dict.); the novel, as we shall see, can indeed be interpreted in this light (bear in mind the pun on Night and Knight). A third meaning is apposite: romance is the equivalent of "a love affair", with the traditional connotations of idealism and sentimentalism. It is useful to stress other charecteristics of "romanticism", for instance Th. Mann stated that: "Romanticism bears in its heart the germ of morbidity, as the rose bears the worm; its innermost character is seduction, seduction to death". One should also bear in mind the example of Gatsby whose sense of wonder, trust in life's boundless possibility and opportunity, and lastly, sense of yearning are hallmarks of the true romantic. Cf. a critic's opinion:
If Romanticism is an artistic perspective which makes men more conscious of the terror and the beauty, the wonder of the possible forms of being [...] and, finally, if Romanticism is the endeavor [...] to achieve [...] the illusioned view of human life which is produced by an imaginative fusion of the familiar and the strange the known and the unknown, the real and the ideal, then F. Scott Fitzgerald is a Romantic".
So, be careful not to overlook any of these possible meanings if you are called upon to discuss the nature of Tender Is The Night. Lastly, a word about the identity of the people to whom Tender Is The Night is dedicated. Gerald and Sarah Murphy were a rich American couple F. met in 1924. The Murphys made the Cap d'Antibes the holiday-resort of wealthy Americans; they were famous for their charm, their social skill, and parties. F. drew on G. Murphy to portray Dick Diver. So much for externals; from now on we'll come to grips with the narrative proper.
3) MEN AND WOMEN IN TENDER IS THE NIGHT
F.'s novel anatomizes not only the break-up of a fine personality but also of various couples; it is a study of the relationship between men and women at a particular period in American history when both the times and people were "out of joint". The period was unique in that it witnessed a great switch-over in rôles as a consequence of the war and of America's coming of age. I have already alluded to "la crise des différences" in the historical and social background to the novel and by way of illustration I'd like to point out that in F's world, the distinction between sexes is always fluid and shifting; the boy Francis in The World's Fair (an earlier version of Tender) becomes with no difficulty at all the girl Rosemary in Tender and with the exception of Tommy Barban, all male characters in the novel evince obvious feminine traits; their virility is called into question by feminine or homosexual connotations: Dick appears "clad in transparent black lace drawers" (30), which is clearly described as "a pansy's trick" by one of the guests. Nicole bluntly asks him if he is a sissy (cf. p. 136 "Are you a sissy?"). Luis Campion is sometimes hard put to it to "restrain his most blatant effeminacy...and motherliness" (43). Mr.
Dumphry is also "an effeminate young man" (16). Women on the contrary display certain male connotations: Nicole is decribed as a "hard woman" (29) with a "harsh voice" (25). Her sister Baby Warren, despite her nickname, is also "hard", with "something wooden and onanistic about her" (168); she is likened to an "Amazon" (195) and said to resemble her "grandfather" (193). Oddly enough, Rosemary herself, is said to be economically at least "a boy, not a girl" (50). Thus, there is an obvious reversal of traditional roles or at least attributes, and Tender may be interpreted as a new version or re-enactment of the war between sexes (cf. motif of "La Belle Dame Sans Merci"). Let's review the forces in presence whose battle array can be represented as follows: two trios (two males vying for the same female) with Dick in between.
DICK
Belongs to the tradition of romantic characters such as Jay Gatsby: "an old romantic like me" (68); entertaining "the illusions of eternal strength and health, and of the essential goodness of people" (132). He's got "charm" and "the power of arousing a fascinated and uncritical love" (36) and like Gatsby makes resolutions: he wants to become "a good psychologist [...] maybe to be the greatest one that ever lived" (147) and also likes "showing off". At the same time, Dick's personality is divided and reveals contradictory facets: there is in him a "layer of hardness [...] of self-control and of self-discipline" (28) which is to be contrasted with his self-indulgence. Dick is the "organizer of private gaiety" (87) yet there is in him a streak of "asceticism" (cf. p. 221) prompting him to take pattern by his father and to cultivate the old virtues: "good instincts, honor, courtesy, and courage" (223). Cf. also p. 149 "he used to think that he wanted to be good, he wanted to be kind, he wanted to be brave and wise, but it was all pretty difficult. He wanted to be loved, too, if he could fit it in". The most prepossessing aspect of Dick's personality is a certain form of generosity; he spares no effort and gives away his spiritual and material riches to make people happy and complete: "They were waiting for him and incomplete with-out him. He was still the incalculable element..." (166) almost like a cipher which has no value in itself but increases that of the figure it is added to. Dick sometimes assumes the rôle and function of a "buttress" (cf. p. 265: "it was as if he was condemned to carry with him the egos of certain people, early met and early loved"), or even of a "Saviour" (p. 325: "he was the last hope of a decaying clan"). His relationship with Nicole is based on the same principle: his main function is to serve as a prop to keep her from falling to pieces ("he had stitched her together", p. 153). Dick's downfall will be brought about by two factors: his confusing the rôle of a psychiatrist with that of a husband and the lure of money ("Throw us together! Sweet propinquity and the Warren money" 173); I'll revert to the question of money, but for the time being, I'd like to stress its evil nature and the double penalty Dick incurs for yielding to his desire for money: castration and corruption (p. 220: "He had lost himself [...] Watching his father's [...] his arsenal to be locked up in the Warren safety-deposit vaults"). But the most fateful consequence is that, Nicole gradually depriving him of his vital energy, Dick loses his creative power, enthusiasm and even his soul: p. 187 "Naturally, Nicole wanting to own him [...] goods and money"; p.
227 "a lesion of enthusiasm" and lastly p. 242 "a distinct lesion of his own vitality".
At this stage it is useful to introduce the motif of the "hour glass" situation already mentioned in the "Introduction": as Dick grows weaker and weaker, Nicole gets "stronger every day…[her] illness follows the law of diminishing returns" (p. 288). Dick's process of deterioration parallels the emergence of Nicole's "new self" (254). Dick takes to drink, becomes less and less presentable and efficient as a psychiatrist; morever he's no longer able to perform his usual physical stunts (cf. 304-6). There's a complete reversal: "I'm not much like myself any more" (280) and "But you used to want to create thingsnow you seem to want to smash them up" (287). He eventually loses and this is the final blow his moral superiority over his associates ("They now possessed a moral superiority over him for as long as he proved of any use", 256). Dick, being aware of his degradation, tries to bring retribution on himself by accusing himself of raping a young girl (256). His deterioration assumes spiritual dimensions since it imperils not only his physical being but his soul ("I'm trying to save myself" "From my contamination?" 323). Dick proves to be a tragic character cf. reference to Ophelia (325) whose main defect is incompleteness ("the price of his intactness was incomple teness", 131) and like tragic heroes he has his fall. The dispenser of romance and happiness eventually conjures up the image of the "Black Death" ("I don't seem to bring people happiness any more", 239) and of the "deposed ruler" (301) whose kingdom has been laid waste by some great Evil (Tender calls to mind the motifs of The Waste Land and the Fisher King). There are two important stages in Dick's progress: his meeting with Rosemary and his father's death. Meeting Rosemary arouses in dick a characteristically "paternal interest" (38) and "attitude" (75); Dick sees Rosemary as a child (77) and she, in turn, unconsciously considers him as a surrogate father figure. It is interesting to note that with, her youthful qualities, Rosemary fulfills the same function towards Dick as Dick does to Nicole i.e. she is a source of strength, renewal and rejuvenation. Dick is attracted by Rosemary's vitality (47) and he uses her to restore his own diminishing vigour. But, at the same time, his affair with Rosemary is a "time of self-indulgence" (233), a lapse from virtue and morals, a turning-point: "He knew that what he was doing..." (103). Dick is unable to resist temptation the spirit is willing but the flesh is weak, as the saying goes, so this is the 1 st step (or the 2 nd if we take into account his marrying Nicole) to hell whose way, as you all know, is paved with good intentions. However the promises of love are blighted by the revelation of Rosemary's promiscuity cf. the episode on the train and the leitmotif "Do you mind if I pull down the curtain?" (113). Dick eventually realizes that the affair with Rosemary is just a passing fancy: "he was not in love with her nor she with him" ( 236) and ( 240) "Rome was the end of his dream of Rosemary". His father's death deprives Dick of a moral guide, of one the props of his existence ("how will it affect me now that this earliest and strongest of protections is gone?", p. 222 "he referred judgments to what his father would probably have thought and done"). 
Although a minor figure, at least in terms of space devoted to his delineation, Dick's father plays an important rôle as a representative of an old-world aristocracy with a high sense of honour, a belief in public service and maintenance of domestic decorum. He represents the Southern Cavalier (a gentleman, a descendant of the English squire) as opposed to the Yankee, the product of a society absorbed in money-making and pleasure-seeking. He is a sort of relic from the past, a survival from a phased-out order of things, hence the slightly anachronistic observation to be found on p. 181: "From his father Dick had learned the somewhat conscious good manners of the young Southerner coming north after the Civil War". Dick, whose name, ironically enough, means "powerful and hard", will prove unable, as heir to a genteel tradition, to live up to the values and standards of the Southern Gentleman. He is hard on the outside and soft inside and reminds one of F's desire "to get hard. I'm sick of the flabby semi-intellectual softness in which I floundered with my generation" (Mizener). Hence also the note of disappointment struck by F. when he stated that "My generation of radicals and breakers-down never found anything to take the place of the old virtues of work and courage and the old graces of courtesy and politeness" (Lehan), i.e. the very same virtues advocated by Dick's father.
THE WARREN SISTERS
Both sisters are the obverse and reverse of the same medal. Nicole is the seducer, as witness her portrait on p. 25; the reference to sculpture (Rodin) and architecture reminds one of an American writer's description of the American leisure woman as a "magnificently shining edifice" (P. Rosenfeld, Port of New York, 1961). F. uses the same image since N. is compared to "a beautiful shell" (134) "a fine slim edifice" (312). Nicole is endowed with a complex and deceptive personality combining both ingenuousness and the innate knowledge of the "mechanics of love and cohabitation" (Wild Palms, 41) that W.
Faulkner attributes to his female characters. Nicole also assumes almost allegorical, symbolical dimensions in that she "represents the exact furthermost evolution of a class" (30) and stands for the American woman, some sort of archetype, if not for America itself. Her own family history is an epitome of the creation of the New Republic since it combines the most characteristic types evolved by Europe and the New World: "Nicole was [...] the House of Lippe" (63). This emblematic quality is further emphasized by the fact that Nicole is defined as "the product of much ingenuity and toil" (65), so she brings Dick "the essence of a Continent" (152). She is quite in keeping with the new spirit of the times; she partakes both of "the Virgin and of the Dynamo" (H. Adams) and is described as an industrial object cf. 301: "Nicole had been designed for change [...] its original self". Not unlike Dick, Nicole has a double personality, which is to be expected from a schizophrenic; so there is a "cleavage between Nicole sick and Nicole well" (185). She is also incomplete and dependent upon other people to preserve a precarious mental balance: "she sought in them the vitality..." (198). The relation ship between Nicole and Dick is not a give-and-take affair but a one-way process; Nicole literally depletes, preys upon him, saps his strength: "she had thought of him as an inexhaustible energy, incapable of fatigue" (323) and she eventually absorbs him: "somehow Dick and nicole had become one and equal..." (209).
Cf. also the fusion of the two names in "Dicole" (116). Thus there is a double transference both psychological and vital (cf. the image of breast feeding p. 300). Actually, Nicole "cherishes her illness as an instrument of power" (259) and uses her money in much the same way: ("owning Dick who did not want to be owned" 198). However, after "playing planet to Dick's sun" (310) for several years, N. gradually comes to "feel almost complete" (311) and comes to a realization that "either you think...sterilize you" (311). From then on, begins what might be called N's bid for independence ("cutting the cord", 324); after ruining Dick, she "takes possession of Tommy Barban" (293), unless it is the other way round; however that may be the experience is akin to a rebirth: "You're all new like a baby" (317). The "coy maiden" gives way to the ruthless huntress: "no longer was she the huntress of corralled game" (322). By the way, it is worth noting that "Nicole" etymologically means "Victory" and that "Warren" means "game park". Thus Nicole reverts to type: "And being well perhaps [...] so there we are" (314) and "better a sane crook than a mad puritan" (315). She becomes exactly like her sister Baby Warren, both "formidable and vulnerable" (166), who had anticipated her in that transformation: "Baby suddenly became her grandfather, cool and experimental" (193) i.e. Sid Warren, "the horse-trader" (159). She is described as "a tall, restless virgin" (167), with "something wooden and onanistic about her" (168). She is the high priestess of Money, the Bitch-Goddess, and as such she symbolizes sterility. So much for the portrait of those two society women.
ROSEMARY HOIT
Cf. the connotations of the name: Hoit→"Hoity-toity": "riotous", "frolicsome". A hoyden (a boisterous girl)? One preliminary observation: Rosemary plays an important rôle as a catalytic agent and stands between the two groups of people i.e. the hard, practical people and the dissipated, run-down romantics. She is a complex character in that she combines several contradictory facets: the child woman (ash blonde, childlike, etc., p. 12), "embodying all the immaturity of the race" (80) and its wildest dreams of success, everlasting youth and charm (she is surrounded by a halo of glamour and the magic of the pictures; note also that she deals with reality as a controlled set and life as a production), and the woman of the world. In spite of her idealism, grafted onto "an Irish, romantic and illogical" nature (181), she is said to be "hard" (21) and to have been "brought up on the idea of work" (49); she embodies certain "virtues" traditionally attached to a Puritan ethos. Her infatuation with Dick is just a case of puppy love, a stepping stone to numerous amorous adventures. In spite of the rôle she plays in the opening chapter of the novel, she turns out to be no more than a "catalytic agent" (63) in Dick and Nicole's evolution but she also fulfills an important symbolical function. This is precisely what I propose to do now, i.e. take a fresh look, from a symbolical standpoint, at the question of men and women in order to bring to light less obvious yet fundamental aspects.
THE FATHER DAUGHTER RELATIONSHIP & INCEST
The father-daughter relationship is of paramount importance in Tender and most encounters between men and women tend to function on that pattern so much so that I am tempted to subscribe to Callahan's opinion that Tender describes a new version of the American Eden where, the male being ousted, the female is left in blessed communion or tête à tête with the Father i.e. God. A case in point being of course Nicole & Devereux Warren, who was in his words "father, mother both to her" (141) and candidly confesses that "They were just like lovers-and then all at once we were lovers..." (144).
Nicole will, to a certain extent, continue the same type of relationship with Dick, thus fulfilling the female fantasy of having both father and husband in the same person; Nicole is a kind of orphan adopted by Dick. Rosemary seeks the very situation responsible for Nicole's schizophrenia: uniting protector and violator in the same man. Brady, for instance, turns her down by "refusing the fatherly office" (41) that Dick eventually assumes but there is such a difference in their ages as to render their embrace a kind of reenactment of the incestuous affair between Nicole and her father. It is also worth noting that Rosemary owes her celebrity to a film, "Daddy's Girl", which is obviously a euphemistic fantasy of the Warren incest, a fact which does not escape Dick's attention: "Rosemary and her parent...the vicious sentimentality" (80). Such an interpretation is borne out by the reference to the "Arbuckle case" (124) ('Fatty', grown-up fat boy of American silent cinema whose career was ruined after his involvement in a 1921 scandal in which a girl died. Though he never again appeared before a camera, he directed a few films under the name 'Will B. Good'). To this must be added Dick's own involvement in a similar scandal since he is accused of seducing the daughter of one of his patients (205).
THE WAR BETWEEN THE SEXES
The relationships between men and women are also described in terms of war between the sexes, cf. what one of the patients in the asylum says: "I'm sharing the fate of the women of my time who challenged men to battle" (203). Thus, just as war is seen through the language of love, so love is told through the metaphor of war. In the description of the battlefield (67-70), we find echoes of D. H.
Lawrence's interpretation of WWI as the fulfilment in history of the death urge of men whose marriages were sexually desolate. The diversion of erotic energy to war is further illustrated by Mrs Speers (a very apt name: "Spear" refers to a weapon for thrusting) who often applies military metaphors to sex: "Wound yourself or him.." (50). Notice the recurrence of the word "spear" to describe Dick's emasculation and also the use of the word "arsenal": "the spear had been blunted" ( 220) "Yet he had been swallowed up like a gigolo, and somehow permitted his arsenal to be locked up in the Warren safety-deposit vault". Last but not least, there is a kind of cannibalism or vampirism going on in the novel. Women, metaphorically feed or batten on their mates and the novel teems with images turning women into predatory females, cf. "dissection" (180), "suckling" (300) or even "spooks" (300) i.e. terms having to do with the transference of vital energy from one person to another. On a symbolical and unconscious level, woman and America are seen as vampires; L. Fiedler, who maintains that many female characters in American fiction can be divided into two categories "the fair maiden" (the pure, gentle virgin) and "the dark lady" (the dangerous seducer, the embodiment of the sexuality denied the snow maiden), also stated that very often, in the same fiction, "the hero finds in his bed not the white bride but the dark destroyer" (313): characteristically enough, Nicole's hair, once fair, darkens (34). In the American version of Eden, Eve is always vying with Lilith (Adam's wife before Eve was created, symbolizing the seducer, "l'instigatrice des amours illégitimes, la perturbatrice du lit conjugal"). Moreover, it is also interesting to note that if (the) man can create (the) woman, she seems to absorb his strength and very being; there is in Tender an echo of Schopenhauer's philosophy (cf. Faulkner's Wild Palms) which claimed that "the masculine will to creativeness is absorbed in the feminine will to reproduction"; cf the description on p. 253 "The American Woman, aroused, stood over him; the clean..that had made a nursery out of a continent". However, in Tender it is absorbed in woman's will to assert herself. Consequently, in F.'s fiction, women are seen as embodiments either of innocence (woman and the Continent before discovery, cf. Gatsby) or of corruption (woman and the Continent spoiled by male exploitation).
As in Faulkner's fiction, it is actually the male protagonist and not woman who symbolizes innocence and the American dream although the hero is too ineffectual to carry out its promise.
4) MOTIFS, IMAGES, AND SYMBOLS
We are far from having exhausted the symbolical connotations attaching to the various characters appearing in Tender; a work of art aims at plurality of meanings and several levels of significance are at play in any novel. To give you an example of how such complexity works, I'd like to call your attention to the fact that women are consistently described in terms of flowers; there runs throughout the narrative a symbolic vein pertaining to flora. As a starting-point, I'll refer to an observation F. recorded in The Lost Decade: "girls had become gossamer again, perambulatory flora..." (91), an image perfectly illustrated by Tender.
FLORA & FAUNA
Rosemary Hoit is a case in point; her first name means "romarin" and derives from the Latin "ros marinus" i.e. "sea-dew". Now, "dew", as you all know, or at least are about to know, symbolizes "spiritual refreshment; benediction; blessing. Sweet dew is peace and prosperity. Dew can also represent change, illusion and evanescence. It is also related to the moon, nightfall and sleep". Consequently, the various references to "dew" that may have escaped the notice of the unwary reader assume, with the benefit of hindsight, deeper significance than one at first realizes: "she was almost eighteen, nearly complete, but the dew was still on her" (12); "Rosemary, [...] dewy with belief" (43) etc. All these images stress the youth and alleged innocence of Rosemary. Moreover, her growth to adulthood is seen as the flowering of talent: "blossomed out at 16" (49); "looks like something blooming" (30); "bright bouquet" (89); "her body calculated to a millimeter to suggest a bud yet guarantee a flower" (117). However, Rosemary, "the white carnation" (75), evolves into different types of flower or plant such as the "blinding belladonna [...] the mandragora that imposes harmony" (181). The connotations are totally different and point to a lapse from innocence or virtue on the part of Rosemary: mandrake, "the plant of enchantment is the emblem of Circe" (a goddess and sorceress who changed Odysseus's men into pigs).
Incidentally, Rosemary is also compared to animals; she is seen as "a young mustang" (181), a "young horse" (226) and Nicole in her turn is likened to "a colt" (157); Dick, with his crop and jockey cap, rides them both before being thrown (cf. also the reference to "other women with flower-like mouths grooved for bits" p. 166). Nicole too is associated with "gardens" (cf. description on p. 34 and on p. 172 "waiting for you in the garden-holding all myself in my arms like a basket of flowers") and such flowers as "camellia" (34) and "lilac"; note also that she is in charge of the decoration of two wards called "the Eglantine" and "the Beeches" (201). Nicole is also said to "be blooming away" (48) and the more self-reliant and self-confident she grows the more numerous are the references to flora: "Nicole flowering" (220); "She reasoned as gaily as a flower" (297); "Her ego began blooming like a great rich rose" (310).
After her preparations to greet her lover, Tommy Barban, she is "the trimmest of gardens" (312).
As far as animal imagery is concerned, N.'s tragedy turns her into "the young bird with wings crushed" (143); while she is under the sway of Dick's personality, she's like "an obedient retriever" (35).
All these references to flora (cf. Violet McKisko) and fauna with their attractive and repulsive connotations culminate, as regards female characters, in the description of the trio of women on p. 84: "They were all tall and slender...cobra's hoods"; cf. a few lines further down the phrase "cobra-women" or the reference to "Amazons" (195). Thus, women in the world depicted by Tender are, to use Keats's words, seen as "poison flowers", evil flowers one has to guard against. Far from being gratuitous, such a bias can be accounted for by the rôle F. ascribed to women in his own life and in American society at large (cf. previous lecture). Two further observations as far as Nicole is concerned: they have to do with two classical references to Diana (name of her villa p. 38) and to Pallas Athene (177). They merely highlight her twofold nature since Diana/Artemis is a goddess associated with wooded places, women and childbirth and a virgin huntress, associated with uncultivated places and wild animals. As for Pallas Athene, she was the goddess of war, the patron of the arts and crafts. Lastly, to restore the balance in this presentation, it is necessary to point out that animal similes are also applied to men (McKisko is a rabbit, 59; Dick a cat, 136; Tommy a watch-dog, 53, etc.) even if they don't fall into such a coherent pattern as in the case of women.
RELIGIOUS DIMENSIONS
F. called Tender "my testament of faith" and the novel abounds in images having a clearly religious flavour quite in keeping with F.'s avowal that "I guess I am too much a moralist at heart and real-ly want to preach at people in some acceptable form rather than to entertain them". As critic A.
Mizener stated: all his best work is a product of the tension between these two sides of his nature, of his ability to hold in balance "the impulses to achieve and to enjoy, to be prodigal and open-hearted, and yet ambitious and wise, to be strong and self-controlled, yet to miss nothing--to do and to symbolize". Not until 1936 did he lose faith in his ability to realize in his personal life what he called "the old dream of being an entire man in the Goethe-Byron-Shaw tradition, with an opulent American touch, a sort of combination of J.P. Morgan, Topham Beauclerk and St. Francis of Assisi" (64).
Tender is to a certain extent an allegory of sin and penitence, fall and retribution with Everyman Diver journeying through a multitude of temptations and yielding to all of them: money, liquor, anarchy, self-betrayal and sex. In the General Plan, F. calls Dick "a spoiled priest" and the hero of Tender, "the son of a clergyman now retired" (175), can indeed be seen as the high priest of a group of devotees of a new religion worshipping leisure, entertainment and money, the Bitch-Goddess, eventually deposed by his followers. This religious motif is introduced in the opening pages with various images which, on second reading, endow the setting or the characters with new significance: see, for instance "the hotel and its bright tan prayer rug of a beach" (11); the woman with "a tiara" ( 14) and the general atmosphere of a community upon which Rosemary dare not intrude. She looks like a novice eager to be admitted into the Sacred College. Dick holds the group together; in a true spirit of ecumenicism and "as a final apostolic gesture" (36) he invites Mrs Abrams to one of his ritualistic parties. His relationship with Rosemary conjures up the notions of "adoration" and "conversion", see for instance the description on p. 48 "She was stricken [...] chasuble [...] fall to her knees". Nicole too is said "to bring everything to his feet, gifts of sacrificial ambrosia, of worshipping myrtle". Note that Nicole herself who has "the face of a saint" and looks like "a Viking madonna" (43) is also idolized by Tommy Barban whose vindication of the Divers' honour prompts Mrs McKisko to ask: "Are they so sacred?" (53). Dick's process of deterioration is also punctuated with religious references (cf. p. 104 "in sackcloth and ashes"; "let them pray for him", 281) down to the two final scenes when Dick is confronted, "like a priest in a confessional" (326), with the spectacle of depravity, lust and corruption (Mary North and Lady Caroline as embodiments of the Scarlet Woman) and when he makes a sign of (pronounces the) benediction before turning his back on his former associates:(337) "He raised his right hand and with a papal cross he blessed the beach from the high terrace". With this highly symbolical gesture, Dick assumes the rôle of a scapegoat, taking upon himself the sins of the community before being sent out into the wilderness. Now is the time to call attention to Dick's double personality as "spoiled priest" and "man about town"; Dick's yearning for essential virtues is at odds with the kind of flamboyant life he's living, hence the numerous references to "asceticism" ("the old asceticism triumphed", p. 221; "living rather ascetically", p. 187; "the boundaries of asceticism", p. 148) and a "hermit's life", p. 234. There is an obvious kinship between H. Wilbourne, the hero of Faulkner's Wild Palms and Dick Diver; both are would-be ascetics, whose naiveté or innocence are shattered by experience i.e. the encounter with woman and society at large. Dick, after assuming the rôle of a missionary or an apostle and "wast[ing] 8 years teaching the rich the ABC's of human decency" (325), is sent into exile, like "a deposed ruler" (301), "his beach perverted now to the tastes of the tasteless" (301). All the values he stood for are crushed by contact with reality; the forces both external and internal against which the hero conducted a struggle eventually destroy him. 
Thus, the romantic idealist who wanted to be brave and kind and loved, and thought he "was the last hope of a decaying clan" (325), is utterly defeated and even if his fate illustrates the "futility of effort and the necessity of struggle" one may doubt it is a "splendid failure".
PLACES
The characters of Tender, apparently bereft of any spirit of place, are constantly on the move, and the reader is vicariously taken to numerous foreign countries: France, Italy, Switzerland, the French Riviera and the USA. Some of them even seem to overlap as their distinctive features are not always clearly defined and the characters tend to move in similar circles. Places in Tender, besides providing a setting for the events depicted in the novel, also convey symbolic oppositions. Europe is a sort of vantage point from which America is, to use the biblical phrase, "weighed in the scales and found wanting". Broadly speaking, the novel is the locale of a confrontation between East and West and in Dick Diver, F. questioned the adequacy of postwar America to sustain its heroic past; as I have already pointed out the virtues of the Old South as embodied by Dick's father were discarded in favour of the wealth and power associated with the North. Hence, F.'s vision of Dick as "a sort of superhuman, an approximation of the hero seen in over-civilized terms". Dick is too refined for the world he lives in and he is torn between civilization and decadence; he is heir to a vanishing world whereas Nicole is a harbinger/herald of a brave new world; hence the characteristic pattern of decay and growth on which the novel is based. The protagonist, being confronted with the tragedy of a nation grown decadent without achieving maturity (as a psychiatrist he has to look after culture's casualties cf. what R. Ellison said of the USA: "perhaps to be sane in such society is the best proof of insanity") will move to milder climes i.e. the Riviera. This reverse migration from the New to the Old World represents a profound displacement of the American dream; Dick, at least in the first half of the novel, yields to the lure of the East and responds to "the pastoral quality of the summer Riviera" (197) (note by the way that American literature is particularly hospitable to pastoralism i.e. the theme of withdrawal from society into an idealized landscape, an oasis of peace and harmony in the bosom of Nature). Thus the Riviera, Mediterranean France, are seen as "a psychological Eden" in which F. and his heroes take refuge. The Riviera is a middle ground between East and West, an alternative milieu (it is a feature of "romances" to be located at a distance of time and place from their writer's world) where Dick tries to make a world fit for quality people to live in and above all for Nicole (see, for instance the way people are transfigured during one of Dick's parties; p. 42 "and now they were only their best selves and the Divers' guests"). So, L. Fiedler is perfectly right to define Tender as "an Eastern, a drama in which back-trailers reverse their westward drive to seek in the world which their ancestors abandoned the dream of riches and glory that has somehow evaded them". Yet Europe itself is not immune from decay; the Italian landscape exudes "a sweat of exhausted cultures taint[ing] the morning air" (244); the English "bearing aloft the pennon of decadence , last ensign of the fading empire" (291) "are doing a dance of death (292); and Switzerland is seen as the dumping-ground of the invalids and perverts of all nations: cf. description of the suite on p. 268). Actually, it is the whole of Western civilization that bears the stigmas of a gradual process of degeneracy; F. 
was deeply influenced by the historical theories of Oswald Spengler, a German philosopher, who in The Decline of the West predicted the triumph of money over aristocracy, the emergence of new Caesars (totalitarian states) and finally the rise of the "colored" races (by which Spengler meant not only the Negroes and Chinese but also the Russians!) that would use the technology of the West to destroy its inventors. F. incorporated some of Spengler's theories into Tender, all the more easily as they tallied with his racial prejudices (in a 1921 letter to E. Wilson, F. wrote such things as "the negroid streak creeps northward to defile the Nordic race", and "already the Italians have the souls of blackamoors", etc...). Tender prophesies not only the decline of the West but also the triumph of a barbaric race of a darker complexion. This enables one to see in a totally different light the affair between Tommy Barban and Nicole Warren. Tommy Barban, whose last name is akin to "barb(ari)an", is said to be "less civilized" (28), "the end-product of an archaic world" (45); he is "a soldier" (45), "a ruler, a hero" (215), and the epithet "dark" always crops up in his physical portrait: "Tommy [...] dark, scarred and handsome [...] an earnest Satan" (316) "his figure was darker and stronger than Dick's, with high lights along the rope-twist of muscle" (Ibid.) "his handsome face was so dark as to have lost the pleasantness of deep tan, without attaining the blue beauty of negroes--it was just worn leather" (289).
Tommy, "the emerging fascist", as a critic put it, literally preys upon Nicole, the representative of the waning aristocracy; he usurps the place of Dick, the dreamer, the idealist, the "over-civilized" anti-hero and F. suggests that Tommy's triumph is one of East over West when he tells us that Nicole "symbolically [...] lay across his saddle-bow as surely as if he had wolfed her away from Damascus and they had come out upon the Mongolian plain [...] she welcomed the anarchy of her lover" (320). The echo of hoof-beats conjures up the image of barbaric hordes sweeping across Western countries: "the industrial Warrens in coalition with the militarist Barban form a cartel eager for spoils, with all the world their prey" (Callahan). What are the grounds for such an indictment of western civilization in general and of American culture and society in particular? MONEY America seems to have rejected certain fundamental values, such as "pioneer simplicity" (279) and "the simpler virtues", to adopt an increasingly materialistic outlook. After the "first burst of luxury manufacturing" (27) that followed WWI, society became a "big bazaar" (30), a "Vanity Fair"; this adherence to a materialistic credo is illustrated by the wealth of objects and commodities of every description such characters as Nicole or Baby Warren are surrounded with; see for instance, the description of Nicole's trip list on p. 278. This brings us to the rôle of money whose importance will be underlined by two quotations from F.: "I have never been able to forgive the rich for being rich and it has colored my entire life and works" and also declared that they roused in him "not the conviction of a revolutionist but the smouldering hatred of a peasant". Money is indeed a key metaphor in F.'s fiction where we find the same denunciation of "the cheap money with which the world was now glutted and cluttered" (The Wild Palms, p. 210). Note also that both novels resort to the same monetary image to refer to human feelings; Faulkner speaks of "emotional currency" and Fitzgerald of "emotional bankruptcy". People have become devotees of a new cult whose divinity is Mammon (a personification of wealth). Modern society dehumanizes man by forcing him to cultivate false values and by encouraging atrophy of essential human virtues; money has taken their place and, as F. once said, "American men are incomplete without money" ("The Swimmers"). Thus love and vitality are expressed in terms of money; just as the rich throw money down the drain so Dick Diver wastes his talents and feelings on unworthy people.
The motif of financial waste culminating in the 1929 crisis runs parallel to the motif of intellectual and emotional waste i.e. Dick's emotional bankruptcy ("la banqueroute du coeur"). At this stage it is necessary to stress the obvious similarity between the author and his character; F. also wanted to keep his emotional capital intact and was beset by fears of bad investment. Mizener points out that F. regarded "vitality as if it were a fixed sum, like money in the bank. Against this account you drew until, piece by piece, the sum was spent and you found yourself emotionally bankrupt". In The Crack-Up, F. says that he "had been mortgaging himself physically and spiritually up to the hilt". So Dick uses up the emotional energy which was the source of his personal discipline and of his power to feed other people. "I thought of him," said F., "as an 'homme épuisé', not only an 'homme manqué' ". This "lesion of vitality" turns Dick into a "hollow man" and here again F. transposed his personal experience "the question became one of finding why and where I had changed, where was the leak through which, unknown to myself, my enthusiasm and my vitality had been steadily and prematurely trickling away" ("Pasting It Together"). Cf. also letter to his daughter: "I don't know of anyone who has used up so much personal experience as I have at 27". So both Dick and Fitzgerald were victims of an extravagant expenditure of vitality, talents and emotions; the former wasted it upon "dull" people (336) and the latter upon secondrate literary productions. F. once declared "I have been a mediocre caretaker of most things left in my hands, even of my talent", so were both Dick Diver and America.
PERVERSION AND VIOLENCE
Although F.'s work is "innocent of mere sex", sexual perversions bulk large in Tender and they are just further proof of the fact that both society and the times are out of joint. There are so many references to incest, rape, homosexuality and lesbianism that one feels as if love could only find expression in pathological cases. But sexual perversion, whatever form it may assume in Tender, is but one variation of the other perversions that corrupted a society with an "empty harlot's mind" (80): the perversion of money, the perversion of ideas and talents. This emphasis also indicates that sex or lust has ousted love; Eros has taken the place of Agape (spiritual or brotherly love); relations between men and women are modeled on war or hunting cf. Nicole's definition of herself as "no longer a huntress of corralled game". There runs throughout the novel an undercurrent of violence, "the honorable and traditional resource of this land" (245); it assumes various forms: war, duelling, rape, murder (a Negro is killed in Rosemary's bedroom), to say nothing of bickerings, beatings, and shootings. As a critic noticed, moments of emotional pitch, are often interrupted by a loud report; when Dick falls in love with Nicole, when Abe North leaves on the train from Paris, and when Tommy becomes Nicole's lover, each time a shot is heard that breaks the illusion; it is a recall to reality, a descent from bliss. After the assault at the station, it is said that "the shots had entered into their lives: echoes of violence followed them out onto the pavement..."(p. 97). So, sex, money, violence contribute to the disruption of the fabric of human relationships; Tender is a sort of prose version of "The Wasteland" and the depiction of the disintegration of both the protagonist and society is all the more poignant as it is to be contrasted with the potentialities, the dreams and the promises they offered. Whereas The Great Gatsby was a novel about what could never be, Tender Is The Night is a novel about what could have been. Dick diver had the talent to succeed and he also had in his youth the necessary talent and sense of commitment but he is betrayed by the very rich and by his own weaknesses. Hence the circularity of the plot and of the protagonist's journey that begins with the quest for success and ends with the reality of failure. Thus Dick returns to the States to become a wandering exile in his own country: "to be in hell is to drift; to be in heaven is to steer" (G. B. Shaw).
DREAMS, ROMANCE & MAGIC
As a starting-point for this section, I'd like to remind you of the words of L. Trilling: "Ours is the only nation that prides itself upon a dream and gives its name to one, 'the American dream'". F. is a perfect illustration of the fascination of that dream; in a letter to his 17-year-old daughter, Scottie, he wrote:
When I was your age, I lived with a great dream. The dream grew and I learned how to speak of it and make people listen. Then the dream divided one day when I decided to marry your mother after all, even though I knew she was spoiled and meant no good to me... Dreams and illusions are thus the hallmark of the romantic character who, like Dick, entertains "illusions of eternal strength and health and of the essential goodness of people" (132). A profession of faith that is to be contrasted with the final description of Dick on p. 334: "he was not young any more with a lot of nice thoughts and dreams to have about himself". Dick's function is to dispense romantic wonder to others and the same function is fulfilled by the cinema with its glamour and magic suggestiveness. Rosemary is also another version of the romantic (the romantic at a discount): her emotions are fed on "things she had read, seen, dreamed" (75); she often has the "false-and-exalted feeling of being on a set" (p. 83) but unlike Dick, she does not know yet "that splendor is something in the heart" (74).
Her dark magic is nonetheless dangerous for it encourages a certain form of escapism and infantilism. So there's in the novel a constant interplay between actuality and illusion; certain scenes seem to take place on the borderline between the two: the characters and the objects surrounding them seem to be suspended in the air; see, for instance, the description of the party on p. 44: "There were fireflies...and became part of them". The motif of "suspension" emphasizes the contrast between actuality and the world of the imagination.
NIGHT AND DAY
The opposition of night and day has always held symbolic meaning and has been used by writers for centuries to suggest evil, confusion, the dark side of human nature as opposed to light, honesty, reason, wisdom. It is a fundamental opposition and motif in Fitzgerald's fiction (see for instance The Great Gatsby and the title of the novel he started on after Tender: The Count of Darkness) and in his life; in The Crack-Up he wrote that he was "hating the night when I couldn't sleep and hating the day because it went towards night" (p. 43); however, he welcomes "the blessed hour of nightmare" (Ibid.). Thus the symbolic structure of Tender reveals a contrast between the night and the day, darkness and light. The title indicates that night is tender, which seems to imply that the day is harsh and cruel. Actually, the opposition is not so clear-cut (the narrative mentions "the unstable balance between night and day", p. 247) and night and day carry ambivalent connotations. The opening scene is dominated by the glare of sunlight; it troubles the Divers and their friends who seek shelter from it under their umbrellas; Rosemary retreats from the "hot light" (12) and the Mediterranean "yields up its pigments...to the brutal sunshine" (12). Noon dominates sea and sky (19) so that it seems "there was no life anywhere in all this expanse of coast except under the filtered sunlight of those umbrellas" (19). Dick also protects Rosemary from the hot sun on p. 20 and 26. It is also interesting to observe that when Nicole has a fit of temper or madness "a high sun is beating fiercely on the children's hats" (206). Thus the sun is seen as something harsh, painful and even maddening. Conversely, darkness and night are at first referred to in positive terms: cf. "the lovely night", the "soft warm darkness" (296), the "soft rolling night" and also on p. 294 "she felt the beauty of the night"; Rosemary is at one point "cloaked by the erotic darkness" (49). The opposition between night and day is pointed up in F.'s description of Amiens cf. p. 69: "In the daytime one is deflated by such towns [...] the satisfactory inexpensiveness of nowhere". Thus night is the time of enchantment, obliterating the ugliness of reality that the day mercilessly exposes; night is the time of illusion and merriment cf. p. 91 "All of them began to laugh...hot morning". However the symbolism of the night is not merely opposite in meaning to that of the day, for night itself is ambivalent; it signifies according to The Dictionary of Symbols "chaos, death, madness and disintegration, reversal to the foetal state of the world"; such sinister connotations are apparent in the reference to "mental darkness" (236), to "the darkness ahead" (263) to mean death and in Nicole's statement that after the birth of her second child "everything went dark again" (177). Night is threatening and deceptive; it is the refuge of those who are unable to cope with practical daylight reality which is totally uncongenial to the romantic.
Alexey D Agaltsov
email: agaltsov@mps.mpg.de
Thorsten Hohage
email: hohage@math.uni-goettingen.de
AND Roman G Novikov
email: novikov@cmap.polytechnique.fr
THE IMAGINARY PART OF THE SCATTERING GREEN FUNCTION: MONOCHROMATIC RELATIONS TO THE REAL PART AND UNIQUENESS RESULTS FOR INVERSE PROBLEMS
Keywords: inverse scattering problems, uniqueness for inverse problems, passive imaging, correlation data, imaginary part of Green's function
AMS subject classifications. 35R30, 35J08, 35Q86, 78A46
Monochromatic identities for the Green function and uniqueness results for passive imaging
Alexey Agaltsov, Thorsten Hohage, Roman Novikov
1. Introduction. In classical inverse scattering problems one considers a known deterministic source or incident wave and aims to reconstruct a scatterer (e.g. the inhomogeniety of a medium) from measurements of scattered waves. In the case of point sources this amounts to measuring the Green function of the underlying wave equation on some observation manifold. From the extensive literature on such problems we only mention uniqueness results in [START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF][START_REF] Nachman | Reconstructions from boundary measurements[END_REF][START_REF] Bukhgeim | Recovering a potential from Cauchy data in the two-dimensional case[END_REF][START_REF] Santos Ferreira | Determining a magnetic Schrödinger operator from partial Cauchy data[END_REF][START_REF] Agaltsov | Uniqueness and non-uniqueness in acoustic tomography of moving fluid[END_REF], stability results in [START_REF] Stefanov | Stability of the inverse problem in potential scattering at fixed energy[END_REF][START_REF] Hähner | New stability estimates for the inverse acoustic inhomogeneous medium problem and applications[END_REF][START_REF] Isaev | New global stability estimates for monochromatic inverse acoustic scattering[END_REF], and the books [26,[START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF] concerning many further aspects.
Recently there has been a growing interest in inverse wave propagation problem with random sources. This includes passive imaging in seismology ( [START_REF] Snieder | A comparison of strategies for seismic interferometry[END_REF]), ocean acoustics ( [START_REF] Burov | The use of low-frequency noise in passive tomography of the ocean[END_REF]), ultrasonics ( [START_REF] Weaver | Ultrasonics without a source: Thermal fluctuation correlations at mhz frequencies[END_REF]), and local helioseismology ( [START_REF] Gizon | Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows[END_REF]). It is known that in such situations cross correlations of randomly excited waves contain a lot of information about the medium. In particular, it has been demonstrated both theoretically and numerically that under certain assumptions such cross correlations are proportional to the imaginary part of the Green function in the frequency domain. This leads to inverse problems where some coefficient(s) in a wave equation have to be recovered given only the imaginary part of Green's function. The purpose of this paper is to prove some first uniqueness results for such inverse problems. For results on related problems in the time domain see, e.g., [START_REF] Garnier | Passive sensor imaging using cross correlations of noisy signals in a scattering medium[END_REF] and references therein.
Recall that for a random solution u(x, t) of a wave equation modeled as a stationary random process, the empirical cross correlation function over an interval [0, T ] with time lag τ is defined by
\[
C_T(x_1, x_2, \tau) := \frac{1}{T} \int_0^T u(x_1, t)\, u(x_2, t + \tau)\, dt, \qquad \tau \in \mathbb{R}.
\]
In numerous papers it has been demonstrated that under certain conditions the time derivative of the cross correlation function is proportional to the symmetrized outgoing Green function
\[
\frac{\partial}{\partial\tau}\, \mathbb{E}\big[C_T(x_1, x_2, \tau)\big] \sim -\big[G(x_1, x_2, \tau) - G(x_1, x_2, -\tau)\big], \qquad \tau \in \mathbb{R}.
\]
Taking a Fourier transform of the last equation with respect to τ one arrives at the relation
\[
\mathbb{E}\big[\widehat{C}_T(x_1, x_2, k)\big] \sim \frac{1}{ki}\Big[G^+(x_1, x_2, k) - \overline{G^+(x_1, x_2, k)}\Big] = \frac{2}{k}\,\Im G^+(x_1, x_2, k), \qquad k \in \mathbb{R}.
\]
Generally speaking, these relations have been shown to hold true in situations where the energy is equipartitioned, e.g. in an open domain the recorded signals are a superposition of plane waves in all directions with uncorrelated and identically distributed amplitudes or, in a bounded domain, that the amplitudes of normal modes are uncorrelated and identically distributed, see [START_REF] Garnier | Passive sensor imaging using cross correlations of noisy signals in a scattering medium[END_REF][START_REF] Roux | Ambient noise cross correlation in free space: Theoretical approach[END_REF][START_REF] Gizon | Computational helioseismology in the frequency domain: acoustic waves in axisymmetric solar models with flows[END_REF][START_REF] Snieder | Extracting the Green's function of attenuating heterogeneous acoustic media from uncorrelated waves[END_REF][START_REF] Snieder | Extracting earth's elastic wave response from noise measurements[END_REF]. This condition is fulfilled if the sources are uncorrelated and appropriately distributed over the domain or if there is enough scattering. This paper has mainly been motivated by two inverse problems in local helioseismology and in ocean tomography. In both cases we consider the problem of recovering the density ρ and the compressibility κ (or equivalently the sound velocity c = 1/√(ρκ)) in the acoustic wave equation
\[
\nabla\cdot\Big(\frac{1}{\rho(x)}\,\nabla p\Big) + \omega^2 \kappa(x)\, p = f, \qquad x \in \mathbb{R}^d,\ d \ge 2, \tag{1.1}
\]
with random sources f . We assume that correlation data proportional to the imaginary part of Green's function for this differential equation are available on the boundary of a bounded domain Ω ⊂ R d for two different values of the frequency ω > 0 and that ρ and κ are constant outside of Ω. As a main result we will show that ρ and κ are uniquely determined by such data in some open neighborhood of any reference model (ρ 0 , κ 0 ).
Let us first discuss the case of helioseismology in some more detail: Data on the line of sight velocity of the solar surface have been collected at high resolution for several decades by satellite based Doppler shift measurements (see [START_REF] Gizon | Local helioseismology: Three-dimensional imaging of the solar interior[END_REF]). Based on these data, correlations of acoustic waves excited by turbulent convection can be computed, which are proportional to the imaginary part of Green's functions under assumptions mentioned above. These data are used to reconstruct quantities in the interior of the Sun such as sound velocity, density, or flows (see e.g. [START_REF] Hanasoge | Seismic sounding of convection in the sun[END_REF]). The aim of this paper is to contribute to the theoretical foundations of such reconstruction method by showing local uniqueness in the simplified model above.
In the case of ocean tomography we consider measurements of correlations of ambient noise by hydrophones placed at the boundary of a circular area of interest. If the ocean is modeled as a layered separable waveguide, modes do not interact, and each horizontal mode satisfies the two-dimensional wave equation of the form (1.1) (see [START_REF] Burov | The possibility of reconstructing the seasonal variability of the ocean using acoustic tomography methods[END_REF][START_REF] Burov | The use of low-frequency noise in passive tomography of the ocean[END_REF]).
The problem above can be reduced to the following simpler problem of independent interest: Determine the real-valued potential v in the Schrödinger equation
\[
-\Delta\psi + v(x)\psi = k^2\psi, \qquad x \in \mathbb{R}^d,\ d \ge 2,\ k > 0, \tag{1.2}
\]
given the imaginary part of the outgoing Green function G + v (x, y, k) for one k > 0 and all x, y on the boundary of a domain containing the support of v. This problem is a natural fixed energy version of the multi-dimensional inverse spectral problem formulated originally by M.G. Krein, I.M. Gelfand and B.M. Levitan at a conference on differential equations in Moscow in 1952 (see [START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF]).
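For orientation, we already record here the substitution that links (1.1) and (1.2); it is exactly the one used in the proofs of Theorems 2.9 and 2.10 in subsection 2.3 below and is restated at this point only for the reader's convenience. With ρ_c, κ_c denoting the constant values of ρ, κ outside Ω (cf. (2.5a), (2.5b) below), set
\[
v(x) = \rho^{\frac12}(x)\,\Delta\rho^{-\frac12}(x) + \omega^2\big(\rho_c\kappa_c - \kappa(x)\rho(x)\big), \qquad k^2 = \omega^2\rho_c\kappa_c,
\]
so that the radiating Green function of (1.1) introduced in (2.6) below satisfies P_{ρ,κ}(x, y, ω) = ρ_c G_v^+(x, y, k), in particular for x, y ∈ ∂Ω; in this way data for the acoustic problem translate into data for the Schrödinger problem.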
In this connection recall that the Schrödinger operator admits the following spectral decomposition in L 2 (R d ):
\[
-\Delta + v(x) = \int_0^\infty \lambda^2\, dE_\lambda + \sum_{j=1}^N E_j\, e_j \otimes e_j, \qquad dE_\lambda = \frac{2}{\pi}\,\Im R_v^+(\lambda)\,\lambda\, d\lambda, \tag{1.3}
\]
where dE λ is the positive part of the spectral measure for -∆ + v(x), E j are nonpositive eigenvalues of -∆ + v(x) corresponding to normalized eigenfunctions e j , known as bound states, and
\[
R_v^+(\lambda) = (-\Delta + v - \lambda^2 - i0)^{-1}
\]
is the limiting absorption resolvent for -∆ + v(x), whose Schwartz kernel is given by G + v (x, y, λ), see, e.g., [START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF]Lem.14.6.1].
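As a concrete illustration (a classical special case, recalled here only to fix ideas): for v = 0 and d = 3 one has
\[
G_0^+(x, y, k) = \frac{e^{ik|x-y|}}{4\pi|x-y|}, \qquad \Im G_0^+(x, y, k) = \frac{\sin(k|x-y|)}{4\pi|x-y|},
\]
so ℑG_0^+ extends to a real-analytic function of (x, y), in contrast to ℜG_0^+, which inherits the singularity of the fundamental solution at x = y (cf. formulas (3.4) and (3.5) below).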
The plan of the rest of this paper is as follows: In the following section we present our main results, in particular algebraic relations between ℑG + v and ℜG + v on ∂Ω at fixed k (Theorem 2.4 and Theorem 2.5) and local uniqueness results given only the imaginary part of Green's function (Theorem 2.7 and Theorem 2.11). The remainder of the paper is devoted to the proof of these results (see Figure 1). After discussing the mapping properties of some boundary integral operators in section 3 we give the rather elementary proof of the relations between ℑG + v and ℜG + v at fixed k in section 4. By these relations, ℜG + v is uniquely determined by ℑG + v up to the signs of a finite number of certain eigenvalues. To fix these signs, we will have to derive an inertia theorem in section 5, before we can finally show in section 6 that ℜG + v is locally uniquely determined by ℑG + v and appeal to known uniqueness results for the full Green function to complete the proof of our uniqueness theorems. Finally, in section 7 we discuss the assumptions of our uniqueness theorems before the paper ends with some conclusions.
[Fig. 1. Schema of demonstration of Theorems 2.7 and 2.11: a flow diagram indicating how ℑG_v determines ℜG_v (via the operators T B T*, T A T*, the definitions (2.7), (2.9), (2.13), Theorem 2.5, Propositions 6.1 and 6.2, and the injectivity of T), and hence G_v and finally v.]
2. Main results. For the Schrödinger equation (1.2) we will assume that
\[
v \in L^\infty(\Omega, \mathbb{R}), \qquad v = 0 \ \text{ on } \mathbb{R}^d \setminus \Omega, \tag{2.1}
\]
\[
\Omega \ \text{ is an open bounded domain in } \mathbb{R}^d \ \text{ with } \partial\Omega \in C^{2,1}, \tag{2.2}
\]
where by definition ∂Ω ∈ C^{2,1} means that ∂Ω is locally a graph of a C^2 function with Lipschitz continuous second derivatives, see [22, p.90] for more details. Moreover, we suppose that
\[
k^2 \ \text{ is not a Dirichlet eigenvalue of } -\Delta + v(x) \ \text{ in } \Omega. \tag{2.3}
\]
For equation (1.2) at fixed k > 0 we consider the outgoing Green function G_v^+ = G_v^+(x, y, k), which is for any y ∈ ℝ^d the solution to the following problem:
\[
(-\Delta + v - k^2)\, G_v^+(\cdot, y, k) = \delta_y, \tag{2.4a}
\]
\[
\Big(\frac{\partial}{\partial|x|} - ik\Big) G_v^+(x, y, k) = o\big(|x|^{\frac{1-d}{2}}\big), \qquad |x| \to +\infty. \tag{2.4b}
\]
Recall that G_v(x, y, k) = G_v(y, x, k) by the reciprocity principle.
In the present work we consider, in particular, the following problem: For the acoustic equation (1.1) we impose the assumptions that
\[
\rho \in W^{2,\infty}(\mathbb{R}^d, \mathbb{R}), \quad \rho(x) > 0,\ x \in \Omega, \quad \rho(x) = \rho_c > 0,\ x \notin \Omega, \tag{2.5a}
\]
\[
\kappa \in L^\infty(\Omega, \mathbb{R}), \quad \kappa(x) = \kappa_c > 0,\ x \notin \Omega \tag{2.5b}
\]
for some constants ρ c and κ c . For equation (1.1) we consider the radiating Green function P = P ρ,κ (x, y, ω), which is the solution of the following problem:
\[
\nabla\cdot\Big(\frac{1}{\rho}\,\nabla P(\cdot, y, \omega)\Big) + \omega^2 \kappa\, P(\cdot, y, \omega) = -\delta_y, \qquad \omega > 0, \tag{2.6}
\]
\[
\Big(\frac{\partial}{\partial|x|} - i\omega\sqrt{\rho_c\kappa_c}\Big) P(x, y, \omega) = o\big(|x|^{\frac{1-d}{2}}\big), \qquad |x| \to +\infty.
\]
In the present work we consider the following problem for equation (1.1):
Problem 2.3. Determine the coefficients ρ, κ in the acoustic equation (1.1) from ℑP ρ,κ (x, y, ω) given at all x, y ∈ ∂Ω, and for a finite number of ω.
Notation. If X and Y are Banach spaces, we will denote the space of bounded linear operators from X to Y by L(X, Y ) and write L(X) := L(X, X). Moreover, we will denote the subspace of compact operators in L(X, Y ) by K(X, Y ), and the subset of operators with a bounded inverse by GL(X, Y ).
Besides, we denote by ‖·‖_∞ the norm in L^∞(Ω), and by ⟨·,·⟩ and ‖·‖_2 the scalar product and the norm in L^2(∂Ω). Furthermore, we use the standard notation H^s(∂Ω) for L^2-based Sobolev spaces of index s on ∂Ω (under the regularity assumption (2.2) we need |s| ≤ 3).
In addition, the adjoint of an operator A is denoted by A * .
2.1. Relations between ℜG and ℑG. For fixed k > 0 let us introduce the integral operator
G_v(k) ∈ L(L^2(∂Ω)) by
\[
(G_v(k)\varphi)(x) := \int_{\partial\Omega} G_v^+(x, y, k)\,\varphi(y)\, ds(y), \qquad x \in \partial\Omega, \tag{2.7}
\]
where ds(y) is the hypersurface measure on ∂Ω. For the basic properties of G v (k) see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Chapter 7]. Note that for the case v = 0 the Green function G + 0 is the outgoing fundamental solution to the Helmholtz equation, and G 0 is the corresponding single layer potential operator.
Recall that
\[
\sigma_\Omega(-\Delta) := \big\{\, k > 0 : k^2 \ \text{is a Dirichlet eigenvalue of} \ -\Delta \ \text{in} \ \Omega \,\big\}
\]
is a discrete subset of (0, +∞) without accumulation points.
Theorem 2.4. Suppose that Ω, k, and v satisfy the conditions (2.1), (2.2), (2.3). Then:
1. The mapping
\[
(0, +\infty) \setminus \sigma_\Omega(-\Delta) \to L\big(H^1(\partial\Omega), L^2(\partial\Omega)\big), \qquad \lambda \mapsto Q(\lambda) := \Im\big(G_0^{-1}(\lambda)\big) \tag{2.8}
\]
has a unique continuous extension to (0, +∞). In the following we will often write Q instead of Q(k).
2. G_v(k) ∈ L(L^2(∂Ω), H^1(∂Ω)) and the operators
\[
A := \Re G_v(k), \qquad B := \Im G_v(k) \tag{2.9}
\]
satisfy the following relations:
\[
AQA + BQB = -B, \tag{2.10a}
\]
\[
AQB - BQA = 0. \tag{2.10b}
\]
Theorem 2.4 is proved in subsection 4.1.
We would like to emphasize that relations (2.10a), (2.10b) are valid in any dimension d ≥ 1.
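As a purely algebraic aside (not needed verbatim in the sequel): since A and B have real symmetric Schwartz kernels by the reciprocity principle recalled after (2.4), the adjoint of G_v(k) is A - iB, and the two relations (2.10a)-(2.10b) combine into the single identity
\[
G_v(k)^*\, Q(k)\, G_v(k) = (A - iB)\,Q\,(A + iB) = AQA + BQB + i\,(AQB - BQA) = -\,\Im G_v(k).
\]
Combined with relation (2.12) below, Q(k) = -T^*(k)T(k), this formally gives ℑG_v(k) = (T(k)G_v(k))^*(T(k)G_v(k)) ≥ 0 as an operator on L^2(∂Ω).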
For the next theorem, recall that the exterior boundary value problem
\[
\Delta u + k^2 u = 0 \ \text{ in } \mathbb{R}^d \setminus \Omega, \tag{2.11a}
\]
\[
u = u_0 \ \text{ on } \partial\Omega, \tag{2.11b}
\]
\[
\frac{\partial u}{\partial|x|} - iku = o\big(|x|^{(1-d)/2}\big) \quad \text{as } |x| \to \infty \tag{2.11c}
\]
has a unique solution for all u_0 ∈ C(∂Ω), which has the asymptotic behavior
\[
u(x) = \frac{e^{ik|x|}}{|x|^{(d-1)/2}}\, u_\infty\Big(\frac{x}{|x|}\Big)\Big(1 + O\Big(\frac{1}{|x|}\Big)\Big), \qquad |x| \to \infty.
\]
Here u_∞ ∈ L^2(S^{d-1}) is called the farfield pattern of u.
Theorem 2.5. Suppose that Ω, k, and v satisfy the conditions (2.1), (2.2), and (2.3). Then:
I. The operator C(∂Ω) → L^2(S^{d-1}), u_0 ↦ √k u_∞, mapping Dirichlet boundary values u_0 to the scaled farfield pattern u_∞ of the solution to (2.11), has a continuous extension to an operator T(k) ∈ L(L^2(∂Ω), L^2(S^{d-1})), and T(k) is compact, injective, and has dense range. Moreover, Q(k) defined in Theorem 2.4 has a continuous extension to L(L^2(∂Ω)) satisfying
\[
Q(k) = -T^*(k)\, T(k). \tag{2.12}
\]
II. The operators Â, B̂ ∈ L(L^2(S^{d-1})) defined by
\[
\hat{A} := \Re \hat{G}_v(k), \qquad \hat{B} := \Im \hat{G}_v(k), \qquad \hat{G}_v(k) := T(k)\, G_v(k)\, T^*(k) \tag{2.13}
\]
are compact and symmetric and satisfy the relations
\[
\hat{A}^2 = \hat{B} - \hat{B}^2, \tag{2.14a}
\]
\[
\hat{A}\hat{B} = \hat{B}\hat{A}. \tag{2.14b}
\]
III. The operators Â, B̂ are simultaneously diagonalisable in L^2(S^{d-1}). Moreover, if Ĝ_v(k)f = (λ_A + iλ_B)f for some f ≠ 0 and λ_A, λ_B ∈ ℝ, then
\[
\lambda_A^2 = \lambda_B - \lambda_B^2. \tag{2.15}
\]
Theorem 2.5 is proved in subsection 4.2.
We could replace T(k) by any operator satisfying (2.12) in most of this paper, e.g. (-Q(k))^{1/2}. However, G_v^*(k) has a physical interpretation given in Lemma 7.1, and this will be used to verify condition (2.16) below.
In analogy to the relations (2.10a) and (2.10b), the relations (2.14a) and (2.14b) are also valid in any dimension d ≥ 1.
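A convenient way to read (2.14a)-(2.14b) and (2.15), recorded here only as a geometric aside: since Â and B̂ are symmetric, combining the two relations gives
\[
\hat{G}_v(k)^*\,\hat{G}_v(k) = \hat{A}^2 + \hat{B}^2 + i\big(\hat{A}\hat{B} - \hat{B}\hat{A}\big) = \hat{B} = \Im \hat{G}_v(k),
\]
and on the level of eigenvalues relation (2.15) is equivalent to |λ|^2 = ℑλ, i.e. λ_A^2 + (λ_B - 1/2)^2 = 1/4; in other words, all eigenvalues of Ĝ_v(k) lie on the circle of radius 1/2 centered at i/2 in the complex plane.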
Remark 2.6. The algebraic relations between ℜG + v and ℑG + v given in Theorem 2.4 and Theorem 2.5 involve only one frequency in contrast to well-known Kramers-Kronig relations which under certain conditions are as follows:
\[
\Re G_v^+(x, y, k) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{+\infty} \frac{\Im G_v^+(x, y, k')}{k' - k}\, dk', \qquad
\Im G_v^+(x, y, k) = -\frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{+\infty} \frac{\Re G_v^+(x, y, k')}{k' - k}\, dk',
\]
where x ≠ y, k ∈ ℝ for d = 3 or k ∈ ℝ\{0} for d = 2, and G_v^+(x, y, -k) := \overline{G_v^+(x, y, k)}, G_0^+(x, y, -k) := \overline{G_0^+(x, y, k)}, k > 0.
In this simplest form the Kramers-Kronig relations are valid, for example, for the Schrödinger equation (1.2) under conditions (2.1), (2.2), d = 2, 3, if the discrete spectrum of -∆ + v in L 2 (R d ) is empty and 0 is not a resonance (that is, a pole of the meromorphic continuation of the resolvent k → (-∆ + v -k 2 ) -1 ).
2.2. Identifiability of v from ℑG_v. We suppose that
\[
\text{if } \lambda_1, \lambda_2 \text{ are eigenvalues of } \hat{G}_v(k) \text{ with } \Im\lambda_1 = \Im\lambda_2, \text{ then } \Re\lambda_1 = \Re\lambda_2, \tag{2.16}
\]
where Ĝ_v(k) = Â + iB̂ is the operator defined in Theorem 2.5. Under this assumption any eigenbasis of B̂ in L^2(S^{d-1}) is also an eigenbasis for Â in L^2(S^{d-1}) in view of Theorem 2.5 (III).
Theorem 2.7. Let Ω satisfy (2.2), d ≥ 2, let v_0 satisfy (2.1), and let k > 0 be such that ℜG_{v_0}(k) is injective in H^{-1/2}(∂Ω) and (2.16) holds true with v = v_0. Then there exists δ = δ(Ω, k, v_0) > 0 such that for any v_1, v_2 satisfying (2.1) and
\[
\|v_1 - v_0\|_\infty \le \delta, \qquad \|v_2 - v_0\|_\infty \le \delta,
\]
the equality ℑG^+_{v_1}(x, y, k) = ℑG^+_{v_2}(x, y, k) for all x, y ∈ ∂Ω implies that v_1 = v_2.
Theorem 2.7 is proved in subsection 6.1. In section 7 we present results indicating that the assumptions of this theorem are "generically" satisfied.
We also mention the following simpler uniqueness result for ℜG + v based on analytic continuation if ℑG + v is given not only for one frequency, but for an interval of frequencies. This uniqueness result is even global. However, analytic continuation is notoriously unstable, and computing ℑG + v on an interval of frequencies from time dependent data would require an infinite time window. Therefore, it is preferable to work with a discrete set of frequencies. [START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF][START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF].
Proposition 2.8. Let Ω satisfy (2.2), d ∈ {2, 3}, and let v_1, v_2 satisfy (2.1). Suppose that the discrete spectrum of the operators -Δ + v_j in L^2(ℝ^d) is empty and 0 is not a resonance (that is, a pole of the meromorphic continuation of the resolvent R^+_{v_j}(k) = (-Δ + v_j - k^2 - i0)^{-1}), j = 1, 2. Besides, let x, y ∈ ℝ^d, x ≠ y, be fixed. Then if ℑG^+_{v_1}(x, y, k) = ℑG^+_{v_2}(x, y, k) for all k ∈ (k_0 - ε, k_0 + ε) for some fixed k_0 > 0, ε > 0, then G^+_{v_1}(x, y, k) = G^+_{v_2}(x, y, k) for all k > 0. In addition, if ℑG^+_{v_1}(x, y, k) = ℑG^+_{v_2}(x, y, k) for all x, y ∈ ∂Ω, k ∈ (k_0 - ε, k_0 + ε), then v_1 = v_2.
Proof. Under the assumptions of Proposition 2.8 the functions G^+_{v_j}(x, y, k) at fixed x ≠ y admit analytic continuation to a neighborhood of each k ∈ ℝ (k ≠ 0 for d = 2) in ℂ. It follows that ℑG^+_{v_j}(x, y, k) are real-analytic functions of k ∈ ℝ (k ≠ 0 for d = 2). Moreover, ℑG^+_{v_j}(x, y, -k) = -ℑG^+_{v_j}(x, y, k) for all k > 0. Hence, the equality ℑG^+_{v_1}(x, y, k) = ℑG^+_{v_2}(x, y, k) for k ∈ (k_0 - ε, k_0 + ε) implies the same equality for all k ∈ ℝ (k ≠ 0 for d = 2). Taking into account the Kramers-Kronig relations recalled in Remark 2.6, we obtain, in particular, that G^+_{v_1}(x, y, k) = G^+_{v_2}(x, y, k), k > 0. Moreover, the equality G^+_{v_1}(x, y, k) = G^+_{v_2}(x, y, k), x, y ∈ ∂Ω, k > 0, implies v_1 = v_2, see, e.g.,
2.3. Identifiability of ρ and κ from ℑP_{ρ,κ}. Let P_{ρ,κ}(x, y, ω) be the function of (2.6) and define P_{ρ,κ,ω}, P̂_{ρ,κ,ω} as
\[
(P_{\rho,\kappa,\omega} u)(x) := \int_{\partial\Omega} P_{\rho,\kappa}(x, y, \omega)\, u(y)\, ds(y), \qquad x \in \partial\Omega,\ u \in H^{-\frac12}(\partial\Omega),
\]
\[
\hat{P}_{\rho,\kappa,\omega} := T(k)\, P_{\rho,\kappa,\omega}\, T^*(k), \qquad k := \omega\sqrt{\rho_c\kappa_c},
\]
where T (k) is the same as in Theorem 2.5. We suppose that
\[
\text{if } \lambda_1, \lambda_2 \text{ are eigenvalues of } \hat{P}_{\rho,\kappa,\omega} \text{ with } \Im\lambda_1 = \Im\lambda_2, \text{ then } \Re\lambda_1 = \Re\lambda_2. \tag{2.17}
\]
Let W^{2,\infty}(Ω) denote the L^∞-based Sobolev space of index 2.
The following theorems are local uniqueness results for the acoustic equation (1.1).
Theorem 2.9. Let Ω satisfy (2.2), d ≥ 2, and suppose that ρ_0, κ_0 satisfy (2.5a), (2.5b) for some known ρ_c, κ_c. Let ω_1, ω_2 be such that ℜP_{ρ_0,κ_0,ω_j} is injective in H^{-1/2}(∂Ω) and (2.17) holds true with ρ = ρ_0, κ = κ_0, ω = ω_j, j = 1, 2. Besides, let ρ_1, κ_1 and ρ_2, κ_2 be two pairs of functions satisfying (2.5a), (2.5b). Then there exist constants δ_{1,2} = δ_{1,2}(Ω, ω_1, ω_2, κ_0, ρ_0) such that if
\[
\|\rho_1 - \rho_0\|_{W^{2,\infty}} \le \delta_1, \quad \|\rho_2 - \rho_0\|_{W^{2,\infty}} \le \delta_1, \quad \|\kappa_1 - \kappa_0\|_\infty \le \delta_2, \quad \|\kappa_2 - \kappa_0\|_\infty \le \delta_2,
\]
then the equality ℑP_{ρ_1,κ_1}(x, y, ω_j) = ℑP_{ρ_2,κ_2}(x, y, ω_j) for all x, y ∈ ∂Ω and j ∈ {1, 2} implies that ρ_1 = ρ_2 and κ_1 = κ_2.
Proof of Theorem 2.9. Put
\[
v_j(x, \omega) = \rho_j^{\frac12}(x)\,\Delta\rho_j^{-\frac12}(x) + \omega^2\big(\rho_c\kappa_c - \kappa_j(x)\rho_j(x)\big), \qquad k^2 = \omega^2\rho_c\kappa_c.
\]
Then P_{ρ_j,κ_j}(x, y, ω) = ρ_c G^+_{v_j}(x, y, k), where G^+_{v_j} denotes the Green function for equation (1.2) defined according to (2.4). By assumptions we obtain that
\[
\Im G^+_{v_1}(x, y, k) = \Im G^+_{v_2}(x, y, k), \qquad x, y \in \partial\Omega, \quad k = k_1, k_2, \quad k_j = \omega_j\sqrt{\rho_c\kappa_c}.
\]
Using Theorem 2.7, we obtain that
\[
v_1(x, \omega_j) = v_2(x, \omega_j), \qquad x \in \Omega, \quad j = 1, 2.
\]
Together with the definition of v_j it follows that
\[
\rho_1^{\frac12}\,\Delta\rho_1^{-\frac12} = \rho_2^{\frac12}\,\Delta\rho_2^{-\frac12} \qquad\text{and}\qquad \kappa_1 = \kappa_2.
\]
In turn, the equality ρ_1^{1/2}Δρ_1^{-1/2} = ρ_2^{1/2}Δρ_2^{-1/2} together with the boundary conditions ρ_1|_{∂Ω} = ρ_2|_{∂Ω} = ρ_c imply that ρ_1 = ρ_2, see, e.g., [1].
Theorem 2.10. Let Ω satisfy (2.2), d ≥ 2, and suppose that ρ 0 , κ 0 satisfy (2.5a), (2.5b) for some known ρ c , κ c . Let ω be such that ℜP ρ0,κ0,ω is injective in H -1 2 (∂Ω) and (2.17) holds true with ρ = ρ 0 , κ = κ 0 . Besides, let κ 1 , κ 2 satisfy (2.5b). Then there exists δ = δ(Ω, ω, κ 0 , ρ 0 ) such that the bounds
\[
\|\kappa_1 - \kappa_0\|_\infty < \delta, \qquad \|\kappa_2 - \kappa_0\|_\infty < \delta,
\]
and the equality ℑP_{ρ_0,κ_1}(x, y, ω) = ℑP_{ρ_0,κ_2}(x, y, ω) for all x, y ∈ ∂Ω imply that κ_1 = κ_2.
Proof of Theorem 2.10. In analogy to the proof of Theorem 2.9, put
\[
v_j(x, \omega) = \rho_0^{\frac12}(x)\,\Delta\rho_0^{-\frac12}(x) + \omega^2\big(\rho_c\kappa_c - \kappa_j(x)\rho_0(x)\big).
\]
Then P_{ρ_0,κ_j}(x, y, ω) = ρ_c G^+_{v_j}(x, y, k), where G^+_{v_j} denotes the Green function for equation (1.2) defined according to (2.4). By assumptions we obtain that
\[
\Im G^+_{v_1}(x, y, k) = \Im G^+_{v_2}(x, y, k), \qquad x, y \in \partial\Omega.
\]
Using Theorem 2.7 we obtain that v_1(x, ω) = v_2(x, ω), x ∈ Ω. Now it follows from the definition of v_j that κ_1 = κ_2.
The following uniqueness theorem for the coefficient κ only does not require smallness of this coefficient, but only smallness of the frequency ω. Note that it is not an immediate corollary to Theorem 2.7 since the constant δ in Theorem 2.7 depends on k.
Theorem 2.11. Let Ω satisfy (2.2), d ≥ 2, and assume that ρ ≡ 1 and κ c = 1 so that (1.1) reduces to the Helmholtz equation
∆p + ω 2 κ(x)p = f.
Moreover, suppose that κ₁ and κ₂ are two functions satisfying (2.5b) and
$$\|\kappa_1\|_{\infty} \le M, \qquad \|\kappa_2\|_{\infty} \le M$$
for some M > 0. Then there exists ω₀ = ω₀(Ω, M) > 0 such that if $\Im P_{1,\kappa_1}(x, y, \omega) = \Im P_{1,\kappa_2}(x, y, \omega)$ for all $x, y \in \partial\Omega$, for some fixed $\omega \in (0, \omega_0]$, then $\kappa_1 = \kappa_2$.
Theorem 2.11 is proved in section 6.
Mapping properties of some boundary integral operators.
In what follows we use the following notation:
$$R^+_v(k)f(x) = \int_{\Omega} G^+_v(x, y, k)\,f(y)\, dy, \qquad x \in \Omega,\ k > 0.$$
Remark 3.1. The operator $R^+_v(k)$ is the restriction from $\mathbb{R}^d$ to Ω of the outgoing (limiting absorption) resolvent $k \mapsto (-\Delta + v - k^2 - i0)^{-1}$. It is known that if v satisfies (2.1) and $k^2$ is not an embedded eigenvalue of $-\Delta + v(x)$ in $L^2(\mathbb{R}^d)$, then $R^+_v(k) \in L\bigl(L^2(\Omega), H^2(\Omega)\bigr)$, see, e.g., [START_REF] Agmon | Spectral properties of Schrödinger operators and scattering theory[END_REF], Thm.4.2]. In turn, it is known that for the operator $-\Delta + v(x)$ with v satisfying (2.1) there are no embedded eigenvalues, see [START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF], Thm.14.5.5 & 14.7.2].
Recall that the free radiating Green's function is given in terms of the Hankel functions $H^{(1)}_\nu$ of the first kind of order ν by
$$(3.1)\qquad G^+_0(x, y, k) = \frac{i}{4}\Bigl(\frac{k}{2\pi|x-y|}\Bigr)^{\nu} H^{(1)}_\nu(k|x-y|) \qquad\text{with } \nu := \frac{d}{2} - 1.$$
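As a quick sanity check of formula (3.1), a minimal numerical sketch in Python (assuming NumPy and SciPy are available; the variable names are illustrative choices) compares the Hankel-function expression for d = 3 with the familiar closed form $e^{ikr}/(4\pi r)$ of the free outgoing Green's function in three dimensions:
```python
import numpy as np
from scipy.special import hankel1

def G0_plus(r, k, d=3):
    """Free outgoing Green's function of formula (3.1), evaluated at r = |x - y|."""
    nu = d / 2.0 - 1.0
    return 0.25j * (k / (2.0 * np.pi * r)) ** nu * hankel1(nu, k * r)

# For d = 3 the formula reduces to exp(ikr)/(4*pi*r).
k, r = 2.7, np.linspace(0.1, 5.0, 50)
closed_form = np.exp(1j * k * r) / (4.0 * np.pi * r)
assert np.allclose(G0_plus(r, k, d=3), closed_form)
```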
In addition, we denote the single layer potential operator for the Laplace equation by
(3.2) (Ef )(x) := ∂Ω E(x -y)f (y) ds(y), x ∈ ∂Ω with E(x -y) := -1 2π ln |x -y|, d = 2, 1 d(d-2)ω d |x -y| 2-d , d ≥ 3,
where ω d is the volume of the unit d-ball and E is the fundamental solution for the Laplace equation in
R d . Note that -∆ x E(x -y) = δ y (x). Lemma 3.2. Let v, v 0 ∈ L ∞ (Ω, R) and let k > 0 be fixed. There exist c 1 = c 1 (Ω, k, v 0 ), δ 1 = δ 1 (Ω, k, v 0 ) such that if v -v 0 ∞ ≤ δ 1 , then R + v (k)f H 2 (Ω) ≤ c 1 (Ω, k, v 0 ) f L 2 (Ω) , for any f ∈ L 2 (Ω).
In addition, for any M > 0 there exist constants
c ′ 1 = c ′ 1 (Ω, M ) and k 1 = k 1 (Ω, M ) such that if v ∞ ≤ M , then R + k 2 v (k)f H 2 (Ω) ≤ c ′ 1 (Ω, M ) f L 2 (Ω) , d ≥ 3, | ln k| c ′ 1 (Ω, M ) f L 2 (Ω) , d = 2
for all f ∈ L 2 (Ω) and all k ∈ (0, k 1 ).
Proof. We begin by proving the first statement of the lemma. The operators R + v (k) and R + v0 (k) are related by a resolvent identity in L 2 (Ω):
(3.3) R + v (k) = R + v0 (k) Id + (v -v 0 )R + v0 (k) -1 ,
see, e.g., [19, p.248] for a proof. The resolvent identity is valid, in particular, if
v -v 0 ∞ < R + v0 (k) -1 , where the norm is taken in L L 2 (Ω), H 2 (Ω) . It follows from (3.3) that R + v (k) ≤ R + v0 (k) 1 -v -v 0 ∞ R + v0 (k)
, where the norms are taken in L L 2 (Ω), H 2 (Ω) . Taking δ 1 < R + v0 (k) -1 , we get the first statement of the lemma.
To prove the second statement of the lemma, we begin with the case of v = 0. The Schwartz kernel of $R^+_0(k)$ is given by the outgoing Green function for the Helmholtz equation defined in formula (3.1). In particular, $\Im G^+_0(x-y, k)$ satisfies
$$(3.4)\qquad \Im G^+_0(x, y, k) = \tfrac14 (2\pi)^{-\nu} k^{\nu} |x-y|^{-\nu} J_\nu(k|x-y|) = \frac{k^{2\nu}}{2^{2\nu+2}\pi^{\nu}\,\nu!}\bigl(1 + O(z^2)\bigr)$$
with the Bessel function $J_\nu = \Re H^{(1)}_\nu$ of order ν, where $z = k|x-y|$ and O is an entire function with O(0) = 0. In addition,
$$(3.5)\qquad \Re G^+_0(x-y, k) = \begin{cases} E(x-y) - \frac{1}{2\pi}\bigl(\ln\frac{k}{2} + \gamma\bigr)\bigl(1 + O_2(z^2)\bigr) + \widetilde O_2(z^2), & d = 2,\\[2pt] E(x-y)\bigl(1 + O_d(z^2)\bigr), & d \ge 3 \text{ odd},\\[2pt] E(x-y)\bigl(1 + O_d(z^2)\bigr) - \frac{k^{d-2}}{2^{2\nu+3}\pi^{\nu+1}\,\nu!}\ln(z/2)\bigl(1 + \widetilde O_d(z^2)\bigr), & d \ge 4 \text{ even},\end{cases}$$
where $z = k|x-y|$, $O_d$ and $\widetilde O_d$ are entire functions with $O_d(0) = 0 = \widetilde O_d(0)$, and γ is the Euler-Mascheroni constant, see, e.g., [22, p.279]. These formulas imply the second statement of the present lemma for v = 0. Using the resolvent identity (3.3) we obtain
$$\bigl\|R^+_{k^2 v}(k)\bigr\| \le \frac{\bigl\|R^+_0(k)\bigr\|}{1 - k^2 M \bigl\|R^+_0(k)\bigr\|}, \qquad \text{if } k^2 M \bigl\|R^+_0(k)\bigr\| < 1,$$
where the norms are taken in $L\bigl(L^2(\Omega), H^2(\Omega)\bigr)$. This inequality together with the second statement of the lemma for v = 0 implies the second statement of the lemma for general v.
Lemma 3.3. Let $v_0, v_1, v_2 \in L^\infty(\Omega, \mathbb{R})$. Then for any k > 0
$$(3.6)\qquad G_{v_1}(k) - G_{v_2}(k) \in L\bigl(H^{-\frac32}(\partial\Omega), H^{\frac32}(\partial\Omega)\bigr).$$
In addition, there exist $c_2(\Omega, k, v_0)$, $\delta_2(\Omega, k, v_0)$ such that if $\|v_1 - v_0\|_\infty \le \delta_2$, $\|v_2 - v_0\|_\infty \le \delta_2$, then
$$(3.7)\qquad \bigl\|G_{v_1}(k) - G_{v_2}(k)\bigr\| \le c_2(\Omega, k, v_0)\,\|v_1 - v_2\|_\infty,$$
where the norm is taken in $L\bigl(H^{-\frac32}(\partial\Omega), H^{\frac32}(\partial\Omega)\bigr)$. Furthermore, for any M > 0 there exist constants $c_2' = c_2'(\Omega, M)$ and $k_2 = k_2(\Omega, M)$ such that if $\|v_1\|_\infty \le M$, $\|v_2\|_\infty \le M$, then
$$(3.8)\qquad \bigl\|G_{k^2 v_1}(k) - G_{k^2 v_2}(k)\bigr\| \le \begin{cases} c_2'(\Omega, M)\,k^2\,\|v_1 - v_2\|_\infty, & \text{for } d \ge 3,\\ c_2'(\Omega, M)\,k^2|\ln k|^2\,\|v_1 - v_2\|_\infty, & \text{for } d = 2,\end{cases}$$
holds true for all $k \in (0, k_2)$, where the norms are taken in $L\bigl(H^{-\frac32}(\partial\Omega), H^{\frac32}(\partial\Omega)\bigr)$.
Proof. Note that
(3.9) G vj (k) = γR + vj (k)γ * , j = 1, 2,
where
γ ∈ L H s (Ω), H s-1 2 (∂Ω) and γ * ∈ L H -s+ 1 2 (∂Ω), H -s (Ω)
for s ∈ ( 1 2 , 2] are the trace map and its dual (see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.3.37]). Here H s (Ω) denotes the space of distributions u on Ω, which are the restriction of some U ∈ H s (R d ) to Ω, i.e. u = U | Ω , whereas H s (Ω) denotes the closure of the space of distributions on Ω in H s (R d ) (see [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]). Recall that for a Lipschitz domain we have H s (Ω) * = H -s (Ω) for all s ∈ R ( [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm. 3.30]).
The operators R + v1 (k) and R + v2 (k) are subject to the resolvent identity
(3.10) R + v2 (k) -R + v1 (k) = R + v1 (k) v 1 -v 2 R + v2 (k),
see, e.g., [19, p.248] for the proof. Together with (3.10) we obtain that
(3.11) G v2 (k) -G v1 (k) = γR + v1 (k)(v 1 -v 2 )R + v2 (k)γ * .
It follows from Remark 3.1 and from a duality argument that
R + v (k) is bounded in L L 2 (Ω), H 2 (Ω) and in L H -2 (Ω), L 2 (Ω) .
Taking into account that all maps in the sequence
H -3 2 (∂Ω) γ * -→ H -2 (Ω) R + v 2 (k) -→ L 2 (Ω) v1-v2 -→ L 2 (Ω) R + v 1 (k) -→ H 2 (Ω) γ -→ H 3 2 (∂Ω)
are continuous, we get (3.6). It follows from (3.11) that there exists
c ′′ 2 = c ′′ 2 (Ω) such that G v1 (k) -G v2 (k) ≤ c ′′ 2 (Ω) R + v1 (k) R + v2 (k) v 1 -v 2 ∞ ,
where the norm on the left is taken in L H -3 2 (∂Ω), H 3 2 (∂Ω) , and the norms on the right are taken in L L 2 (Ω), H 2 (Ω) . Using this estimate and Lemma 3.2 we obtain the second assertion of the present lemma. Using the estimate for a pair of potentials (k 2 v 1 , k 2 v 2 ) instead of (v 1 , v 2 ) and using Lemma 3.2 we obtain the third assertion of the present lemma. Lemma 3.4. Suppose that (2.2) holds true and v satisfies (2.1). Then G v (k)and
ℜG v (k) are Fredholm operators of index zero in spaces L(H s-1 2 (∂Ω), H s+ 1 2 (∂Ω)), s ∈ [-1, 1], real analytic in k ∈ (0, +∞). If, in addition, v ∈ C ∞ (R d , R), then G v (k) is boundedly invertible in these spaces if and only if (2.3) holds.
Proof. It is known that $G_0(k)$ is Fredholm of index zero in the aforementioned spaces, see [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.17]. Besides, it follows from Lemma 3.3 that $G_v(k) - G_0(k)$ is compact in each of the aforementioned spaces, so that $G_v(k)$ is Fredholm of index zero, since the index is invariant with respect to compact perturbations. Moreover, it follows from (3.4) that $\Im G_0(k)$ has a smooth kernel, which implies that $\Re G_v(k)$ is also Fredholm of index zero.
It follows from [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.17] and [24, Thm.1.6] that for any $-1 \le s \le 1$ the operator $G_v(k)$ is invertible in $L\bigl(H^{s-\frac12}(\partial\Omega), H^{s+\frac12}(\partial\Omega)\bigr)$ if and only if $k^2$ is not a Dirichlet eigenvalue of $-\Delta + v(x)$. Now, since v satisfies (2.1), the operator $-\Delta + v(x)$ has no embedded point spectrum in $L^2(\mathbb{R}^d)$ according to [START_REF] Agmon | Spectral properties of Schrödinger operators and scattering theory[END_REF], Thm.4.2] and [START_REF] Hörmander | The Analysis of Linear Partial Differential Operators II[END_REF], Thm.14.5.5 & 14.7.2]. It follows that $R^+_v(k)$ has analytic continuation to a neighborhood of each k > 0 in $\mathbb{C}$ and this is also true for $G_v(k)$ in view of formula (3.9). Hence, $\Re G_v(k)$ is real analytic for k > 0 and the same is true for $\Re G_0(k)$.
Let us introduce the operator
$$(3.12)\qquad W(k) := \begin{cases} \Re G_0(k) - E, & d \ge 3,\\ \Re G_0(k) - E + \frac{1}{2\pi}\bigl(\ln\frac{k}{2} + \gamma\bigr)\langle 1, \cdot\rangle\, 1, & d = 2,\end{cases}$$
where γ is the Euler-Mascheroni constant, and $\langle 1, \cdot\rangle$ denotes the scalar product with 1 in $L^2(\partial\Omega)$.
Lemma 3.5. There exist $c_3 = c_3(\Omega)$, $k_3 = k_3(\Omega)$ such that W(k) belongs to $K\bigl(H^{-\frac12}(\partial\Omega), H^{\frac12}(\partial\Omega)\bigr)$ and
$$\|W(k)\| \le \begin{cases} k^2 c_3(\Omega), & d = 3 \text{ or } d \ge 5,\\ k|\ln k|\,c_3(\Omega), & d = 4,\\ k^2|\ln k|\,c_3(\Omega), & d = 2,\end{cases}$$
for all $k \in (0, k_3)$, where the norm is taken in $L\bigl(H^{-\frac12}(\partial\Omega), H^{\frac12}(\partial\Omega)\bigr)$.
Proof. It follows from (3.5) that $W(k) \in L\bigl(L^2(\partial\Omega), H^2(\partial\Omega)\bigr)$. By duality and approximation we get $W(k) \in K\bigl(H^{-\frac12}(\partial\Omega), H^{\frac12}(\partial\Omega)\bigr)$. The estimates also follow from (3.5), taking into account that $k\ln k = o(1)$ as $k \searrow 0$.
4. Derivation of the relations between ℜG_v and ℑG_v.
4.1. Proof of Theorem 2.4. It follows from Lemma 3.4 for v = 0 that if λ > 0 is such that $\lambda^2$ is not a Dirichlet eigenvalue of $-\Delta$ in Ω, then $Q(\lambda) \in L(H^1(\partial\Omega), L^2(\partial\Omega))$
is well defined as stated in (2.8). It also follows from Lemma 3.4
that G v (k) ∈ L(L 2 (∂Ω), H 1 (∂Ω)) if (2.3) holds.
To prove the remaining assertions of Theorem 2.4 we suppose first that in addition to the initial assumptions of Theorem 2.4 the condition (2.3) holds true for v = 0. Let us define the Dirichlet-to-Neumann map Φ v ∈ L H 1/2 (∂Ω), H -1/2 (∂Ω) by Φ v f := ∂ψ ∂ν where ψ is the solution to
-∆ψ + vψ = k 2 ψ in Ω, ψ = f on ∂Ω,
and ν is the outward normal vector on ∂Ω. Moreover, let Φ 0 denote the corresponding operator for v = 0. It can be shown (see e.g., [START_REF] Nachman | Reconstructions from boundary measurements[END_REF]Thm.1.6]) that under the assumptions of Theorem 2.4 together with (2.3) for v = 0 these operators are related to G and G 0 as follows:
(4.1) G -1 v -G -1 0 = Φ v -Φ 0 .
For an operator T between complex function spaces let T f := T f denote the operator with complex conjugate Schwarz kernel, and note that T
-1 = T -1 if T is invertible.
Since v is assumed to be real-valued, it follows that Φ v = Φ v . Therefore, taking the complex conjugate in (4.1), we obtain
(G v ) -1 -(G 0 ) -1 = Φ v -Φ 0 .
Combining the last two equations yields
(4.2) (G -1 v ) -(G v ) -1 = (G 0 ) -1 -(G 0 ) -1 .
Together with the definitions (2.9) of A, B, and Q, we obtain the relation
(A + iB)iQ(A -iB) = -iB,
which can be rewritten as the two relations (2.10). Thus, Theorem 2.4 is proved under the additional assumption that (2.3) is satisfied for v = 0. Moreover, it follows from formula (4.2) that the mapping (2.8) extends to all k > 0, i.e. the assumption that k 2 is not a Dirichlet eigenvalue of -∆ in Ω can be dropped. More precisely, for any k > 0 one can always find v satisfying (2.1), (2.3) such that the expression on the hand side left of formula (4.2) is well-defined and can be used to define Q(k). The existence of such v follows from monotonicity and upper semicontinuity of Dirichlet eigenvalues.
This completes the proof of Theorem 2.4.
4.2.
Proof of Theorem 2.5. Part I. Let k > 0 be fixed. The fact that T (k) extends continuously to L 2 (∂Ω) and is injective there is shown in [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.28]. More precisely, the injectivity of T (k) in L(L 2 (∂Ω), L 2 (S d-1 )) is proved in this book only in dimension d = 3, but the proof works in any dimension d ≥ 2. In addition,
T (k) ∈ L L 2 (∂Ω), L 2 (S d-1
) is compact as an operator with continuous integral kernel (see [8, (3.
58)]).
It follows from the considerations in the last paragraph of the proof of Theorem 2.4 that for any k > 0 there exists
$v \in C^\infty(\mathbb{R}^d, \mathbb{R})$ with $\operatorname{supp} v \subset \Omega$ such that $k^2$ is not a Dirichlet eigenvalue of $-\Delta + v(x)$ in Ω and such that $Q(k) = \Im G^{-1}_v(k)$. Recall the formula
$$(4.3)\qquad \Im G^+_v(x, y, k) = c_1(d, k)\int_{S^{d-1}} \psi^+_v(x, k\omega)\,\overline{\psi^+_v(y, k\omega)}\, ds(\omega), \qquad c_1(d, k) := \frac{1}{8\pi}\Bigl(\frac{k}{2\pi}\Bigr)^{d-2},$$
where ψ + v (x, kω) is the total field corresponding to the incident plane wave e ikωx (i.e. ψ + v (•, kω) solves (1.2) and ψ + v (•, kω) -e ikω• satisfies the Sommerfeld radiation condition (2.11c)), see, e.g., [23, (2.26)]. It follows that the operator ℑG v (k) admits the factorization
ℑG v (k) = c 1 (d, k)H v (k)H * v (k),
where the operator
H v (k) ∈ L(L 2 (S d-1 ), L 2 (∂Ω)
) is defined as follows:
H v (k)g (x) := S d-1 ψ + v (x, kω)g(ω) ds(ω)
Recall that H v (k) with v = 0 is the Herglotz operator, see, e.g., [8, (5.62)].
Lemma 4.1. Under the assumption (2.3) H * v (k) extends to a compact, injective operator with dense range in L H -1 (∂Ω), L 2 (S d-1 ) . If (Rh)(ω) := h(-ω), the following formula holds in H -1 (∂Ω):
$$(4.4)\qquad RH^*_v(k) = \frac{1}{\sqrt{k}\,c_2(d, k)}\,T(k)\,G_v(k) \qquad\text{with}\quad c_2(d, k) := \frac{1}{4\pi}\exp\Bigl(-i\pi\frac{d-3}{4}\Bigr)\Bigl(\frac{k}{2\pi}\Bigr)^{\frac{d-3}{2}}$$
Proof of Lemma 4.1. We start from the following formula, which is sometimes referred to as mixed reciprocity relation (see [11, (4.15)] or [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 3.16]):
G + v (x, y, k) = c 2 (d, k) e ik|x| |x| (d-1)/2 ψ + v y, -k x |x| + O 1 |x| (d+1)/2 , |x| → +∞ This implies (G v (k)h)(x) = c 2 (d, k) e i|k||x| |x| (d-1)/2 (RH * v (k)h)(x) + O 1 |x| (d+1)/2 , |x| → +∞ for h ∈ L 2 (∂Ω)
, where (G v (k)ϕ)(x) is defined in the same way as G v ϕ(x) in formula (2.7) but with x ∈ R d \ Ω, and from this we obtain (4.4) in L 2 (∂Ω).
Recall also that
G v (k) ∈ GL(H -1 (∂Ω), L 2 (∂Ω)) if (2.
3) holds (see Lemma 3.4). This together with injectivity of T (k) in L(L 2 (∂Ω), L 2 (S d-1 )) and formula (4.4) imply that H * v (k) extends by continuity to a compact injective operator with dense range in L H -1 (∂Ω), L 2 (S d-1 ) where it satisfies (4.4). Using (4.3), (4.4), and the identities R * R = I = RR * and R = R, eq. (2.12) can be shown as follows:
$$-Q(k) = -\frac{1}{2i}\Bigl(G_v^{-1} - \overline{G_v}^{\,-1}\Bigr) = \overline{G_v}^{\,-1}\,\Im G_v\; G_v^{-1} = c_1(d, k)\,\bigl(H^*_v G_v^{-1}\bigr)^* H^*_v G_v^{-1} = \frac{c_1(d, k)}{k\,|c_2(d, k)|^2}\,T^*(k)RR^*T(k) = T^*(k)T(k).$$
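For completeness, the constants in the last step cancel: from the definitions of $c_1(d,k)$ in (4.3) and $c_2(d,k)$ in (4.4),
$$k\,|c_2(d, k)|^2 = k\cdot\frac{1}{16\pi^2}\Bigl(\frac{k}{2\pi}\Bigr)^{d-3} = \frac{1}{8\pi}\Bigl(\frac{k}{2\pi}\Bigr)^{d-2} = c_1(d, k),$$
so that $c_1(d,k)/\bigl(k|c_2(d,k)|^2\bigr) = 1$, and the final equality then follows from $RR^* = I$.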
Part II. The operators A, B ∈ L L 2 (∂Ω) are compact in view of Lemma 3.4 and Part I of Theorem 2.5. The relations (2.14a) and (2.14b) are direct consequences of (2.10a), (2.10b) and of definition (2.13).
Part III. The operators A, B ∈ L L 2 (∂Ω) are real, compact symmetric and commute by (2.14b). It is well known (see e.g. [32, Prop. 8.1.5] that under these conditions A and B must have a common eigenbasis in L 2 (∂Ω).
Moreover, if follows from (2.14a) that if f ∈ L 2 (∂Ω) is a common eigenfunction of A and B, then the corresponding eigenvalues λ A and λ B of A and B, respectively, satisfy the equation (2.15). Theorem 2.5 is proved.
Stability of indices of inertia.
Let S be a compact topological manifold (in what follows it will be S d-1 or ∂Ω). Let A ∈ L L 2 (S) be a real symmetric operator and suppose that (5.1)
L 2 R (S) admits an orthonormal basis of eigenvectors of A.
We denote this basis by {ϕ n : n ≥ 1}, i.e. Aϕ n = λ n ϕ n . Property (5.1) is obviously satisfied if A is also compact. Let us define the projections onto the sum of the non negative and negative eigenspaces by (5.2)
$$P^A_+ x := \sum_{n\,:\,\lambda_n \ge 0} \langle x, \varphi_n\rangle\,\varphi_n, \qquad P^A_- x := \sum_{n\,:\,\lambda_n < 0} \langle x, \varphi_n\rangle\,\varphi_n.$$
In addition, let $L^A_-$, $L^A_+$ denote the corresponding eigenspaces:
$$(5.3)\qquad L^A_- = \operatorname{ran} P^A_-, \qquad L^A_+ = \operatorname{ran} P^A_+.$$
Then it follows from $Ax = \sum_{n=1}^{\infty} \lambda_n \langle x, \varphi_n\rangle\,\varphi_n$ that $\langle Ax, x\rangle = |\langle Ax_+, x_+\rangle| - |\langle Ax_-, x_-\rangle|$ with $x_\pm := P^A_\pm x$.
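Spelling out the last identity (a routine verification added here for readability): using the orthonormal eigenexpansion of A and writing $x = x_+ + x_-$ with $x_\pm := P^A_\pm x$,
$$\langle Ax, x\rangle = \sum_{n=1}^{\infty} \lambda_n |\langle x, \varphi_n\rangle|^2 = \sum_{\lambda_n \ge 0} \lambda_n |\langle x, \varphi_n\rangle|^2 + \sum_{\lambda_n < 0} \lambda_n |\langle x, \varphi_n\rangle|^2 = \langle Ax_+, x_+\rangle + \langle Ax_-, x_-\rangle,$$
and since the first term is nonnegative and the second is nonpositive, this equals $|\langle Ax_+, x_+\rangle| - |\langle Ax_-, x_-\rangle|$.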
The numbers rk P A + and rk P A -in N 0 ∪ {∞} (where N 0 denotes the set of nonnegative integers) are called positive and negative index of inertia of A, and the triple rk P A + , dim ker A, rk P A -is called inertia of A. A generalization of the Sylvester inertia law to Hilbert spaces states that for a self-adjoint operator A ∈ L(X) on a separable Hilbert space X and an operator Λ ∈ GL(X), the inertias of A and Λ * AΛ coincide (see [START_REF] Cain | Inertia theory[END_REF]Thm.6.1,p.234]). We are only interested in the negative index of inertia, but we also have to consider operators Λ which are not necessarily surjective, but only have dense range.
Lemma 5.1. Let S 1 , S 2 be two compact topological manifolds, (A, A, Λ) be a triple of operators such that A ∈ L(L 2 (S 1 )) and A ∈ L(L 2 (S 2 )) are real, symmetric, Λ ∈ L(L 2 (S 1 ), L 2 (S 2 )), A = ΛAΛ * and A, A satisfy (5.1). Then rk
P A -≤ rk P A -. Moreover, if rk P A -< ∞ and Λ is injective, then rk P A -= rk P A -. Proof. For each x ∈ L A -\ {0} we have 0 > Ax, x = AΛ * x, Λ * x = AP A + Λ * x, P A + Λ * x -AP A -Λ * x, P A -Λ * x ≥ -AP A -Λ * x, P A -Λ * x ,
which shows that P A -Λ * x = 0. Hence, the linear mapping L A -→ L A -, x → P A -Λ * x, is injective. This shows that rk P A -≤ rk P A -. Now suppose that d = rk P A -< ∞ and that Λ is injective. Note that the injectivity of Λ implies that Λ * has dense range (see, e.g., [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 4.6]). Let y 1 , . . . , y d be an orthonormal basis of L A -with Ay j = λ j y j . Let l min = min{|λ 1 |, . . . , |λ d |} and let x 1 , . . . , x d be such that
y j -Λ * x j 2 < ε, 5ε √ d A < l min , j = 1, . . . , d.
Let α 1 , . . . , α d ∈ R be such that j |α j | 2 = 1 and put x = j α j x j , y = j α j y j . Note that
y -Λ * x 2 ≤ ε j |α j | ≤ ε √ d, Ay, y ≥ Ax, x -5ε √ d A > Ax, x -l min .
Then we have
-l min ≥ Ay, y > Ax, x -l min = AP A + x, P A + x -AP A -x, P A -x -l min ≥ -AP A -x, P A -x -l min , j = 1, . . . , d.
Hence, P A -x = 0. Thus, the linear mapping L A -→ L A -, defined on the basis by y j → P A -x j , j = 1, . . . , d, is injective and rk P A -≤ rk P A -.
The assumption (5.1) in Lemma 5.1 can be dropped, but then the operators P A ± , P A ± must be defined using the general spectral theorem for self-adjoint operators.
The following two lemmas address the stability of the negative index of inertia under perturbations. We first look at perturbations of finite rank. Lemma 5.2. Let S be a compact topological manifold, let A 1 , A 2 be compact selfadjoint operators in L 2 (S) and set n j := rk P
Aj -for j = 1, 2. If n 1 < ∞ and rk(A 1 - A 2 ) < ∞, then n 2 ≤ n 1 + rk(A 1 -A 2 ). Proof. Let λ Aj 1 ≤ λ Aj 2 ≤ • • • < 0 denote the negative eigenvalues of A j in L 2 (S)
, sorted in ascending order with multiplicities. By the min-max principle we have that max
S k-1 min x∈S ⊥ k-1 , x =1 A j x, x = λ Aj k , 1 ≤ k ≤ n j , sup S k-1 min x∈S ⊥ k-1 , x =1 A j x, x = 0, k > n j
where the maximum is taken over all (k -1)-dimensional subspaces S k-1 of L 2 (S) and S ⊥ denotes the orthogonal complement of
S k-1 in L 2 (S). Let K = A 1 -A 2 , r = rk K and note that A 1 x = A 2 x for any x ∈ ker K = (ran K) ⊥ . Also note that (S k-1 ⊕ ker K) ⊥ ⊂ S ⊥ k-1 ∩ ran K . For k = n 1 + 1 we obtain 0 = sup S k-1 min x∈S ⊥ k-1 , x =1 A 1 x, x ≤ sup S k-1 min A 2 x, x : x ∈ (S k-1 ⊕ ker K ⊥ , x = 1 ≤ sup S k-1+r min x∈S k-1+r , x =1 A 2 x, x = λ A2 k+r if n 2 ≥ k + r, 0 else.
Taking into account that λ A2 k+r < 0, we obtain that only the second case is possible, and this implies n 2 ≤ k -1 + r.
In the next lemma we look at "small" perturbations. The analysis is complicated by the fact that we have to deal with operators with eigenvalues tending to 0. Moreover, we not only have to show stability of rk P A -but also of L A -. Lemma 5.3. Let S 1 be a C 1 compact manifold and S 2 a topological compact manifold. Let (A, A 0 , Λ) be a triple of operators such that A, A 0 ∈ L L 2 (S 1 ) are real, symmetric, satisfying (5.1); Λ ∈ L L 2 (S 1 ), L 2 (S 2 ) is injective and
A 0 ∈ GL H -1 2 (S 1 ), H 1 2 (S 1 ) ∩ GL L 2 (S 1 ), H 1 (S 1 ) , rk P A0 -< ∞, A -A 0 ∈ K H -1 2 (S 1 ), H 1 2 (S 1 ) Put A = ΛAΛ * , A 0 = ΛA 0 Λ * .
The following statements hold true:
1. rk P A -< ∞. 2. For any σ > 0 there exists δ = δ(A 0 , Λ, σ) such that if A -A 0 < δ in L H -1 2 (S 1 ), H 1 2 (S 1 ) , then (a) rk P A -= rk P A0 -, (b) A is injective in H -1 2 (S 1 ), (c) if Af = λf for some f ∈ L 2 (S 2 ) with f 2 = 1, then λ < 0 if and only if d(f, L A0 -) < 1 2 , (d) all negative eigenvalues of A in L 2 (S d-1
) belong to the σ-neighborhood of negative eigenvalues of A 0 .
Proof. First part. We have that
A = |A 0 | 1 2 Id + R + |A 0 | -1 2 (A -A 0 )|A 0 | -1 2 |A 0 | 1 2 ,
with R finite rank compact operator in L 2 (S 1 ). More precisely, starting from the orthonormal eigendecomposition of A 0 in L 2 (S 1 ),
A 0 f = ∞ n=1 λ n f, ϕ n ϕ n ,
we define |A 0 | α for α ∈ R and R as follows:
|A 0 | α f = ∞ n=1 |λ n | α f, ϕ n ϕ n , Rf = -2 n:λn<0 f, ϕ n ϕ n .
By our assumptions and the polar decomposition, |A 0 | -1 is a symmetric operator on L 2 (S 1 ) with domain H 1 (S 1 ), and |A 0 | -1 ∈ L(H 1 (S 1 ), L 2 (S 1 )). Consequently, by complex interpolation we get
|A 0 | -1 2 ∈ L H 1 2 (S 1 ), L 2 (S 1 ) .
In a similar way, we obtain
|A 0 | -1 2 ∈ L L 2 (S 1 ), H -1 2 (S 1 ) , |A 0 | 1 2 ∈ L L 2 (S 1 ), H 1 2 (S 1 ) .
Thus, the operator
|A 0 | -1 2 (A-A 0 )|A 0 | -1 2 is compact in L 2 (S 1
). Hence, its eigenvalues converge to zero. Let us introduce the operators D, D 0 and ∆D by
D := D 0 + ∆D, D 0 := Id + R, ∆D := |A 0 | -1 2 (A -A 0 )|A 0 | -1 2 ,
Then the eigenvalues of D converge to 1, and only finite number of eigenvalues of D in L 2 (S 1 ) can be negative. Applying Lemma 5.1 to the triple (D, A,
|A 0 | 1
2 ), we get the first statement of the present lemma.
Second part. At first, we show that there exists δ ′ such that if A -A 0 < δ ′ , then rk P A -≤ rk P A0 -. Here the norm is taken in L H -1 2 (S 1 ), H 1 2 (S 1 ) . Note that the spectrum of D 0 in L 2 (S 1 ) consists at most of the two points -1 and 1. Thus, the spectrum σ D of D satisfies
σ D ⊆ [-1 -∆D , -1 + ∆D ] ∪ [1 -∆D , 1 + ∆D ],
where ∆D is the norm of ∆D in L L 2 (S 1 ) . It follows that if
x ∈ L D -, x 2 = 1, then Dx, x ≤ -1 + ∆D .
On the other hand,
Dx, x ≥ D 0 x, x -∆D ≥ -D 0 x -, x --∆D for x -= P D0 -x.
It follows from the last two inequalities that
D 0 x -, x -≥ 1 -2 ∆D . Thus, if ∆D < 1 2 , the mapping L D -→ L D0 -, x → P D0 -x is injective, so rk P D -≤ rk P D0 -. Using Lemma 5.1 to the triple (D, A, |A 0 | 1 2
) and taking into account that rk P D0 -= rk P A0 -we also get that rk P A -≤ rk P A0 -. Moreover, there exists
δ ′ = δ ′ (A 0 , Λ) such that if A -A 0 < δ ′ in the norm of L H -1 2 (S 1 ), H 1 2 (
S 1 ) , then ∆D < 1 2 and consequently, rk P A -≤ rk P A0 -. In addition, taking into account that
|A 0 | -1 2 ∈ GL L 2 (S 1 ), H -1 2 (S 1 ) , we obtain that A is injective in H -1 2 (S 1 ) if A -A 0 < δ ′ .
Applying Lemma 5.1 to the triple (A, A, Λ) and using the assumption that Λ is injective, we obtain that rk
P A -≤ rk P A0 -if A -A 0 < δ ′ in L H -1 2 (S 1 ), H 1 2 (S 1
) . Now let Σ be the union of circles of radius σ > 0 in C centered at negative eigenvalues of A 0 in L 2 (S 2 ). It follows from [START_REF] Kato | Perturbation Theory for Linear Operators[END_REF]Thm.3.16 p.212] that there exists
δ ′′ = δ ′′ (A 0 , Λ, σ), δ ′′ < δ ′ , such that if A -A 0 < δ ′′ , then Σ also encloses rk P A0 - negative eigenvalues of A.
Taking into account that rk P A -≤ rk P A0 -if A -A 0 < δ ′′ , we get that rk P A -= rk P A0 -. In addition, it follows from [21, Thm.3.16 p.212] that there exists
δ ′′′ = δ ′′′ (A 0 , Λ, σ), δ ′′′ < δ ′′ , such that if A -A 0 < δ ′′′ , then P A --P A0 - < 1
2 . The second statement now follows from the following standard fact: Lemma 5.4. The following inequalities are valid:
d(f, L A0 -) ≤ P A --P A0 - for all f ∈ L A -with f 2 = 1 and d(f, L A0 -) ≥ 1 -P A --P A0 - for all f ∈ L A + with f 2 = 1.
Lemma 5.3 is proved.
6. Derivation of the uniqueness results. The proof of the uniqueness theorems will be based on the following two propositions: Proposition 6.1. For all κ ∈ L ∞ (Ω, R) and all ω > 0 the operator ℜG -ω 2 κ (ω) (resp. ℜ G -ω 2 κ (ω)) can have at most a finite number of negative eigenvalues in L 2 (∂Ω) (resp. L 2 (S d-1 )), multiplicities taken into account. In addition, for all M > 0 there exists a constant ω 0 = ω 0 (Ω, M ) such that for all κ satisfying κ ∞ ≤ M the condition
(2.3) with v = -ω 2 κ is satisfied and the operator ℜG -ω 2 κ (ω) (resp. ℜ G -ω 2 κ (ω)) is positive definite on L 2 (∂Ω) (resp. L 2 (S d-1 )) if ω ∈ (0, ω 0 ]. Proof.
Step 1. We are going to prove that ℜG -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (∂Ω) and, in addition, there exists
ω ′ 0 = ω ′ 0 (Ω, M ) such that if ω ∈ (0, ω ′ 0 ], then ℜG -ω 2 κ (ω) is positive definite in L 2 (∂Ω)
. Let E be defined according to (3.2). The operator E is positive definite in L 2 (∂D) for d ≥ 3, see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Cor.8.13]. For the case d = 2, the operator
$E_r = E + \frac{\ln r}{2\pi}\,\langle 1, \cdot\rangle\, 1$ is positive definite in $L^2(\partial\Omega)$ if and only if $r > \mathrm{Cap}_{\partial\Omega}$, where $\mathrm{Cap}_{\partial\Omega}$ denotes the capacity of $\partial\Omega$, see, e.g., [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF] Thm.8.16]. We consider the cases $d \ge 3$ and $d = 2$ separately.
$d \ge 3$. We have that $E \in GL\bigl(H^{-\frac12}(\partial\Omega), H^{\frac12}(\partial\Omega)\bigr) \cap GL\bigl(L^2(\partial\Omega), H^1(\partial\Omega)\bigr)$. This follows from [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF] Thm.7.17 & Cor.8.13]. Using Lemma 3.3 and Lemma 3.5 we also get that
$$E - \Re G_{-\omega^2\kappa}(\omega) \in K\bigl(H^{-\frac12}(\partial\Omega), H^{\frac12}(\partial\Omega)\bigr) \quad\text{with}\quad \bigl\|E - \Re G_{-\omega^2\kappa}(\omega)\bigr\| \le \omega^2 c_2'(\Omega, M)\,\|\kappa\|_\infty + \omega|\ln\omega|\,c_3'(\Omega)$$
for all ω ∈ (0, min{k 2 (Ω, M ), k 3 (Ω)}), with the norm in L H -1 2 (∂Ω), H 1 2 (∂Ω) . Applying Lemma 5.3 to the triple ℜG -ω 2 κ (ω), E, Id , we find that ℜG -ω 2 κ (ω) can have at most finite number of negative eigenvalues in L 2 (∂Ω) and that there exists
ω ′ 0 = ω ′ 0 (Ω, M ) such that ℜG -ω 2 κ (ω) is positive definite in L 2 (∂Ω) if ω ∈ (0, ω ′ 0 ]. d = 2. Let r > Cap ∂Ω . We have that E r ∈ GL H -1 2 (∂Ω), H 1 2 (∂Ω) ∩ GL L 2 (∂Ω), H 1 (∂Ω) .
This follows from [START_REF] Mclean | Strongly elliptic systems and boundary integral equations[END_REF]Thm.7.18 & Thm.8.16]. Using Lemma 3.3 and Lemma 3.5 we also have that
E r -ℜG -ω 2 κ (ω) ∈ K H -1 2 (∂Ω), H 1 2 (∂Ω) . Note that (6.1) ℜG -ω 2 κ (ω) -E r = ℜG -ω 2 κ (ω) -ℜG 0 (ω) + W (ω) -1 2π ln ωr 2 + γ 1,
• 1, where W (ω), γ are defined according to (3.12). Fix r > Cap ∂Ω . Using Lemma 3.3, Lemma 3.5 and formula (6.1) we obtain that
R(ω) -E r ≤ ω 2 | ln ω| 2 c ′ 2 (Ω, M ) κ ∞ + ω 2 | ln ω|c ′ 3 (Ω), with R(ω) := ℜG -ω 2 κ (ω) + 1 2π ln ωr 2 + γ 1, • 1
for all ω ∈ (0, min{k 2 (Ω, M ), k 3 (Ω)}) with the norm in L H -1 2 (∂Ω), H 1 2 (∂Ω) . Applying Lemma 5.3 to the triple R(ω), E r , Id , we find that R(ω) can have only finite number of negative eigenvalues in L 2 (∂Ω) and, in addition, there exists ω ′ 0 = ω ′ 0 (Ω, M, r) such that if ω ∈ (0, ω ′ 0 ], then R(ω) is positive definite in L 2 (∂Ω). Applying Lemma 5.2 to the pair of operators R(ω), ℜG -ω 2 κ (ω) we obtain that ℜG -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (∂Ω), since it is true for R(ω) and 1 2π ln ωr 2 + γ 1, • 1 is a rank one operator. Assuming, without loss of generality, that ω ′ 0 < 2 r e -γ , one can also see that the operator ℜG -ω 2 κ (ω) is positive definite for ω ∈ (0, ω ′ 0 ], as long as R(ω) is positive definite and -1 2π ln ωr 2 + γ 1, • 1 is non-negative definite.
Step 2. Applying Lemma 5.1 to the triple
(ℜG -ω 2 κ (ω), ℜ G -ω 2 κ (ω), T (ω)),
we find that the operator ℜ G -ω 2 κ (ω) can have only finite number of negative eigenvalues in L 2 (S d-1 ) and, in addition, there exists
ω 0 = ω 0 (Ω, M ), ω 0 < ω ′ 0 , such that ℜ G -ω 2 κ (ω) is positive definite in L 2 (S d-1 ) if ω ∈ (0, ω 0 ]. Proposition 6.2. Let v, v 0 ∈ L ∞ (Ω, R). Suppose that k > 0 is such that ℜG v0 (k) is injective in H -1
2 (∂Ω) and (2.16) holds true for v 0 . Moreover, let L v0 -denote the linear space spanned by the eigenfunctions of ℜ G v0 (k) corresponding to negative eigenvalues, and let f ∈ L 2 (S d-1 ) with f 2 = 1, be such that ℜ G v k)f = λf . Then for any σ > 0 there exists δ
= δ(Ω, k, v 0 , σ) such that if v -v 0 ∞ ≤ δ, then 1. ℜG v (k) is injective in H -1 2 (∂Ω), 2. (2.16) holds true for v, 3. λ < 0 if and only if d(f, L v0 -) < 1 2 , 4. all negative eigenvalues of ℜ G v (k) in L 2 (S d-1
) belong to the σ-neighborhood of negative eigenvalues of ℜ G v0 (k).
Proof. Put
A := ℜG v (k), A 0 := ℜG v0 (k), A := ℜ G v (k), A 0 := ℜ G v0 (k).
It follows from Proposition 6.1 that rk P A0 -< ∞. Using Lemma 3.4 and the injectivity of A 0 in H -1/2 (∂Ω), we obtain that
A 0 ∈ GL H -1/2 (∂Ω), H 1/2 (∂Ω) ∩ GL L 2 (∂Ω), H 1 (∂Ω) .
It also follows from Lemma 3.3 that
A -A 0 ∈ K H -1/2 (∂Ω), H 1/2 (∂Ω) .
Applying Lemma 5.3 to the triple A, A 0 , T , we find that there exists
δ ′ = δ ′ (Ω, k, v 0 ) such that if A-A 0 ≤ δ ′ in L H -1/2 (∂Ω), H 1/2 (∂Ω) , then rk P A -= rk P A0 -and A is injective in H -1/2 (∂Ω). Moreover, if Af = λf with f 2 = 1, then λ < 0 if and only if d(f, L v0 -) < 1 2 . Also note that, in view of Lemma 3.3, there exists δ = δ(Ω, k, v 0 ) such that if v -v 0 ∞ ≤ δ, then A -A 0 ≤ δ ′ .
It remains to show that if v -v 0 ∞ ≤ δ for δ small enough and (2.16) holds true for v 0 , then it also holds true for v 0 . But this property follows from the upper semicontinuity of a finite number of eigenvalues of G v (k) with respect to perturbations (see [START_REF] Kato | Perturbation Theory for Linear Operators[END_REF]Thm.3.16 p.212]), from Lemma 3.3 and from the fact that G v (k) has at most a finite number of eigenvalues with negative real part (see Proposition 6.1 with -ω 2 κ = v, ω = k). Proposition 6.2 is proved.
6.1. Proof of Theorem 2.7. Let k > 0 and v 0 be the same as in the formulation of Theorem 2.7. It follows from Proposition 6.1 with v 0 = -ω 2 κ that the operator ℜ G v0 (k) can have only finite number of negative eigenvalues in L 2 (∂Ω), multiplicities taken into account.
Let δ = δ(Ω, k, v 0 ) be choosen as in Proposition 6.2. Suppose that v 1 , v 2 are two functions satisfying the conditions of Theorem 2.7 and put
A j := ℜ G vj (k), B j := ℑ G vj (k), j = 1, 2.
By the assumptions of the present theorem,
B 1 = B 2 .
Together with Theorem 2.5 and formula (2.16) it follows that the operators A 1 and A 2 have a common basis of eigenfunctions in L 2 (S d-1 ) and that if
A 1 f = λ 1 f , A 2 f = λ 2 f , for some f ∈ L 2 (S d-1 ), f 2 = 1, then (6.2) |λ 1 | = |λ 2 |.
More precisely, any eigenbasis of B 1 is a common eigenbasis for A 1 and A 2 .
It follows from Proposition 6.2 that λ 1 < 0 if and only if d(f, L v0 -) < 1 2 , and the same condition holds true for λ 2 . Hence, λ 1 < 0 if and only if λ 2 < 0. Thus, we have (6.3)
A 1 = A 2 .
Since by Theorem 2.5 (I) the operator T is injective with dense range the same is true for T * by [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm. 4.6]. Injectivity of T and (6.3) imply A 1 T * = A 2 T * . This equality, density of the range of T * and continuity of A 1 , A 2 now imply that [START_REF] Novikov | A multidimensional inverse spectral problem for the equation -∆ψ + (v(x) -Eu(x))ψ = 0, Funktsional[END_REF][START_REF] Berezanskii | On the uniqueness theorem in the inverse problem of spectral analysis for the Schrödinger equation[END_REF]. In turn, property (2.3) for v = v j follows from injectivity of ℜG vj (k) in H -1/2 (∂Ω) (see Proposition 6.2). This completes the proof of Theorem 2.7.
A 1 = A 2 and hence G v1 (k) = G v2 (k). Now we can use that fact that G v1 (k) = G v2 (k) implies v 1 = v 2 if (2.3) holds true for v = v 1 and v = v 2 , see
6.2. Proof of Theorem 2.11. Let ω 0 = ω 0 (Ω, M ) be as in Proposition 6.1 and let ω ∈ (0, ω 0 ] be fixed. Put
A j := ℜ G -ω 2 κj (ω), B j := ℑ G -ω 2 κj (ω), j = 1, 2.
It follows from Proposition 6.1 that all the eigenvalues of G -ω 2 κ (ω) have positive real parts so that condition (2.16) is valid. As in the proof of Theorem 2.7 one can show that A 1 , A 2 have a common basis of eigenfunctions in L 2 (S d-1 ) (any eigenbasis of B 1 is a common eigenbasis for A 1 and A 2 ) and the relation (6.2) holds.
In view of Proposition 6.1 we also have that λ 1 > 0, λ 2 > 0 such that λ 1 = λ 2 . Thus, (6.3) holds true. Starting from equality (6.3) and reasoning as in the end of the proof of Theorem 2.7, we obtain that κ 1 = κ 2 , completing the proof of Theorem 2.11.
7. Discussion of the assumptions of Theorem 2.7. The aim of this section is to present results indicating that the assumptions of Theorem 2.7 are always satisfied except for a discrete set of exceptional parameters. As a first step we characterize the adjoint operator G * v (k) as a farfield operator for the scattering of distorted plane waves at Ω with Dirichlet boundary conditions. Note that, in particular, -
$\frac{1}{k\,c_2(d,k)}\,\widehat G^*_0(k)$ is a standard farfield operator for Dirichlet scattering at Ω (see e.g. [8, §3.3]).
Lemma 7.1. Let v satisfy (2.1) and consider $\psi^+_v$ and $c_2$ as defined in subsection 4.2. Then we have
$$\frac{1}{k\,c_2(d, k)}\,\widehat G^*_v(k)g = u_\infty \qquad\text{for any } g \in L^2(S^{d-1}),$$
where $u_\infty \in L^2(S^{d-1})$ is the farfield pattern of the solution u to the exterior boundary value problem (2.11) with boundary values
u 0 (x) = S d-1 ψ + v (x, -kω)g(ω) ds(ω), x ∈ ∂Ω.
Proof. It follows from the definition of operators H v , T , R in subsection 4.2 that u 0 = H v Rg and u ∞ = k -1/2 T H v Rg. Using eq. (4.4) in Lemma 4.1 we also find that
√ k c 2 (d, k)H v R = G * v T * . Hence, k c 2 (d, k)u ∞ = T G * v T * g = G * v g. Lemma 7.2. Let Ω satisfy (2.
2) and suppose that Ω is stricty starlike in the sense that xν x > 0 for all x ∈ ∂Ω, where ν x is the unit exterior normal to ∂Ω at
x. Let v ∈ L ∞ (Ω, R) and let k > 0 be such that k 2 is not a Dirichlet eigenvalue of -∆ in Ω. Then there exist M = M (k, Ω) > 0, ε = ε(k, Ω) > 0, such that if v ∞ ≤ M , then G v (ξ) satisfies (2.16) for all but a finite number of ξ ∈ [k, k + ε).
Proof. Part I. We first consider the case v = 0. It follows from Lemma 7.1 together with the equality ψ + 0 (x, kω) = e ikωx that the operator G * 0 (k) is the farfield operator for the classical obstacle scattering problem with obstacle Ω. Moreover, S(k) := Id -2i G * 0 (k), is the scattering matrix in the sense of [START_REF] Helton | The first variation of the scattering matrix[END_REF]. It follows from [18, (2.1) and the remark after (1.9)] that all the eigenvalues λ = 1 of S(k) move in the counter-clockwise direction on the circle |z| = 1 in C continuously and with strictly positive velocities as k grows. More precisely, if λ(k) = e iβ(k) , λ(k) = 1 is an eigenvalue of S(k) corresponding to the normalized eigenfunction g(•, k), then
∂β ∂k (k) = 1 4π k 2π d-2 ∂Ω ∂f ∂ν x (x, k) 2 xν x ds(x), f (x, k) = S d-1
g(θ, k) e -ikθx -u(x, θ) ds(θ), x ∈ R d \ Ω, where u(x, θ) is the solution of problem (2.11) with u 0 (x) = e -ikθx (note that [START_REF] Helton | The first variation of the scattering matrix[END_REF] uses a different sign convention in the radiation condition (2.11c) resulting in a different sign of ∂β/∂k). It follows from this formula that ∂β ∂k (k) > 0: 1. the term xη x is positive by assumption, 2. ∂f ∂νx cannot vanish on ∂Ω identically. Otherwise, f vanishes on the boundary together with its normal derivative, and Huygens' principle (see [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.14]) implies that the scattered field S d-1 g(θ, k)u(x, θ) ds(θ) vanishes identically, so that f is equal to f (x, k) = S d-1 g(θ, k)e -ikθx ds(θ). One can see from this formula that f extends uniquely to an entire solution of -∆f = k 2 f . Moreover, f is a Dirichlet eigenfunction for Ω and it implies that f is identically zero, because k 2 is not a Dirichlet eigenvalue of -∆ in Ω by assumption. Now it follows [START_REF] Colton | Inverse Acoustic and Electromagnetic Scattering Theory[END_REF]Thm.3.19] that the Herglotz kernel g(•, k) of f vanishes, but it contradicts the fact that g(•, k) is a normalized eigenfunction of S(k). It follows that all the non-zero eigenvalues of G * 0 (k) move continuously in the clockwise direction on the circle |z + i/2| = 1/2 in C with non-zero velocities as k grows. Moreover, since G * 0 (k) is compact in L 2 (S d-1 ) (see Theorem 2.5), it follows that z = 0 is the only accumulation point for eigenvalues of G * 0 (k). This together with Proposition 6.1 for κ = 0, ω = k, implies that there exist δ(k, Ω) > 0, ε(k, Ω) > 0 such that all the eigenvalues λ of G * 0 (ξ) with ℜλ < 0 belong to the half plane ℑz < -δ for ξ ∈ [k, k + ε).
This proves Lemma 7.2 with v = 0 if we take into account that the eigenvalues of G * 0 (k) and G 0 (k) are related by complex conjugation. Part II. Let k be such that (2.16) holds true for v = 0 and choose δ(k, Ω), ε(k, Ω) as in the first part of the proof. Now let v ∈ L ∞ (Ω, R). It follows from Proposition 6.2 that for any σ > 0 there exists M = M (ξ, σ) such that if v ∞ ≤ M , then G * v (ξ) has a finite number of eigenvalues λ with ℜλ < 0, multiplicities taken into account, and these eigenvalues belong to the σ-neighborhood of the eigenvalues of G * 0 (ξ) if ξ ∈ [k, k + ε). In addition, M (ξ, σ) can be choosen depending continuously on ξ. Hence, (2.16) holds true for v if it holds true for v = 0 and σ is sufficiently small. This together with part I finishes the proof of Lemma 7.2 for a general v if we take into account that the eigenvalues of G * v (k) and G v (k) are related by complex conjugation. Remark 7.3. It follows from analytic Fredholm theory (see, e.g., [START_REF] Gokhberg | An operator generalization of the logarithmic residue theorem and the theorem of Rouché[END_REF]Cor. 3.3]) and Lemma 3.4 below that the condition that ℜG v0 (k) be injective in H -1/2 (∂Ω) is "generically" satisfied. More precisely, it is either satisfied for all but a discrete set of k > 0 without accumulation points or it is violated for all k > 0. Applying analytic Fredholm theory again to z → ℜG z 2 v0 (zk) and taking into account Proposition 6.1, we see that the latter case may at most occur for a discrete set of z > 0 without accumulation points.
Conclusions.
In this paper we have presented, in particular, first local uniqueness results for inverse coefficient problems in wave equations with data given the imaginary part of Green's function on the boundary of a domain at a fixed frequency. In the case of local helioseismology it implies that small deviations of density and sound speed from the solar reference model are uniquely determined by correlation data of the solar surface within the given model.
The algebraic relations between the real and the imaginary part of Green's function established in this paper can probably be extended to other wave equations. An important limitation of the proposed technique, however, is that it is not applicable in the presence of absorption.
To increase the relevance of uniqueness results as established in this paper to helioseismology and other applications, many of the improvements achieved for standard uniqueness results would be desirable: This includes stability results or even variational source conditions to account for errors in the model and the data, the use of many and higher wave numbers to increase stability, and results for data given only on part of the surface.
Remark 7.4. In the particular case of $v_0 = 0$, $\Omega = \{x \in \mathbb{R}^d : |x| \le R\}$, $d = 2, 3$, the injectivity of $\Re G_{v_0}(k)$ in $H^{-1/2}(\partial\Omega)$ is equivalent to the following finite number of inequalities:
$$(7.1)\qquad \begin{aligned} &j_l(kR) \ne 0 \ \text{ and } \ y_l(kR) \ne 0 &&\text{for } 0 \le l < kR - \tfrac{\pi}{2}, && d = 3,\\ &J_l(kR) \ne 0 \ \text{ and } \ Y_l(kR) \ne 0 &&\text{for } 0 \le l < kR - \tfrac{\pi-1}{2}, && d = 2, \end{aligned}$$
where $j_l, y_l$ are the spherical Bessel functions and $J_l, Y_l$ are the Bessel functions of integer order l. The reason is that the eigenvalues of $\Re G_{v_0}(k)$ are explicitly computable in this case, see, e.g., [9, p.104 & p.144].
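Condition (7.1) is easy to test numerically; a minimal Python sketch (assuming SciPy is available; the tolerance and the way the finite index range is enumerated are choices of this sketch, not of the remark) is:
```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, jv, yv

def condition_7_1(k, R, d=3, tol=1e-9):
    """Check the finitely many non-vanishing conditions of (7.1) for the ball of radius R."""
    if d == 3:
        l_max = int(np.floor(k * R - np.pi / 2.0))
        values = [(spherical_jn(l, k * R), spherical_yn(l, k * R)) for l in range(max(l_max + 1, 0))]
    elif d == 2:
        l_max = int(np.floor(k * R - (np.pi - 1.0) / 2.0))
        values = [(jv(l, k * R), yv(l, k * R)) for l in range(max(l_max + 1, 0))]
    else:
        raise ValueError("Remark 7.4 covers d = 2, 3 only.")
    return all(abs(j) > tol and abs(y) > tol for j, y in values)

print(condition_7_1(k=5.0, R=1.0, d=3))
```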
Problem 2.1. Determine the coefficient v in the Schrödinger equation (1.2) from $\Im G^+_v(x, y, k)$ given at all $x, y \in \partial\Omega$, at fixed k.
As discussed in the introduction, mathematical approaches to Problem 2.1 are not yet well developed in the literature, in contrast with the case of the following inverse problem from $G^+_v$ (and not only from $\Im G^+_v$):
Problem 2.2. Determine the coefficient v in the Schrödinger equation (1.2) from $G^+_v(x, y, k)$ given at all $x, y \in \partial\Omega$, at fixed k.
| 01710144 | en | [ "info.info-lg" ] | 2024/03/05 22:32:13 | 2018 | https://hal-lirmm.ccsd.cnrs.fr/lirmm-01710144v2/file/role-location-social-strength-friendship-prediction-lbsn-ipm-jorgeValverde.pdf | Jorge C Valverde-Rebaza, Mathieu Roche, Pascal Poncelet, Alneu de Andrade Lopes
Keywords: Location-based social networks, Link prediction, Friendship recommendation, Human mobility, User behavior
Recent advances in data mining and machine learning techniques are focused on exploiting location data. There, combined with the increased availability of location-acquisition technology, has encouraged social networking services to offer to their users different ways to share their location information. These social networks, called location-based social networks (LBSNs), have attracted millions of users and the attention of the research community. One fundamental task in the LBSN context is the friendship prediction due to its role in different applications such as recommendation systems. In the literature exists a variety of friendship prediction methods for LBSNs, but most of them give more importance to the location information of users and disregard the strength of relationships existing between these users. The contributions of this article are threefold, we: 1) carried out a comprehensive survey of methods for friendship prediction in LBSNs and proposed a taxonomy to organize the existing methods; 2) put forward a proposal of five new methods addressing gaps identified in our survey while striving to find a balance between optimizing computational resources and improving the predictive power; and 3) used a comprehensive evaluation to quantify the prediction abilities of ten current methods and our five proposals and selected the top-5 friendship prediction methods for LBSNs. We thus present a general panorama of friendship prediction task in the LBSN domain with balanced depth so as to facilitate research and real-world application design regarding this important issue.
Introduction
In the real world, many social, biological, and information systems can be naturally described as complex networks in which nodes denote entities (individuals or organizations) and links represent different interactions between these entities. A social network is a complex network in which nodes represent people or other entities in a social context, whilst links represent any type of relationship among them, like friendship, kinship, collaboration or others [START_REF] Barabási | Network Science[END_REF].
With the growing use of Internet and mobile devices, different web platforms such as Facebook, Twitter and Foursquare implement social network environments aimed at providing different services to facilitate the connection between individuals with similar interests and behaviors. These platforms, also called as online social networks (OSNs), have become part of the daily life of millions of people around the world who constantly maintain and create new social relationships [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Yu | Friend recommendation with content spread enhancement in social networks[END_REF]. OSNs providing location-based services for users to check-in in a physical place are called location-based social networks (LBSNs) [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF][START_REF] Zhu | Understanding the adoption of location-based recommendation agents among active users of social networking sites[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Ozdikis | Evidential estimation of event locations in microblogs using the dempstershafer theory[END_REF].
One fundamental problem in social network analysis is link prediction, which aims to estimate the likelihood of the existence of a future or missing link between two disconnected nodes based on the observed network information [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF][START_REF] Wu | A balanced modularity maximization link prediction model in social networks[END_REF]. In the case of LBSNs, the link prediction problem should be dealt with by considering the different kinds of links [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Li | Mining user similarity based on location history[END_REF]. Therefore, it is called friendship prediction when the objective is to predict social links, i.e. links connecting users, and location prediction when the focus is to predict userlocation links, i.e. links connecting users with places [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF].
Since location information is a natural source in LBSNs, several techniques have been proposed to deal with the location prediction problem [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF]. However, to the best of our knowledge no studies have analyzed the performance of friendship prediction methods in the LBSN domain.
In this paper, we review existing friendship prediction methods in the LBSN domain. Moreover, we organize the reviewed methods according to the different information sources used to make their predictions. We also analyze the different gaps between these methods and then propose five new friendship prediction methods which more efficiently explore the combination of the different identified information sources. Finally, we perform extensive experiments on well-known LBSNs and analyze the performance of all the friendship prediction methods studied not only in terms of prediction accuracy, but also regarding the quality of the correctly predicted links. Our experimental results highlight the most suitable friendship prediction methods to be used when real-world factors are considered.
The remainder of this paper is organized as follows. Section 2 briefly describes the formal definition of an LBSN. Section 3 formally explains the link prediction problem and how it is dealt with in the LBSN domain. This section also presents a survey of different friendship prediction methods from the literature. Section 4 presents our proposals with a detailed explanation on how they exploit different information sources to improve the friendship prediction accuracy. Section 5 shows experimental results obtained by comparing the efficiency of existing friendship prediction methods against our proposals. Finally, Section 6 closes with a summary of our main contributions and final remarks.
Location-Based Social Networks
A location-based social network (LBSN), also referred to as geographic social network or geo-social network, is formally defined as a specific type of social networking platform in which geographical services complement traditional social networks. This additional information enables new social dynamics, including those derived from visits of users to the same or similar locations, in addition to knowledge of common interests, activities and behaviors inferred from the set of places visited by a person and the location-tagged data generated during these visits [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Allamanis | Evolution of a location-based online social network: Analysis and models[END_REF][START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Narayanan | A study and analysis of recommendation systems for location-based social network (LBSN) with big data[END_REF].
Formally, we represent an LBSN as an undirected network G(V, E, L, Φ), where V is the set of users, E is the set of edges representing social links among users, L is the set of different places visited by all users, and Φ is the set of check-ins representing connections between users and places. This representation reflects the presence of two types of nodes: users and locations, and two kinds of links: user-user (social links) and user-location (check-ins), which is an indicator of the heterogeneity of LBSNs [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF] Multiple links and self-connections are not allowed in the set E of social links. On the other hand, only self-connections are not allowed in the set Φ of check-ins. Since a user can visit the same place more than once, the presence of multiple links connecting users and places is possible if a temporal factor is considered. Therefore, a check-in is defined as a tuple θ = (x, t, ), where x ∈ V , t is the check-in time, and ∈ L. Clearly, θ ∈ Φ and |Φ| defines the total number of check-ins made by all users.
Link Prediction
In this section, we formally describe the link prediction problem and how this mining task is addressed in the LBSN domain. Moreover, we also review a selected number of friendship prediction methods for LBSNs.
Problem Description
Link prediction is a fundamental problem in complex network analysis [START_REF] Barabási | Network Science[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF], hence in social network analysis [START_REF] Wu | A balanced modularity maximization link prediction model in social networks[END_REF][START_REF] Liu | Network growth and link prediction through an empirical lens[END_REF][START_REF] Valverde-Rebaza | Exploiting behaviors of communities of Twitter users for link prediction[END_REF][START_REF] Shahmohammadi | Presenting new collaborative link prediction methods for activity recommendation in Facebook[END_REF]. Formally, the link prediction problem aims at predicting the existence of a future or missing link among all possible pairs of nodes that have not established any connection in the current network structure [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF].
Consider as potential link any pair of disconnected users x, y ∈ V such that (x, y) / ∈ E. U denotes the universal set containing all potential links between pairs of nodes in
V, i.e. |U| = |V| × (|V| − 1)/2.
Also consider a missing link as any potential link in the set of nonexistent links U -E. The fundamental link prediction task here is thus to detect the missing links in the set of nonexistent links, while scoring each link in this set. Thus, a predicted link is any potential link that has received a score above zero as determined by any link prediction method. The higher the score, the more likely the link will be [START_REF] Barabási | Network Science[END_REF][START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF].
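A minimal sketch of this scoring view of link prediction, using the common-neighbors similarity mentioned later in Section 3.3 (the function and variable names are illustrative choices of this sketch):
```python
from itertools import combinations

def common_neighbors_scores(users, social_links):
    """Score every potential link in U - E by the number of shared friends."""
    neighbors = {u: set() for u in users}
    for x, y in social_links:
        neighbors[x].add(y)
        neighbors[y].add(x)
    scores = {}
    for x, y in combinations(sorted(users), 2):   # all pairs in U
        if y not in neighbors[x]:                 # keep only missing links, U - E
            scores[(x, y)] = len(neighbors[x] & neighbors[y])
    return scores

# Predicted links are the candidates with a score above zero, ranked by score.
scores = common_neighbors_scores({"a", "b", "c", "d"}, {("a", "b"), ("a", "c"), ("b", "d")})
predicted = sorted((p for p, s in scores.items() if s > 0), key=lambda p: -scores[p])
```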
From the set of all predicted links, L p , obtained by use of a link prediction method, we assume the set of true positives (T P ) as all correctly predicted links, and the set of false positives (F P ) as the wrongly predicted links. Thus, L p = T P ∪ F P . Moreover, the set of false negatives (F N ) is formed by all truly new links that were not predicted.
Therefore, evaluation measures such as the imbalance ratio, defined as IR = , can be used, as well as the harmonic mean of precision and recall, the F-measure, defined as F1 = 2 × (P × R)/(P + R) [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Pham | Ebm: An entropy-based model to infer social strength from spatiotemporal data[END_REF]. However, most research on link prediction considers that these evaluation measures do not give a clear judgment of the quality of predictions. For instance, a correctly predicted link might not be counted as a true positive if a link prediction method gives it a low score. To avoid this issue, two standard evaluation measures are used, AUC and precision@L [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF].
The area under the receiver operating characteristic curve (AUC) is defined as AUC = (n1 + 0.5 × n2)/n, where, from a total of n independent comparisons between pairs of positively and negatively predicted links, n1 times the positively predicted links were given higher scores than negatively predicted links whilst n2 times they were given equal scores. If the scores are generated from an independent and identical distribution, the AUC should be about 0.5; thus, the extent to which AUC exceeds 0.5 indicates how much better the link prediction method performs than pure chance. On the other hand, precision@L is computed as precision@L = Lr/L, where Lr is the number of correctly predicted links from the L top-ranked predicted links.
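Both measures can be computed directly from the assigned scores; a minimal sketch follows (the sampling-based estimate of AUC and the number of comparisons n are choices of this sketch):
```python
import random

def auc(pos_scores, neg_scores, n=10000, seed=0):
    """AUC = (n1 + 0.5*n2)/n over n random comparisons of positive vs. negative predicted links."""
    rng = random.Random(seed)
    n1 = n2 = 0
    for _ in range(n):
        p, q = rng.choice(pos_scores), rng.choice(neg_scores)
        if p > q:
            n1 += 1
        elif p == q:
            n2 += 1
    return (n1 + 0.5 * n2) / n

def precision_at_L(ranked_links, true_links, L):
    """precision@L = L_r / L, with L_r the correct links among the L top-ranked predictions."""
    top = ranked_links[:L]
    return sum(1 for link in top if link in true_links) / L
```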
Friendship Prediction in LBSNs
LBSNs provide services to their users to enable them to take better advantage of different resources within a specific geographical area, so the quality of such services can substantially benefit from improvements in link prediction [START_REF] Zhu | Understanding the adoption of location-based recommendation agents among active users of social networking sites[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Therefore, considering the natural heterogeneity of LBSNs, the link prediction problem for this type of network must consider its two kinds of links [START_REF] Bao | Recommendations in locationbased social networks: A survey[END_REF][START_REF] Zheng | Computing with Spatial Trajectories[END_REF], i.e. friendship prediction involves predicting user-user links [START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Xu-Rui | An algorithm for friendship prediction on location-based social networks[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF] whilst location prediction focuses on predict user-location links [START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF][START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF].
Friendship prediction is a traditional link prediction application, providing users with potential friends based on their relationship patterns and the social structure of the network [START_REF] Yu | Friend recommendation with content spread enhancement in social networks[END_REF]. Friendship prediction have been widely explored in LBSNs since it is possible to use traditional link prediction methods, such as common neighbors, Adamic-Adar, Jaccard, resource-allocation and preferential attachment, which are commonly applied and have been extensively studied in traditional social networks [START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Martínez | A survey of link prediction in complex networks[END_REF]. However, as location information is a natural resource in LBSNs, different authors have proposed friendship prediction methods to exploit it. Therefore, some methods use geographical distance [START_REF] Zhang | Distance and friendship: A distance-based model for link prediction in social networks[END_REF], GPS and/or check-in history [START_REF] Kylasa | Social ties and checkin sites: connections and latent structures in location-based social networks[END_REF], location semantics (tags, categories, etc.) [START_REF] Bayrak | Examining place categories for link prediction in location based social networks[END_REF] and other mobility user patterns [START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Pham | Ebm: An entropy-based model to infer social strength from spatiotemporal data[END_REF][START_REF] Xiao | Inferring social ties between users with human location history[END_REF] as information sources to improve the effectiveness of friendship prediction in LBSNs.
The friendship prediction task in LBSNs is still an open issue where there are constant advances and new challenges. Furthermore, the importance of the friendship prediction task is not only due to its well known application in friendship recommendation systems, but also because it opens doors to new research and application issues, such as companion prediction [START_REF] Liao | Who wants to join me?: Companion recommendation in location based social networks[END_REF], local expert prediction [START_REF] Cheng | Who is the Barbecue King of Texas?: A Geo-spatial Approach to Finding Local Experts on Twitter[END_REF][START_REF] Liou | Design of contextual local expert support mechanism[END_REF][START_REF] Niu | On local expert discovery via geo-located crowds, queries, and candidates[END_REF], user identification [START_REF] Rossi | It's the way you check-in: Identifying users in location-based social networks[END_REF][START_REF] Riederer | Linking users across domains with location data: Theory and validation[END_REF] and others.
Friendship Prediction Methods for LBSNs
Most existing link prediction methods are based on specific measures that capture similarity or proximity between nodes. Due to their low computational cost and ease of calculation, link prediction methods based on similarity are candidate approaches for real-world applications [START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Xu-Rui | Using multi-features to recommend friends on location-based social networks[END_REF][START_REF] Narayanan | A study and analysis of recommendation systems for location-based social network (LBSN) with big data[END_REF].
Although there is abundant literature related to friendship prediction in the LBSN context, the current literature lacks a well-organised and clearly explained taxonomy of existing methods. For the sake of clearly arranging these existing methods, this study proposes a taxonomy for friendship prediction methods for LBSNs based on the information sources used to perform their predictions. Figure 1 shows the proposed taxonomy. Friendship prediction methods for LBSNs use three information sources to compute the similarity between a pair of users: check-in, place, and social information. In turn, each information source has specific similarity criteria. Therefore, methods based on check-in information explore the frequency of visits at specific places and information gain. Methods based on place information commonly explore the number of user visits, regardless of frequency, to distinct places as well as the geographical distance between places. Finally, methods based on social information explore the social strength among users visiting the same places.
Here, we will give a systematic explanation of popular methods for friendship prediction in LBSNs belonging to each one of the proposed categories.
Methods based on Check-in Information
User mobility behaviors can be analyzed when the time and geographical information about the visited location are recorded at each check-in. The number of check-ins may be an indicator of users' preference for visiting specific types of places and, therefore, the key to establishing new friendships. Two of the most common similarity criteria used by methods based on check-in information are the check-in frequency and information gain.
Methods based on check-in frequency consider that the more check-ins two users have made at the same places, the more likely they are to establish a friendship relationship. Some representative methods based on check-in frequency are the collocation, distinct collocation, Adamic-Adar of places and preferential attachment of check-ins, among others [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Below, we present the definitions of two well-known friendship prediction methods for LBSNs based on check-in frequency.
Collocation (Co). This is one of the most popular methods based on check-in frequency. The collocation method, also referred to as the number of collocations or common check-in count, expresses the number of times that users x and y visited the same location within the same period of time. Thus, for a pair of disconnected users x and y, and given a temporal threshold τ ∈ ℝ, the Co method is defined as:

s^{Co}_{x,y,τ} = |Φ^{Co}(x, y, τ)|,    (1)

where Φ^{Co}(x, y, τ) = {(x, y, t_x, t_y, ℓ) | (x, t_x, ℓ) ∈ Φ(x) ∧ (y, t_y, ℓ) ∈ Φ(y) ∧ |t_x - t_y| ≤ τ} is the set of check-ins made by both users x and y at the same place ℓ within the same period of time, and Φ(x) = {(x, t, ℓ) | x ∈ V : (x, t, ℓ) ∈ Φ} is the set of check-ins made by user x at different places.
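As an illustration only, the brute-force sketch below computes the Co score from per-user check-in lists; the (timestamp, place) representation and the one-day window expressed in seconds are assumptions made for the example.

```python
def collocation_score(checkins_x, checkins_y, tau=86400.0):
    """Co: number of (check-in of x, check-in of y) pairs made at the same
    place with timestamps at most tau apart (tau = 1 day in seconds here)."""
    count = 0
    for t_x, place_x in checkins_x:
        for t_y, place_y in checkins_y:
            if place_x == place_y and abs(t_x - t_y) <= tau:
                count += 1
    return count
```

Grouping check-ins by place beforehand would avoid the quadratic pass over all pairs, which matters for users with many check-ins.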
Adamic-Adar of Places (AAP). This is based on the traditional Adamic-Adar method but considering the number of check-ins of common visited places of users x and y. Thus, for a pair of users x and y, AAP is computed as:
s^{AAP}_{x,y} = Σ_{ℓ ∈ Φ_L(x,y)} 1 / log |Φ(ℓ)|,    (2)

where Φ_L(x, y) = Φ_L(x) ∩ Φ_L(y) is the set of places commonly visited by users x and y, Φ_L(x) = {ℓ | ∀ℓ ∈ L : (x, t, ℓ) ∈ Φ(x)} is the set of distinct places visited by user x, and Φ(ℓ) = {(x, t, ℓ) | ℓ ∈ L : (x, t, ℓ) ∈ Φ} is the set of check-ins made by different users at location ℓ.

Although the number of check-ins may be a good indicator for the establishment of friendship between users, the fact that they have many check-ins at visited places may, on the contrary, reduce their chances of getting to know each other. To avoid this situation, some researchers have used the information gain of places as a resource to better discriminate whether a certain place is relevant to the formation of social ties between its visitors [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Some methods based on the information gain of places are min entropy, Adamic-Adar of entropy and location category, among others. Below, we present two well-known friendship prediction methods for LBSNs based on information gain.
Adamic-Adar of Entropy (AAE). This also applies the traditional Adamic-Adar method while considering the place entropy of the common locations of a pair of users x and y. Therefore, the AAE method is defined as:

s^{AAE}_{x,y} = Σ_{ℓ ∈ Φ_L(x,y)} 1 / log E(ℓ),    (3)

where E(ℓ) = -Σ_{x ∈ Φ_V(ℓ)} q_{x,ℓ} log(q_{x,ℓ}) is the place entropy of location ℓ, q_{x,ℓ} = |Φ(x, ℓ)| / |Φ(ℓ)| is the relevance of the check-ins of a user, Φ(x, ℓ) = {(x, t, ℓ) | (x, t, ℓ) ∈ Φ(x) ∧ ℓ ∈ Φ_L(x)} is the set of check-ins of user x at location ℓ, and Φ_V(ℓ) = {x | (x, t, ℓ) ∈ Φ(x) ∧ ℓ ∈ Φ_L(x)} is the set of visitors of location ℓ.
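For orientation, a minimal Python sketch of the place entropy and the AAE score follows; the (user, time, place) tuple layout is an assumption for the example, and skipping places whose entropy is at or below 1 is our own guard against non-positive logarithms rather than part of the original definition.

```python
import math
from collections import Counter, defaultdict

def place_entropy(checkins):
    """E(l) = -sum_x q_{x,l} log q_{x,l}, where q_{x,l} is the share of the
    check-ins at place l made by user x. `checkins` holds (user, time, place)."""
    per_place = defaultdict(Counter)
    for user, _, place in checkins:
        per_place[place][user] += 1
    entropy = {}
    for place, counts in per_place.items():
        total = sum(counts.values())
        entropy[place] = -sum((c / total) * math.log(c / total)
                              for c in counts.values())
    return entropy

def aae_score(places_x, places_y, entropy):
    """AAE: sum of 1 / log E(l) over the common places of the two users."""
    score = 0.0
    for place in places_x & places_y:
        e = entropy.get(place, 0.0)
        if e > 1.0:          # assumption: avoid log(E) <= 0
            score += 1.0 / math.log(e)
    return score
```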
Location Category (LC). This calculates the total sum of the ratio of the number of check-ins of all locations visited by users x and y to the number of check-ins of users x and y at these locations while disregarding those with a high place entropy. Therefore, considering an entropy threshold τ E ∈ R, the LC method is defined as:
s^{LC}_{x,y} = Σ_{ℓ ∈ Φ_L(x), E(ℓ) < τ_E}  Σ_{ℓ′ ∈ Φ_L(y), E(ℓ′) < τ_E}  (|Φ(ℓ)| + |Φ(ℓ′)|) / (|Φ(x, ℓ)| + |Φ(y, ℓ′)|).    (4)
Methods based on Place Information
Friendship prediction methods based on place information consider that locations are the main elements on which different similarity criteria can be used. Two of the most common similarity criteria used by methods based on place information are the number of distinct visitations and geographical distance.
Methods based on distinct visitations consider specific relations among the different places visited by a pair of users as the key to computing the likelihood of a future friendship between them. Some representative methods based on distinct visitations at specific places are the common location, Jaccard of places, location observation and preferential attachment of places, among others [START_REF] Cranshaw | Bridging the gap between physical location and online social networks[END_REF][START_REF] Steurer | Predicting social interactions from different sources of location-based knowledge[END_REF][START_REF] Steurer | Acquaintance or partner predicting partnership in online and location-based social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Below, we present two of the most representative friendship prediction methods for LBSNs based on distinct visitations.
Common Location (CL). This is inspired by the traditional common neighbors method and constitutes the simplest and most popular method based on distinct visitations at places to determine the homophily among pairs of users. The common location method, also known as common places or distinct common locations, expresses the number of common locations visited by users x and y. Thus, CL is defined as:

s^{CL}_{x,y} = |Φ_L(x, y)|,    (5)

where Φ_L(x, y) = Φ_L(x) ∩ Φ_L(y) is the previously defined set of commonly visited places of a pair of users x and y.
Jaccard of Places (JacP). This is inspired by the traditional Jaccard method. The Jaccard of places method is defined as the ratio of the number of common locations to the total number of distinct locations visited by users x and y. Therefore, JacP is computed as:

s^{JacP}_{x,y} = |Φ_L(x, y)| / |Φ_L(x) ∪ Φ_L(y)|.    (6)
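Both measures reduce to elementary set operations; the sketch below assumes that the sets of distinct visited places Φ_L(x) and Φ_L(y) are available as Python sets.

```python
def common_location_score(places_x, places_y):
    """CL: number of distinct places visited by both users."""
    return len(places_x & places_y)

def jaccard_of_places_score(places_x, places_y):
    """JacP: |common places| / |all distinct places visited by either user|."""
    union = places_x | places_y
    return len(places_x & places_y) / len(union) if union else 0.0
```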
On the other hand, since different studies have shown the importance of geographical or geospatial distance in the establishment of social ties, many authors have proposed to exploit this fact to improve friendship prediction. Some of the most representative methods based on geographical distance are the min distance, geodist, weighted geodist, Hausdorff distance and adjusted Hausdorff distance [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF][START_REF] Zhang | Distance and friendship: A distance-based model for link prediction in social networks[END_REF][START_REF] Li | Geo-social media analytics[END_REF]. Below, we discuss two representative friendship prediction methods for LBSNs based on geographical distance.
GeoDist (GeoD). This method is the most common of those based on geographical distance. Consider ℓ_{h_x}, the "home location" of user x, to be his or her most checked-in place. GeoD then computes the geographical distance between the home locations of users x and y. Thus, GeoD is calculated as:

s^{GeoD}_{x,y} = dist(ℓ_{h_x}, ℓ_{h_y}),    (7)

where dist(ℓ, ℓ′) is simply the well-known Haversine formula to calculate the great-circle distance between two points ℓ and ℓ′ over the Earth's surface [START_REF] Goodwin | The haversine in nautical astronomy[END_REF]. It is important to note that in this case two users are more likely to establish a friendship if they have a low GeoD value.
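A compact sketch of the Haversine distance and of the GeoD score is given below; the coordinate dictionary and the choice of kilometres are assumptions for the example, and, unlike the other scores, a smaller value indicates a more likely friendship.

```python
import math
from collections import Counter

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between two points given in degrees."""
    r = 6371.0                                   # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geodist_score(visits_x, visits_y, coords):
    """GeoD: distance between the home locations (most checked-in places).
    `visits_*` are lists of place ids (with repetitions); `coords` maps a
    place id to a (lat, lon) pair."""
    home_x = Counter(visits_x).most_common(1)[0][0]
    home_y = Counter(visits_y).most_common(1)[0][0]
    return haversine_km(*coords[home_x], *coords[home_y])
```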
Adjusted Hausdorff Distance (AHD). This method is based on the classic Hausdorff distance but applies an adjustment to improve the friendship prediction accuracy. The AHD method is thus defined as:

s^{AHD}_{x,y} = max{ sup_{ℓ ∈ Φ_L(x)} inf_{ℓ′ ∈ Φ_L(y)} dist_adj(ℓ, ℓ′),  sup_{ℓ′ ∈ Φ_L(y)} inf_{ℓ ∈ Φ_L(x)} dist_adj(ℓ, ℓ′) },    (8)

where dist_adj(ℓ, ℓ′) = dist(ℓ, ℓ′) × max(diversity(ℓ), diversity(ℓ′)) is the adjusted geographical distance between two locations ℓ and ℓ′, diversity(ℓ) = exp(E(ℓ)) is the location diversity used to represent a location's popularity, and sup and inf represent the supremum (least upper bound) and infimum (greatest lower bound), respectively, over the sets of visited places of the users. Similarly to the GeoD method, two users are more likely to establish a relationship if they have a low AHD value.
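The following sketch shows one possible implementation of the adjusted Hausdorff distance; `dist` is any geographic distance function (for instance the Haversine sketch above), and the handling of empty place sets is an assumption added for robustness.

```python
import math

def adjusted_hausdorff_score(places_x, places_y, coords, entropy, dist):
    """AHD: symmetric Hausdorff distance over the two sets of visited places,
    with each pairwise distance scaled by the larger place diversity exp(E(l))."""
    def dist_adj(l1, l2):
        diversity = max(math.exp(entropy.get(l1, 0.0)),
                        math.exp(entropy.get(l2, 0.0)))
        return dist(*coords[l1], *coords[l2]) * diversity

    def directed(src, dst):
        # sup over src of the inf over dst of the adjusted distance
        return max(min(dist_adj(a, b) for b in dst) for a in src)

    if not places_x or not places_y:
        return float("inf")
    return max(directed(places_x, places_y), directed(places_y, places_x))
```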
Methods based on Social Information
Despite the fact that most of the previously described methods capture different social behavior patterns based on the places visited by users, they do not directly use the social strength of the ties between visitors of places [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF].

In recent years, some methods have been proposed to compute the friendship probability between a pair of users based on the places visited by their common friends. Some methods based on social strength are common neighbors within and outside of common places, common neighbors of places, common neighbors with total and partial overlapping of places and total common friend common check-ins [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Below, we describe two representative friendship prediction methods for LBSNs based on social strength.
Common Neighbors of Places (CNP). This indicates that a pair of users x and y are more likely to have a future friendship if they have many common friends visiting the same places also visited by at least one of them. Thus, the CNP method is defined as:

s^{CNP}_{x,y} = |Λ^L_{x,y}|,    (9)

where Λ^L_{x,y} = {z ∈ Λ_{x,y} | Φ_L(x) ∩ Φ_L(z) ≠ ∅ ∨ Φ_L(y) ∩ Φ_L(z) ≠ ∅} is the set of common neighbors of places of users x and y, and Λ_{x,y} = {z ∈ V | (x, z) ∈ E ∧ (y, z) ∈ E} is the traditional set of common neighbors of the pair of users x and y.
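A direct Python sketch of CNP follows; `friends` and `places` are assumed dictionaries mapping each user to the set of his or her friends and distinct visited places, respectively.

```python
def cnp_score(x, y, friends, places):
    """CNP: common neighbors of x and y that share at least one visited
    place with x or with y."""
    common = friends[x] & friends[y]
    return sum(1 for z in common
               if places[x] & places[z] or places[y] & places[z])
```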
Common Neighbors with Total and Partial Overlapping of Places (TPOP).
This considers that a pair of users x and y could develop a friendship if they have more common friends visiting places also visited by both users than common friends who visited places also visited by only one of them. Therefore, the TPOP method is defined as:
s^{TPOP}_{x,y} = |Λ^{TOP}_{x,y}| / |Λ^{POP}_{x,y}|,    (10)

where Λ^{TOP}_{x,y} = {z ∈ Λ^L_{x,y} | Φ_L(x) ∩ Φ_L(z) ≠ ∅ ∧ Φ_L(y) ∩ Φ_L(z) ≠ ∅} is the set of common neighbors with total overlapping of places, and Λ^{POP}_{x,y} = Λ^L_{x,y} - Λ^{TOP}_{x,y} is the set of common neighbors with partial overlapping of places.
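The sketch below mirrors this definition with the same assumed `friends` and `places` dictionaries; returning the numerator when no partially overlapping neighbor exists is our own convention to avoid a division by zero.

```python
def tpop_score(x, y, friends, places):
    """TPOP: common neighbors overlapping places with both users divided by
    those overlapping places with only one of them."""
    total = partial = 0
    for z in friends[x] & friends[y]:
        with_x = bool(places[x] & places[z])
        with_y = bool(places[y] & places[z])
        if with_x and with_y:
            total += 1
        elif with_x or with_y:
            partial += 1
    return total / partial if partial else float(total)
```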
Proposals
We analyzed the reviewed link prediction methods and observed that some of them use more than one information source to improve their prediction accuracy. For example, AAP is naturally a method based on check-in frequency, but it also uses distinct visitations at specific places as an additional information source. Another example is AHD, which is naturally a method based on geographical distance but also uses check-in frequency and information gain as additional information sources. Table 1 provides an overview of the different information sources used by each friendship prediction method described in Section 3.3.
Table 1: Summary of the friendship prediction methods for LBSNs, from the literature and our proposals, as well as the information sources used to make their predictions.

From Table 1 we found that some information sources were not combined; for instance, social strength is only combined with distinct visitations at specific places. Assuming that the combination of some information sources could improve the friendship prediction accuracy, we propose five new methods referred to as check-in observation (ChO), check-in allocation (ChA), friendship allocation within common places (FAW), common neighbors of nearby places (CNNP) and nearby distance allocation (NDA). They are shown in bold in Table 1 and are described as follows:
Check-in Observation (ChO). This is based on both the distinct visitations at specific places and check-in frequency to perform predictions. We define ChO method as the ratio of the sum of the number of check-ins of users x and y at common visited places to the total sum of the number of check-ins at all locations visited by these users. Thus, ChO is computed as:
s^{ChO}_{x,y} = Σ_{ℓ ∈ Φ_L(x,y)} (|Φ(x, ℓ)| + |Φ(y, ℓ)|)  /  ( Σ_{ℓ ∈ Φ_L(x)} |Φ(x, ℓ)| + Σ_{ℓ ∈ Φ_L(y)} |Φ(y, ℓ)| ).    (11)
Check-in Allocation (ChA). Based on the traditional resource allocation method, ChA refines the popularity of all the common visited places of users x and y through the count of total check-ins at each of such places. Therefore, ChA is defined as:

s^{ChA}_{x,y} = Σ_{ℓ ∈ Φ_L(x,y)} 1 / |Φ(ℓ)|.    (12)
ChA heavily punishes high numbers of check-ins at popular places (e.g. public venues) by not applying a logarithmic function on the size of sets of all check-ins at these places. Similar to ChO, the ChA method is also based on both the distinct visitations at specific places and check-in frequency to work.
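Both proposals can be written in a few lines once per-user check-in counts per place are available; in the sketch below, `counts_x[l]` and `counts_y[l]` are the assumed numbers of check-ins of each user at place l, and `place_checkins[l]` is the assumed total number of check-ins at l by all users.

```python
def cho_score(counts_x, counts_y):
    """ChO: check-ins of x and y at common places over their total check-ins."""
    common = set(counts_x) & set(counts_y)
    num = sum(counts_x[l] + counts_y[l] for l in common)
    den = sum(counts_x.values()) + sum(counts_y.values())
    return num / den if den else 0.0

def cha_score(counts_x, counts_y, place_checkins):
    """ChA: resource-allocation style sum of 1/|Phi(l)| over common places."""
    common = set(counts_x) & set(counts_y)
    return sum(1.0 / place_checkins[l] for l in common if place_checkins.get(l))
```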
Friendship Allocation Within Common Places (FAW). This is also inspired by the traditional resource allocation method. Let the set of common neighbors within common visited places be Λ^{WCP}_{x,y} = {z ∈ Λ_{x,y} | Φ_L(x, y) ∩ Φ_L(z) ≠ ∅}; the FAW method refines the number of check-ins made by all common friends within the common visited places of users x and y. Therefore, FAW is defined as:

s^{FAW}_{x,y} = Σ_{z ∈ Λ^{WCP}_{x,y}} 1 / |Φ(z)|.    (13)
Despite the use of check-in frequency and distinct visitations at places by FAW, we consider that this method is mainly based on social strength, due to the fact that this criterion is the filter used to perform predictions.
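A minimal sketch of FAW is shown below; in addition to the `friends` and `places` dictionaries assumed earlier, `user_checkins[z]` is the assumed total number of check-ins of user z, i.e. |Φ(z)|.

```python
def faw_score(x, y, friends, places, user_checkins):
    """FAW: sum of 1/|Phi(z)| over common friends z that visited at least one
    place visited by both x and y."""
    common_places = places[x] & places[y]
    score = 0.0
    for z in friends[x] & friends[y]:
        if common_places & places[z] and user_checkins.get(z):
            score += 1.0 / user_checkins[z]
    return score
```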
Common Neighbors of Nearby Places (CNNP). This counts the number of common friends of users x and y whose home location lies within a given radius of the home location of at least one of them, x or y. Therefore, given a distance threshold τ_d, CNNP is computed as:

s^{CNNP}_{x,y} = |{z | ∀z ∈ Λ_{x,y} ∧ (dist(ℓ_{h_x}, ℓ_{h_z}) ≤ τ_d ∨ dist(ℓ_{h_y}, ℓ_{h_z}) ≤ τ_d)}|.    (14)
CNNP uses full place information as well as social information to make predictions, however we consider that it is a method based on social strength due to the fact that this criterion is fundamental for CNNP to work.
Nearby Distance Allocation (NDA). This refines all the minimum adjusted distances calculated between the home locations of users x and y, and the respective home locations of all of their common neighbors of places. Therefore, NDA is defined as:
s^{NDA}_{x,y} = Σ_{z ∈ Λ^L_{x,y}} 1 / min{dist_adj(ℓ_{h_x}, ℓ_{h_z}), dist_adj(ℓ_{h_y}, ℓ_{h_z})}.    (15)
NDA is the only method that uses full check-in, place and social information. However, as previously applied for the other proposals, since NDA uses social strength as the main criterion, we consider it to be a method based on social information.
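To close the description of the proposals, the sketch below outlines NDA; `home[u]` is the assumed most checked-in place of user u, `dist` a geographic distance function, and skipping a zero adjusted distance (two identical home locations) is our own guard, not part of the definition.

```python
import math

def nda_score(x, y, friends, places, home, coords, entropy, dist):
    """NDA: sum of reciprocals of the smaller adjusted distance between each
    common neighbor of places z and the homes of x and y."""
    def dist_adj(l1, l2):
        diversity = max(math.exp(entropy.get(l1, 0.0)),
                        math.exp(entropy.get(l2, 0.0)))
        return dist(*coords[l1], *coords[l2]) * diversity

    score = 0.0
    for z in friends[x] & friends[y]:
        if not (places[x] & places[z] or places[y] & places[z]):
            continue                     # keep only common neighbors of places
        d = min(dist_adj(home[x], home[z]), dist_adj(home[y], home[z]))
        if d > 0:
            score += 1.0 / d
    return score
```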
Performance Evaluation
In this section, we present an experimental evaluation carried out for all link prediction methods previously studied. This section includes an analysis of three real-world LBSN datasets with which the experiments were performed as well as a deep analysis of the predictive capabilities of each evaluated method.
Dataset Description
The datasets used in our experiments are real-world LBSNs in which users made check-ins to report visits to specific physical locations. In this section, we describe their main properties and ways to construct the training and test datasets.
Dataset Selection
The datasets used for our experiments had to meet certain requirements: i) they had to represent social and location data, i.e. data defining existing connections between users as well as the check-ins of all of them at all of their visited locations, and ii) those connections and/or check-ins had to be time stamped. Based on these two criteria, we selected three datasets collected from real-world LBSNs, which are commonly used by the scientific community for mining tasks in the LBSN domain.
Brightkite. This was once a location-based social networking service provider where users shared their locations by checking-in. The Brightkite service was shut down in 2012, but the dataset was collected over the April 2008 to October 2010 period [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF]. This publicly available dataset1 consists of 58228 users, 214078 relations, 4491144 check-ins and 772788 places.
Gowalla. This is also another location-based social networking service that ceased operation in 2012. The dataset was collected over the February 2009 to October 2010 period [START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF] and also is publicly available2 . This dataset contains 196591 users, 950327 relations, 6442892 check-ins and 1280969 different places.
Foursquare. Foursquare is one of the most popular online LBSNs. This service reported more than 50 million users, 12 billion check-ins and 105 million places in January 2018 3 . The dataset used for our experiments was collected over the January 2011 to December 2011 period [START_REF] Gao | gscorr: Modeling geo-social correlations for new check-ins on location-based social networks[END_REF]. This publicly available dataset 4 contains 11326 users, 23582 relations, 2029555 check-ins and 172044 different places.
The various properties of these datasets were calculated and the values are depicted in Table 2. This table is divided into two parts: the first shows topological properties [START_REF] Barabási | Network Science[END_REF] whilst the second shows location properties [START_REF] Zheng | Computing with Spatial Trajectories[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Considering the first part of Table 2, we observe that the analyzed networks have a small average degree, ⟨k⟩, which suggests that the users of these networks had between 4 and 10 friends on average. This implies that the average clustering coefficient, C, of the networks is also low. However, the low degree heterogeneity, H = ⟨k²⟩/⟨k⟩², of Brightkite and Foursquare indicates that their users are less different from each other than the users of Gowalla. Also, the assortativity coefficient r, which measures the preference of users to attach to others, shows that only Brightkite is assortative, as indicated by its positive value, revealing a tendency of its users to connect to others with a similar degree. On the other hand, Gowalla and Foursquare are disassortative, since their assortativity coefficients are negative, indicating the presence of a considerable number of relationships among users with different degrees. Considering the second part of Table 2, we observe that the number of users with at least one check-in, |Φ_V|, is a little over 85% of the total users of the networks. Despite the fact that Gowalla and Brightkite have more users and check-ins than Foursquare, the average number of check-ins per user, ⟨Φ⟩, of Foursquare users is greater than that of Gowalla and Brightkite users. However, the average number of check-ins per place, ⟨L_Φ⟩, is similar for Brightkite and Gowalla, whilst for Foursquare it is greater, i.e. Foursquare users made more check-ins at a specific place than Brightkite and Gowalla users. Finally, the very small average place entropy, ⟨E⟩ = (1/|L|) Σ_{ℓ ∈ L} E(ℓ), of Brightkite suggests that the location information in this LBSN is a stronger factor for facilitating the establishment of new relationships between users than it is for Gowalla and Foursquare users.
Data Processing
We preprocess the datasets to make the data suitable for our experiments. Considering that isolated nodes and locations without visits can generate noise when measuring the performance of different link prediction methods, it is necessary to apply a policy for selecting data samples containing more representative information. Therefore, for each dataset, we consider only users with at least one friend and with at least one check-in at any location.
Since our goal is to predict new friendships between users, we divided each dataset into training and test (or probe) sets while taking the available time stamp information into account. Therefore, links formed by Brightkite users who checked-in from April 2008 to January 2010 were used to construct the training set, whilst links formed by users who checked-in from February 2010 to October 2010 were used for the probe set. For Gowalla, the training set was constructed with links formed by users who checked-in from February 2009 to April 2010, and the probe set was constructed with links formed by users who checked-in from May 2010 to October 2010. For Foursquare, the training set is formed by users who checked-in from January 2011 to September 2011, whilst the probe set is formed by users who made check-ins over the October 2011 to December 2011 period. Table 3 shows the training and testing time ranges for the three datasets. Different studies have used a similar strategy for splitting data into training and probe sets, but they were not concerned about maintaining the consistency between users in both sets [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF][START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF], which could affect the performance of link prediction methods in different ways [START_REF] Yang | Evaluating link prediction methods[END_REF]. To avoid that, we proceeded to remove all links formed by users who checked-in only during the training time range or only during the testing time range. From the links formed by users with check-ins in both the training and testing time ranges, we chose at random, for the probe set, one-third of the links formed by users with a degree higher than the average degree, while the remaining links were part of the training set. Therefore, we obtained the training set G_T(V, E_T, L, Φ_T) and probe set G_P(V, E_P, L, Φ_P), where both sets keep the same users (V) and locations (L) but differ in the social (E_T and E_P) and user-location (Φ_T and Φ_P) links.
Table 3 also summarizes the average number of users, |V|, the average number of different locations, |L|, the average number of training social links, |E_T|, and the average number of testing social links, |E_P|, obtained by averaging 10 independent partitions of each dataset. It is important to note that, for the three datasets, the average number of check-ins in the training set, |Φ_T|, is two-thirds of the total number of check-ins, whilst the average number of check-ins in the probe set, |Φ_P|, is the remainder.
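The consistency filter described above can be sketched as follows; the (user, time, place) check-in tuples and the single split time are assumptions for the example, and the random selection of one-third of the links formed by high-degree users is omitted for brevity.

```python
def temporal_split(checkins, edges, t_split):
    """Keep only users with check-ins both before and after t_split, then
    split their check-ins into training and probe parts (simplified sketch)."""
    before = {u for u, t, _ in checkins if t <= t_split}
    after = {u for u, t, _ in checkins if t > t_split}
    keep = before & after
    kept_edges = [(u, v) for u, v in edges if u in keep and v in keep]
    train_checkins = [c for c in checkins if c[1] <= t_split and c[0] in keep]
    probe_checkins = [c for c in checkins if c[1] > t_split and c[0] in keep]
    return kept_edges, train_checkins, probe_checkins
```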
Data Limitations
Although the selected datasets contain thousands of users and links, they can be considered relatively small compared to other online social network datasets. Notwithstanding this limitation, we use them since they meet the requirements explained previously in Section 5.1.1 and also because they are frequently used in the state of the art to propose quantitative and qualitative analyses of the social and spatial factors impacting friendships [START_REF] Allamanis | Evolution of a location-based online social network: Analysis and models[END_REF][START_REF] Cho | Friendship and mobility: User movement in location-based social networks[END_REF][START_REF] Mengshoel | Will we connect again? machine learning for link prediction in mobile social networks[END_REF][START_REF] Bayrak | Contextual feature analysis to improve link prediction for location based social networks[END_REF]. Therefore, this work sheds new light on exploiting the different information sources to improve friendship prediction in Brightkite, Gowalla and Foursquare, but our findings could be applied to other LBSNs.
Some studies of the state-of-the-art use other datasets, e.g. Foursquare [START_REF] Luo | Friendship Prediction Based on the Fusion of Topology and Geographical Features in LBSN[END_REF][START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF], Facebook [START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF], Twitter [START_REF] Zhang | Transferring heterogeneous links across location-based social networks[END_REF], Second Life [START_REF] Steurer | Predicting social interactions from different sources of location-based knowledge[END_REF][START_REF] Steurer | Acquaintance or partner predicting partnership in online and location-based social networks[END_REF], and other LBSNs. But we cannot use them for two main reasons: i) generally they are not publicly available, and ii) they do not respect the requirements detailed in Section 5.1.1.
Experimental Setup
For each of the 10 independent partitions of each dataset obtained as explained in Section 5.1.2, we considered 10 executions of each link prediction method presented in Section 3 and our proposals described in Section 4. We then applied different performance measures to the prediction results to determine which were the most accurate and efficient link prediction methods.
All of the evaluation tests were performed using the Geo-LPsource framework, which we developed and is publicly available 5 . We set the default parameters of the link prediction methods as follows: i) for Co method we considered that τ = 1 day, ii) for LC method we considered that τ E = E , iii) for CNNP method we considered that τ d = 1500 m., and iv) for AHD method, for a user x and being the most visited place by him, we considered that the comparison .
Evaluation Results
For the three LBSNs analyzed, Table 4 summarizes the performance results for each link prediction method through different evaluation metrics. Each value in this table was obtained by averaging over 10 runs, over 10 partitions of training and testing sets, as previously detailed in Section 5.2. The values highlighted in bold correspond to the best results achieved for each evaluation metric. From Table 4, the imbalance ratio and F-measure results were calculated considering the whole list of predicted links obtained by each evaluated link prediction method; here the imbalance ratio is defined as IR = |L_p| / |TP|, and the F-measure (F_1) combines precision, defined as P = |TP| / (|TP| + |FP|), and recall, defined as R = |TP| / (|TP| + |FN|). On the other hand, the AUC results were calculated from a list of n = 5000 pairs of correctly and wrongly predicted links chosen randomly and independently. Due to the number of link prediction methods studied and the different ways they were evaluated, we performed a set of analyses to determine which were the best friendship prediction methods for LBSNs.
Reducing the Prediction Space Size
The prediction space size is related to the size of the set of predicted links, L_p. Most existing link prediction methods prioritize an increase in the number of correctly predicted links even at the cost of a huge number of wrong predictions. This generates an extremely skewed distribution of classes in the prediction space, which in turn impairs the performance of any link prediction method [START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Therefore, efforts should focus not only on reducing the number of wrong predictions but also on increasing the number of correctly predicted links relative to the total number of predictions.
Previous studies showed that the prediction space size of methods based only on the network topology is around 10 11 ∼ 10 12 links for Brightkite and Gowalla. However, by using methods based on location information, the prediction space can be reduced by about 15-fold or more [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF][START_REF] Scellato | Exploiting place features in link prediction on location-based social networks[END_REF]. Based on that and to determine if reduction of the prediction space is related to different information sources, in Figure 2 we report the average prediction space size of the different link prediction methods analyzed in this study. Figure 2 shows that for the analyzed networks, methods based on checkin frequency, information gain, distinct visitations at places and geographical distance, followed the traditional logic of obtaining a high number of right predictions at the cost of a much higher number of wrong predictions [START_REF] Wang | Human mobility, social ties, and link prediction[END_REF]. On the other hand, methods based on social strength led to a considerably lower number of wrong predictions at the cost of a small decrease in the number of correctly predicted links relative to the results obtained by the first cited methods, which is important in a real scenario [START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Our proposals followed a similar scheme as methods based on social strength, leading to less wrong predictions.
This fact is clearly shown by the IR results in Table 4 where, besides highlighting that Co method generally had a better IR performance, we observed that some methods based on check-in frequency, information gain, distinct visitations at places and geographical distance had an IR higher than most methods based on social strength and our proposals. Therefore, Co was the method with the overall best IR performance, whilst GeoD and AHD were the worst ones.
Considering only our proposals, we found that FAW and CNNP performed better in IR. These two methods have social components, which help to significantly reduce the prediction space size. The worst IR performance of our proposals was obtained by NDA, which is based on geographical distance, thus confirming that this type of information source generates a large prediction space.
Measuring the Accuracy
Since the IR results showed that some methods obtained a considerable number of correctly predicted links whilst others obtained an absurdly large number of wrongly predicted links, we adopted the f-measure (F_1) to evaluate the performance of the prediction methods in terms of relevant predicted links. We observe that the FAW method, which is one of our proposals, had the best f-measure performance in the three analyzed LBSNs.
To facilitate the analysis of all link prediction methods, based on Table 4 we ranked the average F 1 results obtained by all the link prediction methods in the three analyzed networks, and then we applied the Friedman and Nemenyi posthoc tests [START_REF] Demšar | Statistical comparisons of classifiers over multiple data sets[END_REF]. Therefore, the F-statistics with 14 and 28 degrees of freedom and at the 95 percentile was 2.06. According to the Friedman test using Fstatistics, the null-hypothesis that the link prediction methods behave similarly when compared with respect to their F 1 performance should be rejected. and TPOP, performed better than the others since occupied the first and second position, respectively. Co and ChO are in third and fourth position, respectively, whilst JacP and CL tied for the fifth position. After these methods, and a little further away, we have that ChA, AAP and AAE tied for the sixth position, CNNP and NDA are in seventh and eight position, respectively. CNP is ninth, AHD is tenth , GeoD is eleventh and LC is twelfth. Therefore, we observe that two of our proposals, FAW and ChO, are in the top-5. Moreover, methods based on information gain, such as LC, and methods based on geographical distance, such as GeoD and AHD, were at the end of the ranking.
Analyzing the Predictive Power
Table 4 also shows the prediction results obtained for AUC. From these results, we observed that CNP, GeoD and JacP outperformed all the other link prediction methods in Brightkite, Gowalla and Foursquare, respectively. In addition, we found that all the link prediction methods performed better than pure chance, except for LC in Foursquare.
Furthermore, to gain further insight into the real prediction power of the evaluated link prediction methods, we followed the same scheme used previously for the F_1 analysis. Therefore, we ranked the average AUC results obtained by all the link prediction methods, and then applied the Friedman and Nemenyi post-hoc tests. Similarly to the F_1 analysis, the critical value of the F-statistics with 14 and 28 degrees of freedom and at the 95 percentile was 2.06. However, unlike the F_1 analysis, this time the Friedman test suggested that the null-hypothesis that the link prediction methods behave similarly when compared by their AUC performance should not be rejected.
Figure 3(b) shows the Nemenyi test results for the evaluated methods ranked by AUC. The diagram indicates that the CD value calculated at the 95 percentile was 12.38. This test also showed that the link prediction methods have no statistically significant difference, so they are connected by a bold line.
Figure 3(b) indicates that, differently from the F_1 analysis, this time the methods based on geographical distance and information gain are in the first positions. Thus, GeoD and AAE are in first and second position, respectively. JacP is third, whilst FAW and ChA tied for the fourth position and AAP is fifth. The rest of the ranking was in the following order: NDA, CNP, AHD, ChO, CL, TPOP, Co, CNNP and LC. In this ranking, we also have two of our proposals in the top-5: FAW and ChA. To our surprise, LC remains in last position, and some methods that performed well in the F_1 ranking, such as TPOP, Co and CL, this time were in lower positions.
Obtaining the Top-5 Friendship Prediction Methods
Since some link prediction methods performed better in the prediction space analysis whilst other ones did in the prediction power analysis, we analyzed the F 1 and AUC results at the same time. Therefore, from Table 4 we ranked the average F 1 and AUC results obtained by all the link prediction methods, and then applied Friedman and Nemenyi post-hoc tests to them. The critical F-statistic value with 14 and 70 degrees of freedom and at the 95 percentile was 1.84. Based on this F-statistic, the Friedman test suggested that the nullhypothesis that the methods behave similarly when compared according to their F 1 and AUC performances should be rejected.
Figure 4 shows the Nemenyi test results for the analyzed methods in our final ranking. The diagram in Figure 4 indicates that the CD value at the 95 percentile is 8.76. From the diagram in Figure 4, we observe that FAW has a statistically significant difference with LC.
Figure 4 indicates that FAW is in first position, JacP is second, AAE is third, ChA is fourth and AAP is fifth. ChO and TPOP tied for the sixth position. The rest of the ranking was in the following order: CL, GeoD, NDA, Co, CNP, AHD, CNNP and LC. Therefore, two of our proposals, FAW and ChA, are in the top-5 of the final ranking. LC definitively has the worst performance. Note that the methods in the top-5 belong to the different information sources identified in this study, so we have a method based on social strength (FAW), a method based on distinct visitations at places (JacP), a method based on information gain (AAE) and two methods based on check-in frequency (ChA and AAP). The only one missing in the top-5 of the final ranking is some method based on geographical distance.
For recommending some links to users as possible new friendships, we can simply select the links with the highest scores [START_REF] Liben-Nowell | The link-prediction problem for social networks[END_REF][START_REF] Lü | Link prediction in complex networks: A survey[END_REF][START_REF] Valverde-Rebaza | Exploiting social and mobility patterns for friendship prediction in location-based social networks[END_REF]. Furthermore, for the recommendation task it is not enough to have a method with good overall prediction performance; it is also necessary that, from a limited portion of the total predicted links, it generates a number of right predictions high enough to be shown to users as appropriate friendship suggestions [START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF][START_REF] Ahmadian | A social recommendation method based on an adaptive neighbor selection mechanism[END_REF]. Therefore, to assess the performance of the top-5 methods from the final ranking through limited segments of the total list of predicted links, we analyzed them by precisi@L. Figure 5 shows the different precisi@L performances for the top-5 methods of our final ranking. These precisi@L results were calculated for different L values and for each analyzed LBSN.
Figure 5 indicates that most of the evaluated methods performed best when L = 100, i.e. they are able to make a few accurate predictions. When link prediction methods have to make more than a thousand predictions, i.e. when L > 1000, their prediction abilities decrease considerably. Moreover, Figure 5 shows that the evaluated methods behave similarly in the three analyzed LBSNs. Thus, ChA, AAP and AAE performed similarly, with a slight superiority of ChA. Moreover, JacP and FAW showed similar performance, with a slight superiority of JacP. Analyzing the precisi@L performance of the methods in each analyzed network, Figure 5(a) shows that in Brightkite, AAP outperformed all the other evaluated methods when L = 100. Thereafter, our proposal FAW performed better than the rest of the methods for the remaining L values, whereas JacP performed poorly. Figure 5(b) shows that in Gowalla, JacP achieved the best performance for all of the L values. One of our proposals, ChA, ranks second when L = 100 and remains in third position for the rest of the L values. When L = 1000, another of our proposals, FAW, reaches the second position and holds it for the rest of the L values. Finally, Figure 5(c) shows that the methods in Foursquare achieved very low precisi@L values (less than 0.2). In this network, ChA outperformed all the methods when L = 100 but it is overcome by JacP, which keeps the second position for the rest of the L values. Our proposal FAW performed poorly when L = 100 but reaches the third position when L = 1000 and maintains it, since it slightly exceeds AAE and AAP.
Conclusion
In last years, a variety of online services which provide users with easy ways to share their geo-spatial locations and location-related content have become popular. These services, called LBSNs, constitute a new type of social network and give rise to new opportunities and challenges with regard to different social network issues, such as location recommendation [START_REF] Mcgee | Location prediction in social media based on tie strength[END_REF][START_REF] Wang | A recommender system research based on location-based social networks[END_REF][START_REF] Pálovics | Locationaware online learning for top-k recommendation[END_REF], user identification [START_REF] Rossi | It's the way you check-in: Identifying users in location-based social networks[END_REF][START_REF] Riederer | Linking users across domains with location data: Theory and validation[END_REF], discovery of local experts [START_REF] Cheng | Who is the Barbecue King of Texas?: A Geo-spatial Approach to Finding Local Experts on Twitter[END_REF][START_REF] Liou | Design of contextual local expert support mechanism[END_REF][START_REF] Niu | On local expert discovery via geo-located crowds, queries, and candidates[END_REF], and discovery of travel companions [START_REF] Liao | Who wants to join me?: Companion recommendation in location based social networks[END_REF]. Motivated by the important role that LBSNs are playing for millions of users, we conducted a survey of recent related research on friendship prediction and recommendation.
Although there are abundant methods to tackle the friendship prediction problem in the LBSN domain, the current literature lacks a well-organised and clearly explained taxonomy that helps to make the best use of it. Therefore, our first contribution in this work was to propose a taxonomy for friendship prediction methods for LBSNs based on five identified information sources: frequency of check-ins, information gain, distinct visitations at places, geographical distance and social strength.
Based on the proposed taxonomy, we identified some gaps in existing friendship prediction methods and proposed five new ones: check-in observation (ChO), check-in allocation (ChA), friendship allocation within common places (FAW), common neighbors of nearby places (CNNP) and nearby distance allocation (NDA). These new methods are specifically designed for the friendship prediction task in the LBSN context and constitute our second contribution.
Since we aimed to objectively quantify the predictive power of friendship prediction methods in LBSNs, as well as to determine how well they work in the context of recommender systems, our third contribution is the identification of the top-5 friendship prediction methods that perform best in the LBSN context. For this purpose, we performed an exhaustive evaluation process on snapshots of three well-known real-world LBSNs.
Based on our results, we empirically demonstrate that some friendship prediction methods for LBSNs could be ranked as the best for some evaluation measure but could perform poorly for other ones. Thus, we stressed the importance of choosing the appropriate measure according to the objective pursued in the friendship prediction task. For instance, in general, some friendship prediction methods performed better with regard to the F-measure than with AUC, so if in any real-world application it is necessary to focus on minimizing the number of wrong predictions, the best option is to consider methods that work well based on the F-measure. However, if the focus is to obtain a high number of right predictions, but with a high chance that these predictions represent strong connections, then the best option could be to consider methods that work well based on AUC.
Nevertheless, in a real-world scenario it will likely be necessary to balance both the F-measure and AUC performance of the methods. Thus, we finally identified the top-5 friendship prediction methods that performed in a balanced way across the different metrics. Moreover, this top-5 contains two of our proposals, FAW in the first position and ChA in the fourth.
Another observation based on our results is that the use of a variety of information sources does not guarantee the best performance of a method. For instance, the NDA method, which is one of our proposals, is the only one that uses all the identified information sources, but it appears in the ninth position of our final ranking. Finally, we also observe that methods based purely on check-in information or place information performed worse than methods combining these information sources with social information. Therefore, we have an empirical foundation to support the argument that the best way to cope with the friendship prediction problem in the LBSN context is by combining social strength with location information.
The future directions of our work will focus on location prediction, which will be used to recommend places that users could visit. For that, we hope that the location information sources identified in this work can also be used in the location prediction task.
Figure 1: Information sources and the different similarity criteria used by existing methods to perform friendship prediction in LBSNs.
Figure 2: Number of correctly and wrongly predicted links for methods based on check-in frequency (G1), information gain (G2), distinct visitations at places (G3), geographical distance (G4), social strength (G5) and our proposals (G6) for (a) Brightkite, (b) Gowalla and (c) Foursquare. The dashed horizontal lines indicate the number of truly new links (links in the probe set) for each dataset. Results averaged over the 10 analyzed partitions and plotted in log10 scale.
Figure 3: Nemenyi post-hoc test diagrams obtained from (a) f-measure and (b) AUC results shown in Table 4. Our proposals are highlighted in bold.
Figure 4: Nemenyi post-hoc test diagram obtained over the F1 and AUC average ranks shown in Table 4. The diagram shows the final ranking of link prediction methods considering both the optimal reduction of prediction space size and high prediction power. Our proposals are highlighted in bold.
Figure 5: Precisi@L performance for the top-5 methods of the final ranking considering different L values for (a) Brightkite, (b) Gowalla and (c) Foursquare.
Table 2: The main properties of the experimental LBSNs.

            Brightkite   Gowalla    Foursquare
|V|         58228        196591     11326
|E|         214078       950327     23582
⟨k⟩         7.35         9.66       4.16
C           0.17         0.24       0.06
H           8.66         31.71      7.66
r           0.01         -0.03      -0.07
|Φ|         4491144      6442892    2029555
|Φ_V|       50686        107092     9985
⟨Φ⟩         88           60         179
|L|         772788       1280969    172044
⟨L_Φ⟩       5            5          11
⟨E⟩         0.05         0.25       0.19
3 https://foursquare.com/about
4 http://www.public.asu.edu/˜hgao16/Publications.html
Table 3: Details of pre-processed datasets.

Dataset      Training time range   Testing time range   |V|     |L|      |E_T|    |E_P|
Brightkite   2008/04 - 2010/01     2010/02 - 2010/10    4606    277515   49460    24800
Gowalla      2009/02 - 2010/04     2010/05 - 2010/10    19981   607094   232194   87619
Foursquare   2011/01 - 2011/09     2011/10 - 2011/12    7287    101546   12258    8565
Table 4: Friendship prediction results for Brightkite, Gowalla and Foursquare. Highlighted values indicate the best results for each evaluation metric considered.

             Brightkite              Gowalla                 Foursquare
Method       IR      F1     AUC      IR      F1     AUC      IR      F1     AUC
Co 4.934 0.070 0.668 14.972 0.051 0.554 4.488 0.045 0.554
AAP 13.190 0.104 0.682 36.531 0.045 0.728 13.367 0.034 0.655
AAE 13.190 0.104 0.694 36.586 0.045 0.736 13.367 0.034 0.670
LC 34.000 0.055 0.629 180.945 0.011 0.542 27.844 0.017 0.470
CL 13.114 0.105 0.676 36.327 0.045 0.682 13.368 0.034 0.630
JacP 13.114 0.105 0.630 36.327 0.045 0.742 13.368 0.034 0.708
GeoD AHD CNP Brightkite 35.005 0.053 31.689 0.056 31.180 0.060 0.761 0.710 0.685 Gowalla 180.461 0.011 0.767 223.714 0.011 0.681 66.484 0.029 0.687 Foursquare 35.710 0.018 35.782 0.018 23.277 0.027 0.705 0.656 0.608
TPOP 13.441 0.105 0.673 25.383 0.057 0.665 12.588 0.036 0.594
ChO 13.079 0.104 0.608 31.197 0.050 0.714 13.292 0.034 0.671
ChA 13.173 0.104 0.676 36.460 0.045 0.736 13.367 0.034 0.667
FAW 9.678 0.113 0.740 15.821 0.069 0.718 7.764 0.046 0.642
CNNP 9.387 0.048 0.552 18.868 0.046 0.620 4.920 0.039 0.569
NDA 22.496 0.076 0.700 47.540 0.037 0.720 15.325 0.024 0.624
since G is an undirected network.
http://snap.stanford.edu/data/loc-brightkite.html
http://snap.stanford.edu/data/loc-gowalla.html
https://github.com/jvalverr/Geo-LPsource
Acknowledgments
This research was partially supported by Brazilian agencies FAPESP (grants 2015/14228-9 and 2013/12191-5), CNPq (grant 302645/2015-2), and by the French SONGES project (Occitanie and FEDER). |
01761315 | en | [
"chim.mate"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01761315/file/150707_thermal_paper_Bresson_JCOMB%2BrevisionJCOMB_OC.pdf | G Bresson
A Ahmadi-Sénichault
O Caty
V Ayvazyan
M L L Gregori
S F Costa
G L Vignoles
Thermographic and tomographic methods for tridimensional characterization of thermal transfer in silica/phenolic composites
Keywords: A. Polymer-Matrix Composites (PMCs), B. thermal properties, C. Computational modeling, D. Non-destructive testing; D. X-ray tomography, D. Thermal Analysis
thermal properties are compared. Several experimental methods have been used, including space-resolved diffusivity determinations. Numerical results are compared to experimental ones in terms of transverse and longitudinal thermal conductivities of the composites, and were found to be in good agreement. A discussion is made on the different possible sources of uncertainty for both methods.
Nomenclature
Roman letters
a      Thermal diffusivity (m² s⁻¹)
c_p    Mass thermal capacity (J K⁻¹ kg⁻¹)
       Longitudinal macroscopic thermal conductivity of a composite (W m⁻¹ K⁻¹)
Introduction:
The development of new composites with low weight and high performance is an important issue for the production of thermal protection systems (TPS) for atmospheric re-entry of space objects [START_REF] Laub | Thermal Protection System Technology and facility Needs for Demanding Future Planetary Missions[END_REF]. Among them, silica-phenolic composites have shown excellent thermal performances [START_REF] Dauchier | Matériaux composites phénoliques ablatifs[END_REF], and very good ablation resistance [START_REF] Sykes | Decomposition characteristics of a char-forming phenolic polymer used for ablative composites[END_REF][START_REF] Kanno | Development of ablative material for planetary entry[END_REF]. In order to reduce processing costs and to optimize the material with respect to its mission, recent works [START_REF] Gregori | Mechanical and Ablative Properties of Silica-Phenolic Composites for Thermal Protection[END_REF][START_REF] Gregori | Ablative and mechanical properties of quartz phenolic composites[END_REF] have focused on the fabrication and characterization of composites in which the reinforcement is in the form of discontinuous chopped fabric pieces. This kind of improvement has been studied on aramid/epoxy composites [START_REF] Sohn | Mode II delamination toughness of carbon-fibre/epoxy composites with chopped kevlar fibre reinforcement[END_REF], and more generally on polymer-matrix composite prepregs for damage tolerance through mechanical testing [START_REF] Greenhalgh | The assessment of novel materials and processes for the impact tolerant design of stiffened composite aerospace structures[END_REF]. The result is an elimination of the tendency of the material to delaminate [START_REF] Gregori | Mechanical and Ablative Properties of Silica-Phenolic Composites for Thermal Protection[END_REF], as compared to the same material with 2D continuous fabric layup reinforcement, but at the expense of some abatement in mechanical properties like stiffness [START_REF] Kumar | Thermo-structural analysis of composite structures[END_REF].
In this paper, we focus on the determination of the thermal properties for this class of materials, comparing the « chopped » arrangement to the continuous fabric layup.
Two approaches will be developed in parallel and compared to each other. First, thermal measurements are performed by flash, hot disk and DSC methods. Second, the material structure assessed by microscopy and tomography and the properties of the components are used in an ad-hoc numerical procedure to predict the effective thermal diffusivity of the composite. The simultaneous development of these methods aims to contribute to the creation of a « material design tool » which can predict the influence of the reinforcement architecture on the effective thermal diffusivity or conductivity of the composite.
The materials and methods are first described; then, results from the thermographic and tomographic methods are presented and discussed.
Materials
The studied material samples were processed by Plastflow Ltd, Curitiba, Brazil, from a regular fabric, about 0.9 mm thick, of pure silica with an areal mass of 680 g/m 2 and a phenolic resin with 165 mPa.s viscosity, used at room temperature to obtain the prepreg material [START_REF] Gregori | Ablative and mechanical properties of quartz phenolic composites[END_REF]. Two configurations of the silica fiber reinforcement were studied (Figure 1). The first one is obtained by simply stacking the as-received prepreg layers, while the other one was made from the same prepregs, but in a chopped form. The "chopped fabric" composite is composed of a random arrangement of small, approximately square pieces of prepreg with an edge size of 20 to 28 mm. The two kinds of reinforcements are placed in multilayer configurations to obtain plates. Then, a hot compression process (15 min./70 bar/175ºC) and a second cure are applied to both materials (12h/1 atm/180°C). The final dimensions are 240 × 240 × 8 mm 3 .
Experimental methods
Properties of the components
Individual components were tested and the data are reported in Table 1. Density measurements were made with an Accupyc 1330 He pycnometer; the results are consistent with ref. [5]. The fiber average diameter was estimated from microscopic observations on approximately 350 fibers. The matrix thermal properties were obtained using flash thermography (diffusivity) and DSC (heat capacity). The fiber heat capacity was obtained by subtraction between the composite and the matrix values, and its diffusivity by a specific laser excitation method [START_REF] Vignoles | Measurement of the thermal diffusivity of a silica fiber bundle using a laser and an IR camera[END_REF]. The reported values are used in the numerical computations based on the tomographic images.
For each composite, the phase ratios were analyzed using Hg porosimetry (open porosity, skeletal density, and bulk density), combined with 3D (X-ray tomography) and 2D (optical microscopy) imaging. The results are synthesized in Table 1.
Thermal methods
Thermal analyses were carried out to obtain conductivities against which the numerical results can be validated; these values are also used to compare the thermal behavior of the two materials. First, an infrared thermographic analysis by a flash method was performed to measure the thermal diffusivity field a of the plates, averaged through their thickness. The thermal diffusivity gives access to the thermal conductivity λ when the specific heat capacity c_p and the density ρ are known, since a = λ/(ρ c_p). This method consists in applying a photothermal pulse (the flash) on the front face of the material and measuring the evolution of the temperature on the rear face. The technique described by Degiovanni et al. [START_REF] Degiovanni | Diffusivité thermique et méthode flash[END_REF] has been found more appropriate than the traditional method of Parker et al. [START_REF] Parker | Flash method of determining thermal diffusivity, heat capacity, and thermal conductivity[END_REF] because of non-negligible thermal losses; it has been applied here to the IR camera data for both composite samples, in both directions, and for the pure matrix.
The average of the camera-recorded thermal field was used as the signal to be processed. It should be noted that the direction perpendicular to the composite plates will be called transverse, while those parallel to the plate will be called longitudinal.
The experimental setup is composed of a flashlight device (Elinchrom Style RX1200) acting on the front face of the composite sample and an IR camera (FLIR SC7000, 320 × 256 pixels) periodically recording (25 Hz) thermal images of the rear face. Adiabatic conditions were ensured on the sample sides by inserting them inside a polystyrene foam holder. The flash impulsion is spread over the entire front face (30 × 30 mm²) so as to place the sample in 1D heat transfer conditions. All samples being semi-translucent, their flashed face was painted black.
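As a minimal illustration of how the flash-method output is combined with the DSC and density data, the sketch below converts a measured diffusivity into a conductivity through a = λ/(ρ c_p); the numerical values are illustrative placeholders, not measurements from this work.

```python
# Minimal sketch: thermal conductivity from flash diffusivity, density and heat capacity.
# The numerical values below are illustrative placeholders, not data from this study.

def conductivity(diffusivity_m2_s: float, density_kg_m3: float, cp_J_kgK: float) -> float:
    """lambda = a * rho * c_p, in W m^-1 K^-1."""
    return diffusivity_m2_s * density_kg_m3 * cp_J_kgK

a_transverse = 3.0e-7   # m^2/s, assumed example value from a flash measurement
rho = 1750.0            # kg/m^3, assumed bulk density of the composite
cp = 1000.0             # J/(kg K), assumed heat capacity from DSC

print(f"lambda = {conductivity(a_transverse, rho, cp):.3f} W/(m K)")
```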
The hot disk method [13,14] was also used to validate the values measured by thermography and to obtain the thermal conductivities in the principal directions. The measurements were performed with a TPS 2500 device by Hot Disk AB (Sweden).
DSC measurements were carried out on a Setaram Labsys TM 131 Evo apparatus, with a 5°C/min heating rate, between ambient temperature and 110°C.
Calculations
The model used to calculate the diffusivity and the conductivity is based on 2D/3D image analysis and on a double change of scale. The method is described in ref. [19] for a fibrous C/C composite, but it is also applicable to a large number of fibrous composites, provided that µCT images of the material at the different scales are available.
Imaging
The arrangement of fibers, pores and matrix, responsible for the specific thermal properties of the composites, exhibits two scales of heterogeneity that must be analyzed successively. They are illustrated in Figure 2. Standard micrographic imaging of transverse sections of yarns (Figure 2d) is sufficient to extract the structural information at this scale, the fibers being mostly parallel to each other. The low-resolution images (Figures 2a,b,c) provide information only on the weaving pattern in the representative elementary volume (REV), but 3D information is needed for accuracy; therefore, X-ray tomographs have also been used.
Micrographs were acquired on a standard optical microscope (Nikon Eclipse ME600) and a numerical microscope (Keyence VHX-1000). The ×50 magnification was optimal to analyze the fibers and was mostly used in this study. Several samples were cut in the plates and mechanically polished for observation. The micrographs were numerically acquired and the fibers and matrix components were separated by simple thresholding, followed by the application of a watershed algorithm, using ImageJ [START_REF] Rasband | ImageJ software[END_REF].
X-ray Computerized Tomographs (µCT) [16] were acquired using a laboratory device (Phoenix X-ray Nanotom) operated in absorption mode with a resolution of 5 µm/pixel. The samples were extracted from the initial composite plates so as to be fully contained in the field of view at the attempted resolution. Reconstruction from the radiographs was performed with the standard filtered back-projection algorithm [17,18]. Examples of 3D µCT reconstructions are presented in Figure 1c,d.
First change of scale based on 2D imaging
Knowledge of the detailed geometry and of the thermal properties of the components (fibers, matrix) of the composites is necessary to perform the first change of scale. For this, 2D micrographs taken perpendicularly to the yarn direction have been used. The yarn is assumed to be unidirectional at this scale (i.e. the pattern seen in the perpendicular section remains unchanged along the third direction). The parallel thermal conductivity λ_// is therefore simply obtained by a law of mixtures:
λ_// = φ_f λ_f + φ_m λ_m    (1)
where φ_f and φ_m are respectively the volume fractions of the fibers and of the matrix in the material, and λ_f and λ_m are respectively the conductivities of the fibers and of the matrix.
For the transverse conductivities, image analysis and computations are necessary.
The numerical procedure used is based on a change of scale procedure using the Volume Averaging Method. Starting from the equations at a given scale, this method leads to higher scale equations and closure problems allowing the determination of effective properties. The closure problem obtained for heat conduction is solved numerically with a homemade Finite Volume solver in a 1 voxel-thick 3D periodic unit cell representative of the studied medium, thus leading to the determination of the effective heat conductivity tensor [START_REF] Lux | Macroscopic thermal properties of real fibrous materials: Volume averaging method and 3D image analysis[END_REF]. For a given image, the fiber volume fraction is also assessed. The results are used to yield expressions relating the transverse and parallel conductivities to the local fiber volume fraction.
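The sketch below is a simplified stand-in for this closure computation: it estimates the effective transverse conductivity of a 2D fiber/matrix pixel map by a finite-volume solve of steady conduction. Unlike the authors' solver, it uses fixed-temperature (rather than periodic) boundary conditions, and the toy microstructure and conductivities are assumptions for illustration only.

```python
# Simplified stand-in for the closure computation on a 1-voxel-thick cell: effective
# transverse conductivity of a 2D pixel map, with fixed temperatures on two opposite
# edges and insulated sides. Not the authors' periodic Volume Averaging solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def effective_conductivity(k_map: np.ndarray) -> float:
    """k_map[ny, nx]: local conductivity per pixel (unit pixel size).
    T = 1 on the left column, T = 0 on the right column, insulated top/bottom."""
    ny, nx = k_map.shape
    idx = lambda j, i: j * nx + i
    A = sp.lil_matrix((ny * nx, ny * nx))
    b = np.zeros(ny * nx)
    for j in range(ny):
        for i in range(nx):
            n = idx(j, i)
            if i == 0:                        # Dirichlet, T = 1
                A[n, n] = 1.0; b[n] = 1.0; continue
            if i == nx - 1:                   # Dirichlet, T = 0
                A[n, n] = 1.0; b[n] = 0.0; continue
            for dj, di in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                jj, ii = j + dj, i + di
                if 0 <= jj < ny and 0 <= ii < nx:
                    # face conductance = harmonic mean of the two cell conductivities
                    g = 2.0 * k_map[j, i] * k_map[jj, ii] / (k_map[j, i] + k_map[jj, ii])
                    A[n, n] += g
                    A[n, idx(jj, ii)] -= g
    T = spla.spsolve(A.tocsc(), b).reshape(ny, nx)
    # total heat flow through the faces between the first and second columns
    g01 = 2.0 * k_map[:, 0] * k_map[:, 1] / (k_map[:, 0] + k_map[:, 1])
    q = np.sum(g01 * (T[:, 0] - T[:, 1]))
    return q * (nx - 1) / ny                  # k_eff with dT = 1 and unit pixel size

# Toy microstructure: random "fiber" pixels (k_f) in a matrix (k_m) -- assumed values.
rng = np.random.default_rng(0)
k_f, k_m, phi_f = 1.4, 0.3, 0.5
image = rng.random((40, 40)) < phi_f
print("k_eff ~", round(effective_conductivity(np.where(image, k_f, k_m)), 3))
```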
Second change of scale based on 3D µCT
The proposed procedure takes into account the true space distribution of phases and orientations, as sampled at the macroscale (see ref. [19] for details). The hypothesis made is that the phase-distribution vs. property correlation is correctly sampled at the microscale and directly transposable to macroscale computations. The procedure consists in computing the effective thermal properties of the low-resolution images from a field of small-scale properties by the Volume Averaging Method. It starts with the subdivision of the low-resolution images into sub-images; then, inside every sub-image, the fiber volume fraction and the local fiber orientation are evaluated. This part of the algorithm is carried out by binarizing the image at a 50% threshold, letting random walkers run in one of the two phases, computing the variance of their centered displacements, and selecting the eigenvector associated with the highest eigenvalue (i.e. the direction of fastest diffusion). Once this has been done, local property tensors may be assigned to every sub-image; the values of the parallel and perpendicular conductivities come from the results of the first change of scale (see above), as a function of the local grayscale level, and the tensors are rotated so as to be expressed in the axes of the sample image considered. Finally, a change-of-scale computation is carried out with the same numerical tool [20] as for the first one.
Results
Experimental
For each sample, the thermal field has been recorded as a function of time. From this thermal field evolution, it is possible to determine the thermal transverse diffusivity (averaged on composite thickness) for each pixel [21,22] (see Figure 3a,b) and to plot the thermal diffusivity histograms (Figure 3c). These maps and histograms demonstrate that, though both composites are quite similar in terms of thermal diffusivity magnitude, the chopped one displays more dispersion, as could be expected from its structure.
Moreover, thermal conductivities along transverse and longitudinal directions were measured by the hot disk method. These measurements only concern chopped composites and were done on 4 × 5 cm² samples. The results are in excellent agreement with those obtained through thermography, as can be seen in Table 2. Values are similar to those reported in [START_REF] Mottram | Thermal conductivity of fibre-phenolic resin composites. Part I. Thermal diffusivity measurements[END_REF].
Numerical results
As already mentioned, the method to calculate the equivalent conductivity tensor is based on 3D µCT images as input geometry of the material. But at the micro-scale, the resolution of µCT images was not sufficient, justifying the use of micrographs. We first present results on 2D micrographs for the first change of scale, and then on 3D µCT scans for the second change of scale.
First change of scale based on 2D imaging
The first change of scale has been performed on micrographs similar to the one presented in Figure 2d. Four of these images, obtained after processing, are presented as examples. The results of the effective conductivity computations were compared to known analytical estimates of the transverse conductivity proposed in the literature. First, the lowest conceivable bound is given by the harmonic average, corresponding to the 1D law of conduction in series:
λ_⊥⁻¹ = φ_f λ_f⁻¹ + φ_m λ_m⁻¹    (2)
while the highest bound is given by the arithmetic average corresponding to the 1D law of conduction in parallel. A first more elaborate model has been proposed by Maxwell Garnett [START_REF] Garnett | Colors in metal glasses and in metallic films[END_REF], in which the thermal conductivity of a regular array of cylinders in a matrix is given as a function of fiber fraction by the following relationship:
(λ_⊥ - λ_m) / (λ_⊥ + (d - 1) λ_m) = φ_f (λ_f - λ_m) / (λ_f + (d - 1) λ_m)    (3)
where d is the dimension of the problem. This equation, with d = 2 as is the case here, is equivalent to the relation of Hasselman & Johnson [25]. More elaborate relations were proposed by Perrins et al. [26] for a square array of cylindrical fibers:
λ_⊥ / λ_m = 1 + 2 φ_f / ( C - φ_f - 0.3058 φ_f⁴ / ( C - 1.4030 φ_f⁸ ) )    (4)
and for a hexagonal array:
λ_⊥ / λ_m = 1 + 2 φ_f / ( C - φ_f - 0.0754 C φ_f⁶ / ( C² - 1.0603 φ_f¹² ) )    (5)

with C = 1/F and F = (λ_f - λ_m) / (λ_f + λ_m).
The comparison of the transverse conductivity vs. fiber ratio curves calculated with all these laws and the curve obtained by fitting the numerical results is reported in Figure 4. It can be seen that the relationships of Maxwell Garnett and of Perrins et al. always underestimate slightly (by 5% or less) the computed values; at high fiber volume fractions, the law of Perrins et al. for square arrays of cylinders matches the numerical data well. For the sake of convenience, the transverse conductivity λ_⊥ has been fitted by the following polynomial law:

λ_⊥ = -2.021932 φ_f³ + 4.627975 φ_f² - 2.202157 φ_f + 0.7120661    (6)
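The sketch below compares the analytical estimates recalled above with the polynomial fit of Eq. (6). The fiber and matrix conductivities used here are assumed, illustrative values (the fit of Eq. (6) corresponds to the constituent properties of this study, so the comparison is only indicative).

```python
# Sketch comparing transverse-conductivity estimates with the polynomial fit of Eq. (6).
# lam_f and lam_m are assumed illustrative values, not the ones identified in this work.
import numpy as np

lam_f, lam_m = 1.4, 0.4          # W/(m K), assumed silica-fiber / phenolic-matrix values
phi = np.linspace(0.0, 0.7, 8)   # fiber volume fraction

series   = 1.0 / (phi / lam_f + (1.0 - phi) / lam_m)    # harmonic (series) bound, Eq. (2)
parallel = phi * lam_f + (1.0 - phi) * lam_m            # arithmetic (parallel) bound
F = (lam_f - lam_m) / (lam_f + lam_m)
maxwell  = lam_m * (1.0 + phi * F) / (1.0 - phi * F)    # Maxwell Garnett, d = 2, Eq. (3)
fit      = -2.021932 * phi**3 + 4.627975 * phi**2 - 2.202157 * phi + 0.7120661  # Eq. (6)

for p, s, a, g, f6 in zip(phi, series, parallel, maxwell, fit):
    print(f"phi_f={p:.2f}  series={s:.3f}  parallel={a:.3f}  MG={g:.3f}  fit={f6:.3f}")
```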
Second change of scale based on 3D µCT
The second change of scale is performed on an image of a volume of the material large enough to contain an REV (Representative Elementary Volume). The image is divided into several sub-volumes (see Figure 5). For each sub-domain, the fibers and matrix are first thresholded to 50% black / 50% white, and anisotropy directions are then detected using a random walk algorithm where the walkers are only allowed to travel in one phase. This method, simulating a diffusion process in the image, is sensitive to its local structure. The analysis of eigenvectors and eigenvalues of the covariance tensor of the relative walkers' displacements gives the fiber direction in the sub-volume: the direction associated to the highest eigenvalue is the average fiber direction. This procedure has been successfully validated on µCT images of C/C composites [START_REF] Vignoles | Assessment of geometrical and transport properties of a fibrous C/C composite preform as digitized by X-ray computerized microtomography: Part II. Heat and gas transport properties[END_REF][START_REF] Coindreau | Assessment of structural and transport properties in fibrous C/C composite preforms as digitized by X-ray CMT. Part I : Image acquisition and geometrical properties[END_REF].
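The following sketch illustrates the anisotropy-detection step just described: walkers constrained to one phase of a binary sub-volume diffuse fastest along the fiber axis, so the eigenvector of the covariance of their displacements associated with the largest eigenvalue gives the average fiber direction. The synthetic volume and walker counts are assumptions for illustration only.

```python
# Sketch of fiber-direction detection by phase-constrained random walks in a binary
# sub-volume; the synthetic geometry below is an assumption for illustration.
import numpy as np

def fiber_direction(phase: np.ndarray, n_walkers=200, n_steps=400, seed=0):
    rng = np.random.default_rng(seed)
    moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
    starts = np.argwhere(phase)
    pos = starts[rng.integers(0, len(starts), n_walkers)].astype(int)
    disp = np.zeros((n_walkers, 3))
    for _ in range(n_steps):
        step = moves[rng.integers(0, 6, n_walkers)]
        trial = pos + step
        inside = np.all((trial >= 0) & (trial < phase.shape), axis=1)
        ok = inside.copy()
        ok[inside] = phase[trial[inside, 0], trial[inside, 1], trial[inside, 2]]
        pos[ok] = trial[ok]                    # accepted moves stay within the phase
        disp[ok] += step[ok]
    cov = np.cov((disp - disp.mean(axis=0)).T)  # covariance of centered displacements
    eigval, eigvec = np.linalg.eigh(cov)
    return eigvec[:, np.argmax(eigval)]         # direction of fastest diffusion

# Synthetic test: straight "fibers" aligned with the z axis.
vol = np.zeros((32, 32, 32), dtype=bool)
vol[8:12, 8:12, :] = True
vol[20:24, 20:24, :] = True
print("estimated fiber direction:", np.round(fiber_direction(vol), 2))
```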
In addition to this, a correspondence between the average gray scale value of a sub-domain and its local fiber, pore and matrix volume fractions is provided through the choice of correlation functions. As sketched in Figure 6, the gray scale histograms of the images can be modeled using two Gaussian functions: the first one corresponds to the inter-yarn material (i.e. principally matrix), the second one to the yarns themselves. The mode N2 of the first Gaussian corresponds to the matrix density, and the mode N3 of the second one corresponds to the average density of the yarns, i.e. of a material with a fiber content equal to the intra-yarn fiber volume fraction φ_f,yarn and a matrix content φ_m,yarn = 1 - φ_f,yarn, since there are no intra-yarn pores. Between N2 and N3, a linear interpolation is made, and it is prolonged until the fiber volume fraction reaches 1. The pores are obtained by direct thresholding of the image, with a threshold value N1 chosen so that the experimental porosity is recovered.

Using this information and the correlations obtained from the first change of scale, the longitudinal and transverse thermal conductivities are computed. For each sub-volume, a diagonal thermal conductivity tensor is allocated in its own principal directions and then rotated back into the image's global frame [19]. The effective macroscopic conductivity tensor is then obtained from the data of all sub-domains by solving the closure problem associated with the Volume Averaging Method, using the same solver as for the first change of scale [20]. The macroscopic conductivity tensors λ*, obtained with this double change of scale on the 3D blocks shown in Figure 1c,d, are given in Eq. (7) for the regular composite and in Eq. (8) for the chopped composite.
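A possible implementation of this gray-level / volume-fraction correspondence is sketched below: two Gaussians are fitted to a histogram, their modes play the roles of N2 and N3, and a linear map gives the local fiber fraction. The synthetic histogram, the intra-yarn fiber fraction and the pore threshold N1 are assumptions for illustration, not the values of this study.

```python
# Sketch of the density / gray-scale correspondence: two-Gaussian histogram fit, then a
# linear map from gray level to fiber volume fraction between N2 and N3.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

# Synthetic 8-bit gray levels: matrix population around 90, yarn population around 160.
rng = np.random.default_rng(1)
gray = np.concatenate([rng.normal(90, 10, 40000), rng.normal(160, 12, 60000)])
hist, edges = np.histogram(gray, bins=256, range=(0, 255))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [hist.max(), 90, 10, hist.max(), 160, 12]
popt, _ = curve_fit(two_gaussians, centers, hist, p0=p0)
N2, N3 = sorted([popt[1], popt[4]])   # matrix mode, yarn mode
phi_f_yarn = 0.55                     # assumed intra-yarn fiber fraction
N1 = 40.0                             # assumed pore threshold (porosity-matched)

def local_fiber_fraction(g):
    """Linear interpolation: 0 at N2, phi_f_yarn at N3, prolonged up to 1; pore below N1."""
    phi = phi_f_yarn * (g - N2) / (N3 - N2)
    return float(np.clip(phi, 0.0, 1.0)) if g > N1 else 0.0

for g in (N1 - 5, N2, 0.5 * (N2 + N3), N3, N3 + 40):
    print(f"gray {g:6.1f} -> phi_f = {local_fiber_fraction(g):.2f}")
```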
Discussion
For both the regular and the chopped composite, the non-diagonal terms of the conductivity tensors reported in Eqs. (7) and (8) are negligible: the directions x, y and z of the tomographic image match those of thermal conduction. Comparing with the flash thermography results for the regular composite, the longitudinal conductivity (directions x and y, with similar values) is, as in the experiments, higher than the conductivity in the z direction, and the agreement in both directions is excellent. The same tendency is obtained for the chopped composite. For this material, the computed λ*_xx and λ*_yy differ by 5%, indicating that the area selected for computation was somewhat too small with respect to the REV size. However, the computed transverse conductivity is, as in the experiments, lower than the longitudinal ones, and the average computed longitudinal value matches the measured value within the error bounds.
The thermal anisotropy can be conveniently quantified by the following parameter:
τ = (λ*_// - λ*_⊥) / λ*_//    (9)
Experimentally, we obtained anisotropies of 19.6%±7% and 13.7%±1% respectively for the regular and chopped composites, while the computations give values of 12% and 10% respectively, that is, somewhat too low, especially for the regular composite. Despite these remarks, the image-based computational procedure has shown its ability to successfully reproduce the effect of the reinforcement arrangement on heat conduction.
The numerical procedure has been repeated with small variations of the fiber and matrix conductivities and of the grayscale levels used in the thresholding procedure, in order to obtain the respective normalized sensitivities S_xi (x_i = λ_f, λ_m or N_i, i = 1, 2 or 3), as reported in Table 3. The sensitivities with respect to the matrix and fiber conductivities lying around 0.5 (matrix) and 0.7 (fiber), we conclude that a small error in the knowledge of these constituent properties has no catastrophic effect on the computational predictions (a 1% variation in λ_m gives a 0.5% variation in λ*). In the same manner, the threshold N1, though important for correct density determinations, has a very low impact on the effective conductivity. On the other hand, the method is more sensitive to the N2 and N3 levels chosen for the grayscale/density conversion function: a variation of one gray level of N2 or N3 implies a variation of about 1% in λ*. Since these parameters are obtained from the knowledge of the fiber and matrix volume fractions, the experimental determination of the latter quantities is the most critical point of the procedure.
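The normalized-sensitivity estimate used here can be reproduced with the simple finite-difference sketch below. The forward model is a deliberately simple stand-in (Maxwell Garnett with assumed properties), not the full image-based double change of scale.

```python
# Sketch of normalized sensitivities S_x = (x / lambda*) d(lambda*)/dx by central
# differences around nominal values; the model and values are illustrative stand-ins.
import numpy as np

def model_lambda(p):
    F = (p["lam_f"] - p["lam_m"]) / (p["lam_f"] + p["lam_m"])
    return p["lam_m"] * (1 + p["phi_f"] * F) / (1 - p["phi_f"] * F)

nominal = {"lam_f": 1.4, "lam_m": 0.4, "phi_f": 0.5}   # assumed illustrative values

def normalized_sensitivity(name, rel=0.01):
    hi, lo = dict(nominal), dict(nominal)
    hi[name] *= 1 + rel
    lo[name] *= 1 - rel
    dlam = model_lambda(hi) - model_lambda(lo)
    dx = nominal[name] * 2 * rel
    return (nominal[name] / model_lambda(nominal)) * dlam / dx

for p in nominal:
    print(f"S_{p} = {normalized_sensitivity(p):+.2f}")
```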
Conclusion
This paper has presented a study of a silica/phenolic composite family based on two main approaches: (i) the measurement of thermal and structural properties through various experiments, and (ii) a double-step computational change of scale. The latter approach is based on 2D and 3D images of the microstructure of the composites. Using the properties of the constituents and their relative spatial organization, the approach yields effective conductivities of the composite.
The numerical results are in good agreement with experiments and can be easily applied to fiber-reinforced composites in general, using microscopic images and tomographic scans. Improvements are now necessary to optimize the change of scale routine calculation in terms of computational time.
The two composites with different spatial organizations have been found to have very close thermal conductivities in different directions. The transverse conductivities of the chopped composite display more dispersion on the plate, as expected from its structure. This may have a positive impact on ablation resistance; indeed, exposed to the same heat flux, the heat penetration depth will be more dispersed, resulting in a wavier pyrolysis front. The heat consumption by pyrolysis will therefore increase by the ratio of the actual surface to the projected surface, and will bring a better protection.
Nomenclature
N1: Gray scale threshold value used for pore identification
N2, N3: Gray scale values used for the density/gray scale correspondence
S_xi: Normalized sensitivity of λ* to the variation of x_i

Greek letters
φ_f: Volume fraction of fibers
φ_f,yarn: Volume fraction of fibers in a yarn
φ_m: Volume fraction of the matrix
φ_m,yarn: Volume fraction of the matrix in a yarn
λ: Thermal conductivity (W.m⁻¹.K⁻¹)
λ (tensor): Local thermal conductivity tensor of a sub-block (W.m⁻¹.K⁻¹)
λ*: Macroscopic thermal conductivity tensor of a composite (W.m⁻¹.K⁻¹)
λ_f: Thermal conductivity of the fibers (W.m⁻¹.K⁻¹)
λ_m: Thermal conductivity of the matrix (W.m⁻¹.K⁻¹)
Figure 1 - Photographs of the studied materials: a,c) regular composite; b,d) chopped composite.

Figure 2 - Evidence of the scale heterogeneity: a) macroscopic, b,c) mesoscopic and d) microscopic views.

Figure 3 - Heat diffusivity maps from the flash experiments on a) the plain woven composite and b) the chopped composite; c) thermal diffusivity histograms.

Figure 4 - Transverse conductivity vs. fiber ratio calculated by the change of scale procedure and compared to the analytical laws.

Figure 5 - Illustration of the computations for each subdomain prior to the second change of scale.

Figure 6 - Above: histogram of the original image, with two fitted Gaussians, one corresponding to the inter-yarn material and one to the yarns.
Table 3 - Normalized sensitivities of the computed macroscopic thermal conductivities to different parameters

Normalized sensitivity to: | Regular, longitudinal | Regular, transverse | Chopped, longitudinal | Chopped, transverse
λ_f | 0.70 | 0.64 | 0.68 | 0.61
λ_m | 0.50 | 0.55 | 0.52 | 0.56
N1 | 0.12% | 0.13% | 0.11% | 0.13%
N2 | 0.9% | 1.0% | 0.9% | 1.0%
N3 | 0.9% | 1.0% | 0.9% | 1.0%
Acknowledgements
GIS "Advanced Materials in Aquitaine" is acknowledged for a 1-year post-doc grant to G. B. The authors would like to thank R. Huillery (Fahrenheit) for the hot disc experiments; R. C. Oliveira da Rosa & V.-E. Renaud (Arts et Métiers ParisTech -Bordeaux) for density measurements and help for tomographic acquisition; G. Caillard & V. Rolland de Chambaudoin d'Erceville (Arts et Métiers ParisTech -Bordeaux) for taking micrographs and running the change of scale software; A. Chirazi (Tomomat -Bordeaux) for help in acquiring the tomographs.
Table 1 (excerpt) - Properties of the components
Quantity | Fibers | Matrix
Density (g.cm⁻³) | 2.21 ± 0.01 | 1.
01761320 | en | [
"spi.meca.ther",
"chim.mate"
] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01761320/file/Battaglia_Pyrocarbon_Composite_revised.pdf | Indrayush De
Jean-Luc Battaglia
Gérard L Vignoles
Thermal properties measurements of a silica/pyrocarbon composite at the microscale
Laminar pyrocarbons are used as interphases or matrices of carbon/carbon and ceramic-matrix composites in several high-temperature aerospace applications. Depending on their organization at the microscale, they can have a variety of mechanical and thermal properties. Hence, it is important to know, before thermal processing, the properties of these matrices at the micrometer scale in order to improve and control the composite behavior in a macroscopic scale. We use Scanning Thermal Microscopy (SThM) on a silica fiber / regenerative laminar pyrocarbon matrix composite to provide an insight into the effective thermal conductivity of pyrocarbon as well as the thermal contact resistance at the interface between fiber and matrix. The conductivity of pyrocarbon is discussed as a function of its nanostructural organization.
I Introduction
Carbon/carbon (C/C) composite materials are choice materials for use in extreme environments, such as space propulsion rocket nozzles, atmospheric re-entry thermal protection systems, aircraft brake discs, and Tokamak plasma-facing components [START_REF] Savage | Carbon-Carbon composites[END_REF] . In addition to carbon fibers, they contain interphases and matrices made of pyrolytic carbon, or pyrocarbon (PyC) [START_REF] Manocha | Carbon reinforcements and C/C composites[END_REF] . This special type of carbon can be though of as a heavily faulted graphite. It is prepared via a gas-phase route, called Chemical Vapor Deposition (CVD) or Infiltration (CVI). It is therefore quite unavailable in bulk form. It has, depending on its processing parameters, a very versatile nanostructure [START_REF] Oberlin | Pyrocarbons[END_REF][4][5] and consequently, broadly varying mechanical and thermal properties, usually anisotropic to a more or less large extent [START_REF] Vignoles | Carbones Pyrolytiques ou Pyrocarbonesdes Matériaux Multi-Echelles et Multi-Performances[END_REF] . Posterior heat treatments may further alter their structure and properties [START_REF] Oberlin | Carbonization and graphitization[END_REF] . Hence, it is important to know the properties of these matrices at the micrometer scale in order to improve and control the composite behavior in a macroscopic scale. In this frame, a large variety of PyC samples have been prepared 8 . That represented in Fig. 1 consists of an asdeposited regenerative laminar (ReL) PyC 9 deposit made on 5-µm radius glass fibers. The general orientation of the anisotropic texture is concentric around the fibers, as exhibited in Fig. 2, and results in orthotropic thermal properties of the matrix in the cylindrical coordinate frame following the fiber axis. This is due to the fact that the graphitic sheets exhibit strong thermal anisotropy. The thermal behavior of these non-homogeneous composites can be captured through characterization that will provide the thermal properties of the PyC. Previous thermoreflectance (TR) experiments [START_REF] Jumel | AIP Conf. Procs[END_REF][START_REF] Jumel | [END_REF][START_REF] Jumel | AIP Conf. Procs[END_REF] have been performed to assess the anisotropic thermal diffusivity of the Smooth Laminar (SL) PyC and of the Rough Laminar (RL) PyC, either pristine or after different heat treatments. It was obtained that the in-plane thermal diffusivity (in orthoradial direction) for the as-prepared SL PyC matrix was 0.14 cm².s -1 while the ratio of the in-plane and out-of-plane thermal diffusivities was 7; the as-prepared RL exhibits higher figures (0.42 cm 2 .s -1 and 20, respectively), denoting a more graphitic and anisotropic structure. ReL PyC, which is a highly anisotropic form of PyC, differs from RL by a larger amount of defects [START_REF] Farbos | [END_REF] and had not been investigated so far. The TR method has in the current case some possible drawbacks: first, its spatial resolution is of the same size as the deposit thickness, a fact that could result in inaccuracies; second, this method requires a rather strong temperature increase on the heating area in order to increase the signal-tonoise ratio, therefore yielding an effective diffusivity characteristic at temperature markedly higher than the ambient and nonlinear effects. On the other hand, the thermal boundary resistance (TBR) at the interface between the fiber and the matrix has not been investigated so far. 
Since the thermal conductivity of both the silica fiber and the PyC along the radial axis is low, the TBR was not expected to be a key parameter for heat transfer. However, its quantitative identification from measurements at the microscale could bring complementary information regarding the chemical bonding and/or structural arrangement at the interface 14 . In order (i) to overcome the drawbacks of the TR method, (ii) to provide a thermal conductivity value for ReL PyC, and (iii) to measure as well the thermal boundary resistance at the interface between the PyC and the glass fiber, we have implemented the scanning thermal microscopy (SThM) experiment in the 3ω mode 15 . The advantages of using SThM are that (i) the spatial resolution achieved is in the sub-micron range and (ii) large temperature differences are not involved, avoiding any risk of nonlinearity. In addition, SThM leads to absolute temperature measurements of the probe as well as to phase measurements when working in the 3ω mode. Therefore, advanced inverse techniques can be implemented that benefit from the frequency and spatial variations of both quantities in order to investigate the thermal properties of the PyC and the TBR at the interface between the fiber and the matrix. In the present study the fiber is made of a single glass structure whose properties are available in the literature 16 (k = 1.4 W.m⁻¹.K⁻¹, ρ = 2200 kg.m⁻³, C_p = 787 J.kg⁻¹.K⁻¹). The density and specific heat of ReL PyC have also been measured [START_REF] Farbos | [END_REF] as ρ = 2110 kg.m⁻³ and C_p = 748 J.K⁻¹.kg⁻¹, respectively.
II Scanning Thermal Microscopy in 3ω mode - experiment
Scanning thermal microscopy is a well-established and almost ideal tool for investigating nanostructures like semiconductors and nano-electronic devices, due to its intrinsic sensitivity to local material properties and to the thermal wave's ability to propagate through the material with sub-micrometer lateral resolution. Since its inception in 1986, based on the principle of the Scanning Thermal Profile (STP) [START_REF] Williams | [END_REF], SThM has seen many improvements and developments [18][19][20][21], including the Wollaston probe and thermocouple probes. The SThM employed here (Nanosurf Easyscan AFM) uses a 400 nm-thick silicon nitride (Si₃N₄) AFM probe provided by Kelvin Nanotechnology, on which is deposited a palladium strip (Pd, 1.2 µm thick and 10 µm long) that plays both the roles of the heater and of the
thermometer. The SThM probe has a tip curvature radius of r_s = 100 nm. The contact force between the probe and the sample was chosen between 5 and 10 nN during our experiments and was accurately controlled during the probe motion using a closed feedback loop on a piezo element, which ensures the displacement in the z direction with precise steps of 1 nm. The contact area between the probe and the surface is assumed to be a disk of radius r₀. A periodic current I = I₀ cos(ωt), with angular frequency ω = 2πf, passes through the strip, of electrical resistance R₀ at room temperature, generating Joule heating and thus acting as a heat source dissipating the power
P(2ω) = P₀ (1 + cos(2ωt)) / 2.

The resulting temperature increase ΔT in the strip is composed of a continuous (DC) component and of a periodic one at 2ω, ΔT = ΔT_DC + ΔT_2ω cos(2ωt + φ). This leads to changes of the strip electrical resistance, R = R₀(1 + α_R ΔT), where α_R is the thermal coefficient of resistance. The voltage drop between the two pads of the probe therefore contains harmonics at ω, 2ω and 3ω. The third harmonic U_3ω is related to the transient contribution of the temperature change to the resistance as

ΔT_2ω = 2 U_3ω / (R₀ I₀ α_R).
In our configuration, R₀ = 155 Ω and I₀ = 750 µA. The quantity ΔT_2ω must be viewed as an average temperature of the Pd wire, since the change is expected to occur very close to the probe tip when it enters into contact with the investigated material surface. The harmonic contribution is measured using a differential stage coupled with a lock-in amplifier. The thermal coefficient of the Pd strip was calibrated by measuring the change in sensor resistance as a function of temperature; a value of α_R = (1.3 ± 0.2) × 10⁻³ K⁻¹ was obtained.
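A worked example of the 3ω relation above is given below: the 2ω temperature oscillation of the Pd strip is recovered from the third-harmonic voltage. R₀, I₀ and α_R are the values quoted in the text; the U_3ω reading is an assumed illustrative value, not a measurement from this work.

```python
# Worked example of Delta T_2w = 2 U_3w / (R0 I0 alpha_R); U_3w is an assumed reading.
R0 = 155.0          # ohm (value quoted in the text)
I0 = 750e-6         # A
alpha_R = 1.3e-3    # 1/K
U_3w = 30e-6        # V, assumed lock-in reading for illustration

dT_2w = 2.0 * U_3w / (R0 * I0 * alpha_R)
print(f"Delta T_2w = {dT_2w:.2f} K")
```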
The contact between the probe and the surface involves a thermal boundary resistance that plays a very significant role on the measured temperature. This resistance involves at least three main contributions 22,23 : (i) the solid-solid contact resistance, (ii) the heat diffusion through the gas surrounding the probe and the water meniscus that inevitably forms at the probe tip, and (iii) the change in the temperature gradient within the Pd strip between the out-of-contact mode (used to evaluate the probe thermal impedance) and the contact mode. In the present study, although working under argon flow (after preliminary air removal under primary vacuum), diffusion through the gas and the water meniscus may still be present, even if its contribution is bounded. Moreover, for silicon nitride probes, an increase in the contact area can also be explained by the flattening of the tip apex when in contact with the sample. As shown in a previous study 18 , the thermal contact resistance R_c at the interface between the probe and the surface integrates all the physical phenomena listed above. It was also observed 18 that, in this experimental condition, the contact resistance as well as the radius r₀ of the heated area did not vary significantly when the thermal conductivity of the sample varied from 0.2 to 40 W.m⁻¹.K⁻¹. This observation was made with a motionless probe and samples whose roughness was less than 10 nm. Finally, it was observed 18 that the sensitivity to thermal conductivity variations starts to vanish above 25 W.m⁻¹.K⁻¹; this is obviously related to the very small contact area, since the smaller this area, the lower the sensitivity to a thermal conductivity change for highly conductive samples. It was also observed that the measured phase did not vary significantly, with respect to the measurement error, whatever the thermal conductivity of the material. The heat transfer in the probe and in the investigated material, in both the out-of-contact and contact modes, is described in Fig. 3 using the thermal impedance formalism [START_REF] Maillet | Thermal quadrupoles: solving the heat equation through integral transforms[END_REF]. This contact-mode configuration assumes that the probe is located on either the fiber or the matrix and is only sensitive to the thermal properties of the contacting material; in other words, the probe is put in static contact on both materials far enough from the interface.
III Scanning Thermal Microscopy in 3ω mode - heat transfer model

Denoting ω₂ = 2ω, the average temperature T_p(ω₂) of the Pd strip in the contact mode is related to the total heat flux P(ω₂) dissipated in the strip through the network of thermal impedances sketched in Fig. 3. Expressions for Z_p and Z_m are required to calculate T_p. The probe impedance is obtained from the out-of-contact measurement, Z_p(ω₂) = T_p^OFC(ω₂)/P(ω₂), whose amplitude and phase are those measured in the out-of-contact mode. Finally, the heat transfer model within the investigated material leads to express the thermal impedance Z_m in an analytical way as 25 :
Z_m(ω₂) = 2 / (π k_z,m r₀) ∫₀^∞ [J₁(x)]² / ( x √( x² a_r,m/a_z,m + j ω₂ r₀²/a_z,m ) ) dx,   with j² = -1.

In this relation, k_z,m is the thermal conductivity along the longitudinal axis (perpendicular to the investigated surface), and a_r,m and a_z,m are the thermal diffusivities of the material (either the fiber or the matrix, i.e., m = SiO₂ or PyC) along the radial and longitudinal axes, respectively. J₁ is the Bessel function of the first kind of order 1. Finally, the theoretical amplitude and phase are A(ω₂) = |T_p(ω₂)| and φ(ω₂) = arg[T_p(ω₂)], respectively. The reduced sensitivities θ dA(ω₂)/dθ of the amplitude with respect to the parameters θ = {k_r,PyC, k_z,PyC, r₀, R_c} have been calculated and are reported in Fig. 4. The sensitivities to k_r,PyC and k_z,PyC being exactly the same, these two conductivities cannot be identified separately; the measurements achieved when the probe is in static contact with the PyC therefore only lead to its effective thermal conductivity k_PyC,eff. In addition, the ratio of the sensitivity functions to r₀ and R_c is constant, so that these two parameters cannot be identified separately from frequency-dependent measurements either.
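The impedance expression above can be evaluated numerically as sketched below, integrating the real and imaginary parts of the integrand on a fine grid. The material properties and the contact radius used here are assumed, illustrative values, not the ones identified in this work.

```python
# Sketch of a numerical evaluation of Z_m(omega_2); properties and r0 are assumed values.
import numpy as np
from scipy.special import j1

def Z_m(omega2, k_z, a_r, a_z, r0):
    x = np.linspace(1e-6, 1000.0, 400_000)          # dimensionless integration variable
    f = j1(x) ** 2 / (x * np.sqrt(x ** 2 * a_r / a_z + 1j * omega2 * r0 ** 2 / a_z))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule
    return 2.0 / (np.pi * k_z * r0) * integral

omega2 = 2.0 * np.pi * 2.0 * 1125.0   # rad/s, twice the 1125 Hz excitation
k_z, a_r, a_z = 1.4, 8e-7, 8e-7       # assumed isotropic, glass-like properties
r0 = 100e-9                            # m, contact radius

Z = Z_m(omega2, k_z, a_r, a_z, r0)
print(f"|Z_m| = {abs(Z):.3e} K/W, phase = {np.degrees(np.angle(Z)):.1f} deg")
```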
In order to simulate the probe temperature when the probe sweeps the surface of the composite at a given frequency, we used the analytical model derived by Lepoutre et al. [START_REF] Lepoutre | [END_REF], assuming semi-infinite domains on both sides of the interface. This assumption is realistic since the probe is only sensitive to the bulk thermal conductivity of the material at distances that do not exceed 5 to 6 times the contact radius r₀. In addition, in order to validate this assumption, we also performed calculations based on a finite element model that is not presented here; they confirm the reliability of the solutions obtained with the analytical model, which requires less computation time and can thus be implemented in an inverse procedure to estimate the sought parameters. We assume here that the motion of the probe is along the radial direction r, when the probe passes from the fiber to the matrix through the interface. The reduced sensitivity functions S_A(θ, r) = θ dA(r)/dθ of the amplitude to the parameters θ = {k_r,PyC, k_z,PyC, r₀, R_c, TBR} at the frequency 1125 Hz, when r varies from -0.5 to 0.5 µm assuming the interface is at r = 0, are represented in Figure 5a.
The sensitivity functions with respect to r 0 and R c , are, as for the frequency behavior, linearly dependent. However, it appears, as revealed on Fig. 5b, that the parameters k r,Pyc , r 0 and TBR can be identified since the ratios of the associated sensitivity functions are not constant along r.
Whatever the experimental configuration for the probe, static or dynamic, the sought properties are identified by minimizing the quadratic gap:
J = Σ [ A_p(r, ω₂) - A_meas(r, ω₂) ]²
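The identification step can be reproduced with the least-squares sketch below. The forward model here is a deliberately simple stand-in (a smooth amplitude step across the interface), not the Lepoutre et al. model used in this work, and the data are synthetic.

```python
# Sketch of the identification by minimization of the quadratic gap J; the surrogate
# amplitude model and the synthetic data are assumptions for illustration only.
import numpy as np
from scipy.optimize import least_squares

def amplitude_model(r_um, A_fiber, A_pyc, width_um):
    """Assumed surrogate: smooth transition of the probe amplitude across the interface."""
    return A_fiber + (A_pyc - A_fiber) * 0.5 * (1.0 + np.tanh(r_um / width_um))

r = np.linspace(-0.5, 0.5, 41)          # µm, probe position (interface at r = 0)
rng = np.random.default_rng(2)
A_meas = amplitude_model(r, 1.00, 0.82, 0.12) + rng.normal(0, 0.005, r.size)

def residuals(p):
    return amplitude_model(r, *p) - A_meas

fit = least_squares(residuals, x0=[0.9, 0.9, 0.2])
print("identified (A_fiber, A_PyC, width):", np.round(fit.x, 3))
print("J =", float(np.sum(fit.fun ** 2)))
```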
IV Experimental results
First, we estimated the contact radius r₀ at the probe/surface interface using a calibrated "step" sample [START_REF] Puyoo | [END_REF] that consists of a 100 nm-thick SiO₂ step deposited on a Si substrate. We found r₀ = 100 ± 10 nm with a constant force of 10 nN applied on the cantilever. This value of the contact radius is assumed when the probe is in contact either with the glass fiber or with the PyC, for the same force (10 nN) applied on the cantilever. We then performed frequency-dependent measurements with the probe out of contact, in static contact at the center of the glass fiber, and on the PyC matrix far away from the fiber. Figure 6 shows the measured frequency-dependent amplitude (Fig. 6a) and phase (Fig. 6b) for those three configurations. As said previously, the difference in phase between the configurations is very small, meaning that only the amplitude can be used. The out-of-contact measurements lead to the probe thermal impedance Z_p(ω₂). Then, since the silica thermal properties are known, we identified the thermal contact resistance at the interface between the SThM probe and the material from the amplitude measured when the probe is in static contact at the center of the fiber. Using the minimization technique and the model for the probe in static contact with the material, we found that
R_c = (7.83 ± 0.3) × 10⁻⁸ K.m².W⁻¹.
The fit between the experimental data and the theoretical ones is very satisfying, as presented in Fig. 6. Finally, using this value of R_c, we identified the effective thermal conductivity of the PyC from the amplitude measured when the probe is in static contact with the PyC. We found k_PyC,eff = 20.8 ± 4.2 W.m⁻¹.K⁻¹, which leads again to a very satisfactory fit between the measurements and the simulation, as shown in Fig. 6. It must be emphasized that the standard deviation on the identified thermal conductivity is high (20% uncertainty) since a small variation in r₀ leads to a very large change in k_PyC,eff. We also noticed that the minimum of the minimized quadratic gap J is obtained for r₀ = 110 nm; this value is thus in the range expected from the calibrated sample.
We have however to mention that the value we found for k_PyC,eff is at the detection limit (25 W.m⁻¹.K⁻¹) of the instrument, as said in Section II. We then performed an SThM sweep of the specimen at 1125 Hz with a current of 750 µA under atmospheric conditions; the topography, amplitude and phase images were recorded during the sweep. The experiments were performed first over an image edge size of 50 micrometers with 256 measurement points per line at a speed of 0.25 lines per second (Fig. 7a). Then, a sweep over the domain marked in Fig. 7a was performed and is reported in Fig. 7b. The amplitude along the line crossing the fiber/matrix interface (see Fig. 7b), when the probe moves from the fiber to the PyC, is represented in Fig. 8. Using the minimization technique described previously, we found TBR = (5 ± 1) × 10⁻⁸ K.m².W⁻¹, k_PyC,eff = 20.18 ± 0.12 W.m⁻¹.K⁻¹ and r₀ = 105 ± 7 nm. The calculated temperature along this line using the identified parameters is reported in Fig. 8 together with the measurements. Therefore we retrieve well the values of r₀ and k_PyC,eff that had been determined using the step sample and the frequency-domain identification procedure.
V Conclusion
The SThM method has been applied to a composite made of silica fibers embedded in a regenerative laminar pyrocarbon (RL PyC) matrix. It has allowed obtaining values of the effective conductivity of this type of pyrocarbon, therefore completing the existing database obtained by TR on other types of PyC. The method has proved efficient in yielding effective values of the thermal conductivity. Unfortunately, it cannot give the details of the conductivity tensor elements; the uncertainty margin is also rather large. On the other hand, it allows identifying the thermal boundary resistance between the carbon matrix and silica fibers. A main advance in the field of scanning thermal microscopy is that we implemented an inverse technique in order to identify simultaneously (i) the radius of the contact area between the probe and the sample, (ii) the thermal conductivity of the sample and (iii) the effective thermal conductivity of the PyC. Therefore, whereas it was shown that the frequency dependent temperature at a point located on the surface could not lead to this simultaneously identification, it has been demonstrated in this paper that such an identification can be achieved considering from the spatial temperature variation at a given frequency. However, it is obviously required working with a heterogeneous surface where at least some of the materials are known in terms of their thermal conductivity (in the present case, the SiO 2 fiber). As measured by TR experiments [START_REF] Jumel | AIP Conf. Procs[END_REF][START_REF] Jumel | [END_REF][START_REF] Jumel | AIP Conf. Procs[END_REF] , thermal conductivity of RL and SL PyC are respectively 66.7 and 20.4 W.K -1 .m -1 (using the same heat capacity, and densities of 2120 kg.m -3 for RL [START_REF] Jumel | AIP Conf. Procs[END_REF] and 1930 kg.m -3 for SL [START_REF] Jumel | [END_REF] ). Actually both RL and ReL have the same degree of textural anisotropy, as measured e.g. by polarized light optical microscopy or by selected area electron diffraction in a transmission electron microscope, and only differ by the amount of in-plane defects, as measured by X-ray diffraction, neutron diffraction and by Raman spectroscopy 29 , and confirmed by HRTEM image-based atomistic modeling 30 . On the other hand, SL has a lesser anisotropy but a comparable, though lesser, amount of defects as compared to ReL. We conclude here that the room temperature conductivity is more sensitive to structural perfection than to textural arrangement. Indeed, either phonons or electrons, which are responsible for heat transfer in carbons, are scattered by the defects present in the planes.
The value of TBR is unexpectedly rather low. A possible reason for this low value is that the PyC finds itself in a state of compression around the fiber: as a matter of fact, no decohesion has been found between the fibers and the matrices. Another effect is the fact that, on the carbon side, the conductivity is much larger parallel to the interface instead of perpendicularly, therefore providing easy "escape routes" to heat around defects present at the interface. Therefore, the hypothesis of 1D transfer across the interface could be questioned. Finally, we have also to mention that the surface of the sample is not fully flat at the interface between the fiber and the PyC. This comes from the different mechanical properties of both materials and their impact on the roughness at the end of the surface polishing. On the other hand, the fiber being an insulator already, the sensitivity of the measured temperature vs. the TBR remains low. Additional measurements of the TBR between a carbon fiber and the PyC matrix are in course.
Further investigations are desirable in at least two directions. First, the SThM method should be improved in order to reduce its large degree of uncertainty and to obtain direction-dependent data. Second, measurements should be carried out on other pyrocarbons and fibers, in order to confirm the tendencies obtained here; measurements at higher temperatures are possible and would be highly interesting, since virtually no actual experimental data is available on these materials at elevated temperatures.
"PyroMaN" project, funded by ANR. The authors thank the late Patrick Weisbecker (1973-2015) for the TEM images.
Figure 1. Microscopic image of the composite structure obtained using a Scanning Electron Microscope (SEM). Silica fibers (5 µm in radius), in grey, surrounded by the PyC matrix in dark grey, are perpendicular to the surface, which was prepared by mechanical surfacing.

Figure 2. a) Image showing the silica fiber and the cylindrical arrangement of the graphitic sheets; b) SEM image at higher resolution confirming the concentric arrangement of the anisotropic texture; c) dark-field TEM image showing the anisotropic nature of the pyrocarbon; d) high-resolution 002 lattice-fringe TEM image of the pyrocarbon; the inset is a selected area electron diffraction diagram illustrating the high anisotropy through the low value of the 002 diffraction arc opening angle (OA = 27°).
Figure 3. Heat transfer model for the out-of-contact operation mode and the contact mode (considering the probe in contact either with the glass fiber or the PyC matrix). T_p^OFC is the out-of-contact probe temperature that is used to identify the probe thermal impedance Z_p(ω₂). The current generator represents the heat source, localized within the Pd strip of the probe.
Figure 4. Reduced sensitivities of the amplitude A(ω₂) to the axial and radial thermal conductivities (k_r,PyC, k_z,PyC) of the PyC, the contact radius r₀ and the thermal contact resistance R_c at the interface between the probe and the investigated surface, as a function of frequency. Ratios between sensitivity functions are also presented.
Figure 5. a) Reduced sensitivities of the amplitude A(r, ω₂) with respect to R_c, r₀, k_r,PyC, k_z,PyC and TBR at 1125 Hz, along a path crossing the fiber/matrix interface; b) reduced sensitivity ratios along the same path.
Figure 6. Measured amplitude (a) and phase (b) vs. ω₂. Plain lines are simulations using the identified R_c when the probe is in contact with the silica fiber (green line) and the identified k_PyC,eff when the probe is in contact with the PyC (red line).
Figure 7. a) 50 × 50 µm² images obtained under atmospheric conditions using Scanning Thermal Microscopy at 1125 Hz, showing, from left to right, the topography, amplitude and phase; b) sweep over the domain.

Figure 8. Green circles: measured probe temperature along the line (see Fig. 7) and simulated probe temperature values considering the identified values of TBR, r₀ and k_PyC,eff, together with two additional values of TBR (in K.m².W⁻¹) in order to show the sensitivity of the calculated temperature to this parameter.
01761539 | en | [
"info.info-cv",
"info.info-ts",
"info.info-lg"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01761539/file/1801.00055.pdf | Aliaksandr Siarohin
Enver Sangineto
Stéphane Lathuilière
email: stephane.lathuiliere@inria.fr
Nicu Sebe
email: niculae.sebe@unitn.it
Deformable GANs for Pose-based Human Image Generation
In this paper we address the problem of generating person images conditioned on a given pose. Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose. In order to deal with pixel-to-pixel misalignments caused by the pose differences, we introduce deformable skip connections in the generator of our Generative Adversarial Network. Moreover, a nearest-neighbour loss is proposed instead of the common L 1 and L 2 losses in order to match the details of the generated image with the target image. We test our approach using photos of persons in different poses and we compare our method with previous work in this area showing state-of-the-art results in two benchmarks. Our method can be applied to the wider field of deformable object generation, provided that the pose of the articulated object can be extracted using a keypoint detector.
Introduction
In this paper we deal with the problem of generating images where the foreground object changes because of a viewpoint variation or a deformable motion, such as the articulated human body. Specifically, inspired by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF], our goal is to generate a human image conditioned on two different variables: [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] the appearance of a specific person in a given image and (2) the pose of the same person in another image. The task our networks need to solve is to preserve the appearance details (e.g., the texture) contained in the first variable while performing a deformation on the structure of the foreground object according to the second variable. We focus on the human body which is an articulated "object", important for many applications (e.g., computer-graphics based manipulations or re-identification dataset synthesis). However, our approach can be used with other deformable objects such as human faces or animal bodies, provided that a significant number of keypoints can be automatically extracted from the object of interest in order to represent its pose.
Pose-based human-being image generation is motivated by the interest in synthesizing videos [START_REF] Walker | The pose knows: Video forecasting by generating pose futures[END_REF] with non-trivial human movements or in generating rare poses for human pose estimation [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] or re-identification [START_REF] Zheng | Unlabeled samples generated by GAN improve the person re-identification baseline in vitro[END_REF] training datasets. However, most of the recently proposed, deepnetwork based generative approaches, such as Generative Adversarial Networks (GANs) [START_REF] Goodfellow | Generative adversarial nets[END_REF] or Variational Autoencoders (VAEs) [START_REF] Kingma | Auto-encoding variational bayes[END_REF] do not explicitly deal with the problem of articulated-object generation. Common conditional methods (e.g., conditional GANs or conditional VAEs) can synthesize images whose appearances depend on some conditioning variables (e.g., a label or another image). For instance, Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] recently proposed an "image-toimage translation" framework, in which an input image x is transformed into a second image y represented in another "channel" (see Fig. 1a). However, most of these methods have problems when dealing with large spatial deformations between the conditioning and the target image. For instance, the U-Net architecture used by Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] is based on skip connections which help preserving local information between x and y. Specifically, skip connections are used to copy and then concatenate the feature maps of the generator "encoder" (where information is downsam-pled using convolutional layers) to the generator "decoder" (containing the upconvolutional layers). However, the assumption used in [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] is that x and y are roughly aligned with each other and they represent the same underlying structure.
This assumption is violated when the foreground object in y undergoes to large spatial deformations with respect to x (see Fig. 1b). As shown in [START_REF] Ma | Pose guided person image generation[END_REF], skip connections cannot reliably cope with misalignments between the two poses. Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] propose to alleviate this problem using a two-stage generation approach. In the first stage a U-Net generator is trained using a masked L 1 loss in order to produce an intermediate image conditioned on the target pose. In the second stage, a second U-Net based generator is trained using also an adversarial loss in order to generate an appearance difference map which brings the intermediate image closer to the appearance of the conditioning image. In contrast, the GAN-based method we propose in this paper is end-to-end trained by explicitly taking into account pose-related spatial deformations. More specifically, we propose deformable skip connections which "move" local information according to the structural deformations represented in the conditioning variables. These layers are used in our U-Net based generator. In order to move information according to a specific spatial deformation, we decompose the overall deformation by means of a set of local affine transformations involving subsets of joints, then we deform the convolutional feature maps of the encoder according to these transformations and we use common skip connections to transfer the transformed tensors to the decoder's fusion layers. Moreover, we also propose to use a nearest-neighbour loss as a replacement of common pixelto-pixel losses (such as, e.g., L 1 or L 2 losses) commonly used in conditional generative approaches. This loss proved to be helpful in generating local information (e.g., texture) similar to the target image which is not penalized because of small spatial misalignments.
We test our approach using the benchmarks and the evaluation protocols proposed in [START_REF] Ma | Pose guided person image generation[END_REF] obtaining higher qualitative and quantitative results in all the datasets. Although tested on the specific human-body problem, our approach makes few human-related assumptions and can be easily extended to other domains involving the generation of highly deformable objects. Our code and our trained models are publicly available 1 .
Related work
Most common deep-network-based approaches for visual content generation can be categorized as either Variational Autoencoders (VAEs) [START_REF] Kingma | Auto-encoding variational bayes[END_REF] or Generative Adversarial Networks (GANs) [START_REF] Goodfellow | Generative adversarial nets[END_REF]. VAEs are based on probabilistic graphical models and are trained by maximizing a lower bound of the corresponding data likelihood. GANs are based on two networks, a generator and a discriminator, which are trained simultaneously such that the generator tries to "fool" the discriminator and the discriminator learns how to distinguish between real and fake images.
1 https://github.com/AliaksandrSiarohin/pose-gan
Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] propose a conditional GAN framework for image-to-image translation problems, where a given scene representation is "translated" into another representation. The main assumption behind this framework is that there exists a spatial correspondence between the low-level information of the conditioning and the output image. VAEs and GANs are combined in [START_REF] Zhao | Multi-view image generation from a single-view[END_REF] to generate realistic-looking multi-view clothes images from a single-view input image. The target view is fed to the model via a viewpoint label such as front or left side, and a two-stage approach is adopted: pose integration and image refinement. Adopting a similar pipeline, Lassner et al. [START_REF] Lassner | A generative model of people in clothing[END_REF] generate images of people with different clothes in a given pose. This approach is based on a costly annotation (fine-grained segmentation with 18 clothing labels) and a complex 3D pose representation.
Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] propose a more general approach which allows to synthesize person images in any arbitrary pose. Similarly to our proposal, the input of their model is a conditioning image of the person and a target new pose defined by 18 joint locations. The target pose is described by means of binary maps where small circles represent the joint locations. Similarly to [START_REF] Lassner | A generative model of people in clothing[END_REF][START_REF] Zhao | Multi-view image generation from a single-view[END_REF], the generation process is split in two different stages: pose generation and texture refinement. In contrast, in this paper we show that a single-stage approach, trained end-to-end, can be used for the same task obtaining higher qualitative results.
Jaderberg et al. [START_REF] Jaderberg | Spatial transformer networks[END_REF] propose a spatial transformer layer, which learns how to transform a feature map in a "canonical" view, conditioned on the feature map itself. However only a global, parametric transformation can be learned (e.g., a global affine transformation), while in this paper we deal with non-parametric deformations of articulated objects which cannot be described by means of a unique global affine transformation.
Generally speaking, U-Net based architectures are frequently adopted for pose-based person-image generation tasks [START_REF] Lassner | A generative model of people in clothing[END_REF][START_REF] Ma | Pose guided person image generation[END_REF][START_REF] Walker | The pose knows: Video forecasting by generating pose futures[END_REF][START_REF] Zhao | Multi-view image generation from a single-view[END_REF]. However, common U-Net skip connections are not well-designed for large spatial deformations because local information in the input and in the output images is not aligned (Fig. 1). In contrast, we propose deformable skip connections to deal with this misalignment problem and "shuttle" local information from the encoder to the decoder driven by the specific pose difference. In this way, differently from previous work, we are able to simultaneously generate the overall pose and the texture-level refinement.
Finally, our nearest-neighbour loss is similar to the perceptual loss proposed in [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF] and to the style-transfer spatial-analogy approach recently proposed in [START_REF] Liao | Visual attribute transfer through deep image analogy[END_REF]. However, the perceptual loss, based on an element-by-element difference computed in the feature map of an external classifier [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF], does not take into account spatial misalignments. On the other hand, the patch-based similarity, adopted in [START_REF] Liao | Visual attribute transfer through deep image analogy[END_REF] to compute a dense feature correspondence, is very computationally expensive and it is not used as a loss.
The network architectures
In this section we describe the architectures of our generator (G) and discriminator (D) and the proposed deformable skip connections. We first introduce some notation. At testing time our task, similarly to [START_REF] Ma | Pose guided person image generation[END_REF], consists in generating an image x showing a person whose appearance (e.g., clothes, etc.) is similar to an input, conditioning image x a but with a body pose similar to P (x b ), where x b is a different image of the same person and P (x) = (p 1 , ...p k ) is a sequence of k 2D points describing the locations of the human-body joints in x. In order to allow a fair comparison with [START_REF] Ma | Pose guided person image generation[END_REF], we use the same number of joints (k = 18) and we extract P () using the same Human Pose Estimator (HPE) [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] used in [START_REF] Ma | Pose guided person image generation[END_REF]. Note that this HPE is used both at testing and at training time, meaning that we do not use manually-annotated poses and the so extracted joint locations may have some localization errors or missing detections/false positives.
At training time we use a dataset X = {(x_a^(i), x_b^(i))}_{i=1,...,N} containing pairs of conditioning-target images of the same person in different poses. For each pair (x_a, x_b), a conditioning and a target pose P(x_a) and P(x_b) are extracted from the corresponding image and represented using two tensors H_a = H(P(x_a)) and H_b = H(P(x_b)), each composed of k heat maps, where H_j (1 ≤ j ≤ k) is a 2D matrix of the same dimension as the original image. If p_j is the j-th joint location, then:
H_j(p) = exp( −||p − p_j|| / σ² ),   (1)
with σ = 6 pixels (chosen with cross-validation). Using blurring instead of a binary map is useful to provide widespread information about the location p j . The generator G is fed with: (1) a noise vector z, drawn from a noise distribution Z and implicitly provided using dropout [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] and (2) the triplet (x a , H a , H b ). Note that, at testing time, the target pose is known, thus H(P (x b )) can be computed. Note also that the joint locations in x a and H a are spatially aligned (by construction), while in H b they are different. Hence, differently from [START_REF] Ma | Pose guided person image generation[END_REF][START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], H b is not concatenated with the other input tensors. Indeed the convolutional-layer units in the encoder part of G have a small receptive field which cannot capture large spatial displacements. For instance, a large movement of a body limb in x b with respect to x a , is represented in different locations in x a and H b which may be too far apart from each other to be captured by the receptive field of the convolutional units. This is emphasized in the first layers of the encoder, which represent low-level information. Therefore, the convolutional filters cannot simultaneously process texture-level information (from x a ) and the corresponding pose information (from H b ).
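To make the heat-map encoding concrete, here is a small NumPy sketch of Eq. 1; the image size, the example joint coordinates and the handling of missing joints below are our own illustrative assumptions and are not taken from the released implementation.

```python
import numpy as np

def pose_heatmaps(joints, height, width, sigma=6.0):
    """Encode k 2D joint locations as k heat maps (Eq. 1).

    joints: array of shape (k, 2) with (row, col) coordinates; a joint set to
    (-1, -1) is treated as missing and yields an all-zero map (an assumption,
    the paper does not specify this convention)."""
    k = len(joints)
    rows = np.arange(height)[:, None]          # (H, 1)
    cols = np.arange(width)[None, :]           # (1, W)
    maps = np.zeros((k, height, width), dtype=np.float32)
    for j, (r, c) in enumerate(joints):
        if r < 0 or c < 0:                     # missed detection
            continue
        dist = np.sqrt((rows - r) ** 2 + (cols - c) ** 2)
        maps[j] = np.exp(-dist / sigma ** 2)
    return maps

# Example: 3 joints on a 128x64 image (Market-1501 resolution)
H = pose_heatmaps(np.array([[20, 32], [60, 30], [-1, -1]]), 128, 64)
print(H.shape)  # (3, 128, 64)
```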
For this reason we process x_a and H_a independently from H_b in the encoder. Specifically, x_a and H_a are concatenated and processed using one convolutional stream of the encoder, while H_b is processed by means of a second convolutional stream, without sharing the weights (Fig. 2). The feature maps of the first stream are then fused with the layer-specific feature maps of the second stream in the decoder, after a pose-driven spatial deformation performed by our deformable skip connections (see Sec. 3.1).
Our discriminator network is based on the conditional, fully-convolutional discriminator proposed by Isola et al. [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF]. In our case, D takes as input 4 tensors: (x a , H a , y, H b ), where either y = x b or y = x = G(z, x a , H a , H b ) (see Fig. 2). These four tensors are concatenated and then given as input to D. The discriminator's output is a scalar value indicating its confidence on the fact that y is a real image.
Deformable skip connections
As mentioned above and similarly to [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], the goal of the deformable skip connections is to "shuttle" local information from the encoder to the decoder part of G. The local information to be transferred is, generally speaking, contained in a tensor F , which represents the feature map activations of a given convolutional layer of the encoder. However, differently from [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], we need to "pick" the information to shuttle taking into account the object-shape deformation which is described by the difference between P (x a ) and P (x b ).
To do so, we decompose the global deformation in a set of local affine transformations, defined using subsets of joints in P (x a ) and P (x b ). Using these affine transformations and local masks constructed using the specific joints, we deform the content of F and then we use common skip connections to copy the transformed tensor and concatenate it with the corresponding tensor in the destination layer (see Fig. 2). Below we describe in more detail the whole pipeline.
Decomposing an articulated body into a set of rigid sub-parts. The human body is an articulated "object" which can be roughly decomposed into a set of rigid sub-parts. We chose 10 sub-parts: the head, the torso, the left/right upper/lower arm and the left/right upper/lower leg. Each of them corresponds to a subset of the 18 joints defined by the HPE [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF] we use for extracting P(). Using these joint locations we can define rectangular regions which enclose the specific body part. In case of the head, the region is simply chosen to be the axis-aligned enclosing rectangle of all the corresponding joints. For the torso, which is the largest area, we use a region which includes the whole image, in such a way to shuttle texture information for the background pixels. Concerning the body limbs, each limb corresponds to only 2 joints. In this case we define a region to be a rotated rectangle whose major axis (r_1) corresponds to the line between these two joints, while the minor axis (r_2) is orthogonal to r_1 and with a length equal to one third of the mean of the torso's diagonals (this value is used for all the limbs). In Fig. 3 we show an example. Let R_h^a = {p_1, ..., p_4} be the set of the 4 rectangle corners in x_a defining the h-th body region (1 ≤ h ≤ 10). Note that these 4 corner points are not joint locations. Using R_h^a we can compute a binary mask M_h(p) which is zero everywhere except at those points p lying inside R_h^a. Moreover, let R_h^b = {q_1, ..., q_4} be the corresponding rectangular region in x_b. Matching the points in R_h^a with the corresponding points in R_h^b we can compute the parameters of a body-part specific affine transformation (see below). In either x_a or x_b, some of the body regions can be occluded, truncated by the image borders or simply miss-detected by the HPE. In this case we leave the corresponding region R_h empty and the h-th affine transform is not computed (see below).
Figure 2: A schematic representation of our network architectures. For the sake of clarity, in this figure we depict P(·) as a skeleton and each tensor H as the average of its component matrices H_j (1 ≤ j ≤ k). The white rectangles in the decoder represent the feature maps directly obtained using up-convolutional filters applied to the previous-layer maps. The reddish rectangles represent the feature maps "shuttled" by the skip connections from the H_b stream. Finally, blueish rectangles represent the deformed tensors d(F) "shuttled" by the deformable skip connections from the (x_a, H_a) stream.
Note that our body-region definition is the only human-specific part of the proposed approach. However, similar regions can be easily defined using the joints of other articulated objects such as those representing an animal body or a human face.
Computing a set of affine transformations. During the forward pass (i.e., both at training and at testing time) we decompose the global deformation of the conditioning pose with respect to the target pose by means of a set of local affine transformations, one per body region. Specifically, given R_h^a in x_a and R_h^b in x_b (see above), we compute the 6 parameters k_h of an affine transformation f_h(·; k_h) using Least Squares Error:

min_{k_h} Σ_{p_j ∈ R_h^a, q_j ∈ R_h^b} ||q_j − f_h(p_j; k_h)||²_2   (2)

Figure 3: For each specific body part, an affine transformation f_h is computed. This transformation is used to "move" the feature-map content corresponding to that body part.
The parameter vector k_h is computed using the original image resolution of x_a and x_b and then adapted to the specific resolution of each involved feature map F. Similarly, we compute scaled versions of each M_h. In case either R_h^a or R_h^b is empty (i.e., when any of the specific body-region joints has not been detected using the HPE, see above), then we simply set M_h to be a matrix with all elements equal to 0 (f_h is not computed).
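For illustration, the least-squares fit of Eq. 2 can be written in a few lines of NumPy; this is only a sketch of the computation (the point ordering and the example rectangles are made up), not a transcription of the released code.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform f(p) = A p + t mapping src_pts to dst_pts.

    src_pts, dst_pts: arrays of shape (m, 2) with m >= 3 matched points
    (the rectangle corners R_h^a and R_h^b). Returns the 2x3 matrix [A | t]
    minimising the objective of Eq. 2."""
    m = src_pts.shape[0]
    # Design matrix: each point contributes two rows, one per output coordinate.
    X = np.zeros((2 * m, 6))
    X[0::2, 0:2] = src_pts
    X[0::2, 2] = 1.0
    X[1::2, 3:5] = src_pts
    X[1::2, 5] = 1.0
    y = dst_pts.reshape(-1)
    k, *_ = np.linalg.lstsq(X, y, rcond=None)
    return k.reshape(2, 3)

# Corners of a body-part rectangle in x_a and the matched corners in x_b
R_a = np.array([[10., 5.], [10., 25.], [40., 25.], [40., 5.]])
R_b = np.array([[12., 8.], [15., 28.], [45., 30.], [42., 10.]])
print(fit_affine(R_a, R_b))
```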
Note that (f_h(), M_h) and their lower-resolution variants need to be computed only once for each pair of real images (x_a, x_b) ∈ X and, in case of the training phase, this can be done before training the networks (but in our current implementation this is done on the fly).
Combining affine transformations to approximate the object deformation. Once (f h (), M h ), h = 1, ..., 10 are computed for the specific spatial resolution of a given tensor F , the latter can be transformed in order to approximate the global pose-dependent deformation. Specifically, we first compute for each h:
F_h = f_h(F ⊙ M_h),   (3)
where ⊙ is a point-wise multiplication and f_h(F(p)) is used to "move" all the channel values of F corresponding to point p. Finally, we merge the resulting tensors using:
d(F(p, c)) = max_{h=1,...,10} F_h(p, c),   (4)
where c is a specific channel. The rationale behind Eq. 4 is that, when two body regions partially overlap each other, the final deformed tensor d(F ) is obtained by picking the maximum-activation values. Preliminary experiments performed using average pooling led to slightly worse results.
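The following NumPy sketch illustrates Eqs. 3-4: every masked copy of F is warped by its body-part affine transform and the warped tensors are merged with a point-wise maximum. The inverse-mapping, nearest-neighbour sampling used here is our own implementation choice; the paper does not prescribe how the warping is carried out inside the network, and body parts whose transform was not computed are simply assumed to be skipped by the caller.

```python
import numpy as np

def warp_feature_map(F, affine):
    """Warp a (H, W, C) tensor with a 2x3 affine matrix [A | t] by inverse mapping."""
    H, W, C = F.shape
    A, t = affine[:, :2], affine[:, 2]
    inv_A = np.linalg.inv(A)                       # assumes A is non-singular
    out = np.zeros_like(F)
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    dst = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    src = (dst - t) @ inv_A.T                      # pre-image of each target pixel
    src = np.rint(src).astype(int)                 # nearest-neighbour sampling
    valid = (src[:, 0] >= 0) & (src[:, 0] < H) & (src[:, 1] >= 0) & (src[:, 1] < W)
    out[dst[valid, 0].astype(int), dst[valid, 1].astype(int)] = F[src[valid, 0], src[valid, 1]]
    return out

def deform(F, masks, affines):
    """Eqs. 3-4: d(F) = max_h f_h(F * M_h), merging the warped, masked copies of F."""
    warped = [warp_feature_map(F * M[..., None], A) for M, A in zip(masks, affines)]
    return np.max(np.stack(warped, axis=0), axis=0)
```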
Training
D and G are trained using a combination of a standard conditional adversarial loss L cGAN with our proposed nearest-neighbour loss L N N . Specifically, in our case L cGAN is given by:
L_cGAN(G, D) = E_{(x_a, x_b)∈X}[log D(x_a, H_a, x_b, H_b)] + E_{(x_a, x_b)∈X, z∈Z}[log(1 − D(x_a, H_a, x, H_b))],   (5)
where x = G(z, x a , H a , H b ).
Previous works on conditional GANs combine the adversarial loss with either an L 2 [START_REF] Pathak | Context encoders: Feature learning by inpainting[END_REF] or an L 1 -based loss [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF][START_REF] Ma | Pose guided person image generation[END_REF] which is used only for G. For instance, the L 1 distance computes a pixel-to-pixel difference between the generated and the real image, which, in our case, is:
L_1(x, x_b) = ||x − x_b||_1.   (6)
However, a well-known problem behind the use of L 1 and L 2 is the production of blurred images. We hypothesize that this is also due to the inability of these losses to tolerate small spatial misalignments between x and x b . For instance, suppose that x, produced by G, is visually plausible and semantically similar to x b , but the texture details on the clothes of the person in the two compared images are not pixel-to-pixel aligned. Both the L 1 and the L 2 loss will penalize this inexact pixel-level alignment, although not semantically important from the human point of view. Note that these misalignments do not depend on the global deformation between x a and x b , because x is supposed to have the same pose as x b . In order to alleviate this problem, we propose to use a nearest-neighbour loss L N N based on the following definition of image difference:
L_NN(x, x_b) = Σ_{p∈x} min_{q∈N(p)} ||g(x(p)) − g(x_b(q))||_1,   (7)
where N(p) is an n × n local neighbourhood of point p (we use 5 × 5 and 3 × 3 neighbourhoods for the DeepFashion and the Market-1501 dataset, respectively, see Sec. 6). g(x(p)) is a vectorial representation of a patch around point p in image x, obtained using convolutional filters (see below for more details). Note that L_NN() is not a metric because it is not symmetric. In order to efficiently compute Eq. 7, we compare patches in x and x_b using their representation (g()) in a convolutional map of an externally trained network. In more detail, we use VGG-19 [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF], trained on ImageNet and, specifically, its second convolutional layer (called conv1_2). The first two convolutional maps in VGG-19 (conv1_1 and conv1_2) are both obtained using a convolutional stride equal to 1. For this reason, the feature map (C_x) of an image x in conv1_2 has the same resolution as the original image x. Exploiting this fact, we compute the nearest-neighbour field directly on conv1_2, without losing spatial precision. Hence, we define: g(x(p)) = C_x(p), which corresponds to the vector of all the channel values of C_x with respect to the spatial position p. C_x(p) has a receptive field of 5 × 5 in x, thus effectively representing a patch of dimension 5 × 5 using a cascade of two convolutional filters. Using C_x, Eq. 7 becomes:
L_NN(x, x_b) = Σ_{p∈x} min_{q∈N(p)} ||C_x(p) − C_{x_b}(q)||_1,   (8)
In Sec. A, we show how Eq. 8 can be efficiently implemented using GPU-based parallel computing. The final L N N -based loss is:
L N N (G) = E (xa,x b )∈X ,z∈Z L N N (x, x b ). (9)
Combining Eq. 5 and Eq. 9 we obtain our objective:
G * = arg min G max D L cGAN (G, D) + λL N N (G), (10)
with λ = 0.01 used in all our experiments. The value of λ is small because it acts as a normalization factor in Eq. 8 with respect to the number of channels in C x and the number of pixels in x (more details in Sec. A).
Implementation details
We train G and D for 90k iterations, with the Adam optimizer (learning rate: 2×10⁻⁴, β_1 = 0.5, β_2 = 0.999). Following [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] we use instance normalization [START_REF] Ulyanov | Instance normalization: The missing ingredient for fast stylization[END_REF]. In the following we denote with: (1) C^s_m a convolution-ReLU layer with m filters and stride s, (2) CN^s_m the same as C^s_m with instance normalization before ReLU and (3) CD^s_m the same as CN^s_m with the addition of dropout at rate 50%. Differently from [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], we use dropout only at training time. The encoder part of the generator is given by two streams (Fig. 2), each of which is composed of the following sequence of layers:
CN ...
The discriminator architecture is: C^2_64 - C^2_128 - C^2_256 - C^2_512 - C^2_1, where the ReLU of the last layer is replaced with a sigmoid.
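As a concrete reading of this specification, a PyTorch sketch of the discriminator stack is given below. Only the filter counts, the strides, the absence of normalization in the C blocks and the final sigmoid come from the text; the kernel size of 4 with padding 1 and the 42 input channels (3 + 18 + 3 + 18 for the concatenated x_a, H_a, y, H_b) are our assumptions, and the released code may differ.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, normalize=False):
    """C^s_m block: stride-2 convolution, optional instance norm, then ReLU.
    Kernel size 4 and padding 1 are assumptions; the text only fixes filters/stride."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if normalize:
        layers.append(nn.InstanceNorm2d(out_ch))
    layers.append(nn.ReLU(inplace=True))
    return layers

class Discriminator(nn.Module):
    """C^2_64 - C^2_128 - C^2_256 - C^2_512 - C^2_1, sigmoid on the last layer."""
    def __init__(self, in_channels=42):
        super().__init__()
        self.net = nn.Sequential(
            *conv_block(in_channels, 64),
            *conv_block(64, 128),
            *conv_block(128, 256),
            *conv_block(256, 512),
            nn.Conv2d(512, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x_a, h_a, y, h_b):
        # The four input tensors are concatenated along the channel dimension.
        return self.net(torch.cat([x_a, h_a, y, h_b], dim=1))
```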
The generator for the DeepFashion dataset has one additional convolution block (CN 2 512 ) both in the encoder and in the decoder, because images in this dataset have a higher resolution.
Experiments
Datasets The person re-identification Market-1501 dataset [START_REF] Zheng | Scalable person re-identification: A benchmark[END_REF] contains 32,668 images of 1,501 persons captured from 6 different surveillance cameras. This dataset is challenging because of the low-resolution images (128×64) and the high diversity in pose, illumination, background and viewpoint. To train our model, we need pairs of images of the same person in two different poses. As this dataset is relatively noisy, we first automatically remove those images in which no human body is detected using the HPE, leading to 263,631 training pairs. For testing, following [START_REF] Ma | Pose guided person image generation[END_REF], we randomly select 12,000 pairs. No person is in common between the training and the test split.
The DeepFashion dataset (In-shop Clothes Retrieval Benchmark) [START_REF] Liu | Deepfashion: Powering robust clothes recognition and retrieval with rich annotations[END_REF] is composed of 52,712 clothes images, matched each other in order to form 200,000 pairs of identical clothes with two different poses and/or scales of the persons wearing these clothes. The images have a resolution of 256×256 pixels. Following the training/test split adopted in [START_REF] Ma | Pose guided person image generation[END_REF], we create pairs of images, each pair depicting the same person with identical clothes but in different poses. After removing those images in which the HPE does not detect any human body, we finally collect 89,262 pairs for training and 12,000 pairs for testing.
Metrics. Evaluation in the context of generation tasks is a problem in itself. In our experiments we adopt a redundancy of metrics and a user study based on human judgments. Following [START_REF] Ma | Pose guided person image generation[END_REF], we use Structural Similarity (SSIM) [START_REF] Wang | Image quality assessment: from error visibility to structural similarity[END_REF], Inception Score (IS) [START_REF] Salimans | Improved techniques for training gans[END_REF] and their corresponding masked versions mask-SSIM and mask-IS [START_REF] Ma | Pose guided person image generation[END_REF]. The latter are obtained by masking out the image background, and the rationale behind this is that, since no background information of the target image is input to G, the network cannot guess what the target background looks like. Note that the evaluation masks we use to compute both the mask-IS and the mask-SSIM values do not correspond to the masks ({M_h}) we use for training. The evaluation masks have been built following the procedure proposed in [START_REF] Ma | Pose guided person image generation[END_REF] and adopted in that work for both training and evaluation. Consequently, the mask-based metrics may be biased in favor of their method. Moreover, we observe that the IS metrics [START_REF] Salimans | Improved techniques for training gans[END_REF], based on the entropy computed over the classification neurons of an external classifier [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF], is not very suitable for domains with only one object class. For this reason we propose to use an additional metrics that we call Detection Score (DS). Similarly to the classification-based metrics (FCN-score) used in [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF], DS is based on the detection outcome of the state-of-the-art object detector SSD [START_REF] Liu | SSD: Single shot multibox detector[END_REF], trained on Pascal VOC 07 [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2007[END_REF] (and not fine-tuned on our datasets). At testing time, we use the person-class detection scores of SSD computed on each generated image x. DS(x) corresponds to the maximum-score box of SSD on x and the final DS value is computed by averaging the scores of all the generated images. In other words, DS measures the confidence of a person detector in the presence of a person in the image. Given the high accuracy of SSD on the challenging Pascal VOC 07 dataset [START_REF] Liu | SSD: Single shot multibox detector[END_REF], we believe that it can be used as a good measure of how realistic (person-like) a generated image is.
Finally, in our tables we also include the value of each metrics computed using the real images of the test set. Since these values are computed on real data, they can be considered as a sort of an upper-bound to the results a generator can obtain. However, these values are not actual upper bounds in the strict sense: for instance the DS metrics on the real datasets is not 1 because of SSD failures.
Comparison with previous work
In Tab. 1 we compare our method with [START_REF] Ma | Pose guided person image generation[END_REF]. Note that there are no other works to compare with on this task yet. The mask-based metrics are not reported in [START_REF] Ma | Pose guided person image generation[END_REF] for the DeepFashion dataset. Concerning the DS metrics, we used the publicly available code and network weights released by the authors of [START_REF] Ma | Pose guided person image generation[END_REF] in order to generate new images according to the common testing protocol and ran the SSD detector to get the DS values.
On the Market-1501 dataset our method reports the highest performance with all but the IS metrics. Specifically, our DS values are much higher than those obtained by [START_REF] Ma | Pose guided person image generation[END_REF]. Conversely, on the DeepFashion dataset, our approach significantly improves the IS value but returns a slightly lower SSIM value.
User study
In order to further compare our method with the state-ofthe-art approach [START_REF] Ma | Pose guided person image generation[END_REF] we implement a user study following the protocol of Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. For each dataset, we show 55 real and 55 generated images in a random order to 30 users for one second. Differently from Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF], who used Amazon Mechanical Turk (AMT), we used "expert" (voluntary) users: PhD students and Post-docs working in Computer Vision and belonging to two different departments.
We believe that expert users, who are familiar with GAN-like images, can more easily distinguish real from fake images, so confusing our users is potentially a more difficult task for our GAN. The results in Tab. 2 confirm the significant quality boost of our images with respect to the images produced in [START_REF] Ma | Pose guided person image generation[END_REF]. For instance, on the Market-1501 dataset, the G2R human "confusion" is one order of magnitude higher than in [START_REF] Ma | Pose guided person image generation[END_REF]. Finally, in Sec. D we show some example images, directly comparing with [START_REF] Ma | Pose guided person image generation[END_REF]. We also show the results obtained by training different person re-identification systems after augmenting the training set with images generated by our method. These experiments indirectly confirm that the degree of realism and diversity of our images is very significant.
Table 2: User study (%). R2G: #Real images rated as generated / #Real images; G2R: #Generated images rated as real / #Generated images. (*) These results are reported in [START_REF] Ma | Pose guided person image generation[END_REF] and refer to a similar study with AMT users.
Model                Market-1501           DeepFashion
                     R2G      G2R          R2G      G2R
Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] (*)   11.2     5.5          9.2      14.9
Ours                 22.67    50.24        12.42    24.61
Ablation study and qualitative analysis
In this section we present an ablation study to clarify the impact of each part of our proposal on the final performance. We first describe the compared methods, obtained by "amputating" important parts of the full-pipeline presented in Sec. 3-4. The discriminator architecture is the same for all the methods.
• Baseline: We use the standard U-Net architecture [START_REF] Isola | Image-to-image translation with conditional adversarial networks[END_REF] without deformable skip connections. The inputs of G and D and the way pose information is represented (see the definition of tensor H in Sec. 3) are the same as in the full-pipeline. However, in G, x a , H a and H b are concatenated at the input layer. Hence, the encoder of G is composed of only one stream, whose architecture is the same as the two streams described in Sec. 5.
• DSC: G is implemented as described in Sec. 3, introducing our Deformable Skip Connections (DSC). Both in DSC and in Baseline, training is performed using an L 1 loss together with the adversarial loss.
• PercLoss: This is DSC in which the L 1 loss is replaced with the Perceptual loss proposed in [START_REF] Johnson | Perceptual losses for real-time style transfer and super-resolution[END_REF]. This loss is computed using the layer conv2_1 of [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF], chosen to have a receptive field as close as possible to N(p) in Eq. 8, and computing the element-to-element difference in this layer without nearest neighbor search.
• Full: This is the full-pipeline whose results are reported in Tab. 1, and in which we use the proposed L N N loss (see Sec. 4).
In Tab. 3 we report a quantitative evaluation on the Market-1501 and on the DeepFashion dataset with respect to the four different versions of our approach. In most of the cases, there is a progressive improvement from Baseline to DSC to Full. Moreover, Full usually obtains better results than PercLoss. These improvements are particularly evident looking at the DS metrics, which we believe is strong evidence that the generated images are realistic. DS values on the DeepFashion dataset are omitted because they are all close to the value ∼ 0.96. In Fig. 4 and Fig. 5 we show some qualitative results. These figures show the progressive improvement through the four baselines which is quantitatively presented above. In fact, while pose information is usually well generated by all the methods, the texture generated by Baseline often does not correspond to the texture in x a or is blurred. In some cases, the improvement of Full with respect to Baseline is quite drastic, such as the drawing on the shirt of the girl in the second row of Fig. 5 or the stripes on the clothes of the persons in the third and in the fourth row of Fig. 4. Further examples are shown in the Appendix.
Conclusions
In this paper we presented a GAN-based approach for image generation of persons conditioned on the appearance and the pose. We introduced two novelties: deformable skip connections and nearest-neighbour loss. The first is used to solve common problems in U-Net based generators when dealing with deformable objects. The second novelty is used to alleviate a different type of misalignment between the generated image and the ground-truth image.
Our experiments, based on both automatic evaluation metrics and human judgments, show that the proposed method is able to outperform previous work on this task. Although the proposed method was tested on the specific task of human generation, only a few of its assumptions refer to the human body, and we believe that our proposal can be easily extended to address other deformable-object generation tasks.
Appendix
In this Appendix we report some additional implementation details and we show other quantitative and qualitative results. Specifically, in Sec. A we explain how Eq. 8 can be efficiently implemented using GPU-based parallel computing, while in Sec. B we show how the human-body symmetry can be exploited in case of missed limb detections. In Sec. C we train state-of-the-art Person Re-IDentification (Re-ID) systems using a combination of real and generated data, which, on the one hand, shows how our images can be effectively used to boost the performance of discriminative methods and, on the other hand, indirectly shows that our generated images are realistic and diverse. In Sec. D we show a direct (qualitative) comparison of our method with the approach presented in [START_REF] Ma | Pose guided person image generation[END_REF] and in Sec. E we show other images generated by our method, including some failure cases. Note that some of the images in the DeepFashion dataset have been manually cropped (after the automatic generation) to improve the overall visualization quality.
A. Nearest-neighbour loss implementation
Our proposed nearest-neighbour loss is based on the definition of L N N (x, x b ) given in Eq. 8. In that equation, for each point p in x, the "most similar" (in the C x -based feature space) point q in x b needs to be searched for in a n × n neighborhood of p. This operation may be quite time consuming if implemented using sequential computing (i.e., using a "for-loop"). We show here how this computation can be sped-up by exploiting GPU-based parallel computing in which different tensors are processed simultaneously.
Given C_{x_b}, we compute n² shifted versions of C_{x_b}: {C^{(i,j)}_{x_b}}, where (i, j) is a translation offset ranging in a relative n × n neighbourhood (i, j ∈ {−(n−1)/2, ..., +(n−1)/2}) and C^{(i,j)}_{x_b} is filled with the value +∞ at the borders. Using these translated versions of C_{x_b}, we compute n² corresponding difference tensors {D^{(i,j)}}, where:
D^{(i,j)} = |C_x − C^{(i,j)}_{x_b}|   (11)
and the difference is computed element-wise. D^{(i,j)}(p) contains the channel-by-channel absolute difference between C_x(p) and C_{x_b}(p + (i, j)). Then, for each D^{(i,j)}, we sum all the channel-based differences, obtaining:
S^{(i,j)} = Σ_c D^{(i,j)}(c),   (12)
where c ranges over all the channels and the sum is performed pointwise. S (i,j) is a matrix of scalar values, each value representing the L 1 norm of the difference between a point p in C x and a corresponding point p + (i, j) in C x b :
S^{(i,j)}(p) = ||C_x(p) − C_{x_b}(p + (i, j))||_1.   (13)
For each point p, we can now compute its best match in a local neighbourhood of C x b simply using:
M(p) = min_{(i,j)} S^{(i,j)}(p).   (14)
Finally, Eq. 8 becomes:
L_NN(x, x_b) = Σ_p M(p).   (15)
Since we do not normalize Eq. 12 by the number of channels nor Eq. 15 by the number of pixels, the final value L N N (x, x b ) is usually very high. For this reason we use a small value λ = 0.01 in Eq. 10 when weighting L N N with respect to L cGAN .
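For reference, the shifted-difference computation of Eqs. 11-15 can be transcribed almost literally into NumPy as below; the toy check at the end only verifies that identical feature maps give a zero loss. This is a CPU illustration of the tensor operations, not the GPU training code.

```python
import numpy as np

def nn_loss(C_x, C_xb, n=5):
    """Eqs. 11-15: sum over p of min_{q in an n x n neighbourhood of p} ||C_x(p) - C_xb(q)||_1.

    C_x, C_xb: feature maps of shape (H, W, C) extracted from the generated
    and the target image (conv1_2 of VGG-19 in the paper)."""
    H, W, _ = C_x.shape
    r = (n - 1) // 2
    # Pad the target feature map with +inf so out-of-image shifts never win the min.
    padded = np.full((H + 2 * r, W + 2 * r, C_xb.shape[2]), np.inf, dtype=np.float64)
    padded[r:r + H, r:r + W] = C_xb
    best = np.full((H, W), np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            shifted = padded[r + i:r + i + H, r + j:r + j + W]   # C_xb^{(i,j)}
            S_ij = np.abs(C_x - shifted).sum(axis=2)             # Eqs. 11-13
            best = np.minimum(best, S_ij)                        # Eq. 14
    return best.sum()                                            # Eq. 15

# Toy check: identical maps give zero loss.
F = np.random.rand(8, 6, 4)
print(nn_loss(F, F, n=3))  # 0.0
```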
B. Exploiting the human-body symmetry
As mentioned in Sec. 3.1, we decompose the human body into 10 rigid sub-parts: the head, the torso and 8 limbs (left/right upper/lower arm, etc.). When one of the joints corresponding to one of these body-parts has not been detected by the HPE, the corresponding region and affine transformation are not computed and the region-mask is filled with 0. This can happen either because that region is not visible in the input image or because of false detections of the HPE.
However, when the missing region involves a limb (e.g., the right-upper arm) whose symmetric body part has been detected (e.g., the left-upper arm), we can "copy" information from the "twin" part. In more detail, suppose for instance that the region corresponding to the right-upper arm in the conditioning image is R^a_rua and this region is empty because of one of the above reasons. Moreover, suppose that R^b_rua is the corresponding (non-empty) region in x_b and that R^a_lua is the (non-empty) left-upper arm region in x_a. We simply set: R^a_rua := R^a_lua and we compute f_rua as usual, using the (now no longer empty) region R^a_rua together with R^b_rua.
C. Improving person Re-ID via data augmentation
The goal of this section is to show that the synthetic images generated with our proposed approach can be used to train discriminative methods. Specifically, we use Re-ID approaches whose task is to recognize a human person in different poses and viewpoints. The typical application of a Re-ID system is a video-surveillance scenario in which images of the same person, grabbed by cameras mounted in different locations, need to be matched to each other. Due to the low-resolution of the cameras, person re-identification is usually based on the colours and the texture of the clothes [START_REF] Zheng | Person reidentification: Past, present and future[END_REF]. This makes our method particularly suited to automatically populate a Re-ID training dataset by generating images of a given person with identical clothes but in different viewpoints/poses.
In our experiments we use Re-ID methods taken from [START_REF] Zheng | Person reidentification: Past, present and future[END_REF][START_REF] Zheng | A discriminatively learned CNN embedding for person reidentification[END_REF] and we refer the reader to those papers for details about the involved approaches. We employ the Market-1501 dataset that is designed for Re-ID method benchmarking. For each image of the Market-1501 training dataset (T ), we randomly select 10 target poses, generating 10 corresponding images using our approach. Note that: (1) Each generated image is labeled with the identity of the conditioning image, (2) The target pose can be extracted from an individual different from the person depicted in the conditioning image (this is different from the other experiments shown here and in the main paper). Adding the generated images to T we obtain an augmented training set A. In Tab. 4 we report the results obtained using either T (standard procedure) or A for training different Re-ID systems. The strong performance boost, orthogonal to different Re-ID methods, shows that our generative approach can be effectively used for synthesizing training samples. It also indirectly shows that the generated images are sufficiently realistic and different from the real images contained in T .
D. Comparison with previous work
In this section we directly compare our method with the results generated by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. The comparison is based on the pairs conditioning image-target pose used in [START_REF] Ma | Pose guided person image generation[END_REF], for which we show both the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF] and ours.
Figs. 6-7 show the results on the Market-1501 dataset. Comparing the images generated by our full-pipeline with the corresponding images generated by the full-pipeline presented in [START_REF] Ma | Pose guided person image generation[END_REF], most of the time our results are more realistic, sharper and with local details (e.g., the clothes texture or the face characteristics) more similar to the details of the conditioning image. For instance, in the first and the last row of Fig. 6 and in the last row of Fig. 7, our results show human-like images, while the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced images which can hardly be recognized as humans.
Figs. 8-9 show the results on the DeepFashion dataset. Also in this case, comparing our results with [START_REF] Ma | Pose guided person image generation[END_REF], most of the time ours look more realistic or closer to the details of the conditioning image. For instance, the second row of Fig. 8 shows a male face, while the approach proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced a female face (note that the DeepFashion dataset is strongly biased toward female subjects [START_REF] Ma | Pose guided person image generation[END_REF]). Most of the time, the clothes texture in our case is closer to that depicted in the conditioning image (e.g., see rows 1, 3, 4, 5 and 6 in Fig. 8 and rows 1 and 6 in Fig. 9). In row 5 of Fig. 9 the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF] produced an image with a pose closer to the target; however, it wrongly generated pants while our approach correctly generated the appearance of the legs according to the appearance contained in the conditioning image.
We believe that this qualitative comparison using the pairs selected in [START_REF] Ma | Pose guided person image generation[END_REF] shows that the combination of the proposed deformable skip-connections and the nearest-neighbour loss produced the desired effect of "capturing" and transferring the correct local details from the conditioning image to the generated image. Transferring local information while simultaneously taking into account the global pose deformation is a difficult task which is harder to implement using "standard" U-Net based generators such as those adopted in [START_REF] Ma | Pose guided person image generation[END_REF].
E. Other qualitative results
In this section we present other qualitative results. Fig. 10 and Fig. 11 show some images generated using the Market-1501 dataset and the DeepFashion dataset, respectively. The terminology is the same adopted in Sec. 6.2. Note that, for the sake of clarity, we used a skeleton-based visualization of P (•) but, as explained in the main paper, only the point-wise joint locations are used in our method to represent pose information (i.e., no joint-connectivity information is used).
Similarly to the results shown in Sec. 6.2, these images also show that, although the pose-related general structure is sufficiently well generated by all the different versions of our method, most of the time there is a gradual quality improvement in the detail synthesis from Baseline to DSC to PercLoss to Full.
Finally, Fig. 12 and Fig. 13 show some failure cases (badly generated images) of our method on the Market-1501 dataset and the DeepFashion dataset, respectively. Some common failure causes are:
• Errors of the HPE [START_REF] Cao | Realtime multiperson 2D pose estimation using part affinity fields[END_REF]. For instance, see rows 2, 3 and 4 of Fig. 12 or the wrong right-arm localization in row 2 of Fig. 13.
• Ambiguity of the pose representation. For instance, in row 3 of Fig. 13, the left elbow has been detected in x b although it is actually hidden behind the body. Since P (x b ) contains only 2D information (no depth or occlusion-related information), there is no way for the system to understand whether the elbow is behind or in front of the body. In this case our model chose to generate an arm considering that the arm is in front of the body (which corresponds to the most frequent situation in the training dataset).
• Rare poses. For instance, row 1 of Fig. 13 shows a girl in an unusual rear view with a sharp 90 degree profile face (x b ). The generator by mistake synthesized a neck where it should have "drawn" a shoulder. Note that rare poses are a difficult issue also for the method proposed in [START_REF] Ma | Pose guided person image generation[END_REF].
• Rare object appearance. For instance, the backpack in row 1 of Fig. 12 is light green, while most of the backpacks contained in the training images of the Market-1501 dataset are dark. Comparing this image with the one generated in the last row of Fig. 10 (where the backpack is black), we see that in Fig. 10 the colour of the shirt of the generated image is not blended with the backpack colour, while in Fig. 12 it is. We presume that the generator "understands" that a dark backpack is an object whose texture should not be transferred to the clothes of the generated image, while it is not able to generalize this knowledge to other backpacks.
• Warping problems. This is an issue related to our specific approach (the deformable skip connections). The texture on the shirt of the conditioning image in row 2 of Fig. 13 is warped in the generated image. We presume this is due to the fact that in this case the affine transformations need to largely warp the texture details of the narrow surface of the profile shirt (conditioning image) in order to fit the much wider area of the target frontal pose.
Figure 6: A qualitative comparison on the Market-1501 dataset between our approach and the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF]. Columns 1 and 2 show the conditioning and the target image, respectively, which are used as reference by both models. Columns 3 and 4 respectively show the images generated by our full-pipeline and by the full-pipeline presented in [START_REF] Ma | Pose guided person image generation[END_REF].
Figure 7: More qualitative comparisons on the Market-1501 dataset between our approach and the results obtained by Ma et al. [START_REF] Ma | Pose guided person image generation[END_REF].
Figure 1: (a) A typical "rigid" scene generation task, where the conditioning and the output image local structure is well aligned. (b) In a deformable-object generation task, the input and output are not spatially aligned.
Figure 4: Qualitative results on the Market-1501 dataset. Columns 1, 2 and 3 represent the input of our model. We plot P(·) as a skeleton for the sake of clarity, but actually no joint-connectivity relation is exploited in our approach. Column 4 corresponds to the ground truth. The last four columns show the output of our approach with respect to different baselines.
Figure 5: Qualitative results on the DeepFashion dataset with respect to different baselines. Some images have been cropped for visualization purposes.
Figure 8: A qualitative comparison on the DeepFashion dataset between our approach and the results obtained by Ma et al. [12].
Figure 9: More qualitative comparisons on the DeepFashion dataset between our approach and the results obtained by Ma et al. [12].
Figure 10: Other qualitative results on the Market-1501 dataset.
Figure 11: Other qualitative results on the DeepFashion dataset.
Figure 12: Examples of badly generated images on the Market-1501 dataset. See the text for more details.
Figure 13: Examples of badly generated images on the DeepFashion dataset.
Table 1: Comparison with the state of the art. (*) These values have been computed using the code and the network weights released by Ma et al. [12] in order to generate new images.

Model           Market-1501                                      DeepFashion
                SSIM    IS      mask-SSIM   mask-IS   DS         SSIM    IS      DS
Ma et al. [12]  0.253   3.460   0.792       3.435     0.39*      0.762   3.090   0.95*
Ours            0.290   3.185   0.805       3.502     0.72       0.756   3.439   0.96
Real-Data       1.00    3.86    1.00        3.36      0.74       1.000   3.898   0.98
Table 3: Quantitative ablation study on the Market-1501 and the DeepFashion dataset.

Model       Market-1501                                      DeepFashion
            SSIM    IS      mask-SSIM   mask-IS   DS         SSIM    IS
Baseline    0.256   3.188   0.784       3.580     0.595      0.754   3.351
DSC         0.272   3.442   0.796       3.666     0.629      0.754   3.352
PercLoss    0.276   3.342   0.788       3.519     0.603      0.744   3.271
Full        0.290   3.185   0.805       3.502     0.720      0.756   3.439
Real-Data   1.00    3.86    1.00        3.36      0.74       1.000   3.898
Table 4: Accuracy of Re-ID methods on the Market-1501 test set (%). For each method, the first two columns report Rank 1 and mAP obtained with the standard training set (T), the last two columns report Rank 1 and mAP obtained with the augmented training set (A).

IDE + Euclidean [START_REF] Zheng | Person reidentification: Past, present and future[END_REF]   73.9   48.8   78.5   55.9
IDE + XQDA [START_REF] Zheng | Person reidentification: Past, present and future[END_REF]   73.2   50.9   77.8   57.9
IDE + KISSME [START_REF] Zheng | Person reidentification: Past, present and future[END_REF]   75.1   51.5   79.5   58.1
Discriminative Embedding [START_REF] Zheng | A discriminatively learned CNN embedding for person reidentification[END_REF]   78.3   55.5   80.6   61.3
Acknowledgements
We want to thank the NVIDIA Corporation for the donation of the GPUs used in this project.
New Bernstein and Hoeffding type inequalities for regenerative Markov chains
Patrice Bertail*, Gabriela Ciołek**
Introduction
Exponential inequalities are a powerful tool to control the tail probability that a random variable X exceeds some prescribed value t. They have been extensively investigated by many researchers due to the fact that they are a crucial step in deriving many results in numerous fields such as statistics, learning theory, discrete mathematics, statistical mechanics, information theory or convex geometry. There is a vast literature that provides a comprehensive overview of the theory of exponential inequalities in the i.i.d.
setting. An interested reader is referred to [START_REF] Bai | Probability Inequalities[END_REF], [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF] or van der Vaart and Wellner (1996).
The wealth of possible applications of exponential inequalities has naturally led to the development of this theory in the dependent setting. In this paper we are particularly interested in results that establish exponential bounds for the tail probabilities of additive functionals of a regenerative Markov chain of the form
f(X_1) + ··· + f(X_n),
where (X n ) n∈N is a regenerative Markov chain. It is noteworthy that when deriving exponential inequalities for Markov chains (or any other process with some dependence structure) one can not expect to recover fully the classical results from the i.i.d. case. The goal is then to get some counterparts of the inequalities for i.i.d. random variables with some extra terms that appear in the bound as a consequence of a Markovian structure of the considered process.
In recent years such (non-)asymptotic results have been obtained for Markov chains via many approaches: martingale arguments (see [START_REF] Glynn | Hoeffding?s Inequality for Uniformly Ergodic Markov Chains[END_REF], where Hoeffding's inequality for uniformly ergodic Markov chains has been presented), coupling techniques (see [START_REF] Chazottes | Concentration inequalities for Markov processes via coupling[END_REF] and [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]). In fact, [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] have proved that Hoeffding's inequality holds when the Markov chain is geometrically ergodic and thus weakened the assumptions imposed on the Markov chain in [START_REF] Glynn | Hoeffding?s Inequality for Uniformly Ergodic Markov Chains[END_REF]. Winterberger (2016) has generalized the result of [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] by showing that Hoeffding's inequality is valid also for unbounded functions of geometrically ergodic Markov chains provided that the sum is correctly self-normalized. [START_REF] Paulin | Concentration inequalities for Markov chains by Marton couplings and spectral methods[END_REF] has presented a McDiarmid inequality for Markov chains using Marton coupling and spectral methods. [START_REF] Clémençon | Moment and probability inequalities for sums of bounded additive functionals of regular Markov chains via the Nummelin splitting technique[END_REF], [START_REF] Adamczak | A tail inequality for suprema of unbounded empirical processes with applications to Markov chains[END_REF], Bertail and Clémençon (2009), and [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF] have obtained exponential inequalities for ergodic Markov chains via regeneration techniques (see [START_REF] Smith | Regenerative stochastic processes[END_REF]).
Regeneration techniques for Markov chains are particularly appealing to us, mainly because they require far fewer restrictions on the ergodicity properties of the chain in comparison to alternative methods. In this paper we establish Hoeffding and Bernstein type inequalities for statistics of the form
(1/n) Σ_{i=1}^{n} f(X_i),
where (X_n)_{n∈N} is a regenerative Markov chain. We show that, under proper control of the size of the class of functions F (measured by its uniform entropy number), one can get non-asymptotic bounds on the suprema over the class F of such empirical processes for regenerative Markov chains. It is noteworthy that it is easy to generalize such results from the regenerative case to the Harris recurrent one, using the Nummelin extension of the initial chain (see [START_REF] Nummelin | A splitting technique for Harris recurrent chains[END_REF]).
The paper is organized as follows. In chapter 2 we introduce the notation and preliminary assumptions for Markov chains. We also recall some classical results from the i.i.d. setting which we generalize to the Markovian case. In chapter 3 we present the main results: Bernstein and Hoeffding type inequalities for regenerative Markov chains. The main ingredient for a crude exponential bound (with bad constants) is the Montgomery-Smith inequality, which allows us to reduce a problem involving a random number of blocks to one involving a fixed number of independent blocks. We then propose a refined inequality by first controlling the number of blocks in the inequality and then applying the Montgomery-Smith inequality again to a remainder term. Next, we generalize these results and obtain Hoeffding and Bernstein type bounds for suprema of empirical processes over a class of functions F. We also present the inequalities when the chain is Harris recurrent. Some technical parts of the proofs are postponed to the Appendix.
Preliminaries
We begin by introducing some notation and recall the key concepts of the Markov chains theory (see [START_REF] Meyn | Markov chains and stochastic stability[END_REF] for a detailed review and references). Let X = (X n ) n∈N be a positive recurrent, ψ-irreducible Markov chain on a countably generated state space (E, E) with transition probability Π and initial probability ν. We assume further that X is regenerative (see [START_REF] Smith | Regenerative stochastic processes[END_REF]), i.e. there exists a measurable set A, called an atom, such that ψ(A) > 0 and for all (x, y) ∈ A 2 we have Π(x, •) = Π(y, •). We define the sequence of regeneration times (τ A (j)) j≥1 which is the sequence of successive points of time when the chain visits A and forgets its past. Throughout the paper we write τ_A = τ_A(1). It is well-known that we can cut the sample path of the process into data segments of the form
B_j = (X_{1+τ_A(j)}, ..., X_{τ_A(j+1)}), j ≥ 1
according to consecutive visits of the chain to the regeneration set A. By the strong Markov property the blocks are i.i.d. random variables taking values in the torus ∪ ∞ k=1 E k . In the following, we assume that the mean inter-renewal time α = E A [τ A ] < ∞ and point out that in this case, the stationary distribution is a Pitman occupation measure given by
∀B ∈ E,   µ(B) = (1 / E_A[τ_A]) E_A[ Σ_{i=1}^{τ_A} I{X_i ∈ B} ],
where I_B is the indicator function of the event B. Assume that we observe (X_1, ..., X_n). We introduce a few more pieces of notation: throughout the paper we write l_n = Σ_{i=1}^{n} I{X_i ∈ A} for the total number of consecutive visits of the chain to the atom A; thus we observe l_n + 1 data blocks. We make the convention that B^{(n)}_{l_n} = ∅ when τ_A(l_n) = n. Furthermore, we denote by l(B_j) = τ_A(j+1) − τ_A(j), j ≥ 1, the length of the regeneration blocks. Let f : E → R be a µ-integrable function. In the following, we assume without loss of generality that µ(f) = E_µ[f(X_1)] = 0. We introduce the following notation for the partial sums over the regeneration cycles: f(B_j) = Σ_{i=1+τ_A(j)}^{τ_A(j+1)} f(X_i).
Then, the regenerative approach is based on the following decomposition of the sum
Σ_{i=1}^{n} f(X_i):
Σ_{i=1}^{n} f(X_i) = Σ_{i=1}^{l_n} f(B_i) + Δ_n,
where
Δ_n = Σ_{i=1}^{τ_A} f(X_i) + Σ_{i=τ_A(l_n−1)}^{n} f(X_i).
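As a small illustration of this decomposition (ours, not part of the paper), the snippet below simulates a toy positive recurrent chain with atom A = {0}, cuts the trajectory into regeneration blocks at the visits to A, and checks numerically that the block sums plus the two boundary segments recover the total sum of f along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy positive recurrent chain on {0, 1, 2} with atom A = {0}.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.5, 0.0, 0.5]])

def simulate(n, x0=1):
    X = np.empty(n, dtype=int)
    X[0] = x0
    for i in range(1, n):
        X[i] = rng.choice(3, p=P[X[i - 1]])
    return X

def split_blocks(X, atom=0):
    """Return the (0-based) visit times to the atom and the regeneration blocks
    B_j = (X_{1+tau_A(j)}, ..., X_{tau_A(j+1)})."""
    visits = np.flatnonzero(X == atom)
    blocks = [X[visits[j] + 1: visits[j + 1] + 1] for j in range(len(visits) - 1)]
    return visits, blocks

f = lambda x: (x == 2).astype(float)            # any mu-integrable functional
X = simulate(2000)
visits, blocks = split_blocks(X)                # assumes at least one visit to A
block_sums = np.array([f(B).sum() for B in blocks])
# Boundary terms: up to (and including) the first visit, and after the last visit.
delta = f(X[:visits[0] + 1]).sum() + f(X[visits[-1] + 1:]).sum()
print(np.isclose(f(X).sum(), block_sums.sum() + delta))   # True
```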
We denote by
σ²(f) = (1 / E_A(τ_A)) E_A( Σ_{i=1}^{τ_A} {f(X_i) − µ(f)} )²
the asymptotic variance.
For the completeness of the exposition, we now recall well-known classical results concerning some exponential inequalities for independent random variables. Firstly, we present the inequality for i.i.d. bounded random variables due to [START_REF] Hoeffding | Probability inequalities for sums of bounded random variables[END_REF].
Theorem 2.1 (Hoeffding's inequality) Let X_1, ..., X_n be independent random variables such that a_i ≤ X_i ≤ b_i (i = 1, ..., n). Then, for t > 0,
P( (1/n) Σ_{i=1}^{n} X_i − EX_1 ≥ t ) ≤ exp( − 2n²t² / Σ_{i=1}^{n} (b_i − a_i)² ).
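As a quick numerical sanity check (our own illustration, assuming the standard form of the bound with the n² factor), one can compare the empirical tail of the sample mean of bounded i.i.d. variables with the right-hand side of Theorem 2.1:

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps, t = 50, 100_000, 0.15
X = rng.uniform(0.0, 1.0, size=(reps, n))          # a_i = 0, b_i = 1, EX_1 = 0.5
emp_tail = np.mean(X.mean(axis=1) - 0.5 >= t)
hoeffding = np.exp(-2 * n**2 * t**2 / (n * 1.0))   # sum of (b_i - a_i)^2 equals n
print(emp_tail, hoeffding)                         # empirical tail is below the bound
```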
Below we recall the generalization of Hoeffding's inequality to unbounded functions. An interested reader can find different variations of the following inequality (depending on the conditions imposed on the random variables) in [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF]. Theorem 2.2 (Bernstein's inequality) Let X_1, ..., X_n be independent random variables with expectation EX_l for X_l, l ≥ 1 respectively, such that, for all integers p ≥ 2,
E|X_l|^p ≤ p! R^{p−2} σ_l² / 2   for all l ∈ {1, ..., n}.
Then, for all t > 0,
P( Σ_{l=1}^{n} (X_l − EX_l) ≥ t ) ≤ 2 exp( − t² / (2(σ² + Rt)) ),
where σ² = Σ_{l=1}^{n} σ_l². The purpose of this paper is to derive similar bounds for Markov chains using the nice regenerative structure of Markov chains.
Exponential inequalities for the tail probability for suprema of empirical processes for Markov chains
In the following, we denote f̄(x) = f(x) − µ(f). Moreover, we write respectively f̄(B_1) = Σ_{i=1}^{τ_A} f̄(X_i) and |f̄|(B_1) = Σ_{i=1}^{τ_A} |f̄|(X_i). We will work under the following conditions.
A1. (Bernstein's block moment condition) There exists a positive constant M_1 such that for any p ≥ 2 and for every f ∈ F
E_A |f̄(B_1)|^p ≤ (1/2) p! σ²(f) M_1^{p−2}.   (1)
A2. (Non-regenerative block exponential moment assumption) There exists λ_0 > 0 such that for every f ∈ F we have E_ν[ exp( λ_0 Σ_{i=1}^{τ_A} f̄(X_i) ) ] < ∞.
A3. (Exponential block moment assumption) There exists λ_1 > 0 such that for every f ∈ F we have E_A[ exp( λ_1 f̄(B_1) ) ] < ∞.
Remark 3.1 It is noteworthy that assumption A1 implies the existence of an exponential moment of f̄(B_1):
E_A exp(λ f̄(B_1)) ≤ exp( (λ²/2) / (1 − M_1|λ|) )   for all λ < 1/M_1.
In this section, we formulate two Bernstein type inequalities for Markov chains. One is established via a simple use of the Montgomery-Smith inequality (see Montgomery-Smith (1993) and de la Peña and Giné (1999)), which results in much larger constants (compared to the i.i.d. setting) in the dominating parts of the bound. The second Bernstein bound contains small constants in the main counterparts of the bound, however at the cost of an extra term in the bound. Before we state the theorems, we give a short discussion of already existing results on exponential inequalities for Markov chains.
Remarks 3.1 Since there are plenty of results concerning exponential inequalities for Markov chains under many different assumptions, it may be difficult to compare their strength (measured by the assumptions imposed on the chain) and their applicability. Thus, before we present the proofs of Theorem 3.2 and Theorem 3.3, we make a short comparison of our results with already existing inequalities for Markov chains. We also strongly recommend the exhaustive overview of recent results of this type in [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF].
The bounds obtained in this paper are related to the Fuk and Nagaev sharp bound inequality obtained in Bertail and Clémençon (2010), which is also based on the regeneration properties and the decomposition of the chain. However, our techniques of proof differ and allow us to obtain a better rate in the main subgaussian part of the inequality under the hypotheses. The proofs of the inequalities are simplified and do not require the partitioning arguments used in [START_REF] Bertail | Sharp bounds for the tail of functionals of Markov chains[END_REF].
It is noteworthy that we do not impose the condition of stationarity of the considered Markov chain as in [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] and [START_REF] Chazottes | Concentration inequalities for Markov processes via coupling[END_REF], or any restrictions on the starting point of the chain as in [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]. Moreover, [START_REF] Adamczak | Exponential concentration inequalities for additive functionals of Markov chains[END_REF] use the assumption of strong aperiodicity for Harris Markov chains; we remark that this condition can be relaxed, and one can only assume that the Harris Markov chain is aperiodic (see Remark 3.9).
Many results concerning exponential inequalities for Markov chains are established for bounded functions f (see for instance [START_REF] Adamczak | A tail inequality for suprema of unbounded empirical processes with applications to Markov chains[END_REF], [START_REF] Clémençon | Moment and probability inequalities for sums of bounded additive functionals of regular Markov chains via the Nummelin splitting technique[END_REF], [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF]). Our inequalities work for unbounded functions satisfying Bernstein's block moment condition. Moreover, all terms involved in our inequalities are given by explicit formulas, so the results can be directly used in practical considerations. Note also that all the constants are given in a simple, easy-to-interpret form and do not depend on other underlying parameters.
Winterberger (2016) has established exponential inequalities in the unbounded case, extending the result of [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] to the case when the chain can start from any x ∈ E; however, the constant involved in the bound of Theorem 2.1 there (obtained for bounded and unbounded functions) is very large (see also [START_REF] Boucheron | Inequalities. A Nonasymptotic Theory of Independence[END_REF]). As mentioned in the paper of Adamczak, there are many exponential inequalities obtained under spectral gap conditions (see for instance Gao and Guillin, [START_REF] Lezaud | Chernoff and Berry-Esseen inequalities for Markov processes[END_REF]); spectral gap inequalities allow one to recover the Bernstein type inequality at its full strength. We also mention that the geometric ergodicity assumption does not ensure, in the non-reversible case, that the considered Markov chains admit a spectral gap (see Theorem 1.4 in [START_REF] Kontoyiannis | Geometric ergodicity and the spectral gap of non-reversible Markov chains[END_REF]).
We formulate a Bernstein type inequality for Markov chains below.
Theorem 3.2 Assume that $X=(X_n)_{n\in\mathbb{N}}$ is a regenerative positive recurrent Markov chain. Then, under assumptions A1-A3, we have
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^n \big(f(X_i)-\mu(f)\big)\ge x\Big]\le 18\exp\Big[-\frac{x^2}{2\times 90^2\,\big(n\sigma^2(f)+M_1x/90\big)}\Big]+C_1\exp\Big[-\frac{\lambda_0x}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1x}{3}\Big], \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_A}\bar f(X_i)\Big)\Big],\qquad C_2=\mathbb{E}_A\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]. \]
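Since every quantity in the bound of Theorem 3.2 is explicit, it can be evaluated directly once the problem-dependent constants are available. The following helper is our own illustration; the numerical values below are arbitrary placeholders, not derived from any particular chain.

```python
# Illustrative helper: evaluating the right-hand side of the Theorem 3.2 bound for given constants.
import math

def bernstein_markov_bound(x, n, sigma2, M1, lam0, lam1, C1, C2):
    """Right-hand side of the Bernstein-type bound of Theorem 3.2."""
    main = 18 * math.exp(-x**2 / (2 * 90**2 * (n * sigma2 + M1 * x / 90)))
    first_block = C1 * math.exp(-lam0 * x / 3)   # contribution of the first non-regenerative block
    last_block = C2 * math.exp(-lam1 * x / 3)    # contribution of the last (incomplete) block
    return main + first_block + last_block

# purely illustrative constants:
print(bernstein_markov_bound(x=50_000.0, n=10_000, sigma2=1.0, M1=2.0,
                             lam0=0.5, lam1=0.5, C1=2.0, C2=2.0))
```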
Remark 3.2 Observe that we do not impose a moment condition of the form $\mathbb{E}_A[\tau_A]^p<\infty$ for $p\ge 2$. At first glance this might be surprising, since one usually assumes $\mathbb{E}_A[\tau_A]^2<\infty$ when proving the central limit theorem for regenerative Markov chains. A simple analysis of the proof of the central limit theorem in the Markovian case (see for instance [START_REF] Meyn | Markov chains and stochastic stability[END_REF]) reveals that it is sufficient to require only $\mathbb{E}_A[\tau_A]<\infty$ when we consider the centered function $\bar f$ instead of $f$.
Proof. Firstly, we consider the sum of random variables of the following form:
\[ Z_n(\bar f)=\sum_{j=1}^{l_n}\bar f(B_j). \tag{2} \]
Furthermore, we have that $S_n(\bar f)=Z_n(\bar f)+\Delta_n(\bar f)$. We recall that $l_n$ is random and correlated with the blocks themselves. In order to apply Bernstein's inequality for i.i.d. random variables we apply the Montgomery-Smith inequality (see [START_REF] Montgomery-Smith | Comparison of sums of independent identically distributed random vectors[END_REF]). It follows easily that
\[ \mathbb{P}_A\Big[\sum_{i=1}^{l_n}\bar f(B_i)\ge x/3\Big]\le \mathbb{P}_A\Big[\max_{1\le k\le n}\sum_{i=1}^{k}\bar f(B_i)\ge x/3\Big]\le 9\,\mathbb{P}_A\Big[\sum_{i=1}^{n}\bar f(B_i)\ge x/90\Big] \tag{3} \]
and under Bernstein's condition A1 we obtain
\[ \mathbb{P}_A\Big[\sum_{i=1}^{n}\bar f(B_i)\ge x/90\Big]\le 2\exp\Big[-\frac{x^2}{2\times 90^2\,\big(M_1x/90+n\sigma^2(f)\big)}\Big]. \tag{4} \]
Next, we want to control the remainder term
\[ \Delta_n=\sum_{i=1}^{\tau_A}\bar f(X_i)+\sum_{i=\tau_A(l_n)+1}^{n}\bar f(X_i). \tag{5} \]
The control of the first term is guaranteed by Markov's inequality, i.e.
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^{\tau_A}\bar f(X_i)\ge \frac{x}{3}\Big]\le \mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_A}\bar f(X_i)\Big)\Big]\exp\Big[-\frac{\lambda_0x}{3}\Big]. \]
We deal similarly with the last term of $\Delta_n$. We complement the data after time $\tau_A(l_n)$ by the observations up to the next regeneration time $\tau_A(l_n+1)$ and obtain
\[ \mathbb{P}_\nu\Big[\sum_{i=\tau_A(l_n)+1}^{n}\bar f(X_i)\ge \frac{x}{3}\Big]\le \mathbb{P}_\nu\Big[\sum_{i=\tau_A(l_n)+1}^{\tau_A(l_n+1)}\bar f(X_i)\ge \frac{x}{3}\Big]\le \mathbb{E}_A\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]\exp\Big[-\frac{\lambda_1x}{3}\Big]. \]
We note that although the Montgomery-Smith inequality allows us to obtain Bernstein's bound for Markov chains easily, the resulting constants are rather large. Interestingly, under an additional assumption on $\mathbb{E}_A[\tau_A]^p$ we can obtain a Bernstein type inequality for regenerative Markov chains with much smaller constants in the dominating parts of the bound.
A4. (Block length moment assumption) There exists a positive constant $M_2$ such that for any $p\ge 2$,
\[ \mathbb{E}_A[\tau_A]^p\le p!\,M_2^{p-2}\,\mathbb{E}_A[\tau_A^2] \quad\text{and}\quad \mathbb{E}_\nu[\tau_A]^p\le p!\,M_2^{p-2}\,\mathbb{E}_\nu[\tau_A^2]. \]
Before we formulate Bernstein's inequality for regenerative Markov chains, we introduce a lemma which provides a bound for the tail probability of $\sqrt{n}\big(\frac{l_n}{n}-\frac{1}{\alpha}\big)$; it is crucial for the proof of Bernstein's bound, but may also be of independent interest.
Lemma 3.1 Suppose that condition A4 holds. Then $\mathbb{P}_\nu\big(n^{1/2}\big(\frac{l_n}{n}-\frac{1}{\alpha}\big)\ge x\big)$ is bounded by
\[ \exp\Bigg(-\frac{1}{2}\,\frac{(\alpha x\sqrt{n}-2\alpha)^2}{\big(\mathbb{E}_\nu\tau_A^2+(\frac{n}{\alpha}+x\sqrt{n})\mathbb{E}_A\tau_A^2\big)+(\alpha x\sqrt{n}+\mathbb{E}_\nu\tau_A)\,M_2\,\big(\mathbb{E}_\nu\tau_A^2+(\frac{n}{\alpha}+x\sqrt{n})\mathbb{E}_A\tau_A^2\big)^{1/2}}\Bigg). \]
Proof of Lemma 3.1 is postponed to the Appendix.
Remark 3.3 Note that when $n\to\infty$, the dominating part in the exponential term is of order
\[ \frac{1}{2}\,\frac{\alpha^2x^2}{\mathbb{E}_A\tau_A^2/\alpha+\alpha^{1/2}xM_2(\mathbb{E}_A\tau_A^2)^{1/2}}+O(n^{-1/2})=\frac{1}{2}\,\frac{(\alpha x)^2}{(\mathbb{E}_A\tau_A^2/\alpha)\big(1+\alpha xM_2(\mathbb{E}_A\tau_A^2/\alpha)^{-1/2}\big)}+O(n^{-1/2}); \]
thus we have a Gaussian tail with the right variance for moderate $x$ and an exponential tail for large $x$, and, in consequence, the constants are asymptotically optimal.
Now we are ready to state an alternative Bernstein type inequality for regenerative Markov chains, in which an additional condition on the length of the blocks yields much better constants.
Theorem 3.3 Assume that $X=(X_n)_{n\in\mathbb{N}}$ is a regenerative positive recurrent Markov chain. Then, under assumptions A1-A4, we have for any $a>0$, $x>0$ and $N>0$
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le 2\exp\Big[-\frac{x^2}{2\times 3^2(1+a)^2\big(\lfloor\frac{n}{\alpha}\rfloor\sigma^2(f)+\frac{M_1}{3}\frac{x}{1+a}\big)}\Big]+18\exp\Big[-\frac{x^2}{2\times 90^2(1+a)^2\big(N\sqrt{n}\,\sigma^2(f)+\frac{M_1}{90}\frac{x}{1+a}\big)}\Big]+\mathbb{P}_\nu\Big(n^{1/2}\Big[\frac{l_n}{n}-\frac{1}{\alpha}\Big]>N\Big)+C_1\exp\Big[-\frac{\lambda_0x}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1x}{3}\Big], \tag{6} \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_A}\bar f(X_i)\Big)\Big],\qquad C_2=\mathbb{E}_A\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]. \]
Remark 3.4 If we choose $N=\log(n)$, then by Lemma 3.1 we see that $\mathbb{P}_\nu\big(n^{1/2}\big[\frac{l_n}{n}-\frac{1}{\alpha}\big]\ge\log(n)\big)=o\big(\frac{1}{n}\big)$, and in that case the second term in (6) remains small uniformly in $x$.
Proof. We start from the obvious observation that
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le \mathbb{P}_A\Big[\sum_{i=1}^{l_n}\bar f(B_i)\ge x/3\Big]+\mathbb{P}_\nu\Big[\sum_{i=1}^{\tau_A}\bar f(X_i)\ge x/3\Big]+\mathbb{P}_A\Big[\sum_{i=\tau_A(l_n)+1}^{n}\bar f(X_i)\ge x/3\Big]. \tag{7} \]
Remark 3.5 Instead of dividing $x$ by 3 in (7), one can use a different splitting to improve the final constants a little.
The bounds for the first and last non-regenerative blocks are handled in the same way as in Theorem 3.2. Next, we observe that, for any $a>0$, we have
\[ \mathbb{P}_A\Big[\sum_{i=1}^{l_n}\bar f(B_i)\ge x/3\Big]\le \mathbb{P}_A\Big[\sum_{i=1}^{\lfloor n/\alpha\rfloor}\bar f(B_i)\ge \frac{x}{3(1+a)}\Big]+\mathbb{P}_A\Big[\sum_{i=l_{n1}}^{l_{n2}}\bar f(B_i)\ge \frac{x}{3(1+a)}\Big], \tag{8} \]
where $l_{n1}=\min(\lfloor n/\alpha\rfloor,l_n)$ and $l_{n2}=\max(\lfloor n/\alpha\rfloor,l_n)$. We observe that $\sum_{i=1}^{\lfloor n/\alpha\rfloor}\bar f(B_i)$ is a sum of independent, identically distributed and sub-exponential random variables. Thus, we can directly apply Bernstein's bound and obtain
\[ \mathbb{P}_A\Big[\sum_{i=1}^{\lfloor n/\alpha\rfloor}\bar f(B_i)\ge \frac{x}{3(1+a)}\Big]\le 2\exp\Big[-\frac{x^2}{2\times 3^2(1+a)^2\big(\lfloor\frac{n}{\alpha}\rfloor\sigma^2(f)+M_1x/3(1+a)\big)}\Big]. \tag{9} \]
The control of $\sum_{i=l_{n1}}^{l_{n2}}\bar f(B_i)$ is slightly more challenging, due to the fact that $l_n$ is random and correlated with the blocks themselves. In the following, we make use of the Montgomery-Smith inequality. Notice, however, that since we expect the number of terms in this sum to be at most of order $\sqrt{n}$, this term is much smaller than the leading term (9) and is asymptotically negligible. We have
\[ \mathbb{P}_A\Big[\sum_{i=l_{n1}}^{l_{n2}}\bar f(B_i)\ge \frac{x}{3(1+a)}\Big]\le \mathbb{P}_A\Big[\sum_{i=l_{n1}}^{l_{n2}}\bar f(B_i)\ge \frac{x}{3(1+a)},\ \sqrt{n}\Big[\frac{l_n}{n}-\frac{1}{\alpha}\Big]\le N\Big]+\mathbb{P}_\nu\Big(\sqrt{n}\Big[\frac{l_n}{n}-\frac{1}{\alpha}\Big]>N\Big)=A+B. \tag{10} \]
Firstly, we bound term $A$ in (10) using the Montgomery-Smith inequality and the fact that if $\sqrt{n}\big[\frac{l_n}{n}-\frac{1}{\alpha}\big]\le N$, then $l_{n2}-l_{n1}\le \sqrt{n}N$:
\[ \mathbb{P}_A\Big[\sum_{i=l_{n1}}^{l_{n2}}\bar f(B_i)\ge \frac{x}{3(1+a)},\ \sqrt{n}\Big[\frac{l_n}{n}-\frac{1}{\alpha}\Big]\le N\Big]\le \mathbb{P}_A\Big(\max_{1\le k\le N\sqrt{n}}\sum_{i=1}^{k}\bar f(B_i)\ge \frac{x}{3(1+a)}\Big)\le 9\,\mathbb{P}_A\Big(\sum_{i=1}^{N\sqrt{n}}\bar f(B_i)\ge \frac{x}{90(1+a)}\Big)\le 18\exp\Big[-\frac{x^2}{2\times 90^2(1+a)^2\big(N\sqrt{n}\,\sigma^2(f)+\frac{M_1}{90}\frac{x}{1+a}\big)}\Big]. \]
Lemma 3.1 allows us to control term $B$.
Maximal inequalities under uniform entropy
In empirical processes theory for processes indexed by class of functions, it is important to assess the complexity of considered classes. The information about entropy of F helps us to inspect how large our class is. Generally, control of entropy of certain classes may be crucial step when investigating asymptotic behaviour of empirical processes indexed by a class of functions. In our setting, we will measure the size of class of functions F via covering numbers and uniform entropy number. The following definition is due to [START_REF] Van Der Vaart | Weak Convergence and Empirical Processes With Applications to Statistics[END_REF].
Definition 3.4 (Covering and uniform entropy number) The covering number $N_p(\epsilon,Q,\mathcal{F})$ is the minimal number of balls $\{g:\|g-f\|_{L_p(Q)}<\epsilon\}$ of radius $\epsilon$ needed to cover the set $\mathcal{F}$. The entropy (without bracketing) is the logarithm of the covering number. We define the uniform entropy number as $N_p(\epsilon,\mathcal{F})=\sup_Q N_p(\epsilon,Q,\mathcal{F})$, where the supremum is taken over all discrete probability measures $Q$.
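As an illustration (ours, not part of the original exposition), an upper bound on the empirical covering number $N_2(\epsilon,P_n,\mathcal{F})$ of a finite class can be computed by a greedy cover; the class of indicator functions and the sample below are arbitrary toy choices.

```python
# Illustrative sketch: a greedy upper bound on the empirical L2 covering number of a finite class.
import numpy as np

def greedy_covering_number(values, eps):
    """values[j, i] = f_j(x_i); returns the size of a greedy eps-cover in L2(P_n)."""
    remaining = list(range(values.shape[0]))
    centers = []
    while remaining:
        c = remaining[0]                 # pick an uncovered function as a new center
        centers.append(c)
        dist = np.sqrt(np.mean((values[remaining] - values[c]) ** 2, axis=1))
        remaining = [j for j, d in zip(remaining, dist) if d >= eps]   # keep only uncovered functions
    return len(centers)                  # every function is within eps of some center

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=200)                          # the sample defining P_n
thresholds = np.linspace(0, 1, 101)
F = (x[None, :] <= thresholds[:, None]).astype(float)    # indicators f_t = 1{ . <= t }
print(greedy_covering_number(F, eps=0.2))
```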
In the following we state assumptions on the size of the considered class of functions $\mathcal{F}$. Rather than imposing the assumptions A2 and A3, we impose the corresponding conditions on the first and last non-regenerative blocks for an envelope $F$ of $\mathcal{F}$.
A2$'$. (Non-regenerative block exponential moment assumption) There exists $\lambda_0>0$ such that $\mathbb{E}_\nu\big[\exp\big(2\lambda_0\sum_{i=1}^{\tau_A}F(X_i)\big)\big]<\infty$.
A3$'$. (Exponential block moment assumption) There exists $\lambda_1>0$ such that $\mathbb{E}_A\big[\exp\big(2\lambda_1F(B_1)\big)\big]<\infty$.
A5. (Uniform entropy number condition) $N_2(\epsilon,\mathcal{F})<\infty$.
Before we formulate the Bernstein deviation type inequality for unbounded classes of functions, we introduce one more piece of notation: let $\sigma_m^2=\max_{f\in\mathcal{F}}\sigma^2(f)>\eta>0$.
Theorem 3.5 Assume that $X=(X_n)_{n\in\mathbb{N}}$ is a regenerative positive recurrent Markov chain. Then, under assumptions A1, A2$'$, A3$'$ and A5, for any $0<\epsilon<x$ and for $n$ large enough we have
\[ \mathbb{P}_\nu\Big[\sup_{f\in\mathcal{F}}\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le N_2(\epsilon,\mathcal{F})\Big\{18\exp\Big[-\frac{(x-2\epsilon)^2}{2\times 90^2\big(n\sigma_m^2+M_1(x-2\epsilon)/90\big)}\Big]+C_1\exp\Big[-\frac{\lambda_0(x-2\epsilon)}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1(x-2\epsilon)}{3}\Big]\Big\}, \tag{11} \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(2\lambda_0\sum_{i=1}^{\tau_A}F(X_i)\Big)\Big],\qquad C_2=\mathbb{E}_A\big[\exp\big(2\lambda_1|F|(B_1)\big)\big], \]
and $F$ is an envelope function for $\mathcal{F}$.
Before we proceed with the proof of Theorem 3.5, we indicate that under additional assumptions it is possible to obtain a Bernstein type concentration inequality.
Remark 3.6 Notice that our bound is a deviation bound in that it holds only for $n$ large enough. This is due to the control of the covering functions (under $P_n$) by a control under $P$ (see Remark 3.8 in the proof). However, by making additional assumptions on the regularity of the class of functions and by choosing an adequate norm, it is possible to obtain by the same arguments an exponential inequality valid for any $n$, as in Zou, Zhang and Xu (2009) or Cucker and Smale (2002); see also the examples of such classes of functions used in statistical learning in the latter. Indeed, if $\mathcal{F}$ belongs to a ball of a Hölder space $C_P(E')$ on a compact set $E'$ of a Euclidean space endowed with the norm
\[ \|f\|_{C_P(E')}=\sup_{x\in E'}|f(x)|+\sup_{x_1\in E',\,x_2\in E'}\Big(\frac{|f(x_1)-f(x_2)|}{d(x_1,x_2)^p}\Big), \]
then we have $M=\sup_{x\in\mathcal{X}}F(x)<\infty$ as well as $L=\sup_{f,g\in\mathcal{F},f\neq g}\sup_z\frac{|f(z)-g(z)|}{\|f-g\|_{C_P(E')}}<\infty$, so that we can directly control the empirical sum by the obvious inequality
\[ \sup_{f,g\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\big|f(X_i)-g(X_i)\big|\le L\,\|f-g\|_{C_P(E')}. \]
It follows that if we replace the notion of uniform covering number $N_2(\epsilon,\mathcal{F})$ with respect to the norm $\|\cdot\|_{L_2(Q)}$ by the covering numbers $N_{C_p}(\epsilon,\mathcal{F})$ with respect to $\|\cdot\|_{C_P(E')}$, then the results hold true for any $n$, provided that $N_2(\epsilon,\mathcal{F})$ is replaced by $N_{C_p}(\frac{\epsilon}{L},\mathcal{F})$ in the inequality.
Proof of Theorem 3.5.
We choose functions $g_1,g_2,\dots,g_M$, where $M=N_2(\epsilon,\mathcal{F})$, such that $\min_j Q|f-\mu(f)-g_j+\mu(g_j)|\le\epsilon$ for each $f\in\mathcal{F}$, where $Q$ is any discrete probability measure. We also assume that $g_1,g_2,\dots,g_M$ belong to $\mathcal{F}$ and satisfy conditions A1, A2$'$, A3$'$. We write $f^*$ for the $g_j$ at which the minimum is achieved. Our further reasoning is based on the following remarks.
Remark 3.7 Let $f,g$ be functions with expectations $\mu(f),\mu(g)$ respectively. Then
\[ \|f-\mu(f)-g+\mu(g)\|_{L_2}\le \|f-g\|_{L_2}+\|\mu(f)-\mu(g)\|_{L_2}\le 2\,\|f-g\|_{L_2}. \]
In our reasoning, we will also make use of the following remark.
Remark 3.8 Assume that the functions $f,g\in\mathcal{F}$ and $\|f-g\|_{2,P_n}<\epsilon$. Then, for $n$ large enough (depending only on $\epsilon$),
\[ P(f-g)^2\le P_n(f-g)^2+\big|(P_n-P)(f-g)^2\big|\le 2\epsilon^2, \]
since uniformly $|(P_n-P)(f-g)^2|\le\epsilon^2$ by the uniform strong law of large numbers for regenerative Markov chains (see Theorem 3.6 in [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF]). As a consequence, any $\epsilon$-net in $L_2(P_n)$ is also a $\sqrt{2}\epsilon$-net in $L_2(P)$ (see also [START_REF] Kosorok | Introduction to empirical processes and semiparametric inference[END_REF], page 151, for some refinements in the i.i.d. case). Moreover, note that there exists $N$ such that for all $n\ge N$, $\|g_i-g_j\|_{2,P}\le\epsilon$ and we have
\[ \|g_i-g_j\|_{2,P_n}\le \big(\|g_i-g_j\|_{2,P_n}-\|g_i-g_j\|_{2,P}\big)+\|g_i-g_j\|_{2,P}\le 2\epsilon. \]
Next, by the definition of the uniform covering numbers and by Remark 3.8, we obtain
\[ \mathbb{P}_\nu\Big[\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le \mathbb{P}_\nu\Big\{\sup_{f\in\mathcal{F}}\Big[\frac{1}{n}\sum_{i=1}^n\big|f(X_i)-\mu(f)-f^*(X_i)+\mu(f^*)\big|+\frac{1}{n}\Big|\sum_{i=1}^n\big(f^*(X_i)-\mu(f^*)\big)\Big|\Big]\ge x\Big\}\le \mathbb{P}_\nu\Big[\max_{j\in\{1,\dots,N_2(\epsilon,\mathcal{F})\}}\frac{1}{n}\sum_{i=1}^n\big(g_j(X_i)-\mu(g_j)\big)\ge x-2\epsilon\Big]\le N_2(\epsilon,\mathcal{F})\max_{j\in\{1,\dots,N_2(\epsilon,\mathcal{F})\}}\mathbb{P}_\nu\Big\{\frac{1}{n}\sum_{i=1}^n\big(g_j(X_i)-\mu(g_j)\big)\ge x-2\epsilon\Big\}. \]
We set the notation $\bar g_j=g_j-\mu(g_j)$.
In what follows, the reasoning is analogous to that in the proof of Theorem 3.2. Instead of taking an arbitrary $f\in\mathcal{F}$, we work with the functions $g_j\in\mathcal{F}$. Thus, we now consider the processes
\[ Z_n(\bar g_j)=\sum_{i=1}^{l_n}\bar g_j(B_i) \tag{12} \]
and $S_n(\bar g_j)=Z_n(\bar g_j)+\Delta_n(\bar g_j)$. Under the assumptions A1, A2$'$ and A3$'$ for $g_j$, we get for $Z_n(\bar g_j)$ the analogue of the Bernstein bound from Theorem 3.2, namely
\[ \mathbb{P}_A\Big[\sum_{i=1}^{l_n}\bar g_j(B_i)\ge x-2\epsilon\Big]\le 18\exp\Big[-\frac{(x-2\epsilon)^2}{2\times 90^2\big(n\sigma^2(g_j)+M_1(x-2\epsilon)/90\big)}\Big]. \tag{13} \]
We bound the remainder term $\Delta_n(\bar g_j)$ by the same reasoning as in Theorem 3.2. Thus,
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^{\tau_A}\bar g_j(X_i)\ge \frac{x-2\epsilon}{3}\Big]\le C_1\exp\Big[-\frac{\lambda_0(x-2\epsilon)}{3}\Big] \tag{14} \]
and
\[ \mathbb{P}_A\Big[\sum_{i=\tau_A(l_n)+1}^{n}\bar g_j(X_i)\ge \frac{x-2\epsilon}{3}\Big]\le C_2\exp\Big[-\frac{\lambda_1(x-2\epsilon)}{3}\Big], \tag{15} \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_A}\bar g_j(X_i)\Big)\Big],\qquad C_2=\mathbb{E}_A\big[\exp\big(\lambda_1\bar g_j(B_1)\big)\big]. \]
Finally, notice that
\[ \mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_A}\bar g_j(X_i)\Big)\Big]\le \mathbb{E}_\nu\Big[\exp\Big(2\lambda_0\sum_{i=1}^{\tau_A}F(X_i)\Big)\Big]<\infty \quad\text{and}\quad \mathbb{E}_A\big[\exp\big(\lambda_1\bar g_j(B_1)\big)\big]\le \mathbb{E}_A\big[\exp\big(2\lambda_1|F|(B_1)\big)\big]<\infty,
and we insert these bounds into (14) and (15), which yields the proof. Below we formulate a maximal version of Theorem 3.3.
Theorem 3.6 Assume that $X=(X_n)_{n\in\mathbb{N}}$ is a regenerative positive recurrent Markov chain. Then, under assumptions A1, A2$'$, A3$'$, A4-A5, for any $a>0$, any $0<\epsilon<x$, any $N>0$ and for $n$ large enough we have
\[ \mathbb{P}_\nu\Big[\sup_{f\in\mathcal{F}}\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le N_2(\epsilon,\mathcal{F})\Big\{2\exp\Big[-\frac{(x-2\epsilon)^2}{2\times 3^2(1+a)^2\big(\lfloor\frac{n}{\alpha}\rfloor\sigma^2(f)+\frac{M_1}{3}\frac{x-2\epsilon}{1+a}\big)}\Big]+18\exp\Big[-\frac{(x-2\epsilon)^2}{2\times 90^2(1+a)^2\big(N\sqrt{n}\,\sigma^2(f)+\frac{M_1}{90}\frac{x-2\epsilon}{1+a}\big)}\Big]+\mathbb{P}_\nu\Big(n^{1/2}\Big[\frac{l_n}{n}-\frac{1}{\alpha}\Big]>N\Big)+C_1\exp\Big[-\frac{\lambda_0(x-2\epsilon)}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1(x-2\epsilon)}{3}\Big]\Big\}, \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(2\lambda_0\sum_{i=1}^{\tau_A}F(X_i)\Big)\Big],\qquad C_2=\mathbb{E}_A\big[\exp\big(2\lambda_1|F|(B_1)\big)\big]. \]
Proof. The proof is a combination of the proofs of Theorem 3.3 and Theorem 3.5. We deal with the supremum over F the same way as in Theorem 3.5. Then we apply Theorem 3.3.
We can obtain an even sharper upper bound when the class $\mathcal{F}$ is uniformly bounded. In the following, we show that it is then possible to obtain a Hoeffding type inequality and a stronger control of the moments of the sum $S_n(f)$, which is a natural consequence of the uniform boundedness assumption imposed on $\mathcal{F}$.
A6. The class of functions F is uniformly bounded, i.e. there exists a constant D such that ∀f ∈ F |f | < D.
Theorem 3.7 Assume that $X=(X_n)_{n\in\mathbb{N}}$ is a regenerative positive recurrent Markov chain. Then, under assumptions A1, A2$'$, A3$'$, A5-A6, for any $0<\epsilon<x$ and for $n$ large enough we have
\[ \mathbb{P}_\nu\Big[\sup_{f\in\mathcal{F}}\sum_{i=1}^n\frac{f(X_i)-\mu(f)}{\sigma(f)}\ge x\Big]\le N_2(\epsilon,\mathcal{F})\Big\{18\exp\Big[-\frac{(x-2\epsilon)^2}{2n\times 90^2D^2}\Big]+C_1\exp\Big[-\frac{\lambda_0(x-2\epsilon)}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1(x-2\epsilon)}{3}\Big]\Big\}, \tag{16} \]
where
\[ C_1=\mathbb{E}_\nu\exp\big(2\lambda_0\tau_AD\big),\qquad C_2=\mathbb{E}_A\exp\big(2\lambda_1 l(B_1)D\big). \]
Proof. The proof bears resemblance to the proof of Theorem 3.5, with a few natural modifications which are a consequence of the uniform boundedness of $\mathcal{F}$.
Under the additional condition A4 we can easily obtain a bound with smaller constants, following the same route as in Theorem 3.6.
General Harris recurrent case
It is noteworthy that Theorems 3.2, 3.5, 3.7 are also valid in Harris recurrent case under slightly modified assumptions. It is well known that it is possible to retrieve all regeneration techniques also in Harris case via the Nummelin splitting technique which allows to extend the probabilistic structure of any chain in order to artificially construct a regeneration set. The Nummelin splitting technique relies heavily on the notion of small set. For the clarity of exposition we recall the definition.
Definition 3.8 We say that a set $S\in E$ is small if there exist a parameter $\delta>0$, a positive probability measure $\Phi$ supported by $S$ and an integer $m\in\mathbb{N}^*$ such that
\[ \forall x\in S,\ B\in E\qquad \Pi^m(x,B)\ge \delta\,\Phi(B), \tag{17} \]
where $\Pi^m$ denotes the $m$-th iterate of the transition probability $\Pi$.
We expand the sample space in order to define a sequence $(Y_n)_{n\in\mathbb{N}}$ of independent r.v.'s with parameter $\delta$. We define a joint distribution $\mathbb{P}_{\nu,M}$ of $X_M=(X_n,Y_n)_{n\in\mathbb{N}}$. The construction relies on the mixture representation of $\Pi$ on $S$, namely $\Pi(x,B)=\delta\Phi(B)+(1-\delta)\frac{\Pi(x,B)-\delta\Phi(B)}{1-\delta}$. It can be realized by the following randomization of the transition probability $\Pi$ each time the chain $X$ visits the set $S$. If $X_n\in S$ and
• if $Y_n=1$ (which happens with probability $\delta\in]0,1[$), then $X_{n+1}$ is distributed according to the probability measure $\Phi$;
• if $Y_n=0$ (which happens with probability $1-\delta$), then $X_{n+1}$ is distributed according to the probability measure $(1-\delta)^{-1}\big(\Pi(X_n,\cdot)-\delta\Phi(\cdot)\big)$.
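To make the randomization above concrete, the following sketch (ours; the three-state kernel, small set and minorisation constants are arbitrary toy choices) implements one step of the split chain for a finite-state chain and records the regeneration times.

```python
# Illustrative sketch: one step of the Nummelin splitting for a finite-state chain with
# transition matrix P, small set S and minorisation  P[x, :] >= delta * phi  for x in S.
import numpy as np

rng = np.random.default_rng(2)

def split_step(x, P, S, delta, phi):
    """Return (next_state, y); y == 1 marks a visit to the atom S x {1} (a regeneration)."""
    if x in S:
        y = rng.random() < delta
        if y:                                            # draw from the minorising measure
            nxt = rng.choice(len(phi), p=phi)
        else:                                            # draw from the residual kernel
            residual = (P[x] - delta * phi) / (1.0 - delta)
            nxt = rng.choice(len(residual), p=residual)
        return nxt, int(y)
    return rng.choice(P.shape[1], p=P[x]), 0

# toy 3-state chain; S = {0}, phi uniform, delta chosen so that P[0, :] >= delta * phi
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
S, phi, delta = {0}, np.full(3, 1 / 3), 0.6              # 0.6 * (1/3) = 0.2 <= min(P[0, :])
x, regen_times = 0, []
for t in range(1, 2001):
    x, y = split_step(x, P, S, delta, phi)
    if y:
        regen_times.append(t)
```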
This bivariate Markov chain $X_M$ is called the split chain. It takes its values in $E\times\{0,1\}$ and possesses an atom, namely $A=S\times\{1\}$. The split chain $X_M$ inherits all the stability and communication properties of the chain $X$. The regenerative blocks of the split chain are i.i.d. (in the case $m=1$ in (17)); see [START_REF] Meyn | Markov chains and stochastic stability[END_REF] for further details. We will formulate a Bernstein type inequality for unbounded classes of functions in the Harris recurrent case (the equivalent of Theorem 3.2); Theorems 3.5 and 3.7 can be reformulated for Harris chains in a similar way. We impose the following conditions:
AH1. (Bernstein's block moment condition) There exists a positive constant $M_1$ such that for any $p\ge 2$ and for every $f\in\mathcal{F}$,
\[ \sup_{y\in S}\mathbb{E}_y\big|\bar f(B_1)\big|^p\le \tfrac{1}{2}\,p!\,\sigma^2(f)\,M_1^{p-2}. \tag{18} \]
AH2. (Non-regenerative block exponential moment assumption) There exists a constant $\lambda_0>0$ such that for every $f\in\mathcal{F}$ we have $\mathbb{E}_\nu\big[\exp\big(\lambda_0\sum_{i=1}^{\tau_S}\bar f(X_i)\big)\big]<\infty$.
AH3. (Exponential block moment assumption) There exists a constant $\lambda_1>0$ such that for every $f\in\mathcal{F}$ we have $\sup_{y\in S}\mathbb{E}_y\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]<\infty$. Let $\sup_{y\in S}\mathbb{E}_y[\tau_S]=\alpha_M<\infty$.
We are ready to formulate a Bernstein type inequality for Harris recurrent Markov chains.
Theorem 3.9 Assume that $X_M$ is a Harris recurrent, strongly aperiodic Markov chain. Then, under assumptions AH1-AH3, we have
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le 18\exp\Big[-\frac{x^2}{2\times 90^2\big(n\sigma^2(f)+M_1x/90\big)}\Big]+C_1\exp\Big[-\frac{\lambda_0x}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1x}{3}\Big], \tag{19} \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_S}\bar f(X_i)\Big)\Big],\qquad C_2=\sup_{y\in S}\mathbb{E}_y\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]. \]
The proof of Theorem 3.9 is analogous to the proof of Theorem 3.2. We can obtain a bound with much smaller constants under an extra block moment condition.
AH4. (Block length moment assumption) There exists a positive constant $M_2$ such that for any $p\ge 2$,
\[ \sup_{y\in S}\mathbb{E}_y[\tau_S]^p\le p!\,M_2^{p-2}\,\mathbb{E}_A[\tau_A^2] \quad\text{and}\quad \mathbb{E}_\nu[\tau_S]^p\le p!\,M_2^{p-2}\,\mathbb{E}_\nu[\tau_S^2]. \]
Theorem 3.10 Assume that $X_M$ is a Harris recurrent, strongly aperiodic Markov chain. Then, under assumptions AH1-AH4, we have for any $a>0$, $x>0$ and $N\in\mathbb{R}$
\[ \mathbb{P}_\nu\Big[\sum_{i=1}^n\big(f(X_i)-\mu(f)\big)\ge x\Big]\le 2\exp\Big[-\frac{x^2}{2\times 3^2(1+a)^2\big(\lfloor\frac{n}{\alpha}\rfloor\sigma^2(f)+\frac{M_1}{3}\frac{x}{1+a}\big)}\Big]+18\exp\Big[-\frac{x^2}{2\times 90^2(1+a)^2\big(N\sqrt{n}\,\sigma^2(f)+\frac{M_1}{90}\frac{x}{1+a}\big)}\Big]+\mathbb{P}_\nu\Big[n^{1/2}\Big(\frac{l_n}{n}-\frac{1}{\alpha}\Big)>N\Big]+C_1\exp\Big[-\frac{\lambda_0x}{3}\Big]+C_2\exp\Big[-\frac{\lambda_1x}{3}\Big], \]
where
\[ C_1=\mathbb{E}_\nu\Big[\exp\Big(\lambda_0\sum_{i=1}^{\tau_S}\bar f(X_i)\Big)\Big],\qquad C_2=\sup_{y\in S}\mathbb{E}_y\big[\exp\big(\lambda_1\bar f(B_1)\big)\big]. \]
Remark 3.9 In Theorem 3.9 we assumed that $X_M$ is strongly aperiodic. It is easy, however, to relax this assumption and impose only the aperiodicity condition on the Harris chain by using the same trick as in [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF]. Note that if $X_M$ satisfies $\mathcal{M}(m,S,\delta,\Phi)$ for $m>1$, then the blocks of data are 1-dependent. Denote by $\bar S=S\cup\{*\}$, where $\{*\}$ is an ideal point which is not in $S$. Next, we define a pseudo-atom $\alpha_M=\bar S\times\{1\}$. In order to impose only aperiodicity in this case it is sufficient to consider two processes $\{E_i\}$ and $\{O_i\}$, for some $k\ge 0$ and $E_i=*$. Every function $f:S\to\mathbb{R}$ will be considered as defined on $\bar S$ with the identification $f(*)=0$ (see also [START_REF] Levental | Uniform limit theorems for Harris recurrent Markov chains[END_REF] for more details concerning those two processes). Then, we prove the Bernstein type inequality similarly to Theorems 3.2 and 3.9, applying all the reasoning to $\{E_i\}$ and $\{O_i\}$ separately, which yields a similar inequality up to an additional multiplicative constant 2.
Appendix
Proof of Lemma 3.1. Let $\tau_k$ be the time of the $k$-th visit to the atom $A$ ($S\times\{1\}$ in the general case). In the following we make use of the argument from [START_REF] Dedecker | Subgaussian concentration inequalities for geometrically ergodic Markov chains[END_REF] and observe that, for any $k\le n$, $\{l_n\ge k\}=\{\tau_A(k)\le n\}$, where $\tau_A(k)=\Delta\tau_1+\dots+\Delta\tau_k$ with $\Delta\tau_1=\tau_A$ and $\Delta\tau_i=\tau_A(i)-\tau_A(i-1)$ for $i\ge 2$. If $0<x\le\sqrt{n}(1-\alpha^{-1})$, then
\[ \mathbb{P}_\nu\Big(n^{1/2}\Big(\frac{l_n}{n}-\frac{1}{\alpha}\Big)\ge x\Big)=\mathbb{P}_\nu\Big(l_n\ge \frac{n}{\alpha}+x\sqrt{n}\Big)\le \mathbb{P}_\nu\Big(l_n\ge \Big[\frac{n}{\alpha}+x\sqrt{n}\Big]\Big)\le \mathbb{P}\Big((\Delta\tau_1-\mathbb{E}_\nu\tau_A)+\sum_{i=2}^{[\frac{n}{\alpha}+x\sqrt{n}]}(\Delta\tau_i-\alpha)\le n-\Big(\Big[\frac{n}{\alpha}+x\sqrt{n}\Big]-1\Big)\alpha-\mathbb{E}_\nu\tau_A\Big), \]
where $[\,\cdot\,]$ denotes the integer part. Since $\frac{n}{\alpha}+x\sqrt{n}-1\le[\frac{n}{\alpha}+x\sqrt{n}]\le\frac{n}{\alpha}+x\sqrt{n}$, we get
\[ n-\Big(\Big[\frac{n}{\alpha}+x\sqrt{n}\Big]-1\Big)\alpha-\mathbb{E}_\nu\tau_A\le n-\Big(\frac{n}{\alpha}+x\sqrt{n}-2\Big)\alpha-\mathbb{E}_\nu\tau_A=-\alpha x\sqrt{n}+2\alpha-\mathbb{E}_\nu\tau_A. \]
It follows that
\[ \mathbb{P}_\nu\Big(n^{1/2}\Big(\frac{l_n}{n}-\frac{1}{\alpha}\Big)\ge x\Big)\le \mathbb{P}\Big((\Delta\tau_1-\mathbb{E}_\nu\tau_A)+\sum_{i=2}^{[\frac{n}{\alpha}+x\sqrt{n}]}(\Delta\tau_i-\alpha)\le -\alpha x\sqrt{n}+2\alpha-\mathbb{E}_\nu\tau_A\Big). \]
Now we can apply any Bennett or Bernstein inequality to these centered i.i.d. random variables to get an exponential bound; this is possible since we assumed A4. Note that other bounds (polynomial, for instance) can be obtained under appropriate modifications of A4. In our case we get
\[ \mathbb{P}\Big((\Delta\tau_1-\mathbb{E}_\nu\tau_A)+\sum_{i=2}^{[\frac{n}{\alpha}+x\sqrt{n}]}(\Delta\tau_i-\alpha)\le -\alpha x\sqrt{n}+2\alpha-\mathbb{E}_\nu\tau_A\Big)\le \exp\Big(-\frac{1}{2}\,\frac{(\alpha x\sqrt{n}-2\alpha+\mathbb{E}_\nu\tau_A)^2/S_n^2}{1+(\alpha x\sqrt{n}-2\alpha+\mathbb{E}_\nu\tau_A)M_2/S_n}\Big), \]
where $S_n^2=\mathbb{E}_\nu\tau_A^2+\big([\frac{n}{\alpha}+x\sqrt{n}]-1\big)\mathbb{E}_A\tau_A^2$. The above bound can be reduced to
\[ \exp\Bigg(-\frac{1}{2}\,\frac{(\alpha x\sqrt{n}-2\alpha)^2}{\big(\mathbb{E}_\nu\tau_A^2+(\frac{n}{\alpha}+x\sqrt{n})\mathbb{E}_A\tau_A^2\big)+(\alpha x\sqrt{n}+\mathbb{E}_\nu\tau_A)\,M_2\,\big(\mathbb{E}_\nu\tau_A^2+(\frac{n}{\alpha}+x\sqrt{n})\mathbb{E}_A\tau_A^2\big)^{1/2}}\Bigg). \]
Keywords : uniform entropy, exponential inequalities, empirical processes indexed by classes of functions, regenerative Markov chain. Primary Class : 62G09, Secondary Class : 62G20, 60J10
Acknowledgment
This research was supported by a public grant as part of the Investissement d'avenir, project reference ANR-11-LABX-0056-LMH. The work was also supported by the Polish National Science Centre NCN ( grant No. UMO2016/23/N/ST1/01355 ) and (partly) by the Ministry of Science and Higher Education. This research has been conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01).
01757606 | en | ["sdv.sp.pharma", "sdv.mhep.derm", "sdv.mhep.em"] | 2024/03/05 22:32:13 | 2018 | https://amu.hal.science/hal-01757606/file/PIIS0190962217328785%5B1%5D.pdf | MD Michael Benzaquen
email: michael.benzaquen@ap-hm.fr.
MD Luca Borradori
MD Philippe Berbis
MS Simone Cazzaniga
Rene Valero
MD Marie-Aleth Richard
MD, PhD Laurence Feldmeyer
Dipeptidyl peptidase IV inhibitors, a risk factor for bullous pemphigoid: Retrospective multicenter case-control study
Keywords: bullous pemphigoid, case-control study, diabetes, dipeptidyl peptidase-4 inhibitor, gliptin, risk factor
Bullous pemphigoid (BP) is the most frequent autoimmune subepidermal blistering disease that typically affects the elderly. Its cutaneous manifestations are polymorphic, ranging from pruritus with excoriated, eczematous, papular, and/or urticaria-like lesions in the nonbullous phase to vesicles and bullae in the bullous phase. [START_REF] Joly | A comparison of two regimens of topical corticosteroids in the treatment of patients with bullous pemphigoid: a multicenter randomized study[END_REF] BP is associated with an immune response directed against 2 molecules, the BP antigen 180 (BP180 [also called BPAG2]) and the BP antigen 230 (BP230 [also called BPAG1]). [START_REF] Feliciani | Management of bullous pemphigoid: the European Dermatology Forum consensus in collaboration with the European Academy of Dermatology and Venereology[END_REF] Since the publication of the first case of BP associated with sulfasalazine in 1970, a wide range of drugs (spironolactone, furosemide, chloroquine, β-blockers, and several antibiotics) have been associated with the disease. [START_REF] Bastuji-Garin | Drugs associated with bullous pemphigoid. A case-control study[END_REF] Recently, several cases of BP have been reported in association with dipeptidyl peptidase-4 inhibitors (DPP4is), which are also known as gliptins. [START_REF] B En E | Bullous pemphigoid and dipeptidyl peptidase IV inhibitors: a case-noncase study in the French Pharmacovigilance Database[END_REF][START_REF] Garc Ia | Dipeptidyl peptidase-IV inhibitors induced bullous pemphigoid: a case report and analysis of cases reported in the European pharmacovigilance database[END_REF][START_REF] Keseroglu | A case of bullous pemphigoid induced by vildagliptin[END_REF][START_REF] Haber | Bullous pemphigoid associated with linagliptin treatment[END_REF][START_REF] Pasmatzi | Dipeptidyl peptidase-4 inhibitors cause bullous pemphigoid in diabetic patients: report of two cases[END_REF][START_REF] Skandalis | Drug-induced bullous pemphigoid in diabetes mellitus patients receiving dipeptidyl peptidase-IV inhibitors plus metformin[END_REF][START_REF] Aouidad | A case report of bullous pemphigoid induced by dipeptidyl peptidase-4 inhibitors[END_REF][START_REF] Attaway | Bullous pemphigoid associated with dipeptidyl peptidase IV inhibitors. A case report and review of literature[END_REF][START_REF] B En E | Bullous pemphigoid induced by vildagliptin: a report of three cases[END_REF][START_REF] Mendonc ¸a | Three cases of bullous pemphigoid associated with dipeptidyl peptidase-4 inhibitorsdone due to linagliptin[END_REF] DPP4is are oral antihyperglycemic drugs administered to patients with type 2 diabetes as monotherapy or in combination with other oral antihyperglycemic medications or insulin. DPP4 is an enzyme that inactivates incretins (glucagon-like peptide-1 and glucose-dependent insulinotropic polypeptide). DPP4is increase levels of incretins, thereby increasing insulin secretion, decreasing glucagon secretion, and improving glycemic control. Sitagliptin was first approved in 2006 by the US Food and Drug Administration, followed by saxagliptin (in 2009), linagliptin (in 2011), and alogliptin (in 2013). Three DPP4is are currently available on the French market, sitagliptin and vildagliptin (since 2007) and saxagliptin (since 2009), and 5 are available on the Swiss market, namely the 3 aforementioned DPP4is plus linagliptin (since 2011) and alogliptin (since 2013), both of which are available only on the Swiss market.
They are used alone or in association with metformin in the same tablet. [START_REF] B En E | Bullous pemphigoid and dipeptidyl peptidase IV inhibitors: a case-noncase study in the French Pharmacovigilance Database[END_REF] An increasing number of clinical reports and pharmacovigilance database analyses suggesting an association between DPP4i intake and BP have been published. Nevertheless, this has not been confirmed by a well-designed controlled study.
The main objective of our case-control study was therefore to retrospectively evaluate the association between DPP4i treatment and development of BP. The secondary end points were to determine a potential higher association for a specific DPP4i and to evaluate the disease course after DPP4i withdrawal.
MATERIALS AND METHODS
The investigations were conducted as a retrospective case-control study with a 1:2 design, comparing case patients with BP and diabetes with age- and sex-matched controls with type 2 diabetes from January 1, 2014, to July 31, 2016. All study procedures adhered to the principles of the Declaration of Helsinki. The French Committee for the Protection of Persons (RO-2016/37) and the Ethics Committee of the Canton of Bern (KEK-2016/01488) approved the study. The French Advisory Committee on Information Processing in Material Research in the Field of Health and the French Commission for Information Technology and Civil Liberties also authorized this study.
Data collection for cases and controls
The study was conducted in 3 university dermatologic departments (Bern, Marseille Nord, and Marseille La Timone). By using the database of the respective histopathology departments and clinical records, we identified all patients with BP diagnosed for the first time between January 1, 2014, and July 31, 2016. The diagnosis of BP was based on the following criteria developed by the French Bullous Study Group 14: consistent clinical features, compatible histopathology findings, positive direct immunofluorescence studies, and in some cases, positive indirect immunofluorescence microscopy studies and/or positive enzyme-linked immunosorbent assay BP180/enzyme-linked immunosorbent assay BP230 (MBL International, Japan). Among these patients with BP, we identified those having type 2 diabetes.
For these patients, we recorded age, sex, date of BP diagnosis, treatment of BP (with topical steroids, systemic corticosteroids, immune suppressors, or other treatments such as doxycycline or dapsone), evolution of BP (complete remission, partial remission, relapse, or death), comorbidities (including rheumatic, neurologic, cardiovascular, or digestive diseases and neoplasia), treatment with DPP4is, and other cotreatments (including diuretics, antibiotics, neuroleptics, nonsteroidal anti-inflammatory drugs, and antihypertensive drugs).
If a DPP4i was mentioned in the medical record, we examined the type of DPP4i, the chronology between BP diagnosis and onset of the DPP4i treatment, and the evolution after DPP4i withdrawal. Patients who had other autoimmune bullous diseases or did not otherwise fulfill the inclusion criteria were not included.
The control patients were obtained between January 1, 2014, and July 31, 2016, from the endocrinology departments of the same hospitals. For each case, 2 control patients with diabetes visiting the endocrinology department in the same 6-month period and matched to the case by sex and quinquennium of age were then randomly selected from all available patients satisfying the matching criteria. The patient files were reviewed for treatment of diabetes (specifically, the use of DPP4is), other cotreatments, and comorbidities. For the controls, we did not include case patients with any chronic skin diseases, including bullous dermatosis, at the time of the study.
We then compared exposure to DPP4is between case patients and controls with adjustment for potential confounders.
Statistical analysis
Descriptive data were presented as number with percentages or means with standard deviations (SDs) for categoric and continuous variables, respectively. The Mann-Whitney U test was used to assess possible residual differences in the distribution of age between case patients and controls. Differences between case patients and matched controls across different levels of other factors were assessed by means of univariate conditional logistic regression analysis. Factors associated with DPP4i use were also investigated by means of the Pearson x 2 test or Fisher exact test, where required.
All factors with a P value less than .10 in the univariate case-control analysis and associated with DPP4i use, with a P value less than .10 at univariate level, were evaluated as possible confounding factors in multivariate conditional logistic regression models with backward stepwise selection algorithm. The factors retained for adjustment were neurologic and metabolic/endocrine comorbidities, as well as other dermatologic conditions unrelated to BP. The effect of DPP4i use on BP onset in diabetic patients was expressed in terms of an odds ratio (OR) along with its 95% confidence interval (CI) and P value. A stratified analysis by possible effect modifiers, including sex and age group, was also performed. All tests were considered statistically significant at a P value less than .05.
Before starting the study, we planned to recruit at least 183 patients (61 case patients and 122 controls) to detect an OR higher than 2.5 in a 1:2 matched case-control design, supposing a 30% exposure to DPP4i use in the control group (α = 0.05, β = 0.20, multiple correlation coefficients < 0.2). Analyses were carried out with SPSS software (version 20.0, IBM Corp, Armonk, NY).
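For illustration only (this computation is ours and is not how the matched analysis was performed), a crude unadjusted odds ratio with a Woolf-type 95% CI can be obtained from the 2 x 2 exposure table; the conditional logistic regression used in the study accounts for the 1:2 matching and therefore yields a different estimate.

```python
# Illustrative sketch: crude odds ratio and Woolf-type 95% CI from a 2x2 exposure table.
import math

def crude_or(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    se_log = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                       + 1 / exposed_controls + 1 / unexposed_controls)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se_log) for s in (-1, 1))
    return or_, lo, hi

# DPP4i exposure counts from Table II (61 cases, 122 controls):
print(crude_or(28, 33, 22, 100))   # unadjusted OR ~3.9; the matched (conditional) OR reported is 3.45
```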
RESULTS
From January 2014 to July 2016, BP was diagnosed in 165 patients (61 in Bern, 47 in Marseille Nord, and 57 in Marseille La Timone). Among these, 61 had diabetes (22 in Bern, 14 in Marseille Nord, and 25 in Marseille La Timone). We collected 2 matched controls for each case patient, resulting in a total of 122 controls.
Of the case patients, 50.8% were female, and the mean age was 79.1 plus or minus 7.0 years. The main comorbidities of cases were cardiovascular (86.9%), neurologic (52.5%), and metabolic and endocrine diseases other than diabetes (39.3%) and uronephrologic diseases (39.3%) (Table I).
In our 3 investigational centers, we collected 28 patients with diabetes and BP who were taking a DPP4i. DPP4is were used more frequently in case patients with BP (45.9%) than in controls (18%), and the difference was statistically significant (P < .001). Of the specific DPP4is, vildagliptin was more common in case patients (23%) than in controls (4.1%). For the other cotreatments, there was no statistical difference between case patients and controls, except for the use of antihistamines (P < .001). There were no differences in other antidiabetic medications, including metformin, between case patients and controls (P = .08) (Table II).
All patients with BP were treated with high-potency topical steroids as first-line treatment. Systemic corticosteroids were used in half of them (50.8%), immunosuppressive agents in 32.8%, and other treatments such as doxycycline or dapsone in 34.4%. With treatment, 37.7% went into complete remission and 42.6% went into partial remission. Finally, there were no differences in treatment between the patients with diabetes and BP who had taken a DPP4i and the patients with diabetes and BP who had not taken a DPP4i (data not shown), an observation suggesting that presentation and initial severity of BP in these 2 groups were similar.
Abbreviations used: BP, bullous pemphigoid; CI, confidence interval; DPP4i, dipeptidyl peptidase-4 inhibitor; OR, odds ratio; SD, standard deviation.
DPP4is and BP
The univariate analysis of the association between DPP4i use and BP in diabetic patients yielded an OR of 3.45 (95% CI, 1.76-6.77; P < .001). After adjustment for possible confounding factors associated with BP onset and DPP4i use in multivariate analysis, the OR was 2.64 (95% CI, 1.19-5.85; P = .02) (Table III).
A more detailed analysis of DPP4i use revealed a higher association for vildagliptin, with a crude OR of 7.23 (95% CI, 2.44-21.40; P = .001) and an adjusted OR of 3.57 (95% CI, 1.07-11.84; P = .04). The study was underpowered to detect differences between other DPP4is, with linagliptin and alogliptin being only used in the Swiss cases.
Sex-stratified analysis indicated that the effect of a DPP4i on BP onset was higher in males (adjusted OR, 4.36; 95% CI, 1.38-13.83; P = .01) than in females (adjusted OR, 1.64; 95% CI, 0.53-5.11; P = .39). Age group-stratified analyses showed a stronger association for patients age 80 years or older, with an adjusted OR of 5.31 (95% CI, 1.60-17.62; P = .006).
Clinical course of patients with BP under treatment with a DPP4i
In our 3 centers, we collected a total of 28 patients with diabetes who developed BP under exposure to a DPP4i. The duration of DPP4i use before onset of BP ranged from 10 days to 3 years (median, 8.2 months).
Drug withdrawal was performed in 19 patients upon suspected DPP4i-associated BP. Complete (11 of 19 [58%]) or partial (7 of 19 [37%]) remission with some mild persistent disease was obtained for all patients but 1 (duration of follow up, 3-30 months; median, 16.4 months). First-line treatment was high-potency topical steroids and systemic corticosteroids in severe or refractory cases followed by a standard tapering schedule. [START_REF] Feliciani | Management of bullous pemphigoid: the European Dermatology Forum consensus in collaboration with the European Academy of Dermatology and Venereology[END_REF][START_REF] Joly | A comparison of oral and topical corticosteroids in patients with bullous pemphigoid[END_REF] No further therapy was necessary in these patients after DPP4i withdrawal to obtain BP remission. For 1 patient, sitagliptin was initially stopped, leading to a partial remission, but its reintroduction combined with metformin led to a relapse of the BP. Definitive discontinuation of sitagliptin and its replacement by repaglinide resulted in a partial remission of BP with 12-month follow-up. The clinical outcome in the 9 patients in whom DPP4is were not stopped was unfavorable. There were 3 deaths of unknown causes (33%), 1 relapse (11%), 4 partial remissions (45%), and 1 complete remission (11%).
DISCUSSION
Our study demonstrates that DPP4is are associated with an increased risk for development of BP, with an adjusted OR of 2.64. Association with vildagliptin was significantly higher than that with other DPP4is, with an adjusted OR of 3.57. Our findings further indicate that the rate of DPP4i intake in patients with BP is higher both in male patients and in patients older than 80 years. Finally, DPP4i withdrawal seems to have a favorable impact on the outcome of BP in patients with diabetes, as 95% of them went into remission after management with first-line therapeutic options (ie, topical and sometimes systemic corticosteroids).
An increasing number of reports have suggested that DPP4is trigger BP. Fourteen of the 19 BP cases described (74%) appeared to be related to vildagliptin intake. The median age of the affected patients was 72.5 years, with an almost identical number of males and females. [START_REF] Garc Ia | Dipeptidyl peptidase-IV inhibitors induced bullous pemphigoid: a case report and analysis of cases reported in the European pharmacovigilance database[END_REF][START_REF] Keseroglu | A case of bullous pemphigoid induced by vildagliptin[END_REF][START_REF] Haber | Bullous pemphigoid associated with linagliptin treatment[END_REF][START_REF] Pasmatzi | Dipeptidyl peptidase-4 inhibitors cause bullous pemphigoid in diabetic patients: report of two cases[END_REF][START_REF] Skandalis | Drug-induced bullous pemphigoid in diabetes mellitus patients receiving dipeptidyl peptidase-IV inhibitors plus metformin[END_REF][START_REF] Aouidad | A case report of bullous pemphigoid induced by dipeptidyl peptidase-4 inhibitors[END_REF][START_REF] Attaway | Bullous pemphigoid associated with dipeptidyl peptidase IV inhibitors. A case report and review of literature[END_REF][START_REF] B En E | Bullous pemphigoid induced by vildagliptin: a report of three cases[END_REF][START_REF] Mendonc ¸a | Three cases of bullous pemphigoid associated with dipeptidyl peptidase-4 inhibitorsdone due to linagliptin[END_REF] In our study, among the 28 diabetic patients developing BP under DPP4i exposure, males were more affected (56.7%) and the median age was 80 years.
Garcia et al 5 identified 170 cases of BP in patients taking a DPP4i in the EudraVigilance database, suggesting that the intake of DPP4is was more frequently associated with development of BP when compared with that of other drugs. In the latter, a disproportionally high number of cases of vildagliptin use were found. A French case-noncase study recording all spontaneous reports of DPP4i-related BP in the National Pharmacovigilance Database between April 2008 and August 2014 also provided evidence supporting an increased risk for development of BP associated with DPP4i exposure, especially vildagliptin. [START_REF] B En E | Bullous pemphigoid and dipeptidyl peptidase IV inhibitors: a case-noncase study in the French Pharmacovigilance Database[END_REF] Our present study confirms that the association with vildagliptin is stronger than that with the other DPP4is. This cannot be explained by an overprescription of vildagliptin compared with prescription of other DPP4is. In our control group, sitagliptin was the most prescribed DPP4i, with 14 diabetic patients (11.5%), whereas only 5 patients were treated by vildagliptin (4%). Increased prescribing of sitagliptin was confirmed by an analysis of drug sales in France published by the French National Agency for Medicines and Health Products Safety in 2014. In this survey, sitagliptin was the most prescribed DPP4i and the 24th highest-earning drug in 2013, whereas vildagliptin was not ranked. A recent retrospective study suggests that DPP4i-associated BP is frequently noninflammatory or pauci-inflammatory and characterized by small blisters, mild erythema, and a limited skin distribution. The latter is potentially related to a distinct reactivity profile of autoantibodies to BP180. [START_REF] Izumi | Autoantibody profile differentiates between inflammatory and noninflammatory bullous pemphigoid[END_REF] Although in our retrospective evaluation there was no apparent difference in clinical presentation and initial management between patients with diabetes and BP who had been treated with a DPP4i and patients with diabetes and BP who had not been treated with a DPP4i (data not shown), prospective studies are required to address the question of whether BP associated with the intake of a DPP4i has unique clinical and immunologic features.
The pathophysiologic mechanisms linking DPP4i intake and BP development remain unclear. DPP4is could induce BP de novo or accelerate the development of BP in susceptible individuals. Many cell types, including keratinocytes, T cells, and endothelial cells, constitutionally express DPP4. DPP4 inhibition could enhance the activity of proinflammatory chemokines, such as eotaxin, promoting eosinophil activation in the skin, tissue damage, and blister formation. [START_REF] Forssmann | Inhibition of CD26/dipeptidyl peptidase IV enhances CCL11/ eotaxin-mediated recruitment of eosinophils in vivo[END_REF] Thielitz et al reported that DPP4is have an antifibrogenic activity by decreasing expression of transforming growth factor-β1 and secretion of procollagen type I. [START_REF] Thielitz | Inhibitors of dipeptidyl peptidase IV-like activity mediate antifibrotic effects in normal and keloid-derived skin fibroblasts[END_REF] All these effects could be higher for vildagliptin than for other DPP4is because of molecular differences. Furthermore, vildagliptin administration in monkeys resulted in dose-dependent and reversible skin effects, such as blister formation, peeling, and erosions. [START_REF] Hoffmann | Vascular origin of vildagliptin-induced skin effects in cynomolgus monkeys: pathomechanistic role of peripheral sympathetic system and neuropeptide Y[END_REF] Finally and more importantly, DPP4 is a cell surface plasminogen receptor that is able to activate plasminogen, leading to plasmin formation. Plasmin is a major serine protease that is known to cleave BP180 within the juxtamembranous extracellular noncollagenous 16A domain. Hence, the inhibition of plasmin by a DPP4i may change the proper cleavage of BP180, thereby affecting its antigenicity and its function. [START_REF] Izumi | Autoantibody profile differentiates between inflammatory and noninflammatory bullous pemphigoid[END_REF]
Our study has some limitations: we focused the analysis on DPP4i intake, whereas the potential isolated effect of metformin was not analyzed. Nevertheless, after DPP4i withdrawal, metformin was either continued (in those cases in which it was initially combined with a DPP4i) or newly started in 8 of our patients with BP. Among the latter, we observed 5 complete and 3 partial remissions on follow-up. In addition, metformin intake has not been implicated thus far in the development of BP. On the basis of these observations, it is unlikely that metformin plays a triggering role, but specific studies should be designed to examine the effect of metformin on its own. Finally, we included patients with BP identified by searching our histopathology databases. It is therefore possible that we missed a number of BP cases in which either the term pemphigoid was not used in the corresponding histopathologic report or BP was not clinically and/or histopathologically considered.
In conclusion, our findings in a case-control study confirm that DPP4is are associated with an increased risk for development of BP in patients with diabetes. Therefore, the prescription of a DPP4i, especially vildagliptin, should potentially be limited or avoided in high-risk patients, including males and those age 80 years or older. A larger prospective study might be useful to confirm our findings.
Table I. Demographics and comorbidities of selected cases and controls
Controls Cases Total
Demographic characteristic/comorbidity N % N % N % P*
Sex
Male 60 49.2% 30 49.2% 90 49.2% d
Female 62 50.8% 31 50.8% 93 50.8%
Age, y (mean, SD = 7) 79.3 7.0 78.7 7.2 79.1 7.0 .63
\75 30 24.6% 17 27.9% 47 25.7%
75-84 62 50.8% 29 47.5% 91 49.7%
$85 30 24.6% 15 24.6% 45 24.6%
Comorbidities
Neurologic 47 38.5% 32 52.5% 79 43.2% .06
Cardiovascular 108 88.5% 53 86.9% 161 88.0% .75
Rheumatic 36 29.5% 11 18.0% 47 25.7% .10
Digestive 34 27.9% 19 31.1% 53 29.0% .65
Metabolic and endocrine y 85 69.7% 24 39.3% 109 59.6% <.001
Pulmonary 27 22.1% 17 27.9% 44 24.0% .41
Uronephrologic 45 36.9% 24 39.3% 69 37.7% .74
Neoplasia 29 23.8% 12 19.7% 41 22.4% .49
Dermatologic z 5 4.1% 12 19.7% 17 9.3% .03
Other 35 28.7% 23 37.7% 58 31.7% .18
SD, Standard deviation. *Mann-Whitney U test was used to assess possible residual differences in the distribution of age between cases and age-and sex-matched controls. Differences between cases and matched controls across different levels of other factors were assessed by means of univariate conditional logistic regression analysis. Boldface indicates statistical significance. y Except for diabetes. z Except for BP.
Table II. DPP4i use and other cotreatments in selected cases and controls
Controls Cases Total
Treatment N % N % N % P*
DPP4i <.01
None 100 82.0% 33 54.1% 133 72.7%
Vildagliptin 5 4.1% 14 23.0% 19 10.4%
Sitagliptin 14 11.5% 10 16.4% 24 13.1%
Linagliptin 3 2.5% 3 4.9% 6 3.3%
Saxagliptin 0 0.0% 1 1.6% 1 0.5%
Cotreatment
Diuretics 69 56.6% 28 45.9% 97 53.0% .17
Antihypertensives/antiarrhythmic agents 101 82.8% 47 77.0% 148 80.9% .36
Neuroleptics 46 37.7% 26 42.6% 72 39.3% .52
Antiaggregants/anticoagulants 85 69.7% 45 73.8% 130 71.0% .56
NSAIDs 12 9.8% 0 0.0% 12 6.6% .14
Analgesics 22 18.0% 12 19.7% 34 18.6% .79
Statins 71 58.2% 31 50.8% 102 55.7% .34
Antihistamines 5 4.1% 19 31.1% 24 13.1% <.001
Antidiabetics y 122 100.0% 51 83.6% 173 94.5% .08
Endocrine or metabolic treatment z 45 36.9% 27 44.3% 72 39.3% .32
Proton pump inhibitors 59 48.4% 28 45.9% 87 47.5% .75
Others 50 41.0% 23 37.7% 73 39.9% .67
DPP4i, Dipeptidyl peptidase-4 inhibitor; NSAID, nonsteroidal anti-inflammatory drug. *Boldface indicates statistical significance. y Except for DPP4i. z Except for diabetes.
Table III. Univariate and multivariate analysis of the association between DPP4i use and BP in patients with diabetes, overall and in strata of sex and age group
Controls Cases Univariate analysis* Multivariate analysis y
Strata DPP4i use N % N % OR (95% CI) P OR (95% CI) P
Overall No 100 82.0% 33 54.1% 1 1
Yes 22 18.0% 28 45.9% 3.45 (1.76-6.77) <.001 2.64 (1.19-5.85) .02
Overall (detailed) No 100 82.0% 33 54.1% 1 1
Vildagliptin 5 4.1% 14 23.0% 7.23 (2.44-21.40) <.001 3.57 (1.07-11.84) .04
Sitagliptin 14 11.5% 10 16.4% 1.82 (0.73-4.54) .20 2.13 (0.77-5.89) .15
Linagliptin/ 3 2.5% 4 6.6% 5.10 (0.98-26.62) .053 2.90 (0.47-17.74) .25
saxagliptin
Males No 51 85.0% 13 43.3% 1 1
Yes 9 15.0% 17 56.7% 5.85 (2.13-16.08) .001 4.36 (1.38-13.83) .01
Females No 49 79.0% 20 64.5% 1 1
Yes 13 21.0% 11 35.5% 2.00 (0.78-5.15) .15 1.64 (0.53-5.11) .39
Age \80 y No 49 79.0% 18 56.2% 1 1
Yes 13 21.0% 14 43.8% 2.47 (1.00-6.13) .05 1.53 (0.52-4.52) .44
Age $80 y No 51 85.0% 15 51.7% 1 1
Yes 9 15.0% 14 48.3% 4.50 (1.58-12.77) .005 5.31 (1.60-17.62) .006
Boldface indicates statistical significance. CI, Confidence interval; DPP4i, dipeptidyl peptidase-4 inhibitor; OR, odds ratio. *Univariate conditional logistic regression analysis. †Multivariable conditional logistic regression analysis.
01757616 | en | ["sdv.mhep.em"] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01757616/file/mody.pdf | Intellectual disability in patients with MODY due to hepatocyte nuclear factor 1B (HNF1B) molecular defects
Introduction
Intellectual disability (ID) is characterized by impairments of general mental abilities that have an impact on adaptive functioning in conceptual, social and practical areas, and which begin in the developmental period [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF]. It affects 1-3% of the general population [START_REF] Maulik | Prevalence of intellectual disability: a meta-analysis of population-based studies[END_REF]. Chromosomal aberrations or mutations in almost 500 genes have been associated with ID. Among these genes, several are also involved in diseases with phenotypes that may overlap with ID, such as autism spectrum disorders (ASD) and schizophrenia.
Molecular defects of the hepatocyte nuclear factor 1B (HNF1B) have been associated with a syndrome that includes maturity-onset diabetes of the young 5 (MODY5 or HNF1B-MODY), kidney structural abnormalities, progressive renal failure, pancreatic hypoplasia and exocrine dysfunction, abnormal liver tests and genital tract abnormalities [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. In half the cases, the HNF1B-related syndrome is due to HNF1B heterozygous mutations whereas, in the others, it is associated with HNF1B whole-gene deletion [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. In all cases examined thus far, the latter results from a 17q12 deletion of 1.4-2.1 Mb, encompassing 20 genes including HNF1B [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF][START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Laffargue | Towards a new point of view on the phenotype of patients with a 17q12 microdeletion syndrome[END_REF][START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF].
Autism and/or ID have been described in patients with various HNF1B-related phenotypes, such as HNF1B-MODY [START_REF] Raile | Expanded clinical spectrum in hepatocyte nuclear factor 1bmaturity-onset diabetes of the young[END_REF], cystic kidney disease [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF][START_REF] Dixit | 17q12 microdeletion syndrome: three patients illustrating the phenotypic spectrum[END_REF] and müllerian aplasia [START_REF] Cheroki | Genomic imbalances associated with müllerian aplasia[END_REF], always in the context of 17q12 deletion. On the other hand, in a large population study, the 17q12 deletion was recognized as a strong risk factor for ID, ASD and schizophrenia, being identified in 1/1000 of children referred for those conditions [START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF].
Whether the neurocognitive phenotypes associated with the 17q12 deletion result from deletion of HNF1B itself or another deleted gene, or from a contiguous gene syndrome, remains unknown [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF][START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF][START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF].
To investigate the role of HNF1B abnormalities in the occurrence of cognitive defects, the frequency of ID was assessed according to the presence of HNF1B mutations or deletion in a large cohort of adult patients with HNF1B-MODY.
Research design and methods
The study population consisted of 107 adult patients with diabetes in whom a molecular abnormality of HNF1B had been identified, as described elsewhere [START_REF] Bellanné-Chantelot | Large genomic rearrangements in the hepatocyte nuclear factor-1beta (TCF2) gene are the most frequent cause of maturity-onset diabetes of the young type 5[END_REF]. The phenotype of the HNF1B-related syndrome was assessed through a questionnaire, filled in by referring physicians, that comprised clinical, biological and morphological items. ID was defined as limitations in intellectual functioning and in adapting to environmental demands, beginning early in life, and was appraised by the need for educational support, protected work or assistance in daily activities, and by the social skills of the patients [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF]. Learning disability (LD) was defined as persistent difficulties in reading, writing, or mathematical-reasoning skills [START_REF]Diagnostic and statistical manual of mental disorders (DSM 5)[END_REF].
Of these 107 patients, 14 had access to detailed evaluations by a geneticist and a neurologist who were blinded to the patient's HNF1B genotype. In case of clinical suspicion of cognitive defects, the evaluation was completed by neuropsychological testing, including the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III), and by further testing of executive functions (Trail-Making Test), memory [84-item Battery of Memory Efficiency (BEM 84)] and visuospatial function [Rey Complex Figure Test and Recognition Trial (RCFT)]. In patients presenting with ID or LD, single-nucleotide polymorphism (SNP) array analyses were performed, using the HumanCytoSNP-12 v2.1 array scanner and assay (Illumina, Inc., San Diego, CA, USA), after excluding fragile X syndrome. Results were analyzed with GenomeStudio software, version 3.1.6 (Illumina). All patients gave their written informed consent.
The frequency of ID was assessed in two control groups of adult patients with diabetes followed in our department: 339 consecutive patients with type 1 diabetes (T1D); and 227 patients presenting with phenotypes suggestive of MODY, including 31 glucokinase (GCK)-MODY, 42 HNF1A-MODY, 13 HNF4A-MODY, five ATP-binding cassette subfamily C member 8 (ABCC8)-MODY, two insulin (INS)-MODY and 134 genetically screened patients with no identifiable molecular aetiology (referred to as MODY-X).
Results are reported as means ± SD or as frequencies (%). Comparisons between groups were made by non-parametric tests or by Fisher's exact test.
Results
The main characteristics of the 107 patients are shown in Table 1. ID was reported in 15 (14 proband) patients (14%). LD was noticed in a further nine patients (Table S1; see supplementary material associated with the article online). Overall, cognitive defects were thus observed in 24/107 patients (22.4%). Common causes of ID were ruled out by the search for fragile X syndrome and SNP array analyses, which excluded other large genomic deletions in all tested patients.
The frequency of ID was significantly higher in HNF1B-MODY patients than in those with T1D [8/339 (2.4%), OR: 5.9, 95% CI: 2.6-13.6; P < 10⁻⁴], or in those with other monogenic diabetes or MODY-X [6/227 (2.6%), OR: 6.0, 95% CI: 2.3-16.0; P = 0.0002].
HNF1B-MODY patients with or without ID were similar as regards gender, age at diabetes diagnosis, duration and treatment of diabetes, frequency and severity of renal disease, frequency of pancreatic morphological abnormalities and liver-test abnormalities, and frequency of arterial hypertension and dyslipidaemia (Table 1). HbA1c levels at the time of the study were higher in the patients with ID (9.4 ± 3.0% vs 7.3 ± 1.4%; P = 0.005).
Of the 15 patients presenting with ID, six had HNF1B coding mutations (three missense, two splicing defects, one deletion of exon 5) and nine had a whole-gene deletion (Table S1). Thus, the frequency of ID was not statistically different between patients with HNF1B mutation (11%) or deletion (17%; P = 0.42; Table 1).
Discussion
Our study showed that ID affects 14% of adult patients with HNF1B-MODY, which is higher than the 1-3% reported in the general population [START_REF] Maulik | Prevalence of intellectual disability: a meta-analysis of population-based studies[END_REF] and than the 2.4-2.6% observed in our two control groups of adult patients with other diabetes subtypes.
The main characteristics of the HNF1B-MODY patients with ID did not differ from those without ID, except for the poorer glycaemic control observed in the former.
In patients with HNF1B-related syndrome, the occurrence of cognitive defects has been noted almost exclusively in paediatric series. ID/ASD has been reported in two adolescents with renal cystic disease, livertest abnormalities and diabetes [START_REF] Raile | Expanded clinical spectrum in hepatocyte nuclear factor 1bmaturity-onset diabetes of the young[END_REF]; developmental delay and/or learning difficulties were quoted in three young patients presenting with multicystic renal disease [START_REF] Nagamani | Clinical spectrum associated with recurrent genomic rearrangements in chromosome 17q12[END_REF]; and speech delay in two children with renal cystic disease [START_REF] Dixit | 17q12 microdeletion syndrome: three patients illustrating the phenotypic spectrum[END_REF]. In a series of 86 children with HNF1B-related renal disease, three cases of ASD were noted [START_REF] Loirat | Autism in three patients with cystic or hyperechogenic kidneys and chromosome 17q12 deletion[END_REF]. The systematic evaluation of 28 children with HNF1B-associated kidney disease also suggested an increased risk of neuropsychological disorders in those harbouring the 17q12 deletion [START_REF] Laffargue | Towards a new point of view on the phenotype of patients with a 17q12 microdeletion syndrome[END_REF]. A recent study performed in a UK cohort reported the presence of neurodevelopmental disorders in eight out of 20 patients with renal abnormalities or diabetes due to HNF1B whole-gene deletion [START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF]. In all these reports, cognitive defects were observed in the context of the 17q12 deletion.
Conversely, the 17q12 deletion has also been reported in children evaluated for ID, beyond the setting of HNF1B-related syndrome. Indeed, an association between the deletion and cognitive defects has been confirmed in paediatric cases with no renal abnormalities [START_REF] Roberts | Clinical report of a 17q12 microdeletion with additionally unreported clinical features[END_REF][START_REF] Palumbo | Variable phenotype in 17q12 microdeletions: clinical and molecular characterization of a new case[END_REF]. In one population study, the 17q12 deletion was detected in 18/15,749 children referred for ASD and/or ID, but in none of the controls [START_REF] Moreno-De-Luca | Deletion 17q12 is a recurrent copy number variant that confers high risk of autism and schizophrenia[END_REF]. However, detailed phenotypes, available for nine children, were suggestive of the HNF1B-related syndrome, as all but one showed multicystic renal disease and/or kidney morphological abnormalities, and one had diabetes. Altogether, these observations strongly suggest that cognitive defects are part of the phenotype associated with the 17q12 deletion.
Whether cognitive defects may result from molecular alterations of HNF1B itself remains unsolved. Learning difficulties have been reported in two patients with HNF1B frameshift mutations: one was a man with polycystic kidney disease [START_REF] Bingham | Mutations in the hepatocyte nuclear factor-1b gene are associated with familial hypoplastic glomerulocystic kidney disease[END_REF]; the other was a woman with renal disease, diabetes, and livertest and genitaltract abnormalities [START_REF] Shihara | Identification of a new case of hepatocyte nuclear factor-1beta mutation with highly varied phenotypes[END_REF]. ID has also been reported in two patients with HNF1B-kidney disease due to point mutations [START_REF] Faguer | Diagnosis, management, and prognosis of HNF1B nephropathy in adulthood[END_REF]. However, in these four patients, a search for other causes of cognitive defects was not performed. In the above-mentioned UK study, no neurodevelopmental disorders were reported in 18 patients with intragenic HNF1B mutations [START_REF] Clissold | Chromosome 17q12 microdeletions but not intragenic HNF1B mutations link developmental kidney disease and psychiatric disorder[END_REF]. Conversely, in our study, ID was observed in 6/54 patients (11%) with an HNF1B point mutation, a frequency three times greater than in the general population, and common causes of ID were ruled out in four of them. These discrepancies might be explained by the small number of patients (n = 18) with HNF1B mutations in the UK study, and by the fact that neurocognitive phenotypes might be milder in patients with mutations.
Thus, our observations may suggest the involvement of HNF1B defects in the occurrence of cognitive defects in patients with HNF1B-MODY. The links between HNF1B molecular abnormalities and intellectual developmental disorders remain elusive. Nevertheless, it should be noted that HNF1B is one of the evolutionarily conserved genes involved in the hindbrain development of zebrafish and mice [START_REF] Makki | Identification of novel Hoxa1 downstream targets regulating hindbrain, neural crest and inner ear development[END_REF]. However, the role of HNF1B in the human brain has yet to be established.
In our study, because of geographical remoteness, only a small number of patients had access to detailed neurological evaluation. However, the absence of selection bias is supported by the similar spectrum of HNF1B-related syndrome in patients evaluated by either examination or questionnaire (Table S2; see supplementary material associated with the article online). Moreover, the accuracy of the diagnosis made by referring physicians-ID vs no ID-was confirmed in all patients who underwent neurological evaluations.
Conclusion
ID is more frequent in adults with HNF1B-MODY than in the general population or in patients with other diabetes subtypes. Moreover, it may affect patients with HNF1B point mutations as well as those with 17q12 deletion. Further studies are needed to refine the cognitive phenotypes of HNF1B-related syndrome and to precisely define the role of HNF1B itself in their occurrence.
Table 1
Main characteristics of 107 HNF1B-MODY patients according to the presence (+) or absence (-) of intellectual disability (ID).
Columns: Total; ID+; ID-; P (a).
Values are expressed as n or as mean ± SD.
(a) ID+ vs ID-. (b) Estimated glomerular filtration rate <60 mL/min/1.73 m² [Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula], or dialysis or renal transplantation.
p CH de Valenciennes, 59322 France q CHU Pellegrin, 33076 Bordeaux, France r CHU Godinne, UCL Namur, 5530 Belgium s CHU Tenon, 75020 Paris, France t CH de Roanne, 42300 France u CH Belle-Isle, 57045 Metz, France v CH Saint-Joseph, 75014 Paris, France w CH de Lannion-Trestel, 22300 France x CH Louis Pasteur, 28630 Chartres, France y CH Emile Muller, 68100 Mulhouse, France z CH Saint-Philibert, 59160 Lomme, France aa CH Laënnec, 60100 Creil, France bb CHU de Caen, 14003 France cc CH de Compiègnes, 60200 France dd CHU de Strasbourg, 67000 France ee CHU de Poitiers, 86021 France ff CH Lucien Hussel, 38200 Vienne, France gg CH Bretagne Sud, Lorient, 56322 France hh CH de Blois, 41016 France ii CHIC Poissy-Saint-Germain-en-Laye, 78300 France jj CHU Cochin, 75014 Paris, France kk CHU Haut-Lévêque, 33600 Bordeaux, France ll CHU Louis Pradel, 69500 Lyon, France mm CHRU Clocheville, 37000 Tours, France nn CH d'Avignon, 84000 France oo CH Robert Bisson, 14100 Lisieux, France pp CHU Ambroise Paré, 92104 Boulogne, France qq CH de Saint-Nazaire, 44600 France rr CH Sud-Francilien, 91100 Corbeil, France ss CHU de Nantes, 44093 France tt CHRU Jean Minjoz, 25030 Besançon, France uu CHU d'Angers, 49100 France vv CHU de Brest, 29200 France ww CHU de la Conception, 13005 Marseille, France xx CHRU de Lille, 59000 France yy CH Manchester, 08011 Charleville-Mézières, France zz CHU Lapeyronie, 34090 Montpellier, France
Acknowledgements
We thank A. Faudet, C. Vaury and S. Clauin, of the Centre of Molecular Genetics, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, for their technical assistance.
Funding source
None.
Disclosure of interest
The authors declare that they have no competing interest.
Appendix A. Supplementary data
Supplementary material (Tables S1 and S2) related to this article can be found, in the online version, at http://dx.doi.org/10.1016/j.diabet.2016.10.003. |
01721163 | en | [
"info"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01721163v2/file/final-desweb2018.pdf | Paweł Guzewicz
email: pawel.guzewicz@telecom-paristech.fr
Ioana Manolescu
email: ioana.manolescu@inria.fr
Quotient RDF Summaries Based on Type Hierarchies
Summarization has been applied to RDF graphs to obtain a compact representation thereof, easier to grasp by human users. We present a new brand of quotient-based RDF graph summaries, whose main novelty is to summarize together RDF nodes belonging to the same type hierarchy. We argue that such summaries bring more useful information to users about the structure and semantics of an RDF graph.
I. INTRODUCTION
The structure of RDF graphs is often complex and heterogeneous, making them hard to understand for users who are not familiar with them. This problem has been encountered in the past in the data management community, when dealing with other semi-structured graph data formats, such as the Object Exchange Model (OEM, in short) [START_REF] Papakonstantinou | Object exchange across heterogeneous information sources[END_REF]. Structural summaries for RDF graphs. To help discover and exploit such graphs, [START_REF] Goldman | Dataguides: Enabling query formulation and optimization in semistructured databases[END_REF], [START_REF] Nestorov | Representative objects: Concise representations of semistructured, hierarchical data[END_REF] have proposed using Dataguide summaries to represent compactly a (potentially large) data graph by a smaller one, computed from it. In contrast with relational databases where the schema is fixed before it is populated with data (a priori schema), a summary is computed from the data (a posteriori schema). Each node from the summary graph represents, in some sense, a set of nodes from the input graph. Many other graph summarization proposals have been made, for OEM [START_REF] Milo | Index structures for path expressions[END_REF], later for XML trees with ID-IDREF links across tree nodes (thus turning an XML database into a graph) [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF], and more recently for RDF [START_REF] Udrea | GRIN: A graph based RDF index[END_REF], [START_REF] Gurajada | Using graph summarization for join-ahead pruning in a distributed RDF engine[END_REF], [START_REF] Čebirić | Query-oriented summarization of RDF graphs (demonstration)[END_REF], [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF], [START_REF] Palmonari | ABSTAT: linked data summaries with abstraction and statistics[END_REF]; many more works have appeared in this area, some of which are presented in a recent tutorial [START_REF] Khan | Summarizing static and dynamic big graphs[END_REF]. Related areas are concerned with graph compression, e.g. [START_REF] Sadri | Shrink: Distance preserving graph compression[END_REF], ontology summarization [START_REF] Troullinou | Ontology understanding without tears: The summarization approach[END_REF] (focusing more on the graph semantics than on its data) etc. Quotient-based summaries are a particular family of summaries, computed based on a (summary-specific) notion of equivalence among graph nodes. Given an equivalence relation ≡, for each equivalence class C (that is, maximal set of graph nodes comprising nodes all equivalent to each other), the summary has exactly one node n C in the summary. 
Example of quotient-based summaries include [START_REF] Milo | Index structures for path expressions[END_REF], [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF], [START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF], [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Čebirić | Query-oriented summarization of RDF graphs (demonstration)[END_REF], [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]; other summaries (including Dataguides) are not quotient-based.
This work is placed within the quotient-based RDF summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]. That framework adapts the principles of quotient-based summarization to RDF graphs, in particular preserves the semantics (ontology), which may come with an RDF graph, in its summary. This is important as it guarantees that any summary defined within the framework is representative, that is: a query having answers on an RDF graph, has answers on its summary. This allows to use summaries as a first user interface with the data, guiding query formulation. Note that here, query answers take into account both the data explicitly present in the RDF graph, and the data implicitly present in the graph, through reasoning based on the explicit data and the graph's ontology.
Two RDF summaries introduced in [START_REF]Query-oriented summarization of RDF graphs[END_REF] have been subsequently [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] redefined as quotients. They differ in their treatment of the types which may be attached to RDF graph nodes. One is focused on summarizing the structure (non-type triples) first and copies type information to summary nodes afterwards; this may erase the distinctions between resources of very different types, leading to confusing summaries. The other starts by separating nodes according to their sets of types (recall that an RDF node may have one or several types, which may or may not be related to each other). This ignores the relationships which may hold among the different classes present in an RDF graph. Contribution and outline. To simultaneously avoid the drawbacks of the two proposals above, in this paper we introduce a novel summary based on the same framework. It features a refined treatment of the type information present in an RDF graph, so that RDF graph nodes which are of related types are represented together in the summary. We argue that such a summary is more intuitive and more informative to potential users of the RDF graph.
The paper is organized as follows. We recall the RDF graph summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF] which frames our work, as well as the two abovementioned concrete summaries. Then, we formally define our novel summary, and briefly discuss a summarization algorithm and its concrete applicability.
II. RDF GRAPHS AND SUMMARIES
A. RDF and RDF Schema
We view an RDF graph G as a set of triples of the form s p o. A triple states that its subject s has the property p, and the value of that property is the object o. We consider only well-formed triples, as per the RDF specification [START_REF] W3c | Resource description framework[END_REF], using uniform resource identifiers (URIs), typed or untyped literals (constants) and blank nodes (unknown URIs or literals). The RDF standard [START_REF] W3c | Resource description framework[END_REF] includes the property rdf∶type (τ in short), which allows specifying the type(s) or class(es), of a resource. Each resource can have zero, one or several types, which may or may not be related. We call the set of G triples, whose property is τ , the type triples of G, denoted TG. RDF Schema and entailment. G may include a schema (ontology), denoted SG, and expressed through RDF Schema (RDFS) triples using one of the following standard properties: subclass, subproperty, domain and range, which we denote by the symbols ≺ sc , ≺ sp , ↩ d and ↪ r , respectively. Our proposal is beneficial in the presence of ≺ sc schema statements; we do not constrain SG in any way.
RDF entailment is the mechanism through which implicit RDF triples are derived from explicit triples and schema information. In this work, we consider four entailment rules, each based on one of the four properties above: (i) c 1 ≺ sc c 2 means any resource of type c 1 is also of type c 2 ; (ii) p 1 ≺ sp p 2 means that as long as a triple s p 1 o belongs to G, the triple s p 2 o also holds in G; (iii) p ↩ d c means that any resource s having the property p in G is of type c, that is, s τ c holds in G; finally (iv) p ↪ r c means that any resource that is a value of the property p in G, is also of type c.
The fixpoint obtained by applying entailment rules on the triples of G and the schema rules in SG until no new triple is entailed, is termed saturation (or closure) of G and denoted G ∞ . The saturation of an RDF graph is unique (up to blank node renaming), and does not contain implicit triples (they have all been made explicit by saturation).
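As an illustration of the saturation process, here is a minimal sketch (ours, not the authors' implementation) that applies the four entailment rules above to a set of triples until the fixpoint is reached; the property names "type", "sc", "sp", "d" and "r" are stand-ins for rdf:type, rdfs:subClassOf, rdfs:subPropertyOf, rdfs:domain and rdfs:range.

```python
# Minimal RDFS saturation sketch: applies the four entailment rules
# (subclass, subproperty, domain, range) until no new triple is derived.
def saturate(triples):
    g = set(triples)
    changed = True
    while changed:
        changed = False
        sc = {(s, o) for s, p, o in g if p == "sc"}
        sp = {(s, o) for s, p, o in g if p == "sp"}
        dom = {(s, o) for s, p, o in g if p == "d"}
        rng = {(s, o) for s, p, o in g if p == "r"}
        new = set()
        for s, p, o in g:
            if p == "type":
                for c1, c2 in sc:
                    if o == c1:
                        new.add((s, "type", c2))    # rule (i)
            else:
                for p1, p2 in sp:
                    if p == p1:
                        new.add((s, p2, o))         # rule (ii)
                for p1, c in dom:
                    if p == p1:
                        new.add((s, "type", c))     # rule (iii)
                for p1, c in rng:
                    if p == p1:
                        new.add((o, "type", c))     # rule (iv)
        if not new.issubset(g):
            g |= new
            changed = True
    return g

# Tiny example in the spirit of Figure 1: a PhDStudent is both a Student
# and an Instructor, so Bob's implicit types become explicit.
example = {("PhDStudent", "sc", "Student"), ("PhDStudent", "sc", "Instructor"),
           ("Bob", "type", "PhDStudent")}
print(sorted(saturate(example)))
```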
We view an RDF graph G as: G = SG ⊍ TG ⊍ DG , where the schema SG and the type triples TG have been defined above; DG contains all the remaining triples, whose property is neither τ nor ≺ sc , ≺ sp , ↩ d or ↪ r . We call DG the data triples of G.
In the presence of an RDFS ontology, the semantics of an RDF graph is its saturation; in particular, the answers to a query posed on G must take into account all triples in G ∞ [START_REF] W3c | Resource description framework[END_REF].
Figure 1 shows an RDF graph we will use for illustration in the paper. Schema nodes and triples are shown in blue; type triples are shown in dotted lines; boxed nodes denote URIs of classes and instances, while d1, d2, e1 etc. denote literal nodes; "desc" stands for "description".
B. Quotient RDF summaries
We recall the summarization framework introduced in [START_REF]A framework for efficient representative summarization of RDF graphs[END_REF]. In a graph G, a class node is an URI appearing as subject or object in a ≺ sc triple, as object in a ↩ d or ↪ r triple, or as object in a τ triple. A property node is an URI appearing as subject or object in a ≺ sp triple, or as subject in a ↩ d or ↪ r triple. The framework brings a generic notion of equivalence among RDF nodes:
Definition 1: (RDF EQUIVALENCE) Let ≡ be a binary relation between the nodes of an RDF graph. We say ≡ is an RDF equivalence relation iff (i) ≡ is reflexive, symmetric and transitive, (ii) any class node is equivalent w.r.t. ≡ only to itself, and (iii) any property node is equivalent w.r.t. ≡ only to itself.
Graph nodes which are equivalent will be summarized (or represented) by the same node in the summary. The reason behind class and property nodes being only equivalent to themselves in every RDF equivalence relation, is to ensure that each such node is preserved in the summary, as they appear in the schema and carry important information for the graph's semantics. A summary is defined as follows:
Definition 2: (RDF SUMMARY) Given an RDF graph G and an RDF node equivalence relation ≡, the summary of G by ≡ is an RDF graph denoted G ≡ and defined as follows:
• G ≡ contains exactly one node for each equivalence class of G nodes through ≡; each such node has a distinct, "fresh" URI (that does not appear in G).
• For each triple s p o ∈ G such that s ≡ , o ≡ are the G ≡ nodes corresponding to the equivalence classes of s and o, the triple s ≡ p o ≡ belongs to G ≡ .
The above definition can also be stated "G ≡ is the quotient graph of G by the equivalence relation ≡", based on the classical notion of quotient graph1 . We make two observations:
• Regardless of the chosen ≡, all SG triples are also part of G ≡ , as class and property nodes are represented by themselves, and thanks to the way G ≡ edges are defined; indeed, G and G ≡ have the same schema;
• No particular treatment is given to type triples: how to take them into account is left to each individual ≡. Different RDF equivalence relations lead to different summaries. At one extreme, if all data nodes are equivalent, the summary has a single data node; on the contrary, if ≡ is "empty" (each node is equivalent only to itself), the summary degenerates into G itself. Well-studied equivalence relations for graph quotient summaries are based on the so-called forward, backward, or forward and backward (FB) bisimulation [START_REF] Milo | Index structures for path expressions[END_REF], [START_REF] Chen | D(K)-index: An adaptive structural summary for graph-structured data[END_REF], [START_REF] Kaushik | Covering indexes for branching path queries[END_REF], [START_REF] Li | Indexing and querying XML data for regular path expressions[END_REF]. It has been noted though, e.g. in [START_REF] Khatchadourian | Constructing bisimulation summaries on a multi-core graph processing framework[END_REF], that RDF graphs exhibit so much structural heterogeneity that bisimulationbased summaries are very large, almost of the size of G, thus not very useful. In contrast, [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF] introduced ≡ relations which lead to compact summaries, many orders of magnitude smaller than the original graphs.
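Going back to Definition 2, the quotient construction can be sketched as follows (our illustration, not code from the cited framework): given the triples and a user-supplied function mapping each node to its equivalence class, one fresh node is created per class of data nodes, class and property nodes are expected to be mapped to themselves, and every triple is re-targeted to the representatives.

```python
# Sketch of quotient summarization (Definition 2): each triple s p o yields
# a summary triple between the representatives of s and o.
from itertools import count

def quotient_summary(triples, class_of):
    """triples: iterable of (s, p, o); class_of: node -> equivalence-class id.
    Class and property nodes should be mapped to themselves by class_of."""
    fresh = count()
    rep = {}                          # equivalence-class id -> summary node
    def node_of(x):
        c = class_of(x)
        if c not in rep:
            # nodes equivalent only to themselves are kept as-is; other
            # classes get a fresh URI (a readable stand-in here)
            rep[c] = x if c == x else f"summary:n{next(fresh)}"
        return rep[c]
    return {(node_of(s), p, node_of(o)) for s, p, o in triples}

# Example: Carole and David are equivalent; everything else represents itself.
eq = {"Carole": "instructors", "David": "instructors"}
data = {("Carole", "webpage", "w1"), ("David", "webpage", "w2")}
print(quotient_summary(data, lambda x: eq.get(x, x)))
```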
C. Types in summarization: first or last?
Let us consider how type triples can be used in quotient RDF summaries. Two approaches have been studied in the literature, and in particular in quotient summaries. The approach we will call data-first focuses on summarizing the data (or structure) of G, and then carries (or copies) the possible types of G nodes, to the summary nodes representing them. Conversely, type-first approaches summarize graph nodes first (or only) by their types. Below, we recall two quotient summaries described in [START_REF]Query-oriented summarization of RDF graphs[END_REF], [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF], which are the starting point of this work; they are both very compact, and illustrate the data-first and type-first approaches respectively. They both rely on the notion of property cliques:
Definition 3: (PROPERTY RELATIONS AND CLIQUES) Let p1, p2 be two data properties in DG:
1) p1, p2 ∈ G are source-related iff either: (i) a data node in DG is the subject of both p1 and p2, or (ii) DG holds a data node r and a data property p3 such that r is the subject of p1 and p3, with p3 and p2 being source-related.
2) p1, p2 ∈ G are target-related iff either: (i) a data node in DG is the object of both p1 and p2, or (ii) DG holds a data node r and a data property p3 such that r is the object of p1 and p3, with p3 and p2 being target-related.
A maximal set of properties in DG which are pairwise source-related (respectively, target-related) is called a source (respectively, target) property clique.
For example, in Figure 1, the properties email and webpage are source-related since Alice is the subject of both; webpage and officeHours are source-related due to Bob; also due to Alice, registeredIn and attends are source-related to the above properties, leading to a source clique SC1 = {attends, email, webpage, officeHours, registeredIn}. Another source clique is SC2 = {desc, givenIn}.
It is easy to see that the set of non-empty source (or target) property cliques is a partition over the data properties of DG. Further, all data properties of a resource r ∈ G are in the same source clique, which we denote SC(r); similarly, all the properties of which r is a value are in the same target clique, denoted TC(r). If r is not the value of any property (respectively, has no property), we consider its target (respectively, source) is ∅. For instance, in our example, SC1 is the source clique of Alice, Bob, Carole and David, while SC2 is the source clique of the BigDataMaster and of the HadoopCourse.
Definition 4: (WEAK EQUIVALENCE) Two data nodes are weakly equivalent, denoted n1 ≡W n2, iff: (i) they have the same non-empty source or non-empty target clique, or (ii) they both have empty source and empty target cliques, or (iii) they are both weakly equivalent to another node of G.
Definition 5: (WEAK SUMMARY) The weak summary of the graph G, denoted GW, is the RDF summary obtained from the weak equivalence ≡W.
Figure 2 shows the weak summary of our sample RDF graph. The URIs W1 to W6 are "new" summary nodes, representing literals and/or URIs from G. Thus, W3 represents Alice, Bob, Carole and David together, due to their common source clique SC1. W3 represents the course and the master program, due to their common source clique SC2. Note that the givenIn edge from G leads to a summary edge from W2 to itself; also, W1 carries over the types of the nodes it represents, thus it is both of type MasterProgram and MasterCourse. This example shows that data-first summarization may represent together G resources whose types clearly indicate their different meaning; this may be confusing.
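One possible way to compute source (or target) cliques is a union-find over properties, merging two properties whenever they share a subject (respectively an object). The sketch below is our own illustration on a fragment of Figure 1, not the algorithm of the cited works.

```python
# Sketch: source/target property cliques via union-find. Two properties are
# merged when some data node is the subject (resp. object) of both.
from collections import defaultdict

def cliques(triples, position):
    """position=0 groups properties sharing a subject (source cliques),
    position=2 groups properties sharing an object (target cliques)."""
    parent = {}
    def find(p):
        parent.setdefault(p, p)
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p
    def union(a, b):
        parent[find(a)] = find(b)
    by_node = defaultdict(list)
    for t in triples:
        by_node[t[position]].append(t[1])
    for props in by_node.values():
        for p in props[1:]:
            union(props[0], p)
    groups = defaultdict(set)
    for t in triples:
        groups[find(t[1])].add(t[1])
    return list(groups.values())

data = [("Alice", "email", "e1"), ("Alice", "webpage", "w1"),
        ("Bob", "webpage", "w2"), ("Bob", "officeHours", "h2"),
        ("HadoopCourse", "desc", "d1"), ("HadoopCourse", "givenIn", "BigDataMaster")]
print(cliques(data, 0))   # {email, webpage, officeHours} and {desc, givenIn}
```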
In contrast, the typed weak [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] summary recalled below illustrates the type-first approach:
Let ≡ T be an RDF equivalence relation which holds on two nodes iff they have the exact same set of types.
Let ≡ UW be an RDF equivalence relation which holds on two nodes iff (i) they have no type, and (ii) they are weakly equivalent.
Definition 6: (TYPED WEAK SUMMARY) The typed weak summary of an RDF graph G, denoted G TW , is the summary through ≡ UW of the summary through ≡ T of G:
G_TW = (G_≡T)_≡UW. This double-quotient summarization acts as follows. First, nodes are grouped by their sets of types (inner quotient through ≡T); second, untyped nodes only are grouped according to weak (structural) equivalence.
For instance, our sample G has six typed data nodes (Alice to David, BigDataMaster and HadoopCourse), each of which has a set of exactly one type; all these types are different. Thus, ≡ T is empty, and G TW (drawing omitted) has eight typed nodes U T W 1 to U T W 8 , each with a distinct type and the property(ies) of one of these nodes. We now consider G's eight untyped data nodes. We have d1 ≡ W d2 due to their common target clique {desc}, and similarly w1 ≡ W w2 and h2 ≡ W h3 ≡ W h4. Thus, G TW has four untyped nodes, each of which is an object of desc, email, webpage and respectively officeHours triples.
The typed weak summary, as well as other type-first summaries, e.g. [START_REF] Campinas | Efficiency and precision trade-offs in graph summary algorithms[END_REF], also have limitations:
• They are defined based on the type triples of G, which may change through saturation, leading to different G TW summaries for conceptually the same graph (as all G leading to the same G ∞ are equivalent). Thus, for a typefirst summary to be most meaningful, one should build it on the saturated graph G ∞ . Note the reason for saturation at Figure 8 in [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF].
• They do not exploit the relationships which the ontology may state among the types. For instance, AssistantProfessor nodes like Carole are summarized separately from Professors like David, although they are all instructors.
III. SUMMARIZATION AWARE OF TYPE HIERARCHIES
A. Novel type-based RDF equivalence
Our first goal is to define an RDF equivalence relation which:
1) takes type information into account, thus belongs to the "types-first" approach; 2) leads (through Definition 2) to a summary which represents together, to the extent possible (see below), nodes that have the same most general type. Formally, let C = {c 1 , c 2 , . . . , } be the set of class nodes present in G (that is, in SG and/or in TG). We can view these nodes as organized in a directed graph where there is an edge c 1 → c 2 as long as G's saturated schema SG ∞ states that c 1 is a subclass of c 2 . By a slight abuse of notation, we use C to also refer to this graph2 . In principle, C could have cycles, but this does not appear to correspond to meaningful schema designs. Therefore, we assume without loss of generality that C is a directed acyclic graph (DAG, in short) 3 . In Figure 1, C is the DAG comprising the eight (blue) class nodes and edges between them; this DAG has four roots.
First, assume that C is a tree, e.g., with Instructor as a root type and PhDStudent, AssistantProfessor as its subclasses. In such a case, we would like instances of all the abovementioned types to be represented together, because they are all instances of the top type Instructor. This extends easily to the case when C is a forest, e.g., a second type hierarchy in C could feature a root type Paper whose subclasses are ConferencePaper, JournalPaper, etc.; in this case, we aim to represent all these instances together because they are all instances of Paper.
In general, though, C may not be a forest, but instead it may be a graph where some classes have multiple superclasses, potentially unrelated. For instance, in Figure 1, PhDStudent has two superclasses, Student and Instructor. Therefore, it is not possible to represent G nodes of type PhDStudent based on their most general type, because they have more than one such type. Representing them twice (once as Instructor, once as Student) would violate the framework (Definition 2), in which any summary is a quotient and thus, each G node must be represented by exactly one summary node.
To represent resources as much as possible according to their most general type, we proceed as follows.
Definition 7: (TREE COVER) Given a DAG C, we call a tree cover of C a set of trees such that: (i) each node in C appears in exactly one tree; (ii) together, they contain all the nodes of C; and (iii) each C edge appears either in one tree or connects the root of one tree to a node in another.
Given C admits many tree covers, however, it can be shown that there exists a tree cover with the least possible number of trees, which we will call min-size cover. This cover can be computed in a single traversal of the graph by creating a tree root exactly from each C node having two supertypes such that none is a supertype of the other, and attaching to it all its descendants which are not themselves roots of another tree. For instance, the RDF schema from Figure 1 leads to a min-size cover of five trees:
• A tree rooted at Instructor and the edges connecting it to its children AssistantProfessor and Professor;
• A single-node tree rooted at PhDStudent;
• A tree rooted at Student with its child MasterStudent;
• A single-node tree for MasterProgram and another for MasterCourse.
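A direct transcription of this single-traversal construction is sketched below (our illustration): a class becomes a tree root iff it has no superclass at all, or has two direct superclasses neither of which is a superclass of the other; every other class is attached to the tree of its closest root ancestor. The class names follow Figure 1, and treating only direct superclasses is a simplification we assume here.

```python
# Sketch of the min-size tree cover: compute the set of tree roots.
def min_size_cover_roots(supers, all_supers):
    """supers: class -> set of direct superclasses;
    all_supers: class -> set of all (transitive) superclasses."""
    roots = set()
    for c, sup in supers.items():
        if not sup:
            roots.add(c)                      # a root of the DAG itself
        elif any(s1 != s2 and s1 not in all_supers[s2] and s2 not in all_supers[s1]
                 for s1 in sup for s2 in sup):
            roots.add(c)                      # two unrelated superclasses
    return roots

# Class hierarchy of Figure 1 (direct superclasses; already transitive here).
supers = {"Instructor": set(), "Student": set(), "MasterProgram": set(),
          "MasterCourse": set(), "AssistantProfessor": {"Instructor"},
          "Professor": {"Instructor"}, "MasterStudent": {"Student"},
          "PhDStudent": {"Student", "Instructor"}}
all_supers = {c: set(s) for c, s in supers.items()}
print(sorted(min_size_cover_roots(supers, all_supers)))
# -> ['Instructor', 'MasterCourse', 'MasterProgram', 'PhDStudent', 'Student']
```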
Figure 3 illustrates min-size covers on a more complex RDF schema, consisting of the types A to Q. Every arrow goes from a type to one of its supertypes (for readability, the figure does not include all the implicit subclass relationships, e.g., that E is also a subclass of H, I, J etc.). The pink areas each denote a tree in the corresponding min-size cover. H and L are tree roots because they have multiple, unrelated supertypes.
To complete our proposal, we need to make an extra hypothesis on G:
( †) Whenever a data node n is of two distinct types c 1 , c 2 which are not in the same tree in the min-size tree cover of C, then (i) c 1 and c 2 have some common subclasses, (ii) among these, there exists a class c 1,2 that is a superclass of all the others, and (iii) n is of type c 1,2 .
For instance, in our example, hypothesis ( †) states that if a node n is an Instructor and a Student, these two types must have a common subclass (in our case, this is PhDStudent), and n must be of type PhDStudent. The hypothesis would be violated if there was another common subclass of Instructor and Student, say MusicLover4 , that was neither a subclass of PhDStudent nor a superclass of it.
( †) may be checked by a SPARQL query on G. While it may not hold, we have not found such counter-examples in a set of RDF graphs we have examined (see Section IV). In particular, ( †) immediately holds in the frequent case when C is a tree (taxonomy) or, more generally, a forest: in such cases, the min-size cover of C is exactly its set of trees, and any types c 1 , c 2 of a data node n are in the same tree.
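In the same spirit, the hypothesis can be verified programmatically once the class DAG and the node types are loaded. The sketch below is ours (not the authors' SPARQL query); it checks conditions (i)-(iii) for every node carrying two types in different trees of the min-size cover, and it takes subclass sets reflexively (each class counts as a subclass of itself), which is our reading of the hypothesis.

```python
# Sketch of a programmatic check of hypothesis (†).
def check_dagger(node_types, subclasses, all_supers, tree_of):
    """node_types: node -> set of types; subclasses: class -> reflexive,
    transitive set of subclasses; all_supers: class -> set of all superclasses;
    tree_of: class -> identifier of its tree in the min-size cover."""
    for n, types in node_types.items():
        for c1 in types:
            for c2 in types:
                if c1 >= c2 or tree_of[c1] == tree_of[c2]:
                    continue
                common = subclasses[c1] & subclasses[c2]
                if not common:
                    return False, (n, c1, c2)          # (i) fails
                tops = [c for c in common
                        if all(c in all_supers[d] or c == d for d in common)]
                if not tops or tops[0] not in types:
                    return False, (n, c1, c2)          # (ii) or (iii) fails
    return True, None

# Figure 1 example: PhDStudent is the greatest common subclass of Student
# and Instructor, and Bob carries it, so (†) holds.
subclasses = {"Student": {"Student", "MasterStudent", "PhDStudent"},
              "Instructor": {"Instructor", "AssistantProfessor", "Professor", "PhDStudent"},
              "PhDStudent": {"PhDStudent"}}
all_supers = {"PhDStudent": {"Student", "Instructor"}, "Student": set(), "Instructor": set()}
tree_of = {"Student": "T_Student", "Instructor": "T_Instructor", "PhDStudent": "T_PhD"}
node_types = {"Bob": {"PhDStudent", "Student", "Instructor"}}
print(check_dagger(node_types, subclasses, all_supers, tree_of))  # (True, None)
```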
If (†) holds, we can state:
Lemma 1 (Lowest branching type): Let G be an RDF graph satisfying (†), n be a data node in G, cs_n be the set of types of n in G, and cs_n^∞ be the classes from cs_n together with all their superclasses (according to the saturated schema of G). Assume that cs_n^∞ ≠ ∅. Then there exists a type lbt_n, called lowest branching type, such that:
• cs_n^∞ = cs'_n ⊍ cs''_n, where lbt_n ∈ cs'_n and cs''_n may be empty;
• the types in cs'_n (if any) can be arranged in a tree according to the ≺sc relation between them, and the most general one is lbt_n;
• if cs''_n is not empty, it is at least of size two, and all its types are superclasses of lbt_n.
Proof: Assume to the contrary that there exists an RDF graph G_1 satisfying (†), a node n in G_1 with set of types cs_n, such that cs_n^∞ ≠ ∅ (the set of types of n with all their supertypes, according to the saturated schema of G_1) and there is no lowest branching type for n.
Let 𝒢 be the set of all such RDF graphs, and let G ∈ 𝒢 be a graph containing a node n that violates the lemma and such that |cs_n^∞| is the smallest across all such lemma-violating nodes n in any graph from 𝒢.
Let k = |cs_n^∞|. Note that k > 0 by definition. Let us consider the cases:
1) k = 1: In this case, the lemma trivially holds.
2) k ≥ 2: In this case, let t_1, . . . , t_k be the types of node n (their order is not important). Consider the graph G' which is the same as G except that node n does not have type t_k. From the way we chose G and G', G' satisfies the lemma, thus there exists a lowest branching type lbt_n for n in G'. Now, add t_k back to the types of n in G'. There are three possibilities: a) t_k is a subclass of lbt_n; then lbt_n is still the lowest branching type after this addition. b) t_k is a superclass of lbt_n; if it is the only superclass of lbt_n then t_k is the new lowest branching type, otherwise n still admits the lowest branching type lbt_n. c) t_k is neither a subclass nor a superclass of lbt_n; then it is in another tree of the min-size cover of G, thus by (†) it follows that t_k and some other type among t_1, . . . , t_{k-1} have a common subtype which serves as a lowest branching type for n.
From the above discussion we conclude that the node n for which k = |cs_n^∞| is not the lemma counterexample with the smallest k, which contradicts the assumption we made when picking it. Therefore no graph exists in 𝒢, thus all RDF graphs satisfying (†) satisfy the lemma. ◻
For instance, let n be Bob in Figure 1; then cs_n is {PhDStudent}, thus cs_n^∞ is {PhDStudent, Student, Instructor}. In this case, lbt_n is PhDStudent, cs'_n is {PhDStudent} and cs''_n is {Student, Instructor}. If we take n to be Carole, cs_n^∞ is {AssistantProfessor, Instructor}; no type from this set has two distinct superclasses, thus cs''_n must be empty, lbt_Carole is Instructor, and cs'_n is {AssistantProfessor, Instructor}. By a similar reasoning, lbt_David is Instructor, and lbt_Alice is Student. When n has a type without subclasses or superclasses, such as BigDataMaster, cs''_n is empty and cs'_n is {lbt_n}, the only type of n. Thus, lbt_BigDataMaster is MasterProgram and lbt_HadoopCourse is MasterCourse. For a more complex example, recall the RDF schema in Figure 3, and let n be a node of type E in an RDF graph having this schema. In this case, cs_n^∞ is {E, G, H, B, I, J}, lbt_n is H, cs'_n is {E, G, H} while cs''_n is {B, I, J}.
Based on Lemma 1, we define our novel notion of equivalence, reflecting the hierarchy among the types of G data nodes:
Definition 8: (TYPE-HIERARCHY EQUIVALENCE) Type-hierarchy equivalence, denoted ≡TH, is an RDF node equivalence relation defined as follows: two data nodes n_1 and n_2 are type-hierarchy equivalent, noted n_1 ≡TH n_2, iff lbt_{n_1} = lbt_{n_2}.
From the above discussion, it follows that Carole ≡TH David, matching the intuition that they are both instructors and do not belong to other type hierarchies. In contrast, PhD students (such as Bob) are only type-hierarchy equivalent to each other; they are set apart by their dual Student and Instructor status. Master students such as Alice are only type-hierarchy equivalent among themselves, as they only belong to the student type hierarchy. Every other typed node of G is only type-hierarchy equivalent to itself.
B. RDF summary based on type hierarchy equivalence
Based on ≡TH defined above, and the ≡UW structural equivalence relation (two nodes are ≡UW if they have no types, and are weakly equivalent), we introduce a novel summary belonging to the "type-first" approach:
Definition 9: (WEAK TYPE-HIERARCHY SUMMARY) The type hierarchy summary of G, denoted G_WTH, is the summary through ≡UW of the summary through ≡TH of G: G_WTH = (G_≡TH)_≡UW.
Figure 4 illustrates the G_WTH summary of the RDF graph in Figure 1. Different from the weak summary (Figure 2), it does not represent together nodes of unrelated types, such as BigDataMaster and HadoopCourse. At the same time, different from the typed weak summary of the same graph, it does not represent separately each individual, and instead it keeps Carole and David together as they only belong to the instructor type hierarchy. More summaries based on ≡TH could be obtained by replacing ≡UW with another RDF equivalence relation.
IV. ALGORITHM AND APPLICATIONS
A. Constructing the weak type-hierarchy summary
An algorithm which builds G WTH is as follows:
1) From SG, build C and its min-size cover.
2) For every typed node n of G, identify its lowest branching type lbt n and (the first time a given lbt n is encountered) create a new URI U RI lbtn : this will be the G WTH node representing all the typed G nodes having the same lbt n . 3) Build the weak summary of the untyped nodes of G, using the algorithm described in [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF]. This creates the untyped nodes in G WTH and all the triples connecting them.
4) Add type edges: for every triple n τ c in G, add (unless already in the summary) the triple U RI lbtn τ c to G WTH . 5) Connect the typed and untyped summary nodes: for every triple n 1 p n 2 in G such that n 1 has types in G and n 2 does not, add (unless already in the summary) the triple U RI lbtn 1 p U W n2 to G WTH , where U W n2 is the node representing n 2 , in the weak summary of the untyped part of G. Apply a similar procedure for the converse case (when n 1 has no types but n 2 does).
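Under the same assumptions, the five steps could be transcribed roughly as follows. This is our sketch of the procedure, not the authors' code: the lowest-branching-type function and the weak summary of step 3) are only stubbed, and the handling of data triples between two typed nodes is our own (assumed) extension of step 5).

```python
# Rough transcription of steps 2), 4) and 5) of the G_WTH construction.
# `lbt(n)` and `weak_node(n)` (representative of an untyped node in the weak
# summary built at step 3) are assumed to be provided; type triples use the
# property name "type".
def build_wth(triples, typed_nodes, lbt, weak_node, untyped_summary_triples):
    summary = set(untyped_summary_triples)              # step 3) already done
    uri_of = {}                                         # lbt -> summary URI
    def rep(n):
        if n in typed_nodes:
            return uri_of.setdefault(lbt(n), f"wth:{lbt(n)}")   # step 2)
        return weak_node(n)
    for s, p, o in triples:
        if p == "type" and s in typed_nodes:
            summary.add((rep(s), "type", o))            # step 4)
        elif p != "type" and (s in typed_nodes or o in typed_nodes):
            summary.add((rep(s), p, rep(o)))            # step 5) (our reading)
    return summary

# Tiny illustration: Carole and David share lbt = Instructor; w1 is untyped.
typed = {"Carole", "David"}
triples = {("Carole", "type", "AssistantProfessor"),
           ("David", "type", "Professor"),
           ("Carole", "webpage", "w1")}
print(build_wth(triples, typed, lbt=lambda n: "Instructor",
                weak_node=lambda n: "W1", untyped_summary_triples=set()))
```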
Step 1) is the fastest as it applies on the schema, typically orders of magnitude smaller than the data. The cost of the steps 2)-4) depend on the distribution of nodes (typed or untyped) and triples (type triples; data triples between typed/untyped nodes) in G. [START_REF] Čebirić | Query-Oriented Summarization of RDF Graphs[END_REF] presents an efficient, almost-linear time (in the size of G) weak summarization algorithm (step 3). The complexity of the other steps is linear in the number of triples in G, leading to an overall almost-linear complexity.
B. Applicability
To understand if G WTH summarization is helpful for an RDF graph, the following questions should be answered:
1) Does SG feature subclass hierarchies? If it does not, then G WTH reduces to the weak summary G TW . 2) Does SG feature a class with two unrelated superclasses? a) No: then C is a tree or a forest. In this case, G WTH represents every typed node together with all the nodes whose type belong to the same type hierarchy (tree). b) Yes: then, does G satisfy ( †)?
i) Yes: one can build G WTH to obtain a refined representation of nodes according to the lowest branching type in their type hierarchy. ii) No: G WTH is undefined, due to the lack of a unique representative for the node(s) violating ( †). Among the RDF datasets frequently used, DBLP 5 , the BSBM benchmark [START_REF] Bizer | The Berlin SPARQL Benchmark[END_REF], and the real-life Slegger ontology 6whose description has been recently published [START_REF] Hovland | Ontology-based data access to slegge[END_REF] exhibited subclass hierarchies. Further, BSBM graphs and the Slegger ontology feature multiple inheritance. BSBM graphs satisfy ( †). On Slegger we were unable to check this, as the data is not publicly shared; our understanding of the application though as described implies that ( †) holds.
An older study [START_REF] Magkanaraki | Benchmarking RDF schemas for the semantic web[END_REF] of many concrete RDF Schemas notes a high frequence of class hierarchies, of depth going up to 12, as well as a relatively high incidence of multiple inheritance; graphs with such schema benefit from G WTH summarization when our hypothesis ( †) holds.
Figure 1: Sample RDF graph.
Figure 2: Weak summary of the sample RDF graph in Figure 1.
Figure 3: Sample RDF schema and min-size cover of the corresponding C.
Figure 4: Weak type-hierarchy summary of the RDF graph in Figure 1. The roots of the trees in the min-size cover of C are underlined.
https://en.wikipedia.org/wiki/Quotient graph
Ontology languages such as RDF Schema or OWL feature a top type, that is a supertype of any other type, such as rdfs:Resource. We do not include such a generic, top type in C.
If C has cycles, the types in each cycle can all be seen as equivalent, as each is a specialization of all the other, and could be replaced by a single (new) type in a simplified ontology. The process can be repeated until C becomes a DAG, then the approach below can be applied, following which the simplified types can be restored, replacing the ones we introduced. We omit the details.
MusicLover may be a subclass of yet another class (distinct type c 3 in third other min-size tree) and it would still violate the hypothesis
http://dblp.uni-trier.de/
http://slegger.gitlab.io/ |
01516011 | en | [
"math.math-oc"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01516011v3/file/SIAM-REV.pdf | Joseph Frédéric Bonnans
email: eric.bonnans@inria.fr
Axel Kröner
email: axel.kroener@math.hu-berlin.de
Variational analysis
Keywords: finance, options, partial differential equations, variational formulation, parabolic variational inequalities. AMS subject classifications: 35K20, 35K85, 91G80
Introduction.
In this paper we consider variational analysis for the partial differential equations associated with the pricing of European or American options. For an introduction to these models, see Fouque et al., [START_REF] Fouque | Derivatives in financial markets with stochastic volatility[END_REF]. We will set up a general framework of variable volatility models, which is in particular applicable on the following standard models which are well established in mathematical finance.
The well-posedness of PDE formulations of variable volatility problems was studied in [START_REF] Achdou | Computational methods for option pricing[END_REF][START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF][START_REF] Achdou | A partial differential equation connected to option pricing with stochastic volatility: regularity results and discretization[END_REF][START_REF] Pironneau | Partial differential equations for option pricing, Handbook of numerical analysis[END_REF], and in the recent work [START_REF] Feehan | Schauder a priori estimates and regularity of solutions to boundary-degenerate elliptic linear second-order partial differential equations[END_REF][START_REF] Feehan | Degenerate-elliptic operators in mathematical finance and higher-order regularity for solutions to variational equations[END_REF].
Let the W i (t) be Brownian motions on a filtered probability space. The variable s denotes a financial asset, and the components of y are factors that influence the volatility:
(i) The Achdou-Tchou model [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF], see also Achdou, Franchi, and Tchou [START_REF] Achdou | A partial differential equation connected to option pricing with stochastic volatility: regularity results and discretization[END_REF]:
(1.1) ds(t) = r s(t) dt + σ(y(t)) s(t) dW_1(t),  dy(t) = θ(µ - y(t)) dt + ν dW_2(t), with the interest rate r, the volatility coefficient σ a function of the factor y whose dynamics involves a parameter ν > 0, and positive constants θ and µ.
(ii) The Heston model [START_REF] Heston | A closed-form solution for options with stochastic volatility with applications to bond and currency options[END_REF]: (1.2) ds(t) = s(t)( r dt + √(y(t)) dW_1(t) ),  dy(t) = θ(µ - y(t)) dt + ν √(y(t)) dW_2(t).
(iii) The Double Heston model, see Christoffersen, Heston and Jacobs [START_REF] Jacobs | The shape and term structure of the index option smirk: Why multifactor stochastic volatility models work so well[END_REF], and also Gauthier and Possamaï [START_REF] Gauthier | Efficient simulation of the double Heston model[END_REF]:
(1.3) ds(t) = s(t)( r dt + √(y_1(t)) dW_1(t) + √(y_2(t)) dW_2(t) ),
 dy_1(t) = θ_1(µ_1 - y_1(t)) dt + ν_1 √(y_1(t)) dW_3(t),
 dy_2(t) = θ_2(µ_2 - y_2(t)) dt + ν_2 √(y_2(t)) dW_4(t).
In the last two models we have similar interpretations of the coefficients;
in the double Heston model, denoting by •, • the correlation coefficients, we assume that there are correlations only between W 1 and W 3 , and W 2 and W 4 .
Consider now the general multiple factor model
(1.4) ds(t) = r s(t) dt + Σ_{k=1}^{N} f_k(y_k(t)) s(t)^{β_k} dW_k(t),
 dy_k(t) = θ_k(µ_k - y_k(t)) dt + g_k(y_k(t)) dW_{N+k}(t),  k = 1, . . . , N.
Here the y_k are volatility factors, f_k(y_k) represents the volatility coefficient due to y_k, and g_k(y_k) is a volatility coefficient in the dynamics of the kth factor with positive constants θ_k and µ_k. Let us denote the correlation between the ith and jth Brownian motions by κ_ij: this is a measurable function of (s, y, t) with value in [0, 1] (here s ∈ (0, ∞) and y_k belongs to either (0, ∞) or R), see below. We assume that we have nonzero correlations only between the Brownian motions W_k and W_{N+k}, for k = 1 to N, i.e. (1.5) κ_ij = 0 if i ≠ j and |j - i| ≠ N.
Note that, in some of the main results, we will assume for the sake of simplicity that the correlations are constant.
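For intuition, the general dynamics (1.4)-(1.5) can be simulated with a simple Euler-Maruyama scheme. The sketch below is ours and purely illustrative: it uses the GMH/Heston-type choice f_k(y) = √y, g_k(y) = ν_k √y, β_k = 1, a constant correlation between W_k and W_{N+k}, and arbitrary parameter values.

```python
# Euler-Maruyama simulation of the multiple-factor model (1.4) with the
# correlation structure (1.5): W_k is correlated only with W_{N+k}.
import numpy as np

def simulate(s0, y0, r, theta, mu, nu, corr, T=1.0, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    N, dt = len(y0), T / steps
    s, y = s0, np.array(y0, dtype=float)
    for _ in range(steps):
        z1 = rng.standard_normal(N)                 # drives W_1 .. W_N
        z2 = rng.standard_normal(N)                 # independent noise
        dW = z1 * np.sqrt(dt)
        dB = (corr * z1 + np.sqrt(1 - corr**2) * z2) * np.sqrt(dt)
        y = np.maximum(y, 0.0)                      # keep sqrt well defined
        s += r * s * dt + np.sum(np.sqrt(y) * s * dW)          # beta_k = 1
        y += theta * (mu - y) * dt + nu * np.sqrt(y) * dB
    return s, y

s_T, y_T = simulate(s0=100.0, y0=[0.04, 0.02], r=0.02,
                    theta=np.array([1.5, 1.0]), mu=np.array([0.04, 0.02]),
                    nu=np.array([0.3, 0.2]), corr=np.array([-0.5, -0.3]))
print(s_T, y_T)
```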
We apply the developed analysis to a subclass of stochastic volatility models, obtained by assuming that κ is constant and
(1.6) |f_k(y_k)| = |y_k|^{γ_k};  |g_k(y_k)| = ν_k |y_k|^{1-γ_k};  β_k ∈ (0, 1];  ν_k > 0;  γ_k ∈ (0, ∞).
This covers in particular a variant of the Achdou and Tchou model with multiple factors (VAT), when γ k = 1, as well as a generalized multiple factor Heston model (GMH), when γ k = 1/2, i.e., for k = 1 to N :
(1.7) VAT: f_k(y_k) = y_k,  g_k(y_k) = ν_k;   GMH: f_k(y_k) = √(y_k),  g_k(y_k) = ν_k √(y_k).
For a general class of stochastic volatility models with correlation we refer to Lions and Musiela [START_REF] Lions | Correlations and bounds for stochastic volatility models[END_REF].
The main contribution of this paper is variational analysis for the pricing equation corresponding to the above general class in the sense of the Feynman-Kac theory.
This requires in particular to prove continuity and coercivity properties of the corresponding bilinear form in weighted Sobolev spaces H and V , respectively, which have the Gelfand property and allow the application of the Lions and Magenes theory [START_REF] Lions | Non-homogeneous boundary value problems and applications[END_REF] recalled in Appendix A and the regularity theory for parabolic variational inequalities recalled in Appendix B. A special emphasis is given to the continuity analysis of the rate term in the pricing equation. Two approaches are presented, the standard one and an extension of the one based on the commutator of first-order differential operators as in Achdou and Tchou [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF], extended to the Heston model setting by Achdou and Pironneau [START_REF] Pironneau | Partial differential equations for option pricing, Handbook of numerical analysis[END_REF]. Our main result is that the commutator analysis gives stronger results for the subclass defined by (1.6), generalizing the particular cases of the VAT and GMH classes, see remarks 6.2 and 6.4. In particular we extend some of the results by [START_REF] Achdou | Variational analysis for the Black and Scholes equation with stochastic volatility[END_REF].
This paper is organized as follows. In section 2 we give the expression of the bilinear form associated with the original PDE, and check the hypotheses of continuity and semi-coercivity of this bilinear form. In section 3 we show how to refine this analysis by taking into account the commutators of the first-order differential operators
associated with the variational formulation. In section 4 we show how to compute the weighting function involved in the bilinear form. In section 5 we develop the results for a general class introduced in the next section. In section 6 we specialize the results to stochastic volatility models. The appendix recalls the main results of the variational theory for parabolic equations, with a discussion on the characterization of the V functional spaces in the case of one dimensional problems.
Notation. We assume that the domain Ω of the PDEs to be considered in the sequel of this paper has the following structure. Let (I, J) be a partition of {0, . . . , N }, with 0 ∈ J, and
(1.8) Ω := N Π k=0 Ω k ; with Ω k := R when k ∈ I, (0, ∞) when k ∈ J.
Let L 0 (Ω) denote the space of measurable functions over Ω. For a given weighting function ρ : Ω → R of class C 1 , with positive values, we define the weighted space
(1.9) L 2,ρ (Ω) := {v ∈ L 0 (Ω); Ω v(x) 2 ρ(x)dx < ∞},
which is a Hilbert space endowed with the norm
(1.10) v ρ := Ω v(x) 2 ρ(x)dx 1/2
.
By D(Ω) we denote the space of C ∞ functions with compact support in Ω. By H 2 loc (Ω) we denote the space of functions over Ω whose product with an element of D(Ω) belongs to the Sobolev space H 2 (Ω).
Besides, let Φ be a vector field over Ω (i.e., a mapping Ω → R n ). The first-order differential operator associated with Φ is, for u : Ω → R the function over Ω defined by
(1.11) Φ[u](x) := n i=0 Φ i (x) ∂u ∂x i (x), for all x ∈ Ω.
2. General setting. Here we compute the bilinear form associated with the original PDE, in the setting of the general multiple factor model (1.4). Then we will check the hypotheses of continuity and semi-coercivity of this bilinear form.
2.1. Variational formulation. We compute the bilinear form of the variational setting, taking into account a general weight function. We will see how to choose the functional spaces for a given ρ, and then how to choose the weight itself.
2.1.1. The elliptic operator. In financial models the underlying is solution of stochastic differential equations of the form
dX(t) = b(t, X(t))dt + nσ i=1 σ i (t, X(t))dW i . (2.1)
Here X(t) takes values in Ω, defined in (1.8). That is, X 1 corresponds to the s variable, and X k+1 , for k = 1 to N , corresponds to y k . We have that n σ = 2N . So, b and σ i , for i = 1 to n σ , are mappings (0, T ) × Ω → R n , and the W i , for i = 1 to n σ , are standard Brownian processes with correlation κ ij : (0, T ) × Ω → R
between W i and W j for i, j ∈ {1, . . . , n σ }. The n σ × n σ symmetric correlation matrix
κ(•, •) is nonnegative with unit diagonal: (2.2) κ(t, x) ⪰ 0; κ ii = 1, i = 1, . . . , n σ , for a.a. (t, x) ∈ (0, T ) × Ω.
Here, for symmetric matrices B and C of same size, by "C ⪰ B" we mean that C − B is positive semidefinite. The expression of the second order differential operator A corresponding to the dynamics (2.1) is, skipping the time and space arguments, for u : (0, T ) × Ω → R:
(2.3) Au := ru − b · ∇u − (1/2) ∑ nσ i,j=1 κ ij σ j ⊤ u xx σ i , where (2.4) σ j ⊤ u xx σ i := ∑ k,ℓ σ kj σ ℓi ∂ 2 u/(∂x k ∂x ℓ ), r(x, t
) represents an interest rate, and u xx is the matrix of second derivatives in space of u. The associated backward PDE for a European option is of the form
(2.5) -u(t, x) + A(t, x)u(t, x) = f (t, x), (t, x) ∈ (0, T ) × Ω; u(x, T ) = u T (x), x ∈ Ω,
with u the notation for the time derivative of u, u T (x) payoff at final time (horizon)
T and the r.h.s. f (t, x) represents dividends (often equal to zero).
In case of an American option we obtain a variational inequality; for details we refer to Appendix D. To obtain the bilinear form, we multiply Au by vρ for a test function v and integrate the second-order terms by parts; since v ∈ D(Ω) there will be no contribution from the boundary. We obtain
(2.7) -1 2 Ω σ j u xx σ i vκ ij ρ = 3 p=0 a p ij (u, v), with (2.8) a 0 ij (u, v) := 1 2 Ω n k, =1 σ kj σ i ∂u ∂x k ∂v ∂x κ ij ρ = 1 2 Ω σ j [u]σ i [v]κ ij ρ, (2.9) a 1 ij (u, v) := 1 2 Ω n k, =1 σ kj σ i ∂u ∂x k ∂(κ ij ρ) ∂x v = 1 2 Ω σ j [u]σ i [κ ij ρ] v ρ ρ, (2.10) a 2 ij (u, v) := 1 2 Ω n k, =1 σ kj ∂(σ i ) ∂x ∂u ∂x k vκ ij ρ = 1 2 Ω σ j [u](div σ i )vκ ij ρ,
(2.11)
a 3 ij (u, v) := 1 2 Ω n k, =1 ∂(σ kj ) ∂x σ i ∂u ∂x k vκ ij ρ = 1 2 Ω n k=1 σ i [σ kj ] ∂u ∂x k vκ ij ρ.
Also, for the contributions of the first and zero order terms resp. we get (2.12)
a 4 (u, v) := - Ω b[u]vρ; a 5 (u, v) := Ω ruvρ.
Set
(2.13)
a p := nσ i,j=1
a p ij , p = 0, . . . , 3.
The bilinear form associated with the above PDE is
(2.14) a(u, v) := 5 p=0 a p (u, v).
From the previous discussion we deduce the following.
Lemma 2.1. Let u ∈ H 2 loc (Ω) and v ∈ D(Ω). Then we have that
(2.15) a(u, v) = Ω A(t, x)u(x)v(x)ρ(x)dx.
2.1.3. The Gelfand triple. We can view a 0 as the principal term of the bilinear form a(u, v). Let σ denote the n × n σ matrix whose σ j are the columns. Then (2.16)
a 0 (u, v) = nσ i,j=1 Ω σ j [u]σ i [v]κ ij ρ = Ω ∇u σκσ ∇vρ.
Since κ 0, the above integrand is nonnegative when u = v; therefore, a 0 (u, u) ≥ 0.
When κ is the identity we have that a 0 (u, u) is equal to the seminorm a 00 (u, u), where (2.17)
a 00 (u, u) := Ω |σ ∇u| 2 ρ.
In the presence of correlations it is natural to assume that we have a coercivity of the same order. That is, we assume that (2.18) For some γ ∈ (0, 1]: σκσ ⊤ ⪰ γσσ ⊤ , for all (t, x) ∈ (0, T ) × Ω.
We need to choose a pair (V, H) of Hilbert spaces satisfying the Gelfand conditions for the variational setting of Appendix A, namely V densely and continuously embedded in H, a(•, •) continuous and semi-coercive over V . Additionally, the r.h.s.
and final condition of (2.5) should belong to L 2 (0, T ; V * ) and H resp. (and for the second parabolic estimate, to L 2 (0, T ; H) and V resp. ).
We do as follows: for some measurable function h : Ω → R + to be specified later we define
(2.21)
H := {v ∈ L 0 (Ω); hv ∈ L 2,ρ (Ω)}, V := {v ∈ H; σ i [v] ∈ L 2,ρ (Ω), i = 1, . . . , n σ }, V := {closure of D(Ω) in V},
endowed with the natural norms,
(2.22) v H := hv ρ ; u 2 V := a 00 (u, u) + u 2 H .
We do not try to characterize the space V since this is problem dependent.
Obviously, a 0 (u, v) is a bilinear continuous form over V. We next need to choose h so that a(u, v) is a bilinear and semi-coercive continuous form, and u T ∈ H.
2.2. Continuity and semi-coercivity of the bilinear form over V. We will see that the analysis of a 0 to a 2 is relatively easy. It is less obvious to analyze the term (2.23)
a 34 (u, v) := a 3 (u, v) + a 4 (u, v).
Let qij (t, x) ∈ R n be the vector with kth component equal to
(2.24) qijk := κ ij σ i [σ kj ]. Set (2.25) q := nσ i,j=1 qij , q := q -b.
Then by (2.11)-(2.12), we have that
(2.26) a 34 (u, v) = Ω q[u]vρ.
We next need to assume that it is possible to choose
η k in L 0 ((0, T ) × Ω), for k = 1 to n σ , such that (2.27) q = nσ k=1 η k σ k .
Often the n × n σ matrix σ(t, x) has a.e. rank n. Then the above decomposition is possible. However, the choice for η is not necessarily unique. We will see in examples how to do it. Consider the following hypotheses:
h σ ≤ c σ h, where h σ := nσ i,j=1 |σ i [κ ij ρ]/ρ + κ ij div σ i | , a.e.
, for some c σ > 0, (2.28) h r ≤ c r h, where h r := |r| 1/2 , a.e., for some c r > 0, (2.29)
h η ≤ c η h, where h η := |η|, a.e., for some c η > 0. (2.30)
Remark 2.3. Let us set for any differentiable vector field
Z : Ω → R n (2.31) G ρ (Z) := div Z + Z[ρ] ρ .
Since κ ii = 1, (2.28) implies that
(2.32) |G ρ (σ i )| ≤ c σ h, i = 1; . . . , n σ . Remark 2.4. Since (2.33) σ i [κ ij ρ] = σ i [κ ij ]ρ + σ i [ρ]κ ij ,
and |κ ij | ≤ 1 a.e., a sufficient condition for (2.28) is that there exist a positive constants c σ such that
(2.34) h σ ≤ c σ h; h σ := nσ i,j=1 |σ i [κ ij ]| + nσ i=1 (|div σ i | + |σ i [ρ]/ρ|) .
We will see in section 4 how to choose the weight ρ so that |σ i [ρ]/ρ| can be easily estimated as a function of σ.
Lemma 2.5. Let (2.28)-(2.30) hold. Then the bilinear form a(u, v) is both (i) continuous over V , and (ii) semi-coercive, in the sense of (A.5).
Proof. (i)
We have that a 1 + a 2 is continuous, since by (2.9)-(2.10), (2.28) and the Cauchy-Schwarz inequality:
(2.35)
|a 1 (u, v) + a 2 (u, v)| ≤ nσ i,j=1 |a 1 ij (u, v) + a 2 ij (u, v)| ≤ nσ j=1 σ j [u] ρ nσ i=1 (σ i [κ ij ρ]/ρ + κ ij div σ i ) v ρ ≤ c σ n σ v H nσ j=1 σ j [u] ρ .
(ii) Also, a 34 is continuous, since by (2.27) and (2.30):
(2.36)
|a 34 (u, v)| ≤ nσ k=1 σ k [u] ρ η k v ρ ≤ c η v H nσ k=1 σ k [u] ρ . Set c := c σ n σ + c 2 η . By (2.35)-(2.36), we have that (2.37) |a 5 (u, v)| ≤ |r| 1/2 u 2,ρ |r| 1/2 v 2,ρ ≤ c 2 r u H v H , |a 1 (u, v) + a 2 (u, v) + a 34 (u, v)| ≤ ca 00 (u) 1/2 v H .
Since a 0 is obviously continuous, the continuity of a(u, v) follows.
(iii) Semi-coercivity. Using (2.37) and Young's inequality, we get that
(2.38) a(u, u) ≥ a 0 (u, u) -a 1 (u, u) + a 2 (u, u) + a 34 (u, u) -a 5 (u, u) ≥ γa 00 (u) -ca 00 (u) 1/2 u H -c r u 2 H ≥ 1 2 γa 00 (u) -1 2 c 2 γ + c r u 2 H ,
which means that a is semi-coercive.
The above considerations allow us to derive well-posedness results for parabolic equations and parabolic variational inequalities.
Theorem 2.6. (i) Let (V, H) be given by (2.21), with h satisfying (2.28)-(2.30),
(f, u T ) ∈ L 2 (0, T ; V * )×H. Then equation (2.5) has a unique solution u in L 2 (0, T ; V ) with u ∈ L 2 (0, T ; V * ), and the mapping (f, u T ) → u is nondecreasing. (ii) If in addition the semi-symmetry condition (A.8) holds, then u in L ∞ (0, T ; V ) and u ∈ L 2 (0, T ; H).
Proof. This is a direct consequence of Propositions A.1, A.2 and C.1.
We next consider the case of parabolic variational inequalities associated with the set
(2.39) K := {ψ ∈ V : ψ(x) ≥ Ψ(x) a.e. in Ω}, where Ψ ∈ V .
The strong and weak formulations of the parabolic variational inequality are defined in (B.2) and (B.5) resp. The abstract notion of monotonicity is discussed in appendix B. We denote by K the closure of K in V .
Theorem 2.7. (i) Let the assumptions of theorem 2.6 hold, with u T ∈ K. Then the weak formulation (B.5) has a unique solution u in L 2 (0, T ; K) ∩ C(0, T ; H), and the mapping (f, u T ) → u is nondecreasing.
(ii) Let in addition the semi-symmetry condition (A.8) be satisfied. Then u is the unique solution of the strong formulation (B.2), belongs to L ∞ (0, T ; V ), and u belongs to L 2 (0, T ; H).
Proof. This follows from Propositions B.1 and C.2.
3. Variational analysis using the commutator analysis. In the following a commutator for first order differential operators is introduced, and calculus rules are derived.
3.1. Commutators. Let u : Ω → R be of class C 2 . Let Φ and Ψ be two vector fields over Ω, both of class C 1 . Recalling (1.11), we may define the commutator of the first-order differential operators associated with Φ and Ψ as
(3.1) [Φ, Ψ][u] := Φ[Ψ[u]] -Ψ[Φ[u]]. Note that (3.2) Φ[Ψ[u]] = n i=1 Φ i ∂(Ψu) ∂x i = n i=1 Φ i n k=1 ∂Ψ k ∂x i ∂u ∂x k + Ψ k ∂ 2 u ∂x k ∂x i .
So, the expression of the commutator is
(3.3) [Φ, Ψ] [u] = n i=1 Φ i n k=1 ∂Ψ k ∂x i ∂u ∂x k -Ψ i n k=1 ∂Φ k ∂x i ∂u ∂x k = n k=1 n i=1 Φ i ∂Ψ k ∂x i -Ψ i ∂Φ k ∂x i ∂u ∂x k .
It is another first-order differential operator associated with a vector field (which happens to be the Lie bracket of Φ and Ψ, see e.g. [START_REF] Aubin | A course in differential geometry[END_REF]).
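The bracket can be checked symbolically on a small instance. The SymPy sketch below encodes two first-order operators of the shape used later in (5.1), namely σ₁[u] = f(y) s^β u_s and σ₂[u] = g(y) u_y, and verifies that (3.3) reduces to −s^β f'(y) g(y) u_s, which is the bracket obtained in (5.5); the particular f, g and β are arbitrary test choices.

```python
import sympy as sp

s, y = sp.symbols('s y', positive=True)
u = sp.Function('u')(s, y)
beta = sp.Rational(1, 2)
f = sp.sqrt(y)                       # illustrative choices, e.g. the GMH case
g = sp.Rational(3, 10) * sp.sqrt(y)

def apply_field(a_s, a_y, w):
    """First-order operator Phi[w] = a_s * w_s + a_y * w_y, cf. (1.11)."""
    return a_s * sp.diff(w, s) + a_y * sp.diff(w, y)

sigma1 = lambda w: apply_field(f * s**beta, 0, w)   # sigma_1[w] = f(y) s^beta w_s
sigma2 = lambda w: apply_field(0, g, w)             # sigma_2[w] = g(y) w_y

commutator = sp.simplify(sigma1(sigma2(u)) - sigma2(sigma1(u)))
expected = sp.simplify(-s**beta * sp.diff(f, y) * g * sp.diff(u, s))
print(sp.simplify(commutator - expected))  # 0: agrees with (5.5)
```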
3.2. Adjoint.
Remembering that H was defined in (2.21), given two vector fields Φ and Ψ over Ω, we define the spaces
V(Φ, Ψ) := {v ∈ H; Φ[v], Ψ[v] ∈ H} , (3.4) V (Φ, Ψ) := {closure of D(Ω) in V(Φ, Ψ)} . (3.5)
We define the adjoint Φ of Φ (view as an operator over say C ∞ (Ω, R), the latter being endowed with the scalar product of L 2,ρ (Ω)), by
(3.6) Φ [u], v ρ = u, Φ[v] ρ for all u, v ∈ D(Ω),
where •, • ρ denotes the scalar product in L 2,ρ (Ω). Thus, there holds the identity
(3.7) Ω Φ [u](x)v(x)ρ(x)dx = Ω u(x)Φ[v](x)ρ(x)dx for all u, v ∈ D(Ω). Furthermore, (3.8
)
Ω u n i=1 Φ i ∂v ∂x i ρdx = - n i=1 Ω v ∂ ∂x i (uρΦ i )dx = - n i=1 Ω v ∂ ∂x i (uΦ i ) + u ρ Φ i ∂ρ ∂x i ρdx.
Hence,
(3.9) Φ [u] = - n i=1 ∂ ∂x i (uΦ i ) -uΦ i ∂ρ ∂x i /ρ = -u div Φ -Φ[u] -uΦ[ρ]/ρ.
Remembering the definition of G ρ (Φ) in (2.31), we obtain that
(3.10) Φ[u] + Φ [u] + G ρ (Φ)u = 0.
3.3. Continuity of the bilinear form associated with the commutator.
Setting, for v and w in V (Φ, Ψ):
(3.11) ∆(u, v) := Ω [Φ, Ψ][u](x)v(x)ρ(x)dx, we have (3.12) ∆(u, v) = Ω (Φ[Ψ[u]]v -Ψ[Φ[u]]v)ρdx = Ω Ψ[u]Φ [v] -Φ[u]Ψ [v])ρdx = Ω (Φ[u]Ψ[v] -Ψ[u]Φ[v]) ρdx + Ω (Φ[u]G ρ (Ψ)v -Ψ[u]G ρ (Φ)v) ρdx.
Lemma 3.1. For ∆(•, •) to be a continuous bilinear form on V (Φ, Ψ), it suffices that, for some c ∆ > 0:
(3.13) |G ρ (Φ)| + |G ρ (Ψ)| ≤ c ∆ h a.e.,
and we have then:
(3.14) |∆(u, v)| ≤ Ψ[u] ρ Φ[v] ρ + c ∆ v H + Φ[u] ρ Ψ[v] ρ + c ∆ v H .
Proof. Apply the Cauchy Schwarz inequality to (3.12), and use (3.13) combined with the definition of the space H.
We apply the previous results with Φ := σ i , Ψ := σ j . Set for v, w in V :
(3.15) ∆ ij (u, v) := Ω [σ i , σ j ][u](x)v(x)ρ(x)dx, i, j = 1, . . . , n σ .
We recall that V was defined in (2.21).
Corollary 3.2. Let (2.28) hold. Then the ∆ ij (u, v), i, j = 1, . . . , n σ , are continuous bilinear forms over V .
Proof. Use remark 2.3 and conclude with lemma 3.1.
3.4. Redefining the space H. In section 2.2 we have obtained the continuity and semi-coercivity of a by decomposing q, defined in (2.26), as a linear combination (2.27) of the σ i . We now take advantage of the previous computation of commutators and assume that, more generally, instead of (2.27), we can decompose q in the form
(3.16) q = nσ k=1 η k σ k + 1≤i<j≤nσ η ij [σ i , σ j ] a.e.
We assume that η and η are measurable functions over [0, T ] × Ω, that η is weakly differentiable, and that for some c η > 0:
(3.17)
h η ≤ c η h, where h η := |η | + N i,j=1 σ i [η ij ] a.e., η ∈ L ∞ (Ω). Lemma 3.
Ω σ k [u]η k vρ ≤ σ k [u] ρ σ k [u]η k v ρ ≤ σ k [u] ρ v H .
(ii) Setting w := η ij v and taking here (Φ, Ψ) = (σ i , σ j ), we get that
(3.19) Ω η ij [σ i , σ j )[u]vρ = ∆(u, w),
where ∆(•, •) was defined in (3.11). Combining with lemma 3.1, we obtain
(3.20) |∆ ij (u, v)| ≤ σ j [u] ρ σ i [w] ρ + c σ η ij ∞ v H + σ i [u] ρ σ j [w] ρ + c σ η ij ∞ v H . Since (3.21) σ i [η ij v] = η ij σ i [v] + σ i [η ij ]v,
by (3.17):
(3.22) σ i [w] ρ ≤ η ij ∞ σ i [v] ρ + σ i [η ij ]v ρ ≤ η ij ∞ σ i [v] ρ + c η v H .
Combining these inequalities, point (i) follows.
(ii) Use u = v in (3.21) and (3.12). We find after cancellation in (3.12) that
(3.23) ∆ ij (u, η ij u) = Ω u(σ i [u]σ j [η ij ] -σ j [u]σ i (η ij ))ρ + Ω (σ i [u]G ρ (σ j ) -σ j [u]G ρ (σ i )) η ij uρ.
By (3.17), an upper bound for the absolute value of the first integral is
(3.24) σ i [u] ρ + σ j [u] ρ hu ρ ≤ 2 u V u H .
With (2.28), we get an upper bound for the absolute value of the second integral in the same way, so, for any ε > 0:
(3.25) |∆ ij (u, η ij u)| ≤ 4 u V u H .
We finally have that for some c > 0
(3.26) a(u, u) ≥ a 0 (u, u) -c u V u H , ≥ a 0 (u, u) -1 2 u 2 V -1 2 c 2 u 2 H , = 1 2 u 2 V -1 2 (c 2 + 1) u 2 H .
The conclusion follows.
Remark 3.4. The statements analogous to theorems 2.6 and 2.7 hold, assuming now that h satisfies (2.28), (2.29), and (3.17) (instead of (2.28)-(2.30)).
4. The weight ρ. Classes of weighting functions characterized by their growth are introduced. A major result is the independence of the growth order of the function h on the choice of the weighting function ρ in the class under consideration.
4.1. Classes of functions with given growth. In financial models we usually have nonnegative variables and the related functions have polynomial growth. Yet, after a logarithmic transformation, we get real variables whose related functions have exponential growth. This motivates the following definitions. We remind that (I, J) is a partition of {0, . . . , N }, with 0 ∈ J and that Ω was defined in (1.8).
Definition 4.1. Let γ ′ and γ ′′ belong to R N +1 + , with index from 0 to N . Let G(γ ′ , γ ′′ ) be the class of functions ϕ : Ω → R such that for some c > 0:
(4.1) |ϕ(x)| ≤ c Π k∈I (e γ k x k + e -γ k x k ) Π k∈J (x γ k k + x -γ k k
) .
We define G as the union of G(γ , γ ) for all nonnegative (γ , γ ). We call γ k and γ k the growth order of ϕ, w.r.t. x k , at -∞ and +∞ (resp. at zero and +∞).
Observe that the class G is stable by the operations of sum and product, and that if f , g belong to that class, so does h = f g, h having growth orders equal to the sum of the growth orders of f and g. For a ∈ R, we define (4.2) a + := max(0, a); a -:= max(0, -a); N (a) := (a 2 + 1)
1/2 ,
as well as
(4.3) ρ := ρ I ρ J ,
where
ρ I (x) := Π k∈I e -α k N (x + k )-α k N (x - k ) , (4.4) ρ J (x) := Π k∈J x α k k 1 + x α k +α k k , (4.5)
for some nonnegative constants α k , α k , to be specified later.
Lemma 4.2. Let ϕ ∈ G(γ , γ ). Then ϕ ∈ L 1,ρ (Ω) whenever ρ is as above, with α satisfying, for some positive ε and ε , for all k = 0 to N :
(4.6) α k = ε + γ k , α k = ε + γ k , k ∈ I, α k = (ε + γ k -1) + , α k = 1 + ε + γ k , k ∈ J.
In addition we can choose for k = 0 (if element of J):
(4.7) α 0 := (ε + γ 0 -1) + ; α 0 := 0 if ϕ(s, y) = 0 when s is far from 0,
α 0 := 0, α 0 := 1 + ε + γ 0 , if ϕ(s, y) = 0 when s is close to 0.
Proof. It is enough to prove (4.6), the proof of (4.7) is similar. We know that ϕ satisfy (4.1) for some c > 0 and γ. We need to check the finiteness of (4.8)
Ω Π k∈I (e γ k y k + e -γ k y k ) Π k∈J (y γ k k + y -γ k k ) ρ(s, y)d(s, y).
But the above integral is equal to the product p I p J with
p I := Π k∈I R (e γ k x k + e -γ k x k )e -α k N (x + k )-α k N (x - k ) dx k , (4.9) p J := Π k∈J R+ x α k +γ k k + x α k -γ k k 1 + x α k +α k k dx k . (4.10)
Using (4.6) we deduce that p I is finite since for instance (4.11)
R+ (e γ k x k + e -γ k x k )e -α k N (x + k )-α k N (x - k ) dx k ≤ 2 R+ e γ k x k e -(1+γ k )x k dx k = 2 R+ e -x k dx k = 2,
and p J is finite since (4.12)
p J = Π k∈J R+ x ε +γ k +γ k k + x ε -1 k 1 + x ε +ε +γ k +γ k k dx k < ∞.
The conclusion follows.
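As a quick numerical sanity check of Lemma 4.2, the sketch below evaluates the one-dimensional factor p J of (4.10) for one coordinate k ∈ J, with sample growth orders and weights chosen according to (4.6); the specific values of the growth orders and of the ε's are arbitrary test data.

```python
import numpy as np
from scipy.integrate import quad

# sample data for one coordinate k in J: growth orders at zero / at infinity, and (4.6)
gamma_zero, gamma_inf = 0.5, 2.0
eps1, eps2 = 0.1, 0.1
alpha1 = max(eps1 + gamma_zero - 1.0, 0.0)   # alpha'_k = (eps' + gamma' - 1)_+
alpha2 = 1.0 + eps2 + gamma_inf              # alpha''_k = 1 + eps'' + gamma''

def integrand(x):
    growth = x ** gamma_inf + x ** (-gamma_zero)              # bound (4.1) on |phi|
    weight = x ** alpha1 / (1.0 + x ** (alpha1 + alpha2))     # rho_J factor, cf. (4.5)
    return growth * weight

value = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
print(value)   # finite, as claimed by Lemma 4.2
```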
4.2. On the growth order of h. Set for all k (4.13)
α k := α k + α k .
Remember that we take ρ in the form (4.3)-(4.4).
Lemma 4.3. We have that:
(i) We have that
(4.14) ρ x k ρ ∞ ≤ α k , k ∈ I; x ρ ρ x k ∞ ≤ α k , k ∈ J.
(ii) Let h satisfying either (2.28)-(2.30) or (2.28)-(2.29), and (3.17). Then the growth order of h does not depend on the choice of the weighting function ρ.
Proof. (i) For k ∈ I this is an easy consequence of the fact that N (•) is non expansive. For k ∈ J, we have that
(4.15) x ρ ρ x k = x ρ α k x α k -1 (1 + x α k ) -x α k α k x α k -1 (1 + x α k ) 2 = α k -α k x α k 1 + x α k .
We easily conclude, discussing the sign of the numerator.
(ii) The dependence of h w.r.t. ρ is only through the last term in (2.28), namely,
i |σ i [ρ]/ρ. By (i) we have that (4.16) σ k i [ρ] ρ ≤ ρ x k ρ ∞ |σ k i | ≤ α k |σ k i |, k ∈ I, (4.17) σ k i [ρ] ρ ≤ x k ρ x k ρ ∞ σ k i x k ≤ α k σ k i x k , k ∈ J.
In both cases, the choice of α has no influence on the growth order of h.
European option.
In the case of a European option with payoff u T (x), we need to check that u T ∈ H, that is, ρ must satisfy (4.18)
Ω |u T (x)| 2 h(x) 2 ρ(x)dx < ∞.
In the framework of the semi-symmetry hypothesis (A.8), we need to check that u T ∈ V , which gives the additional condition
(4.19) nσ i=1 Ω |σ i [u T ](x)| 2 ρ(x)dx < ∞.
In practice the payoff depends only on s and this allows to simplify the analysis.
5. Applications using the commutator analysis. The commutator analysis is applied to the general multiple factor model and estimates for the function h characterizing the space H (defined in (2.21)) are derived. The estimates are compared to the case when the commutator analysis is not applied. The resulting improvement will be established in the next section.
5.1. Commutator and continuity analysis.
We analyze the general multiple factor model (1.4), which belongs to the class of models (2.1) with Ω ⊂ R 1+N , n σ = 2N , and for i = 1 to N :
(5.1)
σ i [v] = f i (y i )s βi v s ; σ N +i [v] = g i (y i )v i ,
with f i and g i of class C 1 over Ω. We need to compute the commutators of the first-order differential operators associated with the σ i . The correlations will be denoted by
(5.2) κ̄ k := κ k,N +k , k = 1, . . . , N.
Remark 5.1. We use many times the following rule. For Ω ⊂ R n , where n = 1 + N , u ∈ H 1 (Ω), a, b ∈ L 0 , and vector fields Z[u] := au x1 and Z ′ [u] := bu x2 , we have Z[Z ′ [u]] = a(bu x2 ) x1 = ab x1 u x2 + abu x1x2 , so that
(5.3) [Z, Z ][u] = ab x1 u x2 -ba x2 u x1 .
We obtain that
(5.4) [σ i , σ ][u] = (β -β i )f i (y i )f (y )s βi+β -1 u s , 1 ≤ i < ≤ N, (5.5)
[σ i , σ N +i ][u] = -s βi f i (y i )g i (y i )u s , i = 1, . . . , N, and
(5.6) [σ i , σ N + ][u] = [σ N +i , σ N + ][u] = 0, i = .
Also,
(5.7)
div σ i + σ i [ρ] ρ = f i (y i )s βi-1 (β i + s ρ s ρ ), div σ N +i + σ N +i [ρ] ρ = g i (y i ) + g i (y i ) ρ i ρ .
5.1.1. Computation of q. Remember the definitions of q, q and q in (2.24) and
(2.25), where δ ij denote the Kronecker operator. We obtain that, for 1 ≤ i, j, k ≤ N :
(5.8)
qij0 = δ ij β j f 2 i (y i )s 2βi-1 ; qiik = 0; qi,N+j = 0; qN+i,j,0 = δ ij κi f (y i )g i (y i )s βi ; qN+i,j,k = 0; qN+i,N+j,k = δ ijk g i (y i )g i (y i ).
That means, we have for q = 2N i,j=1 qij and q = q -b that (5.9)
q0 = N i=1 β i f 2 i (y i )s 2βi-1 + κi f (y i )g i (y i )s βi ; q 0 = q0 -rs, qk = g k (y k )g k (y k ); q k = qk -θ k (µ k -y k ), k = 1, . . . , N.
5.1.2. Computation of η and η ′ . The coefficients η and η ′ are solutions of (3.16).
We can write η = η + η, where (5.10)
q = nσ i=1 η i σ i + 1≤i,j≤nσ η ij [σ i , σ j ], η ij = 0 if i = j. -b = nσ i=1 η i σ i + 1≤i,j≤nσ η ij [σ i , σ j ], η ij = 0 if i = j.
For k = 1 to N , this reduces to
(5.11) η N +k g k (y k ) = g k (y k )g k (y k ); η N +k g k (y k ) = -θ k (µ k -y k ).
So, we have that (5.12)
η N +k = g k (y k ); η N +k = -θ k (µ k -y k ) g k (y k ) .
For the 0th component, (5.10) can be expressed as (5.13)
N k=1 -η k,N +k f k (y k )g k (y k )s β k -κk f k (y k )g k (y k )s β k + N k=1 η k f k (y k )s β k -β k f 2 k (y k )s 2β k -1 + N k=1 -η k,N +k f k (y k )g k (y k )s β k + η k f k (y k )s β k -rs = 0.
We choose to set each term in parenthesis in the first two lines above to zero. It
follows that η k,N +k = -κ k ∈ L ∞ (Ω), η k = β k f k (y k )s β k -1 . (5.14)
If N > 1 we (arbitrarily) choose then to set the last line to zero with (5.15) η k = η k = 0, k = 2, . . . , N.
It remains that
η 1 f 1 (y 1 )s β1 -η 1,N +1 f 1 (y 1 )g 1 (y 1 )s β1 = rs. (5.16)
Here, we can choose to take either η 1 = 0 or η 1,N +1 = 0. We obtain then two possibilities:
(5.17)
(i) η 1 = 0 and η 1,N +1 = -rs 1-β1 f 1 (y 1 )g 1 (y 1 ) , (ii) η 1 = rs 1-β1 f 1 (y 1 ) and η 1,N +1 = 0.
Estimate of the h function.
We decide to choose case (i) in (5.17).
The function h needs to satisfy (2.28), (2.29), and (3.17) (instead of (2.30)). Instead of (2.28), we will rather check the stronger condition (2.34). We compute
h σ := N k=1 |f k (y k )|s β k |(κ k ) s | + | ρ s ρ | + |g k (y k )| |(κ k ) k | + | ρ k ρ | (5.18) + N k=1 β k |f k (y k )s β k -1 | + |g k (y k )| , h r := |r| 1 2 , (5.19) h η := ĥ η + h η , (5.20)
where we have ĥ
η := N k=1 β k |f k (y k )|s β k -1 + |g k (y k )| + f k (y k )|s β k ∂κ k ∂s + g k (y k ) ∂κ k ∂y k , (5.21) h η := N k=1 θ k (µ k -y k ) g k (y k ) + r f 1 (y 1 ) f 1 (y 1 )g 1 (y 1 ) + rg 1 (y 1 )s 1-β1 ∂ ∂y 1 1 f 1 (y 1 )g 1 (y 1 )
.
(5.22)
Remark 5.2. Had we chosen (ii) instead of (i) in (5.17), this would only change the expression of h η that would then be
(5.23) h η = N k=1 θ k (µ k -y k ) g k (y k ) + rs 1-β1 f 1 (y 1 ) .
Estimate of the h function without the commutator analysis.
The only change in the estimate of h will be the contribution of h η and h η . We have to satisfy (2.28)-(2.30). In addition, ignoring the commutator analysis, we would solve (5.13) with η = 0, meaning that we choose (5.24)
η k := β k f k (y k )s β k -1 + κk f k (y k )g k (y k ) f k (y k ) , k = 1, . . . , N,
and take η 1 out of (5.16). Then condition (3.17), with here η = 0, would give
(5.25) h ≥ c η h η , where h η := h η + h η , with
h η := N k=1 β k |f k (y k )|s β k -1 + |κ k | f k (y k )g k (y k ) f k (y k ) + |g k (y k )| , (5.26) h η := N k=1 θ k (µ k -y k ) g k (y k ) + rs 1-β1 f 1 (y 1 ) . (5.27)
We will see in applications that this is in general worse.
6. Application to stochastic volatility models. The results of Section 5 are specified for a subclass of the multiple factor model, in particular for the VAT and GMH models. We show that the commutator analysis allows to take smaller values for the function h (and consequently to include a larger class of payoff functions).
6.1. A useful subclass. Here we assume that (6.1)
|f k (y k )| = |y k | γ k ; |g k (y k )| = ν k |y k | 1-γ k ; β k ∈ (0, 1]; ν k > 0; γ k ∈ (0, ∞).
Furthermore, we assume κ to be constant and
(6.2) |f k (y k )g k (y k )| = const for all y k , k = 1, . . . , N.
Set (6.3)
c s := sρ s /ρ ∞ ; c k = ρ k /ρ ∞ if Ω k = R, 0 otherwise. c k = 0 if Ω k = R, y k ρ k /ρ ∞ otherwise.
We get, assuming that γ 1 = 0: (6.4)
h σ := N k=1 c s |y k | γ k s β k -1 + ν k c k |y k | 1-γ k +ν k c k |y k | -γ k + β k |y k | γ k s β k -1 + (1 -γ k )ν k |y k | -γ k , ĥ η := N k=1 β k |y k | γ k s β k -1 + (1 -γ k )ν k |y k | -γ k , (6.5) h η := N k=1 θ k |µ k -y k | ν k |y k | 1-γ k + r|y 1 | γ1 γ 1 ν 1 . (6.6)
Therefore when all y k ∈ R, we can choose h as (6.7)
h := 1 + N k=1 |y k | γ k (1 + s β k -1 ) + (1 -γ k )|y k | -γ k + |y k | γ k -1 + k∈I |y k | 1-γ k + k∈J |y k | -γ k .
Without the commutator analysis we would get
ĥη := N k=1 (β k |y k | γ k s β k -1 + ν k |κ k ||y k | -γ k + (1 -γ k )ν k |y k | -γ k ), (6.8) hη := N k=1 θ k |µ k -y k | ν k |y k | 1-γ k + rs 1-β1 |y 1 | -γ1 . (6.9)
Therefore we can choose (6.10)
h := h ; h := h + rs 1-β1 /|y 1 | γ1 + k ν k |κ k ||y k | -γ k .
So, we always have that h ≤ h , meaning that it is advantageous to use the commutator analysis, due to the term rs 1-β1 /|y 1 | γ1 above in particular. The last term in the above r.h.s. has as contribution only when γ k = 1 (since otherwise h includes a term of the same order).
Application to the VAT model. For the variant of the Achdou and
Tchou model with multiple factors (VAT), i.e. when γ k = 1, for k = 1 to N , we can take h equal to (6.11)
h T A := 1 + N k=1 |y k |(1 + s β k -1 ),
when the commutator analysis is used, and when it is not, take h equal to (6.12)
h T A := h T A + rs 1-β1 |y 1 | -1 + N k=1 ν k |κ k ||y k | -1 .
Remember that u T (s) = (s -K) + for a call option, and u T (s) = (K -s) + for a put option, both with strike K > 0.
Lemma 6.1. For the VAT model, using the commutator analysis, in case of a call (resp. put) option with strike K > 0, we can take ρ = ρ call , (resp. ρ = ρ put ), with (6.13)
ρ call (s, y) := (1 + s 3+ε ) -1 Π N k=1 e -εN (y k ) , ρ put (s, y) := s α P 1 + s α P Π N k=1 e -εN (y k ) ,
where
α P := ε + 2 N k=1 (1 -β k ) -1 + .
Proof. (i) In the case of a call option, we have that (6.14) 1 ≥ c 0 s β k -1 for c 0 > 0 small enough over the domain of integration, so that we can as well take
(6.15) h(s, y) = 1 + N k=1 |y k | ≤ Π N k=1 (1 + |y k |).
So, we need that ϕ(s, y) ∈ L 1,ρ (Ω), with
(6.16) ϕ(s, y) = h 2 (s, y)u 2 T (s) = (s -K) 2 + Π N k=1 (1 + |y k |) 2 .
By lemma 4.2, where here J = {0} and I = {1, . . . , N }, we may take resp.
(6.17)
γ 0 = 2, γ 0 = 0, γ k > 0, γ k > 0, k = 1, . . . , N,
and so we may choose for ε > 0 and ε > 0:
(6.18)
α 0 = 0, α 0 = 3 + ε , α k = ε , α k = ε , k = 1, . . . , N,
so that setting ε := ε + ε , we can take ρ = ρ call .
(ii) For a put option with strike K > 0, 1 ≤ c 0 s β k -1 for big enough c 0 > 0, over the domain of integration, so that we can as well take
(6.19) h(s, y) = 1 + N k=1 |y k |s β k -1 ≤ Π N k=1 (1 + |y k |s β k -1 ) 2 ≤ Π N k=1 s 2β k -2 (1 + |y k |) 2 and (6.20) ϕ(s, y) = h 2 (s, y)u 2 T (s) ≤ (K -s) 2 + Π N k=1 s 2β k -2 (1 + |y k |) 2 .
By lemma 4.2, in the case of a put option and since (K -s) 2 + is bounded, we can take γ k , γ k , α k , α k as before, for k = 1 to N , and (6.21)
γ 0 = 0, γ 0 = 2 N k=1 (1 -β k ), α 0 = ε + 2 N k=1 (1 -β k ) -1 + , α 0 = 0 the result follows.
Remark 6.2. If we do not use the commutator analysis, then we have a greater "h" function; we can check that our previous choice of ρ does not apply any more (so we should consider a smaller weight function, but we do not need to make it explicit).
And indeed, we have then a singularity when say y 1 is close to zero so that the previous choice of ρ makes the p integral undefined.
Application to the GMH model. For the generalized multiple factor
Heston model (GMH), i.e. when γ k = 1/2, k = 1 to N , we can take h equal to (6.22)
h H := 1 + N k=1 |y k | 1 2 (1 + s β k -1 ) + |y k | -1 2 ,
when the commutator analysis is used, and when it is not, take h equal to (6.23)
h H := h H + rs 1-β1 |y 1 | -1 2 .
Lemma 6.3. (i) For the GMH model, using the commutator analysis, in case of a call option with strike K, meaning that u T (s) = (s -K) + , we can take ρ = ρ call , with (6.24)
ρ call (s, y) := (1 + s ε +3 ) -1 Π N k=1 y ε k (1 + y ε+2 k ) -1 .
(ii) For a put option with strike K > 0, we can take ρ = ρ put , with
(6.25) ρ put (s, y) := Π N k=1 y ε k (1 + y ε+2 k ) -1 .
Proof. (i) For the call option, using (6.14) we see that we can as well take (6.26) h(s, y)
≤ 1 + N k=1 y 1/2 k + y -1/2 k ≤ (s -K) 2 + Π N k=1 (1 + y 1/2 k + y -1/2 k
).
So, we need that ϕ(s, y) ∈ L 1,ρ (Ω), with (6.27)
ϕ(s, y) = h 2 (s, y)u 2 T (s) = (s -K) 2 + Π N k=1 (1 + y 1/2 k + y -1/2 k
).
By lemma 4.2, where here J = {0, . . . , N }, we may take resp.
(6.28)
γ 0 = 2, γ 0 = 0, γ k = 1, γ k = 1, k = 1, . . . , N,
and so we may choose for ε > 0 and ε > 0:
(6.29)
α 0 = 0, α 0 = 3 + ε , α k = ε , α k = ε + 2, k = 1, . . . , N,
so that setting ε := ε + ε , we can take ρ = ρ call .
(ii) For a put option with strike K > 0,1 ≤ c 0 s β k -1 for big enough c 0 > 0, over the domain of integration, so that we can as well take (6.30) h(s, y)
= 1 + N k=1 |y k |s β k -1 ≤ Π N k=1 (1 + |y k |s β k -1 ) 2 ≤ Π N k=1 s 2β k -2 (1 + |y k |) 2 and (6.31) ϕ(s, y) = h 2 (s, y)u 2 T (s) ≤ (K -s) 2 + Π N k=1 s 2β k -2 (1 + |y k |) 2 .
By lemma 4.2, in the case of a put option and since (K -s) 2 + is bounded, we can take γ k , γ k , α k , α k as before, for k = 1 to N , and (6.32) γ 0 = 0, γ 0 = 0, α 0 = 0, α 0 = 0. the result follows.
Remark 6.4. If we do not use the commutator analysis, then, again, we have a greater "h" function; we can check that our previous choice of ρ does not apply any more (so we should consider a smaller weight function, but we do not need to make it explicit). And indeed, by the behaviour of the integral for large s the previous choice of ρ makes the p integral undefined.
and since u(T ) = u T we find that
(B.4) T 0 -v(t) + A(t)u(t) -f (t), v -u(t) V ≥ 1 2 u(0) -v(0) 2 H -1 2 u(T ) -v(T ) 2 H for all v ∈ W (0, T ; K), u(T ) = u T .
It can be proved that the two formulation (B.2) and (B.4) are equivalent (they have the same set of solutions), and that they have at most one solution. The weak formulation is as follows: find u ∈ L 2 (0, T ; K) ∩ C(0, T ; H) such that (B.5)
T 0 -v(t) + A(t)u(t) -f (t), v -u(t) V ≥ -1 2 u(T ) -v(T ) 2 H for all v ∈ L 2 (0, T ; K), u(T ) = u T .
Clearly a solution of the strong formulation (B.2) is solution of the weak one.
Proposition B.1 (Brézis [6]). The following holds:
(i) Let u T ∈ K and f ∈ L 2 (0, T ; V * ). Then the weak formulation (B.5) has a unique solution u and, for some c > 0, given v 0 ∈ K:
(B.6) u L ∞ (0,T ;H) + u L 2 (0,T ;V ) ≤ c( u T H + f L 2 (0,T ;V * ) + v 0 V ).
(ii) Let in addition the semi-symmetry hypothesis (A.8) hold, and let u T belong to K. Then u ∈ L ∞ (0, T ; V ), u ∈ L 2 (0, T ; H), and u is the unique solution of the original formulation (B.2). Furthermore, for some c > 0:
(B.7) u L ∞ (0,T ;V ) + u L 2 (0,T ;H) ≤ c( u T V + f L 2 (0,T ;H) ).
Appendix C. Monotonicity. Assume that H is an Hilbert lattice, i.e., is endowed with an order relation compatible with the vector space structure:
(C.1)
x 1 x 2 implies that γx 1 + x γx 2 + x, for all γ ≥ 0 and x ∈ H, such that the maxima and minima denoted by max(x 1 , x 2 ) and min(x 1 , x 2 ) are well defined, the operator max, min be continuous, with min(x 1 , x 2 ) = -max(-x 1 , -x 2 ).
Setting x + := max(x, 0) and x -:= -min(x, 0) we have that x = x + -x -. Assuming that the maximum of two elements of V belong to V we see that we have an induced lattice structure on V . The induced dual order over V * is as follows: for v * 1 and v * 2 in
V * , we say that v * 1 ≥ v * 2 if v * 1 -v * 2 , v V ≥ 0 whenever v ≥ 0.
Assume that we have the following extension of the integration by parts formula
(B.3): for all u, v in W (0, T ) and 0 ≤ t < t ≤ T , (C.2) 2 t t u(s), u + (s) V ds = u + (t ) 2 H -u + (t) 2 H .
and that
(C.3) A(t)u, u + V = A(t)u + , u + V . Proposition C.1. Let u i be solution of the parabolic equation (A.6) for (f, u T ) = (f i , u i T ), i = 1, 2. If f 1 ≥ f 2 and u 1 T ≥ u 2 T , then u 1 ≥ u 2 .
This type of result may be extended to the case of variational inequalities. If K and K are two subsets of V , we say that K dominates K if for any u ∈ K and u ∈ K , max(u, u ) ∈ K and min(u, u ) ∈ K .
Proposition C.2. Let u i be solution of the weak formulation (B.5) of the parabolic
variational inequality for (f, u T , K) = (f i , u i T , K i ), i = 1, 2. If f 1 ≥ f 2 , u 1 T ≥ u 2 T , and
K 1 dominates K 2 , then u 1 ≥ u 2 .
The monotonicity w.r.t. the convex K is due to Haugazeau [START_REF] Haugazeau | Sur des inéquations variationnelles[END_REF] (in an elliptic setting, but the result is easily extended to the parabolic one). See also Brézis [START_REF]Problèmes unilatéraux[END_REF].
Appendix D. Link with American options. An American option is the right to get a payoff Ψ(t, x) at any time t < T and u T at time T . We can motivate as follows the derivation of the associated variational inequalities. If the option can be exercized only at times t k = hk, with h = T /M and k = 0 to M (Bermudean option), then the same PDE as for the European option holds over (t k , t k+1 ), k = 0 to M -1. Denoting by ũk the solution of this PDE, we have that u(t k ) = max(Ψ, ũk ).
Assuming that A does not depend on time and that there is a flux f (t, x) of dividents, we compute the approximation u k of u(t k ) as follows. Discretizing the PDE with the implicit Euler scheme we obtain the continuation value ûk solution of
(D.1) ûk -u k+1 h + Aû k = f (t k , •), k = 0, . . . , M -1; u M = max(Ψ, 0), so that u k = u k+1 -hAû k + hf (t k , •), we find that (D.2) u k = max(û k , Ψ) = max(u k+1 -hAû k + hf (t k , •), Ψ),
which is equivalent to
(D.3) min(u k -Ψ, u k -u k+1 h + Aû k -f (t k , •)) = 0.
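For intuition, here is a minimal numerical illustration of the stepping (D.1)-(D.2) for a Bermudan put in the one-dimensional Black-Scholes setting of Appendix E.1 (zero interest rate, Au = −(1/2)σ²x²u''): the continuation value is obtained by an implicit Euler solve and the exercise decision is the pointwise max with the payoff Ψ. The finite-difference discretization, the boundary treatment and every numerical parameter are illustrative choices, not part of the analysis.

```python
import numpy as np

def bermudan_put_fd(K=1.0, sigma=0.4, T=1.0, x_max=4.0, n_x=400, n_t=200):
    """Implicit Euler + exercise max, i.e. u^k = max(hat_u^k, Psi) as in (D.2)."""
    x = np.linspace(0.0, x_max, n_x + 1)
    h_t, h_x = T / n_t, x[1] - x[0]
    psi = np.maximum(K - x, 0.0)                    # payoff Psi(x) = (K - x)_+
    u = psi.copy()                                  # u^M = max(Psi, 0)
    # matrix of I + h_t * A with A u = -(1/2) sigma^2 x^2 u'' (central differences)
    diff = 0.5 * sigma**2 * x[1:-1]**2 / h_x**2
    lower = upper = -h_t * diff
    diag = 1.0 + 2.0 * h_t * diff
    A_mat = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
    for _ in range(n_t):
        rhs = u[1:-1].copy()
        rhs[0] += h_t * diff[0] * u[0]              # boundary values held fixed
        rhs[-1] += h_t * diff[-1] * u[-1]
        u_hat = np.linalg.solve(A_mat, rhs)         # continuation value, cf. (D.1)
        u[1:-1] = np.maximum(u_hat, psi[1:-1])      # exercise decision, cf. (D.2)
    return x, u

x, u = bermudan_put_fd()
print(float(np.interp(1.0, x, u)))                  # approximate value at s = K
```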
This suggests for the continuous time model and general operators A and r.h.s. f the following formulation (D.4) min(u(t, x)-Ψ(x), -u(t, x)+A(t, x)u(t, x)-f (t, x)) = 0, (t, x) ∈ (0, T )×Ω.
The above equation has a rigorous mathematical sense in the context of viscosity solution, see Barles [START_REF] Barles | Convergence of numerical schemes for degenerate parabolic equations arising in finance theory[END_REF]. However we rather need the variational formulation which can be derived as follows. Let v(x) satisfy v(x) ≥ Ψ(x) a.e., be smooth enough. Then
(D.5) Ω (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx = {u(t,x)=Ψ(x)} (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx + {u(t,x)>Ψ(x)} (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx.
The first integrand is nonnegative, being a product of nonnegative terms, and the second integrand is equal to 0 since by (D.3), -u(t, x) + A(t, x)u(t, x) -f (t, x)) = 0 a.e. when u(t, x) > Ψ(x). So we have that, for all v ≥ Ψ smooth enough:
(D.6) Ω (-u(t, x) + A(t, x)u(t, x) -f (t, x))) (v(x) -u(t, x))dx ≥ 0.
We see that this is of the same nature as a parabolic variational inequality, where K is the set of functions greater or equal to Ψ (in an appropriate Sobolev space).
Appendix E. Some one dimensional problems. It is not always easy to characterize the space V. Let us give a detailed analysis in a simple case.
E.1. The Black-Scholes setting. For the Black-Scholes model with zero interest rate (the extension to a constant nonzero interest rate is easy) and unit volatility coefficient, we have that Au = -1 2 x 2 u (x), with x ∈ (0, ∞). In the case of a put option: u T (x) = (K -x) + we may take H := L 2 (R + ). For v ∈ D(0, ∞) and u sufficiently smooth we have that -
1 2 ∞ 0 x 2 u (x)dx = a(u, v) with (E.1) a(u, v) := 1 2 ∞ 0 x 2 u (x)v (x)dx + ∞ 0 xu (x)v(x)dx.
This bilinear form a is continuous and semi coercive over the set
(E.2) V := {u ∈ H; xu (x) ∈ H}.
It is easily checked that ū(x) := x -1/3 /(1 + x) belongs to V . So, some elements of V are unbounded near zero.
We now claim that D(0, ∞) is a dense subset of V . First, it follows from a standard truncation argument and the dominated convergence theorem that V ∞ := V ∩ L ∞ (0, ∞) is a dense subset of V . Note that elements of V are continuous over (0, ∞). Given ε > 0 and u ∈ V ∞ , define
(E.3) u ε (x) := 0 if x ∈ (0, ε), u(2ε)(x/ε -1) if x ∈ [ε, 2ε], u(2ε) if x > 2ε.
Obviously u ε ∈ V ∞ . By the dominated convergence theorem, u ε → u in H. Set for w ∈ V (E.4) Φ ε (w) := 2ε 0
x 2 w (x) 2 dx.
Since Φ ε is quadratic and u ε → u in H, we have that:
(E.5)
1 2 ∞ 0 x 2 (u ε -u ) 2 dx = 1 2 Φ ε (u ε -u) ≤ Φ ε (u ε ) + Φ ε (u).
Since u ∈ V , Φ ε (u) → 0 and
(E.6) Φ ε (u ε ) ≤ u 2 ∞ 2ε 0 ε -2 x 2 dx = O( u 2 ∞ ε).
So, the l.h.s. of (E.5) has limit 0 when ε ↓ 0. We have proved that the set V 0 of functions in V ∞ equal to zero near zero, is a dense subset of V . Now define for N > 0 (E. Again for the sake of simplicity we will take ρ(x) = 1, which is well-adapted in the case of a payoff with compact support in (0, ∞). For v ∈ D(0, ∞) and u sufficiently smooth we have that We easily deduce that the bilinear form a is continuous and semi coercive over V, when choosing (E.14)
H := {v ∈ L 2 (R + ); (x 1/2 + x -1/2 )v ∈ L 2 (R + )},
Note that then the integrals below are well defined and finite for any v ∈ V:
(E.15) ∞ 0 (x 1/2 v )(x -1/2 v) = ∞ 0 vv = 1 2 ∞ 0 (v 2 ) .
So w := v 2 is the primitive of an integrable function and therefore has a limit at zero.
Since v is continuous over (0, ∞) it follows that v has a limit at zero.
However if this limit is nonzero we get a contradiction with the condition that
x -1/2 v ∈ L 2 (R + ). So, every element of V has zero value at zero.
We now claim that D(0, ∞) is a dense subset of V. First, V ∞ := V ∩ L ∞ (0, ∞) is a dense subset of V. Note that elements of V are continuous over (0, ∞). Given ε > 0
This manuscript is for review purposes only. Since Φ ε is quadratic and u ε → u in H, we have that:
(E.17)
1 2 ∞ 0 x 2 (u ε -u ) 2 dx = 1 2 Φ ε (u ε -u) ≤ Φ ε (u ε ) + Φ ε (u).
Since u ∈ V, Φ ε (u) → 0 and
(E.18) Φ ε (u ε ) ≤ ε -2 u(2ε) 2 2ε 0 xdx = 2u(2ε) 2 → 0.
So, the l.h.s. of (E.17) has limit 0 when ε ↓ 0. We have proved that the set V 0 of functions in V ∞ equal to zero near zero, is a dense subset of V. Define ϕ N as in (E.7)
Given u ∈ V 0 , set u N := uϕ N . As before, u N → u in H, is u N = u ϕ N + uϕ N , xu ϕ N → xu in L 2 (R + ), and it remains to prove that xuϕ N → 0 in L 2 (R + ). But ϕ N is equal to 1/x over its support, so that when N ↑ ∞:
(E.19) x 1/2 uϕ N 2 L 2 (R+) = eN N x -1 u 2 (x)dx ≤ ∞ N u 2 (x)dx → 0.
The claim is proved.
1222 log(x/N ) if x ∈ [N, eN ], 0 if x > eN .Given u ∈ V 0 , set u N := uϕ N . Then u N ∈ H and, by a dominated convergenceargument, u N → u in H. The weak derivative of u N is u N = u ϕ N + uϕ N . By aThis manuscript is for review purposes only.dominated convergence argument, xu ϕ N → xu in L 2 (R + ). It remains to prove that xuϕ N → 0 in L 2 (R + ). But ϕ N is equal to 1/x over its support, so that(E.8) xuϕ N (x)dx → 0 when N ↑ ∞.The claim is proved.E.2. The CIR setting. In the Cox-Ingersoll-Ross model[START_REF] Cox | A theory of the term structure of interest rates[END_REF] the stochastic process satisfies(E.9) ds(t) = θ(µ -s(t))dt + σ √ s dW (t), t ≥ 0We assume the coefficients θ, µ and σ to be constant and positive. The associated PDE is given by (E.10) Au := -θ(µ -x)u -1 2 xσ 2 u = 0 (x, t) ∈ R + × (0, T ), u(x, T ) = u T (x) x ∈ R + .
Au(x)v(x)dx = a(u, v) with (E.11) a(u, v) := θ ∞ 0 (µ -x)u (x)v(x)dx + (x)v(x)dx.So one should take V of the form (E.12)V := {u ∈ H; √ xu (x) ∈ L 2 (R + )}.We next determine H by requiring that the bilinear form is continuous; by the Cauchy-(x)v(x)dx ≤ x 1/2 u 2 x -1/2 v 2 ; ∞ 0 xu (x)v(x)dx ≤ x 1/2 u 2 x 1/2 v 2 .
and u ∈ V ∞ , define u ε (x) as in (E.3). Then u ε ∈ V ∞ . By the dominated convergence theorem, u ε → u in H. Set for w ∈ V (E.16) Φ ε (w) := 2ε 0 xw (x) 2 dx.
The first author was suported by the Laboratoire de Finance des Marchés de l'Energie, Paris, France. Both authors were supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Gaspard Monge Program for optimization, operations research and their interactions with data sciences.
Appendix A. Regularity results by Lions and Magenes [START_REF] Lions | Non-homogeneous boundary value problems and applications[END_REF]Ch. 1].
Let H be a Hilbert space identified with its dual and scalar product denoted by (•, •). Let V be a Hilbert space, densely and continuously embedded in H, with duality product denoted by
and that for any u, v in W (0, T ), and 0 ≤ t < t ≤ T , the following integration by parts formula holds:
Let A(t) ∈ L ∞ (0, T ; L(V, V * )) satisfy the hypotheses of uniform continuity and semicoercivity, i.e., for some α > 0, λ ≥ 0, and c > 0: Proposition A.1 (first parabolic estimate). The parabolic equation (A.6) has a unique solution u ∈ W (0, T ), and for some c > 0 not depending on (f, u T ):
We next derive a stronger result with the hypothesis of semi-symmetry below:
is measurable with range in H, and for positive numbers α 0 , c A,1 :
Proposition A.2 (second parabolic estimate). Let (A.8) hold. Then the solution u ∈ W (0, T ) of (A.6) belongs to L ∞ (0, T ; V ), u belongs to L 2 (0, T ; H), and for some c > 0 not depending on (f, u T ):
Appendix B. Parabolic variational inequalities.
Let K ⊂ V be a non-empty, closed and convex set, K be the closure of K in H,
We consider parabolic variational inequalities as follows: find u ∈ W (0, T
Luca Castelli Aleardi
Gaspard Denis
email: gaspard.denis@hotmail.fr
Éric Fusy
email: fusy@lix.polytechnique.fr
Fast spherical drawing of triangulations: an experimental study of graph drawing tools *
We consider the problem of computing a spherical crossing-free geodesic drawing of a planar graph: this problem, as well as the closely related spherical parameterization problem, has attracted a lot of attention in the last two decades both in theory and in practice, motivated by a number of applications ranging from texture mapping to mesh remeshing and morphing. Our main concern is to design and implement a linear time algorithm for the computation of spherical drawings provided with theoretical guarantees. While not being aesthetically pleasing, our method is extremely fast and can be used as initial placer for spherical iterative methods and spring embedders. We provide experimental comparison with initial placers based on planar Tutte parameterization. Finally we explore the use of spherical drawings as initial layouts for (Euclidean) spring embedders: experimental evidence shows that this greatly helps to untangle the layout and to reach better local minima.
Introduction
In this work we consider the problem of computing in a fast and robust way a spherical layout (crossing-free geodesic spherical drawing) of a genus 0 simple triangulation. While several solutions have been developed in the computer graphics and geometry processing communities [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF][START_REF] Aigerman | Orbifold tutte embeddings[END_REF][START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Friedel | Unconstrained spherical parameterization[END_REF][START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Sheffer | Robust spherical parameterization of triangular meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF], very few works attempted to test the practical interest of standard tools [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Chambers | Drawing graphs in the plane with a prescribed outer face and polynomial area[END_REF][START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Kobourov | Non-euclidean spring embedders[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] from graph drawing developed for the non-planar (or non-Euclidean) case. On one hand, force-directed methods and iterative solvers are successful to obtain very nice layouts achieving several desirable aesthetic criteria, such as uniform edge lengths, low angle distortion or even the preservation of symmetries. Their main drawbacks rely on the lack of rigorous theoretical guarantees and on their expensive runtime costs, since their implementation requires linear solvers (for large sparse matrices) or sometimes non-linear optimization methods, making these approaches slower and less robust than combinatorial graph drawing tools. On the other hand, some well known tools such as linear-time grid embeddings [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF][START_REF] Schnyder | Embedding planar graphs on the grid[END_REF] are provided with worst-case theoretical guarantees allowing us to compute in a fast and robust way a crossing-free layout with bounded resolution: just observe that their practical performances allow processing several millions of vertices per second on a standard (single-core) CPU. Unfortunately, the resulting layouts are rather unpleasing and fail to achieve some basic aesthetic criteria that help readability (they often have long edges and large clusters of tiny triangles).
Motivation. It is commonly assumed that starting from a good initial layout (referred to as initial guess in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF]) is crucial for both iterative methods and spring embedders. A nice initial configuration, that is closer to the final result, should help to obtain nicer layouts (this was explored in [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] for the planar case). This could be even more relevant for the spherical case, where an initial layout having many edge-crossings can be difficult to unfold in order to obtain a valid spherical drawing. Moreover, the absence of natural constraints on the sphere prevents in some cases from eliminating all crossings before the layouts collapse to a degenerate configuration. One of the motivations of this work is to get benefit of a prior knowledge of the graph structure: if its combinatorics is known in advance, then one can make use of fast graph drawing tools and compute almost instantaneously a crossing-free layout to be used as starting point for running more expensive force-directed tools.
Related works. A first approach for computing a spherical drawing consists in projecting a (convex) polyhedral representation of the input graph on the unit sphere: one of the first works [START_REF] Shapiro | Polyhedron realization for shape transformation[END_REF] provided a constructive version of Steinitz theorem (unfortunately its time complexity was quadratic). Another very simple approach consists in planarizing the graph and to apply well known tools from mesh parameterizations (see Section 2.1 for more details): the main drawback is that, after spherical projection, the layout does not always remain crossing-free. Along another line of research, several works proposed generalizations of the barycentric Tutte parameterization to the sphere. Unlike the planar case, where boundary constraints guarantees the existence of crossing-free layouts, in the spherical case both the theoretical analysis and the practical implementations are much more challenging. Several works in the geometry processing community [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Friedel | Unconstrained spherical parameterization[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF] expressed the layout problem as an energy minimization problem (with non-linear constraints) and proposed a variety of iterative or optimization methods to solve the spherical Tutte equations: while achieving nice results on the tested 3D meshes, these methods lack rigorous theoretical guarantees on the quality of the layout in the worst case (for a discussion on the existence of non degenerate solutions of the spherical Tutte equations we refer to [START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF]). A very recent work [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF] proposed an adaptation of the approach based on the Euclidean orbifold Tutte parameterization [START_REF] Aigerman | Orbifold tutte embeddings[END_REF] to the spherical case: the experimental results are very promising and come with some theoretical guarantees (a couple of weak assumptions are still necessary to guarantee the validity of the drawing). However the layout computation becomes much more expensive since it involves solving non-linear problems, as reported in [START_REF] Aigerman | Spherical orbifold tutte embeddings[END_REF]. A few papers in the graph drawing domain also considered the spherical drawing problem. Fowler and Kobourov proposed a framework to adapt force-directed methods [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] to spherical geometry, and a few recent works [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Chambers | Drawing graphs in the plane with a prescribed outer face and polynomial area[END_REF][START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] extend some combinatorial tools to produce planar layouts of non-planar graphs: some of these tools can be combined to deal with the spherical case, as we will show in this work (as far as we know, there are not existing implementations of these algorithms).
Our contribution
• Our first main contribution is to design and implement a fast algorithm for the computation of spherical drawings. We make use of several ingredients [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] involving the well-known canonical orderings and can be viewed as an adaptation of the shift paradigm proposed by De Fraysseix, Pach and Pollack [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF]. As illustrated by our experiments, our procedure is extremely fast, with theoretical guarantees on both the runtime complexity and the layout resolution.
• While not being aesthetically pleasing (as in the planar case), our layouts can be use as initial vertex placement for iterative parameterization methods [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] or spherical spring embedders [START_REF] Kobourov | Non-euclidean spring embedders[END_REF]. Following the approach suggested by Fowler and Kobourov [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF], we compare our combinatorial algorithm with two standard initial placers used in previous existing works [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF][START_REF] Zayer | Curvilinear spherical parameterization[END_REF] relying on Tutte planar parameterizations: our experimental evaluations involve runtime performances and statistics concerning edge lengths.
• As an application, we show in Section 5 how spherical drawings can be used as initial layouts for (Euclidean) spring embedders: as illustrated by our tests, starting from a spherical drawing greatly helps to untangle the layout and to escape from bad local minima.
All our results are provided with efficient implementations and experimental evaluations on a wide collection of real-world and synthetic datasets.
Preliminaries
Planar graphs and spherical drawings. In this work we deal with planar maps (graphs endowed with a combinatorial planar embedding), and we consider in particular planar triangulations which are simple genus 0 maps where all faces are triangles (they correspond to the combinatorics underlying genus 0 3D triangle meshes). Given a graph G = (V, E) we denote by n = |V | (resp. by |F (G)|) the number of its vertices (resp. faces) and by N (v i ) the set of neighbors of vertex v i ; x(v i ) will denote the Euclidean coordinates of vertex v i .
The notion of planar drawings can be naturally generalized to the spherical case: the main difference is that edges are mapped to geodesic arcs on the unit sphere S 2 , which are minor arcs of great circles (obtained as intersection of S 2 with an hyperplane passing through the origin). A geodesic drawing of a map should preserve the cyclic order of neighbors around each vertex (such an embedding is unique for triangulations, up to reflexions of the sphere).
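In practice (e.g. for rendering a layout, or for sampling points along an edge when testing for crossings), points of such a geodesic arc can be obtained by spherical linear interpolation between the two endpoint unit vectors; the helper below is the standard formula and is not specific to any of the tools discussed in this paper.

```python
import numpy as np

def geodesic_point(a, b, t):
    """Point at parameter t in [0, 1] on the minor great-circle arc joining the unit
    vectors a and b (assumed not antipodal)."""
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))    # angle between the endpoints
    if omega < 1e-12:                                      # coincident endpoints
        return np.asarray(a, dtype=float)
    return (np.sin((1.0 - t) * omega) * np.asarray(a) +
            np.sin(t * omega) * np.asarray(b)) / np.sin(omega)
```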
As in the planar case, we would aim to obtain crossing-free geodesic drawings, where geodesic arcs do not intersect (except at their extremities). In the rest of this work we will make use of the term spherical drawings when referring to drawings satisfying the requirements above. Sometimes, the weaker notion of spherical parameterization (an homeomorphism between an input mesh and S 2 ) is considered for dealing with applications in the geometry processing domain (such as mesh morphing): while the bijectivity between the mesh and S 2 is guaranteed, there are no guarantees that the triangle faces are mapped to spherical triangles with no overlaps (obviously a spherical drawing leads to a spherical parameterization).
Initial Layouts
Part of this work will be devoted to compare our drawing algorithm (Section 3) to two spherical parameterization methods involving Tutte planar parameterization: both methods have been used as initial placers for more sophisticated iterative spherical layout algorithms.
Inverse Stereo Projection layout (ISP). For the first initial placer, we follow the approach suggested in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] (see Fig. 1). The faces of the input graph G are partitioned into two components homeomorphic to a disk: this is achieved by computing a vertex separator defining a simple cycle of small size (having O( √ n) vertices) whose removal produces a balanced partition (G S , G N ) of the faces of G. The two graphs G S and G N are then drawn in the plane using Tutte's barycentric method: boundary vertices lying on the separator are mapped on the unit disk. Combining a Moebius inversion with the inverse of a stereographic projection we obtain a spherical parameterization of the input graph: while preserving some of the aesthetic appeal of Tutte's planar drawings, this map is bijective but cannot produce in general a crossing-free spherical drawing (straight-line segments in the plane are not mapped to geodesics by inverse stereographic projection). In our experiments we adopt a growing-region heuristic to compute a simple separating cycle: while not having theoretical guarantees, our approach is simple to implement and very fast, achieving balanced partitions in practice (separators are of size roughly Θ( √ n) and the balance ratio
\min(|F(G_S)|,\, |F(G_N)|) \, / \, |F(G)|
is always between 0.39 and 0.49 for the tested data) 1 .
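A minimal Java sketch of the projection step is given below; it assumes the two planar Tutte layouts have already been computed with the separator mapped to the unit circle, uses a plain circle inversion as a stand-in for the Moebius inversion mentioned above, and all names are ours.

    // Inverse stereographic projection (from the north pole) of a planar point (x, y):
    // points inside the unit disk land on the southern hemisphere, points outside on the northern one.
    static double[] inverseStereo(double x, double y) {
        double d = 1.0 + x*x + y*y;
        return new double[]{ 2.0*x / d, 2.0*y / d, (x*x + y*y - 1.0) / d };
    }

    // Circle inversion z -> z / |z|^2, sending the disk layout of the second component
    // outside the unit disk before it is projected to the opposite hemisphere.
    static double[] invert(double x, double y) {
        double r2 = x*x + y*y;
        return new double[]{ x / r2, y / r2 };
    }

In this simplified picture, vertices of G S are projected directly and vertices of G N are inverted first; the shared separator vertices lie on the unit circle and are thus mapped to the equator in both cases.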
Polar-to-Cartesian layout (PC). The approach adopted in [START_REF] Zayer | Curvilinear spherical parameterization[END_REF] consists in planarizing the graph by cutting the edges along a simple path from a south pole v S to a north pole v N . A planar rectangular layout can be computed by applying standard Tutte parameterization with respect to the azimuthal angle θ ∈ (0, 2π) and to the polar angle φ ∈ [0, π]: the spherical layout, obtained by the polar-to-cartesian projection, is bijective but not guaranteed to be crossing-free.
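The projection used here is the standard polar-to-Cartesian map; a small Java helper (ours, purely illustrative) reads as follows.

    // Maps the rectangular Tutte coordinates (theta, phi) to the unit sphere,
    // with theta in (0, 2*pi) the azimuthal angle and phi in [0, pi] the polar angle.
    static double[] polarToCartesian(double theta, double phi) {
        return new double[]{ Math.sin(phi) * Math.cos(theta),
                             Math.sin(phi) * Math.sin(theta),
                             Math.cos(phi) };
    }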
Spherical drawings and parameterizations
The spherical layouts described above can be used as an initial guess for more sophisticated iterative schemes and force-directed methods for computing spherical drawings. For the sake of completeness we provide an overview of the algorithms that will be tested in Section 4.
Iterative relaxation: projected Gauss-Seidel. The first method can be viewed as an adaptation of the iterative scheme solving Tutte equations (see Fig. 1). This scheme consists in moving points on the sphere in tangential direction in order to minimize the spring energy
E = \frac{1}{2} \sum_{i=1}^{n} \sum_{j \in N(i)} w_{ij} \, \| x(v_i) - x(v_j) \|^2 \qquad (1)
with the only constraint ||x(v i )|| = 1 for i = 1 . . . n (in this work we consider uniform weights w ij , as in Tutte's work). As opposed to the planar case, there are no boundary constraints on the sphere, which makes the resulting layouts collapse in many cases to degenerate solutions. As observed in [START_REF] Gotsman | Fundamentals of spherical parameterization for 3d meshes[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] this method does not always converge to a valid spherical drawing, and its practical performance strongly depends on the geometry of the initial layout x 0 . While not having theoretical guarantees, this method is quite fast, allowing to quickly decrease the residual error: it can thus be used in a first phase and combined with more stable iterative schemes, leading in practice to better convergence results [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] (still lacking rigorous theoretical guarantees).
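One sweep of this scheme can be written as in the following Java sketch (uniform weights w ij = 1/deg(v i ); the array-based adjacency structure and the damping parameter lambda are assumptions of this illustration, see also the pseudo-code reported in Fig. 1).

    // One projected Gauss-Seidel sweep: each vertex is pulled toward the average of its
    // neighbors (damped by lambda) and reprojected on the unit sphere; positions are updated in place.
    static void projectedGaussSeidelSweep(double[][] pos, int[][] neighbors, double lambda) {
        for (int i = 0; i < pos.length; i++) {
            double[] s = { (1 - lambda) * pos[i][0], (1 - lambda) * pos[i][1], (1 - lambda) * pos[i][2] };
            double w = lambda / neighbors[i].length;   // uniform weights w_ij = 1/deg(v_i)
            for (int j : neighbors[i]) {
                s[0] += w * pos[j][0];
                s[1] += w * pos[j][1];
                s[2] += w * pos[j][2];
            }
            double norm = Math.sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2]);
            if (norm > 0) {                            // reproject on the sphere
                pos[i][0] = s[0] / norm;
                pos[i][1] = s[1] / norm;
                pos[i][2] = s[2] / norm;
            }
        }
    }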
Alexa's method. In order to avoid the collapse of the layout, without introducing artificial constraints, Alexa [START_REF] Alexa | Merging polyhedral shapes with scattered features[END_REF] modified the iterative relaxation above by penalizing long edges (which tend to move all vertices into the same hemisphere). More precisely, the vertex
v i is moved according to a displacement

\delta_i = \frac{c}{\deg(v_i)} \sum_{j} \big( x(v_i) - x(v_j) \big) \, \| x(v_i) - x(v_j) \|

and then reprojected on the sphere. The parameter c regulates the step length, and can be chosen to be proportional to the inverse of the longest edge incident to a vertex, improving the convergence speed.
(Spherical) Spring Embedders. While spring embedders are originally designed to produce 2D or 3D layouts, one can adapt them to non-Euclidean geometries. We have implemented the standard spring-electrical model introduced in [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] (referred to as FR), and the spherical version following the framework described by Kobourov and Wampler [START_REF] Kobourov | Non-euclidean spring embedders[END_REF] (called Spherical FR). As in [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF] we compute attractive forces (between adjacent vertices) and repulsive forces (for any pair of vertices) acting on vertex u, defined by:
F_a(u) = \sum_{(u,v) \in E} \frac{\| x(u) - x(v) \|}{K} \, \big( x(u) - x(v) \big), \qquad F_r(u) = \sum_{v \in V,\, v \neq u} \frac{-C K^2 \, \big( x(v) - x(u) \big)}{\| x(u) - x(v) \|^2}
where the values C (the strength of the forces) and K (the optimal distance) are scale parameters. In the spherical case, we shift the repulsive forces by a constant term, making the force acting on pairs of antipodal vertices zero.
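For illustration, the Euclidean accumulation of these forces can be organized as in the Java sketch below (exact pairwise computation); the data layout and the sign conventions are ours and follow the standard spring-electrical model (attraction of magnitude d^2/K toward the neighbors, repulsion of magnitude C K^2/d away from all other vertices), and the constant shift used in the spherical variant is omitted.

    // Accumulates the FR displacement acting on vertex u (exact O(n) pass over all vertices).
    static double[] frDisplacement(int u, double[][] pos, int[][] neighbors, double C, double K) {
        double[] disp = new double[3];
        for (int v : neighbors[u]) {                    // attractive part, adjacent vertices only
            double[] d = diff(pos[v], pos[u]);
            double f = length(d) / K;                   // f * d has magnitude |d|^2 / K
            for (int k = 0; k < 3; k++) disp[k] += f * d[k];
        }
        for (int v = 0; v < pos.length; v++) {          // repulsive part, all other vertices
            if (v == u) continue;
            double[] d = diff(pos[u], pos[v]);
            double len2 = Math.max(d[0]*d[0] + d[1]*d[1] + d[2]*d[2], 1e-12);
            double f = C * K * K / len2;                // f * d has magnitude C*K^2 / |d|
            for (int k = 0; k < 3; k++) disp[k] += f * d[k];
        }
        return disp;
    }
    static double[] diff(double[] a, double[] b) { return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    static double length(double[] a) { return Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]); }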
3 Fast spherical embedding with theoretical guarantees: SFPP layout
We now provide an overview of our algorithm for computing a spherical drawing of a planar triangulation G in linear time, called SFPP layout (the main steps are illustrated in Fig. 2). We make use of an adaptation of the shift method used in the incremental algorithm of de Fraysseix, Pach and Pollack [START_REF] De Fraysseix | How to draw a planar graph on a grid[END_REF] (referred to as FPP layout): our solution relies on the combination of several ideas developed in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]. For the sake of completeness, a more detailed presentation is given in the Appendix.
Mesh segmentation. Assuming that there are two non-adjacent faces f N and f S , one can find 3 disjoint and chord-free paths P 0 , P 1 and P 2 from f S to f N (planar triangulations are 3-connected). Denote by u N 0 , u N 1 and u N 2 the three vertices of f N on P 0 , P 1 and P 2 (define similarly the three neighbors u S 0 , u S 1 , u S 2 of the face f S ). We first compute a partition of the faces of G into 3 regions, cutting G along the paths above and removing f S and f N . We thus obtain three quasi-triangulations G C 0 , G C 1 and G C 2 , which are planar maps whose inner faces are triangles, and where the edges on the outer boundary are partitioned into four sides. The first pair of opposite sides consists of a single edge each (drawn as vertical segments in Fig. 2), while the remaining pair of opposite sides contains the vertices lying on P i and P i+1 respectively (indices being modulo 3): according to these definitions, G C i and G C i+1 share the vertices lying on P i+1 (drawn as a path of horizontal segments in Fig. 2).
Grid drawing of rectangular frames.
We apply the algorithm described in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF] to obtain three rectangular layouts of G C 0 , G C 1 and G C 2 : this algorithm first separates each G C i into two sub-graphs by removing a so-called river: an outer-planar graph consisting of a face-connected set of triangles which corresponds to a simple path in the dual graph, starting at f S and going toward f N . The two sub-graphs are then processed making use of the canonical labeling defined in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF]: the resulting layouts are stretched and then merged with the set of edges in the river, in order to fit into a rectangular frame. Just observe that in our case a pair of opposite sides only consists of two edges, which leads to an algorithm considerably simpler to implement in practice. Finally, we apply the two-phases adaptation of the shift algorithm described in [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF] to obtain a planar grid drawing of each map G C i , such that the positions of vertices on the path P i in G C i match the positions of the corresponding vertices on P i in G C i+1 . The grid size of the drawing of G C i is O(n) × O(n) (using the fact that the two opposite sides (u N i , . . . , u S i ) and (u N i+1 , . . . , u S i+1 ) of G C i are at distance 1).
Spherical layout. To conclude, we glue together the drawings of G C 0 , G C 1 and G C 2 computed above in order to obtain a drawing of G on a triangular prism. By a translation within the 3D ambient space we can make the origin coincide with the center of mass of the prism (upon seeing it as a solid polyhedron). Then a central projection from the origin maps each vertex of the prism to a point on the sphere: each edge (u, v) is mapped to a geodesic arc, obtained by intersecting the sphere with the plane passing through the origin and the segment joining u and v on the prism (crossings are forbidden since the map is bijective).
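The gluing and projection step is elementary; a Java sketch is given below (with hypothetical names, and using the vertex centroid as a simple stand-in for the center of mass of the solid prism).

    // Centers the prism drawing and projects every vertex onto the unit sphere (central projection).
    static void centralProjection(double[][] pos) {
        double[] c = new double[3];
        for (double[] p : pos)
            for (int k = 0; k < 3; k++) c[k] += p[k] / pos.length;
        for (double[] p : pos) {
            for (int k = 0; k < 3; k++) p[k] -= c[k];   // translate so the center is at the origin
            double n = Math.sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
            for (int k = 0; k < 3; k++) p[k] /= n;      // radial projection onto the sphere
        }
    }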
Theorem 1. Let G be a planar triangulation of size n, having two non-adjacent faces f S and f N . Then one can compute in O(n) time a spherical drawing of G, where edges are drawn as (non-crossing) geodesic arcs of length at least Ω(1/n).
Some heuristics. We use as a last initial placer our combinatorial algorithm of Section 3. For the computation of the three disjoint paths P 0 , P 1 and P 2 , we adopt again a heuristic based on a growing-region approach: while not having theoretical guarantees on the quality of the partition and the length of the paths, our results suggest that well-balanced partitions are achieved for most tested graphs. A crucial point to obtain a nice layout resides in the choice of the canonical labeling (its computation is performed with an incremental approach based on vertex removal). A bad canonical labeling could lead to unpleasant configurations, where a large number of vertices on the boundaries of the bottom and top sub-regions of each graph G i are drawn along the same direction: as side effects, a few triangles use a lot of area, and the set of interior chordal edges in the river can be highly stretched, especially those close to the south and north poles. To partially address this problem, we design a few heuristics during the computation of the canonical labeling, in order to obtain more balanced layouts. Firstly, we delay the conquest of the vertices which are close to the south and north poles: this way these extremal vertices are assigned low labels (in the canonical labeling), leading to smaller and thicker triangles close to the poles. Moreover the selection of the vertices is done so as to keep the height of the triangle caps more balanced in the final layout. Finally, we adjust the horizontal stretch of the edges, to get more equally spaced vertices on the paths P 0 , P 1 and P 2 .
Experimental results and comparison
Experimental settings and datasets. In order to obtain a fair comparison of runtime performances, we have written pure Java implementations of all algorithms and drawing methods presented in this work. 2 Our tests involve several graphs including the 1-skeletons of 3D models (made available by the AIM@SHAPE repository) as well as random planar triangulations obtained with a uniform random sampler [START_REF] Poulalhon | Optimal coding and sampling of triangulations[END_REF].
In our tests we take as an input the combinatorial structure of a planar map encoded in OFF format: nevertheless we do not make any assumption on the geometric realization of the input triangulation in 2D or 3D space. Moreover, observe that the fact of knowing the combinatorial embedding of the input graph G (the set of its faces) is a rather weak assumption, since such an embedding is essentially unique for planar triangulations and it can be easily retrieved from the graph connectivity in linear time [START_REF] Nagamochi | A simple recognition of maximal planar graphs[END_REF]. We run our experiments on a HP EliteBook, equipped with an Intel Core i7 2.60GHz (with Ubuntu 16.04, Java 1.8 64-bit, using a single core, and 4GB of RAM for the JVM).
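Reading such an input is straightforward; a minimal Java reader for the connectivity of an ASCII OFF file (ignoring the coordinates and assuming no comment lines; class and method names are ours) could look as follows.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Locale;
    import java.util.Scanner;

    // Minimal reader for the combinatorics of an ASCII OFF file: returns the faces as vertex indices.
    public final class OffReader {
        public static int[][] readFaces(String path) throws IOException {
            Scanner in = new Scanner(Files.newBufferedReader(Paths.get(path)));
            in.useLocale(Locale.US);
            if (!in.next().equals("OFF")) throw new IOException("not an OFF file");
            int nv = in.nextInt(), nf = in.nextInt();
            in.nextInt();                                     // number of edges (often 0), unused
            for (int i = 0; i < 3 * nv; i++) in.nextDouble(); // skip the 3D coordinates
            int[][] faces = new int[nf][];
            for (int f = 0; f < nf; f++) {
                int deg = in.nextInt();                       // 3 for triangulations
                faces[f] = new int[deg];
                for (int k = 0; k < deg; k++) faces[f][k] = in.nextInt();
            }
            return faces;
        }
    }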
Quantitative evaluation of aesthetic criteria
In order to obtain a quantitative evaluation of the layout quality we compute the spring energy E defined by Eq. 1 and two metrics measuring the edge lengths and the triangle areas. As suggested in [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] we compute the average percent deviation of edge lengths, according to
el := 1 - \frac{1}{|E|} \sum_{e \in E} \frac{| l_g(e) - l_{avg} |}{\max(l_{avg},\, l_{max} - l_{avg})}
where l g (e) denotes the geodesic length of the edge e, and l avg (resp. l max ) is the average geodesic edge length (resp. maximal geodesic edge length) in the layout. In a similar manner we compute the average percent deviation of triangle areas, denoted by a. The metrics el and a take values in [0 . . . 1], and higher values indicate more uniform edge lengths and triangle areas 3 .

Table 1: This table reports the runtime performance of all steps involved in the computation of the SFPP layout obtained with the algorithm of Section 3. The overall cost (red chart) includes the preprocessing phase (computing the three rivers and the canonical labeling) and the layout computation (running the two-phases shift algorithm, constructing and projecting the prism). The last two columns report the timing cost for solving the linear systems for the ISP and PC layouts (see blue/green charts), using the MTJ conjugate gradient solver. All results are expressed in seconds.
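As a concrete illustration, the edge-length statistic el defined above can be computed from the spherical positions as in the following Java sketch (edges given as index pairs; names are ours).

    // Average percent deviation of the geodesic edge lengths, following the definition of el above.
    static double edgeLengthStatistic(double[][] pos, int[][] edges) {
        double[] len = new double[edges.length];
        double avg = 0, max = 0;
        for (int e = 0; e < edges.length; e++) {
            double[] p = pos[edges[e][0]], q = pos[edges[e][1]];
            double d = Math.max(-1.0, Math.min(1.0, p[0]*q[0] + p[1]*q[1] + p[2]*q[2]));
            len[e] = Math.acos(d);                     // geodesic length on the unit sphere
            avg += len[e] / edges.length;
            max = Math.max(max, len[e]);
        }
        double dev = 0;
        for (double l : len) dev += Math.abs(l - avg) / Math.max(avg, max - avg);
        return 1.0 - dev / edges.length;
    }

The area statistic a is computed analogously, replacing geodesic edge lengths by spherical triangle areas.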
Timing performances: comparison
The runtime performances reported in Table 1 clearly show that our SFPP algorithm has an asymptotic linear-time behavior and in practice is much faster than other methods based on planar parameterization. For instance the ISP layout adopted in [START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF] requires solving large linear systems: among the tested Java libraries (MTJ, Colt, PColt, Jama), we found that the linear solvers of MTJ have the best runtime performances for the solution of large sparse linear systems (in our tests we run the conjugate gradient solver, setting a numeric tolerance of 10^-6). Observe that a slightly better performance can be achieved with more sophisticated schemes or tools (e.g. Matlab solvers) as done in [START_REF] Aigerman | Orbifold tutte embeddings[END_REF][START_REF] Saba | Practical spherical embedding of manifold triangle meshes[END_REF]. Nevertheless the timing cost still remains much larger than ours: as reported in [START_REF] Aigerman | Orbifold tutte embeddings[END_REF] the orbifold parameterization of the dragon graph requires 19 seconds (for solving the linear systems, on a 3.5GHz Intel i7 CPU).
Evaluation of the layout quality: interpretation and comparisons
All our tests confirm that starting with random vertex locations is almost always a bad choice, since iterative methods lead in most cases to a collapse before reaching a valid spherical drawing (spherical spring embedders do not have this problem, but cannot always eliminate edge crossings, see Fig. 4). Our experiments (see Fig. 3 and 4) also confirm two well-known facts: Alexa's method is always more robust compared to the projected Gauss-Seidel relaxation, and the ISP provides a better starting point compared to the PC layout (one can more often converge towards a non-crossing configuration before collapsing, since vertices are distributed in a more balanced way on the sphere).

Figure 4: Spherical layouts of a random triangulation with 1K faces. While the projected Gauss-Seidel relaxation always collapses, Alexa method is more robust, but also fails when starting from a random initial layout. When using the ISP, PC or our SFPP layouts Alexa method converges toward a crossing-free layout: starting from the SFPP layout allows getting the same aesthetic criteria as the ISP or the PC layouts (with even fewer iterations). Spring embedders [START_REF] Fowler | Planar preprocessing for spring embedders[END_REF] (Spherical FR) prevent the layout from reaching a degenerate configuration, but have some difficulties to unfold it. The charts on the right show the plot of the energy, edge length and area statistics computed when running 800 iterations of Alexa method (we compute these statistics every 10 iterations).
Layout of mesh-like graphs. When computing the spherical layout of mesh-like structures, the ISP layout seems to be a good choice as initial guess (Fig. 3 shows the layout of the dog mesh). The drawing is rather pleasing, capturing the structure of the input graph and being not too far from the final spherical Tutte layout: we mention that the results obtained in our experiments strongly depend on the quality of the separator cycle. Our SFPP layout clearly fails to achieve similar aesthetic criteria: nevertheless, even though it is not pleasing in the first few iterations, it is very often possible to reach a valid (crossing-free) final configuration without collapsing, whose quality is very close, in terms of energy, edge length and area statistics, to the one obtained starting from the ISP layout, as confirmed by the charts in Fig. 3. As we observed for most of the tested graphs, when starting from the SFPP layout the number of iterations required to reach a spherical drawing with good aesthetics is larger than when starting from an ISP layout. But the convergence speed can be slightly better in a few cases: Fig. 3 shows a valid spherical layout computed after 1058 iterations of the Gauss-Seidel relaxation (1190 iterations are required when starting from the ISP layout).
The charts in Fig. 3 show that our SFPP has higher values of the edge lengths and area statistics in the first iterations: this reflects the fact that our layout has a polynomial resolution and thus triangles have a bounded aspect ratio and side lengths. In the case of the ISP parameterization there could be a large number of tiny triangles clustered in some small regions (the size of coordinates could be exponentially small as n grows).
Layout of random triangulations. When drawing random triangulations the behavior is rather different: the performances obtained starting from our SFPP layout are often better than the ones achieved using the ISP layout. As illustrated by the pictures in Fig. 4 and 6, Alexa's method is able to reach a non-crossing configuration requiring fewer iterations when starting from our SFPP layout: this is observed in most of our experiments, and clearly confirmed by the plots of the energy and statistics el and a that converge faster to the values of the final layout (see charts in Fig. 4).
Spherical preprocessing for Euclidean spring embedders
In this section we investigate the use of spherical drawings as initial placers for spring embedders in 3D space. The fact of recovering the original topological shape of the graph, at least in the case of graphs that have a clear underlying geometric structure, is an important and well-known ability of spring embedders. This occurs for the case of regular graphs used in Geometry Processing (the pictures in Fig. 5 show a few force-directed layouts of the cow graph), and also for many mesh-like complex networks involved in physical and real-world applications (such as the networks made available by the Sparse Matrix Collection [START_REF] Davis | The university of florida sparse matrix collection[END_REF]). In the case of uniformly random embedded graphs (called maps) of a large size n on a fixed surface S, the spring embedding algorithms (applied in the 3D ambient space) yield graph layouts that greatly capture the topological and structural features of the map (the genus of the surface is visible, the "central charge" of the model is reflected by the presence of spikes, etc.); a great variety of such representations can be seen at the very nice simulation gallery of Jérémie Bettinelli ( http://www.normalesup.org/∼bettinel/simul.html). While common software and libraries (e.g. GraphViz [START_REF] Ellson | Graphviz -open source graph drawing tools[END_REF], Gephi [START_REF] Bastian | Gephi: An open source software for exploring and manipulating networks[END_REF], GraphStream) for graph visualization provide implementations of many force-directed models, as far as we know they never try to exploit the strong combinatorial structure of surface-like graphs.
Discussion of experimental results. Our main goal is to show empirically that starting from a nice initial shape that captures the topological structure of the input graph greatly improves the convergence speed and layout quality.
In our first experiments (see Figures 5 and 6) we run our 3D implementation of the spring electrical model FR [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF], where we make use of exact force computation and we adopt the cooling system proposed in [START_REF] Walshaw | A multilevel algorithm for force-directed graph-drawing[END_REF] (with repulsive strength C = 0.1). We also perform some tests with the Gephi implementation of the Yifan Hu layout algorithm [START_REF] Hu | Efficient, high-quality force-directed graph drawing[END_REF], which is a more sophisticated spring-embedder with fast approximate calculation of repulsive forces (see the layouts of Fig. 7). In order to quantify the layout quality, we evaluate the number of self-intersections of the resulting 3D shape during the iterative computation process 4 .
To be more precise, we plot (over the first 100 iterations) the number of triangle faces that have a collision with a non-adjacent triangle in 3D space. The charts of Fig. 5 and 6 clearly confirm the visual intuition suggested by the pictures: when starting from a good initial shape the force-directed layouts seem to evolve according to an inflating process, which leads to a better and faster untangling of the graph layout. This phenomenon is observed in all our tests (on several mesh-like graphs and synthetic data): experimental evidence shows that an initial spherical drawing is a good starting point helping the spring embedder to reach nicer layout aesthetics and also to improve the runtime performances. Finally observe that from the computational point of view the computation of a spherical drawing has a negligible cost: iterative schemes (e.g. Alexa method) require O(n) time per iteration, which must be compared to the cost of force-directed methods, requiring between O(n log n) and O(n 2 ) time per iteration (depending on the repulsive force calculation scheme). This is also confirmed in practice, according to the timing costs reported in Fig. 5, 6 and 7.
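For reference, the brute-force counting can be organized as in the Java sketch below (helper names are ours); the triangle-triangle test checks each edge of one triangle against the other via the Moeller-Trumbore segment test, two faces sharing a vertex are treated as adjacent, and the degenerate coplanar case is ignored, which is enough for this diagnostic.

    // Counts the triangle faces that collide with at least one non-adjacent face (brute force).
    static int countCollidingFaces(double[][] pos, int[][] faces) {
        boolean[] hit = new boolean[faces.length];
        for (int a = 0; a < faces.length; a++)
            for (int b = a + 1; b < faces.length; b++)
                if (!shareVertex(faces[a], faces[b]) && trianglesIntersect(pos, faces[a], faces[b])) {
                    hit[a] = true; hit[b] = true;
                }
        int count = 0;
        for (boolean h : hit) if (h) count++;
        return count;
    }

    static boolean shareVertex(int[] f, int[] g) {
        for (int u : f) for (int v : g) if (u == v) return true;
        return false;
    }

    static boolean trianglesIntersect(double[][] p, int[] f, int[] g) {
        for (int k = 0; k < 3; k++)
            if (segmentHitsTriangle(p[f[k]], p[f[(k+1)%3]], p[g[0]], p[g[1]], p[g[2]])
             || segmentHitsTriangle(p[g[k]], p[g[(k+1)%3]], p[f[0]], p[f[1]], p[f[2]])) return true;
        return false;
    }

    // Moeller-Trumbore test restricted to the segment [s0, s1]; coplanar configurations return false.
    static boolean segmentHitsTriangle(double[] s0, double[] s1, double[] a, double[] b, double[] c) {
        double[] dir = sub(s1, s0), e1 = sub(b, a), e2 = sub(c, a);
        double[] h = cross(dir, e2);
        double det = dot(e1, h);
        if (Math.abs(det) < 1e-12) return false;       // segment parallel to the triangle plane
        double inv = 1.0 / det;
        double[] s = sub(s0, a);
        double u = inv * dot(s, h);
        if (u < 0 || u > 1) return false;
        double[] q = cross(s, e1);
        double v = inv * dot(dir, q);
        if (v < 0 || u + v > 1) return false;
        double t = inv * dot(e2, q);
        return t >= 0 && t <= 1;                       // intersection lies within the segment
    }

    static double[] sub(double[] a, double[] b) { return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] }; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static double[] cross(double[] a, double[] b) {
        return new double[]{ a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0] };
    }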
Concluding remarks
One main feature of our SFPP method is that it always computes a crossing-free layout: unfortunately edge crossings can appear during the beautification process, when running iterative algorithms (projected Gauss-Seidel iteration, Alexa method or more sophisticated schemes). It could be interesting to adapt to the spherical case existing methods [START_REF] Simonetto | Impred: An improved force-directed algorithm that prevents nodes from crossing edges[END_REF] (which are designed for the Euclidean case) whose goal is to dissuade edge crossings: their application could produce a sequence of layouts that converge to the final spherical drawing while always preserving the map. The promising results of Section 5 suggest that starting from a nice initial layout could lead to faster algorithms and better results for the case of mesh-like structures. It could be interesting to investigate whether this phenomenon arises for other classes of graphs, such as quadrangulated or 3-connected planar graphs, or non-planar (e.g. toroidal) graphs, for which fast drawing methods also exist [START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Gonçalves | Toroidal maps: Schnyder woods, orthogonal surfaces and straight-line representations[END_REF].
Figure 8: These pictures illustrate the computation of a spherical drawing using our SFPP algorithm.
A Appendix
A.1 Proof of Theorem 1
For the sake of completeness we provide a detailed description of all the steps of the linear-time algorithm computing a SFPP layout of a triangulation G of the sphere, as sketched in Section 3: we combine and adapt many ingredients developed in [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF][START_REF] Castelli-Aleardi | Canonical ordering for triangulations on the cylinder, with applications to periodic straight-line drawings[END_REF][START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF].
Cutting G along three disjoint paths. Let G be a triangulation on the sphere. We assume that there exist two non-adjacent faces f and f' (with no common incident vertices). If not, one can force the existence of two such faces by adding a new triangle t within a face (and adding edges so as to triangulate the area between t and the face contour). The first step is to compute 3 vertex-disjoint chord-free paths that start at each of the 3 vertices of f S and end at each of the 3 vertices of f N .
Schnyder woods [START_REF] Schnyder | Embedding planar graphs on the grid[END_REF][START_REF] Brehm | 3-orientations and Schnyder 3-tree-decompositions[END_REF] provide a nice way to achieve this. Taking f as the outer face, where v 0 , v 1 , v 2 are the outer vertices in clockwise (CW) order, and inserting a vertex v of degree 3 inside f', we compute a Schnyder wood of the obtained triangulation, and let P 0 , P 1 , P 2 be the directed paths in respective colors 0, 1, 2 starting from v: by well-known properties of Schnyder woods, these paths are chord-free and are disjoint except at v, and they end at the 3 vertices v 0 , v 1 , v 2 of f. Deleting v and its 3 incident edges (and thus deleting the starting edge in each of P 0 , P 1 , P 2 ) we obtain a triple of disjoint chord-free paths from f' to f. Let u 0 , u 1 , u 2 be the vertices on f' incident to P 0 , P 1 , P 2 .
As in [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF] we call 4ST triangulation a graph embedded in the plane with a polygonal outer contour and with triangular inner faces, with 4 distinguished vertices w 0 , w 1 , w 2 , w 3 (in cw order) incident to the outer face, and such that each of the 4 outer paths delimited by the marked outer vertices is chord-free. The external paths between w 0 and w 1 and between w 2 and w 3 are called vertical, and the two other ones are called horizontal. A 4ST is called narrow if the two vertical paths have only one edge. For i ∈ {0, 1, 2} let G i be the narrow 4ST whose outer contour (indices are taken modulo 3) is made of the path P i , the edge {u i , u i+1 }, the path P i+1 , and the edge {v i , v i+1 }.
Note that G can be seen as a prism, with f and f' as the two triangular faces and with G 0 , G 1 , G 2 occupying the 3 lateral quadrangular faces of the prism.
Computing compatible drawings of the 3 components G 0 , G 1 , G 2 . A straight-frame drawing of a 4ST H is a straight-line drawing of H where the outer face contour is an axis-aligned rectangle, with the 4 sides of the rectangle corresponding to the 4 paths along the contour. The interspace-vector of each of the 4 paths is the vector giving the lengths (in the drawing) of the successive edges along the path, where the path is traversed from left to right for the two horizontal ones and is traversed from bottom to top for the two vertical ones. In order to obtain a drawing of G on the prism (which then yields a geodesic crossing-free drawing on the sphere, using a central projection), we would like to obtain compatible straight-frame drawings of G 0 , G 1 , G 2 , i.e., such that for i ∈ {0, 1, 2} the interspace-vectors of P i in the drawing of G i and in the drawing of G i-1 are the same.
Using an adaptation, given in [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF], of the algorithm by Duncan et al. [START_REF] Duncan | Planar drawings of higher-genus graphs[END_REF], one gets the following result, where a vector of positive integers is called even if all its components are even, and the total of a vector is the sum of its components:
Lemma 1 (from [START_REF] Castelli-Aleardi | Periodic planar straight-frame drawings with polynomial resolution[END_REF]). Let H be a narrow 4ST with m vertices. Then one can compute in linear time a straight-frame drawing of H such that the interspace-vectors U = (u 1 , . . . , u p ) and V = (v 1 , . . . , v q ) of the two horizontal external paths are even, and the grid-size of the drawing is bounded by 4m × (4m + 1).
Moreover, for any pair U' = (u' 1 , . . . , u' p ) and V' = (v' 1 , . . . , v' q ) of even vectors such that U' ≥ U , V' ≥ V , and U' and V' have the same total s, one can recompute in linear time a straight-frame drawing of H such that the interspace vectors of the two horizontal external paths are respectively U' and V' , and the grid-size is 4s × (4s + 1).
For i ∈ {0, 1, 2} let k i be the number of vertices of G i . By the first part of Lemma 1, G i admits a straight-frame drawing, where P i and P i+1 are the two horizontal external paths, such that the interspace-vector U i along P i and the interspace-vector V i+1 along P i+1 are even, and the grid size is bounded by 4k i × (4k i + 1).
We let W i be the vector max(U i , V i ), let s i be the total of W i , and set s := max(s 0 , s 1 , s 2 ). It is easy to check that s ≤ 8n. We then let W' i be obtained from W i by adding s - s i to the last component. Then we set U' i and V' i to W' i . Note that we have U' i ≥ U i and V' i ≥ V i for i ∈ {0, 1, 2}, and moreover all the vectors U' 0 , V' 0 , U' 1 , V' 1 , U' 2 , V' 2 now have the same total, which is s. We can thus use the second part of Lemma 1 and obtain straight-frame drawings of G i (for i ∈ {0, 1, 2}) on the grid 4s × (4s + 1) where the interspace-vector for the bottom (resp. upper) horizontal external path is U' i (resp. V' i+1 ). Since U' i = V' i for i ∈ {0, 1, 2}, the drawings of G 0 , G 1 , G 2 are compatible and can thus be assembled to get a drawing of G on the prism (see Figure 8 for an example), which then yields a drawing on the unit sphere using a central projection (with the origin at the center of mass of the prism seen as a solid polyhedron). Note that the prism has its 3 lateral faces of area O(n × n), hence is at distance O(n) from the origin. Since every edge of G drawn on the prism clearly has length at least 1 (as in any straight-line drawing with integer coordinates) we conclude that after the central projection every edge of G has length Ω(1/n), as claimed in Theorem 1.
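The padding step used in this construction is elementary; a Java sketch (hypothetical names, U[i] and V[i] being the two interspace-vectors recorded along P i ) reads as follows.

    // Builds the padded interspace-vectors W'_i = max(U_i, V_i) with s - s_i added to the last entry.
    static int[][] padInterspaceVectors(int[][] U, int[][] V) {
        int[][] W = new int[3][];
        int[] total = new int[3];
        int s = 0;
        for (int i = 0; i < 3; i++) {
            W[i] = new int[U[i].length];
            for (int k = 0; k < W[i].length; k++) {
                W[i][k] = Math.max(U[i][k], V[i][k]);  // component-wise maximum
                total[i] += W[i][k];
            }
            s = Math.max(s, total[i]);
        }
        for (int i = 0; i < 3; i++)
            W[i][W[i].length - 1] += s - total[i];     // equalize the totals to s
        return W;                                      // used as both U'_i and V'_i
    }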
Remark. To improve the distribution of the points on the sphere, one aims at 3 paths P 0 , P 1 , P 2 such that the graphs G 0 , G 1 , G 2 are of similar sizes. A simple heuristic (using the approach based on Schnyder woods mentioned above) is to do the computation for every face f S non-adjacent to f N , and keep the one that minimizes the objective parameter \sum_{0 \le i < j \le 2} \big|\, |G_i| - |G_j| \,\big|.
Projected Gauss-Seidel(x 0 , λ, ε)
    r = 0;  // iteration counter
    do {
        for (i = 1; i ≤ n; i++) {
            s = (1 - λ) x r (v i ) + λ Σ j w ij x r (v j )
            x r+1 (v i ) = s / ||s||   // reproject on the unit sphere
        }
        r = r + 1;
    } while ( ||x r - x r-1 || > ε )
Figure 1: (left) Two spherical parameterizations of the gourd graph obtained via Tutte's planar parameterization. (right) The pseudo-code of the Projected Gauss-Seidel method.
Figure 2: Computation of a spherical drawing based on a prism layout of the gourd graph (326 vertices). Three vertex-disjoint chord-free paths lead to the partition of the faces of G into three regions which are each separated by one river (green faces). Our variant of the FPP algorithm allows to produce three rectangular layouts, where boundary vertex locations match on identified (horizontal) sides. One can thus glue the planar layouts to obtain a 3D prism: its central projection on the sphere produces a spherical drawing. Edge colors (blue, red and black) are assigned during the incremental computation of a canonical labeling [11], according to the Schnyder wood local rule.
Figure 3: These pictures illustrate the use of different initial placers as starting layouts for two iterative schemes on the dog graph (1480 vertices). For each initial layout, we first run 50 iterations of the projected Gauss-Seidel and Alexa method, and then we run the two methods until a valid spherical drawing (crossing-free) is reached. The charts below show the energy, area and edge length statistics obtained running 1600 iterations of the projected Gauss-Seidel and Alexa methods.
Figure 5: These pictures illustrate the use of spherical drawings as initial placers for force-directed methods: we compute the layouts of the cow graph (2904 vertices, 5804 faces) using our 3D implementation of the FR spring embedder [START_REF] Fruchterman | Graph drawing by force-directed placement[END_REF]. In the charts on the right we plot the number of colliding 3D triangles, over 100 iterations of the algorithm.
Figure 6: These pictures illustrate the use of spherical drawings as initial placers for the 3D version of the FR spring embedder [16], for a random planar triangulation with 5K faces.
Figure 7: The spherical drawings of the graphs in Fig. 5 and 6 are used as initial placers for the Yifan Hu algorithm [19]: we test the implementation provided by Gephi (after rescaling the layout by a factor 1000, we set an optimal distance of 10.0 and a parameter ϑ = 2.0).
The computation of small cycle separators for planar triangulations is a very challenging task. This work does not focus on this problem: we refer to recent results[START_REF] Fox-Epstein | Short and simple cycle separators in planar graphs[END_REF] providing the first practical implementations with theoretical guarantees.
Observe that one common metric considered in the geometric processing community is the (angle) distortion: in our case this metric cannot be taken into account since our input is a combinatorial structure (without any geometric embedding).
We compute the intersections between all pairs of non-adjacent triangles running a brute-force algorithm: the runtimes reported in Fig. 5 and 6 do not count the cost of computing the triangle collisions.
01761756 | en | [ "spi.meca.mema" ] | 2024/03/05 22:32:13 | 2017 | https://theses.hal.science/tel-01761756/file/GIBAUD_2017_archivage.pdf | Robin Gibaud
Étienne Guesnet
Pierre Lhuissier
email: pierre.lhuissier@simap.grenoble-inp.fr
Luc Salvo
Modeling Large Viscoplastic Strain in Multi-Material with the Discrete Element Method
Keywords: Discrete Element Method, Large Strain, Viscoplasticity, Multi-Material
In this paper, the Discrete Element Method (DEM) is used as a tool to phenomenologically model large compressive viscoplastic strain in metallic composites. The model uses pairwise attractive and repulsive forces between spherical particles. Large packings of particles collectively cope with the prescribed strain; the changes of neighbors model the irreversible strain in the material. Using the proposed calibration method of the model parameters, the macroscopic behavior mimics perfect plasticity in compression, with a strain rate sensitivity approximating a viscoplastic Norton law.
The error on flow stress and volume conservation is estimated for a single material. Three bi-material geometrical configurations are built: parallel, series and spherical inclusion. Macroscopic metrics (flow stress and shape factor of the inclusion) are compared with Finite Element Method (FEM) simulations.
The potential of the model, from a computing power point of view, is tested on a complex geometry, using a real microstructure of a crystalline/amorphous metallic composite, obtained by X-ray tomography.
"Je sers la science et c'est ma joie."
Basile, discipulus simplex À Mohammed Colin-Bois, et tous les autres.
General Abstract
Forming of multiphase materials involves complex mechanisms linked with the rheology, morphology and topology of the phases. From a numerical point of view, modeling such phenomena by solving the partial differential equation (PDE) system accounting for the continuous behavior of the phases can be challenging. The description of the motion and the interaction of numerous discontinuities, associated with the phases, can be conceptually delicate and computationally costly. In this PhD, the discrete element method (DEM) is used to phenomenologically model finite inelastic strain in multi-materials. This framework, natively suited for discrete phenomena, allows a flexible handling of morphological and topological changes.
Ad hoc attractive-repulsive interaction laws are designed between fictitious particles, collectively rearranging to model irreversible strain in continuous media. The numerical behavior of a packing of particles can be tuned to mimic key features of isochoric perfect viscoplasticity: flow stress, strain rate sensitivity, volume conservation. The results for compression tests of simple bi-material configurations, simulated with the DEM, are compared to the finite element method (FEM) and show good agreement. The model is extended to cope with tensile loads. A method for the detection of contact and self-contact events of physical objects is proposed, based on a local approximation of the free surfaces.
The potential of the methodology is tested on complex mesostructures obtained by X-ray tomography. The high-temperature compression of a dense metallic composite is modeled. The co-deformation can be observed at the length scale of the phases. Two cases of "porous" materials are considered. Firstly, the simulation of the compression and the tension of aluminum alloys with pores is investigated. These pores stem from the casting of the material; their closure and re-opening are modeled, including the potential coalescence occurring at large strain. Secondly, the compression of a metallic foam, with low relative density, is modeled. The compression up to densification involves numerous interactions between the arms.
Résumé global
La mise en forme de matériaux multiphasés comprend des mécanismes complexes en lien avec la rhéologie, la morphologie et la topologie des phases. Du point de vue numérique, la modélisation de ces phénomènes en résolvant les équations aux dérivées partielles (EDP) décrivant le comportement continu des phases n'est pas triviale. En effet, de nombreuses discontinuités associées aux phases se déplacent et peuvent interagir. Ces phénomènes peuvent être conceptuellement délicats à intégrer au modèle continu et coûteux en termes de calcul. Dans cette thèse, la méthode des éléments discrets (DEM) est utilisée pour modéliser phénoménologiquement les déformations finies inélastiques dans les multimatériaux.
Des lois d'interactions attractive-répulsive sont imposées à des particules fictives, dont les ré-arrangements collectifs modélisent les déformations irréversibles de milieux continus. Le comportement numérique des empilements de particules est choisi pour reproduire des traits caractéristiques de la viscoplasticité parfaite et isochore: contrainte d'écoulement, sensibilité à la vitesse de déformation, conservation du volume. Les résultats d'essais de compression de bi-matériaux simples, simulés avec la DEM, sont comparés à la méthode des éléments finis (FEM) et sont en bon accord. Le modèle est étendu pour pouvoir supporter des sollicitations de traction. Une méthode de détection de contacts et d'autocontacts d'objets physiques est proposée, basée sur l'approximation locale des surfaces libres.
Les capacités de la méthodologie sont testées sur des mésostructures complexes, obtenues par tomographie aux rayons X. La compression à chaud d'un composite métallique dense est modélisée. La co-déformation peut être observée à l'échelle spatiale des phases. Deux cas de matériaux "poreux" sont considérés. Premièrement la simulation de la compression puis traction d'alliages d'aluminium présentant des pores de solidifications: leur fermeture et ré-ouverture mécanique est modélisée, y compris leur coalescence à grande déformation. Deuxièmement la simulation de la compression de mousse métallique de faible densité: la compression jusqu'à densification provoque de nombreuses interactions entre les bras de matière.
Chapter 1
General Introduction
1.1 Introduction
This PhD is focused on the exploratory development of a numerical method to model the mechanical behavior of metallic alloys under large strain. Most of the efforts were dedicated to numerical issues arising in the design of the model. However, the inspiration of the work stems from practical and experimental issues. The study is aimed at the understanding of the deformation at the scale of the mesostructure of architectured materials.
Historically, the development of materials for structural applications allowed the fulfillment of increasingly demanding requirements. The limits for both service and manufacturing requirements have been pushed further by the introduction of novel materials and the improvement of existing ones. Since the 1960s, and the rise of composite materials [84, p.321], an active axis of development is the association of distinct phases. The design of composites typically allows compromises between mechanical properties, and to some extent reconciles contradictory behaviors.
In architectured materials, the development is focused on the control of the morphology, the distribution and the topology of the phases. Indeed, such materials assemble several monolithic materials, or materials and empty space, to meet challenging functional requirements. More specifically, metallic architectured materials display promising properties for structural applications. Complex and controlled architectures can be elaborated using for example casting, powder technology or additive manufacturing.
The study of large plastic and viscoplastic strains in architectured materials is of major interest, both from an engineering and a scientific point of view. Indeed, such finite inelastic strains can be observed at different stages of the life cycle of a product: during the forming processes (e.g. hot forming) or during service (e.g. shock absorption). The prediction of the mechanical behavior is thus useful in engineering contexts. In addition, the macroscopic behavior is driven by phenomena occurring at the scale of the mesostructure of the constitutive phases, under active scientific investigation:
• The motion and the interaction of the interfaces between phases;
• Topological changes like pore closure or phase fragmentation.
With the objective of describing and understanding these physical phenomena, simulation and experiment are complementary and intimately interlaced tools. As the bedrock of most scientific inquiries, the experimental approach is a decisive tool to identify dominant processes and choose a conceptual description of a phenomenon. In the case of finite strain of multi-materials, the deformation mechanisms of the phases can be temporally tracked in three dimensions by techniques such as X-ray tomography.
However, the observation of physical systems implies a high level of complexity and heavy limitations on the control of the experimental configuration. The careful design of experimental setups can partially decouple phenomena and isolate the effects of distinct parameters. Thus, a judicious choice of materials can help to decorrelate the effects of morphology or rheology in the deformation mechanisms of multi-materials.
Built and developed in parallel to the experimental route, numerical modeling is a valuable tool to arbitrarily and independently study distinct mechanisms, allowing a fine control on parameter and configuration repeatability. Nevertheless, phenomena involving numerous and massively interacting interfaces are still challenging for simulation tools.
This PhD, starting from a concrete material science example, is thus focused on the development of a modeling method tailored to the description of finite strains in metallic multi-materials. Taking a step aside from more mainstream strategies, foremost among which the FEM, the proposed model is phenomenological: continuous media are numerically discretized using sets of interacting particles. The re-arrangements of these particles are expected to mimic typical traits of the deformation of the continuum. The developed framework, based on the DEM, is trusted to allow flexible handling of interface interactions and topological events. Its application to the simulation of continuous media is less straightforward, thus concentrating the development efforts.
Although the DEM is now a well-established tool for the simulation of elastic and brittle behaviors of continuous media, the design of the proposed model was exploratory. To our knowledge, no prior DEM algorithm was applicable to inelastic strain in incompressible material. The unconventional and phenomenological characteristics of the approach triggered a specific emphasis on the delimitation of the modeling scope.
Outline
The manuscript is organized in five main parts, each of which is opened by an illustrated page of selected highlights:
• Part I presents an overview of the general context of the PhD, both from an experimental and a modeling point of view.
The chosen reference experiment is described along with a brief review of the material properties and the experimental techniques. The developed simulation method being rather unconventional, the limits of the modeling approach are discussed. The phenomena of interest and their conceptual description are presented.
• Part II reviews some potential numerical modeling strategies.
A comparative reading grid of numerical methods is proposed to better situate the developed approach. Selected methods are compared, from the algorithmic and conceptual point of view, with a focus on Lagrangian kinematics.
• Part III focuses on the research question.
Based on the idealization of the studied phenomena (Part I) and the potential resolution strategies (Part II), the principle of the developed methodology is presented, along with the generic framework of the discrete element method (DEM) and the chosen numerical tools.
• Part IV details the effective development of the method for the simulation of the compression of dense multi-materials with viscoplastic behavior.
The calibration and setup of the model are first presented for a single material. Bi-phased test cases are compared to finite element method (FEM) simulations and the mesostructure of a full 3D sample is modeled.
• Part V extends the method to compressive and tensile loading on "porous" materials with low strain rate sensitivity. The self-contact events, i.e. when the pores close, are taken into account.
The behavior of a dense sample is first investigated. The self-contact detection algorithm is then tuned and tested on a simple geometry. Complex mesostructures of closed- and open-cell materials are used to illustrate potential uses of the model.
Along with the highlights proposed at the beginning of each part, the reader may get a picture of the global approach with the three series of "PhD Objective":
• Section 2.5 presents the observed physical phenomena of interest, triggering the investigation.
• Section 3.3 lists requirements for a potential numerical tool to study such phenomena.
• Chapter 7 states the research question, in close link with the chosen method.
The application of the model to complex mesostructures, obtained by X-ray tomography, is discussed:
• Chapter 13 for the compression of a dense composite with spheroidal inclusions.
• Section 16.1 for the mechanical closure and re-opening of casting pores.
• Section 16.2 for the compression of a foam with low relative density.
The proposed appendices include:
• Appendix A is a first approach to the estimation of the local stress field.
• Appendix B briefly discusses some practical implementation issues and provides the key source codes for the DEM simulations.
• Appendix C provides an example of the FEM scripts used as numerical reference.
• Appendix D is the manuscript of an article, presenting the main results of Part IV.
The article was submitted to the International Journal of Mechanical Sciences; minor revisions were requested and we are currently awaiting the final decision on the amended version.
Part I
Mechanical and Numerical Context
Part I aims to illustrate the general context of the PhD, and introduce the focus of interest. The part is divided into two chapters:
• Chapter 2 presents the experimental background, progressively focusing on a designed model material and the physical phenomena of interest. The numerous and complex material science issues are not thoroughly detailed.
• Chapter 3 puts specific emphasis on the choices intrinsic to the design of a model. Some guidelines regarding the conceptual idealization of the studied phenomena are proposed with a "scope statement" for a model.
Highlights -Part I Mechanical and Numerical Context
• The physical phenomena of interest are mechanisms driving finite inelastic strain in architectured metallic materials, at the scale of the constitutive phases.
The effects of morphology and rheology can be partially decorrelated by judicious choice of the tested materials. Experimental observations and predictive simulations are seen as complementary tools, built in close collaboration.
• A metallic composite was previously designed to focus on rheological effects, with an interesting dependency of the rheological contrast on the temperature and the strain rate.
The composite (spheroidal amorphous Zr57Cu20Al10Ni8Ti5 inclusions in a crystalline copper matrix) is elaborated by co-extrusion of powders. In situ hot compression tests under X-ray tomography, with co-deformation of the phases, are used as the experimental reference.
• A numerical tool to study the deformation mechanisms is sought for.
The handling of numerous interface interactions and topological events (e.g. pore closure, neck creation and phase decohesion and fragmentation) is necessary.
• The chosen modeling strategy is phenomenological. In addition, its elementary variables are mathematically chaotic.
Sensible metrics of interest must be defined, along with a delimited credible modeling scope. The available computing power, a quantitative limiting factor on the accessible metrics, thus influences the qualitative modeling choices.
Experimental Background
This chapter presents some experimental context to the mainly numerical work of this PhD. Starting from general considerations regarding composites, the chapter progressively focuses on a specific experimental setup. This brief overview will not dive into the complex details of the underlying material science issues. It is divided into five sections:
• Section 2.1 introduces general concepts about metallic composites.
• Section 2.2 concerns the specific case of amorphous/crystalline composites, along with some general consideration regarding amorphous alloys.
• Section 2.3 describes the design process of a model material aimed at the study of rheological effects.
• Section 2.4 introduces the uses of X-ray tomography.
• Section 2.5 sums up the physical phenomena of interest, to be studied experimentally and numerically.
Metallic Composites
Among existing materials, metallic alloys provide interesting compromises for mechanical structural products. To fulfill new requirements, material development relies on the design of altogether new monolithic materials, or the association of distinct phases in composites. The description of the mechanical behavior of metallic composites involves many parameters. To better understand finite transformations in such materials, these parameters may be separately studied.
In an industrial context, engineers rely on material science to design products and processes, diagnose symptoms and justify and optimize their choices [32, p.6]. Predictions and anticipations of the behavior can apply to any step of the life cycle, from elaboration to service and finally end of life. Materials must be designed and chosen to fulfill mechanical, economic or regulatory scope statements. Such scope statements typically involve compromises between contradictory requirements.
In order to visually compare potential choices, maps of the properties of the materials can be drawn. For example on Figure 2.1 classes of materials are compared in the space (density, elastic modulus). In this example, to compare the performance of very diverse materials, isocontours of a comparison criterion can illustrate the compromise between stiffness and weight requirements. "Holes" are defined as regions of the map that are not covered by existing materials.
A first option to further explore property charts, and find better compromises, is the tuning of existing monolithic materials and the development of new ones. The composition, elaboration process and microstructure of materials have been heavily studied and improved in history. An example of development of a new monolithic material in the last decades is the elaboration of amorphous metallic alloys (see Section 2.2.1).
In parallel to the developments of new monolithic materials, composite materialsmultiphase materials -can be designed. The association of distinct phases allow complementary or even contradictory properties in a single material. In composites, the overall behavior does not necessarily follow a rule of mixture but can follow a trajectory in material space extending current possibilities [12, p.6]. In addition, composites can also fulfill multi-functional requirements.
The distinction, based on length scale, between composite materials and new monolithic materials can be somewhat blurry. The development of nanoscale hybrid materials involves atomic or molecular effects and potentially generates properties that are qualitatively unseen in the initial materials. We focus here on mesoscale composites, where phases are associated at the scale of the micrometer. Among the numerous potential classes of materials, we focus exclusively on composites associating metallic phases (see Figure 2.2).
The challenge of composite design is the tuning -often constrained by practical issuesfor each phase of numerous design variables: • Morphology;
• Topology;
• Volume fraction;
• Relative and absolute size;
• Relative and absolute mechanical behavior.
In composite design, the potentially coupled effects of these parameters on the mechanical properties must be understood [12, p.5]. The prediction of the behavior of the composite is necessary for all the steps of the life cycle of the products, first of which service requirements of the final product and elaboration process from raw composites. Numerous parameters and phenomena drive the deformation mechanisms of metallic composites. As a step toward an independent study of distinct phenomena, an appropriate selection of model materials may allow a partial decorrelation. For example, the comparison of composites with identical compositions but distinct mesostructures allows an emphasis on morphological effect. On Figure 2.2a, the size of the spherical inclusions is controllable at fixed composition. In addition to the size, distinct morphologies of the mesostructure can be compared for an identical composition. For example, an equiaxial morphology (Figure 2.2b) and a lamellar morphology (Figure 2.2c).
Rheological effects can be more independently studied using a composite whose phases have a differential sensitivity to the testing configuration. The example of crystalline/amorphous metallic composites is typical (Figure 2.2d): the flow stress of the amorphous phase is strongly dependent on both the strain rate and the temperature, while the crystalline phase is less influenced.
Our work is oriented toward the understanding of the role of the rheological contrast, i.e. the ratio of flow stress between the phases. We focus on the finite transformation context of hot forming, coping with potentially large strains, rotations and displacements. At the laboratory level, the global approach is to couple experimental and numerical approaches. A model material, with tunable rheological contrast, was designed associating amorphous and crystalline metallic phases.
Amorphous/Crystalline Composites
Amorphous Metallic Alloys
Amorphous metallic alloys are an example of the development of a new class of monolithic materials. In amorphous metals, the atoms are not arranged on a regular periodic lattice, as is the case in classical crystallized alloys. This atomic structure leads to radically distinct macroscopic properties, typically a high yield strength and a large elastic domain at room temperature, and a large homogeneous plastic domain at high temperature.
At the atomic scale, amorphous alloys do not display long-range order, unlike classical crystalline alloys, where atoms are regularly arranged on a lattice. Although local cluster arrangements can be found (Figure 2.3b), no long-range periodic pattern can be identified.
A historical route to obtain such an atomic structure in a metallic alloy - elaboration processes will not be described here - is fast quenching from the liquid state. For cooling rates above a critical value (Figure 2.3a), the crystallization kinetics are too slow to be completed and the disordered liquid is "frozen": the large time scale of atomic mobility gives rise to a solid-like state. The amorphous solid state is only metastable. Two canonical temperatures are used in the literature to roughly quantify the major transitions in the behavior:
• The crystallization temperature (T x ), above which the amorphous structure crystallizes;
• The glass-transition temperature (T g ), above which notable macroscopic viscous flow can be observed.
These transitions are dynamic processes, thermally activated, and the definition of the threshold temperatures is thus not univocal. The temperature window T_x - T_g, referred to as the supercooled liquid region [226, p.20], is an indicator of the ability of the amorphous alloy to be hot formed. The higher the temperature, the lower the flow stress and the shorter the crystallization time: a compromise must be found for forming applications. At the macroscopic scale, the transition from the amorphous to the crystalline state - even only partial - leads to a drastic change in mechanical properties. The sought-after properties, e.g. the plastic forming ability at high temperature and the large elastic region at room temperature, are lost. An estimate of the crystallization time may be obtained by mechanical testing (Figure 2.4a). The kinetics and thermal activation of crystallization - atomic motion being a thermally activated phenomenon - do not lead to a unique crystallization temperature (T_x) but to a set of crystallizing conditions. For the amorphous alloy used in this work, the crystallization time roughly drops from 10⁴ to 10³ s with a temperature increase of 40 °C (Figure 2.4b). The absence of atomic long-range ordering implies that typical lattice faults - for example the dislocations in crystals, whose motion is one of the mechanisms of inelastic strain (Figure 2.5a) - do not exist in amorphous alloys. Deformation mechanisms are thus drastically distinct for an identical chemical composition, justifying the sudden behavior shift in Figure 2.4a. Under shear stress, the atoms collectively reorganize within shear transformation zones (STZs). STZs are ephemeral transition events between equilibrium states (Figure 2.5b).
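An order-of-magnitude reading of the crystallization times quoted above (10⁴ s dropping to 10³ s for a 40 °C increase) can be given under the common assumption of an Arrhenius-type dependence of the crystallization time; the temperatures used below (653 and 693 K) are only an illustrative range, roughly matching the forming window discussed later:

t_x(T) \propto \exp\!\left(\frac{E_a}{R\,T}\right) \quad\Rightarrow\quad E_a \approx \frac{R \ln(t_1/t_2)}{1/T_1 - 1/T_2} = \frac{8.31 \times \ln 10}{1/653 - 1/693}\ \mathrm{J\,mol^{-1}} \approx 2 \times 10^{2}\ \mathrm{kJ\,mol^{-1}},

a value whose only purpose here is to stress how steeply the available forming time shrinks when the temperature rises.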
Whereas clearly distinct and complementary mechanisms describe the deformation regimes of crystalline metals - see for example copper in Figure 2.6a - similar STZ mechanisms lead to the different flow regimes of amorphous alloys (Figure 2.6b).
At room temperature, amorphous alloys typically display high yield stress and a large elastic domain, but often little plasticity and a brittle behavior [START_REF] Douglas | Designing metallic glass matrix composites with high toughness and tensile ductility[END_REF]. In this regime -elastic and inhomogeneous deformation regions in Figure 2.6b -no lattice faults accommodate stress via inelastic strain and the atomic mobility is low.
At higher temperature, closer to or above the so-called glass transition (T_g), the atoms are mobile enough for a macroscopic homogeneous strain of the material. The glass transition temperature is not intrinsic to the chemical composition and is influenced by the thermomechanical history of the material [226, p.19]. In addition, as for the crystallization temperature, the definition of the glass transition temperature is not univocal1 and depends on the studied time scales.
[Figure 2.5 - (b) Amorphous atomic structure: mechanism of shear transformation zones (STZs), first proposed by Argon [11]; collective rearrangement (a dynamic event, not a structural defect) of local clusters of dozens of atoms, from one low-energy configuration to another. Illustration from [204, p.4068].]
In the homogeneous deformation region (Figure 2.6b), the behavior is typically viscoplastic with a strong sensitivity of the flow stress to both temperature and strain rate (Figure 2.7). At high temperature and moderate stresses, the behavior tends toward a Newtonian flow, where the flow stress is proportional to the strain rate. The specific atomic structure of the material and the lack of grains and crystalline order allow an almost arbitrarily large plastic domain: no defects or discontinuities limit the deformation.
The viscoplastic behavior of amorphous alloys is classically approximated using the Norton law, whose tensorial form is [53, p.4]:

\dot{\boldsymbol{\varepsilon}} = \frac{3}{2} \left( \frac{\sigma_{eq}}{K} \right)^{1/M} \frac{\operatorname{dev}(\boldsymbol{\sigma})}{\sigma_{eq}} \qquad (2.1)
with σ the flow stress tensor, ε̇ the strain rate tensor, K the stress level and M the strain rate sensitivity. The equivalent Mises stress σ_eq is a scalar defined from the second invariant of the stress tensor σ:
\sigma_{eq} = \sqrt{\frac{3}{2}\, \operatorname{dev}(\boldsymbol{\sigma}) : \operatorname{dev}(\boldsymbol{\sigma})} \qquad (2.2)
In the two previous equations, dev(σ) denotes the deviatoric part of the stress tensor:
\operatorname{dev}(\boldsymbol{\sigma}) = \boldsymbol{\sigma} - \frac{1}{3} \operatorname{trace}(\boldsymbol{\sigma})\, \mathbf{I} \qquad (2.3)
Where I is the identity tensor. In the uniaxial case, the tensorial Norton law (Equation 2.1) simplifies to the following scalar relation [126, p.106]:

\sigma = K\, |\dot{\varepsilon}|^{M} \operatorname{sign}(\dot{\varepsilon}) \qquad (2.4)

Taking the example of Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ to illustrate typical high-temperature forming of amorphous alloys, the flow stress varies by a factor of 10 when the temperature rises from 380 to 410 °C (Figure 2.7b). As a comparison, in the same temperature range, the flow stress of crystalline copper only varies by a few percent. In the studied temperature and strain rate ranges, the strain rate sensitivity ranges from 0.3 to 1, which is notably higher than the typical values observed for the creep behavior of crystalline alloys (M ≈ 0.2) [START_REF] Kassner | Five-power-law creep in single phase metals and alloys[END_REF]. The behavior is roughly Newtonian2 above 405 °C and below 5·10⁻⁴ s⁻¹.
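To make this temperature and strain rate sensitivity concrete, the short Python sketch below evaluates the scalar Norton law (Equation 2.4) for a few strain rates. The parameter values K and M are purely illustrative placeholders - they are not identified values for Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ - and are only chosen to mimic the qualitative trend described in the text.

import numpy as np

def norton_flow_stress(strain_rate, K, M):
    """Uniaxial Norton law (Equation 2.4): sigma = K * |eps_dot|^M * sign(eps_dot)."""
    return K * np.abs(strain_rate) ** M * np.sign(strain_rate)

# Illustrative, non-identified parameter sets: at higher temperature the stress level K
# drops and the rate sensitivity M rises toward the Newtonian limit M = 1.
params = {
    "lower forming temperature (illustrative)": {"K": 1.0e9, "M": 0.4},   # K in Pa.s^M
    "higher forming temperature (illustrative)": {"K": 2.0e9, "M": 0.8},
}

for label, p in params.items():
    print(label)
    for rate in (2e-4, 1e-3, 4e-3):  # s^-1, order of magnitude of the tested rates
        sigma = norton_flow_stress(rate, p["K"], p["M"])
        print(f"  strain rate {rate:.0e} s^-1 -> flow stress {sigma / 1e6:7.1f} MPa")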
Amorphous/Crystalline Elaboration
The use of amorphous alloys as structural materials is hindered by their brittle behavior at room temperature. Hence the development of metallic composites, associating amorphous and crystalline phases. Elaboration techniques include liquid and solid state processing.
In liquid state, or in situ, processing, little freedom remains on the choice of composition of the phases. Ex situ elaboration, from solid state, allows more flexibility for mechanical and experimental requirements, a major constraint being the crystallization behavior of the amorphous phase.
At room temperature, the typically brittle behavior of amorphous alloys -the shear localization region in the map shown Figure 2.6b -can be a major drawback to their use as structural materials. A work-around in material design is the composite strategy (Figure 2.8): the propagation of cracks in a brittle amorphous alloy may be hindered by an association with a ductile phase (Figure 2.8c), increasing the material toughness [START_REF] Douglas | Designing metallic glass matrix composites with high toughness and tensile ductility[END_REF] (Figure 2.8a). The choice of a crystalline alloy as the ductile phase potentially allows a correct elastic modulus compatibility and chemical compatibility at the interfaces.
In addition to their potential structural use, amorphous/crystalline composites are promising as model materials to study the effect of rheological contrast between the phases of a composite. Indeed, the temperature and strain rate sensitivity of the flow stress of the amorphous alloy makes it possible to selectively tune the flow stress of the phases. A wide range of rheological configurations can thus be tested with a single material. To study equally diverse configurations, numerous crystalline composites would need to be elaborated and compared. The interpretation of the results is delicate when data from distinct composites are compared, as no straightforward method allows the direct control of all the parameters of influence. Using a single composite minimizes the uncertainties. Amorphous/crystalline metallic composites can be elaborated in situ, from a homogeneous liquid state [START_REF] Qiao | In-situ dendrite/metallic glass matrix composites: A review[END_REF], taking advantage of the kinetics of dendrite growth, to selectively grow crystalline dendrites and freeze an amorphous structure (Figure 2.9a). The process can be delicate to control due to the diffusion during solidification. If an excessive migration occurs, the local modification of the chemical composition might hinder the amorphous solidification. Our objective is to design a model material to study the effect of rheological contrast in metallic composites, taking advantage of the tunable rheological contrast in amorphous/crystalline composites. In situ elaborated composites are not well suited:
• Their dendritic mesostructure (Figure 2.9a) is too complex to focus the study on rheological issues.
• Their elaboration process is delicate to control and thus constrains the size of the phases and the overall samples.
• Little control is left on the relative properties of the phases.
• A similar chemical composition in the phases makes X-ray three-dimensional imaging more challenging.
• Typical suitable alloys contain beryllium, whose toxicity is a supplementary experimental constraint.
In contrast, the ex situ elaboration displays interesting features for our purposes. As the phases are associated at solid state by thermomechanical processes, their choice is less strictly constrained. The mechanical and physical properties of the phases can thus be chosen a little more independently, to fulfill experimental requirements.
In the laboratory context of the PhD, a strong background in metallic alloy co-deformation and amorphous alloy elaboration has led to the design of stratified composites since 2008 [START_REF] Ragani | Élaboration par co-déformation de matériaux stratifiés alliage léger/verre métallique[END_REF]. Plates of zirconium-based amorphous alloy and light crystalline alloys (magnesium and aluminum based) were co-pressed at high temperature (Figure 2.10). This simple geometry allowed a first approach to the study of the adhesion of the interfaces and of the effects of temperature and strain rate on the co-deformation regime. The choice of the forming window is driven by compromises, taking into account the crystallization of the amorphous phase at higher temperature and phase fragmentation at lower temperature. Such stratified materials already exhibit numerous key properties, for example crossing stress-strain curves (Figure 2.14 on page 31), but are too anisotropic to study generic mechanisms in composites. In order to find an intermediary complexity between in situ composites and the elementary stratified geometry, the ex situ elaboration process can rely on powder technology. Traditional sintering times and temperature scales are unsuited for amorphous alloys, as they crystallize rapidly at high temperatures, triggering the use of less conventional procedures.
An example of powder thermomechanical processing is spark plasma sintering (SPS). The SPS process is a field-assisted sintering technology, where the sample is heated by Joule effect, allowing shorter sintering times. It has successfully been applied to the elaboration of both massive amorphous alloys [START_REF] Nowak | Effet de la composition et de la technique d'élaboration sur le comportement mécanique des verres metalliques base zirconium[END_REF][START_REF] Nowak | Approach of the spark plasma sintering mechanism in Zr 57 Cu 20 Al 10 Ni 8 Ti 5 metallic glass[END_REF] and amorphous/aluminum composites [START_REF] Perrière | Phases distribution dependent strength in metallic glass-aluminium composites prepared by spark plasma sintering[END_REF][START_REF] Ferré | Élaboration et caractérisation 3D de l'endommagement dans les composites amorphes-cristallins à matrice aluminium[END_REF]. For these materials, investigation efforts were focused on the room temperature mechanical behavior, with the objective of studying the service requirements of structural materials. Figure 2.11 displays typical stress-strain behaviors at various volume fractions of such composites. Intermediary volume fractions provide a range of compromises between the strength and ductility of the two phases. Qualitatively, the volume fraction does influence the fracture surfaces [73, p.96]. At 10 %vol of inclusions (Figure 2.12), the overall failure is driven by the coalescence of multiple matrix/inclusion decohesion events (Figure 2.12a). Locally (Figure 2.12b), the dimpled surface of the crystalline matrix accounts for large plastic strain and the inclusions seem to display a relatively strong cohesion with the matrix.
Design of a Model Material
An amorphous/crystalline metallic composite is designed as a model material to study the effect of rheological contrast in high temperature forming of metallic composites. Experimental and mechanical concerns drive the selection of the phases and the elaboration process. Within a defined window of strain rate and temperature, the phases of the composite can be co-deformed with tuned relative rheology.
The global objective for the designed material is to exhibit a temperature and strain rate window in which co-deformation of the phases can be observed. The research environment of the laboratory -with strong emphasis on powder technology and co-deformation of materials -leads to the choice of high temperature co-extrusion of powders as elaboration process. The generic chosen geometry is a random dispersion of spheroidal inclusions of amorphous alloy in a crystalline matrix. This simple geometry aims to focus the study on the effects of rheological contrast, without excluding topological events. Two master internships [START_REF] Marciliac | Étude de la déformation à chaud d'un composite métal amorphe / métal cristallin[END_REF][START_REF] Gibaud | Déformation à chaud de composites métal cristallin / métal amorphe[END_REF] focused on the elaboration of such model materials.
The choice of the amorphous phase was driven by practical considerations: the Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ amorphous alloy is readily available as atomized powder (Figure 2.13a) and its bulk mechanical properties have been extensively studied: its supercooled liquid region is relatively large3, allowing a comfortable enough high-temperature forming window before crystallization. In addition, it does not contain toxic components such as beryllium. The crystalline phase was chosen with respect to the amorphous phase.
Aluminum alloys were the first candidates considered as the crystalline phase. Their moderate flow stress eases the extrusion process and their low X-ray absorption guarantees a good phase contrast with zirconium-based alloys (see also Section 2.4). However, no satisfactory co-deformation conditions could be found with the tested aluminum alloys. The high temperature, or the low strain rate, needed to reach matching flow stresses between the phases (Figure 2.14) leads to the crystallization of the amorphous phase before significant strain can be applied. Although aluminum is probably a good candidate for structural composites - due to its low density and comparable elastic modulus with zirconium-based amorphous alloys - it is not well suited to build a model material for high-temperature forming. Further developments led to the choice of pure copper as the crystalline phase. Indeed, from the flow stress of the phases (Figure 2.14), co-deformation configurations may be reached for reasonable temperature and strain rate, for example around 400 °C and 2.5·10⁻⁴ s⁻¹. Postmortem observations showed significant inelastic co-deformation of the composites (Figure 2.15), in the window 390-405 °C and 2·10⁻⁴-2·10⁻³ s⁻¹ [START_REF] Gibaud | Déformation à chaud de composites métal cristallin / métal amorphe[END_REF]. For lower temperature, or higher strain rate, the flow stress of the amorphous phase is too high and its deformation is thus negligible. On the contrary, for higher temperature or lower strain rate, the amorphous phase is soft but crystallizes before undergoing significant strain.
In the identified co-deformation configurations, the ratio of single-phase flow stresses is in the range 0.25-4. The relative rheology of the phases of the composite can be tuned within this window, allowing the study of distinct contrasts with a unique composite.
The elaboration procedure of the composite - starting from atomized Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ amorphous powder and electrolytic copper powder (Figure 2.13) - involves the mixing of the powders followed by their hot co-extrusion. Extrusion is trusted to be a reliable process to limit the presence of residual porosity [73, p.36]. In the process, the total diameter is reduced from 7 to 3. The procedure allows the elaboration of composites with up to 70 % volume fraction of amorphous phase. Composites with higher volume fractions were not cohesive enough to withstand the compression tests. In addition, a high volume fraction of amorphous phase increases the required extrusion force, sometimes reaching the maximal capacity (5·10³ daN) of the setup [85, p.12].
X-Ray Tomography
Two-dimensional and destructive experimental techniques cannot track the temporal evolution of the investigated phenomena. The time and size scale and the studied phenomena justify the use of X-ray tomography for in situ experimental study.
From the experimental point of view, both two-dimensional and destructive techniques are unsuitable to track the temporal evolution of the morphology of a composite. Among existing three-dimensional and nondestructive imaging techniques, X-ray tomography matches well (Figure 2.17a) the length scales of our designed model material: millimetric for a representative volume, micrometric for the phase morphology (Section 2.3). The global principle of tomography is to reconstruct a 3D representation of an object from various 2D observations taken from distinct orientations (Figure 2.17b). A classical X-ray imaging technique is based on attenuation contrast4: the intensity of an X-ray beam is measured after crossing the sample. The relative attenuation of the beam in the distinct phases can induce a contrast on the 2D images, the projections. A series of projections is taken from distinct orientations to build a scan. Algorithmic procedures [START_REF] Avinash | Principles of computerized tomographic imaging[END_REF] make it possible to infer the 3D field of absorption from a scan: the reconstruction. This 3D image can be filtered and segmented to obtain the spatial distribution of the distinct phases of multimaterials.
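As a purely illustrative sketch of the projection/reconstruction principle - not the actual reconstruction chain used at the beamline - the following Python lines simulate the projections of a synthetic 2D phantom and reconstruct it by filtered back-projection with scikit-image; the phantom geometry and the number of projection angles are arbitrary choices.

import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2D phantom: a dense circular "inclusion" inside a lighter circular "matrix".
size = 128
yy, xx = np.mgrid[0:size, 0:size]
r2 = (xx - size // 2) ** 2 + (yy - size // 2) ** 2
phantom = np.zeros((size, size))
phantom[r2 < 50 ** 2] = 1.0   # matrix disk
phantom[r2 < 15 ** 2] = 3.0   # more absorbing inclusion

# Simulate a scan: one projection (line integrals of the absorption) per angle.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(phantom, theta=angles)

# Reconstruct the 2D absorption field by filtered back-projection.
reconstruction = iradon(sinogram, theta=angles)
print(f"mean absolute reconstruction error: {np.abs(reconstruction - phantom).mean():.3f}")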
The non-destructive character of the method allows measurements on the same sample, before and after deformation. Several steps can thus be sequentially studied with interrupted tests [15]. If the scans can be made rapidly enough, a phenomenon can be observed without interruption: in situ [38,[START_REF] Lhuissier | High temperature deformation and associated 3D characterisation of damage in magnesium alloys[END_REF] experiments.
The choice of copper as constituent of the matrix leads to restrictions on the size of observable samples by X-ray tomography and to a more delicate solid-state elaboration of the model material, with issues regarding cohesiveness. Both copper and zirconium, the respective dominant components of the matrix and the inclusions, have a high level of absorption of X-rays (Figure 2.18), hence a high energy beam must go through the sample to image it. Postmortem measurements can be made on regular laboratory tomographs, providing a limited flux, but in situ imaging requires shorter scanning times. The in situ measurements were thus performed [35] at the European synchrotron radiation facility (ESRF). Using the chosen configuration (Table 2.1), for samples of diameter 0.5 mm, a full scan5 is performed in 7 s.
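The absorption constraint can be made explicit with the Beer-Lambert law; the numbers below are rough, generic orders of magnitude for copper and are not taken from Table 2.1 or Figure 2.18:

\frac{I}{I_{0}} = \exp\!\left(-\frac{\mu}{\rho}\,\rho\,t\right).

With a mass attenuation coefficient μ/ρ of a few cm²·g⁻¹ (typical of copper at a few tens of keV), a density ρ ≈ 9 g·cm⁻³ and a path length t = 0.05 cm, the transmitted fraction is only of the order of a few tens of percent, and it collapses rapidly for thicker samples or lower energies; hence the sub-millimetric samples and the need for a high photon flux.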
In counterpart, the set-up for in situ experiments at the ESRF imposes heavier security procedures, which are time consuming. With the current set-up, at least several minutes elapse between the introduction of the sample in the furnace and the actual beginning of the mechanical test, typically from 5 to 10 min. Such a constraint proves critical for our purpose.
[Figure 2.18: mass attenuation coefficient (cm²·g⁻¹) as a function of beam energy (MeV) for the elements Al, Cu and Zr.]
PhD Objective: Observed Physical Phenomena
The study focuses on the rheological parameter effects on quasistatic co-deformation of viscoplastic composites. The length scale of interest is the characteristic length of the phases. The phases are idealized as continuous media, following Norton law, but allowing potential topological changes.
The chosen model material is a composite with a pure crystalline copper matrix containing 15 %vol of spheroidal amorphous Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ inclusions (see Section 2.3). The co-deformation at high temperature is experimentally studied in situ by X-ray tomography. Sub-millimetric samples are uniaxially compressed at 380-410 °C and 2·10⁻⁴-4·10⁻³ s⁻¹, typically up to strains above 0.3.
The main phenomenon of interest in the frame of our study is the morphological evolution - at the micrometric mesoscale - of the phases in the composite (Figure 2.19).
Such morphological changes can stem from the interaction of the interfaces between the phases, from the decohesion of the phases or from pore closure and opening. Phases are assumed to be continuous media, governed by a perfect viscoplastic constitutive behavior - the Norton law - averaging in space all phenomena occurring at smaller length scales. In a finite transformation context, targeted strains are in the range 0.1-1. Only inelastic strains are considered, and the deformations are assumed to be isochoric in both phases. Dynamic effects are neglected: compressions in the range 10⁻⁴-10⁻² s⁻¹ are considered quasistatic.
The crystallization is a major limiting factor of the experiments. At 400 °C, the amorphous phases start to crystallize in a little more than 10 min (Figure 2.4b). This crystallization induces a sudden increase of the flow stress. In the range 2.5·10⁻⁴-5·10⁻⁴ s⁻¹, strains of 0.3-0.6 are thus upper bounds.
The crystallization time is cumulative over the whole life cycle of the amorphous alloy, including elaboration and installation in the experimental setup. The hot co-extrusion of the powders at 380 °C lasts approximately 10 min (Section 2.3). Launching a test in the in situ apparatus at the ESRF lasts from 5 to 10 min, during which the sample is heated. Overall, it has been observed that the effective compression time available is a little above 500 s, thus limiting the macroscopic strain before the behavior change. In the tested configurations, the deformation of the inclusions was very limited after a macroscopic prescribed strain of 0.15-0.2.
Chapter 3
Simulation Background
"Le chimiste invente des atomes, et puis les décompose en atomes plus petits qui gravitent comme des planètes autour de quelque soleil; belle machine pour penser plus avant; belle construction; idée. Mais s'il croit que c'est une chose, que c'est vrai, que l'objet est ainsi, il n'y a plus de penseur."
Alain [6, §138] In Chapter 2, the experimental background was presented, along with a specific experimental setup and the physical phenomena of interest. The global objective of this PhD is to propose a suitable numerical model and a phenomenological strategy was chosen.
The fact that the proposed model is only loosely physically grounded triggers a specific interest in the process of model design in general. This chapter is thus an attempt to define the modelization needs and some general safeguards regarding the potential scope of use. It is divided into three sections:
• Section 3.1 attempts to locate our approach in a broader context, describing modeling issues in general and more specifically for computerized and discrete methods.
• Section 3.2 deals with some concepts to describe the observed phenomena of interest.
• Section 3.3 states the practical modelization objectives and some evaluation metrics.
Limits and Contributions of Modeling Approaches
Experimental and modeling tools can help to understand and predict physical phenomena.
In both cases, the unknowns to be dealt with imply numerous choices that will deeply shape our description of the phenomena. In the objective of proposing a predictive model, the choice of the modeling strategy is also constrained by the foreseen resolution method, which is in turn influenced by the available computing power.
Two complementary and intimately intertwined approaches can be adopted to study a physical phenomenon: the observation through experimentation and the prediction through modeling. In most cases, models and experiments are closely built upon one another and have little independence. At least since Johannes Kepler [212, p.333], a physical phenomenon is considered rationally understood when it is described by a sequence of causal processes, with somewhat coherent logical rules and an acceptable agreement with observations made.
In this context, modeling is an approach where the key features of a phenomenon are idealized and represented either with a conceptual construction or with an analogous system. Observation, far from being a purely passive and objective act, is necessarily an active interpretation of signs - a struggle to use what we know to describe what we see - conditioned by a priori concepts: we are literally blind to phenomena too alien to our preexisting thought structures. This assessment is common to linguistics1, computing science2, poetry3, mathematics4 and solid mechanics.
Experiments and simulations can be designed to be efficient tools to study a phenomenon, but models and observations must not be identified to the reality. Many decisive choices of what is most relevant, in a given objective, restrain our understanding to a necessarily partial and rough sketch. To illustrate the potential and limit of a model, a phone book can be considered as a typical example. Indeed, a phone book can be used to approximate the number of residents of an area or even interact with them (Figure 3.1a). However, the printed letters and digits cannot be identified to these residents. In a discrete element method (DEM) simulation of a granular material, the description of individual physical particles might be even cruder. In place of a phone book, a topographic map of the same area can provide common data, the name of the villages for example. This complementary model will be adapted to distinct modelization objectives.
Limits in Model Design
Model design must cope with inevitable unknowns: a model is intrinsically limited to be a partial and rough sketch. In addition, the ill-posed essence - their chaotic nature - of many physical phenomena sets a bound on what can possibly be modeled, regardless of the simulation strategy used.
The prediction of physical phenomena, based on their prior understanding or aimed toward it, requires the choice and design of a model, its application and the interpretation of the obtained results. Regardless of the computing power available [22, p.17], all decisions in these tasks must accommodate numerous unknowns. Even elementary metrics such as the strain do not seem to be intrinsic, being described by multiple and sometimes contradictory theories [192, p.5].
In this rather blurry general framework, a dominant criterion in modern science [14] is to evaluate and quantify the reliability of the modeling tools and to assess measurable effects on reality as well as their agreement with our understanding. The delimitation of scopes of study allows the design of credible [START_REF] Schlesinger | Terminology for model credibility[END_REF] and efficient models and experiments. The degree of confidence granted to a model must be evaluated for the foreseen application5 and allows its practical use, despite the uncertainties. Specific methodologies have been developed for numerical models, due to their complexity and their power, in order to assess their credibility [START_REF] Schlesinger | Terminology for model credibility[END_REF] and to better delimit scopes of validity [START_REF] William | Verification, validation, and predictive capability in computational engineering and physics[END_REF].
When designing a model, the aim is thus to reproduce a physical phenomenon at a given scale, at least in a descriptive way and if possible in a predictive way. A model will be called phenomenological if it does not describe, explain or take into account the phenomena at the scales immediately below the studied scale. More strongly based models may be rough, but display more coherence with subscale phenomena.
Before any resolution is attempted, the nature of the physical phenomena must be evaluated. Chaotic and non-deterministic phenomena cannot be conceptually modeled in a well-posed fashion. A well-posed problem possesses a unique solution, which is stable in the sense that infinitesimal changes to the initial conditions do not generate discontinuity jumps in the solution [START_REF] Hadamard | Sur les problèmes aux dérivées partielles et leur signification physique[END_REF].
Ill-posed problems are frequent even in simple systems [171, p.9]. In classical mechanics, such an elementary system as collisions of balls moving on a straight line (Figure 3.2) proves to be ill-posed by essence, i.e. regardless of the modeling approach [START_REF] Gale | An indeterminate problem in classical mechanics[END_REF]. Indeed, the final velocities of the balls discontinuously depend on the initial distances.
Another canonical example of chaotic behavior is the three-body problem, the computation of the dynamics of a system of three punctual masses driven by gravitational force6, which is ill-posed even in the hypothesis of absence of collisions [START_REF] Diacu | The solution of the n-body problem[END_REF]. As an interesting side note, preliminary hints to bound the error on the dynamics of a discrete system representing a continuum have been published [154, p.1533-1536], but the assumptions are far too restrictive to be used for our purpose.
Although it is illusory to attempt to compute exactly the chaotic variables of a phenomenon, this does not imply that it cannot be studied and modeled. From a mathematical point of view, the problem needs to be regularized [START_REF] Petrov | Well-posed, ill-posed, and intermediate problems with applications[END_REF], i.e. formulated in a way such that metrics of interest - for example statistical data on the chaotic variables - can be expressed in a well-posed fashion. From the engineer's point of view, the progressive transition from ill- to well-posed problems is influenced by the smoothness, the stability and the uncertainties [22]: such criteria can be used to evaluate the difficulty of the modeling task.
Resolution Strategies
The existence of a model does not imply that it can practically be solved. The search for an analytical resolution is often illusory and work-around strategies have been heavily used, first of which analogous models, relying on the assumed similarity of two physical phenomena. Phenomenological models focus on key features of the studied phenomenon, without regards for incoherences with phenomena occurring at lower scales. Numerical discretization allows the subdivision of a global unsolvable problem, approximating the solution with local contributions.
A deep understanding of a phenomenon and the design of adequate models does not imply the ability to predict it: the designed model must also be solved.
Historically, the description of many mechanical problems by partial differential equations (PDEs) and dynamic laws did not immediately allow the prediction of such phenomena by analytical resolution. Indeed, not only can the effort required for the resolution be prohibitive, but every modification of the studied system, however slight, may require an altogether new strategy7. In fact, even without regard to mathematically unsolvable cases, analytical solutions are often limited to canonical study cases [231, p.312], the general case of arbitrarily complex geometries requiring crude global approximations.
The rise of numerical and discretized approaches opened the route to the efficient resolution - which does not necessarily lead to a deeper understanding8 - of large complex systems. They rely on the subdivision of a system into many easily solvable elementary problems [46, p.2]. This approach is described in Section 3.1.3.
A widely used work-around to solve complex problems is to rely on physical analogies between phenomena, potentially of distinct nature [18, vol.1, p.300]: the behavior of an easy-to-study (e.g. easy-to-measure) phenomenon is used as a model of another one.
To some extent, graphical calculus is an analogous resolution strategy: measurements on well-chosen scaled drawings are the model results. Graphical calculus is based on the use of traditional drawing instruments and used to be a widespread tool for industrial applications, providing means to compute static9 or dynamic efforts [START_REF] Clerk | On reciprocal figures, frames, and diagrams of forces[END_REF][START_REF] Gal | Mécanique: statique graphique, résistance des matériaux[END_REF] in mechanical systems and to study complex three-dimensional problems10. These methods also provide generic tools to derive or integrate functions, by geometrically estimating surfaces, weighing cut-out surfaces or using measuring devices such as the planimeter [19, p.374].

6 Poincaré proved the impossibility of solving the problem by first integrals. A purely mathematical solution up to n bodies, based on convergent series, exists but is without any practical interest to study the phenomenon [57, p.68-70]. The mathematical description of chaos changes nothing to its physical nature.
7 "without proper regard for the individuality of the problem the task of computation will become hopeless" [46, p.12].
8 The human intellectual capacities being limited, an adequate reduction of the number of manipulated variables is always necessary for the practical use of a model [238, p.251].
9 The most canonical example arguably being the Cremona diagram for statics of trusses [81, p.37]. This method was widely used to solve lattice-like problems, discussed in Section 5.3.1, before the rise of computerized methods.
Leaving drawing methods aside for less abstract analogous models, analogy was historically used by Antoni Gaudí to design arches with evenly distributed loads, with reversed scale models made of chain assemblies and weighted sand bags [175, p.46]. A more generic example is the once widespread use of electric circuits to solve ordinary differential equations, tuning resistors, capacitors and inductors to adapt to the problem parameters [19, p.374]. A similar electric analogy was even used to model the partial differential equations of fluid dynamics in two dimensions [START_REF] Malavard | The use of rheo-electrical analogies in certain aerodynamical problems[END_REF]. For similar problems but with a distinct approach, Atanasoff designed a carving tool to iteratively solve Laplace's equation with a cube of wax [39, p.876]. An example from more fundamental physics is the study of the atomic structure of liquids by scale models11, first with assemblies of balls and rods [29] and then by direct measurements on heaps of ball-bearings [30] (Figure 3.3). In the absence of powerful numerical solving capacities, those models were based on experimental techniques applied to a physical setup. An analogous model can also be purely conceptual: a conceptual model designed to simulate another one, where the dominant phenomena are expected to be analogous [START_REF] Hrennikoff | Solution of problems of elasticity by the framework method[END_REF][START_REF] Fermi | Studies of non linear problems[END_REF][START_REF] Pasta | Heuristic numerical work in some problems of hydrodynamics[END_REF].
Numerical Resolution
Numerical approximated and discretized approaches rely on the subdivision of a complex system -often in time and space -into many easily solvable elementary problems. From a conceptual point of view, key features of numerical methods appeared very early in history. However, the quantitative computing power available deeply shapes the qualitative design of the model.
Halley's Comet
The idea that complex models can be solved by numerical discretization is not recent in history. Many typical characteristics of modern numerical methods appeared long before massive computing power was available.
Limiting ourselves to models somewhat similar to the method used in this PhD, a pioneering attempt to discretize an otherwise unsolvable physical model can be traced back to 1757, when Clairaut, de Lalande and Lepaute attempted to study the dynamics of the system {Sun, Jupiter, Saturn, Halley's Comet} by numerical approximation [92, p.20]. A remarkable characteristic is that key features of modern numerical methods were already displayed:
• They chose a dominant physical phenomenon (gravitational attraction between massive points) and simplification assumptions (the Sun is fixed, Halley's Comet actions on other bodies are negligible).
• They discretized time, updating the state of the system every 1 or 2° on the orbits of Jupiter or Saturn.
• They designed a calibration procedure for their approximated method, based on experimental data from 1531, 1607 and 1682 previous perihelia of Halley's Comet.
• They estimated the error on the result, which proved to be of the correct order of magnitude 12 .
• They parallelized the computing effort: de Lalande and Lepaute computed the state of the system {Sun, Jupiter, Saturn} and passed their results to Clairaut, at the other side of the table, for him to compute Halley's Comet position.
Although all the calculations were done by hand -it took them five months of efforts to calibrate their method and predict the Comet's next perihelion -their approach is in essence very close to modern numerical model resolution.
Model design and Computing Power
The computing power available deeply influences the modeling choices. Since the 40s, the computing speed of sequential machines increased by more than ten orders of magnitude. Such a quantitative leap made possible otherwise unreasonable approaches. In the last decade, the sequential speed tends to stagnate but parallel computing leads to a massive increase in computing power. To take advantage of this technological shift, specific care is required in the design of a model.
Initially purely manual, calculations were progressively mechanized and standardized [70, p.6], to improve speed and accuracy while limiting errors13. Among the classical tools of human calculators we find tables - regrouping precomputed values of functions14 - abacuses, slide rules [19, p.374], counting frames and diverse computing machines.
In parallel to technical improvements, a strong emphasis on labor division and organization allowed to distribute large computing tasks in teams of calculators [START_REF] David | Human computers: the first pioneers of the information age[END_REF]. Such techniques remained competitive in the infancy of numerical computers, before being rendered obsolete by this much faster challenger.
The chaotic history [START_REF] Hennessy | Computer architecture: a quantitative approach[END_REF] of numerical computers - developing speed, available memory, versatility, standardization and ease of programming - will not be detailed here. It must however be underlined that the orders of magnitude of computing speed (Figure 3.5) possible with digital electronic computers are altogether out of reach with other techniques, human computation or mechanical devices [39, p.877].
For decades, up to the beginning of this millennium, around 2004 [START_REF] Flynn | Intel halts development of 2 new microprocessors[END_REF], the progress of computing hardware was mostly driven by frequency scaling: the increase of the processor frequency increased the number of operations executable by a machine in a given time (Figure 3.5). Arising issues, among which the power consumption of machines, led to a shift from frequency to parallel scaling (Figure 3.6). Without a radical technological breakthrough, the number of operations per cycle and the frequency of the processors may not dramatically increase, leaving unchanged the current order of magnitude of executable operations per second on a single processing unit.
[Figure 3.5 - the performance of modern machines can no longer be measured on this scale, as they are designed to use multiple units in parallel; mixed data for the multiplication and addition of 10-digit and 16-digit numbers, from [88, p.137], [39, p.887], [148, p.33], [63, p.14], [START_REF]Processing power compared[END_REF] and [START_REF] Strohmaier | TOP500 list[END_REF].]

To increase the computing power, industrial companies shifted from building faster processing units to designing machines with more units: a shift from frequency scaling to parallel scaling. Faster machines are built by associating multiple processing units, from a pair to 10⁷ for Sunway TaihuLight15, the current fastest machine of the TOP500 list [START_REF] Strohmaier | TOP500 list[END_REF]. In consequence, gains in computing power do not imply any gain in the running time of a sequential implementation. An additional development effort is required to parallelize the codes and take advantage of the machine resources: portions of the algorithm have to run simultaneously on various processors. Actual computing time gains can be rather limited [16] and the nature of the algorithm sets an asymptotic limit [101, p.41] to possible time gains with parallel scaling: the irreducible critical path is intrinsically sequential16. Whatever the strategy used to save time by massive parallelization, the required resources in terms of energy are rather intrinsic to a resolution tool: modern computers require roughly 10⁻⁹ J to execute a floating point operation [3].
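The asymptotic limit mentioned above is classically expressed by Amdahl's argument - recalled here as a generic reminder, not as a restatement of the cited reference: if a fraction p of the work can be parallelized over N units while the remaining fraction 1 - p is intrinsically sequential, the speedup is bounded:

S(N) = \frac{1}{(1-p) + \dfrac{p}{N}} \xrightarrow[N \to \infty]{} \frac{1}{1-p}.

For example, with p = 0.95, no more than a twentyfold speedup can be expected, regardless of the number of processing units.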
From handmade calculation to the massively parallel architectures of modern clusters, many key principles are shared by numerical methods, independently of the tool used for the resolution. The main invariant issue remains the compromise17 between accuracy and computation time, which is sometimes closely related to cost. Regardless of the available machines, computing times approaching 10⁴ h are impractical and computing times of 10⁶ h are ridiculous [238, p.249].
Thus, conceptual numerical methods can only become of practical interest when adequate computing power is available. Very small test cases used during this PhD, for example running within minutes on a Xeon X5660, would take about 10⁷ years to compute by hand and would have taken months to compute on Maniac, the fastest machine of the 1950s18. A quantitative order-of-magnitude change in the available computing power implies a qualitative change in the modeling approaches that are possible [19, p.376] [33].

16 As a quick example applied to DEM simulations: the particles can be distributed on several processing units, but the explicit time integration is sequential. See also Section 8.5 for a discussion regarding potential algorithmic work-arounds.
17 The fact that technical progress does not change the nature of key compromises seems rather generic [43, p.95].
18 Maniac was the machine used to implement and test for the first time DEM-like algorithms (refer to Section 5.3.2.2 and [START_REF] Pasta | Heuristic numerical work in some problems of hydrodynamics[END_REF]).
Computerized Numerical Approach
The massive computing power available is not a guarantee of the quality of a model. For complex models and modern computer architectures, a comprehensive and systematic "error-proof " model is technically unreasonable and contradictory with computing performances. Specific safeguards are required to assess the quality and the potential domain of application of a model.
Modern computing techniques open the route toward the modeling of complex systems. The model life cycle is a long route starting with the observation of a dominant physical phenomenon, turned into an idealized conceptual model, translated into a mathematical formalism, algorithmically discretized to be solvable by numerical means and finally implemented in a given programming language.
Although a clear distinction cannot always be respected, the issue of evaluating the credibility of a computed result can be divided into three sub-problems [START_REF] Schlesinger | Terminology for model credibility[END_REF]:
• The evaluation of the accordance of the studied physical phenomena with the proposed conceptual model, the qualification of the model 19 .
• The fidelity of the final computed results to the conceptual model, the verification of the model.
• The validation, which studies the accordance of the predicted behavior with measurements of the physical phenomena.
Validation and verification are common practice in model design. Qualification is rarer when a strongly based literature is available for the studied phenomena. On the generic sketch for credibility assessment in Figure 3.7, an alternative route is proposed to solve a conceptual model: an analogous model can be designed, typically in the objective of an easier resolution. Such an approach introduces a supplementary "layer" between the model and the physical phenomena. In this specific case, the exact status (validation or verification?) of the benchmark of the results of the analogous model with reference results can be somewhat ambiguous. More generally, the strict respect of this terminology is not always practical. Its main merit lies in the attempt to systematically shed light on various potential pitfalls. The use of computerized numerical approaches introduces specific issues in the verification phase, where the accordance of the results to the conceptual model is checked. Errors can be classified as stemming from distinct sources [162, p.9]:
• Discretization (spatial and temporal);
• Iterative procedure convergence [START_REF] Simo | of Interdisciplinary Applied Mathematics[END_REF]Chap. 6 and 8];
• Computer round-off [START_REF] Goldberg | What every computer scientist should know about floating-point arithmetic[END_REF];
• Computer programming 20 .
For many numerical model implementations, a rigorous mathematical approach - proving existence and uniqueness of the solution, computing convergence rates and accumulated round-off errors - is impractical. Similarly, robust programming methodologies are time and memory consuming, both at implementation and at run time [189, p.15]. Not only can the accuracy of a computerized model be questioned, but - even with improved accuracy and stability - the reproducibility of the results is not guaranteed21 and is contradictory with the quest for fast computations [START_REF] Revol | Numerical reproducibility and parallel computations: Issues for interval algorithms[END_REF].
Somewhat as in the design of models of chaotic physical behavior, a non-specialist in computing science is bound to evaluate the reasonable metrics of interest of an executable program and to delimit ranges of sensible use, confronting it with test cases and physical measurements.
Description of the Studied Phenomena
The observed physical phenomena can be described and idealized, which is a preliminary step in a modeling approach. Some overall effects of the motion and interactions of interfaces of solids are briefly looked into. Classifications are proposed for interface types and topological events. Such taxonomies are not always of practical interest, but can serve as a guide in the design of a model.
As a rough conceptual model, based on the observations of the physical phenomena of interest (Section 2.5), we consider a collection of finite solid continuous media, seen as distinct objects. The objects can undergo finite geometrical transformations, i.e. translations, rotations and strains [192, p.59], that are neither considered infinite, as in the intimate mixing of two fluids, nor infinitesimal22. In addition, the strains are considered irreversible and isochoric and the elastic effects are considered negligible.
The objects can also mechanically interact with one another - or with themselves - through contacts of their material boundaries, the interfaces. The interfaces are spatial discontinuities. A canonical dichotomy of discontinuities in solids is to describe them as weak or strong (Figure 3.8). A weak discontinuity is a model where no relative motion is possible across the interface. Although it is often an oversimplification, a weak discontinuity is a convenient way to idealize cohesive interfaces between objects. Material properties, strain and stress fields can be discontinuous across the interface; but the displacement field is described as continuous. Conceptually, a single object is modeled, with changing properties at the interface. A strong discontinuity model allows arbitrary relative motion between objects, conceptually modeled as distinct. The displacement field can be discontinuous across the interface, allowing interacting interfaces to model contact phenomena.
A weak discontinuity description can often be a first modeling step, allowing the use of a continuous topological framework, by changing only material properties from one side of the interface to the other. A model describing strong discontinuity must include a discrete formalism to compute the mechanical reaction to the interaction phenomena, typically to prevent inter-penetration of the objects [START_REF] De | Notice d'utilisation du contact[END_REF]. Such algorithms must cope with displacements and changes in shapes of the objects: motion and potential interaction of the interfaces must be handled.
From the point of view of their detection in a modeling framework, two categories of interface interaction can be distinguished (Figure 3.9). A first category is the interaction between distinct objects, hereafter referred to as contact. Objects involved in a contact are enclosed within their own boundaries. The distinct objects can be described by non intersecting sets of material points. At a finite precision, it is theoretically possible to explicitly list all members of the objects, and to test membership to detect contact. In contrast, in the self-contact category, the boundaries of a unique object are interacting, the listing of material points membership is not sufficient anymore to detect contact. In the case of limited strain of the materials, the tracking of material points located at the interfaces could be a sufficient strategy to detect contacts and self-contacts. In finite transformation context, the potential migration of material points makes it challenging to design fully automated contact and self-contact detection algorithm. Indeed, the deformation of an object can constrain material points to migrate toward or away from a boundary (Figure 3.10), making the contact detection task non-trivial for modeling tools working at a finite precision.
Figure 3.10: Contact detection in finite transformation context, example of a hole in a matrix. Material points initially far from interfaces might migrate toward them. Thus, using a finite precision, tracking material points initially on the interfaces is not sufficient to detect self-contacts.
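As a minimal illustration of the proximity idea behind such detections - and not of the actual detection algorithm developed later in this work - the Python sketch below flags a contact between two clouds of boundary points whenever their minimum distance falls below a tolerance; the point clouds and the tolerance value are arbitrary.

import numpy as np
from scipy.spatial import cKDTree

def in_contact(boundary_a, boundary_b, tolerance):
    """Return True if any point of boundary_a lies within `tolerance` of boundary_b."""
    tree_b = cKDTree(boundary_b)
    distances, _ = tree_b.query(boundary_a, k=1)
    return bool(np.any(distances < tolerance))

# Two arbitrary circular "object boundaries" sampled as point clouds.
angles = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack((np.cos(angles), np.sin(angles)))
boundary_a = 1.0 * circle                                  # unit circle at the origin
boundary_b = 0.5 * circle + np.array([1.6, 0.0])           # smaller circle, gap of ~0.1
boundary_b_closer = 0.5 * circle + np.array([1.52, 0.0])   # same circle, gap of ~0.02

print(in_contact(boundary_a, boundary_b, tolerance=0.05))         # False
print(in_contact(boundary_a, boundary_b_closer, tolerance=0.05))  # True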
The potential creation, motion and interaction of interfaces can lead to topological changes of the objects. A topological event does not necessarily stem from radical changes in the shape of an object. From a mathematical point of view and illustrated with a mechanician's words, topological events occur when successive configurations are not:
• Homotopy equivalent [235, p.163], when the number of objects or of holes changes (e.g. Figures 3.11b and 3.12).
• Homeomorphic [235, p.57], when the configurations are homotopy equivalent but the number of endpoints changes (e.g. Figure 3.11a).

For example, in Figure 3.11b, an inclusion is split into two parts: the number of "objects" changes, the configurations are not homotopy equivalent. In Figure 3.12a, as the pore opens, the configurations are not homotopy equivalent: the object remains unique, but now presents a "hole". In contrast, the configurations before and after the branching of a crack (Figure 3.11) are not homeomorphic: only the number of "endpoints" varies. In the final configuration, the theoretical removal of the bifurcation point would separate the crack into three "parts". Before the branching, two "parts" at most can be obtained by the removal of a single point on the path of the crack.
In the context of solid mechanics and working at finite precision, rigorous distinctions are probably too abstract to be practical, both with respect to the observation of the physical phenomena and the numerical models. Typical configurations of interest are:
• The creation or the removal of holes (Figure 3.12a) or endpoints (Figure 3.11a) in objects.
• The merging and splitting processes of objects or holes (Figure 3.12b).

More than the mathematical typology of topological events, our conceptual model must handle operations on the boundaries of the objects, including their creation, destruction, merge and split.
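As a minimal, purely illustrative sketch of how such events can be flagged on a segmented image - this is not the tracking procedure used later in this work - the following Python lines count objects and holes in a 2D binary image with scipy; a change in either count between successive configurations signals a topological event. The toy images are arbitrary.

import numpy as np
from scipy import ndimage

def topology_counts(binary_image):
    """Return (number of objects, number of holes) of a 2D binary image."""
    _, n_objects = ndimage.label(binary_image)
    filled = ndimage.binary_fill_holes(binary_image)
    _, n_holes = ndimage.label(filled & ~binary_image)
    return n_objects, n_holes

# Configuration 1: a single square object.
config_1 = np.zeros((40, 40), dtype=bool)
config_1[5:35, 5:35] = True

# Configuration 2: the same object after a pore has opened, i.e. with a hole.
config_2 = config_1.copy()
config_2[18:22, 18:22] = False

print(topology_counts(config_1))  # (1, 0)
print(topology_counts(config_2))  # (1, 1)
if topology_counts(config_1) != topology_counts(config_2):
    print("topological event detected between the two configurations")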
PhD Objective: Requirements for a Modeling Tool
Based on the description (Section 3.2) of the observed phenomena (Section 2.5), practical guidelines can be drawn for the modelization objective.
A model and its resolution strategy shall be designed to:
1. Describe the quasistatic deformation of cohesive and continuous media.
2. Describe irreversible finite transformations in continuous media. The possibility of tracking transformations from a reference configuration may be of interest 23 . However, the assumption of small displacements will not be considered reasonable: reference and deformed geometries will be distinguished. Elastic reversible processes will be neglected.
3. Describe cohesive materials, coping with tensile and compressive loads, displaying a resistance to shape modification. Mechanical transformations shall conserve volume.
The volume conservation will be a metric of quality of the model.
4. Describe inelastic strains, with typical plastic or viscoplastic behaviors. The accuracy of the representation of a targeted stress/strain behavior will be a metric of the quality of the model. Norton law and perfect plasticity will be typically represented behaviors, with little focus on more complex constitutive laws.
5. Describe multi-material flows, with an arbitrary finite number of distinct continuous phases.
6. Provide a detection method for contact events, the interaction between distinct phases, and for self-contact events, the interaction of a phase boundary with itself.
The reliability of the interaction detection algorithms will be a metric of quality of the model.
7. Handle an arbitrary finite number of distinct interactions occurring simultaneously.
The model shall handle simple interaction behaviors such as impenetrability or cohesion. The ease of extension toward more complex interaction behaviors, both for weak and strong discontinuities, will be a subjective indicator of quality of the model.
8. Handle topological modification of the continuous phases: merging and splitting. Topological events also concern holes in the phases: opening, closure, merging and splitting. The ease of handling of such events will be a subjective indicator of quality of the model.
Part II
Review of Modeling Strategies

In Part I, the general context of the PhD has been introduced. Initiated from an experimental perspective, the study of composite forming led to specific modeling needs. Toward the understanding of the deformation mechanisms, a numerical method able to handle interface interactions and topological events is complementary to the designed experimental setup.
Part II reviews potential numerical methods in computational solid mechanics to describe finite transformation in multi-phase materials. The part is organized in three chapters:
• Chapter 4 briefly presents two key modeling choices: the kinematical standpoint and the topology of the description. Only Lagrangian methods are further considered.
• Chapter 5 is a lecture grid of diverse Lagrangian numerical methods, based as much as possible on algorithmic features. A graphical outline of the chapter is proposed on page 63.
• Chapter 6 focuses on the comparative description of selected methods and their potentiality.
Highlights -Part II Review of Modeling Strategies
• Simulation strategies are shaped by two key modeling choices:
the kinematical description of the flow and the topology of the constitutive law of the material.
Eulerian kinematics are considered too costly to accurately track numerous interfaces. Among Lagrangian kinematics, two topological approaches are studied: approaches based on a continuous constitutive behavior (solving a partial differential equation) and approaches based on a discrete law (mimicking continua with a set of interacting objects).
• A partial lecture grid, focusing on algorithmic features, can help to highlight distinctions and similarities for a selection of numerical methods.
Methods based on a continuous topology display a variety of strategies to include the description of discontinuities in their framework. Methods based on a discrete topology can in turn propose an analogical route to mimic the behavior of continuous media.
• Among potential modeling tools, the discrete element method (DEM) is innately suited to handle numerous contacts and topological events.
Its versatility and the ease of implementing arbitrary behaviors make it an appealing tool. To our knowledge, no existing DEM algorithm can describe the inelastic finite strain of incompressible continuous media.
Key Modeling Choices
In this short chapter, the conceptual distinction between Lagrangian/Eulerian and Discrete/Continuous modeling approaches is examined. Examples of numerical methods are given and the DEM, used in this PhD, is located.
Two key modeling choices will deeply shape the strategy to model deforming materials. We focus here on potential techniques to model the continuous ideal phases described in Section 3.3.
The first key modeling choice is the topology of the material constitutive law (see Figure 4.1): an idealized material phase can either be considered as a set of discrete interacting objects, or as a continuum.1 In both cases, the unknown field of the system is x.
(Figure 4.1) (a) Continuous material: within a domain, a continuous constitutive law is valid, typically described by a partial differential equation (PDE) with a differential operator $\mathcal{L}$ and a second member $f$: $\mathcal{L}(x) = f$. (b) Discrete material: distinct elementary entities interact with each other; the action $f_i$ on a given entity $i$ is the summed effect of all its interactions with the surrounding entities $j$: $f_i = \sum_j f_{j \to i}(x_i, x_j, \dots)$.
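To make the distinction concrete, the short sketch below (an illustration added for this purpose, not taken from any cited work) evaluates both descriptions in one dimension: a finite-difference evaluation of a toy differential operator for the continuous case, and a pairwise summation of interaction contributions for the discrete case. The spring-like pair law and the sampled field are arbitrary assumptions of the example.

```python
# Added illustration of the two topologies of Figure 4.1, in one dimension.
# (a) Continuous law: a finite-difference evaluation of the toy operator L(x) = -x''.
# (b) Discrete law: the action on entity i is the sum of pairwise contributions f_{j->i}.
import numpy as np

def continuous_residual(x, ds):
    """Evaluate L(x) = -x'' on the interior points of a sampled 1D field."""
    return -(x[2:] - 2.0 * x[1:-1] + x[:-2]) / ds**2

def discrete_forces(positions, pair_force):
    """Sum the pairwise interactions f_{j->i}(x_i, x_j) acting on each entity i."""
    n = len(positions)
    forces = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                forces[i] += pair_force(positions[i], positions[j])
    return forces

# Arbitrary pair law for the example: a linear spring of rest length 1 between every pair.
spring = lambda xi, xj: -1.0 * ((xi - xj) - np.sign(xi - xj) * 1.0)

x_field = np.linspace(0.0, 1.0, 11) ** 2           # sampled field for (a): x(s) = s^2
print(continuous_residual(x_field, ds=0.1))        # constant value -2, as expected
print(discrete_forces(np.array([0.0, 0.9, 2.1]), spring))
```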
The second key modeling choice is the kinematical standpoint used to describe the flowing material [156, p.45] (see Figure 4.2): a Lagrangian (or material ) approach, where the position of specific material points is tracked over time; or an Eulerian (or spatial ) approach, where the flow, seen from fixed points, is measured.
To fix ideas with concrete examples of numerical methods (see also Table 4.1): the Lagrangian finite element method (FEM) is a dominant modeling tool in solid mechanics simulations, using the motion of a mesh to track the deformation of the continuous material; the Eulerian finite volume method (FVM) is widely applied to fluid simulations, where the flow of the continuous fluid is observed from a fixed mesh.2 The lattice Boltzmann method (LBM) [START_REF] Chen | Lattice Boltzmann method for fluid flows[END_REF] is an example combining an Eulerian standpoint with a discrete description.
(Table 4.1) Positioning of the DEM, the main numerical tool used in this work.
In the context of solid mechanics, typically to model metallic materials at the structural product scale, a framework based on a Lagrangian standpoint and continuous constitutive law -as the classical Lagrangian FEM implementations -often appears more natural [20, p.315]. A priori, Eulerian kinematics seem better suited to fluid dynamics, as fluids do not possess a preferential reference configuration [75, p.131]. A discrete constitutive law finds more straightforward applications in the simulation of finite sets of objects, typically granular materials or atomic interactions.
Eulerian kinematical standpoints can be successfully applied to solid mechanics problems. Their intrinsic ability to handle arbitrary strains without mesh distortion is appealing for modeling flow-like phenomena, and such frameworks are also popular for modeling fluid-solid interactions.2 Specific numerical techniques have been developed to allow the tracking of material and interface positions. Examples in a finite transformation context include the volume of solid method [5], the pseudo-concentration technique [25] and the reference map technique [START_REF] Kamrin | Reference map technique for finite-strain elasticity and fluid-solid interaction[END_REF]. Mixed strategies, trying to take advantage of both the Eulerian and Lagrangian standpoints, were developed, such as the arbitrary Lagrangian Eulerian (ALE) [START_REF] Donea | Arbitrary Lagrangian-Eulerian methods[END_REF] and the deforming spatial domain/stabilized space time (DSD/SST) [START_REF] Tayfun | A new strategy for finite element computations involving moving boundaries and interfaces-the deformingspatial-domain/space-time procedure: I. the concept and the preliminary tests[END_REF] formalisms.
Footnote 2: It must be emphasized that the FVM and the FEM are not intrinsically bound to the Eulerian and Lagrangian kinematical standpoints, respectively. Both methods are generic variational formulations to solve PDEs by weighted residuals; they are closely related and can be written with a common mathematical formalism [108, p.3323]. They differ mostly by the choice of basis functions: polynomial for the FEM and constant for the FVM [69, p.443]. Both numerical methods can be formalized from both kinematical standpoints, and the four possible combinations have been applied to both fluid and solid simulation. The intrinsic ease of application to certain partial differential equation (PDE) types [108, p.3325] and strong traditions in distinct communities lead to the predominance of the Lagrangian FEM for solid simulation and of the Eulerian FVM for fluid simulation.
However, the handling of complex three-dimensional interfaces of arbitrary shape is still challenging with Eulerian or mixed Eulerian-Lagrangian standpoints. A heavier computing effort is required to match the accuracy of the interface description reachable with Lagrangian standpoints. The Eulerian kinematical modeling option will not be investigated further in this work.3
Our analysis will thus focus on Lagrangian strategies for solid mechanics, which natively and explicitly track the phase motion in space with respect to a reference configuration 4 . Both discrete and continuous topological descriptions of the materials will be considered. In the context of modeling continuous media, topologically discrete simulation tools necessarily rely on an analogous approach to build phenomenological models.
Chapter 5
Lecture Grid: Lagrangian Methods
In Chapter 4, two key modeling choices (topology and kinematics) were examined. To meet our modelization objective, we choose an approach associated in the literature with the prolific "meshless" and "particle" methods.
This chapter is an attempt to build a lecture grid of a subset of the literature landscape of the Lagrangian methods. The objective is to better locate the specificity of our approach. Far from being a strict classification attempt, the lecture grid aims to provide potential comparison features between methods. The chapter is split into three sections:
• Section 5.1 is a short overview of the chapter.
• Section 5.2 describes methods using a topologically continuous constitutive law.
• Section 5.3 describes discrete approaches.
A reader without specific interest in the taxonomy issues arising in the vast realm of meshless and particle methods might skip to Chapter 6, where selected methods are briefly compared.
Overview
Some Lagrangian methods are generic means to solve PDE, accounting for a continuous constitutive law. Specific strategies must be designed to deal with discontinuities. Other Lagrangian methods solve the behavior of systems of discrete interacting bodies. Modeling a continuum is possible, but not straightforward. Up to some point, the two strategies can be converging, especially in the context of a numerical resolution.
As a very general description of the variety of strategies developed in the vast Lagrangian family of numerical methods for solid mechanics, two complementary, and up to some point converging, routes can be drawn. On one side, extensions within continuous constitutive law frameworks were developed to handle discrete phenomena such as discontinuities, their motion and their interactions. On the other side, discrete constitutive behavior frameworks were used and modified to phenomenologically mimic continuous behaviors. The naive continuous/discrete distinction is challenged in the context of numerical resolution, by essence discrete at the lowest computing level. As will be discussed, some borderline strategies may display ambiguous features.
Following the first route -developing discrete phenomena management within a continuous framework- extensions are built on top of a PDE solver. Prolific developments tried to stick as closely as possible to the FEM framework, adding supplementary numerical ingredients for discontinuity management. These strategies, described in Section 5.2.1, take advantage of a strong physical, mathematical and numerical background. Taking one step aside, as discussed in Section 5.2.1.2, the dependency of the methodology on the mesh can be alleviated by building the basis functions of the Galerkin method on a cloud of nodes, without prior definition of their connectivity, as in the element-free Galerkin method (EFG). More radically, the FEM variational formulation and matrix formalism can be abandoned by solving the partial differential equation directly and locally, as in the methods related to the smooth particle hydrodynamics (SPH) presented in Section 5.2.2.
The second route, more confidential but yet historically early, is described in Section 5.3. Interactions between large sets of independent objects are designed to mimic macroscopic continuous behaviors, thus phenomenologically reproducing continuous behavior with a discrete framework, for example the DEM. Although not as straightforward to describe the continuous phenomena, this approach is innately suited to handle an arbitrary large number of discrete interactions.
The classical sets of methods named in the literature -first of which the meshless methods1 and the particle methods2- are broad and provide little help to classify resolution strategies from the algorithmic and conceptual point of view. Indeed, many historical bridges between rather distinct methodologies, and concurrent formalizations of essentially similar approaches, tend to create a somewhat fuzzy literature landscape. From a practical point of view, the numerical side tools and algorithms used for secondary tasks have a major contribution to the overall exploitability of a conceptual method. The denomination of a method is often heavily influenced by minor algorithmic choices, or even implementation details.
In order to better locate the methodology developed in our work, the lecture grid proposed hereafter attempts to rely as much as possible on algorithmic features rather than on final applications.3 The objective is not to propose a strict and comprehensive classification, nor a historical review,4 but to contribute to the identification of similarities and specificities of numerical approaches.
(Figure) Positioning of the meshless methods set and the discrete element method (DEM), the main numerical tool used in this work. See also Figure 6.1 for a pairwise comparison of a selection of methods.
Continuous Constitutive Law
A solidly grounded approach to model the deformation of solids is to solve the PDE describing their continuous behavior. To handle discrete and topological events, numerous extensions within these continuous constitutive law frameworks were developed. Two main resolution strategies of the PDE are considered: variational approaches (Section 5.2.1) and direct resolutions (Section 5.2.2). It must be kept in mind that in the context of a numerical resolution, such continuous models will be discretized at some stage.
Variational/Global Resolution
The variational resolution of a PDE is based on energy minimization. Assumptions are made on the local "shape" of the fields of interest, typically the displacement for solid mechanics. The solution is found by minimizing the energy, within the a priori chosen "shape". Powerful mathematical frameworks are designed, based on linear algebra, to aggregate local contributions and solve the system globally. Two potential geometrical supports for the shape functions, used for spatial discretization, are dealt with: a mesh (Section 5.2.1.1) and a cloud of nodes (Section 5.2.1.2).
Basis Function Built on a Mesh
The de facto dominant tool to solve PDE in solid mechanics is the FEM. The method is a variational resolution based on a mesh as geometrical support. After a short introduction, three potential strategies to cope with discontinuities will be investigated: a conforming mesh (Section 5.2.1.1.1), the use of Lagrangian markers (Section 5.2.1.1.2) and the use of extra basis functions (Section 5.2.1.1.3).
For decades, the FEM [START_REF] Dhatt | Méthode des éléments finis[END_REF] has been a strongly established tool for solid mechanics, and more generally to solve partial differential equation systems. The method is based on a finite set of degrees of freedom, used to describe the motion of a continuous medium by a judiciously guessed interpolating function. The solution is found by minimizing the energy of the system, using a matrix formalism.
In the first related attempts by Rayleigh and Ritz the deforming shapes of linear vibrations were guessed globally, for the whole system [START_REF] William | The theory of sound[END_REF]Chap. 4] [START_REF] Ritz | Theorie der Transversalschwingungen einer quadratischen Platte mit freien Rändern[END_REF]. The methodology proved to be impractical for arbitrary geometries 5 . To deal with arbitrary geometries, Courant thus chose to subdivide the system into small domains and used polynomial shape functions in each subdomain [START_REF] Courant | Variational methods for the solution of problems of equilibrium and vibrations[END_REF]. The rise of computing power triggered the development of such methodologies, and led to the success of the FEM [START_REF] Turner | Stiffness and deflection analysis of complex structures[END_REF].
The continuous unknown fields are discretized on a mesh, with a polynomial approximation on its elements. The resolution is based on variational formulation -energy minimization principles -integrating and summing-up local contributions from the elements into global grand matrices and vectors, solving the overall problem by linear algebra. Versatile and generic, the FEM relies on strongly grounded physical and mathematical tools, built or adapted to meet the mechanicians' needs in a wide range of applications.
[...] cited as an early precursor of the FEM, although it conceptually is much closer to the DEM [98, p.149].
Footnote 4: One could refer to [244, p.3] or [START_REF] Felippa | A historical outline of matrix structural analysis: a play in three acts[END_REF] for the FEM and to [98, p.149] for the DEM.
Footnote 5: See footnote 7 on page 40.
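As a concrete, textbook-level illustration of the workflow just described -polynomial shape functions on elements, integration of the local contributions, assembly into a global matrix and resolution by linear algebra- the following sketch solves a clamped one-dimensional bar under a uniform axial load. The bar data (EA, q, eight elements) are arbitrary values chosen for the example, not taken from this work.

```python
# Added textbook-level illustration: linear finite elements for a clamped 1D bar of
# stiffness EA under a uniform axial load q (arbitrary example values).
import numpy as np

n_el, length, EA, q = 8, 1.0, 1.0, 1.0
n_nodes = n_el + 1
h = length / n_el
K = np.zeros((n_nodes, n_nodes))
F = np.zeros(n_nodes)
for e in range(n_el):                              # element-wise integration and assembly
    ke = EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = q * h / 2.0 * np.ones(2)                  # consistent nodal load of the element
    K[e:e + 2, e:e + 2] += ke
    F[e:e + 2] += fe

free = np.arange(1, n_nodes)                       # clamp the first node: u_0 = 0
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

# Exact solution u(x) = q/EA * (L*x - x^2/2); linear elements reproduce it at the nodes.
x = np.linspace(0.0, length, n_nodes)
print(np.max(np.abs(u - q / EA * (length * x - x**2 / 2.0))))
```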
Historical challenges -regarding finite transformation, interface and contact handling- triggered intense research and the development of work-arounds and extensions for the method.
In the proposed reading grid, the FEM extensions are organized based on the numerical approach chosen to track mobile interfaces. This key issue can for example be addressed by building a conforming mesh (Section 5.2.1.1.1) or6 by adding supplementary data to the continuous description with:
• Lagrangian markers (Section 5.2.1.1.2);
• Extra basis functions 7 (Section 5.2.1.1.3).
Contact detection and handling are not examined here, as they do not seem to be as closely bound to a specific method as the interface tracking is.
[FEM] Conforming Mesh
The most straightforward strategy to track interfaces is to explicitly build their geometrical description, directly using the nodes of the mesh: a conforming mesh. Given a meshed geometry, any standard FEM implementation can handle the description of limited strains. Typically, one can model composites -considering ideal weak discontinuities, perfectly cohesive- and cellular materials. In a conforming mesh context, the description of strong discontinuities can be enriched, for example by the use of a cohesive zone model (CZM) [START_REF] Elices | The cohesive zone model: advantages, limitations and challenges[END_REF].8
In this standard configuration, describing larger strains involves a periodical remeshing procedure of the geometry (Figure 5.2). Indeed, an excessive distortion of the elements leads to numerical errors and ultimately to computation failure. The remeshing involves the transfer of the state parameters, which is not trivial and needs specific numerical care to maintain a controlled accuracy [START_REF] Coupez | 3-D finite element modelling of the forging process with automatic remeshing[END_REF][START_REF] Ch | Remeshing issues in the finite element analysis of metal forming problems[END_REF]. An alternative, used in the particle finite element method with moving mesh (PFEM)9 [107], is to store all the state parameters at the nodes and to conserve all nodes at each remeshing iteration. Arbitrarily numerous remeshings can then be performed without transfer-related accuracy loss, but the cost of the remeshing operations is not lessened.
Footnote 6: Our lecture grid is by no means comprehensive; the immersed boundary method [START_REF] Peskin | Flow patterns around heart valves: a numerical method[END_REF] is another example, among many, of the introduction of a dual Lagrangian/Eulerian description.
Footnote 7: Extra basis functions are often used in combination with conforming meshes or Lagrangian markers.
Footnote 8: This strategy was used to study the room temperature behavior of a composite [73, p.146] similar to our model material (Section 2.3).
Footnote 9: Our reading grid distinguishes the particle finite element method with moving mesh (PFEM) and the particle finite element method with fixed mesh (PFEM2). The PFEM, described in this section, uses a conforming mesh strategy to track interfaces. The PFEM2, described in Section 5.2.1.1.2, uses a set of Lagrangian markers over an Eulerian computing grid. As a side note, both methods also use extra basis functions to improve the accuracy of the interface definition [107, p.1763].
[MPM] Lagrangian Markers
A somewhat exotic strategy to extend the FEM is to use a set of Lagrangian markers to represent the position and motion of the materials. This supplementary description of discontinuities is overprinted on top of the continuous mesh (Figure 5.3).
Two coordinate systems are simultaneously used:
• A mobile set of Lagrangian points accounting for the material displacements;
• A fixed Eulerian or pseudo-Eulerian grid to perform computations.
Lagrangian material points move from grid cell to grid cell over time, and at each step the computations are based on the material points present in the cells. This particle in cell method (PIC) -or material point method (MPM)- strategy was originally designed for fluid dynamics and was applied very early to two-dimensional multiphase flow, with interface tracking [START_REF] Evans | The particle-in-cell method for hydrodynamic calculations[END_REF]. The method was later transposed to two-dimensional solid mechanics [START_REF] Sulsky | A particle method for historydependent materials[END_REF][START_REF] Sulsky | Application of a particlein-cell method to solid mechanics[END_REF]. In these solid mechanics applications, the "fixed" mesh is not always strictly Eulerian anymore. At each step, the mesh is deformed with the material in a Lagrangian way, as in the standard FEM. However, the same initial undeformed mesh is used again at the beginning of the following step, the overall material deformation being represented by the motion of the material points. A cheap regular mesh can thus be used, without any remeshing procedure during the simulation. An alternative formulation aimed at fluid-structure interaction simulations -the particle finite element method with fixed mesh (PFEM2)10- was also initially designed for fluids [START_REF] Rodolfo Idelsohn | A fast and accurate method to solve the incompressible Navier-Stokes equations[END_REF] and extended to solids [21]. In contrast to the MPM, the computing mesh is purely Eulerian and fixed.
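The dual description at the core of PIC/MPM can be sketched in one dimension as follows (an added illustration assuming linear hat functions on a regular grid and a toy gravity load, not the implementations cited above): particle data are gathered on the grid nodes, the grid is updated, and the result is mapped back to the Lagrangian material points, which carry the deformation from step to step.

```python
# Added 1D sketch of the particle/grid transfers at the core of PIC/MPM (assumption:
# linear hat shape functions on a regular grid; toy gravity load on the grid).
import numpy as np

def pic_step(xp, vp, mp, n_nodes, dx, dt, gravity=-9.81):
    """One explicit PIC step: particles -> grid, grid update, grid -> particles."""
    m_grid = np.zeros(n_nodes)
    p_grid = np.zeros(n_nodes)             # nodal momentum
    base = np.floor(xp / dx).astype(int)   # left node of the cell containing each particle
    w_right = xp / dx - base               # linear shape-function weights
    w_left = 1.0 - w_right
    np.add.at(m_grid, base, w_left * mp)
    np.add.at(m_grid, base + 1, w_right * mp)
    np.add.at(p_grid, base, w_left * mp * vp)
    np.add.at(p_grid, base + 1, w_right * mp * vp)
    v_grid = np.divide(p_grid, m_grid, out=np.zeros_like(p_grid), where=m_grid > 0)
    v_grid += dt * gravity                 # toy external load applied on the grid
    # Grid -> particles: interpolate the updated velocity and move the Lagrangian points.
    vp_new = w_left * v_grid[base] + w_right * v_grid[base + 1]
    xp_new = xp + dt * vp_new
    return xp_new, vp_new

xp = np.array([0.45, 0.50, 0.55])          # three material points
vp = np.zeros(3)
mp = np.ones(3)
for _ in range(10):
    xp, vp = pic_step(xp, vp, mp, n_nodes=21, dx=0.1, dt=1e-3)
print(xp, vp)
```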
To our knowledge, the use of Lagrangian markers to track the interfaces themselves instead of the material phases has not been transposed from fluid to solid simulations. See for example the moving Lagrangian interface remeshing technique (MLIRT) [37].
Footnote 10: Refer to footnote 9 for the disambiguation between the particle finite element method with moving mesh (PFEM) and the particle finite element method with fixed mesh (PFEM2).
[XFEM] Extra Basis Function
A more mainstream approach to overprint discontinuities on top of the continuous mesh is to enrich the elements by extra basis functions. Now widely used in both fluid and solid mechanics, this strategy typically uses singular functions to describe interface positions within the elements. Often based on the level set methodology [START_REF] Osher | Level set methods and dynamic implicit surfaces[END_REF], this approach gives an Eulerian description of interfaces within a Lagrangian mesh of the material [START_REF] Moës | A finite element method for crack growth without remeshing[END_REF]. In FEM context, a current denomination in the literature is the extended finite element method (XFEM). This strategy allows the study, without remeshing procedures, of problems where interfaces move without perturbing too much the global geometry (Figure 5.4). It must be emphasized that from a mathematical point of view, the level set methodology offers efficient tools to deal with topological events [163, p.26], whose handling is much easier to implement and automate than with Lagrangian descriptions, especially those involving a connectivity table. The drawback, lessened by numerous numerical techniques, is a tendency to lose track of mass or volume [163, p.82].
In the context of finite strains, frequent global remeshing cannot be avoided if a conforming mesh or an Eulerian description of the interfaces is used. In addition to a potentially prohibitive computing cost, the reliable automation of the remeshing and the proper handling of topological events are algorithmic challenges. More generally, the overall quality and efficiency of the methods are often bottlenecked by these side tools. Developing techniques rely on parallelization paradigms and on the simultaneous use of a conforming mesh and of Eulerian interfaces, the latter as an auxiliary tool for remeshing [START_REF] Coupez | 3-D finite element modelling of the forging process with automatic remeshing[END_REF]28].
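The ease with which the level-set description handles topological events can be illustrated by the small sketch below (assumed setting of the example: two circles represented by a signed-distance field on a fixed grid): when the circles grow and merge, the zero iso-contour changes topology without any explicit surgery on a boundary description.

```python
# Added illustration of the level-set idea: an interface is the zero iso-contour of a
# scalar field, so the merging of two growing circles requires no explicit handling.
import numpy as np

x, y = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))

def two_circles(radius):
    """Signed distance to the union of two circles centered at (-0.6, 0) and (0.6, 0)."""
    d1 = np.hypot(x + 0.6, y) - radius
    d2 = np.hypot(x - 0.6, y) - radius
    return np.minimum(d1, d2)

for r in (0.4, 0.7):
    phi = two_circles(r)
    inside = phi < 0.0
    # Crude connectivity check along the x-axis: one 0->1 crossing per connected blob.
    row = inside[200, :]
    n_blobs = np.count_nonzero(np.diff(row.astype(int)) == 1)
    print(f"radius {r}: the zero contour encloses {n_blobs} connected region(s) on the x-axis")
```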
[EFG] Basis Function Built on a Cloud of Nodes
To alleviate the FEM dependency on a mesh, while conserving the framework and the main methodology, the building rules of the basis function can be modified to rely on a cloud of nodes, instead of a mesh.
As in the classical FEM, the partial differential equation system is solved in a weak form, using a Galerkin method, via a matrix formulation of the discretized problem. Local contributions are integrated and assembled to build global grand matrices, a task numerically executed using a background cell structure, and the resolution relies on the matrix formalism. However, instead of building polynomial basis functions over the elements of a mesh, which requires an a priori specification of the connectivity of the nodes as in the FEM, these methods rely on a neighborhood of nodes.
To minimize the importance of implementation details, these methods can all be considered as instances of the partition of unity method (PUM) [START_REF] Melenk | The partition of unity finite element method: basic theory and applications[END_REF]. The pioneering attempt, the diffuse element method (difEM)11 [START_REF] Nayroles | Generalizing the finite element method: diffuse approximation and diffuse elements[END_REF], used a moving least-square approximation as partition of unity and was applied to heat conduction. Very close refactorings, the EFG [23] and the reproducing particle kernel method (RPKM)12 [START_REF] Wing | Reproducing kernel particle methods for structural dynamics[END_REF], were applied to solid mechanics.
In the natural element method (NEM), the partition of unity is based on the Sibson (or natural neighbor) interpolation [START_REF] Sibson | A brief description of natural neighbor interpolation[END_REF]. Initiated for geophysical problem of large scale mass transports and geometry changes [START_REF] Traversoni | Natural neighbour finite elements[END_REF][START_REF] Sambridge | Geophysical parametrization and interpolation of irregular data using natural neighbours[END_REF], the NEM was then applied to solid mechanics [START_REF] Natarajan Sukumar | The natural element method in solid mechanics[END_REF].
Generalized finite difference method (GFDM) approaches [START_REF] Liszka | The finite difference method at arbitrary irregular grids and its application in applied mechanics[END_REF][START_REF] Liszka | Hp-meshless cloud method[END_REF], working on unstructured grids, are closely related methods [24, p.4].
Although this cloud-of-nodes approach has been quite popular among FEM-background mechanicians, a major drawback is the numerical cost [24, p.45], intrinsically higher than that of the FEM, with which it shares a common methodology, and, incidentally, higher than that of the SPH, its main challenger. The extra cost arises firstly in the integration of the stiffness matrix [150, p.29], and secondly in the resolution of the matrix system. Indeed, the grand stiffness matrix built in the FEM has a banded structure, stemming from the connectivity table, leading to a lower resolution cost. The grand matrices built on clouds of nodes are generally not banded, or at best have a larger bandwidth -nodes being bound to interact with a greater number of neighbors to ensure numerical stability- inducing a higher computational cost for the numerical resolution of the system.
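To fix ideas on the statistical approximation of fields over a neighborhood of nodes used by these methods, a one-dimensional moving least-squares sketch is given below (assumptions of the example: a linear basis and a Gaussian weight of support radius r; this is an added illustration, not the difEM/EFG implementations cited above).

```python
# Added 1D moving least-squares (MLS) sketch: approximate a field at a point from scattered
# nodal values, without any prior connectivity table (linear basis p = (1, x) assumed).
import numpy as np

def mls_value(x, nodes, values, r=0.3):
    """Weighted local linear fit around x, evaluated at x."""
    w = np.exp(-((x - nodes) / r) ** 2) * (np.abs(x - nodes) < 2 * r)
    p_nodes = np.vstack([np.ones_like(nodes), nodes])      # 2 x n basis at the nodes
    A = (w * p_nodes) @ p_nodes.T                           # 2 x 2 moment matrix
    B = w * p_nodes                                         # 2 x n
    coeffs = np.linalg.solve(A, B @ values)                 # local fit coefficients
    return np.array([1.0, x]) @ coeffs

nodes = np.linspace(0.0, 1.0, 11)                           # cloud of nodes
values = np.sin(2 * np.pi * nodes)                          # nodal data
print(mls_value(0.37, nodes, values), np.sin(2 * np.pi * 0.37))
```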
[SPH] Direct/Local Resolution
A historically early attempt to handle drastic changes in size and shape consists in solving directly the partial differential equation without rewriting it in a variational formulation. A cloud of points discretizes the material, but it is not used as a support to build basis function of a Galerkin method. The material behavior, the continuous medium constitutive law, is directly solved locally for each computing point. Although the methods developed are algorithmically discrete, this is a numerical work-around to compute the behavior of a continuum.
A now classical approach uses a probabilistic approximation of the local field derivatives, based on a kernel function (Figure 5.5). From this computation, the "efforts" acting on the material points are used to explicitly integrate13 their positions and velocities over time. This approach is conceptually and mathematically close to the EFG -they use a similar approximation of the field derivatives to solve continuous PDEs- and algorithmically close to molecular dynamics (MD), allowing implementations of the formalism in MD solvers [START_REF] Michael | Implementing peridynamics within a molecular dynamics code[END_REF]4]. Initially developed for astrophysics and more generally for compressible fluids [START_REF] Lucy | A numerical approach to the testing of the fission hypothesis[END_REF][START_REF] Gingold | Smoothed particle hydrodynamics: theory and application to non-spherical stars[END_REF], the SPH was later transposed to solid mechanics [START_REF] Libersky | Smooth particle hydrodynamics with strength of materials[END_REF][START_REF] Libersky | High stain lagrangian hydrodynamics: A three-dimensional SPH code for dynamic material response[END_REF] and successfully applied to finite transformations in a three-dimensional context [START_REF] Fraser | A Mesh-Free Solid-Mechanics Approach for Simulating the Friction Stir-Welding Process[END_REF].
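A minimal one-dimensional sketch of this kernel approximation is given below (assumptions of the example: a Gaussian kernel of smoothing length h and a uniformly discretized field; production SPH codes rather use compact-support kernels such as cubic splines). It shows how a field and its derivative are estimated at a point from the particle values, the building block from which the "efforts" on the material points are then computed.

```python
# Added 1D sketch of the SPH kernel approximation of a field and of its derivative.
import numpy as np

def w_gauss(r, h):
    """Normalized 1D Gaussian kernel of smoothing length h."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

def dw_gauss(r, h):
    """Derivative of the kernel with respect to the evaluation point."""
    return -2.0 * r / h**2 * w_gauss(r, h)

def sph_field_and_gradient(x_eval, x_part, f_part, m_part, rho_part, h):
    """<f>(x) = sum_j (m_j/rho_j) f_j W(x - x_j); the gradient uses the kernel derivative."""
    r = x_eval - x_part
    vol = m_part / rho_part
    return np.sum(vol * f_part * w_gauss(r, h)), np.sum(vol * f_part * dw_gauss(r, h))

x_part = np.linspace(0.0, 1.0, 101)
dx = x_part[1] - x_part[0]
f_part = np.sin(2 * np.pi * x_part)
m = np.full_like(x_part, dx)       # unit-density discretization: m_j / rho_j = dx
rho = np.ones_like(x_part)
print(sph_field_and_gradient(0.5, x_part, f_part, m, rho, h=3 * dx))
# Expected roughly (0, -6.28) away from the boundaries: sin(pi) and 2*pi*cos(pi).
```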
As a side note, the algorithmic proximity with the MD can prove challenging for our naive distinction between discrete and continuous constitutive laws. The classical SPH requires the use of a kernel to approximate the derivatives; the "efforts" are conceptually not computed pairwise as interaction forces. Some methods are more ambiguous, such as the particle and force method (PAF) [52], where pairwise interactions are directly derived from the conservation of energy in a fluid [52, p.15]. More recently, the Mka3D formalism [START_REF] Monasse | An energy-preserving discrete element method for elastodynamics[END_REF] solves the elastic behavior of a continuum in a DEM framework using only geometrical assumptions. Both examples illustrate the limits of our lecture grid. Numerous variants of the SPH were developed, using diverse kernel function types, mathematical formalisms, stabilization techniques and side tools. As an example among many from the literature, the state-based peridynamics (statePD)14 [START_REF] Silling | Peridynamic states and constitutive modeling[END_REF][START_REF] Foster | Viscoplasticity using peridynamics[END_REF] is an instance of SPH based on a total Lagrangian formulation [START_REF] Ganzenmüller | On the similarity of meshless discretizations of peridynamics and smooth-particle hydrodynamics[END_REF]. Recent developments of the statePD propose an updated Lagrangian formulation [26], as in the original SPH.
Footnote 11: In this work, to avoid confusion, the acronym difEM is used for the diffuse element method. The acronym DEM is reserved for the discrete element method.
Footnote 12: It is not always clear in the literature whether the RPKM is to be considered closer to the SPH (a method described in Section 5.2.2) or to the EFG. The former is the claimed filiation in the original paper [137, p.1082] and the latter can for example be read in Sukumar [222, p.3]. The fact that matrices are built within a Galerkin method [136, p.1667] [31, p.1262] oriented our choice.
Footnote 13: Variants using fully [START_REF] Livne | An implicit method for two-dimensional hydrodynamics[END_REF] or partially [START_REF] Koshizuka | Moving-particle semi-implicit method for fragmentation of incompressible fluid[END_REF] implicit integration schemes have also been proposed.
A typical numerical issue of the method is the so-called tensile instability. This potential problem arises when material points move too far from or too close to each other [130, p.73], potentially causing a purely numerical fragmentation [153, p.290]. For example, in solids, when an initially homogeneous spatial distribution is anisotropically deformed, the number of neighbors within a constant smoothing distance can become too small in the direction of the preferential strain. The integration of the constitutive behavior then becomes unreliable, and the continuity of the medium might be numerically interrupted.
As a side-note, this problem -well documented in SPH because of its common use in finite transformation context -can arise very similarly in EFG-like methods 15 and PIC-like methods [66, p.17]. Tensile instability can also be considered as closely related to mesh distortion issues in FEM.
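The neighbor-depletion mechanism behind the tensile instability can be visualized with the toy computation below (an added illustration on a regular particle lattice, with arbitrary spacing and smoothing distance): after an isochoric stretch along x, the central particle keeps neighbors only in the transverse direction within the fixed smoothing distance.

```python
# Added illustration: neighbor depletion on a regular particle lattice after an isochoric
# stretch, counted within a fixed smoothing distance around the central particle.
import numpy as np

spacing = 1.0
xs, ys = np.meshgrid(np.arange(-5, 6) * spacing, np.arange(-5, 6) * spacing)
pts = np.column_stack([xs.ravel(), ys.ravel()])
center = np.array([0.0, 0.0])
h = 1.5 * spacing                                   # constant smoothing distance

def neighbor_stats(points, h):
    """Count the neighbors of the central particle, and those lying along the x-axis."""
    d = np.linalg.norm(points - center, axis=1)
    mask = (d > 0.0) & (d < h)
    along_x = mask & (np.abs(points[:, 1]) < 1e-9)
    return int(mask.sum()), int(along_x.sum())

print("undeformed (total, along x):", neighbor_stats(pts, h))
stretched = pts * np.array([2.0, 0.5])              # isochoric stretch along x
print("stretched  (total, along x):", neighbor_stats(stretched, h))
```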
Discrete Constitutive Law
Instead of solving a PDE, a discrete topology of the constitutive law can be chosen. The material is conceptually described as a set of interacting objects. After a general introduction, two resolution strategies are examined: global for the whole system (Section 5.3.1) or particle-wise, like in the DEM (Section 5.3.2).
Radically rooting the description of the material in a distinct formalism, numerical methods can be built on a discrete constitutive law. The common denominator of these approaches is to consider a finite set of interacting discrete elementary objects. Unlike methods relying on a continuous constitutive law (see Section 5.2), discrete approaches are not designed to solve partial differential equation systems.
The mathematical and physical consequences of such approaches must be underlined (see also Section 3.1.1). The built models are often indeterminate when applied to static problems and, in addition, become chaotic for dynamic problems. Numerical methods are thus intrinsically16 ill-conditioned, regardless17 of the chosen modeling approach: the error on the final state cannot be bounded for an arbitrarily small error on the initial state.
Such limitations imply that dynamic discrete models may, at best, model statistical collective behavior of sets of objects. The exact tracking of the state of individual material points is illusory and is not within the modeling scope.
As a historical note, the numerical resolution of a discrete model may seem more straightforward than the numerical discretization of a continuous model. Early attempts thus used discrete topology to analogically model elasticity in continuous media [START_REF] Wieghardt | Über einen Grenzübergang der Elastizitätslehre und seine Anwendung auf die Statik hochgradig statisch unbestimmter Fachwerke[END_REF][START_REF] Riedel | Beiträge zur Lösung des ebenen Problems eines elastischen Körpers mittels der Airyschen Spannungsfunktion[END_REF][START_REF] Hrennikoff | Solution of problems of elasticity by the framework method[END_REF].
Topologically discrete methodologies share three common conceptual ancestors, all of them struggling with the computing power necessary to solve such systems:
• Gravitational laws (Section 3.1.3.1), Clairaut modeled only 6 bodies [92, p.20];
• Molecular systems for gases [START_REF] Clerk | On the dynamical theory of gases[END_REF] or solids [START_REF] Gustav | Die Fundamentalgleichungen der Theorie der Elasticität fester Körper, hergeleitet aus der Betrachtung eines Systems von Punkten, welche durch elatische Streben verbunden sind[END_REF], Maxwell computed only average global effects [146, p.62] and Kirsch drew qualitative conclusions, unable to deal with the too numerous unknowns [98, p.149].
• Truss-type systems, for which the first effective computing techniques for discrete systems were designed: graphical [START_REF] Clerk | On reciprocal figures, frames, and diagrams of forces[END_REF] (see also Section 3.1.2) or numerical [START_REF] Cross | Analysis of continuous frames by distributing fixed-end moments[END_REF]. These methods remained dominant until the rise of numerical computers.
From an algorithmic point of view, drawing a clear distinction between numerical methods inspired by truss or molecular phenomena proves delicate. We thus favored the resolution methodology to build our lecture grid: either a global resolution for the whole system like in the lattice models (Section 5.3.1) or a local approach, particle-wise or interaction-wise, like in the DEM (Section 5.3.2).
[Lattice Model] Global Matrix Resolution
Lattice models are conceptually discrete models: a set of interacting objects is considered. Their resolution is based on linear algebra tools, the local contributions are aggregated in global matrices.
Lattice models [START_REF] Servatius | Generic and abstract rigidity[END_REF][START_REF] Ostoja-Starzewski | Lattice models in micromechanics[END_REF], or spring networks, use topologically discrete constitutive behavior of the material: linear elastic springs connecting material points for example.
Potential non-linearities (contact, material properties...) are linearized around the studied state to build a global stiffness matrix, for the whole system. The response of the system is solved using this global linearized force/displacement law between the degrees of freedom of the discrete material points.
The methods are often focused on static or quasistatic phenomena [START_REF] Roux | Quasistatic rheology and the origins of strain[END_REF][START_REF] Roux | Quasi-static methods[END_REF], efficiently solved in their framework, but can be extended to dynamic phenomena somewhat like the FEM 18 , writing a system including mass and damping matrices. To account for geometrical evolution of the system, the stiffness matrix can be rebuilt, or the parameters can be adapted to simulate the breakage of a bond [164, p.42].
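The resolution strategy of lattice models can be sketched in a few lines for a one-dimensional chain of linear springs (arbitrary stiffness and loading chosen for the example). The assembly-and-solve structure is deliberately the same as in the FEM bar sketch given earlier -precisely the kinship with the FEM noted in Section 6.1- but here the springs are the constitutive description itself rather than a discretization of a PDE.

```python
# Added illustration: a lattice of linear springs assembled into a global stiffness matrix
# and solved by linear algebra (arbitrary stiffness and loading for the example).
import numpy as np

n_nodes, k = 6, 10.0                        # 5 springs of stiffness k between 6 nodes
K = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes - 1):                # aggregate each 2x2 spring contribution
    K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n_nodes)
f[-1] = 1.0                                 # unit traction applied to the last node
free = np.arange(1, n_nodes)                # node 0 is clamped (u_0 = 0)
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print(u)                                    # linear displacement profile, u_last = 5/k
```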
Particle-Wise Local Resolution
To take full advantage of the discrete methods, it may be profitable to design algorithms to locally solve such models. For each object within the considered set, the temporal evolution is computed from the surrounding environment. After a short introduction, two temporal integration schemes are dealt with: backward schemes (Section 5.3.2.1) and forward schemes (Section 5.3.2.2).
Many numerical methods use a particle-wise resolution to solve a topologically discrete material behavior: the temporal evolution of the system is solved independently for each elementary object (or particle), based on its interactions with the others. This methodology allows a very free handling of local non-linearities of the behavior and the effective resolution of complex interactions between numerous objects. Conceptually, this approach is closely related to the 1757 attempt described in Section 3.1.3.1.
The evolution of the discrete system can be numerically handled using two overall time integration schemes 19 :
• Backward (or implicit), see Section 5.3.2.1;
• Forward (or explicit), see Section 5.3.2.2.
Although the choice of scheme may have little influence on the final modelization metrics [112, p.2], it must be underlined that the two schemes are not only algorithmic strategies to solve identical systems: backward schemes allow the formulation of interaction laws as strict inequalities, typically for the impenetrability of the particles. In contrast, in forward schemes, such inequalities are regularized, typically with a stiff penalization of the interpenetration.
From the computational point of view, the choice of scheme also implies distinct numerical behaviors. Backward schemes are unconditionally stable, allowing a user-chosen compromise between computation time and accuracy, while the time step of forward schemes is strictly limited (see also Section 8.3).
Footnote 18: Lattice models share their matrix formalism with the FEM [START_REF] Felippa | A historical outline of matrix structural analysis: a play in three acts[END_REF]. Even after the apparition of the FEM, at more recent periods than the pioneer papers referred to at the beginning of Section 5.3, lattice models were often proposed as alternative methods, for example to reduce the computing cost [START_REF] Kawai | New element models in discrete structural analysis[END_REF].
Footnote 19: Backward- and forward-based methods are sometimes respectively referred to as non-smooth and smooth methods [START_REF] Jean | Contact dynamics method[END_REF]. However, the degree of smoothness of the interaction laws between particles is not algorithmically constrained by the chosen methodology. Although backward schemes are mathematically more satisfying to integrate arbitrarily non-smooth behaviors, forward schemes are almost always used to integrate behaviors of limited smoothness, among which the contact creations and losses.
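The stability contrast stated above can be demonstrated on a single spring-mass oscillator (illustrative parameters; the forward scheme below is the symplectic Euler update common in explicit codes, the backward scheme is a backward Euler step solved as a 2x2 linear system): above the critical time step the forward integration diverges, while the backward integration remains stable for any time step.

```python
# Added illustration: conditional stability of a forward scheme versus the unconditional
# stability of a backward scheme, on an undamped spring-mass oscillator (toy parameters).
import numpy as np

k, m = 1.0e4, 1.0
omega = np.sqrt(k / m)
dt_crit = 2.0 / omega                        # stability limit of the symplectic Euler scheme

def forward(dt, n_steps=50):
    """Symplectic (semi-implicit) Euler: extrapolate from the current state."""
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        v += dt * (-k / m * x)
        x += dt * v
    return abs(x)

def backward(dt, n_steps=50):
    """Backward Euler: solve for the end-of-step state (here a 2x2 linear system)."""
    x, v = 1.0, 0.0
    A = np.array([[1.0, -dt], [dt * k / m, 1.0]])
    for _ in range(n_steps):
        x, v = np.linalg.solve(A, np.array([x, v]))
    return abs(x)

for dt in (0.5 * dt_crit, 1.5 * dt_crit):
    print(f"dt/dt_crit = {dt / dt_crit:.1f}: forward |x| = {forward(dt):.2e}, "
          f"backward |x| = {backward(dt):.2e}")
```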
[NSCD] Backward Time Scheme
In a backward time scheme integration, the solver tries to find a solution satisfying the constraints over the previously elapsed time step. This strategy allows arbitrarily non-smooth behaviors to be properly taken into account. In addition, the compromise between computing time and accuracy can be chosen. Backward (or implicit) time schemes handle, by design, inequalities and arbitrarily non-smooth interaction laws between elementary objects. The instantaneous values taken by the interactions are not computed and thus do not need to be expressly defined. At each step, for each interacting pair [112, p.11], the solvers attempt to find a final solution satisfying the system constraints and Newton's second law over the time elapsed since the last known state.
In a first variant, referred to as the event-driven method (EDM), the time stepping is defined by the physical events [START_REF] Pöschel | Computational granular dynamics: models and algorithms[END_REF]Chap. 3]. The system state is updated only at anticipated interactions, computed from the current state. Such approaches, which can be traced back early in the study of molecular systems [7], are of interest only for systems like sparse gases, with rare, independent, non-simultaneous interactions, whose time scale is considered negligible with respect to the simulation time (instantaneous interactions). They are thus little suited to the modelization of dense materials.
A second variant of backward schemes uses an arbitrary time stepping, independent from the physical events [START_REF] Pöschel | Computational granular dynamics: models and algorithms[END_REF]Chap. 5]. Specific implicit integration schemes were developed for solid mechanics and applied to granular materials, such as the contact dynamics (CD) and the nonsmooth contact dynamics (NSCD) [START_REF] Jean | Frictional contact in rigid or deformable bodies: numerical simulation of geomaterials[END_REF][START_REF] Jean | Non-smooth contact dynamics approach of cohesive materials[END_REF]. For each elapsed step, potential interactions are listed and expressed as a set of equations and inequations, numerically solved to find a satisfactory state of the system.
In the spirit of the NSCD, the local configurations of each interaction are considered of secondary order and not known enough for a detailed description [START_REF] Jean | Contact dynamics method[END_REF]. Only key phenomenological aspects are modeled, via non-smooth mappings and inequalities, typically the impenetrability of the objects and Coulomb friction. The solver only operates on integrals of the interactions, without computing instantaneous evaluations, and can handle both smooth and non-smooth behaviors.
The intrinsic numerical stability of such schemes allows an interesting latitude in the compromise between accuracy and the cost of the computation. However, numerically efficient implicit solvers can prove more delicate to implement and parallelize than explicit solvers. It must however be noted that, to our knowledge, the largest particle simulations are run using implicit integration [START_REF] Rüde | Algorithmic efficiency and the energy wall. 2nd workshop on poweraware computing[END_REF] and that specific algorithms for general-purpose computing on graphics processing units (GPGPU) have been developed [START_REF] Lichtenheldt | A stable, implicit time integration scheme for discrete element method and contact problems in dynamics[END_REF].
[DEM] Forward Time Scheme
A forward time scheme integration extrapolates future states of the system from the current configuration. In consequence, it is conditionally stable, leading to strong constraints on the chosen time steps. The overall conceptual simplicity of the discrete method using forward integration allows an easy adaptation of the method for distinct purposes. Forward (or explicit) time integration schemes rely on the computation, at each time step, of the interactions between the elementary objects [START_REF] Pöschel | Computational granular dynamics: models and algorithms[END_REF]Chap. 2]. The next state of the system is then computed from the current state -positions, velocities and forces- using Newton's second law. This lean approach is straightforward to implement, including to model arbitrarily complex interaction behaviors, as long as an explicit -but not necessarily smooth20- mapping defines them. The choice of the time step, and thus the computing time, is strongly constrained by the numerical stability requirements, the smoothness of the interaction laws and the acceptable computing cost. However, the relative simplicity of distributing the computing load allows the simulation of large systems. This methodology, used in our work, is detailed in Chapter 8.
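As an illustration of the loop just described -a deliberately minimal sketch, not the solver used in this work- the following script integrates the head-on contact of two particles with a linear repulsive force computed from their indentation; the stiffness, mass and time step are arbitrary example values, the time step being chosen well below the contact oscillation period.

```python
# Added illustration (not the solver used in this work): explicit DEM integration of the
# head-on contact of two particles, with a linear repulsive force from their indentation.
import numpy as np

radius, k_n, mass, dt = 0.5, 1.0e4, 1.0, 1.0e-4   # stiff contact => small stable time step
x = np.array([0.0, 1.2])                          # two particles approaching on a line
v = np.array([1.0, -1.0])

def contact_forces(x):
    forces = np.zeros_like(x)
    gap = (x[1] - x[0]) - 2.0 * radius            # negative gap = indentation
    if gap < 0.0:
        fn = -k_n * gap                           # repulsive normal force
        forces[0] -= fn
        forces[1] += fn
    return forces

for step in range(2000):
    f = contact_forces(x)
    v += dt * f / mass                            # Newton's second law, forward integration
    x += dt * v
print("final positions:", x, "final velocities:", v)   # an elastic rebound is expected
```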
Historically, the computerized model of elastically connected material points from 1955 [START_REF] Fermi | Studies of non linear problems[END_REF] lacks the essential ability to arbitrarily change neighbors over time. An algorithmically more comprehensive precursor can be found in 1959 [START_REF] Pasta | Heuristic numerical work in some problems of hydrodynamics[END_REF] (Figure 5.6). Although the implementation is somewhat rough21, all the key algorithmic features are already present in the main22 code: interaction forces, arbitrary changes of neighbors, explicit integration. Interestingly, those attempts from the 1950s were using a discrete constitutive law as a numerical work-around to study a macroscopic continuum. Nowadays, the mainstream usage of such methods focuses on the simulation of phenomena that are conceptually described by a discrete topology. Slight variations of the formalism used to compute the interactions between neighboring objects stem from the distinct applications. In the MD formalism [START_REF] Rahman | Correlations in the motion of atoms in liquid argon[END_REF], typically applied to study atomic interactions, the interactions are defined through potentials around points, within a cutoff distance. In the DEM formalism [START_REF] Cundall | A computer model for simulating progressive, large scale movement in blocky rock systems[END_REF][START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF][START_REF] Mcnamara | Molecular dynamics method[END_REF], widely used in granular flow simulation, interaction forces are computed between interpenetrated particles.
These two classical formalisms do not differ from the algorithmic point of view, and rely on identical numerical tools. To sum up, in the MD, typically punctual bodies interact through a potential, often written as a function of the distance, within a cutoff range chosen for computational convenience. In standard DEM applications, bodies are considered to have a "physical" dimension and shape; they interact through force and torque laws, often written as functions of the indentation.
Footnote 20: The conditional stability of forward schemes implies that the smoothness of the laws influences the overall numerical quality.
Footnote 21: All parameters were powers of two, in order to replace multiplications by binary shifts, faster operations [166, p.8]. Refer to Figure 3.5 on page 44 for orders of magnitude of the computing power of Maniac, the historical machine used for this work.
Footnote 22: The authors do not describe in detail their aborted attempt to treat incompressibility more accurately than with binary repulsion [166, p.11]. Although unsuccessful, this second code was probably an ancestor of the SPH [86, p.376].
Implemented codes can practically be used for both, with minor adaptations23. Built upon a historically distinct mathematical formalism, but essentially relying on the same principles, the bond-based peridynamics (bondPD) [START_REF] Silling | A meshfree method based on the peridynamic model of solid mechanics[END_REF]24 comes to similar results and limitations. Other variants found in the literature are the so-called movable cellular automata (MCA) [START_REF] Grigorievich Psakhie | Method of movable cellular automata as a tool for simulation within the framework of mesomechanics[END_REF][START_REF] Psakhie | Movable cellular automata method for simulating materials with mesostructure[END_REF], using CA algorithms to define the interaction laws between the elementary particles in an MD framework.
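The distinction between the two formalisms can be made concrete with the side-by-side sketch below (illustrative parameter values; the Lennard-Jones potential and the linear contact law are standard textbook choices, not those of any specific code cited here).

```python
# Added illustration of the two pairwise conventions (toy parameter values): an MD-style
# force derived from a truncated Lennard-Jones potential, and a DEM-style force that is
# nonzero only when two finite-size particles interpenetrate.
import numpy as np

def md_force(r, epsilon=1.0, sigma=1.0, r_cut=2.5):
    """Lennard-Jones radial force (positive = repulsive), truncated beyond the cutoff."""
    if r >= r_cut:
        return 0.0
    return 24.0 * epsilon * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

def dem_force(r, radius=0.5, k_n=100.0):
    """Linear repulsive force computed from the indentation of two spheres."""
    indentation = 2.0 * radius - r
    return k_n * indentation if indentation > 0.0 else 0.0

for r in (0.9, 1.0, 1.2, 3.0):
    print(f"r = {r:>3}: MD force = {md_force(r):8.3f}, DEM force = {dem_force(r):6.2f}")
```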
In the perspective of modeling continuous phenomena, choosing a discrete constitutive law from the start can be seen as an alternative to numerically discretizing and solving continuous equations. In a more modern context than the pioneer attempts cited above, such a methodology has been applied to elastic and brittle macroscopic behaviors [START_REF] Donzé | Numerical simulation of faults and shear zones[END_REF][START_REF] Donzé | Formulation of a 3-D numerical model of brittle behaviour[END_REF][START_REF] Kun | Transition from damage to fragmentation in collision of solids[END_REF][START_REF] Silling | A meshfree method based on the peridynamic model of solid mechanics[END_REF][START_REF] Sator | Generic behaviours in impact fragmentation[END_REF][START_REF] Jauffrès | Simulation of the toughness of partially sintered ceramics with realistic microstructures[END_REF]10,[START_REF] Hahn | Life time prediction of metallic materials with the discrete-element-method[END_REF][START_REF] Jebahi | Discrete element method to model 3D continuous materials[END_REF][START_REF] Yu | Modeling mechanical behaviors of composites with various ratios of matrix-inclusion properties using movable cellular automaton method[END_REF]. The general trend is to build ad hoc interaction laws between imaginary particles, used to discretize a continuous medium, with the objective of phenomenologically reproducing a targeted macroscopic behavior. Most examples in the literature focus on the modeling of dynamic brittle fracture and fragmentation of elastic solids (Figure 5.7). More unusual applications can be found, for example volumetric plastic strain [START_REF] Jebahi | Simulation of Vickers indentation of silica glass[END_REF] or quasistatic buckling [START_REF] Kumar | Effect of packing characteristics on the discrete element simulation of elasticity and buckling[END_REF].
Chapter 6
Comparison of Selected Methods
In Chapter 5, a partial lecture grid of Lagrangian numerical methods was proposed to locate the DEM in a somewhat comprehensive context. This chapter proposes elements to illustrate the specificity of our approach. The chapter focuses on a few selected methods to underline similarities and differences with the DEM. The chapter is split into two short sections:
• Section 6.1 proposes a pair-wise comparison of the conceptual and algorithmic features of FEM, EFG, SPH, DEM and lattice model.
• Section 6.2 reviews some potentialities and issues linked to the different methods, with specific emphasis on the DEM.
FEM, EFG, SPH, DEM and lattice model
This section is an attempt to sum up the key similarities and distinguishing features, from an algorithmic and conceptual point of view, identified in the lecture grid proposed in Chapter 5. The section is limited to widespread methodologies: FEM, EFG, SPH, DEM and lattice models. A graphical synthesis of the section is proposed in Figure 6.1.
Lattice models share their core mathematical tool with the FEM. Both methodologies build up global grand matrices from local contributions to globally solve the response of the system via linear algebra [70, p.3]. Unlike the FEM, lattice models are not designed to solve PDEs representing the behavior of continuous media.
The FEM is a variational formulation1 of partial differential equations (PDE). The domain is subdivided into elements, formed by explicitly interconnected nodes, on which the fields are approximated by polynomial shape functions. The local contributions to the energy are integrated and the problem is solved in a global matrix formalism by minimizing the energy.
The EFG shares a common main methodology with the FEM: the variational formulation and global matrix formalism are identical. However, a distinct strategy is used to build the shape functions. Fields are statistically approximated in the neighborhood of the nodes, without a prior table of connectivity.
The SPH shares a mathematical tool with the EFG: the statistical approximation of fields in a neighborhood. However, the SPH does not rely on a variational formulation; the resolution of the partial differential equation is direct. No global matrix is built: the resolution of the continuous behavior is executed locally and independently for each material point, by computing the "efforts" exerted by the medium and explicitly integrating position and velocity.
The DEM is algorithmically related to the SPH. They both use a forward (explicit) time scheme to integrate the position and velocity of the material points, based on Newton's second law and both use neighbor detection algorithms. The salient distinction is the determination of the "efforts" exerted on material points [86, p.376]. In the DEM, the forces on material points stem from interactions with its neighbors. In the SPH, efforts on the material points are computed as the solution of a PDE, with a continuous constitutive law. Conceptually, many-body (or non local) interaction laws implemented in DEM-like frameworks to account for the local volume free for the particle [START_REF] Grigorievich Psakhie | Modeling nanoindentation of TiC-CaPON coating on Ti substrate using movable cellular automaton method[END_REF] are steps toward the SPH methodology to solve a continuous constitutive law.
Lattice models are conceptually related to the DEM. They share a topologically discrete constitutive behavior: interactions between members of a set of objects. However, they are algorithmically distinct: while the DEM locally computes and integrates the behavior of the objects, independently from one another, lattice models rely on a matrix formalism and a global resolution.
Ongoing Challenges
As-is, numerous powerful methods could be used for our modelization objective. We remark that the strategy of using the DEM, well suited to model contacts and topological events, is hindered by the lack of models describing isochoric inelastic strains.
Within the Lagrangian family, the accumulated efforts of the past decades opened wide fields to modeling tools. While methods related to the EFG still suffer from a comparatively higher numerical cost, the FEM and the SPH now allow the modeling of finite transformations of complex multiphase three-dimensional geometries, including topological changes. Successful developments of the FEM are based on the massive use of automated remeshing techniques and of tracking and reconstruction algorithms for the interfaces. Even though parallel implementations are developed, these approaches are limited by computing power to configurations where the discrete events do not become predominant with respect to the continuous behavior. The SPH is now a promising challenger, for example with hybrid SPH/DEM models, implemented to handle both discrete and continuous behaviors in a common framework [START_REF] Swift | Modeling stress-induced damage from impact recovery experiments[END_REF][START_REF] Komoróczi | Meshless numerical modeling of brittle-viscous deformation: first results on boudinage and hydrofracturing using a coupling of discrete element method (DEM) and smoothed particle hydrodynamics (SPH)[END_REF]. Advanced developments remain specialist tools, and are not widely available in mainstream commercial or free/libre codes.
Strategies using a discrete constitutive law from the beginning, like the DEM, are now widespread and well established. The design of interaction laws and the calibration procedures of the numerical parameters allow the modeling of targeted continuous elastic macroscopic behaviors. The handling of fracture and contact mechanics comes at very little additional computational cost, including in the case of generalized fragmentation; one only needs to define ad hoc interaction cases.
However, to our knowledge, two closely related drawbacks hinder the modeling of finite strain in metallic alloys with the DEM. Firstly, the developed models for the continuous medium rely on initially pairwise bonded neighbors, as in a fixed grid. Most implementations can handle the breakage of such bonds, and arbitrary contacts afterwards, but do not accept neighbor changes within a continuous behavior. This is understood here as a total Lagrangian formulation of the constitutive behavior of the continuous medium, as the initial state -the initially bonded neighbors- is taken as the reference. Secondly, the introduced plasticity model intrinsically induces volumetric strain. Indeed, plasticity and viscoplasticity are computed as a pair interaction between particles, allowing them to inelastically interpenetrate [113] [210, p.153]. Summing up, the discrete constitutive law models lack isochoric plasticity mechanisms and are limited to total Lagrangian descriptions of the continuous behavior.
Part III
Modeling Approach
In Part I, the global objective of modeling inelastic strain in multi-materials was stated. In Part II, various candidate numerical methods to describe interface interactions and topological events were reviewed and compared.
Part III describes the chosen algorithmic strategy and the effective numerical tools used or developed. It is divided into four chapters:
• Chapter 7 states the research question, closely built upon the specific numerical approach, developed in the DEM framework.
• Chapter 8 reviews the basic algorithms and principles of the classical DEM.
• Chapter 9 presents the algorithmic principles of the developed numerical methods.
The chapter can be considered as a "cookbook", presenting as briefly as possible the introduced methods. The discussion of the choices will not be found in this chapter, but in the corresponding applicative sections, in Parts IV and V.
• Chapter 10 sums-up the choices of numerical solvers for DEM and FEM simulation, as well as pre-and post-processing tools.
Highlights -Part III Modeling Approach
• The PhD focuses on the development of a discrete element method (DEM) algorithm for finite inelastic transformation of incompressible multi-material.
The framework is trusted to natively handle discrete interaction and topological events. Efforts are concentrated on the assessment of the description of the deformation of continuous media.
• The principle of the model is a set of attractive-repulsive spherical particles, discretizing a continuum.
Under external loads, the packing of particles collectively copes with strain. The rearrangements, with arbitrary neighbor changes, account for irreversible strain. Ad hoc interaction laws can be designed to cope with compressive and tensile loads.
• Algorithms to detect physical events like contact and self-contact are proposed.
The self-contact detection, based on an approximation of the free surfaces, is to our knowledge a novel approach.
• The chosen DEM simulation engine is liggghts. Custom features are implemented within this open-source code.
The compromise between modularity and performance leads to the choice of the solver, able to massively parallelize large geometries. The studied problem can thus be scaled up to the study of full real samples.

Chapter 7

Research Question

This short chapter proposes a research question in relation with our modelization objective (Section 3.3) and the current limitations (Section 6.2) of the discrete element method (DEM).
The global purpose of this PhD is to propose a numerical tool handling the modelization objective defined in Section 3.3. In short, a model able to describe quasistatic finite inelastic and isochoric strains in solids, handling interface motion, interaction and self-interaction.
Among existing modeling strategies, a topologically discrete constitutive behavior is chosen. It will not be attempted to solve locally a continuous constitutive behavior. The formalism and numerical framework are taken from the DEM. As an innate drawback of the modeling choices, the powerful tools from the continuum mechanics formalisms cannot be exploited. The model will thus be phenomenological. The macroscopic continuous behavior of the material will be approximated via the collective interaction of sets of particles, without a priori qualitative alikeness between elementary interaction laws and macroscopic behavior.
Nevertheless, foreseen applications include strongly tortuous mesostructures and generalized interactions of interfaces. In such circumstances, the actual local continuous behavior is often uncertain, and a rough phenomenological approximation can capture its deformation well enough. The predominance of physical contacts also advocates for a discrete numerical method, designed to handle an arbitrarily large number of complex contacts.
Similar approaches have successfully been applied to elastic and brittle behavior (refer to Section 5.3.2.2). However, to our knowledge, no inelastic finite strain model has been proposed (Section 6.2).
The research question is thus exploratory: can a phenomenological DEM model be developed for inelastic and incompressible finite strain of continuous media?
The nature of the question implies a twofold proposition:
• An operational algorithmic proof of concept.
It shall be illustrated that the chosen strategy can effectively be applied. Computational issues will not be considered as secondary, as they may drastically restrain the potential of the method.
• A critical scope of validity assessment.
The quality of the description of the captured continuous phenomena shall be clearly delimited, based on numerical or experimental references. The ill-posed nature of the model will not be glossed over.
Chapter 8
The Discrete Element Method
The research question (Chapter 7) is closely built upon a specific numerical method, the DEM. This chapter describes the classical framework of the DEM. After a short introductory overview, each section is dedicated to an algorithmic feature.
The DEM is a dynamic Lagrangian method, conceptually closely related to molecular dynamics (MD), aiming to model the collective motion of sets of interacting objects, interpenetrating one another, governed by the conservation of momentum [149] [174, p.13-134]. At each time step (Figure 8.1), overlaps between objects are detected and used to compute interaction forces. Forces, positions and velocities are integrated using an explicit scheme, independently for each object, to define the next state of the system. From the conceptual point of view, the elementary objects could be of arbitrary shape. This option, aiming to refine the geometrical description of the objects, has been implemented following various routes, among which polyhedral objects [START_REF] Dubois | Numerical modeling of granular media composed of polyhedral particles[END_REF], triangulated bodies [START_REF] Smeets | Modeling contact interactions between triangulated rounded bodies for the discrete element method[END_REF], sums of primitives [START_REF] Effeindzourou | Modelling of deformable structures in the general framework of the discrete element method[END_REF] or superquadric particles [START_REF] Podlozhnyuk | Efficient implementation of superquadric particles in discrete element method within an open-source framework[END_REF]. However, neighbor search and overlap computation are algorithmically much heavier than with the exclusive use of spheres as elementary objects. In each modeling context, the compromise between computing efficiency and accurate description can be debated. It must not be forgotten that the method is a priori ill-posed, which sets an asymptotic limit on the accessible local details. In our work, particles have no proper physical sense: they are an arbitrary discretization of a continuous medium. Our description will thus be limited to methods using spheres as elementary objects, favoring computing efficiency and modeling complex shapes with larger collections of spheres.
The elementary objects could be designed to be able to deform or change size during the simulation. This can be used for example to account for chemical reactions [START_REF] Forgber | Optimal particle parameters for CLC and CLR processes-predictions by intra-particle transport models and experimental validation[END_REF] or to describe a finer mechanical behavior of the objects [START_REF] Effeindzourou | Modelling of deformable structures in the general framework of the discrete element method[END_REF]. In the objective of focusing on the collective behavior of numerous elementary objects, the shape and size of the objects will be considered fixed.
Along with an elementary geometry, we will not describe nor use algorithms that explicitly track the angular position of the particles. The algorithms used here can, with little effort, be extended to implicitly take into account rotational effects, but our models will rely on non oriented and non rotating particles, and will thus not describe torques between objects.
We thus limit ourselves to methodologies where a particle is defined by its mass and diameter, constant in time, and its position and velocity at a given instant. Additional state parameters, including history variables, can be Lagrangianly defined and stored at the particle level. In the following sections, we describe the ground features and algorithms of the classical DEM framework, and the associated numerical issues. The three generic steps of the DEM time loop (neighbor detection, computation of the interaction forces and temporal integration) are described respectively in Sections 8.1, 8.2 and 8.3. Ways of constraining the studied system, through boundary conditions, are described in Section 8.4. In our perspective, the parallel implementation of a code, described in Section 8.5, is not a mere computing add-on and must be thought out and designed along with the model itself.
Neighbor Detection
To define which pairs of particles are interacting, it is unreasonably costly to check every potential pair between all particles. The number of potential pairs is reduced by using geometrical criteria.
The systematic check of all possible pairs in a whole system, to detect interacting particles, is costly: O(N 2 ) for N particles. Among various algorithms, a simple and efficient method for large systems is to build cell lists [7, p.465]. The simulation domain is divided into cubic subdomains, the cells. All the particles are geometrically sorted and attributed to the cells. For each particle, a list of potential neighbors can periodically be built, using only the particles attributed to the surrounding cells (Figure 8.2). Implementations can execute the effective interaction check at this stage, using the particles sizes and positions, or leave it up to the interactions computation algorithm (see Section 8.2).
This approach can efficiently handle large systems -O(N • N n ) for N n particles in a neighborhood of 27 cells -as long as the dispersion of the diameters of the particles is limited. Specific variants are designed for systems with an excessive dispersion, using multiple grids.
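As an illustration of this geometric sorting, the sketch below builds candidate pairs with a cell list in plain Python; it is not the liggghts implementation, and the function and variable names (candidate_pairs, box, r_cut) are ours. Positions are assumed to be an (N, 3) numpy array in a non-periodic cubic domain [0, box)³.

```python
# A minimal cell-list sketch (illustrative Python, not the liggghts implementation).
import itertools
import numpy as np

def candidate_pairs(positions, box, r_cut):
    n_cells = max(1, int(box // (1.1 * r_cut)))      # cell edge ~10 % above r_cut
    cell_edge = box / n_cells
    cells = {}                                       # geometric sorting of the particles
    for i, x in enumerate(positions):
        key = tuple(np.clip((x // cell_edge).astype(int), 0, n_cells - 1))
        cells.setdefault(key, []).append(i)
    pairs = set()
    for key, members in cells.items():               # check only the 27 surrounding cells
        for offset in itertools.product((-1, 0, 1), repeat=3):
            neigh = tuple(k + o for k, o in zip(key, offset))
            if any(c < 0 or c >= n_cells for c in neigh):
                continue                             # non-periodic domain: skip out-of-range cells
            for i in members:
                for j in cells.get(neigh, []):
                    if i < j and np.linalg.norm(positions[i] - positions[j]) < r_cut:
                        pairs.add((i, j))
    return pairs
```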
The performance of the detection can be tuned and adapted to the context, with sensible choices of the cell sizes - typically 10 % larger than the largest particle diameter - and of the update frequency of the neighbor lists. Systems with slow motions of the particles and rare neighbor changes, quasistatic packing of dense aggregates for example, accept a lower update frequency than rapid flows.
The neighbor detection is, from the functional point of view, related to the meshing procedures used in the finite element method (FEM). It is much cheaper and easier to implement for arbitrary configurations, as no elements - with quality requirements - are built.
Interaction Forces
Algorithmically, a vast flexibility is available in the design of interaction laws. The implementation of complex behavior is often straightforward. For the model to simulate the collective effect of local phenomena, the interaction laws should be computed from local metrics.
The possibility of implementing arbitrary interaction laws is a major strength of the methodology. They only have to be explicitly computable, based on the current or past state of the system. Depending on the physical principles that inspired them, they thus take a variety of shapes.
In the most classical case, the relative kinematics - relative position and velocity - of the particles are used to compute pairwise interactions, all pair interactions being considered as independent (Figure 8.3). A major strength of the DEM is that little implementation effort is required to design arbitrary interaction laws, based on the current relative positions and velocities of two interacting particles.
A useful algorithmic add-on is the introduction of state parameters, for example variables linked to the solicitation history, at two distinct levels:
• The particles, storing a numerically Lagrangian data;
• The pairs, limited in time to the lifespan of the interaction, for example to model a bonding behavior between particles.
More complex behaviors can involve dependencies between the interactions. Computing interactions from neighborhoods, for example for compaction of powders [99, p.48], is referred to as many-body potentials in a MD context, and can be understood as non local behaviors in solid mechanics. To some extent, from the algorithmic point of view, such interactions [START_REF] Grigorievich Psakhie | Modeling nanoindentation of TiC-CaPON coating on Ti substrate using movable cellular automaton method[END_REF] can be considered as a step toward continuous methods as the smooth particle hydrodynamics (SPH), presented in Section 5.2.2.
Figure 8.3: a pair of interacting particles: (a) relative positions x_A, x_B and velocities v_A, v_B; (b) reciprocal interaction forces f_A→B and f_B→A.
Regardless of the design of the interaction laws -instantaneous or history aware, pairwise or many-body -all external forces, applied by the neighbors, are summed for each particle. Using Newton's second law, the acceleration a of each particle can be computed from its mass m and the sum of external forces f :
f = m • a (8.1)
although, in the effective implementations, the acceleration is not explicitly stored (see Section 8.3).
In the design of complex interaction laws, it must be remembered that the collective behavior may be qualitatively distinct from what is intuitively expected from the elementary behavior. In addition, for the model to effectively simulate the collective effect of local phenomena, artifacts involving metrics that are not locally computed must be considered with care.
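As a minimal illustration of the pairwise stage, the following Python sketch accumulates reciprocal forces over a list of candidate pairs, using a plain linear repulsion as a placeholder law. It is only an assumption-laden outline (names such as accumulate_forces are ours), not production solver code; `positions` is a float (N, 3) numpy array.

```python
# Minimal sketch of a pairwise force accumulation step (illustrative, not liggghts code).
import numpy as np

def accumulate_forces(positions, pairs, diameter, k_rep):
    forces = np.zeros_like(positions)
    for i, j in pairs:
        branch = positions[j] - positions[i]          # vector from i to j
        h = np.linalg.norm(branch)
        if h == 0.0 or h >= diameter:
            continue                                  # no overlap, no interaction
        normal = branch / h
        f = k_rep * (diameter - h)                    # signed norm, repulsive > 0
        forces[i] -= f * normal                       # reciprocal forces (Newton's third law)
        forces[j] += f * normal
    return forces
```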
Time Integration
The state of the system is explicitly integrated in time. The conditional stability of the schemes imposes a conceptual upper bound on the choice of the time step. Computationally, the theoretical time step is often unreasonably costly to be respected. Numerical parameters are sometimes adapted to allow a faster resolution. The change of the behavior of the system must be considered.
Velocity Verlet Algorithm
A classical explicit integration method is the velocity Verlet scheme. More complex algorithms can be designed to minimize the numerical errors, but will not be considered here.
At each time step, the state is updated particle-wise, from Newton's second law, using an explicit scheme. The current position, velocity, mass and external forces acting on a particle are used to define the next position and velocity, independently from other particles. Arbitrary integration schemes can be built by discretizing Taylor expansion of Newton's second law.
An elementary time integrator is the velocity Verlet scheme. At the beginning of a given step i, the position is updated from the previous velocity and acceleration. The current acceleration is then computed from the sum of the external forces. The current velocity is computed from the previous position and the average of previous and current acceleration.
$$x_i = x_{i-1} + v_{i-1}\,\Delta t + \tfrac{1}{2}\, a_{i-1}\,\Delta t^2 \quad (8.2a)$$
$$a_i = \frac{f_i(x_i, v_i, \ldots)}{m} \quad (8.2b)$$
$$v_i = v_{i-1} + \tfrac{1}{2}\,(a_{i-1} + a_i)\,\Delta t \quad (8.2c)$$
A similar scheme can be applied for the rotational behavior, if needed, using the angular positions and velocities and the moment of inertia of the particles.
In the classical form presented in Equation 8.2, the velocity Verlet algorithm would require the storage of two accelerations: current and previous. A leaner variation may be implemented, avoiding the explicit handling of the accelerations:
$$v = v + \tfrac{1}{2}\,\frac{\Delta t}{m}\, f \quad (8.3a)$$
$$x = x + \Delta t\, v \quad (8.3b)$$
$$f = \text{Interaction law}(x, v, \ldots) \quad (8.3c)$$
$$v = v + \tfrac{1}{2}\,\frac{\Delta t}{m}\, f \quad (8.3d)$$
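The lean scheme of Equation 8.3 translates almost line for line into code. The sketch below is an illustrative Python transcription for identical particle masses (m scalar); compute_forces stands for the interaction stage of Section 8.2 and is an assumed callback.

```python
# Direct transcription of the lean velocity Verlet update (Equation 8.3).
# x, v, f: numpy arrays of positions, velocities and forces; m: scalar particle mass.
def verlet_step(x, v, f, m, dt, compute_forces):
    v = v + 0.5 * dt / m * f        # half kick with the previous forces (8.3a)
    x = x + dt * v                  # drift (8.3b)
    f = compute_forces(x, v)        # new forces at the updated positions (8.3c)
    v = v + 0.5 * dt / m * f        # second half kick (8.3d)
    return x, v, f
```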
The velocity Verlet's local truncation error - caused at each step by the scheme only, typically excluding numerical round-off errors - is $O(\Delta t^2)$ for the velocity and $O(\Delta t^4)$ for the position. As the DEM relies on a large number of steps, the global truncation error - accumulated over time, ideally the effective difference between the retrieved result and the exact solution - is a more relevant metric to compare schemes. Both velocity and position have a $O(\Delta t^2)$ global truncation error.
More sophisticated algorithms could be chosen to enhance the time integration accuracy, but their computational efficiency is debatable [191, p.878]. In addition, the intrinsic ill-posed and chaotic nature of the motion of an assembly of objects must be remembered. A lower truncation error of the integration scheme will not help to overcome this limitation and may not significantly affect the metrics of interest, measured at larger scale than the individual position of the particles.
Time Step Choice
The conditional stability of the explicit schemes imposes a conceptual upper bound on the choice of the time step. A practical estimation of this critical time step is based on the evaluation of the natural period of an imaginary spring-mass system.
A canonical numerical choice, for explicit integration schemes, is the size of the time step [START_REF] Courant | On the partial difference equations of mathematical physics[END_REF]. As an image, the data must be able to travel through the domain faster than the objects. A concrete example in the DEM context: given a sufficient velocity, a particle can jump through another one within a time step. In that case, the detection of the interactions - and the computation of their effects - would be slower than the motion of the particles. This configuration - the instability of the explicit integration - must be avoided as it leads to unpredictable behavior, the numerical scheme failing to capture the key modeling features.
A standard DEM criterion, to choose an appropriate time scale, is the spring-mass analogy. This idealized case refers to a system of particles, with a minimal mass m, interacting elastically with a maximal stiffness k. The natural oscillation period t_0 of an imaginary spring-mass system (Figure 8.4), of stiffness k and mass m, can be computed as follows:

$$t_0 = 2\pi\sqrt{\frac{m}{k}} \quad (8.4)$$

Although interactions between particles can be simultaneous and unilateral, changing quite radically the vibratory behavior, setting the time step ∆t to a fraction, typically a hundredth, of the natural period t_0 is often a first reasonable choice. As a short justification that collective effects will not dramatically increase the typical oscillation time of the particles, one can consider a periodic one-dimensional column of N masses m, connected by springs of stiffness k. The N natural periods t_0^i of such a system are given by the relation [13, p.432]:
$$t_0^i = \pi\sqrt{\frac{m}{k}}\;\frac{1}{\left|\sin\frac{\pi n}{N}\right|}\,; \quad \text{with } n \in \mathbb{Z}^* \quad (8.5)$$
The case n = 0 is the rigid body motion of the whole system, allowed by the periodic boundary conditions. The system is not oscillating, the natural period is infinite. The shortest natural period of the system is thus half of the natural period of the one degree of freedom system:
$$t_0^{min} = \pi\sqrt{\frac{m}{k}} = \frac{t_0}{2} \quad (8.6)$$
Not only do the natural periods not dramatically drop for large systems, but the propagation of external solicitations is limited by a high-frequency cutoff. Indeed, waves with a lower period than t_0^min are evanescent [155, p.5]: no power will be transmitted to the system and no significant motion will be observed away from the application point. This can be seen as a numerical advantage for the stability of the method, but is also a limitation on the observable time scales.
The final choice of the time step needs to be further tuned to match specific requirements, empirical procedures often being necessary for complex interactions.
Parameter Adaptation
Computationally, the critical time step is often unreasonably costly to simulate the state of the system on large time scales. Numerical parameters can be adapted to allow a faster resolution, but this necessarily modifies the behavior of the system.
The orders of magnitude of the stiffnesses and masses of physical objects set another strict limitation for discrete numerical methods such as the DEM. To illustrate this, we will consider spherical elastic homogeneous physical objects, interacting with Hertzian forces [START_REF] Hertz | Ueber die Berührung fester elastischer Körper[END_REF]:
$$f_{Hertz} = \frac{4}{3}\, i^{3/2}\, r^{1/2}\, E^* \quad (8.7)$$
With i the indentation between two objects, r the radius and $E^* = \frac{E}{1-\nu^2}$, computed from the Young's modulus E and the Poisson's ratio ν of the material.
An average stiffness k for a given indentation can be computed as:
$$k = \frac{f_{Hertz}}{i} = \frac{\frac{4}{3}\, i^{3/2}\, r^{1/2}\, E^*}{i} = \frac{4}{3}\, i^{1/2}\, r^{1/2}\, E^* \quad (8.8)$$
Introducing the relative indentation i r = i/r, the stiffness k becomes:
$$k = \frac{4}{3}\, i_r^{1/2}\, r\, E^* \quad (8.9)$$
The mass m of a spherical object of density ρ can be computed:
$$m = \frac{4}{3}\,\pi r^3\, \rho \quad (8.10)$$
The natural period of the spring-mass system can be estimated using Equation 8.4:
$$t_0 = 2\pi\left(\frac{m}{k}\right)^{1/2} = \frac{2\pi^{3/2}}{i_r^{1/4}}\left(\frac{\rho}{E^*}\right)^{1/2} r \quad (8.11)$$
For a given relative indentation i_r, the natural period is thus proportional to the radius r of the object and to the square root of the ρ/E* ratio of the material. Starting from a map of the existing materials in the (ρ, E) space (see Figure 2.1 on page 20), orders of magnitude of the typical natural period can be computed (Figure 8.5).
The time step required to model realistic masses and stiffnesses is often unreasonably small to compute phenomena on the relevant time scales, as the system is governed by stiff differential equations.
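A quick numerical application of Equation 8.11 illustrates the issue; the material values below are indicative of steel and are assumptions for the sake of the example.

```python
# Order-of-magnitude check of Equation 8.11 for a steel-like sphere.
import math

rho, E, nu = 7800.0, 210e9, 0.3            # kg/m^3, Pa, - (typical steel)
E_star = E / (1.0 - nu**2)
r = 100e-6                                 # 100 micron radius
i_r = 1e-2                                 # 1 % relative indentation

t0 = 2.0 * math.pi**1.5 / i_r**0.25 * math.sqrt(rho / E_star) * r
print(f"t0 ~ {t0:.1e} s")                  # ~ 6e-7 s; a usable dt ~ t0/100 ~ 6e-9 s
```

Simulating one second of physical time for such a packing would then require over a hundred million steps, which motivates the parameter adaptations described below.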
From molecular dynamics to granular flows, physical parameters are sometimes modified numerically to maintain the computing time within realistic bounds.
In the objective of a faster execution of each time step, the spatial characteristics of the system can be altered, typically the number of objects or the distance of interaction. To reduce the number of time steps, two main options can be considered, with similar macroscopic results:
• Acting on the solicitations of the system:
-Decrease the total simulation time, for example by increasing the velocity at which the domain is deformed. This is a common strategy in MD.
• Acting on the response of the system, allowing the use of larger time steps by:

-Decreasing the forces, typically via artificially low stiffnesses. Commonly used in granular flows [95, p.535], where the mass flow is a metric of interest.

-Increasing the masses. This approach is specifically relevant to study quasistatic phenomena, as the forces tend to be realistic when the accelerations become reasonably small.

Figure 8.5: Typical natural period t_0 as a function of the radius, for various ρ/E* ratios (Equation 8.11): 10⁻² m·s for "elastomer", 3·10⁻⁸ m·s for steel, 10⁻⁹ m·s for "ceramic". The values for "elastomer" and "ceramic" are not representative of real materials. They are made up using extreme ρ and E values for the class of material [12, p.5], to illustrate theoretical critical cases reachable with dense materials. Materials such as aluminum and silica display a very similar tendency and cannot be distinguished from steel at the chosen scale of the graph.
The various strategies are often combined. It must be emphasized that these numerical work-arounds can influence, not only quantitatively but also qualitatively, the response of the system. When the forces or masses are modified, the similarity between the considered system and the one that is effectively solved is a priori lost. In addition, the modification of the time scales (natural period or time step) can altogether prohibit bifurcation phenomena occurring at smaller time scales than the chosen time step [90, p.7]. In this sense, it is misleading to name "normalization" the parameter modification procedures often used in DEM: they modify the nature of the solved problem.
Boundary Conditions
As for the interaction laws, a lot of flexibility is granted to the design of boundary conditions. New objects, in addition to the elementary particles, can for example be introduced.
The most trivial constraint on the system is to let particles fly through the domain boundaries. Particles can either be altogether lost, and be removed from the system, or the boundary positions can be adapted to follow the motion of all particles.
Coming from the MD, where systems are often considered infinite, periodic boundary conditions are a common feature in DEM codes. Particles are made to interact through the opposed boundaries of the domain. This implies simple modifications of the neighbor detection and the interaction computation. Distances have to be taken into account modulo the dimension of the domain. The state of the system can be controlled by modifying the boundary positions, leading the particles to interact and rearrange.
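The "distances modulo the dimension of the domain" rule is the classical minimum image convention; a possible one-line Python version (function and variable names are ours) is sketched below.

```python
# Minimum image convention for a periodic cubic domain of edge `box`:
# each component of the separation vector is wrapped to the nearest image.
import numpy as np

def periodic_branch(x_i, x_j, box):
    d = x_j - x_i
    return d - box * np.round(d / box)

# Example: two particles close to opposite faces of a box of edge 10
print(periodic_branch(np.array([0.5, 5.0, 5.0]), np.array([9.5, 5.0, 5.0]), 10.0))
# -> [-1.  0.  0.]  (they are 1 unit apart through the boundary)
```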
A classical work-around to impose more diverse constraints is to describe boundaries with particles. The "boundary" particles are typically excluded from the standard integration scheme, so that their position can be controlled. While this requires little development effort, the description of even trivial geometries involves a large number of particles, which is potentially costly. The intrinsic "roughness" of the modeled boundary can be a desired feature but is often a handicap.
Arbitrary geometries can also be modeled by the introduction of undeformable objects, either geometrical primitives -as infinite planes -or meshed surfaces. An additional implementation effort is required to include these objects in the neighbor detection and interaction computation algorithms. In addition to static interacting objects, various levels of complexity can be implemented:
• Geometries can follow prescribed motion;
• They can be used as "sensors", to measure the forces acting on them;
• Their motion can be controlled via the particle actions.
Going further, though this strategy was not used in our work, the DEM can be coupled with solvers designed for other physical models. In some cases and up to some extent, such couplings can be seen as complex boundary conditions. Classical strategies rely on the communication of data between distinct codes, each with its own time stepping, adapted to the physical models. The most widespread and fast developing area is the coupling with fluid dynamics solvers based on the finite volume method (FVM) [START_REF] Goniva | Influence of rolling friction on single spout fluidized bed simulation[END_REF]. The coupling with solid dynamics models was also successfully implemented, whether for undeformable bodies with a solid dynamics solver [START_REF] Hess | Simulation of the dynamic interaction between bulk material and heavy equipment: Calibration and validation[END_REF] or deformable bodies with FEM solvers [START_REF] Michael | A discrete approach to describe the kinematics between snow and a tire tread[END_REF][START_REF] Dratt | Coupling of FEM and DEM simulations to consider dynamic deformations under particle load[END_REF]. Endless combinations can be imagined, for example solid/fluid/particle models [34].
Parallel Computing
Parallel computing is only a technical work-around to distribute the computing load on several processing units, operating simultaneously, in the objective of minimizing the total computation time. Although the modeled phenomena should theoretically remain unchanged, the technical limitations are decisive in the algorithmic and modeling strategies.
The most straightforward way to implement a DEM code - and the most understandable for both developers and end users - is to follow as closely as possible the governing physical model, sequentially programming all operations, looping over time steps, particles and neighbors. Algorithmic optimization and performance issues are thus considered as secondary. As mentioned in Section 3.1.3.2, the progress of computing hardware shifted from frequency scaling - over time a code would tend to run faster without modifications - to parallel scaling. To take advantage of improvements, portions of the algorithms have to be modified to run simultaneously on various processors.
The nature of the algorithm sets an asymptotic limit, Amdahl's Law [101, p.41], to potential gains using parallel programming. The limiting, purely sequential part of the algorithm is known as the critical path and cannot be parallelized. In the DEM, the explicit time integration of the state of a particle is such a critical path: the knowledge of the current state at each step is necessary to compute the following one.
Counter-intuitively, somewhat exotic parallel-in-time algorithms can be designed for such initial value problems [START_REF] Lions | Résolution d'EDP par un schéma en temps "pararéel[END_REF] and have been applied to particle simulations [START_REF] Speck | A massively space-time parallel N-body solver[END_REF]. These methods are iterative and based on the resolution on multiple temporal grids. Although all steps need to be computed, and are in fact all computed several times, such algorithms provide a work-around to distribute the computing load. This strategy remains a niche approach and will not be further considered here; we will only look into the spatial division of a system into mostly independent subdomains. If the boundaries between these subdomains are appropriately dealt with, the simultaneous computation of the state of the subdomains allows to deal with a larger number of particles within a given time.
These purely technical limitations have a major impact on the modeling choices. On one side, the algorithm critical path - strictly constrained by the time integration - cannot be made to run notably faster. On the other side, the size of the system can theoretically be arbitrarily large, only being limited by machine capacity. Within the limits of what the physical model accepts, efficient codes can be written by designing rough interaction laws and time integration, and expanding the number of objects in the system. Anyhow, the mechanicians must adapt their modeling strategy, only being able to choose at which level they accept to deal with computing issues, software and hardware.
A first approach to parallel computing is open multi-processing (OpenMP). This interface allows to distribute the computing load between various elementary processors, using a memory shared by all of them. Starting from a purely sequential implementation, OpenMP can allow a progressive parallelization of the code, as an add-on, without requiring in-depth rewriting work. The main sequential algorithm is run as a unique master process. For critically time consuming operations, for example the loops on the particles in a DEM code, the master process calls subprocesses to split the computing load. Once all subtasks are over, the master process gathers all data and carries on. The scalability of OpenMP codes is strictly limited by the hardware memory architecture capacities, making massive parallelization impossible on classical machines. However, more sophisticated hybrid parallelization techniques [27] and hardware can take profit of the OpenMP interface.
To take full advantage of parallel and massively parallel computing, implementations must be designed from the root to fit the chosen parallel paradigm requirements. The task used to be quite delicate when parallel programming started to spread, implementations being hardware dependent, and hardware changes being frequent, diverse and often experimental. Heavy programming effort used to be required, with little guarantee on the lifespan of the code. With time, de facto standards progressively emerged. For applications similar to the DEM, potentially massively parallel computing is mostly shared between the message passing interface (MPI) and general-purpose computing on graphics processing units (GPGPU).
The MPI relies on a distributed main memory and runs on multiple central processing units (CPUs), the most common generic elementary computing devices. Each CPU works on and accesses its own storage space, and exchanges data with others when needed. The same code is executed on all CPUs, each CPU in the case of the DEM affecting itself to a specific geometrical subdomain. The CPUs exchange data during the execution of the code, for example to properly handle particles lying close to the subdomain boundaries. It is often necessary to entirely rewrite a specific MPI implementation of an algorithm. A key advantage of this parallelization strategy is that distinct operations can be performed simultaneously on distinct sets of data; a very practical example applied to the DEM is the handling of complex interaction laws with numerous conditional statements. The widespread MPI standard is adaptable to diverse hardware architectures, and is used on classical machines and clusters.
The GPGPU attempts to take advantage of elementary graphics processing units (GPUs), originally designed for image processing purposes. As a rough description, with respect to CPUs, the GPUs can only handle less complex operations and work at lower frequency, but can be massively assembled. The memory is shared between the GPUs, allowing fast access to data. The GPGPU programming consists in allowing bidirectional communication between CPU and GPU, in contrast with the classical unidirectional flow, from the CPU, to the GPU and finally to the display. This parallel approach is somewhat akin to the OpenMP interface, with a master process running on the CPU calling subroutines, the kernels, on the GPUs. Being distinct physical devices managing their own memory, the GPUs must receive the data along with the instructions from the CPU. In comparison to a pure CPU parallelization strategy, GPGPU cannot handle distinct tasks simultaneously: the benefits are restricted to identical operations on large sets of data. For scientific computation such as the DEM, specific hardware is required and devices are far less common than classical machines.
The domain of parallel computing still undergoes heavy developments, both from hardware and software standpoints, and may well take new radical shifts. Cutting edge techniques require heavy development effort and the resulting code may only have a short lifespan.
Control Metrics
The choice of relevant metrics is often a sensitive issue in modeling. Spatial and temporal averages are almost always necessary in the cases of the DEM. Adimensional metrics can be defined and may help to identify the system key characteristics.
The ill-posed nature of the DEM (Section 3.1.1) implies that specific care must be taken to define relevant metrics. In general, all local metrics defined at the level of the particles can only be interpreted after suitable time and space averaging.
For computational mechanics, fundamental metrics of interest are the stress field σ and the strain field ε. Macroscopic uniaxial stresses can be evaluated by the measure of the total force acting on a boundary (Section 8.4) and the cross-section of the packing. Likewise, the macroscopic strain can be estimated by comparing, in a given direction a, the evolution of the typical length A of the packing with its reference value: $\varepsilon^{aa} = \log(A/A_{ref})$. At the local scale, the components $\sigma^{ab}$ of a stress tensor can also be estimated for each particle from its interaction forces:
$$\sigma^{ab} = \frac{1}{V}\sum_{i=0...n} f_i^a \cdot l_i^b \quad (8.12)$$
With V the typical volume occupied by the particle, n the number of contacts, $f_i^a$ the components of the force vector due to the interaction with particle i, $l_i^a$ the components of the branch vector and a and b the coordinate system indices.
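In post-processing, Equation 8.12 reduces to a sum of outer products over the contacts of a particle. A minimal Python sketch, assuming per-particle arrays of contact forces and branch vectors (names are ours), could read as follows; as stated above, the result is only meaningful after suitable time and space averaging.

```python
# Post-processing sketch of Equation 8.12 for one particle (illustrative names).
# `contact_forces` and `branch_vectors` are (n, 3) numpy arrays over the n current
# contacts of the particle; `volume` is the typical volume it occupies.
import numpy as np

def particle_stress(contact_forces, branch_vectors, volume):
    # sigma^ab = (1/V) * sum_i f_i^a * l_i^b : sum of outer products over the contacts
    return np.einsum('ia,ib->ab', contact_forces, branch_vectors) / volume
```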
The algorithm choice for the strain field estimation heavily depends on the expected strain type, many metrics being targeted to describe the perturbation of a regular lattice. For inelastic strain, with respect to a reference configuration, the components $\varepsilon^{ab}$ of the local strain field ε can be approximated in a neighborhood as follows [68, p.7197]:

$$\varepsilon^{ab} = \sum_k X^{ak} \left(Y^{-1}\right)^{bk} - \delta^{ab} \quad (8.13)$$

with

$$X^{ab} = \sum_{i=0...n} l_i^a \times l_i^{b\,(ref)} \qquad \text{and} \qquad Y^{ab} = \sum_{i=0...n} l_i^{a\,(ref)} \times l_i^{b\,(ref)}$$
More generally, to try to characterize the overall behavior of a discrete system, numerous adimensional metrics have been defined and used. Among them:
• A relative density of particles D:
$$D = \frac{N\,\pi d^3}{6\, V_{tot}} \quad (8.14)$$
With V tot the total volume of the system and N the number of particles. This metric is typical of the DEM, where the dimension of the numerical particle is supposed to be linked to a physical dimension of the elementary objects.
• A stiffness level K [194, p.215]:
$$K = \frac{k}{P \cdot d} \quad (8.15)$$
With k the elastic stiffness of the contacts, d the diameter of the particles and P the confinement stress.
The confinement stress can be approximated with a typical force acting on a particle and its diameter. Considering linear elastic contacts we can write:
$$P \propto \frac{k \cdot i}{d^2} \quad (8.16)$$
With i the indentation between two particles. The stiffness level is thus interpreted as a simple geometrical parameter, accounting for the relative penetration of the particles:
$$K \propto \frac{d}{i} \quad (8.17)$$
This dimensionless quantity is thus closely related to the density D.
• An inertial number I [194, p.207]:
$$I = \dot{\varepsilon}\,\sqrt{\frac{m}{P\,d}} \quad (8.18)$$
With $\dot{\varepsilon}$ the prescribed strain rate and m the mass of the particles.
Using the same approximation as previously for the confinement stress P , and remembering the definition of the natural period (Equation 8.4), the order of magnitude of the inertial number can be written:
$$I \propto \dot{\varepsilon}\; t_0\,\sqrt{\frac{d}{i}} = \dot{\varepsilon}\; t_0\,\sqrt{K} \quad (8.19)$$
The salient criterion is the comparison of the characteristic times of the external load and the packing, respectively the inverse of the strain rate and the natural period.
• A global equilibrium criterion Q [44, p.155], intended for quasistatic configurations:
$$Q = \frac{E_k}{P \cdot d^3} \quad (8.20)$$
With $E_k = \frac{1}{2} m \cdot v^2$ the average kinetic energy. In this adimensional parameter, the $P \cdot d^3$ factor represents the product of a typical force and a typical length, respectively $P \cdot d^2$ and d. To allow a more direct mechanical interpretation, the criterion can be re-written as the ratio of the kinetic and elastic potential energies:
$$Q \propto \frac{E_k}{E_p} \quad (8.21)$$
With $E_p = \frac{1}{2} k \cdot i^2$.
The criterion could be re-written to highlight the role of the natural period t_0 (Equation 8.4):
$$Q = \frac{\frac{1}{2}\, m \cdot v^2}{\frac{1}{2}\, k \cdot i^2} = \left(\frac{t_0}{2\pi} \cdot \frac{v}{i}\right)^2 \quad (8.22)$$
This layout has less direct physical sense but gives a hint about the role of $t_0^2$ in the comparison of distinct configurations.
More generally, the use of such a criterion implies that the absolute value of the velocity can be interpreted, which may be doubtful in a system where only the relative velocities between interacting particles are of interest. Correction procedures to take into account the average velocity have been designed.
To sum up, numerous generic or specialized metrics have been designed to study granular flows, at a macroscopic or local scale, this short proposed list being by no means exhaustive. The characterization of granular flow is still to some extent an open question [152, p.15]. More than a specific metric, the general methodology of identifying driving time and length scales proves efficient, including for systems alien from traditional models.
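For reference, the metrics of Equations 8.14, 8.15, 8.18 and 8.20 are direct to compute from scalar packing characteristics; the helper below is a plain transcription (function and argument names are ours, consistent units assumed).

```python
# Direct transcription of the adimensional metrics (Equations 8.14, 8.15, 8.18, 8.20).
import math

def packing_metrics(N, d, V_tot, k, P, m, strain_rate, E_kinetic):
    D = N * math.pi * d**3 / (6.0 * V_tot)      # relative density (8.14)
    K = k / (P * d)                             # stiffness level (8.15)
    I = strain_rate * math.sqrt(m / (P * d))    # inertial number (8.18)
    Q = E_kinetic / (P * d**3)                  # global equilibrium criterion (8.20)
    return D, K, I, Q
```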
Chapter 9
Principle of the Developed Method
Chapter 7 introduced the grounding choice of using the DEM framework to meet our modelization objective. Chapter 8 introduced the classical algorithms of the method. The present chapter is dedicated to the synthetic description of the principle of the developed method. The introduction of the chapter sums up the effective algorithmic "ingredients" used for the different phenomena, with an indication of the specific contribution of this PhD. Each section is then dedicated to a specific physical phenomenon. The discussion, justification and test of the algorithmic choices are to be found in Parts IV and V.
Although it is somewhat artificial to strictly distinguish between conceptual model, algorithm and implementation, this section focuses on the principles of the method. Implementation issues with the chosen tools are briefly dealt with in Appendix B.1.
A brief overview of the algorithmic "ingredients" used in the developed models can be found in Table 9.1. These introduced features are contextualized in a generic DEM framework in Figure 9.1. Our method relies on custom interaction laws, particle and pair state variables and non-local behaviors. Algorithmically, the designed interaction laws (particle/particle in Section 9.1 and particle/mesh in Section 9.2) introduce few novel features; only the conditional attractive force may be considered a new ingredient. However, the adaptation and tuning of our laws to phenomenologically model large inelastic strains is, to our knowledge, not found in the literature.
The detection of contact between distinct objects (Section 9.4) is taking advantage of the intrinsic properties of the DEM and has already been investigated. This aspect has not been a major focus of the PhD and was added for the sake of completeness.
The material discretization (Section 9.3) is now rather standard in the context of our laboratory [START_REF] Roussel | Strength of hierarchically porous ceramics: Discrete simulations on X-ray nanotomography images[END_REF]. Personal contributions concern the handling of periodic packings and the definition of arbitrary state variables.
To our knowledge, the self-contact detection algorithm (Section 9.5) is a novel contribution to the field.
Finite Inelastic Transformation
The basic assumption of our modeling approach is the analogy between the motion of a collection of spheres, with attractive/repulsive behavior, and incompressible inelastic strain in a continuum. The ad hoc interaction laws are chosen and tuned for the collective rearrangements of the particles to mimic key features of inelastic incompressible transformation of a continuous medium. The choice and behavior of the two interaction laws introduced here are examined respectively in Chapters 11 and 14. A more in depth investigation of the calibration of the numerical parameters can be found in Section 11.2.
Conceptually, a continuous object is discretized using a dense collection of particles, interacting with pairwise attractive/repulsive reciprocal forces. Repulsive forces prevent excessive indentation and attractive forces provide some cohesiveness. This behavior aims at controlling the overall volume variations.
When an external load is applied to this collection, the particles move with respect to one another, the packing rearranges. The particles are freely allowed to change neighbors, thus modeling arbitrary deformation of the object. An early modeling choice was to consider particles subdivided into two concentric and spherical interaction zones with distinct behaviors (Figure 9.3):
• A repulsive seed, mimicking incompressibility, of radius r seed ;
• An attractive crown, adding cohesiveness, of radius r crown > r seed .
Reciprocal interaction forces f_A→B = -f_B→A are computed for each pair of overlapping particles (A, B), based on the distance h between their centers and their relative normal velocity ḣ.
This geometric strategy is closely akin to MD methodology, where punctual bodies interact through potentials, with an equilibrium state. The algorithm allows an interaction management without the use of history parameters and lists of neighbors, that would need to be stored between time steps. In this PhD, interaction laws are governed by normal forces f, piecewise linear with the pair distance h and linear with the introduced parameters (Figure 9.4). The signed norm f of the force will be used, following a classical DEM convention: repulsive forces are positive and attractive forces are negative. Two distinct laws, but similar in many aspects, are used:
Figure 9.3: Pair of particles A and B with a repulsive seed of radius r_seed and an attractive crown of radius r_crown, and the reciprocal forces f_A→B and f_B→A: (a) seed overlap, h ≤ 2r_seed; (b) crown overlap only, h > 2r_seed.
• "BILIN "
This interaction law is suitable only for compressive loads (Figure 9.4a, Equation 9.1, Part IV). The repulsive forces are strongly dominant and a calibration procedure of the strain rate sensitivity is proposed.
• "TRILIN " This interaction law can mimic tension and compression (Figure 9.4b, Equation 9.2, Part V). Repulsive and attractive forces are more balanced, no control over the strain rate sensitivity if proposed.
The interaction law BILIN relies on two stiffness parameters k rep and k att , accounting respectively for repulsion and attraction:
$$f(h, \dot h)_{BILIN} = \begin{cases} k_{rep}\,(2r_{seed} - h) & \text{if } h \le 2r_{seed} \quad (9.1a)\\ k_{att}\,(2r_{seed} - h) & \text{if } 2r_{seed} < h \le 2r_{crown} \text{ and } \dot h > 0 \quad (9.1b)\\ 0 & \text{if } 2r_{seed} < h \le 2r_{crown} \text{ and } \dot h \le 0 \quad (9.1c) \end{cases}$$
The attractive force is dependent on the relative normal velocity ḣ: the attractive force is only activated if a pair has a tensile motion, and is canceled in case of compressive motion. This behavior helps to smooth the creation of new contacts between particles and introduces a dissipative effect on the total energy, numerically sufficient within the strain rate validity range of the model, linked to the frequency of oscillation of the pairs. At the pair level, no damping, shear or torque interaction laws are implemented. In the tested configurations, such interactions only introduce second-order effects on the macroscopic behavior of packings.
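The production implementation of this law is a custom pair style in liggghts (C++); the Python function below is only an illustrative transcription of Equation 9.1, returning the signed norm of the force (repulsion positive, attraction negative).

```python
# Illustrative transcription of the BILIN pair law (Equation 9.1), not production code.
def f_bilin(h, h_dot, r_seed, r_crown, k_rep, k_att):
    if h <= 2.0 * r_seed:                      # seed overlap: repulsion (9.1a)
        return k_rep * (2.0 * r_seed - h)
    if h <= 2.0 * r_crown:                     # crown overlap
        if h_dot > 0.0:                        # pair opening: attraction active (9.1b)
            return k_att * (2.0 * r_seed - h)  # negative value (attractive)
        return 0.0                             # pair closing: attraction canceled (9.1c)
    return 0.0                                 # no overlap
```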
The behavior of the interaction law TRILIN is very similar to BILIN, with the introduction of an additional parameter f_att, a bound on the magnitude of the attractive force:
$$f(h, \dot h)_{TRILIN} = \begin{cases} k_{rep}\,(2r_{seed} - h) & \text{if } h \le 2r_{seed} \quad (9.2a)\\ \max\!\big(k_{att}\,(2r_{seed} - h),\, f_{att}\big) & \text{if } 2r_{seed} < h \le 2r_{crown} \text{ and } \dot h > 0 \quad (9.2b)\\ 0 & \text{if } 2r_{seed} < h \le 2r_{crown} \text{ and } \dot h \le 0 \quad (9.2c) \end{cases}$$
A second additional parameter, X_wall, will be described in Section 9.2 as it concerns only particle/mesh interactions. Most numerical parameters are set to a fixed value (Table 9.2) for all simulations and considered as constants of the models BILIN and TRILIN:
• All simulations are run using identical radii, for all particles. The dimensions are totally arbitrary and do not represent a physical metric, they are chosen for numerical convenience.
• The relative magnitude between attractive and repulsive forces is fixed.
It will be discussed (Section 11.2) that the linearity of the interaction laws with the introduced parameters allows an independent tuning of the kinematics of the packing and of the forces acting upon it.

Model  | Driving variables | r_seed (mm) | r_crown (mm) | t_0 (s)   | ∆t (s)  | m (g)             | k_att (µN·mm⁻¹) | f_att (µN)                       | X_wall (/)
BILIN  | t_0, k_rep        | 0.5         | 1.5·r_seed   | (driving) | 5·10⁻⁴  | k_rep·(t_0/2π)²   | k_rep/10        | /                                | /
TRILIN | k_rep             | 0.5         | 1.4·r_seed   | 1         | 10⁻¹    | k_rep·(t_0/2π)²   | k_rep/1.8       | (3/4)·k_att·(r_crown − r_seed)   | 3

Table 9.2: Constants defining the models. The force levels are driven by k_rep for both models. For BILIN, the kinematics are driven by t_0 ∈ [10⁻², 1] s. For TRILIN, two additional constants are introduced: f_att and X_wall. Arbitrary unit system: (mm, g, s), forces thus expressed in µN and stresses in Pa.
The effective driving parameter of the kinematics is the natural period $t_0 = 2\pi\sqrt{m/k_{rep}}$. The value of t_0 must be chosen with respect to the targeted, limited window of macroscopic strain rates. This behavior was only investigated for BILIN; t_0 is considered as a constant in TRILIN. At a given natural period, the time step ∆t is fixed (Section 11.3.2).
The driving parameter of the force level is k_rep, allowing arbitrary stress levels to be modeled. The mass m of the particles is thus computed from k_rep and t_0.
Boundary Conditions
The boundary conditions are rather classical in DEM simulations: free surfaces and mesh-constrained surfaces driven by prescribed displacement. These boundary conditions are used throughout Parts IV and V.
Two types of boundary conditions are applied to the packings:
• Free boundary, where particles are not constrained by any means;
• Kinematically constrained boundary, using rigid meshes interacting with the particles.
Interaction forces between the mesh elements and the particles are computed with a very similar contact law as for particle/particle interactions. In the interaction laws (Section 9.1), h is then defined as the shortest distance between an element and a particle (Figure 9.5a), and the thresholds r_seed and r_crown are used instead of 2r_seed and 2r_crown.
For uniaxial loads, the following boundary conditions are applied (Figure 9.5b): top and bottom planar meshes and free lateral sides. Meshes are used to apply a prescribed macroscopic true strain rate. The total forces acting on the meshes are measured to evaluate the macroscopic flow stress. To smooth the force signal, the total force is averaged over a temporal sliding window. The width of the sliding window is defined in strain, in the range 5·10⁻³ to 10⁻² for all applications.
Only planar meshes were used, although any arbitrary surfaces meshed with triangular elements are accepted.
A potential conceptual limitation is that each particle has only one interaction with a mesh element (it interacts only once with a planar mesh) and multiple interactions with other particles. Under compressive loads, the model behaves correctly without further modifications: the particle/mesh interaction tends to indent more to balance with the numerous particle/particle interactions. Under tensile loads, two modifications are applied to the particle/mesh law:
• The normal relative velocity dependency is removed: crown interactions are always attractive.
• A multiplicative factor X wall is applied to the attractive forces.
These numerical recipes are ad hoc means to apply a macroscopic load. Their influence is limited "far" from the application points of the load, i.e. a few diameters away.
Discretization of Continua
The generic spatial discretization of continua is based on large random packings of particles. The procedure is used throughout Parts IV and V. The modeling of complex mesostructures, starting from 3D images from X-ray tomography, is more specifically applied in Chapters 13 and 16.
The basic components used to discretize continuous media are cuboids of randomly packed particles. The packings are built using classical interaction laws, with elastic repulsion only, in periodic domains. The procedure is classical in the DEM [START_REF] Voivret | Particle assembling methods[END_REF]:
1. Random granular gas generation;
2. Triaxial compaction;
3. Relaxation.
The compaction is performed at prescribed strain rate, up to a fixed density D (Equation 8.14), approximately corresponding to the equilibrium density of the foreseen interaction law. The initial state of the random packing and the elaboration route, in the context of the large strains studied here, seemed to have little influence on the compression results, and are not detailed here.
Distinct geometric configurations are built starting from random packings:
1. The desired configurations are built using geometrical criteria (setting the material type of the particles, removing particles to create voids...);
2. The attractive-repulsive model and the boundary conditions are applied;
3. A short relaxation is run (typically 500 steps).
The actual tests are run after this last relaxation procedure. Complex material mesostructures are modeled using 3D segmented images (Figure 9.6). The image is reconstructed using a simple box filter [216, p.5] which is used as a mask on the random packing of particles. For each particle, the color at its center is used to set its type or remove it. This has a low algorithmic cost and, for very large sets of data, a smaller periodic packing can be replicated in all directions, minimizing the cost of generation of this initial packing. Arbitrary particle state variables (in addition to the particle type) can be set using this procedure, typically to define distinct objects made of the same material [36].
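The masking step itself is a simple voxel lookup. The sketch below assumes a segmented 3D numpy array `image`, particle positions expressed in the same frame, and a cubic voxel of edge `voxel_size`; these names and the `void_label` convention are ours.

```python
# Sketch of the image-masking step: the label of the voxel containing each particle
# center sets its type (or flags it for removal). Illustrative names and conventions.
import numpy as np

def types_from_image(positions, image, voxel_size, void_label=0):
    idx = np.floor(positions / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(image.shape) - 1)   # guard against border effects
    labels = image[idx[:, 0], idx[:, 1], idx[:, 2]]
    keep = labels != void_label                        # particles falling in void are removed
    return labels[keep], keep
```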
Contact Event Detection
The detection of the contact of distinct objects is implemented by testing the membership of the numerical particles to predefined clusters. The membership can be defined from a 3D tomography image. This algorithm is presented for the sake of completeness but does not introduce key novelties and thus will not be further examined.
The detection of contact events between distinct physical objects (Figure 3.9a) is straightforward: a particle state variable represents the membership to a specific physical object (Figure 9.7).
Figure 9.7: Principle of the detection of contact between distinct objects. Each particle is assigned to an object at the initial state, here 0, 1 or 2. In the deformed state, four pair interactions (circled) are considered as events between objects 1 and 2. The neighbor changes in object 2 simply account for inelastic strain in this solid; they are not considered as physical contacts.

This state variable is initialized at the beginning of the simulation, typically using the procedure described in Section 9.3. When two particles interact, their respective memberships are compared, and the attractive forces are canceled for particles from distinct objects. Arbitrary behavior can be readily implemented.
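The pair-wise test itself is a one-line membership comparison; a minimal sketch (names are ours) selecting between the full law and a repulsion-only contact law could read:

```python
# Minimal sketch of the membership test in the pair loop: pairs within the same
# object follow the full law, pairs across objects only keep the repulsive branch.
def pair_force(h, h_dot, obj_i, obj_j, full_law, repulsive_law):
    if obj_i == obj_j:
        return full_law(h, h_dot)          # inelastic strain inside one object
    return repulsive_law(h, h_dot)         # physical contact between distinct objects
```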
Self-Contact Event Detection
The detection of the self-contact of the interface of a single object is based on a local approximation of the free surfaces. The metrics are computed for each particle from the position of its neighbors. This algorithm is only used in Part V. More specifically, its tuning, discussion and extension are to be found in Chapter 15. For applications to complex mesostructures, refer to Chapter 16.
In the context of our method, neighbor changes between elementary particles are relied on to describe inelastic strain in continuous media. A dedicated discrimination algorithm must detect the new pairs that have to mimic the physical interaction of interfaces.
The detection of self-contact events (Figure 3.9b on page 48) cannot rely on the initial state of the system (Section 3.2):
• By definition, physical self-contacts involve particles that are members of the same object;
• In a finite transformation context, particles may migrate away or toward the free boundaries: the self-contact detection cannot rely on the particle initial position.
The conceptual work-around is to compute a local metric accounting for the existence, and the orientation, of a free boundary in a neighborhood: a metric somewhat analogous to the classical outward pointing normal.
The outward vector n is computed for each particle i, and is the opposite of the sum of the branch vectors l of its neighbors j (Figure 9.8):

$$n_i = -\sum_j l_j \quad (9.3)$$
The outward vector is not normalized to unity, as its magnitude roughly quantifies the existence of a free boundary.

Figure 9.8: Computation of the outward vector n of a particle, from the positions of its neighbors. (a) A particle surrounded by neighbors: the centroid of the neighbors is very close to the center of the particle, the magnitude of n is small and its orientation is meaningless. (b) A particle at a free surface: the magnitude of n is large and n points "outwards".
In our method, the interaction force in a pair depends not only on the state of the pairs, but also on the state of their neighbors. This methodology is referred to as a many-body law in a MD context.
A pair state variable is initialized at the beginning of the simulation: all initially interacting pairs are considered to represent an "internal" interaction (Figure 9.9a). When the packing deforms, new pairs are created: the outward vectors (magnitude and orientation) of the two particles are compared and the pair state variable is set to:
• "Internal" if the neighbor change is considered to be a normal effect of inelastic strain (Section 9.1), "internal" pairs follow the standard interaction law;
• "Interface" if the new pair interaction is considered to be the result of a self-contact event. The attractive forces are canceled for "interface" pairs.
Qualitatively a self-contact event is detected if the outward vectors have a large magnitude and point toward the new neighbors. In subsequent time steps, if the pair state variable shared with a neighbor is set as "interface", this neighbor is excluded from the centroid computation for the evaluation of the outward vector. Algorithmically, for a new pair of particles {i, j}, the following variables are considered (Figure 9.10): the respective outward vectors {n i , n j } and a unit vector e n pointing from the center of i to the center of j.
Three parameters are introduced: two angle thresholds $\alpha_{ij}$ and $\alpha_{en}$ and a magnitude threshold $N_{mag}$. The numerical values used can be found in Table 9.3. A new pair is classified as "interface" only if:

$$\cos(n_i, n_j) \le \cos\alpha_{ij} \quad (9.4a)$$

and

$$\Big[\cos(n_i, e_n) \ge \cos\alpha_{en} \text{ and } \|n_i\| \ge N_{mag}\Big] \;\text{ or }\; \Big[\cos(n_j, -e_n) \ge \cos\alpha_{en} \text{ and } \|n_j\| \ge N_{mag}\Big] \quad (9.4b)$$
In short, an interface pair is detected if the outward vectors are not too parallel and at least one outward vector is large and points toward the other particle.
N_mag (mm) | α_en (°) | α_ij (°)
           | 80       | 65

Table 9.3: Self-contact detection parameters used with TRILIN.
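Putting Equations 9.3 and 9.4 together, the classification of a new pair can be sketched as below in Python (illustrative only; the production version is a custom liggghts feature, and the helper names are ours). The thresholds N_mag, α_en and α_ij are those of Table 9.3; the branch vectors toward the current non-"interface" neighbors of each particle are assumed to be available.

```python
# Illustrative transcription of the self-contact test (Equations 9.3 and 9.4).
import numpy as np

def outward_vector(branches):
    return -np.sum(branches, axis=0)                 # Equation 9.3, not normalized

def cos_angle(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return np.dot(a, b) / (na * nb) if na > 0 and nb > 0 else 1.0   # degenerate vectors treated as parallel

def is_interface_pair(x_i, x_j, n_i, n_j, n_mag, alpha_en, alpha_ij):
    e_n = (x_j - x_i) / np.linalg.norm(x_j - x_i)    # unit vector from i to j
    not_parallel = cos_angle(n_i, n_j) <= np.cos(np.radians(alpha_ij))                               # (9.4a)
    i_faces_j = cos_angle(n_i, e_n) >= np.cos(np.radians(alpha_en)) and np.linalg.norm(n_i) >= n_mag
    j_faces_i = cos_angle(n_j, -e_n) >= np.cos(np.radians(alpha_en)) and np.linalg.norm(n_j) >= n_mag
    return not_parallel and (i_faces_j or j_faces_i)                                                 # (9.4b)
```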
Chapter 10
Chosen Numerical Tools
Chapter 9 exposed the conceptual principle of the developed method to meet our modelization objectives. This chapter presents the choice of effective numerical tools to implement it. The introduction gives a few guidelines followed in the choices of the tools and two short sections focus respectively on DEM and FEM tools.
The main objective in the choice of the numerical tools was to limit as much as possible the programming effort to the implementation of new features. It was also hoped to reach reasonable computing efficiency of the codes without requiring too specialized skills in computing science and too much programming effort. Both objectives imply to rely as much as possible on preexisting tools -software solutions and libraries -adaptable to the need with at most minor modifications. In a more global perspective, the limitation of the number of manipulated programming languages was also considered. In order to favor reproducibility, distributed and collaborative development and task automation, command-line interfaces were often preferred.
Tools were thus chosen on the criteria of being scriptable, extendable, reproducible, tested, documented and supported by a reactive community of developers and users. Key time-consuming operations, first of which DEM resolution, needed to be scalable, both from the computational and the license point of view. A technically efficient choice can be the orientation toward open-source and free/libre tools. The basic choice of GNU/Linux operating systems was also favored by platform coherence considerations, to smoothly move between different types of machines, including computing clusters.
The choice of the software, language, version control system and parallelization paradigm was iterative, based on trial and error and on the progressive construction of a coherent set with the adopted simulation software solutions. A specific effort was also devoted to pooling tools and portions of code, although it proved delicate. Octave, scilab and cuda were abandoned in favor of python, git, C++ and MPI. The python language not only imposed itself as the tool for the development of numerical methods and pre/post-processing procedures -- both for DEM and FEM simulations -- but it was also the scripting language of a vast majority of the used software solutions.
The main software solutions used are summed up in Table 10.1; a short description and argumentation of their choice is given in Section 10.1 for DEM related tools and in Section 10.2 for FEM related tools.
Table 10.1: Main software solutions used (liggghts and ovito for the DEM; code aster, salome and paraview for the FEM; scripting in python, version control with git).
DEM Tools
The chosen DEM solver is liggghts, selected for its interesting compromise between performance and ease of development. The specialized postprocessing and visualization tool ovito proved well suited to our needs.
Although the conceptual model used should theoretically not depend on the chosen implementation, the performance and the internal structure of a code largely drive its possible uses and developments.
Existing DEM and MD codes include: dp3d [START_REF] Martin | Study of particle rearrangement during powder compaction by the discrete element method[END_REF], edem, esys-particle [START_REF] Weatherley | ESyS-particle tutorial and user's guide version 2[END_REF], granoo [9], gromacs [1], lammps [START_REF] Plimpton | Fast parallel algorithms for short-range molecular dynamics[END_REF], liggghts [START_REF] Kloss | Models, algorithms and validation for opensource DEM and CFD-DEM[END_REF], pfc, rocky DEM, woo [START_REF] Šmilauer | Woo Documentation[END_REF], yade [START_REF] Šmilauer | Yade documentation[END_REF].
In this work, it was from the start assumed that massively parallel paradigms had to be used, in order to scale up to larger problems without further development effort. The choice of liggghts [START_REF] Kloss | Models, algorithms and validation for opensource DEM and CFD-DEM[END_REF], parallelized in MPI, as the DEM solver led to leave aside an attempt to develop and adapt an embryonic in-house GPGPU parallelized code. Although no in-depth systematic comparison of the existing codes was carried out, the liggghts code was chosen for its interesting compromise between performance, ease of development and modularity.
The code is a fork from lammps, a popular MD code, from which it inherits its modularity, its sequential and parallel performance [START_REF]LAMMPS benchmarks[END_REF], and an active community of users and developers. Developed by a private company stemming from -and still closely related to -the academic world, the public version of the code is open-source and freely available.
Liggghts is currently a reference in terms of available features for the DEM 6 . A vast user community, both industrial and academic, can be relied on to test and share issues. From the user point of view, the documentation is overall comprehensive and sufficient for autonomous use.
Documentation regarding the development of new features is somewhat scarce outside the most standard procedures. By design, the code is in large parts modular, allowing the development of features as simple add-ons. For example, a new interaction law can simply be introduced by adding an autonomous file in the sources. Some features, such as the management of pair history, happen to be written in old-fashioned C language, with syntaxes that can be hard to grasp for the neophyte and that restrain the flexibility of their implementation. Regular improvements, extensions and bug fixes are published and the developers are accessible and reactive regarding requests.
The use of liggghts imposed the implementation of the developed features in C++ in an MPI framework. It also promoted the use of git as version control system, in the first place to easily take profit of the periodical evolution of the code. The software was extensively used on numerous machines, using from 1 to 24 CPUs. In addition to this use and to the development of interaction laws, metric evaluation routines and generic procedures, our work also involved bug detection -- for both sequential and parallel issues -- their reporting with eventual resolution, and documentation [START_REF] Guesnet | A heavvvy tutorial for liggghts[END_REF], more specifically regarding development procedures. Liggghts was the most deeply used tool within our work.
Ovito [START_REF] Stukowski | Visualization and analysis of atomistic simulation data with OVITO -the open visualization tool[END_REF] rapidly imposed itself as the visualization tool for particle data. More tailored to discrete simulation needs than a generic visualization software such as paraview, it is developing at a rapid pace, regularly introducing useful features, is fully scriptable in python and is thoroughly documented, both for scripted and graphical use. The developer is extremely reactive and has a clear view of the needs of its community. Our work mainly focused on both graphical and command-line use of the software, with periodic bug reports and feature requests.
FEM Tools
The code aster FEM solver is chosen for its comprehensive documentation and its "out-of-the-box" handling of viscoplasticity and finite strain. Contact and remeshing are not considered. The side tools salome and paraview are designed for and adapted to the needs of code aster.
The FEM solver was chosen with less emphasis on performance and modularity. The code needed to readily handle finite transformations and viscoplastic behavior in quasi-incompressible cases. The objective was to provide a reliable reference for the designed DEM models more than to easily implement new concepts.
The choice of code aster [START_REF]Code aster open source -general FEA software[END_REF] as the FEM solver was strongly triggered by the outstanding quality of its documentation. Not only does it comprehensively cover and illustrate the use of the code itself, but it can also be considered as a standalone and didactic review of many mechanical and numerical issues arising in the FEM, including for advanced features.
Similarly, code aster provides a huge collection of test cases, illustrating numerous configurations and potential syntaxes and uses.
In addition, code aster can handle a large variety of mechanical behaviors 12 , is fully scriptable in python and almost systematically provides explicit and insightful error messages, directly pointing to recommended readings in the documentation. The vast and active community is helpful and the developers can provide fast add-ons or patches 13 .
Our work in code aster focused on its use for finite transformation simulations and on contributions to the community via bug reports. FEM pre-processing was based on salome, which readily produces formats compatible with code aster, is well documented and is used by an active community. The visualization of the FEM results was based on a fork of paraview, provided by salome. Although the documentation of paraview is chaotic, this is somewhat compensated by its huge user community. Work with salome and paraview was limited to mere usage, taking advantage of both tools being fully scriptable in python.
Part IV
Compression of Dense Bi-Material
In Part III, the conceptual and algorithmic principles of the developed method were described. To apply the DEM to the simulation of inelastic incompressible strain, a phenomenological model is designed. Numerous spherical particles discretize continuous media and their collective rearrangements mimic inelastic strains. Innate and powerful handling of discontinuities and topological events is expected.
Part IV effectively illustrates the methodology on uniaxial compression of single and multi-materials. It is structured in three chapters:
• Chapter 11 presents the simulation of single phase materials, including the calibration procedure of the numerical parameters and the quantification of the expectable accuracy.
• Chapter 12 is dedicated to test cases on simple geometries for bi-materials. DEM results are compared to reference data from FEM simulations.
• Chapter 13 applies the methodology to a real composite mesostructure. The morphology of a 3D full sample, starting from X-ray tomography image, is discretized and compressed. In situ data and simulations are compared for local configurations.
The main results of this part were submitted as an article to the International Journal of Mechanical Sciences. Minor revisions were requested for publication and the amended version is proposed in Appendix D.
Highlights -Part IV Compression of Dense Bi-Material
• A calibration procedure allows the tuning of the numerical parameters to mimic targeted macroscopic behaviors: plasticity and viscoplasticity.
The strain rate sensitivities are limited to a maximal value and are valid only for a given strain rate range. Plastic behaviors are valid for various decades of strain rate.
• Single material can be compressed up to large strain (ε = 1) with limited relative error on the volume and the flow stress (typically around 10 %).
• Simple bi-material configurations are compared to results obtained by the finite element method (FEM).
On macroscopic metrics (flow stress and morphology), the error of the developed model is of the same order of magnitude as the error on a single material. No dramatic errors seem to be introduced when simulating multi-material configurations.
• The model is applied to a 3D mesostructure of a full sample, obtained by X-ray tomography.
The discretization procedure from a binarized 3D image is algorithmically cheap. The proposed methods can be scaled up to large and complex geometries.
• The comparison procedure between the numerical results and in situ measurements is possible, but the amorphous phase crystallization in the experiments hinders further analysis.
Single Material
In this chapter, our developed method is applied to the simulation of single materials under compressive loads. It is organized in four sections:
• Section 11.1 gives some elements of discussion regarding the interaction law BILIN , defined in Section 9.1.
• Section 11.2 describes the calibration procedure, to choose the correct numerical parameters to mimic a target viscoplastic behavior.
• Section 11.3 provides some guidelines for the verification of the model.
• Section 11.4 concerns the application of the model to two qualitatively distinct behaviors, with high and low strain rate sensitivities.
A first approach to the study of the local stress fields is proposed in Appendix A.
Interaction Law Choice
The model BILIN is attractive-repulsive. The forces are linear elastic and the attractive force is only activated for tensile motions of the pair.
In Part IV, limited to compressive loads, the BILIN interaction law (Figure 11.1) is applied and tested.
The choice of the elementary interaction law is somewhat arbitrary. Indeed, the local interactions are not individually meant to have physical sense. In counterpart, their collective effects must fulfill the modelization objective. A priori, no direct qualitative link can be established between the individual or pairwise behavior and the collective result. Very practically, the tested configurations often proved to display counterintuitive trends, and the macroscopic behavior seems to be dominated by steric effects, with secondary regard for the details of their prescription. Interaction laws are thus chosen to be conceptually as simple as possible and of reasonable cost from a computational point of view.
A useful restriction to creativity in the design of interaction laws is to respect the linearity of the forces with the introduced parameters. This linearity allows a straightforward calibration of the modeled stress level (Section 11.2). In the discrete element method (DEM), for each particle, the time integration is based on the ratio of force and mass f/m (Section 8.3). The overall kinematic behavior will thus be kept statistically unchanged if both masses and forces are scaled by a common factor.
The relative size of the crown r_crown/r_seed = 1.5 is chosen to avoid the interaction of two particles "across" another one. The choice of using identical radii for all particles is not an algorithmic limitation; the proposed implementation can readily accept a limited dispersion. The effect of such a dispersion has not been investigated, but it may be a good strategy to limit excessive numerical crystallization. It was not necessary in this model: as the attractive forces are much weaker than the repulsive forces, the distance between two particles is sufficiently free.
Calibration Procedure
This section describes the calibration procedure of the numerical parameters of the interaction law to mimic a perfect viscoplastic behavior. It is divided into two steps, tuning respectively the strain rate sensitivity and the stress level. A section examines each step and a third short section draws some limitations.
The objective of our calibration procedure is to model a perfect viscoplastic behavior, described as a relation between the scalar macroscopic strain rate ε̇ and the flow stress σ by the unidimensional Norton law [126, p.106]:

    σ = K |ε̇|^M · sign(ε̇)    (11.1)

where M is the strain rate sensitivity and K is the stress level. All cases presented in this chapter being in a compressive state, strain, strain rate and stress are given in absolute value.
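As an elementary illustration, the targeted macroscopic behavior can be written as a one-line python function (the values of K and M used in the comment are purely illustrative, the calibrated ones being those of Table 11.1):

    import numpy as np

    def norton_flow_stress(strain_rate, K, M):
        # Perfect viscoplastic Norton law, Eq. 11.1.
        return K * np.abs(strain_rate) ** M * np.sign(strain_rate)

    # e.g. norton_flow_stress(-1e-3, K=120e6, M=0.2)  # Pa, compressive strain rate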
As the DEM does not rely on a continuous framework, the numerical parameters cannot be derived a priori from the targeted macroscopic behavior. We work here at a fixed ratio r_crown/r_seed = 1.5, to allow a large overlap zone without catching second neighbors. The seed radius is arbitrarily set to r_seed = 1 mm. The ratio between the repulsive and attractive stiffnesses is set to k_rep/k_att = 10, to guarantee a numerically predominant repulsion.
During this PhD, the study of the behavior of our system promoted the idea of an intimate relationship between the numerical time step, the natural period of the packing and the prescribed strain rate. The overall behavior of the system seems to be driven by these three parameters. However, no clear trend could be established regarding the role of the time step. We thus work at a fixed time step, set for the model BILIN to ∆t = 5·10⁻⁴ s (see also Section 11.3.2). Our calibration procedure thus takes advantage of the relationship between strain rate and natural period; it could probably be improved by taking into account the role of the time step.
The remaining parameters to be chosen are the repulsive stiffness of the interactions k rep and the mass of the particles m. We propose a two-step calibration procedure, based on uniaxial compression test simulations, on cubes of single materials:
1. Calibrate the strain rate sensitivity M , tuning the ratio between mass and repulsive stiffness m/k rep .
2. Calibrate the stress level K, applying a common multiplicative factor to both mass m and repulsive stiffness k rep .
The numerical parameters, obtained independently for each phase, are used in multimaterial simulations without further fitting procedure.
Strain Rate Sensitivity Calibration
The strain rate sensitivity is calibrated by adjusting the natural period with respect to the prescribed strain rate. The ratio mass/stiffness of the model is thus chosen.
The strain rate sensitivity M of a packing depends on its ability to quickly rearrange itself with regard to the prescribed strain rate. To quantify an image of the reaction time, we use the natural period t_0 of an ideal spring-mass system of stiffness k_rep and mass m:

    t_0 = 2π √(m / k_rep)    (11.2)
This value is not meant to match the actual oscillation period of particles, but to quantitatively compare sets of parameters. Packings of 5•10 3 particles with natural periods ranging from 1•10 -2 to 1 s are compressed at strain rates from 3•10 -6 to 1 s -1 . For each natural period, the flow stress σ is normalized by the flow stress at the lowest strain rate σ low . The results (Figure 11.2) exhibit a clear influence of the natural period. All packings follow a similar trend: the influence is first limited, then the flow stress increases with the strain rate up to a limit value. Above the limit, the strain rate is too high for the packing to collectively cope, the deformation is localized to the particles near the moving mesh. The overall behavior is shifted to various ranges of strain rate by the value of the natural period.
To sum up, the strain rate sensitivity M, i.e. the slope in the space (ε̇, σ/σ_low), is driven by the relation between the natural period and the strain rate. The common trend for all configurations (Figure 11.3a) is clearly exhibited in the space (t_0 √ε̇, σ/σ_low). To quantify the observed trend, the data are approximated by least-square fitting, using a sigmoid of generic expression:

    σ/σ_low = a + b / (1 + exp(c − d · t_0 √ε̇))    (11.3)

The fitting parameters used here are (a, b, c, d) ≈ (0.9048, 4.116, 3.651, 210.0). Using this fitted common trend, a master curve is built in the space (ε̇ · t_0², M). Three flow regimes, in terms of strain rate sensitivity, can be identified in Figures 11.3a and 11.3b:
• Plastic: for ε̇ · t_0² < 1·10⁻⁷ s, the strain rate sensitivity is negligible (M < 4·10⁻³). A plastic behavior can thus be represented, with stress variations of the order of magnitude of the expected precision of the model, valid over various orders of magnitude of strain rates. The packing rearranges quickly enough when deformed, so that variations of the strain rate do not affect the flow structure.
• Collapse: above ε̇ · t_0² ≈ 3·10⁻³ s, the packing is not reactive enough for the particles to collectively cope with the strain. The strain localizes next to the moving planes, the flow stress drops and the macroscopic equilibrium is lost. Such configurations are not suitable for our purpose.
• Viscoplastic: in the intermediate window, the ε̇ · t_0² value governs the sensitivity of the packing, up to a maximum of 0.6. In this configuration, when the strain rate increases, the particles are forced to indent more to rearrange, leading to a higher flow stress. However, the sensitivity is strongly strain rate dependent: an actual viscoplastic behavior can only be modeled via an averaged strain rate sensitivity, with a scope of validity limited to a narrow range of strain rates.
The master curve (Figure 11.3b) allows to directly choose the natural period approximating the desired sensitivity at the targeted strain rate. The m/k_rep ratio is thus fixed. If the strain rate range is known a priori, the master curve also gives an approximation of the variation of the strain rate sensitivity within that range. For example, in order to model a high strain rate sensitivity M ≈ 0.5, the value ε̇ · t_0² must be chosen close to 2·10⁻⁴ s. If the targeted strain rate is 2·10⁻⁴ s⁻¹, the natural period would be chosen as t_0 ≈ √(2·10⁻⁴ / 2·10⁻⁴) = 1 s.
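This selection can be automated. The python sketch below uses the fitted sigmoid of Equation 11.3 and estimates the local sensitivity by a finite difference in logarithmic space; this local slope is only an approximation of the averaged sensitivity used to build the master curve of Figure 11.3b, and the function names are hypothetical:

    import numpy as np

    A, B, C, D = 0.9048, 4.116, 3.651, 210.0   # fitted parameters of Eq. 11.3

    def stress_ratio(t_0, strain_rate):
        x = t_0 * np.sqrt(strain_rate)
        return A + B / (1.0 + np.exp(C - D * x))

    def sensitivity(t_0, strain_rate, rel_step=1e-3):
        # Local strain rate sensitivity M = d ln(sigma) / d ln(eps_dot),
        # estimated by finite difference on the fitted common trend.
        s1 = stress_ratio(t_0, strain_rate)
        s2 = stress_ratio(t_0, strain_rate * (1.0 + rel_step))
        return np.log(s2 / s1) / np.log(1.0 + rel_step)

    def choose_natural_period(target_M, strain_rate, t0_range=(1e-2, 1.0), n=2000):
        # Scan natural periods and keep the one whose sensitivity is closest
        # to the target at the working strain rate; m/k_rep is then fixed.
        t0s = np.geomspace(t0_range[0], t0_range[1], n)
        Ms = np.array([sensitivity(t, strain_rate) for t in t0s])
        return t0s[np.argmin(np.abs(Ms - target_M))]

    # choose_natural_period(0.5, 2e-4) returns a natural period of the order of 1 s,
    # consistent with the worked example above.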
Stress Level Calibration
The mass and stiffness are adjusted to meet the required stress level. Their ratio remains unchanged to respect the expected strain rate sensitivity.
For a given kinematical behavior of a packing, the stress level can arbitrarily be set. The integration of motion, for each particle, relies on the acceleration computed from Newton's second law. Hence, a multiplicative factor applied to both forces and masses leaves the kinematics of a packing, and its strain rate sensitivity, unchanged. Since our interaction laws are linear with the introduced parameters, we can use a common multiplicative factor on stiffnesses and masses.
The stiffnesses k_rep and k_att are scaled to match the desired flow stress at the targeted strain rate. The mass m is proportionally adjusted, in order to maintain the correct strain rate sensitivity.
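In practice, this second step reduces to a common multiplicative factor, which is legitimate because the interaction forces are linear with the introduced parameters (Section 11.1). A minimal sketch, assuming a calibration run has already been performed with the natural period chosen above:

    def scale_to_stress_level(k_rep, k_att, m, sigma_target, sigma_calibration):
        # Second calibration step: a common factor on the stiffnesses and the mass
        # moves the stress level while leaving m/k_rep, hence the natural period
        # and the strain rate sensitivity, unchanged.
        # sigma_calibration: flow stress measured in the calibration run;
        # sigma_target: desired flow stress at the same strain rate, K * eps_dot**M.
        factor = sigma_target / sigma_calibration
        return factor * k_rep, factor * k_att, factor * m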
Scope of Validity
Arbitrarily high strain rate sensitivities cannot be modeled. Low strain rate sensitivities are valid over several decades of strain rate. High strain rate sensitivities are only valid on narrow ranges. This two-step calibration allows us to reach arbitrary stress level, but displays limitations regarding the reachable strain rate sensitivity and strain rate.
We cannot model arbitrary strain rates with a given set of parameters: the numerical strain rate sensitivity depends on the strain rate. This effect can be controlled for very low sensitivities: a negligible sensitivity can be respected over various orders of magnitude of strain rate. However, a large tolerance must be accepted on higher sensitivities, which can only be reasonably approximated on narrow ranges of strain rate. The model also has intrinsic limits regarding the reachable strain rate sensitivities: reaching a higher sensitivity would require larger values of ε̇ · t_0², for which the packings collapse and are unable to cope with the strain.
As a general conclusion for this section, our calibration procedure allows to choose independently the stress level and the strain rate sensitivity, tuning the mass of the particles and the stiffness of the interactions. The scope of validity, for controlled sensitivity, is limited to narrow strain rate ranges.
Verification
Three effects are investigated in dedicated sections: the mechanical equilibrium of the packings, the dependency to the time step of the model and the effects of the size of the packings.
Equilibrium
The DEM is a dynamic method. To model quasistatic phenomena, the mechanical equilibrium must be verified within a sufficient accuracy. Excessive dynamic effects are not suitable for our purpose.
In order to simulate quasistatic phenomena, the behavior of the packing must be independent from the way the strain is applied. The total forces5 acting on the boundary conditions, the top and bottom meshes, respectively mobile and fixed, must balance. If a mesh moves too fast, the macroscopic equilibrium is lost and the strain localizes next to the moving plane.
At a given strain rate, the equilibrium relative error depends on the natural period, but is of the same order of magnitude for all strains. The equilibrium errors (Figure 11.4) are always inferior to 0.1 % for both phases in the studied strain rate range. The macroscopic behavior is indeed kept unchanged by inverting the mesh motions.
Temporal Convergence
The choice of the time step can induce quantitative and qualitative changes in the response of the system. A proper temporal convergence seemed excessively costly and is not strictly necessary for our purpose. The time step is chosen to provide a correct integration of the motion, but does not meet convergence requirements.
Although our modeling objective is to capture quasistatic phenomena, the numerical method used relies on an explicit dynamic framework. A fundamental algorithmic issue is thus the choice of the time step (refer to Section 8.3.2). The chosen metric to study the effect of the time step on a packing behavior is the time step normalized by the natural period, ∆t/t_0.
An upper bound to the value of the time step is set by the proper integration of the motion of the particles. Using the BILIN interaction law, if ∆t/t 0 > 2•10 -1 , the coarse time discretization leads to totally unpredictable behavior, particles get massively lost during the simulation. This configuration must thus be avoided.
Below this threshold, the influence of the time step on the stress/strain response of packings of 500 particles was tested (Figure 11.5). The time step being a purely numerical parameter, it should not influence the metrics of interest of the studied physical phenomena. From a traditional perspective, a model would be expected to meet time convergence requirements, understood as the independence of the macroscopic flow stress with respect to the time step.
Figure 11.5: Relative error on the flow stress at a strain of 0.2 versus the ratio of the time step and the natural period ∆t/t_0. The reference for t_0 = 1 s is ∆t/t_0 = 10⁻⁶; the reference for t_0 = 10⁻² s is ∆t/t_0 = 10⁻⁴. The ∆t/t_0 values are highlighted for the phases A and B, used later on.
It must be emphasized that the time step influences the stress/strain behavior both quantitatively and qualitatively (Figure 11.5a). For the studied systems, the macroscopic stress/strain time convergence is only reached for time steps smaller than ∆t/t_0 < 10⁻⁵, with a relative error below 1 % (see t_0 = 1 s in Figure 11.5b). This constraint would be unreasonably time consuming for our purpose, especially for packings with low natural periods: simulation times would be longer than the PhD duration.
It must be here remembered that we use an analogous model. Requiring the time convergence would be legitimate if a physical system of attractive/repulsive spheres were to mimic the inelastic deformation of continuous media. In such a context, the DEM would be a numerical means to simulate the behavior of the physical analogous model. It would thus be necessary for the numerical model to effectively represent the behavior of the physical model: the time step would be required to meet convergence requirements.
In our case, nothing prohibits checking whether the temporally non-converged numerical model itself displays a sufficient analogy to the physical phenomenon of interest, the inelastic strain of a continuum. The time step is merely a numerical parameter, whose choice is driven by the respect of a reasonable numerical behavior (typically, in our case, ∆t/t_0 < 2·10⁻¹) and by the macroscopic overall behavior of the system. For computational convenience, some freedom can be allowed in the choice of the time step. In counterpart, the time step must be kept at a fixed value from calibration to simulation.
Summing-up, the time step is considered as a numerical parameter, fixed at an arbitrary value, compatible with a reasonable numerical behavior but free from physical constraints at the scale of the elementary particles.
Spatial Convergence
The behavior of our model must be sufficiently independent from the chosen spatial discretization. Packings of a few hundred particles can roughly provide the correct order of magnitude of the flow stress. The error on the flow stress is quantified with respect to a spatially converged state to provide guidelines on the choice of the number of particles.
As our model relies on a collective motion of particles, too small a packing will not display the expected behavior. The kinematical behavior of a single material cube in uniaxial compression is roughly observed with a few dozen particles (Figure 11.6). With a few hundred particles, the stress fails to represent the expected plastic trend, but already exhibits a correct order of magnitude (Figure 11.7a). A few thousand particles allow a controlled relative error, around 10 % (Figure 11.7b); the relative error is computed with respect to a packing of 1·10⁶ particles, for which spatial convergence is considered to be reached. Single material configurations are typically run with 5·10⁴ particles.
Macroscopic Behavior
Under compressive strain the model BILIN allows the packing to rearrange collectively. Large strain can be modeled with controlled volume variation. After a transient regime, the flow strain-stress behavior is precise enough for our purpose. Very distinct behavior can be modeled depending of the strain rate sensitivity, from a plastic-like behavior to a highly strain rate sensitive viscoplasticity.
The behaviors of the two phases are inspired by the experimental model material (Section 2.3). In the identified forming window, around 400 °C, both phases have a flow stress close to 100 MPa in the strain rate window 1·10⁻⁴–1·10⁻³ s⁻¹, but with drastically distinct strain rate sensitivities. The phase with negligible strain rate sensitivity is referred to as A, with a low natural period; the high sensitivity phase is referred to as B, with a high natural period. The corresponding numerical parameters are given in Table 11.1.
A key feature expected for a set of parameters is the conservation of the packing volume. The volume of the packings is estimated by reconstructing a polyhedral mesh, using an algorithm implemented by Stukowski [START_REF] Stukowski | Computational analysis methods in atomistic modeling of crystals[END_REF], based on the alpha-shape method. As a side note, the definition of the boundaries of the modeled objects is somewhat blurry; they are here defined, for practical reasons, as the envelope of the centers of the particles.

For both phases, the volume variation depends little on the strain rate. The prescribed compression decreases the volume, typically by about 5 % for A and 10 % for B (Figure 11.8). Before reaching a somewhat stable flow regime, the packing volume decreases in the first 0.2 of strain. Most of the volume variation occurs within this initial stage; the volume then stabilizes on a plateau before a final increase of the error at larger strains, above 0.6. This trend, and its initial transient regime, will also be observed for the flow stress (Figure 11.10a).

Regarding the kinematical behavior of a packing (Figure 11.9), the overall cuboidal shape is conserved, but the sharp edges tend to be blurred along with the strain. This is understood as an effect of the surface tension induced by the attractive component of the interaction law. As the discretization by particles creates local defects in the geometry, the initially flat faces become slightly wavy.

Typical profiles of stress-strain curves are presented in Figure 11.10a. In this section, the true stress is computed using an estimation of the cross-section, based on the current macroscopic strain and the initial volume, assuming that the volume variations (Figure 11.8) are acceptable. As for the volume evolution, a transitory stage can be observed at the beginning of the deformation, where the stress rises to reach the plastic plateau. The flow stress then oscillates around a fairly constant value. An overshoot effect of the stress can be observed at higher strain rates for the B phase. For each phase, the value at a strain of 0.3 is used to compute the Norton approximation, by least-square fitting (Figure 11.10b and Table 11.1). As discussed in Section 11.2.3, the high sensitivity phase, B, is only valid within one order of magnitude of strain rate; the approximation is not reasonable when the strain rate is out of the studied range.
Table 11.1: Numerical parameters and resulting Norton approximations (M, K) for the phases A and B.
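The volume check itself is a simple post-processing step. The thesis relies on the alpha-shape surface reconstruction; as a hedged, simpler stand-in for the roughly convex cuboidal packings of this chapter, the envelope of the particle centers can be approximated by a convex hull, for example:

    import numpy as np
    from scipy.spatial import ConvexHull

    def packing_volume(positions):
        # Rough volume of a packing, taking the envelope of the particle centers
        # as the boundary of the modeled object. A convex hull is only acceptable
        # for roughly convex shapes; the alpha-shape method used in the thesis
        # also handles concave envelopes.
        return ConvexHull(np.asarray(positions)).volume

    # Relative volume error with respect to the initial state:
    # err = (packing_volume(pos_t) - packing_volume(pos_0)) / packing_volume(pos_0)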
Chapter 12
Bi-Material Test Cases
In Chapter 11, the numerical parameters of the DEM model have been calibrated, independently, to mimic two qualitatively distinct behaviors.
Keeping in mind the limitations of the single material model, we evaluate in this chapter the reliability of the model for bi-material configurations. The shape of the phases and the engineering macroscopic stress are metrics compared with results from analytical and finite element method (FEM) references, briefly presented in Section 12.1. Three simple geometrical bi-material configurations are studied, each examined in a specific section:
• Parallel (Section 12.2);
• Series (Section 12.3);
• Unique spherical inclusion (Section 12.4). This test case has been further investigated, in terms of spatial convergence and of local fields.
The three geometries are discretized with 5·10⁴ particles and uniaxially compressed up to a strain of 0.3, at prescribed strain rates. In the studied configurations, the interaction parameters at the interfaces had little influence on the macroscopic results; they have been set to the average of the phase parameters.
12.1 FEM Reference
FEM simulations are used as the numerical reference of our bi-material tests. Some key discrepancy sources are looked into: the handling of interfaces, the use of symmetry and the constitutive behavior parameters.
Total Lagrangian FEM simulations, well suited for our elementary geometrical configurations and limited strains, are run using Code Aster [START_REF]Code aster open source -general FEA software[END_REF]. The FEM results are rendered using paraview [START_REF] Henderson | ParaView Guide, A Parallel Visualization Application[END_REF]. The elastic strain is numerically negligible with respect to the inelastic strain; the resulting quasi-incompressibility is handled with a mixed displacement-pressure-swelling formulation. A finite transformation formulation is used, taking into account potentially large strains, rotations and displacements. A logarithmic metric of the strain is chosen and the geometries are meshed using quadratic tetrahedral elements. Refer to Appendix C for the exact syntax details and choices.
Top and bottom nodes follow a prescribed vertical motion, while the lateral sides deform freely. The geometrical models are reduced using the symmetries of the problems, while the DEM simulates the full geometries. In FEM, the nodes at the interface between two phases are shared, prohibiting any relative motion, which is the most severe difference with our DEM simulations: in the experimental background of this study, the phases have very little adhesion at the interface. Both materials follow a Norton law (refer to Equation 2.1 on page 24), semi-implicitly integrated using a theta-method. The B phase uses the continuous parameters identified in Section 11.4 (Table 11.1). To ease the numerical convergence of the model, the numerical strain rate sensitivity of the A phase is slightly increased for the FEM simulations (M = 3.05·10⁻² and K = 120 MPa·s^M). In the range 1·10⁻⁴–1·10⁻³ s⁻¹, the induced relative error on the flow stress is ±3 %.
Parallel Configuration
In parallel configuration, the phases are mostly independent. The DEM captures the linear mixture law accounting for the macroscopic flow stress.
A cube is vertically divided into two cuboidal phases, for various volume fractions, and vertically compressed at constant strain rates. The engineering stress is compared to a mixture law, linear with the volume fraction.
In this simple configuration, little interaction should take place between the phases and, in ideal conditions, a homogeneous strain is expected in both phases. In the DEM simulations, the global geometry of each phase remains close to a cuboid along the deformation (Figure 12.1). At a given strain rate ε̇, the true stress in each phase being independently defined by the Norton law, the global true stress σ_true can be computed with an elementary mixture law [78, p.99], linear with the volume fraction f of the phase B:

    σ_true(f, ε̇) = f · K_B ε̇^(M_B) + (1 − f) · K_A ε̇^(M_A)    (12.1)
To provide a consistent metric for all configurations, the engineering stress σ_engineer is used as reference. It is computed at a given true strain ε (Equation 12.2), based on the true stress and the volume conservation:

    σ_engineer(f, ε̇, ε) = exp(−ε) · σ_true(f, ε̇)    (12.2)
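Equations 12.1 and 12.2 translate directly into a small python helper that can serve as the analytical reference for the DEM results; the parameter values are those of Table 11.1 and are not reproduced here:

    import numpy as np

    def true_stress_parallel(f, strain_rate, K_A, M_A, K_B, M_B):
        # Linear mixture law of the parallel configuration, Eq. 12.1.
        return f * K_B * strain_rate ** M_B + (1.0 - f) * K_A * strain_rate ** M_A

    def engineering_stress_parallel(f, strain_rate, strain, K_A, M_A, K_B, M_B):
        # Engineering stress at a given true strain, assuming volume conservation, Eq. 12.2.
        return np.exp(-strain) * true_stress_parallel(f, strain_rate, K_A, M_A, K_B, M_B)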
The engineering stress-true strain profile, as in single material configuration, displays a transitory regime, typically in the first 0.15 of strain, with a progressive rise towards the flow stress (Figure 12.2a).
The DEM model is able to capture, after a transient regime and within the precision of the single phases, the linear variation of the flow stress with the volume fraction (Figure 12.2b). With a rougher discretization, for example only a thousand particles per phase, the result remains qualitatively close, degrading the accuracy by a few percent.
Series Configuration
In series configurations, the deformation is not homogeneous. Depending on the strain rate, one phase will preferentially deform. The macroscopic flow stress modeled with the DEM is in good agreement with the FEM reference.
A cube is horizontally divided into two cuboidal phases, for various volume fractions, and vertically compressed at constant strain rate. Using the symmetries, one fourth of the geometry is modeled with the FEM, using approximately 1.3•10 3 nodes. For the full geometry, the ratio DEM particles to FEM nodes would be a little under 10.
In this geometrical configuration, the strain is a priori no longer homogeneous. Due to the distinct strain rate sensitivities, one phase preferentially deforms depending on the strain rate, which is qualitatively observed both in FEM and DEM simulations. Qualitatively (Figure 12.3), the B phase (bottom phase) deforms more at lower strain rates. At high strain rates, the A phase deforms more homogeneously in DEM than in FEM; the "mushroom" shape is slightly blurred in this strain rate range.
FEM and DEM are in good agreement, after the transient regime observed in DEM, within a few percent of relative error (Figure 12.4a). In the strain rate validity range, the DEM model is thus able to capture the final flow stress evolution with respect to the volume fraction (Figure 12.4b). As a side note, the heterogeneity of the strain in the series configuration is responsible for a nonlinear variation of the flow stress with respect to the volume fraction. This effect of the geometry of the bi-material, clearly displayed at 1•10 -3 s -1 (Figure 12.4b), is correctly reproduced.
Spherical Inclusion Configuration
The deformation of a spherical inclusion in a matrix is qualitatively close to the experimental configuration of interest. Overall, the evolution of the flow stress and the shape factor are correctly captured. A specific section is dedicated to the study of the influence of the discretization on the results. A first approach to the study of the local stress fields is proposed in Appendix A.
A single spherical inclusion of phase B is placed in the center of a phase A cube, with a fixed volume fraction of 20 % of phase B inclusion. Using symmetries, one eighth of the geometry is modeled with the FEM, using 2.1•10 3 nodes. For the full geometry, the ratio DEM particles to FEM nodes would be a little under 3.
Qualitatively, two typical kinematical tendencies of the matrix are displayed in FEM (Figure 12.5), with an intermediary state of homogeneous co-deformation:
• A barrel shape of the sample, when the flow stress of the inclusion is low, at lower strain rates;
• An hourglass shape, when the flow stress of the inclusion is high, at higher strain rates.
In the FEM simulations, the hourglass shape of the matrix is strongly emphasized by the non-sliding interface between phases. While the barrel shape is easily displayed at low strain rates in DEM, the hourglass shape is only clear at higher strain rates, outside of the studied validity range. Potential sources are the more permissive contact conditions between phases, the rough discretization (see for example the finer discretization in Figure 12.8) and the lower local stress field seen in the inclusion (refer to Appendix A). Although we would expect lower stresses with a less constrained system, the flow stress is overestimated (Figure 12.6b), by about 10 % over the studied strain rate range, even if the tendency is acceptable after the rise strain (Figure 12.6a). To quantitatively compare the models from a kinematical perspective, we study the macroscopic shape factor S_f of the B inclusion, which is less sensitive than the matrix shape to the interface definition. This factor (Equation 12.3) is the ratio of the inclusion height H, in the compression direction, and the diameter D, averaged over all perpendicular directions:
S f = H/D (12.3)
For the DEM simulations, this value is approximated by computing the shape factor of an equivalent ellipsoid having the same inertia matrix as the cloud of particles modeling the inclusion. At all strain rates, at the very beginning of the applied strain (Figure 12.7a), the inclusion remains roughly spherical for a few percent of strain, and then follows a trend similar to the FEM after a rise strain. In the validity range of the B phase, the final shape factor (Figure 12.7) is underestimated with a relative error of about 5 %: the inclusion deforms more in DEM than in FEM.
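A possible python sketch of this approximation uses the second moments of the particle positions, which give the same principal axes as the inertia matrix and yield the semi-axes of the equivalent uniform ellipsoid through a² = 5 × variance (the exact post-processing of the thesis may differ in detail):

    import numpy as np

    def shape_factor(positions, compression_axis=2):
        # Shape factor H/D of an inclusion, from the equivalent ellipsoid of the
        # cloud of equal-mass particles discretizing it (Eq. 12.3).
        pos = np.asarray(positions, dtype=float)
        centered = pos - pos.mean(axis=0)
        second_moments = np.cov(centered.T)            # 3x3 matrix
        eigval, eigvec = np.linalg.eigh(second_moments)
        semi_axes = np.sqrt(5.0 * eigval)              # uniform ellipsoid: var = a**2 / 5
        z = np.zeros(3)
        z[compression_axis] = 1.0
        alignment = np.abs(eigvec.T @ z)               # |cos| between each principal axis and z
        i_height = int(np.argmax(alignment))
        height = 2.0 * semi_axes[i_height]
        diameter = 2.0 * np.delete(semi_axes, i_height).mean()
        return height / diameter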
Spatial Convergence
The effect of the discretization is studied for the case of the unique spherical inclusion.
The quantification of the error on the modeled shape factor gives direct guidelines for the discretization of geometries from experimental data.
To evaluate the error on the shape factor depending on the roughness of the discretization, an identical geometry is modeled with packings of various sizes (Figure 12.8). The relative error for the final shape factor is computed using the 1•10 6 particles configuration as reference, where 1•10 5 particles discretize the inclusion. For each size, five random initial packings are tested. The chosen test case is harsh for our model: for the smaller packings, the meshes may interact directly with the particles of the inclusion at the end of the deformation.
A very rough description of the inclusion, with 20 particles for example, remains too inaccurate to catch more than an order of magnitude of the deformation trend (Figure 12.9a); the initial shape factor is already far from that of a perfect sphere, with little repeatability. In a realistic context, such a rough discretization can only reasonably be used to capture the position of an inclusion in a composite. With a finer discretization, starting with a few hundred particles, the qualitative trend can be captured and the repeatability improves: it becomes possible to estimate the discretization necessary for an arbitrary precision (Figure 12.9b). The purely geometrical error, on the initial state, is about an order of magnitude smaller than the final error, after compression. For a final error under 10 %, more than 200 particles must discretize the inclusion.
Chapter 13
Complex Multi-Material Mesostructure
In Chapter 12, the behavior of our DEM model was compared to numerical references on very simple geometries. However, our objective is to model complex morphologies of metallic composites. This chapter applies the model to experimental setups, starting from the discretization of 3D tomography images (Section 9.3). The chapter is split into two sections:
• Section 13.1 is an illustration of the potentiality of the method on a large data set. The agreement between modeled and observed behaviors is not considered. An initial state is discretized and arbitrarily compressed.
• Section 13.2 applies the methodology to compare the numerical results to in situ observations, globally and for local configurations of interest. Numerical and experimental issues are briefly examined.
Computation on a Full Sample
To test the method on large geometry, a full sample of our composite model material is discretized and compressed. The objective is to assess the numerical scaling to larger models than the previous test cases.
As an illustrative example, the methodology is applied to the 3D mesostructure of a full sample, obtained by X-ray microtomography at the European synchrotron radiation facility (ESRF) (beamline ID19). The studied material is a metallic composite, with a crystalline copper matrix and spheroidal inclusions of an amorphous zirconium alloy. The total volume of the sample is approximately 0.5 mm³, containing a volume fraction of inclusions of 15 %, with diameters up to a few dozen micrometers. The voxelized image has a size of 594×591×669 voxels, with a voxel size of 1.3 ➭m. The purpose of this section is not to compare quantitatively numerical and experimental results, but to underline the potential of the method for large arbitrary data sets.
Starting from a three dimensional voxelized image, the discretization of the geometry has a low algorithmic cost (Section 9.3). The segmented image is used as a mask on a random packing of particles. For each particle, the color of the voxel geometrically corresponding to the center defines the material type: matrix A or inclusion B.
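A minimal python sketch of this masking step, assuming the packing and the image share the same frame (array and variable names are hypothetical), could read:

    import numpy as np

    def assign_phases(positions, labels, voxel_size, origin=(0.0, 0.0, 0.0)):
        # The label of the voxel containing a particle center defines its material
        # type, e.g. 0 = matrix A, 1 = inclusion B.
        idx = np.floor((np.asarray(positions) - np.asarray(origin)) / voxel_size)
        idx = np.clip(idx.astype(int), 0, np.array(labels.shape) - 1)
        return labels[idx[:, 0], idx[:, 1], idx[:, 2]]

    # For the sample of this section: labels.shape == (594, 591, 669),
    # voxel_size = 1.3e-3 mm, about 3.36e6 particle positions.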
The image is here binarized into two phases and used as a mask on a packing of about 3.36·10⁶ particles, with a ratio of the number of voxels to the number of particles of 70. As shown in Figure 13.1, about 170 physical inclusions are discretized, using between 5 and 500 discrete element particles each. The rough discretization of the smallest inclusions, for example the leftmost inclusion in Figure 13.2, is not precise enough to allow a strain evaluation; only the inclusion position can be tracked. For the biggest inclusions, the estimated numerical error on the shape factor after 0.3 of strain is around 10 % (Figure 12.9b).
The sample is uniaxially compressed up to 0.3 at 1·10⁻³ s⁻¹. Local and global illustrations are given in Figures 13.1 and 13.2, with the relative displacement and deformation of the inclusions. The computation was run on an Intel Xeon E5520, using 8 processors. The 6·10⁵ steps for 3.36·10⁶ particles were executed in 8·10⁵ s, less than 10 days. The computation time is linear with the number of steps and of particles. As long as the load is properly balanced between processors and provided that the geometry of the sub-domains keeps the volume of the communications between processors reasonable, the DEM solver scales properly with the number of processors. On the studied geometries, the roughly cuboidal overall shape of the samples allows a simple dynamic balancing of the load between processors. The computing time can thus be reliably estimated on a given machine, roughly 3·10⁻⁶ cpu second per particle and per time step for a single processor in the given example. This section illustrated that the proposed methodology can be applied to large arbitrary realistic mesostructure data. The discretization has a low algorithmic cost, but the model does not yet offer a simple way to locally adapt the discretization roughness.
The cost of the computation can be reliably estimated as the model does not depend on non-linear resolutions.
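This back-of-the-envelope estimate can be written explicitly; the cost per particle and per step below is the value measured on the machine used here and would have to be re-measured on another machine:

    def estimated_wall_time(n_particles, n_steps, n_procs, cost=3e-6):
        # Wall-clock time estimate (s), assuming linear scaling with particles and
        # steps and a properly balanced load; cost is in cpu.s per particle per step.
        return cost * n_particles * n_steps / n_procs

    # Sanity check on the reported run: estimated_wall_time(3.36e6, 6e5, 8) ~ 7.6e5 s,
    # consistent with the observed 8e5 s (less than 10 days).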
In Situ Configurations
The results from the DEM simulations are compared to the temporal evolution of the morphology obtained by in situ X-ray tomography. The discrepancy stems from limitations of the model, first of which in terms of strain rate sensitivity, and from the crystallization of the amorphous alloy, delicate to control in this experimental environment.
The maximal strain rate sensitivity that can be modeled with BILIN is limited to M ≈ 0.5, on a limited strain rate range. Classical sensitivities for metal creep, typically 0.2 [START_REF] Kassner | Five-power-law creep in single phase metals and alloys[END_REF], can thus readily be modeled. However, the amorphous phase of our model material (Section 2.3) exhibits unusually high strain rate sensitivities, up to Newtonian behavior M = 1 (Figure 2.7b on page 26).
Experimentally, in the temperature and strain rate range of interest M exp ≈ 0.73 (Figure 13.3a). Numerically, the amorphous phase is modeled in DEM with an underestimated M num ≈ 0.49. FEM simulations were run on the single inclusion test case, to quantify the effect of this underestimation (Figure 13.3b). The variation of the shape factor S f of the inclusion with the strain rate is indeed influenced by M . By design, at the center of the range, the difference is very limited. A typical relative difference of a few percent can be observed at the extrema of the considered strain rate range. However, the order of magnitude of this introduced error is reasonable: from a strictly numerical point of view, the typical FEM to DEM error is similar or higher; experimentally, it is acceptable with respect to the numerous uncertainties. Qualitatively, the behavior of the model is governed by the association of two phases with respectively high and low strain rate sensitivities. Within a restricted strain rate range, the exact values of strain rate sensitivities are of secondary order1 .
To limit the uncertainty on the prescribed strain rate, the nominal strain rate experimentally applied to the samples is not directly used. Several pairs of particles, above and below the region of interest, are tracked (Figure 13.4). The overall estimated strain rate is polynomially fitted (a second-order polynomial typically suited this series of data well) and used as input in the DEM simulations. The error introduced by using rigid planar meshes, over-constraining the system, proved to be of secondary order.
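One possible reading of this procedure, sketched below in python, converts the tracked gauge lengths into a strain history, fits it with a second-order polynomial and differentiates the fit to obtain the strain rate prescribed to the simulations (the logarithmic strain measure is an assumption here):

    import numpy as np

    def fitted_strain_rate(times, gauge_lengths, degree=2):
        # times: imaging instants (s); gauge_lengths: distance between a tracked
        # particle above and one below the region of interest, at each instant.
        strain = np.log(np.asarray(gauge_lengths, dtype=float) / gauge_lengths[0])
        strain_coeffs = np.polyfit(times, strain, degree)
        rate_coeffs = np.polyder(strain_coeffs)
        return lambda t: np.polyval(rate_coeffs, t)   # strain rate as a function of time

    # rate = fitted_strain_rate(t_exp, h_exp); rate(100.0) gives the value at t = 100 s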
A preliminary series of simulations is run, applying the overall macroscopic strain rate, polynomially fitted for the full experimental samples, to a single numerical inclusion. The configuration is very similar to the test case studied in Section 12.4, using 1·10⁴ particles and a volume fraction of 0.2 for the inclusion. The expected relative error of the shape factor with respect to a spatially converged packing is around 1 % (Figure 12.9b). The numerical shape factor was compared to the average measured shape factor of all inclusions (Figure 13.5).
It must be emphasized that the modeled boundary conditions may be quite alien to the studied system. Our test case leaves the lateral boundaries free, whereas in the sample, in the neighborhood of a physical inclusion, the lateral flow of the matrix is more constrained. As will be discussed, the introduced discrepancy is of secondary order with respect to the encountered issues. Two nominal strain rates, 5·10⁻⁴ and 2.5·10⁻⁴ s⁻¹, are tested; experimentally they lead to very similar overall behaviors. A large discrepancy can be observed between experiments and simulations after a strain of 0.2–0.3. At both nominal strain rates, the deformation of the inclusion is largely overestimated by the simulations. The discrepancy potentially introduced by the free lateral boundary conditions would, on the contrary, let the matrix flow freely around the inclusion and should lead to an overestimation of the shape factor. In addition, the experimental deformation seems to slow down after a strain of 0.2–0.3.
A probably dominant effect in this discrepancy stems from the crystallization of the amorphous alloy of the inclusions. A uniaxial compression test, carried out on a sample of the amorphous alloy up to the beginning of the crystallization (Figure 13.6), can provide a first set of indications. In Figure 13.6, the macroscopic mechanical effect of the crystallization (see also Figure 2.4a on page 23) can be detected 1·10³ s after the introduction of the sample in the furnace, with the increase of the flow stress.
Despite these preliminary data, the effective kinetics of the crystallization is not well understood and the effects of the thermomechanical elaboration process (Section 2.3) are not known. The data of Figure 13.6 are obtained on a millimetric sample, obtained by casting. The alloy in our composite is obtained by atomization and is then hot co-extruded with the copper powder. In addition, the set-up for in situ measurements at the ESRF imposes at least several minutes between the introduction of the sample in the furnace and the beginning of the test, typically from 5 to 10 min (Section 2.4).
A naive attempt to account for these numerous unknowns is implemented by exponentially fitting the experimental crystallization (Figure 13.6) and using the function (Equation 13.1) to implement an explicit time dependence of the numerical parameters of the amorphous phase.
σ/σ ref = a • exp(b • t + c) + d (13.1)
The fitting parameters used here are (a, b, c, d) ≈ (2.91•10 -4 , 1.82•10 -3 , 4.24, 9.51•10 -1 ).
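A sketch of this correction, under stated assumptions only (an initial time offset to account for the set-up time, the text uses 600 s, and a common scaling factor applied to the stiffnesses and mass of the amorphous phase so that its kinematics and sensitivity are preserved, as in Section 11.2.2), could read:

    import numpy as np

    # Parameters of the exponential fit of the crystallization hardening (Eq. 13.1)
    A, B, C, D = 2.91e-4, 1.82e-3, 4.24, 9.51e-1

    def hardening_factor(t_sim, t_offset=600.0):
        # Multiplicative factor applied over time to the numerical parameters of
        # the amorphous phase (stiffnesses and mass scaled identically).
        t = t_sim + t_offset
        return A * np.exp(B * t + C) + D

    # e.g. every N steps: k_rep_B = k_rep_B0 * hardening_factor(t),
    # with k_att_B and m_B rescaled by the same factor.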
A somewhat arbitrary initial offset of 600 s is used in the simulation, to account for the effects of the set-up time and the elaboration process. The flexibility of the DEM frameworks allows to implement such models with relative ease. As-is, the effect of this correction attempt is not sufficient to capture the tendency observed on the experimental results (Figure 13.7). The temporal evolution of the numerical parameters limits the modeled deformation with respect to the "plain" simulation. However the deformation is still too high for a satisfactory description of the experimental observation. At this stage, too little is known to propose a more detailed approximation. More in-depth comparison between the in situ data and the simulations are thus hindered. The end of this section will thus be limited to a qualitative test of the methodology on typical geometrical configurations of interest, without using the temporal correction. For three chosen configurations, a locally estimated polynomial evolution of the strain rate is applied to the discretized geometry. A first example (Figure 13.8a) of local configuration is the deformation of a large inclusion surrounded by smaller inclusions, to allow an estimation of a local flow of the matrix. Mainly due to the crystallization, the evolution of the shape factor of the main inclusion is largely overestimated in the simulation with respect to the experimental measurements (Figure 13.8b), the quantitative analysis is thus not of interest. Quantitatively, it can be observed that the deformation trend is also quite distinct. In the simulation, the roughly homogeneous deformation leads to a limited relative motion of the particles. Experimentally, the small inclusions tend to flow around the central inclusion, with a large displacement perpendicularly to the compression axis. A second configuration is the behavior of two inclusions of similar size, initially close to one another (Figure 13.9). An interesting feature of this test case is the numerical behavior at this rough discretization. This initial matrix between the inclusion is only modeled by a pair of particles. As isolated particles cannot display the expected, the effect of the thinnest layer of matrix numerically vanishes. Depending on the dominant physical phenomenon driving the experiment, for example if thin film with high mechanical properties is formed, such a numerical behavior can be quite misleading. In addition, as contact phenomena are not included, this example is a limit case of the model.
A last example, typically observed in an experimental context, is the presence of defects in the structure of the inclusions. The presence of a large hole in an inclusion (Figure 13.10) is a consequence of the atomization process and influences its overall behavior. Although the behavior seems qualitatively satisfactory, the model reaches two major limitations. Firstly, a further deformation would lead to the self-contact of the interface of the hole, which is not handled in this simulation. Secondly, the model BILIN is only suitable for compressive loads and the stress state is probably more complex in a "porous" mesostructure, potentially with local tensile loads.
The handling of contact events and of tensile loads will both be examined in the next part.

Part V

Tension-Compression of "Porous" Material

In Part III, the conceptual and algorithmic principles of the developed method were described. In Part IV, focused on loads dominated by compression, the methods were applied and tested on dense bi-materials. The interaction law used was the model BILIN, whose application is limited to compression only.
Part V extends the model to less restrictive loads, using the model TRILIN . Potential uses for the detection of self-contact events are illustrated on "porous" geometries, as opposed to the dense configurations studied previously. The part is split into three chapters:
• Chapter 14 is about the behavior of the interaction law TRILIN under tensile and compressive loads.
• Chapter 15 describes the self-contact detection procedure and its effect on simple geometries.
• Chapter 16 illustrates the methodology with "porous" mesostructures, obtained by X-ray tomography.
Highlights -Part V Tension-Compression of "Porous" Material
• The TRILIN interaction law can cope with compressive and tensile loads.
The overall behavior of the packing is satisfactory regarding the flow stress and the volume conservation. The symmetry error between compressive and tensile behaviors is around 20 %. The necking and the rupture under tensile load are displayed but not controlled.
• The self-contact detection algorithm allows the tracking of interface interactions.
The closure and re-opening of pores are natively displayed in uniaxial compression-tension tests. The parameters of the self-contact detection are tuned to choose an acceptable compromise between contradictory objectives.
• The proposed framework allows a flexible and controlled handling of topological events.
The implementation of a healing time of the interfaces illustrates the potential of the method to control the evolution of the topology of the sample. Given a locally computable metric, arbitrary behavior can be developed with ease.
• The behavior of the model is illustrated using complex mesostructures, obtained by X-ray tomography: casting pores and a low relative density foam.
Qualitatively, the self-contact algorithm allows the tracking and the re-opening of numerous interface interactions. Some macroscopic metrics are derived for the tested geometries. Good agreement is obtained for the flow stress of the foam.
Local deformation mechanisms can be observed and studied.
Chapter 14
Dense Material
In Part IV, the interaction law BILIN was only suitable for compressive loads. This model is thus not suitable for porous mesostructures.
In this chapter the behavior of the interaction law TRILIN (introduced in Section 9.1) is described. This interaction law is able to cope with compressive and tensile loads and is used throughout Part V. The chapter is divided into three sections:
• Section 14.1 briefly presents the modeling choices for the interaction law TRILIN .
• Section 14.2 describes the behavior of a dense packing, without any self-contact detection algorithm.
• Section 14.3 studies the numerical effect of the number of particles on the behavior of a packing.
Interaction Law Choice
The model TRILIN is attractive-repulsive. The repulsive forces are linear elastic. The attractive forces are linear up to a threshold and are only activated for tensile motions of the pair. With the objective of modeling a constant strain rate sensitivity over large ranges of strain rate, a limited strain rate sensitivity is chosen.
The model BILIN was chosen in Part IV for its simplicity. The fixed ratio of stiffnesses k_rep/k_att = 10 could only reasonably cope with compressive loads. The model TRILIN is designed to be more generic, coping with tension and compression. For the two configurations, a similar absolute macroscopic flow stress is sought. Keeping an elementary bi-linear interaction law, as in the BILIN model, no satisfactory modification of the radii or of the stiffness ratio was found. Indeed, the attractive stiffness k_att must increase to provide a more cohesive behavior and the crown radius r_crown must provide a wide enough geometrical range of interaction: the attractive forces become excessive.
Among potential force profiles, a threshold on the attractive force was chosen for the model TRILIN (Figure 14.1). We have no claim whatsoever that this choice is optimal in any respect; this configuration was simply the first to provide a sufficiently satisfactory behavior to be used as a proof of concept.
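As a minimal sketch of such a force profile, the snippet below implements a pairwise law of this shape: linear repulsion on overlap, linear attraction capped by a threshold, the attraction being active only when the pair moves apart. Only the repulsive stiffness quoted later in this section is taken from the text; the attractive stiffness and the force cap are placeholder values.

```python
def trilin_force(gap, separating, k_rep=1.42e9, k_att=1.42e8, f_att_max=1.0e7):
    """Pairwise force of a TRILIN-like law (sketch, not the exact thesis values).

    gap        : surface gap between the two crowns (negative means overlap).
    separating : True if the pair currently moves apart (tensile motion).
    Returns a scalar force along the pair axis (> 0 repulsive, < 0 attractive).
    """
    if gap < 0.0:
        return -k_rep * gap                      # linear elastic repulsion
    if separating:
        return -min(k_att * gap, f_att_max)      # linear attraction, capped
    return 0.0                                   # no attraction when closing
```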
In Part IV, it proved impractical to use a time-converged model. The required simulation time being excessive, the time step was treated as a mere numerical parameter (Section 8.3.2). Fully accepting this assumption, the time step for TRILIN was chosen quite large with respect to the natural period: ∆t/t_0 = 10⁻¹. The linearity of the interaction force with respect to the introduced parameters is respected. The flow stress, for a given kinematic behavior, can thus be arbitrarily chosen at fixed k_rep/m ratio. Although it has not been investigated, the increase of the strain rate sensitivity at higher strain rates seems to be displayed. It is thus probably possible to apply the calibration procedure of Part IV to tune the strain rate sensitivity. A new calibration chart (Figure 11.3 on page 122) would have to be computed, the shapes of the interaction laws and the ratio t_0/∆t being distinct. A similar behavior is to be expected: a large plastic-like domain and a narrow tunable viscoplastic domain.
However, the co-deformation configurations studied in Part IV had, by design, a limited strain rate dispersion in a given sample. Although the limited validity range in strain rate was a handicap, the model could be used to study configurations of interest, where both phases deform notably. By contrast, with the objective of modeling porous materials, the strain rate field is necessarily very heterogeneous. In the case of a truss-like system, for example, strains may be concentrated at the joints.
The natural period of the model is thus fixed (t_0 = 1 s) with the objective of mimicking a low strain rate sensitivity, valid over a wide range of strain rates. With the chosen time step (∆t = 10⁻¹ s), the average strain rate sensitivity over the strain rate range 10⁻⁵ - 10⁻² s⁻¹ is M = 7.14·10⁻². The validity range of the domain in strain rate can be arbitrarily shifted by modifying the natural period and the time step at fixed t_0/∆t ratio. The mimicked stress level is loosely inspired by typical flow stresses of Al-7075 at 400 °C, typically 30 MPa at 10⁻³ s⁻¹ (Figure 2.14 on page 31). The chosen repulsive stiffness in the pair interaction of the particles, k_rep = 1.42·10⁹ µN·mm⁻¹, leads to a macroscopic stress level K = 49.1 MPa·s^M in the Norton approximation.
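As a quick consistency check, assuming the Norton form σ = K·ε̇^M with the values quoted above:

\[
\sigma = K\,\dot{\varepsilon}^{M} = 49.1\ \mathrm{MPa\,s^{M}} \times \left(10^{-3}\ \mathrm{s^{-1}}\right)^{7.14\times 10^{-2}} \approx 49.1 \times 0.61 \approx 30\ \mathrm{MPa},
\]

which indeed matches the targeted flow stress of Al-7075 at 400 °C and 10⁻³ s⁻¹.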
Macroscopic Behavior
With the interaction law TRILIN, the particles of the packing can collectively cope with tensile and compressive load. Overall, the stress-strain behavior and volume conservation is satisfactory. At large strain, tensile loads induce the necking and the rupture of the sample. Some parasitic numerical crystallization is observed and is promoted by the planar meshes.
Under compressive load, pushed by the moving meshes, the particles using the model TRILIN rearrange collectively to cope with strain (Figure 14.2). Compared to the behavior of the model BILIN (Figure 11.9 on page 128), the deformation is less homogeneous: the lateral free surfaces are not as regular and smooth. The overall reorganization is however effective.
The stress-strain behavior (Figure 14.4) is coherent with the observed tendencies:
• Under compressive load (negative stress), the stress is stable up to large strain.
• Under tensile load (positive stress), the stress (computed, as in Part IV, from an estimation of the cross-section based on the current height of the sample and its initial volume) is stable up to strains of 0.4 - 0.5, where it drops under the influence of the necking. Once the sample is broken in two parts, the stress is zero.
For both compression and tension, a transient regime is observed at the beginning of the test. Its width in strain is roughly 0.1 - 0.15, about half that of the model BILIN (Figure 11.10a on page 129). At larger strain rates, the oscillations of the stress are of greater magnitude. As expected, the absolute flow stress increases with the absolute strain rate. To compute the approximation of the macroscopic behavior by a Norton law (Figure 14.5a), the flow stress at a given strain rate is computed as the average stress between strains of 0.21 and 0.35. Both for tension and compression, the variation of the flow stress is limited over a wide range of strain rates. The flow stress varies approximately by a factor 1.7 over three decades of strain rate. The compressive behavior is smoother and the Norton approximation is rougher for the tensile loads. The order of magnitude of the flow stress is similar, the tensile stress being at most 20 % larger than the compressive stress.
Qualitatively, the sensitivity-strain rate behavior of TRILIN seems comparable to BILIN (Figure 14.5b). The strain rate sensitivity is stable at lower strain rates and increases rapidly at higher strain rates. A calibration procedure, linking strain rate, natural period and strain rate sensitivity, seems possible. The behavior was however not quantitatively investigated. Firstly, because the foreseen application - porous mesostructures - implies large heterogeneities of the strain rate field; a viscoplastic behavior valid only on a limited strain rate range is therefore not appropriate. Secondly, because the choice of a large time step would induce an excessive motion of the meshes at each step for higher strain rates, where the model should exhibit a higher strain rate sensitivity. The model TRILIN is thus focused on a limited strain rate sensitivity over a wide strain rate range.
The minor variation of the strain rate sensitivity is sufficient to influence the necking behavior (refer to Section 14.3 regarding the size effects). At higher strain rates, the increase in the strain rate sensitivity tends to stabilize the necking. Qualitatively comparing the samples deformed at distinct strain rates (Figure 14.6), this effect is seen at 3.16·10⁻³ s⁻¹. At a strain of 0.8, the necking is less pronounced and at 1.0, the sample is not yet separated into two parts (at a strain rate of 10⁻² s⁻¹, the stabilizing effect is extremely efficient and the necking is very limited at a strain of 1.0; such a configuration was not actually used in the simulations due to a poor respect of the equilibrium). On the stress-strain curves (Figure 14.4), the total rupture (σ = 0) of the sample occurs before a strain of 0.9 for ε̇ ≤ 3.16·10⁻⁴ s⁻¹.
As a side note, the influence of the parameter X_wall on the necking behavior is not negligible. This parameter (refer to Section 9.2) is a multiplicative factor used to increase the mesh/particle interaction forces. It is an arbitrary work-around to effectively apply tensile loads. Practically, it drives the ease with which a particle interacting with the mesh is able to leave it. The chosen value (X_wall = 3) tends to promote a rupture in the middle of the sample: the forces necessary to develop a necking are lower than the force needed to pull particles apart from the meshes.
The overall macroscopic equilibrium of the packings (Figure 14.7) is computed from the difference of the total forces acting on a fixed and on a mobile mesh. Under compressive load, at comparable natural period and strain rate, the relative error is a factor three larger than for the model BILIN (Figure 11.4 on page 124). This seems to be a direct effect of the larger time step chosen. To respect an equilibrium error under 10⁻¹ %, as in Part IV, the absolute strain rate would need to be limited to roughly 2·10⁻⁴ s⁻¹.
Qualitatively, on dense samples, well-balanced collective rearrangements seem correct up to 10⁻² s⁻¹. However, parasitic dynamic effects can be observed on very porous geometries at 3.16·10⁻³ s⁻¹. In practice, the simulations were run up to 10⁻³ s⁻¹, corresponding roughly to an error of 1 %. This choice, slightly less conservative than in Part IV, is merely a computational convenience.
A potential issue arising with the model TRILIN is a tendency to numerical crystallization. Indeed, as the repulsive and attractive stiffnesses are more balanced than in BILIN, the distance between the particles is more strictly constrained. Although the overall behavior is satisfactory, the deformation of the lateral free surfaces is not as regular and smooth as for BILIN (Figure 11.9 on page 128). This overall behavior mainly stems from the partial and local numerical crystallization. Some groups of particles (typically 6³) arrange locally on a lattice and tend to follow block-wise motion with the strain. On Figure 14.8, this effect can clearly be seen on the right side of the sample.
This phenomenon is stronger at lower strain rates, as particles have more time to organize into an energy-minimizing configuration. On Figure 14.6, no crystallization is observed at -3.16·10⁻³ s⁻¹. The phenomenon is also stronger for compressive loads, where particles are closer to the meshes, as the planar meshes strongly favor this behavior. On Figure 14.6, crystallization effects under tensile load are mostly limited to the close neighborhood of the meshes. Although the numerical crystallization seems sufficiently limited to mimic the overall rearrangement, it is a limitation for the quality of the model. Preferential strain zones form between the crystallized blocks and the study of local fields is thus severely limited. Numerous work-arounds can be imagined to improve the model, first of which is the introduction of a small dispersion in the radii of the particles.
Influence of the Number of Particles
The flow stress can be captured with a limited number of particles in the packing. The error with respect to a spatially converged state is evaluated. The number of particles also influences the rupture qualitatively. This effect has not been investigated.
The choice of the spatial discretization introduces a purely numerical length scale. The number of particles in the packing thus influences the strain at which the necking is initiated and the final rupture profile (Figure 14.9). This can be understood as the influence of the ratio between the size of the sample and the size of the typical defect, corresponding roughly to the dimension of the particles. For packings of limited size, typically under 5·10⁴ particles (Figure 14.9), the necking is more or less axisymmetric and the effective resisting cross-section progressively loses its thickness and vanishes. For larger packings, multiple defects seem to nucleate in the necking zone and coalesce until the final rupture, somewhat like in the canonical dimple rupture of some ductile alloys.
This dependency of the rupture behavior on the size of the packing is neither designed nor controlled to have physical sense. Although the proposed modeling approach may well be of interest to model ductile rupture, we focus here on the homogeneous response of the material. Further developments would be necessary to specifically study rupture or necking.
A common stress-strain tendency is displayed for packings larger than a few thousand particles (Figure 14.10a): the transient regime is similar and the flow stress value is roughly captured even with a limited number of particles (e.g. 5·10²). However, with a very small packing of 5·10² particles, the crystallization under compressive load becomes excessive. This effect is caused by the harsh boundary condition imposed by the two close meshes. A small sample thus fully crystallizes and jumps from one stable crystallized configuration to another. This parasitic numerical crystallization of the packing is clearly displayed under compressive load by the flow stress oscillation at larger strain. Under tensile load, the effect is not seen on the strain-stress curve or on a cross-section of the sample (see for example the even smaller packing of 3·10² particles on Figure 14.9). With 5·10³ particles, the oscillation of the flow stress due to crystallization starts at larger strain (≈ 0.7).
The size dependency of the necking and the rupture influences the strain-stress behavior under tensile load. On Figure 14.10a, the strain at which the necking starts (i.e. the drop of the flow stress) increases with the size of the packing. The final rupture of the sample occurs at a strain of 0.6, 0.8 and 1.0 for packings of 5·10², 5·10³ and 5·10⁵ particles respectively.
In spite of the numerical artifacts of crystallization (mostly under compression) and size effect (mostly under tension), the spatial convergence of the flow stress (Figure 14.10b) is similar to that of the model BILIN (Figure 11.7b on page 127). The error on the flow stress relative to a packing of 3·10⁶ particles, considered converged, is around 10 and 3 % for packings of 5·10³ and 5·10⁴ particles respectively.
Along with the strain-stress behavior, the volume conservation is a key objective for our model. The volume variation after relaxation (Figure 14.11) is typically around a few percent, which is comparable to the low strain rate sensitivity phase A in the model BILIN (Figure 11.8 on page 128). From the relaxed initial state, after the typical transient regime, the volume decreases by a few percent with compression, with the largest discrepancy observed for the smallest packings. Under tensile load, the volume starts to drop toward values lower than the initial state at rupture. After the rupture, the volume varies insignificantly as the sample is separated into two parts.
Chapter 15
Self-Contact Detection
In the proposed model, inelastic strains are mimicked by neighbor changes in a packing. A dedicated algorithm (introduced in Section 9.5) must identify the interaction of the modeled interfaces and more specifically the self-contact events. This chapter investigates the behavior of this self-contact detection algorithm for simple geometries. The chapter is split into two sections:
• Section 15.1 concerns the methodology to choose the threshold parameters of the algorithm.
• Section 15.2 applies the algorithm to a single spherical pore in a cubic domain. An example of controlled topological change is illustrated.
Threshold Choice
The detection of self-contact events is based on a local metric. A test case, using two blocks of material, is designed to tune the algorithm parameters. A compromise has to be chosen between contradictory types of errors. The chosen set of parameters only slightly modifies the behavior of a dense sample and misses few self-contacts.
To detect self-contact events (Figure 15.1), an "outward vector" n is computed for each particle from the positions of its neighbors. When two particles meet for the first time, the relative orientations and the magnitudes of the vectors are compared to classify the pair as "internal" or "interface". Three thresholds are used: two on the angles (α_ij, α_en) and one on the magnitudes (N_mag). Based on the chosen self-contact detection algorithm, a test case is designed to choose the threshold parameters. The principle of the test case is to perform a compression-tension test starting with two distinct aggregates of particles (Figure 15.2). Each aggregate, chosen initially cuboidal, is explicitly labeled and tracked. Thus, when a new pair is created, the "internal" or "interface" status can be verified by checking whether the particles come from identical or distinct initial aggregates. The two aggregates are successively crushed onto one another and pulled apart, while all pair creations are checked. Two metrics are chosen to quantify the quality of a set of parameters:
• The rate of error at the interface between the two blocks. The "interface error" is the ratio between pairs with particles from distinct aggregates mistakenly considered as "internal" and the total number of new pairs considered as "interface".
• The rate of error inside each block. The "internal error" is the ratio between pairs with particles from identical aggregates mistakenly considered as "interface" and the total number of new pairs considered as "internal".
It must also be considered that the denomination of these errors may be somewhat misleading and only refers to events at the particle level. For example, an "internal" error can occur at the interface, and numerous "internal" errors can lead to losing the modeled interface by opening numerical ones. The two error rates are not of equal weight. The creation of new "internal" pairs is much more frequent, thus causing a high number of errors if the two rates are balanced. However, the detection of "interface" pairs is more critical: a few percent of "interface" error is already too poor a description for our purpose, while 10 % of "internal" error can be acceptable. Indeed, particles typically have numerous "internal" neighbors and few "interface" neighbors; an error on an "internal" neighbor is more easily counterbalanced by the others: the general cohesion is sufficient.
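A minimal sketch of the classification step is given below. The outward-vector definition and the exact geometrical tests are assumptions of this sketch; only the general principle (two angular thresholds and one magnitude threshold, checked when a pair is created) and the threshold values retained later on the Pareto front are taken from the text.

```python
import numpy as np

def outward_vector(pos_i, neighbor_positions):
    """Outward vector of a particle: minus the mean offset to its neighbors.
    Near a free surface the neighbors are one-sided, so the magnitude grows.
    (Sketch: the exact definition used in the thesis may differ.)"""
    offsets = np.asarray(neighbor_positions, dtype=float) - np.asarray(pos_i, dtype=float)
    return -offsets.mean(axis=0)

def classify_new_pair(n_i, n_j, axis_ij, n_mag=2.3, alpha_en=80.0, alpha_ij=65.0):
    """Label a newly created pair as 'interface' or 'internal' (sketch).

    n_i, n_j : outward vectors of the two particles.
    axis_ij  : unit vector from particle i toward particle j.
    The default thresholds are the values chosen on the Pareto front; the
    precise geometrical tests are an assumption of this sketch."""
    def angle(u, v):
        u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    strong = min(np.linalg.norm(n_i), np.linalg.norm(n_j)) > n_mag
    facing = angle(n_i, -np.asarray(n_j, dtype=float)) < alpha_ij  # roughly opposed vectors
    aligned = angle(n_i, axis_ij) < alpha_en                        # pointing toward the other particle
    return "interface" if (strong and facing and aligned) else "internal"
```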
As a guide to find a sensible threshold parameter N_mag, the distribution of the magnitudes ‖n‖ of the outward vectors is bimodal (Figure 15.3), with a transition between 2 and 3 mm. The first guess on the angle thresholds α_en and α_ij was found by simple trial and error.
A full factorial design of experiments is then executed, sweeping the domain (N_mag, α_en, α_ij) = ([1.5, 2.6], […], [35, …]). One compression-tension test is run for each set of parameters. The explored domain favors configurations with low interface error, more critical for our purpose. As the two chosen error metrics are contradictory, the results sketch a Pareto front (Figure 15.4).
The contradictory behavior of the two error types leads to a necessary compromise in the choice of the interaction parameters. As a rough guide for the choice of a specific configuration on the front, the following maximal errors provide a "visually" acceptable overall behavior:
• "Interface" error < 1 %.
• "Internal" error < 10 %.
Within the range, decreasing the "interface" error to ≈ 10⁻¹ % notably improves the tracking of the interface. This implies an increase of the "internal" error from 2 to 3 %, only very marginally modifying the overall dense behavior. The retained set of parameters is thus chosen on the identified Pareto front at (N_mag, α_en, α_ij) = (2.3 mm, 80°, 65°) (crossed out on Figure 15.4).
The chosen set of parameters leads to an "interface" error of ≈ 10 -1 %. To illustrate the effects of this metric, the test case is repeated on various initial geometries (Figure 15.5): cuboids, portions of spheres, cones, cylinders with perpendicular axis and cuboids with a portion of spherical hole. Typically, a few particles are carried by the wrong aggregate after the compression-tension test. An attentive reader may spot some on the cuboid, the cylinder or the spherical hole configurations. Note that this naive metric was not used to choose the parameters and is dependent, among other factors, on the number of particles modeling the interfaces.
The chosen set of parameters leads to an "internal" error of ≈ 3 %. The stress-strain profile for a dense sample, with and without using the self-contact detection, is shown on Figure 15.6. On this Figure, the introduced error is represented by the discrepancy between the lines (without self-contact) and the round markers (with self-contact). The stress is little affected by the chosen self-contact thresholds, including the necking behavior under tensile motion. The average flow stress computed between strains of 0.21 and 0.35 varies by less than 1 %.
As a side note regarding the Pareto front, the dominant parameters to quantitatively tune the metrics of interest seem to be N_mag and α_en. This purely quantitative approach hides many qualitative tendencies that were not studied in depth. To improve the behavior of the algorithm, it seems misguided to attempt a heavy quantitative optimization procedure based on the two chosen quantitative metrics. The understanding of distinct typical configurations may be of interest to design a more comprehensive detection algorithm.
Compression-Tension of a Spherical Pore
A cube with a single spherical pore is compressed up to the mechanical closure of the void. The interface is properly tracked. A tensile load is then applied and the pore re-opens on the cohesionless interface. A "healing" behavior is added: the interface becomes cohesive after a given time of contact. The pore can be partially or fully closed depending on the chosen healing time.
A simple test case for the self-contact detection algorithm is based on a cube of side 2a with a centered spherical hole of radius r. In a uniaxial compression-tension test, the pore is expected to close, while the interface is kept track of, and to re-open under the tensile load.
To visualize this behavior, a handy representation is a slice of width 2r_crown across the pore, displaying the outward vectors only (Figure 15.7). In the chosen example, the relative pore radius is r/a = 0.15. The behavior under tensile load, and more specifically the strain localization around the pore, is influenced by the numerical parameter X_wall. This arbitrary parameter, used to apply tensile loads with meshes, can emphasize or minimize the phenomenon by allowing the particles to leave the mesh more or less easily. This behavior is not intended to represent a physical phenomenon.
It must be remembered at this stage that, as we work in the discrete element method (DEM) framework, arbitrary topological events can be implemented with ease. To illustrate such behaviors, a healing time is implemented in the interaction law (Figure 15.8). For each "interface" pair, a counter sums the time elapsed from the creation of the pair. After a threshold, the healing time, the status of the pair is switched back to a normal "internal" interaction. The pore is fully closed if the healing time is negligible. For an intermediate healing time (such as 100 s on Figure 15.8), the pore re-opens but is significantly smaller.
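The bookkeeping is simple enough to be summarized in a few lines. The sketch below assumes a pair object carrying a status label and an elapsed-contact-time counter; the attribute names are illustrative.

```python
def update_pair_status(pair, dt, healing_time=100.0):
    """Healing-time behavior (sketch): an 'interface' pair becomes a normal
    'internal' pair once it has been in contact longer than the healing time."""
    if pair.status == "interface":
        pair.contact_time += dt              # accumulate time spent in contact
        if pair.contact_time >= healing_time:
            pair.status = "internal"         # the interface has healed
    return pair
```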
The engineering stress for a compression-tension test is compared with and without instantaneous healing for two sizes of pore (Figure 15.9). The expected tendency of a larger peak tension stress with instantaneous healing is observed. For relatively small pores, the difference is not large as the effective variation of resisting tensile section due to the pore is limited after a large compressive strain.
The ability to track or "weld" interfaces is a strength of the method. Our model, used with a simple healing time, is a first approach to mimic the mechanical closure and welding of pores.
Chapter 16
Complex "Porous" Mesostructures
Recalling the overall objective of this PhD, we are attempting to design a method to describe finite inelastic strain in continuous media. The method must also handle the detection, the displacement and the interaction of numerous self-contacts.
This last chapter illustrates the potential of the method on complex geometries obtained by X-ray tomography (refer to Section 9.3). "Porous" is understood here in a loose sense, merely as materials presenting holes, in opposition to dense materials. The chapter is divided into two distinct applications:
• Section 16.1 briefly deals with the mechanical closure and re-opening of casting pores, under compressive and tensile loads.
• Section 16.2 investigates the compression of low relative density foam. The compression is performed up to large strain and the local deformation mechanisms are looked into. Quantitative macroscopic metrics are computed.
Casting Pores in Aluminum Alloy
The morphology of casting pores in aluminum, obtained by X-ray tomography, is discretized. A compression-tension test is applied: the pores are mechanically closed and re-opened. In spite of the large applied strain and the tortuous geometry, the interfaces are tracked. At large tensile strain, a coalescence phenomenon is observed.
The pores shown on Figure 16.1a are formed during the casting of an aluminum alloy. These pores are considered as defects and limit the mechanical properties of the material [239, p.95], for example leading to fatigue rupture under cyclic load. In subsequent elaboration steps, the pores can be closed and "welded" by thermomechanical processes such as hot rolling, improving the mechanical properties. The phenomenon is classically divided into two steps: the purely mechanical closure of the pores [START_REF] Michel Saby | Void closure criteria for hot metal forming: a review[END_REF] and the growth of a cohesive interface. We will focus in this section on the mechanical effects.
The initial image has a resolution of 2.4 µm/pixel and a random packing of roughly 5·10⁵ particles is used. The chosen ratio pixel/r_seed = 2 is coherent with the discretization algorithm. For a finer discretization, a less crude reconstruction than an elementary box filter (Section 9.3) could be used. The pertinence of using the center of the particles can also be discussed. The reconstructed meshed pores in the DEM model are slightly smaller than the pores in the 3D image. During the relaxation procedure, after particles are removed using the image as a mask, the tensile forces in the packing restore the global equilibrium, resulting in a contraction of the pore. This test case is specifically sensitive to that effect as only a few particles are removed in a large packing. To improve the geometrical description, an iterative procedure may be of interest, repeatedly removing particles after a relaxation step. During the iterations, it may be useful to start with smaller pores and progressively increase the size toward the final targeted shape.
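The masking step itself reduces to an indexing operation. The sketch below assumes that the particle positions and the segmented image share the same frame; the function and variable names are illustrative.

```python
import numpy as np

def carve_pores(positions, image, voxel_size):
    """Boolean mask of the particles to keep (sketch): a particle is kept if
    its center falls in a solid voxel of the segmented image.

    positions  : (N, 3) particle centers, in the same frame as the image.
    image      : 3D boolean array, True where matter is present.
    voxel_size : edge length of a voxel (here 2.4 um/pixel).
    A relaxation step would follow, which slightly shrinks the carved pores."""
    idx = np.floor(np.asarray(positions) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(image.shape) - 1)   # guard the image borders
    return image[idx[:, 0], idx[:, 1], idx[:, 2]]
```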
The objective of the simulation is to compress the sample containing pores up to their mechanical closure. The model must not lose track of the interface. In a second stage, a tensile load is applied on the compressed sample to pull it back to its initial height. The pores must re-open on the tracked interfaces.
A visualization method for the pores must be proposed, as our method is fully implicit: no explicit "interface" object that could be shown is defined. The visualization proposed here relies on a mesh reconstruction on the particles with an outward vector magnitude ‖n‖ > 2.3 mm (Figure 16.2). The particles from the exterior of the domain are manually removed. All particles are overprinted with a strong transparency to represent the packing. Some noise resulting from this visualization procedure can be seen on Figure 16.2, for example the small and sharp mesh in the lower part of the initial state.
Small holes appear on the reconstructed mesh under tensile load. This "sieve" aspect is a reconstruction artifact, stemming from the increase of the total surface of the interfaces: particles migrate toward the free surface. As an arbitrary threshold on ‖n‖ selects the particles for the mesh reconstruction, some migrating particles are missed.
Under compressive load (Figure 16.2), the volume of the porosity effectively reduces, until all interfaces are in contact. The interfaces and their interactions are effectively tracked, in spite of the large prescribed compressive strain. The interfaces that come into contact early actually undergo a large strain while being closed.
The pores re-open during the tensile phase and are not lost during the whole compression-tension process. The pores also coalesce from a strain of 0.3 onward, to finally form a single large pore in the final state. This complex topological change seems to be described with ease by the method. It must however be remembered that the necking and rupture phenomena are influenced by size effects. At this stage, the behavior is thus illustrative of the potential of the model but not necessarily representative of a physical event.
In the shown example, the discretization is too rough to properly describe the deformation of the smallest pore (nicely rounded and near the center in the initial state). The thickness of the mesh when the pores are totally closed (for example at a strain of 1.0) stems from the mesh reconstruction: the mesh goes through the centers of the particles, located on one side or the other of the interface.
To verify that the pores do indeed re-open on the tracked interfaces, it is possible to visualize thin slices of the sample (Figure 16.3). As the sample deforms, the interfaces move through the fixed slice position. It is however possible to see the re-opening of the pores, knowing from the 3D views that the interfaces are not altogether lost.

A typical experimental metric of interest is the evolution of the pore volume. An estimation of this metric (Figure 16.4) can be computed using mesh reconstructions. The numerical parameter driving the mesh reconstruction in ovito is a probing radius [START_REF] Stukowski | Computational analysis methods in atomistic modeling of crystals[END_REF]. Two reconstructions, using all particles, are executed: one with a very large probing radius (5 mm) and a second one with a small probing radius (0.9 mm). The difference between the volumes of the meshes is influenced by the greater roughness of the surface with the low probing radius. The evolution of the volume of the pores is thus corrected using a reference configuration where the pores are considered closed (here a strain of 1.0). This first approximation is rough: at the beginning of the tensile load, the estimated volume is slightly under zero at a strain of 0.8, which is within the measurement noise.
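The correction bookkeeping can be written as follows; the surface reconstructions themselves are left to the visualization tool, and the sketch below only encodes the subtraction of the reference state described above.

```python
def pore_volume(v_outer, v_solid, v_outer_ref, v_solid_ref):
    """Estimate the pore volume from two surface reconstructions (sketch).

    v_outer / v_solid : mesh volumes with a large / small probing radius.
    *_ref             : same quantities at the reference state where the pores
                        are considered closed (here a strain of 1.0).
    Subtracting the reference removes the bias introduced by the roughness of
    the low-radius reconstruction."""
    return (v_outer - v_solid) - (v_outer_ref - v_solid_ref)
```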
Regarding the general trend, the pore does not seem to re-open instantly. This may not be unrealistic, but it may also stem from the harshness of the test case for the model. Indeed, the change of velocity of the plane is instantaneous, from one time step to the next. As shown for example on Figure 14.4 on page 154, the initiation of the flow from an equilibrium state starts with a transient regime, roughly over the first 0.1 of strain. A similar effect, but of greater magnitude, is to be expected here. The potential effect of the "interface" errors (Section 15.1), i.e. pairs erroneously considered as cohesive, has not been investigated.
Aluminum Open Cell Foam
The geometry of a metallic foam with low relative density is discretized. The foam is compressed up to large strain, for various relative densities and strain rates. After a short description of the discretization procedure, three sections respectively deal with the qualitative deformation mechanisms, the quantification of the macroscopic flow stress and a first approach to the estimation of the local stress field.
To some extent, an open cell foam can be considered as a material where the pores are fully percolated. From the point of view of our model, it makes no conceptual difference to consider an arbitrarily low density. The limit is the choice of the discretization: only packings display the expected behavior, not isolated particles. A slender beam or a thin shell must be discretized with at least a few particles across its smallest thickness.
We study here the case of a low relative density structure. Figure 16.5a shows a tomography reconstruction [START_REF] Zhang | Local tomography study of the fracture of an ERG metal foam[END_REF] of an ERG aluminum foam of relative density 6.6 %. The dimensions of the sample are 4×10×15 mm³, and the original image resolution is approximately 13 µm/pixel. At the chosen discretization (Figure 16.5b), pixel/r_seed = 1.5, the typical width of an arm of the foam is discretized with fewer than a dozen particles. The total number of particles in the simulation is slightly under 5·10⁵.
Qualitative Behavior
Bending mechanisms dominate the deformation of the foam. Weak zones are preferentially deformed until a self-contact event hinders their deformation. Large strain, displacement and rotation are modeled.
When such a geometry is crushed, the deformation ultimately leads to interactions of the arms of the foam: self-contacts. Our method readily allows the deformation up to large strains (Figure 16. ). The chosen geometry is very thin and delicate on the boundary of the domain: isolated arms must distribute the load to the whole sample. Practically, during the first percent of strain (Figure 16.16), the arms directly in contact with the meshes locally deform and cope with most of the strain. Local details are thus crushed until a sufficient surface of interaction to distribute the load is created. This is not a numerical artifact, but rather the consequence of the geometry choice.

Regarding deformation mechanisms, the crushing of the foam is dominated by bending and instabilities. Illustrative examples are taken from the beginning of the deformation (strain range 0 - 1) at -1·10⁻³ s⁻¹.
A first example is an isolated arm (dashed on Figure 16.9). Very early in the deformation (before a strain of 0.2), the off-axis compressive load (arrows at a strain of 0.05), imposed by the surroundings, induces a severe bending. Being of small section and isolated, the arm cannot withstand the load. At a strain of 0.3, two arms above and below come into contact (circled at 0.30). The deformation of the arm is thus temporarily hindered. This flexion mechanism of a local weakness, active until the deformation is halted by a contact event, is widespread in the structure.

The second example is the collapse of a cell (Figure 16.10). Initially roughly hexagonal (sketched at a strain of 0.05), the cell at first partially withstands the load imposed by the lower and upper vertical pillars (arrows at 0.05). The superior part of the hexagon copes with most of the deformation and progressively deforms, up to a strain of 0.2. The summits of the hexagon behave like hinges and the arms bend to comply with the varying angles. The deformation then accelerates and the shape degenerates to a rectangle (sketched at 0.35), with the pillars pushing in the middle of the horizontal sides. The cell collapses rapidly, while the vertical pillars coped with very little deformation from the beginning. Several typical traits are displayed: the rotation of the arms around hinge-like joints of the foam and the preferential deformation of horizontal arms by "three point" bending.

A third example concerns the relative rotation of blocks of material. The lower dashed circle (at a strain of 0.5) represents a "hole" in the foam, which is not propped by arms of material. During the compression, this volume is thus drastically reduced. In contrast, the upper dashed circle is a strongly sustained cell. It is slightly deformed during the compression, but it keeps its original shape in this first stage of compression. The overall effect is the relative rotation of blocks of material (dashed lines). Successive events of this type occur simultaneously and sequentially, until the motions of weaknesses of this type are all blocked by contacts and the "strong" cells have to deform.

Overall (Figure 16.12), the deformation is dominated by the bending and buckling of the arms, successively deformed in the weaker zones of the foam. These zones are defined by the geometry and orientation of the arms and the cells, as well as by the surrounding geometrical configuration, which will transmit or sustain the load. The deformation involves localized large strains and large rotations. The self-contact events ultimately hinder the deformation of the weaker zones, transmitting the load elsewhere.
Macroscopic Stress
The macroscopic flow stress to deform the foam compares well with the literature. Initially very limited compared to the flow stress of the dense material, the stress increases as the structure collapses and the mutual support of the arms generalizes. The effect of the relative density and the strain rate are investigated in two dedicated sections.
To quantify the loads that the foam can support, the apparent engineering flow stress can be computed, using the initial cross-section of the sample and the sum of the forces acting on the meshes.
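In sketch form, assuming the compression is applied along the z axis and that the per-particle forces transmitted to one mesh are available:

```python
import numpy as np

def engineering_stress(mesh_forces, initial_cross_section):
    """Apparent engineering stress (sketch): axial resultant transmitted to a
    mesh divided by the initial cross-section of the sample."""
    axial_resultant = np.sum(np.asarray(mesh_forces)[:, 2])  # z components
    return axial_resultant / initial_cross_section
```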
The typical stress-strain profile of a compression up to large strain is shown on Figure 16.13. Very qualitatively, on a linear scale, the deformation of the foam requires a limited stress with respect to the flow stress of the dense material (30 MPa at 10⁻³ s⁻¹). Starting at a strain of 1, the flow stress progressively increases, rising rapidly in the final strain range 2 - 3. Using a semi-logarithmic scale (Figure 16.14a), several traits can be identified. The sharp overshoot observed in the first percent of strain will be examined in Section 16.2.2.2. Up to a strain of 0.5, the created contacts seem isolated enough for other zones to cope with the deformation without a notable increase of the macroscopic flow stress. In the strain range 0.1 - 1, the flow stress is within the range 0.1 - 0.2 MPa.
To further quantify the credibility of the modeled stress, the apparent flow stress of this open cell foam is evaluated as follows [8, Eq. 9a]:
\[
\sigma_{\mathrm{foam}} = \sigma_{\mathrm{dense}} \cdot \rho_{\mathrm{rel}}^{\frac{3N_n + 1}{2N_n}} \cdot \left(\frac{N_n + 2}{0.6}\right)^{\frac{1}{N_n}} \cdot \frac{N_n}{1.7\,(2N_n + 1)} \tag{16.1}
\]
With ρ_rel ≈ 0.066, N_n = 1/M ≈ 14 and σ_dense ≈ 30 MPa, the estimated flow stress for the foam is σ_foam ≈ 0.17 MPa. The order of magnitude of this apparent flow stress is thus correctly captured. From a strain of 0.5 to 2.5, the flow stress increases with a tendency somewhat similar to the strain-density behavior (Figure 16.14b). The flow stress reaches 1 % of the dense flow stress at a strain of 2. Self-contacts progressively generalize to the whole sample, following a somewhat stable trend in the strain range 1 - 2.5.
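As a numerical check of Equation 16.1 with the values quoted above (the function name is illustrative):

```python
def foam_flow_stress(sigma_dense=30.0, rho_rel=0.066, n=14.0):
    """Evaluate Equation 16.1 for the apparent flow stress of the open-cell foam."""
    return (sigma_dense
            * rho_rel ** ((3 * n + 1) / (2 * n))
            * ((n + 2) / 0.6) ** (1 / n)
            * n / (1.7 * (2 * n + 1)))

print(round(foam_flow_stress(), 2))  # -> 0.17 (MPa)
```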
Note that at hypothetical full geometrical densification, the expected behavior is not that of the dense samples: the numerous discontinuities are still tracked and are not cohesive. Although our discretization with spheres introduces a surface roughness, leading to a numerical friction, the asymptotic flow stress of the "dense" foam is expected to be lower than the flow stress of a genuinely dense sample.
The instantaneous cross-section and volume of the sample are delicate to estimate, more specifically at large strains. To estimate the relative density, i.e. the compaction of the foam, mesh reconstructions are used on the particles. The overall volume occupied by the foam and its solid volume are estimated using respectively a large probing sphere and a small probing sphere. As the mesh reconstruction algorithm of ovito has not been modified to take our interfaces into account, the results are somewhat rough and arbitrary; hence, subtle variations at large strain cannot be measured.
The general compaction tendency can be captured (Figure 16.14b) and is compared to a hypothesis of constant cross-section. In the strain range 0 - 1, the slope is similar: the change of the cross-section indeed seems negligible. Some discrepancy can be found at higher strain, coherent with the qualitative observation (Figure 16.6), but the metric is a little too rough for further analysis.
At large strain, close to the full compaction, using metrics based on the local fields may be more appropriate. For example the density could be estimated based on the number of neighbors of each particle. To compute the true stress close to the compaction, an averaging of the local stress field may be possible.
Effect of the Relative Density
The flow stress variation induced by the relative density is in good agreement with our reference. In the studied range of relative densities, no major change of the deformation mechanisms is observed.
Starting from the initial 3D image, with a relative density of 6.6 %, the foam is dilated to densities of 12 and 18 %. The two new images are then discretized using an identical procedure and resolution (Figure 16.15). The number of particles thus increases with the density. The behavior of the three geometries is compared in the strain range 0 - 1. With the exception of a few details (some thin links are removed at lower density due to the numerical discretization), the connectivity and overall geometry of the three structures are similar. This similarity of the geometries is reflected in the qualitative mechanical behavior (Figure 16.16). Although the bending resistance increases with the section of the arms, similar deformation is observed. In local configurations, the changes of relative resistance of the geometrical features do influence the response. Overall, no major modification is observed within this relative density range.
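The densification of the segmented image can be obtained, for instance, by iterated binary dilation; the sketch below is one plausible implementation of the "dilation" mentioned above, not necessarily the exact procedure used.

```python
from scipy import ndimage

def densify_foam(image, iterations):
    """Thicken the arms of a segmented foam image by binary dilation (sketch).

    image      : 3D boolean array (True = solid voxel).
    iterations : number of one-voxel dilation passes, tuned until the target
                 relative density (e.g. 12 or 18 %) is reached."""
    dilated = ndimage.binary_dilation(image, iterations=iterations)
    relative_density = dilated.mean()   # fraction of solid voxels
    return dilated, relative_density
```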
The density clearly has an impact on the stress-strain behavior (Figure 16.17a). The overshoot effect, in the first percent of strain, will be examined in Section 16.2.2.2. Before numerous self-contacts occur, for example around a strain of 0.2 at the beginning of the compression, the flow stress increases with the density. Indeed, the thicker arms require more effort to be deformed. For all three configurations (Figure 16.17a), the order of magnitude of the flow stress (computed using Equation 16.1) is correctly captured.
As the arms of the foam get thicker, they may also start to meet each other earlier in the deformation. Comparing the stresses at strains of 0.2 and 1, this overall mechanical effect seems to be secondary for the studied density and strain ranges. For all three densities, the general trend of the evolution of the stress with the strain also seems similar, and the multiplicative factor of the stress between strains of 0.2 and 1 is comparable.
Equation 16.1 also exhibits a power law of the flow stress with respect to the density, with an exponent depending solely on the strain rate sensitivity N_n = 1/M ≈ 14.0:
\[
\frac{3N_n + 1}{2N_n} \approx 1.54 \tag{16.2}
\]
The order of magnitude of this trend is also correctly captured by the model (Figure 16.17b). The choice of a strain and of an averaging window to compute the flow stress is somewhat arbitrary; an exact match of the stress level of the law would thus be fortuitous. However, the measured exponent of the power law seems to be little influenced by these choices: slopes obtained with an average over the strain ranges 0.2 - 0.3, 0.45 - 0.55 or 0.7 - 0.8 give similar results. This illustrates the similarity of the strain-stress trends for the distinct relative densities studied. The slope tends to be overestimated, slightly between relative densities of 12 and 18 %, more markedly between 6.6 and 12 %.

The general study of the effect of the discretization on the flow stress was only carried out on unidirectional tension and compression tests (Section 14.3). In contrast, the deformation of the structure is dominated by bending and instabilities. To check that we indeed observe an effect of the relative density of the foam (Figure 16.17a), two distinct discretization sizes are compared. The geometry at a relative density of 6.6 %, initially discretized with pixel/r_seed = 1.5, is discretized again with pixel/r_seed = 1.2. The number of particles used to model the geometry thus rises from 4.8·10⁵ to 9.2·10⁵. Qualitatively, the rough discretization misses some fine geometrical details that are correctly captured with more particles. It is thus possible to find local configurations with some discrepancy in the exact deformation mode. Overall, the mechanisms are not affected and the final states are comparable. The order of magnitude of the flow stress at a strain rate of 10⁻³ s⁻¹ (Figure 16.18) shows good agreement. The overshoot effect, in the strain range 0 - 0.1, is slightly more pronounced for the finer discretization, for reasons that will be examined in Section 16.2.2.2. The larger number of particles tends to smooth the oscillations of the stress. The general tendency and the stress level are similar: the observed effect of the relative density is thus not a numerical artifact stemming from the change of the number of particles.
Effect of the Strain Rate
At low strain rate, an overall buckling of the full sample is observed. Although the time dependence of creep buckling is a physical phenomenon, it will not be investigated here. The limitations of the current model in terms of strain rate are investigated.
The effect of the strain rate is investigated by comparing the results at 10⁻³ s⁻¹ to a lower strain rate of 3.16·10⁻⁴ s⁻¹, for the three relative densities tested previously (Figure 16.19). In the first few percent of strain, no overshoot effect is observed at 3.16·10⁻⁴ s⁻¹. This effect will be examined at the end of this section.
In the range 0.2 - 0.3, the macroscopic flow stress is very similar at both strain rates. This is coherent with the fact that the foam strain rate sensitivity should be equal to the dense material strain rate sensitivity [8, Eq. 9a]. With a strain rate sensitivity M = 7.14·10⁻², the relative variation of flow stress for a factor 3 on the strain rate should be around 8 %. Our model is not able to capture this subtle variation on the chosen geometry. A slight reduction of the stress at a strain of 0.2 can be observed for a supplementary strain rate of 10⁻⁴ s⁻¹, run for the density of 6.6 %. However, this direct effect of the strain rate is masked by another phenomenon and cannot be more precisely investigated.
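The expected variation follows directly from the Norton form assumed earlier:

\[
\frac{\sigma(\dot{\varepsilon}_1)}{\sigma(\dot{\varepsilon}_2)} = \left(\frac{\dot{\varepsilon}_1}{\dot{\varepsilon}_2}\right)^{M} \approx 3^{\,7.14\times 10^{-2}} \approx 1.08 ,
\]

hence the quoted variation of roughly 8 %.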
Starting at strains between 0.3 and 0.4, the stress at 3.16·10⁻⁴ s⁻¹ drops by a factor 1.5 (Figure 16.19). This drop is far too large to be the result of the strain rate; it is induced by a mechanical instability (Figure 16.20).
The chosen geometry of the foam is rather slender in the loading direction: the height/thickness ratio is 3.75. The chosen boundary conditions also promote such a phenomenon: the particles are free to translate on the meshes. This freedom allows an easy propagation of the instability. It is thus not physically incoherent to observe a buckling behavior under compressive load. In addition, the creep buckling behavior of materials proves to be time dependent [START_REF] Chapman | A theoretical and experimental investigation of creep buckling[END_REF]. It is thus plausible that a critical time threshold is reached at lower strain rate.
At a density of 6.6 % (Figure 16.19), this hypothesis can be tested by comparing the stress profiles at 3.16·10⁻⁴ and at 10⁻⁴ s⁻¹. The stress drop characterizing the instability seems to occur in the strain range 0.4 - 0.6 and near a strain of 0.2 respectively. Converted into time, this would lead to respective buckling times of 1.3·10³ - 1.9·10³ s and 2·10³ s, at the considered stress.
The quantitative analysis was not pushed further and a thorough verification that we do not merely observe an unwanted numerical artifact was not carried out. For example, although we did not spot such an event, the numerical crystallization depends on the strain rate and could favor preferential modes of deformation. A general conclusion regarding creep buckling cannot be drawn from these very preliminary observations, based on a single geometry, where all buckling events occur on the same defect of the geometry. However, although we do not purposely model buckling, and we do not control it here, the method may be promising to study instability phenomena occurring in foams at large inelastic strain.
We will now examine the sharp overshoot effect, observed in the first few percent of strain at a strain rate of 10⁻³ s⁻¹. This overshoot is a numerical artifact stemming from the numerical method of prescription of the strain rate. Indeed, an interesting feature of this test case for the robustness of the model is the large dimension of the sample in the loading direction. As we work at prescribed strain rate, the absolute velocities of the meshes increase with the size of the sample. As we work with a fixed time step, a critical regime is met when the mesh can escape from, or go across, the geometrical zone of interaction of a particle in a few time steps. The configuration shown at 10⁻³ s⁻¹ is actually close to this critical numerical regime.
Overall, the global rearrangement behavior is still observed at a strain rate of 3.16·10⁻³ s⁻¹. Local qualitative deformation configurations are correct. However, the packing does not manage to fully collectively cope with the deformation: the cells of the foam deform more near the meshes. This "dynamic-like" effect is not suitable for our purpose.
The initial numerical "height" H of the sample is 264 mm. At -3.16·10⁻³ s⁻¹, the initial absolute rate of change of the height Ḣ can be computed as:
\[
\dot{H} = \dot{\varepsilon} \cdot H = 3.16\times 10^{-3} \cdot 264 = 8.34\times 10^{-1}\ \mathrm{mm\,s^{-1}} \tag{16.3}
\]
The variation of height ∆H at each time step ∆t is thus:
\[
\Delta H = \dot{H} \cdot \Delta t = 8.34\times 10^{-1} \cdot 10^{-1} = 8.34\times 10^{-2}\ \mathrm{mm} \tag{16.4}
\]
This is numerically large compared to the dimensions of the particles (r_seed = 0.5 mm and r_crown = 0.7 mm): more than one tenth of the crown radius. If the packing is initially at rest, the meshes are able to cross or leave the particles in very few time steps. Our model relies altogether on a collective motion: reasonable local relative velocities are only reached after the transient regime where the flow is initiated. For large packings, a numerical work-around must be provided to initiate the flow.
In the spirit of our method, it would be desirable to limit such procedures to the transient regimes, at each non-smooth change in the loading of the sample.
A tested procedure was the prescription of the initial velocities of the particles (in three directions, based on the expected overall flow), depending on their position in the sample. This naive attempt did not significantly improve the behavior; it seems necessary to at least introduce a random component to obtain suitable relative velocities between the objects.
Local Field
A rough estimation of the local stress field is proposed. On the studied configurations, the observed tendencies are in agreement with the qualitative behavior of the deformation. Further statistical analyses are required to provide quantitative data.
The estimation of local field is not trivial from a statistical point of view. Temporal and spatial averages are necessary, and their effect must be understood (refer to Appendix A). In addition, the presence of interfaces may lead to cumbersome procedures, although this is not a conceptual limitation.
However, the macroscopic behavior of the model is solely driven by the local phenomena. This behavior seems correct with regard to some tested metrics. It should thus be possible to define sensible intermediary-scale metrics, between the macroscopic and particle-wise scales. On dense samples, the behavior of packings of a few hundred particles could already be considered as a good approximation of the model (Section 14.3), even with the strong boundary effects of the meshes.
As a mere illustration of the interest of a local field approximation, some examples are given regarding an approximation of the stress field. For each particle, the stress is approximated with Equation 8.12 on page 96. The components of the stress tensor are averaged particle-wise, over a sliding window of strain of 10⁻². The signed equivalent Mises stress is then computed. Finally, a spatial average within a radius of 2.5 mm is performed at each particle. Looking at the arm bent by the bottom pillar (circled on Figure 16.21), the zones under tensile and compressive loads are qualitatively coherent with the observed bending of the structure. Opposite to the crushing pillar, for example, the arm is under tensile load.
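The post-processing chain can be sketched as follows. The per-particle stress itself (Equation 8.12) is not reproduced here; the sign convention for the equivalent stress (sign of the trace) and the brute-force spatial average are assumptions of this sketch.

```python
import numpy as np

def signed_mises(sigma):
    """Signed equivalent Mises stress of a 3x3 stress tensor (sketch: the sign
    is taken from the trace; the exact convention of the thesis may differ)."""
    sigma = np.asarray(sigma, dtype=float)
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)      # deviatoric part
    mises = np.sqrt(1.5 * np.sum(dev * dev))
    return np.sign(np.trace(sigma)) * mises

def spatial_average(values, positions, radius=2.5):
    """Average a per-particle scalar within a given radius around each particle."""
    positions = np.asarray(positions, dtype=float)
    values = np.asarray(values, dtype=float)
    out = np.empty(len(values))
    for i, p in enumerate(positions):
        mask = np.linalg.norm(positions - p, axis=1) <= radius
        out[i] = values[mask].mean()
    return out
```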
Looking at a broader view of the sample (Figure 16.22), the zones currently undergoing deformation could be identified using the map of the estimated stress field. At this stage, it is delicate to draw conclusions, as most of the structure bears a very limited local stress. The numerical noise is thus large compared to the observed signal.

Chapter 17
General Conclusion
In a nutshell, this PhD questioned the potential of the DEM framework for the description of finite, inelastic and incompressible transformations of continuous media. This exploratory attempt was triggered by the intrinsic numerical properties of the method, as the flexible handling of contact events and a straightforward massive parallelization. This concluding chapter is divided into five short sections respectively dealing with the objectives of the PhD, the adopted strategy, the main results, some limits and finally potential developments.
General Objective
The general starting point of the work is to study the forming of multiphase materials. The physical phenomena of interest are the mechanisms driving finite inelastic strain in architectured metallic materials, at the scale of the constitutive phases.
In order to partially decorrelate the numerous effects (e.g. morphological and rheological), a metallic composite is designed as a model material to focus on the influence of the rheology. The behavior of the composite (spheroidal amorphous Zr₅₇Cu₂₀Al₁₀Ni₈Ti₅ inclusions in a crystalline copper matrix) is experimentally studied with in situ X-ray tomography hot compression tests (Figure 17.1). The co-deformation of the phases is observed, with an interesting dependency of the rheological contrast on the temperature and the strain rate (Section 2.5). Modeling tools are seen as complementary to the experimental analysis. The modeling objective is thus to describe the inelastic finite strains and the interactions of interfaces observed experimentally. The model must handle numerous interface interactions and topological events, like for example pore closure and phase decohesion or fragmentation (Section 3.3).
A reading grid focusing on algorithmic features can help to highlight distinctions and similarities for the selection of numerical methods. For our objective, methods designed to solve partial differential equations (PDE) display a variety of strategies to include the description of discontinuities in their framework. Methods fundamentally based on a discrete topology can in turn propose phenomenological routes to mimic the behavior of continuous media.
Among potential modeling tools, the DEM is innately suited to handle numerous contacts and topological events. The chosen modeling strategy is thus phenomenological. The research question focuses on the assessment of the pertinence of this choice to meet our modeling objectives (Chapter 7). The objective of the PhD is to develop a DEM algorithm for the finite inelastic transformation of incompressible multi-materials.
Modeling Strategy
Rather than a precise road map, the experience of this PhD advocates for a specific development strategy. In short, sticking too strictly to the expected elementary mechanisms or physical parameters can be misleading when designing models. In the context of the DEM, two design strategies were considered:
• Adding-up elementary physical behaviors.
A tempting route in the design strategy is to build a numerical model as close as possible to the physical model of the observed elementary phenomena: a bottom-up approach. The underlying hope is that intrinsic physical parameters may be directly fed into the numerical model. Computational and physical issues are considered separately, with no guarantee that their respective requirements match. In many configurations, such a literal transcription may result in impractical numerical models. In such cases, the models are then marginally modified toward a more computationally suitable state. To remain coherent with its grounding assumptions, this approach can only be reasonably applied to a restricted range of phenomena. In many cases, the marginal modifications are barely sufficient for a reasonable numerical behavior, while denaturing the sought-for physics of the elementary mechanisms. This strategy was not followed in this PhD.
• Tuning an overall collective behavior.
The design process can start in a radically opposed direction, explored in this work: an ad hoc, computationally reasonable model is built, without requirements on similarity with the physical elementary mechanisms. The design is altogether focused on the modeling objective, i.e. mimicking a physical phenomenon with a collective behavior. The constructed model, potentially highly counter-intuitive, is meant to be as simple as possible at the elementary level and to have an acceptable numerical behavior.
The two strategies may be benchmarked to compare the benefits and drawbacks of the approaches for the understanding of a given phenomenon: complementary data may become accessible. The intrinsic properties and limits of the numerical tools may also promote one strategy over the other. A common configuration in the DEM is that little faith can be placed in a "realistic" description of the elementary interaction (Section 3.1.1).
An approach focused on collective behavior is thus a practical work-around to mathematical (does a unique solution exist?) and numerical or computational (how do errors accumulate?) unknowns. On a very practical note, the proposed model is thus an attempt to tune the behavior of numerous and simple interacting objects to meet our modeling objectives.
Results and Applications
To our knowledge, the existing "meshless" approaches to model inelastic phenomena implement them at the level of the computing points. Our proposal relies on the neighbor changes of "undeformable" fictitious particles. Ad hoc interaction laws are implemented, and the collective behavior of large packings is meant to mimic key features of macroscopic metallic viscoplasticity: macroscopic overall shape, volume conservation, stress-strain behavior and strain rate sensitivity.
The model is solely based on local and relative metrics, at the level of the elementary particles, with no macroscopic artifact promoting an expected solution. Oftentimes, discrete methods use global numerical artifacts to converge faster toward a presumptive solution: global viscous damping, affine transformation, etc. In contrast, all forces acting on our particles are deduced from the relative kinematics of their neighbors and of the boundaries. The potential of the model is closely linked to this grounding principle.
A somewhat contentious achievement of this PhD is the conceptual simplicity of the proposed model. Although numerous and complex configurations were investigated during these three years, the final proposal and its implementation are lean. This points out:
• The potentially complex and varied applications of simple modeling principles. If the behavior of the model had constrained us to treat each test case separately, with adequate tuning and artifacts, little faith in the predictive ability of the model could be granted.
• The compliance with the initial intent to limit the implementation to genuinely new features, reusing existing and efficient libraries and software solutions.
Three applications of interest are summed up in the following sections: finite inelastic strain, self-contact and complex mesostructures.
Inelastic Strain
The principle of the model is a set of attractive-repulsive spherical particles discretizing a continuum. Under external loads, the packing of particles collectively copes with strain. The rearrangements (Figure 17.2), with arbitrary neighbor changes, account for irreversible strain. Two interaction laws are proposed (a generic illustration of such a pair law is sketched after the list):
• The model BILIN, dealing with compressive loads only.
• The model TRILIN, able to cope with both compressive and tensile loads.
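To fix ideas, the sketch below gives a generic reading of such an attractive-repulsive pair law: a repulsive branch of stiffness k_rep when the seeds overlap, and an attractive branch of stiffness k_att as long as the crowns overlap. The exact branches, thresholds and parameter values of the BILIN and TRILIN laws are defined in the thesis and may differ; this function is only an illustration of the principle.

def pair_force(distance, r_seed_i, r_seed_j, r_crown_i, r_crown_j,
               k_rep=1.0e8, k_att=1.0e7):
    # Illustrative convention: positive value = repulsion, negative = attraction.
    # The indentation is (seed_i + seed_j) - distance.
    indentation = (r_seed_i + r_seed_j) - distance
    if indentation > 0.0:
        return k_rep * indentation            # seeds overlap: repulsion
    if distance < (r_crown_i + r_crown_j):
        return k_att * indentation            # within the crowns: attraction
    return 0.0                                # beyond the crowns: no interaction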
A calibration procedure allows the tuning of the stress level and the strain rate sensitivity to mimic a perfect viscoplastic Norton law. Strain rate sensitivities up to M ≈ 0.5 can be modeled on one decade of strain rate. Small strain rate sensitivities can be correctly approximated over various decades of strain rate.
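As an illustration of the calibration principle, the Norton law σ = K ε̇^M can be fitted to the flow stresses measured on a few calibration runs by a linear regression in log-log space. The numerical values below are placeholders, not results of the thesis.

import numpy as np

# Placeholder calibration data: imposed strain rates and measured flow stresses.
eps_dot = np.array([1e-4, 3e-4, 1e-3, 3e-3])   # strain rates [1/s]
sigma   = np.array([12.0, 15.5, 20.0, 26.0])   # steady flow stresses [MPa]

# Fit ln(sigma) = M * ln(eps_dot) + ln(K): the slope is the strain rate sensitivity.
M, lnK = np.polyfit(np.log(eps_dot), np.log(sigma), 1)
K = np.exp(lnK)
print(f"K = {K:.1f} MPa.s^M, M = {M:.2f}")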
Single materials can be compressed up to large strains (ε = 1) with a controlled relative error on the volume and the flow stress. After a numerical transient regime, the typical precision on both metrics is around 5 to 10 %. Under tensile load, necking and rupture are displayed but not controlled.
Simple bi-material configurations are compared to results obtained by the finite element method (FEM) (Figure 17.3). On macroscopic metrics (flow stress and morphology), the error of the developed model is of the same order of magnitude as the error on a single material. No excessive errors seem to be introduced when simulating multi-material configurations.
Self-Contact
An algorithm to detect physical self-contact events, i.e. the interaction of an interface with itself, is proposed. The self-contact detection is based on an approximation of the free surfaces, for each particle, from the positions of its neighbors (Figure 17.4). This is, to our knowledge, a novel approach. The proposed framework allows a flexible and controlled handling of topological events. As an example, a healing time of the interfaces is implemented: two particles modeling opposite sides of a mechanically closed interface display an attractive behavior after a threshold time. Given a locally computable metric, arbitrary behaviors can be implemented with ease.
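A minimal sketch of the underlying idea is given below. It is an illustrative reimplementation with SciPy, not the LIGGGHTS fix of the appendix: the outward vector of a particle is the opposite of the mean direction towards its neighbors, its magnitude flags surface particles, and a contacting pair is treated as an interface (self-contact) pair when the two outward vectors face each other across the pair direction. The threshold values are placeholders.

import numpy as np
from scipy.spatial import cKDTree

def outward_vectors(pos, cutoff):
    # Neighbors "push" the outward estimate away from them: bulk particles end
    # up with a small magnitude, surface particles with a large one.
    tree = cKDTree(pos)
    n = np.zeros_like(pos)
    for i, j in tree.query_pairs(r=cutoff):
        u = (pos[j] - pos[i]) / np.linalg.norm(pos[j] - pos[i])
        n[i] -= u
        n[j] += u
    return n

def is_interface_pair(i, j, pos, n, mag_out=0.5, cos_thr=0.7):
    # Both particles must look like surface particles and their outward vectors
    # must face each other across the pair direction.
    e = (pos[j] - pos[i]) / np.linalg.norm(pos[j] - pos[i])
    mi, mj = np.linalg.norm(n[i]), np.linalg.norm(n[j])
    if mi < mag_out or mj < mag_out:
        return False
    return np.dot(n[i] / mi, e) > cos_thr and np.dot(n[j] / mj, -e) > cos_thr

Within such a framework, the healing time mentioned above simply amounts to switching the status of a pair, and thus the applied branch of the interaction law, after it has been flagged as a closed interface for a threshold number of steps.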
Complex Mesostructure
Complex mesostructures are directly discretized from 3D images obtained by X-ray tomography. The procedure is algorithmically cheap: a segmented 3D image is used as a mask on a random packing of particles, to set their properties or remove them.
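A minimal sketch of this mask operation is given below; the array names, the voxel size and the label convention (0 for void) are assumptions made for the illustration.

import numpy as np

def discretize_from_image(positions, labels, voxel_size, origin=(0.0, 0.0, 0.0)):
    # positions: (N, 3) particle centers; labels: segmented 3D image (integer phases).
    # Each particle takes the label of the voxel containing its center;
    # particles falling in the void (label 0) or outside the image are removed.
    ijk = np.floor((positions - np.asarray(origin)) / voxel_size).astype(int)
    inside = np.all((ijk >= 0) & (ijk < labels.shape), axis=1)
    phase = np.zeros(len(positions), dtype=int)
    phase[inside] = labels[tuple(ijk[inside].T)]
    keep = phase > 0
    return positions[keep], phase[keep]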
A first application was the forming of the amorphous/crystalline metallic composite that originally triggered the study. The 170 physical inclusions of a full tomography sample are discretized and the compression is simulated (Figure 17.6). The model describes the co-deformation of the phases in the material. Specific configurations of interest are compared to in situ measurements, applying the locally measured strain rates. However, the experimental crystallization of the amorphous phase hinders a more in-depth analysis.
The choice of an efficient solver leads to a low random-access memory load and relatively cheap computations. As rough orders of magnitude: 1 to 2 KiB per particle and 2×10⁻⁶ to 6×10⁻⁶ CPU seconds per particle and per time step on an Intel Xeon E5520 (2.3 GHz). As an example, the compression of the ERG foam at 1×10⁻³ s⁻¹ up to a strain of 1 (Part V) lasts 5 h on a single processing unit and uses 0.8 GiB of random-access memory.
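For rough sizing purposes, these orders of magnitude can be turned into a back-of-the-envelope estimate; the particle and step counts below are arbitrary examples, not cases treated in the thesis.

n_particles = 1e5
n_steps     = 1e6
mem_gib   = n_particles * 2.0 / 2**20               # ~2 KiB per particle
cpu_hours = n_particles * n_steps * 4e-6 / 3600.0   # ~4e-6 CPU s per particle and per step
print(f"~{mem_gib:.2f} GiB, ~{cpu_hours:.0f} CPU hours on a single core")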
Limits and Specificities
During this PhD, the efforts were focused on producing a "proof of concept" and most of the work was exploratory. This emphasis implies that the verification, the validation and the understanding of the model are still limited. As a very practical consequence, the proposed models probably have little robustness, and the proposed algorithms are not optimal solutions. They must be cross-checked and debugged in a necessary trial phase of development. An in-depth statistical analysis of the results of the model would be necessary. As-is, the estimations of the metrics and their associated errors are only orders of magnitude. The thankless task of assessing and improving the robustness is necessary before the precision and the performance of the model can be further investigated. In this perspective, specific care was taken to exhibit numerical artifacts and work-arounds.
A major strength of the model is the purely local contribution of all particles to the computation of the overall material behavior. A shortcoming is that boundary conditions cannot be enforced this way on arbitrarily large domains. For example, when enforcing a prescribed strain rate with a mesh starting from a settled state, the absolute velocity of the mesh may become sufficient to go through or leave a particle within a single time step. The permanent flow must be established for the relative velocities to become acceptable. In the spirit of the method, the numerical work-arounds developed for this purpose should be limited to the transient regime, keeping the effective computation purely local.
A clear practical limit concerns the control of the strain rate sensitivity. So far, the reachable strain rate sensitivities are limited and are only valid on narrow ranges of strain rate. Although attempting to model complex continuous constitutive laws would be illusory, this limit is a hindrance to modeling generic viscoplastic configurations. The phenomenon of the numerical flow of attractive-repulsive particles, including features such as the initial transient regime and the strain rate sensitivity, might be a rather fundamental issue. The improvement of the model requires a more in-depth understanding of these issues.
A major drawback of the model is the absence of size adaptivity. All zones of a geometry must be discretized with identical particles. The computing cost is thus set by the smallest details that need to be captured. The extension of the model to handle distinct particle sizes may be possible but is not a trivial task. As-is, the model is not suited to describe phenomena driven by the mechanics of thin films, for example.
From the peculiar grounding principle of describing inelastic strain by the constrained rearrangement of particles stems a key property: the computing points always remain evenly spread in the material, by design and regardless of the applied transformations (Figure 17.9b). By contrast, all Lagrangian methods based on continuous constitutive laws track the position of effective material points. The distribution of initially evenly spread computing points will thus be geometrically imbalanced after an arbitrary finite strain (Figure 17.9a). This potential distortion of the computing "grid" applies to all methods implementing a continuous inelastic behavior at the scale of the computing points, including "meshless" and "particle" methods. To control the distribution of the computing points, remeshing-like techniques or partially Eulerian models can be used.
Our fictitious particles track the material in a looser fashion. Although our method is algorithmically Lagrangian, as the particles are tracked explicitly with time, the material points are not conceptually tracked by the particles. At a given instant, a particle does mimic some effects of an elementary portion of the material, but the material "flows" through the particles with time. Inelasticity is implemented at a collective level and the location of a given material point may only be tracked by a group of particles, consistently with the chaotic nature of the model.
The implicit rearrangement of the computing points does not imply that arbitrarily large strains can be modeled: if the strain is excessive, the number of particles in the "thickness" of the material may become too small. From an experimental point of view, this behavior can be valuable to model processes where the mechanical influence of a phase becomes negligible under a threshold size. The initial discretization must be chosen in accordance with such an objective.
Potential of the Model
Direct Applications
Starting from a clear modeling objective, it is possible to choose a modeling tool displaying an adequate numerical behavior with respect to the dominant physical phenomena of interest. Methods based on distinct principles are thus complementary to study a variety of configurations.
As a rough sketch of the potential methods (Figure 17.10), the FEM remains a reference tool for solid mechanics, with a potentially fine description of the material constitutive behavior. It should be the favored method if the continuous behavior is the dominant physical phenomenon of interest. In pathological configurations, e.g. perfect plasticity and large strains, the convergence of the resolution may be slow and uncertain. It is sometimes necessary to use numerical work-arounds to actually find a solution, for example using a dynamic formalism for quasistatic problems. In such cases, methods like smooth particle hydrodynamics (SPH) may be a valuable alternative. In SPH-like methods, which are conceptually dynamic approaches, the continuous constitutive law is modeled at the computing points, but without relying on a connectivity table. The periodic, time-consuming and heavy remeshing procedures are thus avoided. In addition, the method allows an efficient and arbitrary handling of discrete events: the behavior of the particles can be mixed between SPH-like and DEM-like interactions.
The predominance of physical interface interactions may advocate for a rougher description of the continuous material. Our DEM model conserves by design an even distribution of the computing points. This property is helpful for the detection of physical contacts. In return, less flexibility and control are possible on the continuous behavior.
In short, the proposed DEM model seems suitable for the description of numerous and simultaneous contacts and topological events in solids. However, the adaptation to fine and precise continuous constitutive behaviors would probably be delicate, or progressively evolve toward an SPH-like method.
Very practically, the proposed model seems appropriate to push further the study of the finite strain of architectured, porous or composite materials. As the focus of this PhD was set on the proposal of a functional tool, the time dedicated to a more in-depth analysis of the applications was limited. The results were mostly qualitative and macroscopic, with little direct comparison to experimental results.
A systematic comparison of our model with the available in situ X-ray tomography measurements can readily be carried out. As an example in the laboratory context, the mechanical closure and re-opening of casting pores is an experimental study from the ongoing PhD of Pauline Gravier. The compression-tension high-temperature tests have been performed in situ at the European Synchrotron Radiation Facility (ESRF). In addition to the purely mechanical closure, the effective welding of the interfaces can be modeled with ease for complex geometries within our model. Based on local metrics, arbitrary healing behaviors may be implemented in a straightforward way. This ability to model topological events is an innate and powerful feature of the DEM framework.
In the context of architectured materials and metallic foam, the study of the deformation mechanisms and instabilities seems promising. The potential shift of mechanisms from the initial phase of the deformation to the full compaction of the sample can be studied qualitatively and quantitatively. The bending, buckling and consolidation by self-contacts can be investigated for complex mesostructures, with both local and macroscopic effects. As a more specific focus of interest, the model may also be used to study the effect of structural defects of the constitutive material of a foam. The link between failure mode, local structural defects and loading conditions is particularly appealing.
Going back to the composite that originally triggered the study, some numerical and experimental limitations were identified. However, this does not invalidate the potential of the method for this type of study. The effects of the morphology and rheology of the phases can be modeled for the forming of metallic composites. Tortuous mesostructures can be handled, along with the potential contact or self-contact of the phases at large strain. The study of the forming of more classical multiphase materials, with available in situ data, could help to further validate the model, in parallel to the resolution of the issues raised for our amorphous/crystalline composite.
Algorithmic Development
The implementation of arbitrary healing behavior is already a first hint for further algorithmic development of the model. In the short term, various extensions could help to improve the model or extend its scope.
The design of appropriate statistical procedures to study the local fields could open interesting applications. As the global behavior of the model is not macroscopically imposed, but driven by local events, collecting and interpreting data at a more local scale should be possible. First approaches for both the strain and stress fields have been attempted, but time was lacking to propose well-grounded metrics.
The discretization algorithm could be improved to allow a finer description of the geometries. Currently, an image is used as a mask on a random packing to set the properties of the particles or to modify them. The relaxation that follows induces a variation of shape. An iterative procedure, with several mask/relaxation passes, would be helpful, and a limited number of iterations is probably sufficient.
The behavior of the model TRILIN, able to cope with tensile and compressive loads, may also be studied for arbitrary loads and geometries, benchmarking the results against FEM simulations. The loads applied in the simulation of the foam crushing are, for example, far from uniaxial. More specifically, the analysis of the effect of the discretization would allow a quantification of the local geometrical errors.
The marginal modification of the model TRILIN to avoid numerical crystallization should not be too cumbersome and could greatly improve its behavior. A straightforward approach to limit the crystallization is the use of a slight dispersion of the particle diameters. The impact on the macroscopic and local behavior must be investigated, to check whether the interaction law has to be adapted or can be used as-is. More generally, the extension of the model to handle elementary particles of distinct sizes could allow an adaptation of the discretization to local geometrical details. It is algorithmically possible, but tuning the modified model may not be a trivial task. Moreover, in a finite strain context, a dynamic size adaptivity would be required, where particles split or merge. This challenge sounds unreasonably complex at this preliminary stage.
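As an illustration of the dispersion idea mentioned above, a slight spread of the crown diameters can be drawn as follows; the ±2 % amplitude is an arbitrary example to be tuned against the macroscopic behavior.

import numpy as np

rng = np.random.default_rng(seed=0)
d_nominal = 1.0
# Uniform ±2 % dispersion around the nominal crown diameter (placeholder amplitude).
diameters = d_nominal * rng.uniform(0.98, 1.02, size=100000)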
Without entering into the details of the understanding of the transient regime, numerical work-arounds to initiate the flow would be useful. Two practical tools could be used: a judicious initial velocity of the particles (probably including some randomness) and a progressive rise of the mesh velocities. It seems important to stick to transient procedures, to avoid introducing supplementary numerical artifacts in the solution.
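A possible form of such a transient work-around is sketched below: the prescribed mesh velocity is raised smoothly from zero over a ramp time, after which the purely local computation is left untouched. The ramp shape and duration are arbitrary choices, not taken from the thesis.

import numpy as np

def ramped_velocity(t, v_target, t_ramp):
    # Smoothstep ramp: zero velocity at t = 0 and zero acceleration at both ends.
    x = np.clip(t / t_ramp, 0.0, 1.0)
    return v_target * (3.0 * x**2 - 2.0 * x**3)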
Along with these short-term and practical developments, more in-depth extensions and studies can be imagined.
As a first example, a tempting extension of our phenomenological description of the inelastic flow is the association with a model describing elastic behaviors. Indeed, numerous functional DEM models can describe elasticity in continuous media. Such elastic-plastic models would be of interest, for example, to study spring-back effects in cold forming, or foam crushing at room temperature. In both examples, contacts and large strains are involved, but the elastic contribution cannot be neglected. Conceptually and algorithmically, nothing prohibits the hybridization of such elastic models with our inelastic model. However, the hard point is the practical design of a local threshold between the two behaviors. The perturbations introduced by the inelastic model are probably too large with respect to the limited magnitude of the elastic phenomena. As the displacement length scales are very distinct, a hybrid model might be illusory. Such a fine description of the continuous behavior may be better handled with other numerical tools such as the FEM or the SPH. It may be possible to tackle this difficulty within our DEM by investigating the stress threshold below which our numerical packings do not flow. It has not been quantified in our work, but tuning this purely numerical artifact may allow a partial mimicking of the sought-for behavior.
An interesting issue from the conceptual point of view is the modeling of the contacting surfaces. Our model allows the detection and the tracking of contacts and self-contacts in complex configurations. However, our discretization with spheres introduces a surface roughness, leading to a potential numerical friction, which is not controlled yet. Increasing the surface friction seems possible, but implementing frictionless interfaces would be a challenge.
More fundamentally, investigating toward the understanding of the flow may help in three directions to improve the models:
• Better control the strain rate sensitivity.
• Design time converged models.
The time step has been considered as a numerical parameter, which is fundamentally not physical. Although this is not necessarily a conceptual obstacle, it limits the applicability of a given set of numerical parameters: too low strain rates are excessively time consuming. Limited hope must be granted to the possibility of reducing the global computing time, as the convergence in time seems to require much smaller steps.
• Design simpler models.
The proposed interaction law may be further simplified, once the key properties are clearly identified.
The last point is a prerequisite before algorithmic or computational optimizations of the code are attempted. Although specific care was given to computational issues throughout this work, the "draft" nature of the implementation also implies a limited computing efficiency. The implementation choices favored conceptual simplicity and the ease of testing various behaviors: numerous conditional statements may be simplified, although the potential speedup should be limited. To model larger geometries, the framework of the DEM solver allows a potentially massive parallelization. For effective speedups, the computing load must be correctly balanced between the processors and further developments would be necessary. In this PhD, the needs in computing power and the architectures of the available machines did not advocate for this development.
A conceptual drawback of the followed strategy is the delicate and time-consuming design of the interaction laws. An improved version of the model TRILIN could limit the numerical crystallization and the force jump when a pair of particles is separated, ideally with a simpler interaction law. However, the automation of this task is not trivial: the use of optimization tools is straightforward to quantitatively tune parameters in a well-defined domain, but how should the metrics of interest, the objective function and the parameters be defined when the general objective is qualitative?
The rough attempts made during this PhD to use optimization tools often did not lead to satisfactory results. Automated screenings of parameters were impractical, as the domains to be explored are vast and the correct configurations narrow. Promising configurations tended to be missed and numerous useless computations were run. A very manual trial-and-error approach remained the most effective strategy, specifically with the objective of finding one solution and not necessarily an optimum. The underlying assumption was an intrinsic doubt regarding the very definition of potential optima.
The choice of the numerical parameters is de facto an ongoing challenge for the DEM community, including for conventional applications of this numerical method. The proposition of tuning the interaction law itself, instead of its parameters, may be of interest.
The key issue of choosing appropriate objective functions for optimization procedures remains open.
Overall, it seems that a parametric optimization based on an a priori chosen algorithm is only efficient for a well-understood model: a model where the link between the macroscopic flow and the elementary interactions can be described. This deeper understanding of the flow process allows an effective restriction and definition of the to-be-explored domain of parameters and limits the algorithmic sensitivity.
In a situation with little understanding of the link between the scales, an a priori algorithmic choice may be misleading. Useful tools might be found in computer science or statistics to design ad hoc algorithms instead of tuning parameters. In the design of interaction laws meant to mimic a continuous behavior, principal component analysis may help to discriminate the most sensitive and decisive elementary features. For classifications of pairs, as in our self-contact detection algorithm, tools from data mining, such as supervised learning, may provide automated and reliable approaches.
made of the boundaries of the domain, the typical variation of the local stress is ±25 MPa. The relative error is thus higher for the phase A. Guidelines can thus be deduced regarding the necessary discretization of morphological features in multi-materials.
The local stress estimation is applied to the unique-inclusion test case (Section 12.4). At the chosen discretization of 5×10⁴ particles, selected to capture the macroscopic shape of the inclusion, the smallest thickness of the matrix is only discretized by a few particles. The averaging length thus corresponds to the typical observed length. Moreover, the proximity of the planar mesh is a source of perturbation for the local fields. The stress fields presented here are thus bound to be rough.
Quantitatively (Figure A.3), at 10⁻³ s⁻¹ a high-stress zone is captured at the north and south poles of the inclusion. At this rough discretization, a "radial" averaging procedure would greatly help to better reproduce the axisymmetric pattern, but this would mean favoring an a priori known solution. At 10⁻⁴ s⁻¹, the stress is more evenly spread, with slightly higher values at the periphery of the matrix. Although the distribution is, as expected, rough, the quantitative trends can be correctly modeled.
Using the interaction law BILIN, dominated by a repulsive behavior with k_att = k_rep/10 (Table 11.1), the other components of the stress tensor σ are in principle not properly captured. However, the qualitative distribution of the von Mises equivalent stress σ_eq is correct (Figure A.4), as the deformation is dominated by compression. As a side note, the equivalent stress can only be captured using the spatially averaged components of the stress tensor: computing the equivalent stress particle-wise and then spatially averaging it does not lead to any sensible result.
The distinct responses of the two phases can clearly be identified (Figure A.4), even though the equivalent stress is underestimated by a factor of three. An interesting critique of the model can be drawn from the comparison of the relative stresses between the matrix and the inclusion. In the FEM, the flow stress is higher in the matrix at 10⁻⁴ s⁻¹ and higher in the inclusion at 10⁻³ s⁻¹. In the BILIN model, the stress in the inclusion does increase with the strain rate.
Conceptually, the proposed self-contact modeling implies that the force in a pair of particles depends on the state of the system in a neighborhood: a non-local behavior. The outward vector n is computed for each particle from the positions of its neighbors, only taking into account the "internal" neighbors. At each step, n must be computed, taking into account the "internal" or "interface" status of the pairs. This coupling between a particle-wise variable (n) and a pair-wise variable (the status of the pair) is not trivial. In the proposed implementation, most of the computation is executed in the interaction law.
Physical model
B.2 Sources: Interaction Law BILIN
The simulations of Part IV are run using the interaction law BILIN, defined by the file normal_model_zcherry_faistenau.h.
/* ----------------------------------------------------------------------
   file: normal_model_zcherry_faistenau.h
   type: liggghts 3.5.0 - normal model (interaction law BILIN)
------------------------------------------------------------------------- */
  // Constructor (excerpt): custom parameters of the law
  : Pointers(lmp),
    // Custom pair parameters
    k_rep(NULL), k_att(NULL),
    // Custom particle parameter
    fix_seed(0),
    // Cases
    velocity(false) {}

  // Necessary to compile
  void registerSettings(Settings& settings) {
    settings.registerOnOff("velocity", velocity, true);
  }
  inline void postSettings() {}  // Necessary to compile

  void connectToProperties(PropertyRegistry& registry) {
    // Pair parameters
    // Create parameters
    registry.registerProperty("k_rep", &createKrep);
    registry.registerProperty("k_att", &createKatt);
    // Retrieve value from script
    registry.connect("k_rep", k_rep, "model zcherry/faistenau");
    registry.connect("k_att", k_att, "model zcherry/faistenau");
    // Particle parameter, create and retrieve
    fix_seed = static_cast<Fi...

B.3 Sources: Interaction Law TRILIN, Contact and Self-Contact

The simulations of Part V are run using the interaction law TRILIN and the self-contact algorithm, defined by the files normal_model_zcherry_outward.h, fix_outward.h and fix_outward.cpp.
B.3.1 normal_model_zcherry_outward.h
/* ----------------------------------------------------------------------
   file: normal_model_zcherry_outward.h
   type: liggghts 3.5.0 - normal model (interaction law TRILIN, contact and self-contact)

   fix_outward_number_next  number of contacts not across an interface,
                            contributed to during the current time step
   fix_outward_indent       sum of the seed indentation on each particle:
                            (seed1+seed2)-distance (always whole,
                            particle/particle or particle/wall)
   fix_outward_indent_sqr   sum of the potential energy (square of the seed
                            indentation times the rigidity):
                            0.5*k_rep*((seed1+seed2)-distance)^2
                            (half if particle/particle, whole if particle/wall)

   Contact-wise variable
   past_interface           history parameter storing the status of the contact:
                            first step of interaction unknown 0, n steps of interaction ...
------------------------------------------------------------------------- */

// Property creators (names have to be unique)
static const char* KREP6 = "k_rep";
MatrixProperty* createKrep6(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createPerTypePairProperty(registry, KREP6, caller);
}
static const char* KATT6 = "k_att";
MatrixProperty* createKatt6(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createPerTypePairProperty(registry, KATT6, caller);
}
// Criteria for interface
static const char* MAGOUT = "magOut";
ScalarProperty* createMagOut(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createScalarProperty(registry, MAGOUT, caller);
}
static const char* COSEN = "cosen";
ScalarProperty* createCosEN(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createScalarProperty(registry, COSEN, caller);
}
// Traction behavior parameter
static const char* FATT6 = "f_att";
MatrixProperty* createFatt6(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createPerTypePairProperty(registry, FATT6, caller);
}
static const char* WALL6 = "wall";
ScalarProperty* createWall6(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createScalarProperty(registry, WALL6, caller);
}
static const char* HEAL6 = "heal";
ScalarProperty* createHeal6(PropertyRegistry& registry, const char* caller, bool sanity_checks) {
  return MODEL_PARAMS::createScalarProperty(registry, HEAL6, caller);
}

// Contact history: status of the pair
history_offset = hsetup->add_history_value("past_interface", "0"); // flag "0", parameter is symmetric

void connectToProperties(PropertyRegistry& registry) {
  // Pair parameters
  // Create parameters
  registry.registerProperty("k_rep", &createKrep6);
  registry.registerProperty("k_att", &createKatt6);
  registry.registerProperty("magOut", &createMagOut);
  registry.registerProperty("cosij", &createCosIJ);
  registry.registerProperty("cosen", &createCosEN);
  registry.registerProperty("f_att", &createFatt6);
  registry.registerProperty("wall", &createWall6);
  registry.registerProperty("heal", &createHeal6);
  // Retrieve value from script
  registry.connect("k_rep", k_rep, "model zcherry/outward");
  registry.connect("k_att", k_att, "model zcherry/outward");
  ...

/* ----------------------------------------------------------------------
   file: fix_outward.cpp
   type: liggghts 3.5.0 - fix
------------------------------------------------------------------------- */
#include <cmath>
#include "atom.h"
#include "error.h"
#include "neighbor.h"
#include "neigh_list.h"
#include "fix_property_atom.h"
#include "force.h"
#include "pair_gran.h"
#include "fix_outward.h"

using namespace LAMMPS_NS;
using namespace FixConst;

FixOutward::FixOutward(LAMMPS *lmp, int narg, char **arg) : Fix(lmp, narg, arg)
{
  if (narg != 6) error->all(FLERR, "Illegal fix outward command");
  strcpy(type, arg[3]);   // retrieve fix outward from input
  strcpy(model, arg[4]);  strcat(model, " ");  strcat(model, arg[5]);
  // Names of the per-atom properties derived from the fix name
  strcpy(type_mag,  type); strcat(type_mag,  "_mag");
  strcpy(type_next, type); strcat(type_next, "_next");
  strcpy(type_numb, type); strcat(type_numb, "_numberNeigh");
  strcpy(type_nuxt, type); strcat(type_nuxt, "_number_next");
  strcpy(type_inde, type); strcat(type_inde, "_indent");
  strcpy(type_insq, type); strcat(type_insq, "_indent_sqr");
  strcpy(type_erin, type); strcat(type_erin, "_error_inte");
  strcpy(type_erbu, type); ...

  // Create fix outward and outward_next if they are missing in input script
  fix_outward = static_cast<FixPropertyAtom*>(modify->find_fix_property("outward",
      "property/atom", "vector", 0, 0, this->style, false));
  if (!fix_outward) {
    const char* fix_arg[11];
    fix_arg[0] = "outward"; fix_arg[1] = "all"; fix_arg[2] = "property/atom";
    fix_arg[3] = "outward"; fix_arg[4] = "vector";
    fix_arg[5] = "no"; fix_arg[6] = "yes"; fix_arg[7] = "yes";
    fix_arg[8] = "0."; fix_arg[9] = "0."; fix_arg[10] = "0.";
    fix_outward = modify->add_fix_property_atom(11, const_cast<char**>(fix_arg), style);
  }
  // ... (analogous blocks create, when missing, the per-atom properties outward_mag,
  //      outward_next, numberNeigh, number_next, indent, indent_sqr, error_inte,
  //      error_bulk, good_inte and good_bulk, scalar or vector as appropriate)
  pair_gran = static_cast<PairGran*>(force->pair_match("gran", 0));
  fix_outward      = static_cast<FixPropertyAtom*>(modify->find_fix_property(type,      "property/atom", "vector", 0, 0, model));
  fix_outward_mag  = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_mag,  "property/atom", "scalar", 0, 0, model));
  fix_outward_next = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_next, "property/atom", "vector", 0, 0, model));
  fix_numberNeigh  = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_numb, "property/atom", "scalar", 0, 0, model));
  fix_number_next  = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_nuxt, "property/atom", "scalar", 0, 0, model));
  fix_indent       = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_inde, "property/atom", "scalar", 0, 0, model));
  fix_indent_sqr   = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_insq, "property/atom", "scalar", 0, 0, model));
  fix_error_inte   = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_erin, "property/atom", "scalar", 0, 0, model));
  fix_error_bulk   = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_erbu, "property/atom", "scalar", 0, 0, model));
  fix_good_inte    = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_goin, "property/atom", "scalar", 0, 0, model));
  fix_good_bulk    = static_cast<FixPropertyAtom*>(modify->find_fix_property(type_gobu, "property/atom", "scalar", 0, 0, model));
  // "vector", 3: retrieve only if 3 values in vector. Error if less. Silence if more.
  updatePtrs();   // initialize custom array
  forward_all();
}

FixOutward::~FixOutward() {}

int FixOutward::setmask() {
  int mask = 0;
B.4.1 Input Script Template
# ### Single material, moving wall, prescribed strain rate
# -------------------------------------------------------------------------------
# ## file: single.template
# ## date: 2017/09/13
# ## type: liggghts 3.5.0 - robin input script
# ## auth: robin.gibaud@simap.grenoble-inp.fr
# -------------------------------------------------------------------------------
# ## Parameters
# Geometry, kinematics
variable deps     equal @deps      # Strain rate
variable strain   equal @strain    # Final strain
# Numerical
variable timeStep equal @timeStep  # Time step
variable nRelax   equal @nRelax    # Number of relaxation steps
variable epsWin   equal @epsWin    # Averaging window in strain
variable epsOut   equal @epsOut    # Frequency of outputs in strain
variable relTime  equal v_nRelax*v_timeStep
# Material: directly replaced in script
variable diam     equal @diam      # Crown diameter of the particles
variable seed     equal @seed      # Seed diameter of the particles
variable mass     equal @mass      # Mass of the particles
fix myKatt all property/global k_att peratomtypepair 1 @katt  # Attractive stiffness
fix myFatt all property/global f_att peratomtypepair 1 @fatt  # Attractive force threshold
fix myWall all property/global wall scalar @wall              # Multiplicative factor for particle/wall interaction
fix myHeal all property/global heal scalar @heal              # Healing time of the "interface" interaction
fix myMagO all property/global magOut scalar @magOu           # Magnitude threshold
# ## Settings and Contact sections (bodies not legible in this extraction)
# ## Simulation
# ## Integration
fix integ all nve
# Compute box dimensions
variable tmp equal lx
variable X0  equal ${tmp}
variable tmp equal ly
variable Y0  equal ${tmp}
variable tmp equal lz
variable Z0  equal ${tmp}
print "Initial dimension: (${X0}, ${Y0}, ${Z0})"
variable security equal 0.6
# Depend on tensile or compressive load
if "${strain} > 0" then &
  "variable xSize equal v_security*v_X0" &
  "variable ySize equal v_security*v_Y0" &
  "variable zSize equal v_security*v_Z0*exp(v_strain)" &
else &
  "variable xSize equal v_security*v_X0*exp(-0.5*v_strain)" &
  "variable ySize equal v_security*v_Y0*exp(-0.5*v_strain)" &
  "variable zSize equal v_security*v_Z0"
print "${xSize}, ${ySize}, ${zSize}"
change_box all x final $(-1*v_xSize) $(v_xSize) y final $(-1*v_ySize) $(v_ySize) z final $(-1*v_zSize) $(v_zSize) boundary f f f
# if m m m, segFault with 8 proc. If scale 1.01, meshes lost.
# ## Boundaries
# Import and translate the planar meshes defining the boundary conditions, translate reference point
fix top all mesh/surface/stress file @mesh type 1 move 0 0 $(0.5*v_Z0+0.5*v_seed) reference_point 0 0 $(0.5*v_Z0+0.5*v_seed)
fix bot all mesh/surface/stress file @mesh type 1 move 0 0 $(-0. ...

# -------------------------------------------------------------------------------
# ## file: packing.template
# ## date: 2016/07/01
# ## type: liggghts 3.4.1 - robin input script
# ## auth: robin.gibaud@simap.grenoble-inp.fr
# -------------------------------------------------------------------------------
# Target parameters
variable diam  equal 1
variable part  equal @nb       # number of particles
variable fraF  equal 0.639     # final relative density
# Numerical parameters
variable seed0 equal @seed0    # random seeds
variable seed1 equal @seed1
variable seed2 equal @seed2
variable deps  equal -1e-5
variable dens  equal 1e14
variable ksi   equal 0.98
variable fraI  equal 0.25
variable rigi  equal 1e8
variable mass  equal v_dens*PI*v_diam^3/6
variable w0    equal (v_rigi/v_mass)^0.5
variable Dt    equal 0.01/v_w0           # arbitrary factor
variable visc  equal 2*v_mass*v_ksi*v_w0
variable volS  equal v_part*PI*v_diam^3/6
variable aFin  equal (v_volS/v_fraF)^(1/3)
variable aIni  equal (v_volS/v_fraI)^(1/3)
variable eps   equal ln(v_aFin/v_aIni)
variable nStep equal v_eps/(v_deps*v_Dt)
variable stepOut equal floor(v_nStep/10)
dump dmp all custom 1 @directory/atom_@key id type x y z ix iy iz vx vy vz fx fy fz omegax omegay omegaz radius
thermo_style custom step c_P ke c_Con v_E v_I v_K v_frac
run 0
# Write final state
write_data @file_pack

"""
Template for eighth of spherical inclusion test case
auth: robin.gibaud@simap.grenoble-inp.fr
"""
from __future__ import division
# ## Python variables
# ## Parameters, Settings, Contact, Simulation
# (the body of this template, including its thermo and dump outputs, is not
#  legible in this extraction)

Architectured materials display promising properties, allowing fine-tuning of their physical behavior and contriving contradictory functionalities. Typical examples are composite materials, associating complementary phases, whose distribution and topology are controlled towards functional requirements. Such architectures can be elaborated with metallic materials, for example using casting, powder technology or additive manufacturing.
Structural parts often need to be processed from raw architectured materials. Many manufacturing processes, like hot forming, rely on deformation mechanisms that can involve the motion and interaction of complex 3D interfaces, large changes in shape or topological modifications. The typical dynamic phenomena involved, like pore closure, neck creation and phase fragmentation, can be challenging to observe with conventional, or destructive, experimental techniques. Tomography is a non-destructive tool, with active developments toward finer spatial and temporal resolution, allowing in situ observations. It is often used in close combination with digital volume correlation and simulation tools. Tomography data serve both as a geometrical initial state and as a temporal evolution reference for models, for example in calibration and validation procedures. Examples include the study of crack propagation [1] and the study of creep mechanisms in metallic foams [2].
At a macroscopic scale, the physical description of metallic materials as continuous media is often legitimate. However, from a numerical point of view, modeling the typical architectural discontinuities, and more specifically their interactions and topological changes, can be challenging within a continuous framework. Many strategies have been developed to extend the scope of continuous descriptions, among which:
• Dissociating material and mesh motions, with an Eulerian [3,4,5] or an Arbitrary Lagrangian-Eulerian [6,7,8] kinematical description of the materials.
• Sequentializing a large distortion in smaller steps, periodically re-generating a Lagrangian mesh [9].
• Super-imposing discontinuities description on top of a continuous framework, using additional discontinuous arbitrary shape functions [10,11] or a set of punctual Lagrangian markers, representing material phases [12,13] or interfaces [14].
• Discretizing the materials using a cloud of nodes instead of a mesh. The continuum constitutive law can be integrated globally for the whole system [15,16,17], or locally in the neighborhood of each node, for example in smooth particle hydrodynamics [18,19] or non-ordinary state based peridynamics [20,21] formulations.
A common denominator of these strategies is to derive the local behavior from the macroscopic continuous constitutive law. A distinct route is to describe the material as a set of discrete objects, using ad hoc interaction laws between neighboring objects. Such models are innately suited to describe materials where interface motions and interactions are predominant with respect to the continuous behavior.
This approach has historically been used in early attempts to numerically solve solid mechanics problems on arbitrary shapes [22]. In the last decade, variants of the Molecular Dynamics methodology, such as the Discrete Element Method (DEM) and bond based peridynamics, have successfully been applied to model elastic continuous media. Three-dimensional works in solid mechanics include the modeling of dynamic brittle failure [23,24], crack propagation [25] and quasi-static buckling [26]. They all demonstrate the possibility to design and calibrate local interaction DEM laws to display a targeted continuous macroscopic behavior. To our knowledge, these works rely on initially pairwise bonded neighbors, and only allow volumetric strain plasticity [27] [28, p.153], plasticity and viscoplasticity being implemented at the pair level. Hence, modeling high strain in metallic alloys is hindered by the total Lagrangian description and the lack of isochoric plasticity mechanisms. Other potential extensions of the Molecular Dynamics methodology, like the Movable Cellular Automaton method [29], rely on many-body interactions for such purposes.
In this paper, we focus on the development of a DEM model describing incompressible bi-materials for large quasi-static compressive strain. In both phases, we assume a perfect viscoplastic behavior, described by the Norton law. One peculiarity of our phenomenological approach is that the local laws bear no resemblance to the macroscopic behavior. Instead of implementing a continuous behavior at the scale of the numerical discretization, we use the analogy between the motion of a packing of elastic cohesive spheres, collectively sliding on one another, and the plastic shear in continuous media.
The implemented DEM contact law is described in Section 2 and its calibration procedure in Section 3. The macroscopic behavior of a single material is discussed in Section 4. The bi-material behavior of the model is tested, and confronted to continuous models, on elementary geometrical configurations in Section 5. A potential application of the methodology to an experimental microstructure is illustrated in Section 6.
Attractive-repulsive model
In this paper, continuous media are discretized by packings of interpenetrated spherical particles. This section describes the contact laws, used to compute interaction forces, between the particles. As in classical DEM implementations, interaction forces F A→B = -F B→A are computed for each pair of indented particles (A, B) from their distance h and relative velocity V B -V A (Figure 1). Time is discretized in constant steps ∆t and the motion of the particles is integrated from Newton's second law using a Velocity Verlet explicit scheme. While the interaction forces are computed at the level of each pair, as described in this section, it must be understood that our model can only display the expected behavior for a packing of particles collectively interacting.
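As an illustration, one Velocity Verlet step can be sketched as follows (a minimal Python sketch: the array layout and the compute_forces callable, standing for the summed pairwise forces of Equation 1, are placeholders and not the LIGGGHTS implementation):

import numpy as np

def velocity_verlet_step(x, v, f, m, dt, compute_forces):
    """Advance positions x, velocities v and forces f of N particles
    by one time step dt (the mass m is taken identical for all particles).
    compute_forces returns the per-particle force array at given positions."""
    a = f / m                                  # accelerations at time t
    x_new = x + v * dt + 0.5 * a * dt ** 2     # positions at t + dt
    f_new = compute_forces(x_new)              # forces at the new positions
    v_new = v + 0.5 * (a + f_new / m) * dt     # velocities at t + dt
    return x_new, v_new, f_new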
The main objective of the contact model is to maintain a cohesive packing, able to re-arrange itself with controlled overall volume change. Thus, at the pair level, interaction must alternatively be attractive and repulsive. Among the possible algorithmic strategies, we chose to use a purely geometrical management of the contacts, with no history parameter stored between the time steps. Each particle is subdivided into two concentric and spherical zones (Figure 1a) with distinct behaviors:
• A repulsive seed, mimicking incompressibility, of radius R seed (Figure 1b);
• An attractive crown, mimicking cohesion, of radius R crown.

Figure 1: (a) Geometry of an interacting pair (A, B): repulsive seed of radius R seed, attractive crown of radius R crown, center-to-center distance h, velocities V A and V B. (b) Pairwise forces F A→B = -F B→A, computed from the kinematics of the pair (Equation 1).
In both zones, seed and crown, the model is governed by normal elastic forces F N (Equation 1 and Figure 2). The interaction force is piecewise linear with h, the distance between the centers of the two particles. Two normal stiffnesses are used: k rep for repulsive seed contact and k att for attractive crown contact. Each particle has a numerical mass m, used in the integration of the motion.
The attractive behavior, in the crown zone, depends on the relative normal velocity ḣ (time derivative of h). The attractive force is only activated if a pair has a tensile motion ( ḣ > 0), and is canceled in case of compressive motion ( ḣ ≤ 0). This behavior helps to smooth the creation of new contacts between particles and introduces a dissipative effect on the total energy, which is numerically sufficient within the strain rate validity range of the model, linked to the frequency of oscillation of the pairs. At the pair level, no damping, shear or torque interaction laws are implemented. In the tested configurations, such interactions only introduce second-order effects on the macroscopic behavior of packings. Thus, in this paper, interactions between particles are only normal pairwise forces, piecewise linear with the distance between the centers.
$F_N = \begin{cases} k_{\mathrm{rep}}\,(2R_{\mathrm{seed}} - h) & \text{if } h \le 2R_{\mathrm{seed}} & \text{(1a)} \\ k_{\mathrm{att}}\,(2R_{\mathrm{seed}} - h) & \text{if } 2R_{\mathrm{seed}} < h \le 2R_{\mathrm{crown}} \text{ and } \dot{h} > 0 & \text{(1b)} \\ 0 & \text{if } 2R_{\mathrm{seed}} < h \le 2R_{\mathrm{crown}} \text{ and } \dot{h} \le 0 & \text{(1c)} \end{cases}$
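For a single pair, Equation 1 can be transcribed directly, for instance as follows (a Python sketch with the convention that repulsive forces are positive; variable names are illustrative):

def normal_force(h, h_dot, k_rep, k_att, r_seed, r_crown):
    """Signed normal force of Equation 1 for one pair of particles.
    h is the center-to-center distance, h_dot its time derivative
    (h_dot > 0 for tensile motion). Repulsion is positive."""
    if h <= 2.0 * r_seed:                       # seed indentation, case (1a)
        return k_rep * (2.0 * r_seed - h)
    if h <= 2.0 * r_crown:                      # crown zone
        if h_dot > 0.0:                         # tensile motion, case (1b)
            return k_att * (2.0 * r_seed - h)   # negative, i.e. attractive
        return 0.0                              # compressive motion, case (1c)
    return 0.0                                  # out of interaction range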
The possibility for the particles to arbitrarily change neighbors introduces, at a macroscopic scale, a plastic effect. The interaction laws at the pair level differ qualitatively from the targeted physical phenomena. To represent a continuum, a large packing of such particles is generated. Various phases can be represented, assigning in the initial configuration the properties of distinct materials to clusters of particles.
Two types of boundary conditions (Figure 3) are applied to the packings:
• Free boundary, where particles are not constrained by any means;
• Kinetically constrained boundary, using a rigid mesh following a prescribed motion.
For a uniaxial compression, the following boundary conditions are applied: top and bottom planar meshes and free lateral sides (Figure 3b). In this paper, the meshes are used to apply a prescribed macroscopic true strain rate. The forces acting onto the planes are summed to evaluate the macroscopic flow stress, computed using the updated or initial cross-section. Interaction forces between the mesh elements and the particles are computed with a contact law very similar to that of particle-particle contacts. h is defined as the shortest distance between an element and the center of a particle (Figure 3a), and R is to be used instead of 2R in Equation 1. In a mesh/particle contact, the interaction parameters of the particle are used. Any arbitrary geometry meshed with triangular planar elements can be used with the implemented model; in this paper, only planar meshes were used.
From their generation to compression tests, the packings go through three main steps:
1. Random packing generation and relaxation [30];
2. Material properties attribution and relaxation;
3. Uniaxial compression.
The initial state and the elaboration route, in the context of the large strains studied here, seemed to have little influence on the compression results, and are not detailed here.
The model was implemented as an independent contact law in the open-source DEM code LIGGGHTS [31]. In this article, all DEM computations were run using LIGGGHTS, and all packing visualizations were rendered using the open-source software OVITO [32]. Summing up our model: piecewise linear forces, both attractive and repulsive, are computed between particles and with meshes. No history is stored, and the contact management is geometrical.
Calibration procedure
This section describes the calibration procedure of the numerical parameters of the contact model, and the fixed parameters in the scope of this paper. Our objective is to model a perfect viscoplastic behavior, described as a relation between the strain rate ε and the flow stress σ, by the Norton law [33, p.106]:
$\sigma = K\,\dot{\varepsilon}^{M} \qquad (2)$
Where M is the strain rate sensitivity and K is the stress level. All cases presented in this paper being in compressive state, strain, strain rate and stress will always be given in absolute value. As the DEM does not rely on a continuous framework, the numerical parameters cannot be derived a priori from the targeted macroscopic behavior. We work here at a fixed ratio R crown /R seed = 1.5, to allow a large overlap zone without catching second neighbors. The seed radius is arbitrarily set to R seed = 1 mm. The ratio between attractive and repulsive stiffnesses is set to k rep /k att = 10, to guarantee a numerically predominant repulsion. The time step ∆t is fixed to 5×10 -4 s, which is from 20 to 2000 times smaller than the studied natural periods (Equation 3). The force signal, measured on the meshes, is averaged over a sliding window of typical width 1×10 -2 in strain. The remaining parameters to be calibrated are the repulsive stiffness of the contacts k rep and the mass of the particles m.
We propose a two-step calibration procedure, based on uniaxial compression test simulations, on cubes of single materials:
Step 1. Calibrate the strain rate sensitivity M , tuning the ratio between mass and repulsive stiffness m/k rep .
Step 2. Calibrate the stress level K, applying a common multiplicative factor to both mass m and repulsive stiffness k rep .
The numerical parameters, obtained independently for each phase, are used in multi-material simulations without further fitting procedure.
Strain Rate Sensitivity Calibration
The strain rate sensitivity M of a packing depends on its ability to quickly rearrange itself, with regard to the prescribed strain rate.
To quantify an image of the reaction time, we use the natural period t 0 of an ideal spring-mass system of stiffness k rep and mass m:
$t_0 = 2\pi\sqrt{\dfrac{m}{k_{\mathrm{rep}}}} \qquad (3)$
This value is not meant to match the actual oscillation period of particles, but to quantitatively compare sets of parameters. Packings of 5×10 3 particles with natural periods ranging from 1×10 -2 to 1 s are compressed at strain rates from 3×10 -6 to 1 s -1 . The flow stress σ is normalized by the flow stress at the lowest strain rate σ low . The results (Figure 4) are used to build a master curve of the strain rate sensitivity behavior. As shown in Figure 4a, the strain rate sensitivity, i.e. the slope in the space ( ε, σ/σ low ), is driven by the relation between the natural period and the strain rate. A common trend for all configurations (Figure 4b) is clearly exhibited in the space (t 0 √ ε, σ/σ low ), and approximated by least-square fitting, using a sigmoid of generic expression:
$\sigma/\sigma_{\mathrm{low}} = a + b\,/\,\bigl(1 + \exp(c - d\, t_0\sqrt{\dot{\varepsilon}})\bigr) \qquad (4)$
The fitting parameters used here are (a, b, c, d) ≈ (0.9048, 4.116, 3.651, 210.0). Using this fitted common trend, a master curve is built in the space ( ε • t 0 2 , M ), summing up this behavior for all sets of tested parameters (Figure 4c).
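The fit of Equation 4 can be reproduced with a standard least-square routine, for example as follows (a sketch; the data arrays below are placeholders standing for the measured (t0·sqrt(strain rate), normalized stress) points of Figure 4b):

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c, d):
    """Generic sigmoid of Equation 4, with x = t0 * sqrt(strain rate)."""
    return a + b / (1.0 + np.exp(c - d * x))

# Placeholder data points standing for the measurements of Figure 4b
x_data = np.array([1e-3, 5e-3, 1e-2, 2e-2, 3e-2, 5e-2])
y_data = np.array([1.00, 1.05, 1.30, 2.20, 3.20, 4.50])

popt, _ = curve_fit(sigmoid, x_data, y_data, p0=(0.9, 4.1, 3.7, 210.0))
print(popt)  # fitted (a, b, c, d), to be compared with the values quoted above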
Three flow regimes, in terms of strain rate sensitivity, can be identified in Figures 4c and 4b:
• Plastic: for ε • t 0 2 < 1×10 -7 s, the strain rate sensitivity is negligible (M < 4×10 -3 ). A plastic behavior can thus be represented, with stress variations of the order of magnitude of the expected precision of the model, valid over various orders of magnitude of strain rate. The packing rearranges quickly enough when deformed, so that variations of the strain rate do not affect the flow structure.
• Collapse: for ε • t 0 2 > 3×10 -3 s, the packing is not reactive enough for the particles to collectively cope with the strain. The strain localizes next to the moving planes, the flow stress drops and the macroscopic equilibrium is lost. Such configurations are not suitable for our purpose.
• Viscoplastic: in the intermediate window, the ε • t 0 2 value governs the sensitivity of the packing, up to a maximum of 0.6. In this configuration, when the strain rate increases, the particles are forced to indent more to rearrange, leading to a higher flow stress. However, the sensitivity is strongly strain rate dependent; an actual viscoplastic behavior can only be modeled via an averaged strain rate sensitivity, with a scope of validity limited to a narrow range of strain rates.
The master curve (Figure 4c) makes it possible to directly choose the natural period approximating the desired sensitivity at the targeted strain rate. The m/k rep ratio is thus fixed. If the strain rate range is known a priori, the master curve also gives an approximation of the variation of the strain rate sensitivity within this range.
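In practice, once a natural period t0 has been read off the master curve, Equation 3 fixes the mass to stiffness ratio; for example (a sketch, with illustrative values rather than those of Table 1):

import math

def natural_period(m, k_rep):
    """Natural period t0 = 2*pi*sqrt(m/k_rep) of Equation 3."""
    return 2.0 * math.pi * math.sqrt(m / k_rep)

def mass_for_period(t0, k_rep):
    """Invert Equation 3: particle mass giving the target period t0
    at a fixed repulsive stiffness k_rep."""
    return k_rep * (t0 / (2.0 * math.pi)) ** 2

# Example: target period of 1e-2 s at a stiffness of 1e8 (arbitrary units)
m = mass_for_period(1e-2, 1e8)
print(natural_period(m, 1e8))  # ~1e-2 s, consistency check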
Stress Level Calibration
For a given kinematical behavior of a packing, the stress level can arbitrarily be set. The integration of motion, for each particle, relies on the acceleration computed from Newton's second law. Hence, a multiplicative factor applied to both forces and masses leaves the kinematics of a packing, and its strain rate sensitivity, unchanged. Since our contact laws are linear elastic, we can use a common multiplicative factor on stiffnesses and masses.
The stiffnesses k rep and k att are scaled up to match the desired flow stress at the targeted strain rate. The mass m is proportionally adjusted, in order to maintain the correct strain rate sensitivity.
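This second step amounts to a single multiplicative factor, for instance (a minimal sketch, assuming the flow stress measured at the target strain rate scales linearly with the common factor):

def scale_to_stress_level(k_rep, k_att, m, sigma_measured, sigma_target):
    """Step 2 of the calibration: apply a common multiplicative factor
    to the stiffnesses and the mass so that the flow stress reaches the
    target value, leaving the kinematics and the strain rate sensitivity
    unchanged."""
    factor = sigma_target / sigma_measured
    return k_rep * factor, k_att * factor, m * factor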
Scope of Validity
This two-step calibration allows us to reach an arbitrary stress level, but displays limitations regarding the reachable strain rate sensitivity and strain rate.
We cannot model arbitrary strain rates with a given set of parameters. The numerical strain rate sensitivity depends on the strain rate. This effect can be controlled for very low sensitivities: a negligible sensitivity can be respected over various orders of magnitude of strain rate. However, a large tolerance must be accepted on higher sensitivities, which can only be reasonably approximated on narrow ranges of strain rate. The model also has intrinsic limits regarding the reachable strain rate sensitivities. Reaching higher sensitivities would require longer natural periods, for which the packings collapse and are unable to cope with the strain.
As a general conclusion for this section, our calibration procedure allows the stress level and the strain rate sensitivity to be chosen independently, by tuning the mass of the particles and the stiffness of the contacts. The scope of validity, for controlled sensitivity, is limited to narrow strain rate ranges.
Application to a Single Material
In this section, we apply our methodology to represent arbitrary homogeneous single materials. Uniaxial compression is performed on initially cubic domains.
As our model relies on the collective motion of particles, a packing that is too small will not display the expected behavior (Figure 5). The kinematical behavior of a single material cube in uniaxial compression is roughly observed with a few dozen particles. With a few hundred particles, the stress fails to represent the expected plastic trend, but already exhibits a correct order of magnitude (Figure 5a). A few thousand particles allow a controlled relative error, around 10 %. Single material configurations are run in this section with 5×10 4 particles, with a typical relative error around 2 % (Figure 5b). The relative error is computed with respect to a packing of 1×10 6 particles, for which spatial convergence is considered to be reached. In this paper, the behaviors of the phases are inspired from an experimental setup: the hot forming at 400 °C of a metallic composite, composed of a pure copper matrix and spherical inclusions of a zirconium based bulk metallic glass. The numerical phases both have a flow stress close to 100 MPa in the strain rate window 1×10 -4 -1×10 -3 s -1 , but with drastically distinct strain rate sensitivities. The negligible strain rate sensitivity phase is referred to as A, with a low natural period; the high sensitivity phase is referred to as B, with a high natural period. The corresponding numerical parameters are given in Table 1.
A key feature expected for a set of parameters is the conservation of the packing volume. The volume of the packings is estimated by reconstructing a polyhedral mesh, using an algorithm implemented by Stukowski [34], based on the alpha-shape method. For both phases, the volume variation depends little on the strain rate. The prescribed compression decreases the volume, typically by about 5 % for A and 10 % for B (Figure 6). Before reaching a somewhat stable flow regime, the packing volume decreases in the first 0.2 of strain. Most of the volume variation occurs within this initial stage; the volume then stabilizes on a plateau before a final increase of the error at larger strains, above 0.6. This trend, and its initial transitory regime, will also be observed for the flow stress (Figure 9a). Regarding the kinematical behavior of a packing (Figure 7), the overall cuboidal shape is conserved, but the sharp edges tend to be blurred along with the strain. This is understood as an effect of the surface tension induced by the attractive component of the contact law. As the discretization by particles creates local defects in the geometry, the initially flat faces become slightly wavy.
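As a rough cross-check of the volume estimation above (and not the alpha-shape reconstruction of [34]), the volume of a nearly convex packing can for instance be bounded with a convex hull of the particle centers:

import numpy as np
from scipy.spatial import ConvexHull

def packing_volume_estimate(centers, r_seed):
    """Crude volume estimate of a nearly convex packing from its
    particle centers: convex hull volume, dilated at first order by
    the seed radius to account for the outer half-particles.
    Overestimates the volume of non-convex shapes; the alpha-shape
    method of [34] remains the reference."""
    hull = ConvexHull(np.asarray(centers))
    return hull.volume + hull.area * r_seed  # curvature terms neglected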
In order to simulate quasi-static phenomena, the behavior of the packing must be independent of the way the strain is applied. The total forces acting on the boundary conditions, the top and bottom meshes, respectively mobile and fixed, must balance. If a mesh moves too fast, the macroscopic equilibrium is lost and the strain localizes next to the moving plane. At a given strain rate, the equilibrium relative error depends on the natural period, but is of the same order of magnitude for all strains. The equilibrium error (Figure 8) is always below 0.1 % for both phases in the studied strain rate range. Typical profiles of the stress-strain curves are presented in Figure 9a. In this section, the true stress is computed using an estimation of the cross-section, based on the current macroscopic strain and the initial volume, assuming its variations (Figure 6) are acceptable. As for the volume evolution, a transitory stage can be observed at the beginning of the deformation, where the stress rises to reach the plastic plateau. The flow stress then oscillates around a fairly constant value. An overshoot effect of the stress can be observed at higher strain rates for phase B.
For each phase, the values at a strain of 0.3 are used to compute the Norton approximation, by least-square fitting (Figure 9b and Table 1). As discussed in Section 3.3, the high sensitivity phase, B, is only valid within one order of magnitude of strain rate; the approximation is not reasonable when the strain rate is out of the studied range. To sum up, two phases, with distinct strain rate sensitivities, are independently defined in this section, with a Norton law approximation of their continuous properties. Numerical properties and precision are evaluated.
Application to Bi-Materials
The DEM parameters have been calibrated separately for each phase. Keeping in mind the limitations of the single material model, we here evaluate the reliability of the model for bi-material composites. Three elementary geometrical bi-material configurations are studied: parallel, series and spherical inclusion. The three geometries are discretized with 5×10 4 particles and uniaxially compressed up to a strain of 0.3, at prescribed strain rates. In the studied configurations, the interaction parameters at the interfaces had little influence on the macroscopic results. They have been set to the average of the parameters of phases A and B. The shape of the phases and the engineering macroscopic stress are used to compare the results with analytical and FEM references, for various volume fractions. The choice of engineering over true stress allows the use of a simple and consistent comparison metric: no unique true stress can be defined for non homogeneous strain configurations.
Total Lagrangian FEM simulations, well suited for our elementary geometrical configurations and limited strains, are run using Code Aster [35]. The visualization of the FEM results is rendered using PARAVIEW [36]. The quadratic tetrahedral elements used rely on an incompressible finite transformation formulation. Top and bottom nodes follow a prescribed vertical motion and the lateral sides deform freely. The geometrical models are reduced using the symmetries of the problems, while the DEM simulates the full geometries. In FEM, at the interface between two phases, the nodes are shared, prohibiting any relative motion, which is the most severe difference from our DEM simulations. In the experimental background of this study, the phases have very little adhesion at the interface. Both materials follow a Norton law. Phase B uses the continuous parameters identified in Section 4 (Table 1). To allow an easier numerical convergence of the model, the numerical strain rate sensitivity of phase A is slightly increased for the FEM simulations (M = 3.05×10 -2 and K = 120 MPa • s M ). In the range 1×10 -4 -1×10 -3 s -1 , the induced relative error on the flow stress is ±3 %.
Parallel Configuration
A cube is vertically divided into two cuboidal phases, for various volume fractions, and vertically compressed at constant strain rates. The engineering stress is compared to a mixture law, linear with the volume fraction.
In this simple configuration, little interaction should take place between the phases, and in ideal conditions, a homogeneous strain for both phases is expected. In the DEM simulations, the global geometry of each phase remains close to a cuboid along the deformation (Figure 10). At given strain rate ε, the true stress in the phases being independently defined by the Norton law, the global true stress σ true can be computed with an elementary mixture law [37, p.99], linear with the volume fraction f of the phase B:
$\sigma_{\mathrm{true}}(f, \dot{\varepsilon}) = f\, K_B\, \dot{\varepsilon}^{M_B} + (1 - f)\, K_A\, \dot{\varepsilon}^{M_A} \qquad (5)$
To provide a consistent metric for all configurations, the engineering stress σ engineer is used as reference. It is computed at a given strain ε (Equation 6), based on the true stress and the volume conservation:
$\sigma_{\mathrm{engineer}}(f, \varepsilon, \dot{\varepsilon}) = \exp(-\varepsilon)\; \sigma_{\mathrm{true}}(f, \dot{\varepsilon}) \qquad (6)$
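Equations 5 and 6 translate directly into code, for instance (a sketch taking the Norton parameters of Table 1 as inputs):

import math

def true_stress_mixture(f, eps_dot, K_A, M_A, K_B, M_B):
    """Equation 5: linear mixture law on the true stress, with f the
    volume fraction of phase B and eps_dot the true strain rate."""
    return f * K_B * eps_dot ** M_B + (1.0 - f) * K_A * eps_dot ** M_A

def engineering_stress(f, eps, eps_dot, K_A, M_A, K_B, M_B):
    """Equation 6: engineering stress at true strain eps, assuming
    volume conservation."""
    return math.exp(-eps) * true_stress_mixture(f, eps_dot, K_A, M_A, K_B, M_B)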
The engineering stress-true strain profile, as in single material configuration, displays a transitory regime, typically in the first 0.15 of strain, with a progressive rise towards the flow stress (Figure 11a).
The DEM model is able to capture, at the precision of the single phases, the linear pattern of flow stress with the volume fraction (Figure 11b). With a rougher discretization, for example only a thousand particles per phase, the result remains qualitatively close, degrading the accuracy by a few percent.
Series Configuration
A cube is horizontally divided into two cuboidal phases, for various volume fractions, and vertically compressed at constant strain rate. Using symmetry, one fourth of the geometry is modeled with the FEM, using approximately 1.3×10 3 nodes. For the full geometry, the ratio of DEM particles to FEM nodes would be a little under 10.
In this geometrical configuration, the strain is not a priori homogeneous anymore. Due to the distinct strain rate sensitivities, one phase preferentially deforms depending on the strain rate, which is observed in both the FEM and DEM simulations. Qualitatively (Figure 12), phase B (bottom phase) deforms more at lower strain rates. Phase A, at high strain rates, deforms more homogeneously in DEM than in FEM, possibly due to more permissive contact conditions between the phases. Thus, the "mushroom" shape is slightly blurred in this strain rate range.
FEM and DEM are in good agreement, after the transient regime observed in DEM, within a few percent of relative error (Figure 13a). In the strain rate validity range, the DEM model is thus able to capture the final flow stress evolution with respect to the volume fraction (Figure 13b). As a side note, the heterogeneity of the strain in the series configuration is responsible for a nonlinear variation of the flow stress with respect to the volume fraction. This effect of the geometry of the bi-material, clearly displayed at 1×10 -3 s -1 (Figure 13b), is correctly reproduced.
Spherical Inclusion Configuration
A single spherical inclusion of phase B is placed in the center of a phase A cube, with a fixed volume fraction of 20 % of phase B inclusion. Using symmetry, one eighth of the geometry is modeled with the FEM, using 2.1×10 3 nodes. For the full geometry, the ratio of DEM particles to FEM nodes would be a little under 3.
Qualitatively, two typical kinematical tendencies of the matrix are displayed in FEM (Figure 14), with an intermediary state of homogeneous co-deformation:
• A barrel shape of the sample, when the flow stress of the inclusion is low, at lower strain rates;
• An hourglass shape, when the flow stress of the inclusion is high, at higher strain rates.
In the FEM simulations, the hourglass shape of the matrix is strongly emphasized by the non-sliding interface between phases. While the barrel shape is easily displayed at low strain rates in DEM, the hourglass shape is only clear at higher strain rates, outside the studied validity range. Although we would expect lower stresses with a less constrained system, the flow stress is overestimated (Figure 15b), by about 10 % over the studied strain rate range, even if the tendency is acceptable after the transient regime (Figure 15a).
To quantitatively compare the models from a kinematical perspective, we study the macroscopic shape factor S f of the inclusion, which is less sensitive than the matrix shape to the interface definition. This factor (Equation 7) is the ratio of the inclusion height H, in the compression direction, to its diameter D (Figure 14), averaged over all perpendicular directions:
$S_f = H/D \qquad (7)$
For the DEM simulations, this value is approximated by computing the shape factor of an equivalent ellipsoid, having the same inertia matrix as the cloud of particles modeling the inclusion. At all strain rates, at the very beginning of the applied strain (Figure 16a), the inclusion remains roughly spherical for a few percent of strain, and then follows a trend similar to the FEM after the transient regime.
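A simplified version of this estimate could read as follows (a sketch assuming equal particle masses and using the laboratory-frame second central moments rather than a full eigen-decomposition of the inertia matrix):

import numpy as np

def shape_factor(centers, axis=2):
    """Shape factor H/D (Equation 7) of the ellipsoid having the same
    second central moments as the particle cloud (equal masses assumed).
    For a uniform solid ellipsoid the semi-axis along a direction is
    sqrt(5 * second central moment); H is taken along the compression
    axis and D is averaged over the two perpendicular directions."""
    r = np.asarray(centers) - np.mean(centers, axis=0)
    moments = np.diag(r.T @ r) / len(r)        # second central moments
    diameters = 2.0 * np.sqrt(5.0 * moments)   # equivalent ellipsoid diameters
    perp = [i for i in range(3) if i != axis]
    return diameters[axis] / diameters[perp].mean()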
In the validity range of phase B, the final shape factor (Figure 16b) is underestimated with a relative error of about 5 %. The inclusion deforms more in DEM than in FEM. To evaluate the error on the shape factor depending on the roughness of the discretization, an identical geometry is modeled with smaller packings of particles. The relative error on the final shape factor is computed using the 1×10 6 particles configuration as reference, where 1×10 5 particles discretize the inclusion. For each size, five random initial packings are tested. The chosen test case is harsh for our model: for the smaller packings, the meshes may interact directly with the particles of the inclusion at the end of the deformation. A very rough description of the inclusion, with 20 particles for example, remains too inaccurate to catch more than an order of magnitude of the deforming trend (Figure 17a), and the initial shape factor is already far from that of a perfect sphere, with little repeatability. In a realistic context, such a rough discretization can only reasonably be used to capture the position of an inclusion in a composite. With a finer discretization, starting with a few hundred particles, the qualitative trend can be captured and the repeatability improves: it becomes possible to estimate the discretization necessary for an arbitrary precision (Figure 17b). The purely geometrical error, on the initial state, is about an order of magnitude smaller than the final error, after compression. For a final error under 10 %, more than 200 particles must discretize the inclusion.
To sum up, three elementary geometrical bi-material configurations were tested in this section. The flow stress and the macroscopic geometrical evolution are compared to FEM simulations. Using a discretization of a few DEM particles per FEM node, the error is of the order of magnitude of the expected precision for a single material. The bi-material simulation does not seem to introduce major error sources. The comparison of the computing time between the numerical methods must be taken with caution and is not detailed here. Indeed, in the FEM approach, a non-linear set of equations must be solved at each time step, and the computing time may vary by several orders of magnitude for different strain rates or material parameters. The DEM model computing time is much more predictable, which is discussed at the end of Section 6.
Computation on a Real 3D Full Sample
As an illustrative example, the methodology is applied to the real 3D microstructure of a full sample, obtained by X-ray microtomography at the ESRF (beamline ID19). The studied material is a metallic composite, with a crystalline copper matrix and spheroidal inclusions of an amorphous zirconium alloy. The total volume of the sample is approximately 0.5 mm 3 , containing a volume fraction of inclusions of 15 %, with diameters up to a few dozen micrometers. The voxelized image has a size of 594×591×669 voxels, with a voxel size of 1.3 µm. The purpose of this section is not to quantitatively compare numerical results with experimental in situ results [38], but to underline the potential of the method for large arbitrary data sets.
Starting from the three-dimensional voxelized image, used as a mask on a random packing of particles, the discretization of the geometry has a low algorithmic cost:
1. A 3D image is binarized into two colors: matrix and inclusions;
2. A cuboidal random packing is built, with the same aspect ratio as the image;
3. The image is fitted to the size of the packing, using an affine transformation;
4. For each particle, the color at its center is used to set the material type (see the sketch below).
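A minimal version of this masking step could look as follows (a Python sketch; the array names and the 0/1 labeling of the binarized image are assumptions):

import numpy as np

def assign_phases(centers, image, box_lengths):
    """Assign a material type to each particle from a binarized 3D image
    (e.g. 0 = matrix, 1 = inclusion) fitted to the packing box.
    centers     : (N, 3) particle centers, assumed to lie in [0, box_lengths)
    image       : (nx, ny, nz) integer array of voxel labels
    box_lengths : (3,) dimensions of the packing box"""
    shape = np.array(image.shape)
    # Affine mapping from packing coordinates to voxel indices
    idx = np.floor(np.asarray(centers) / np.asarray(box_lengths) * shape)
    idx = np.clip(idx.astype(int), 0, shape - 1)
    return image[idx[:, 0], idx[:, 1], idx[:, 2]]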
For very large data sets, a smaller periodic packing can be replicated in all directions, minimizing the cost of generation of this initial packing.
The used packing contains about 3.36×10 6 particles, with a number of voxels to number of particles ratio of 70. As shown in Figure 18, about 170 physical inclusions are discretized, using between 500 and 5 discrete element particles each. The rough discretization of the smallest inclusions, for example the leftmost inclusion in Figure 19, is not precise enough to allow a strain evaluation; only the inclusion position can be tracked. For the bigger inclusions, the expected error on the shape factor after 0.3 of strain is around 10 % (Figure 17b).
The sample is uniaxially compressed up to 0.3 at 1×10 -3 s -1 . Local and global illustrations are given in Figures 18 and 19, with the relative displacement and deformation of the inclusions. The computation was run on an Intel Xeon E5520, using 8 processors. The 6×10 5 steps for 3.36×10 6 particles were executed in 8×10 5 s, less than 10 days. The computation time is linear with the number of steps and of particles. As long as the load is properly balanced between processors and the geometry of the sub-domains keeps the volume of the communications between processors reasonable, the DEM solver scales properly with the number of processors. On the studied geometries, the roughly cuboidal overall shape of the samples allows a simple dynamic balancing of the load between processors. The computing time can thus be reliably estimated on a given machine, roughly 3×10 -6 CPU seconds per particle and per time step, for a single processor in the given example. This section illustrated that the proposed methodology can be applied to large arbitrary real microstructure data. The discretization has a low algorithmic cost, but the model does not yet allow a simple way to locally adapt the discretization roughness. The cost of the computation can be reliably estimated, as the model does not depend on non-linear resolutions.
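With this linear scaling, the wall-clock time of a run can be estimated beforehand; for instance (the 3×10 -6 s per particle and per step figure is the single-processor value quoted above and is machine dependent):

def estimated_wall_time(n_particles, n_steps, n_proc,
                        cpu_s_per_particle_step=3e-6, parallel_efficiency=1.0):
    """Rough wall-clock estimate (in seconds) of a DEM run, assuming a
    cost linear in particles and steps and an ideal load balance."""
    return (cpu_s_per_particle_step * n_particles * n_steps
            / (n_proc * parallel_efficiency))

# Example of this section: 3.36e6 particles, 6e5 steps, 8 processors
print(estimated_wall_time(3.36e6, 6e5, 8))  # ~7.6e5 s, i.e. under 10 days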
Conclusion
In this paper, we propose a DEM model for large compressive strains in plastic and viscoplastic continuous media. The continuous materials are discretized with packings of spheres, with attractive and repulsive interaction forces. A double-radius model is implemented to geometrically manage the history of the contacts. The interaction laws are kept as elementary as possible, with no damping or tangential forces, which would provide only second-order improvements of the behavior.
A calibration of the numerical parameters is proposed to target continuous parameters, approximating a macroscopic viscoplastic Norton law. This model can represent large strains and shape changes of dense materials under uniaxial compression, with a controlled macroscopic volume variation. After a transitory regime at the beginning of the strain, the flow stress and volume stabilize to a plateau value. Despite limitations, in terms of reachable strain rate sensitivities and strain rate validity range, the viscoplastic approximation is valid over several orders of magnitude of strain rate at low sensitivities.
Elementary bi-material geometries are compared to analytical and FEM references. At first order, the inaccuracies in bi-material configurations seem to derive directly from the limitations of the single phase materials. The order of magnitude of the error on the macroscopic flow stresses is in both cases 10 % in the tested configurations. Developing a better single material model should lead to a better composite description. Ongoing work focuses on widening the validity range in strain rate, and on other loading types, like tension or torsion.
The discretization of a real full sample geometry, from a voxelized three-dimensional image, has a low algorithmic cost. For the time being, no size adaptivity mechanism has been implemented, as large data sets can be handled through the parallelization of the computing effort. Further developments could allow a local refinement of the discretization of the geometry.
The methodology cannot be as accurate as the FEM to model a continuous medium. However, to describe phenomena where the motions of the discontinuities and their interactions are predominant, the DEM is innately suited and provides a numerically robust alternative. No complex non-linear system has to be solved at each time step, thus the computation time can be predicted and kept under control. An intended extension of the model is the description of self-contact, for example in metallic foam deformation. Other extensions are the description of topological changes, naturally handled in the DEM framework, for example pore closure or phase fragmentation in composites.
Notation
Figure 2 . 1 :
21 Figure 2.1: Comparison of classes of materials in the space (density,elastic modulus). Definition of "holes" as domains not covered by existing materials in this space. Definition of comparison criteria as isocontours of elastic modulus 1/3 density
Figure 2 . 2 :
22 Figure 2.2: Examples of mesostructures of composites associating two metallic phases. (a) Al 18 % Cu matrix with size controllable Al 2 Cu inclusions [229]. (b) TA6V equiaxial morphology [139, p.762]. (c) TA6V lamellar morphology [56, p.106]. (d) Crystalline dendrites in an amorphous matrix, Mg 71 Zn 28 Ca 1 [129, p.303].
Figure 2 . 3 :
23 Figure 2.3: Liquid state synthesization of amorphous metallic alloys and typical atomic structure. (a) Schematic time-temperature-transformation diagram for solidification. Two routes from liquid to solid state. Route 1 leads to a crystallized structure. Route 2: critical cooling rate, leading to an amorphous solidification. Illustration from [226, p.52]. (b) Amorphous solids are characterized by the absence of long-range order. The atoms are locally organized -icosahedral clusters are circled in red in this example -displaying only short range order. Illustration from [207, p.422].
Figure 2 . 4 :
24 Figure 2.4: Macroscopic effects of crystallization on amorphous Zr 57 Cu 20 Al 10 Ni 8 Ti 5 . (a) Stress-strain plot, uniaxial compression: temperature 405 • C, strain rate 4.42•10 -4 s -1 . Drastic increase of the flow stress at a strain of 0.3. Crystallization time around 700 s. From [85, p.4]. (b) Estimation of the crystallization time with respect to temperature.
Figure 2 . 5 :
25 Figure 2.5: Two-dimensional schematics of atomic scale deformation mechanisms in solid metallic alloys. Macroscopic permanent strain stem from irreversible relative motion of atoms. (a) Crystalline idealized long-range order. A typical deformation mechanism: dislocation motion (physical lattice fault) in a crystal. Illustration from [228, p.369]. (b) Amorphous atomic structure. Mechanisms of shear transformation zone (STZ), first proposed by Argon [11]. Collective rearrangement (dynamic event, not a structural defect) of local clusters of dozens of atoms, from a low energy configuration to another. Illustration from [204, p.4068].
Figure 2 . 6 :
26 Figure 2.6: Deformation maps and macroscopic behavior: influence of the temperature and stress configuration. (a) Dominant deformation mechanisms in pure copper (normalized axis for isomechanical group of f.c.c. metals). Map from [80, Fig. 4.7]. (b) Generic amorphous metallic alloy behavior (absolute stress values given for Zr 41.2 Ti 13.8 Cu 12.5 Ni 10 Be 22.5 ). Dominant flow mechanism for the whole map: shear transformation zone (STZ). Map from [204, p.488].
Figure 2 . 7 :
27 Figure 2.7: Macroscopic viscoplastic behavior of Zr based amorphous alloys, high temperature uniaxial compression tests. (a) Typical stress-strain plot of a strain rate jump: Zr 52.5 Cu 27 Al 10 Ni 8 Ti 2.5 , between 400 -430 • C and 2.5•10 -4 -2.5•10 -3 s -1 . Plot from [182, p.64]. (b) Flow stress map of amorphous Zr 57 Cu 20 Al 10 Ni 8 Ti 5 between 380 -410 • C and 2•10 -4 -4•10 -3 s -1 . Norton law approximation at 400 • C: M ≈ 0.75.
Figure 2 . 8 :
28 Figure 2.8: Ductility in amorphous crystalline composite. Illustrations from [104, p.1086,1088]. (a) Comparison of materials in the space (elastic modulus,mode I fracture toughness). Isocontours of critical energy for crack propagation. Tensile test at room temperature: (b) Ductile rupture of Zr 39.6 Ti 33.9 Nb 7.6 Cu 6.4 Be 12.5 amorphous/crystalline composite. (c) Typical brittle rupture of an amorphous alloy.
Figure 2 . 9 :
29 Figure 2.9: Examples of amorphous / crystalline metallic composites. (a) In situ elaboration from liquid state. Bridgman solidification of Zr 37.5 Ti 32.2 Nb 7.2 Cu 6.1 Be 17.0 . Dendritic crystalline inclusions in an amorphous matrix. SEM image from [179, p.2]. (b) Ex situ elaboration by powder technology from solid state, using spark plasma sintering (SPS). Amorphous spherical inclusion of Zr 57 Cu 20 Al 10 Ni 8 Ti 5 in an aluminum (80 vol • %) matrix. SEM image from [169, p.113].
Figure 2 . 10 :
210 Figure 2.10: Stratified composites elaborated ex situ by hot co-pressing of amorphous Zr 52.5 Cu 27 Al 10 Ni 8 Ti 2.5 and crystalline light alloys. SEM images from [182, p.96,162]. (a) Three-layer with magnesium alloy (AZ31): {amorphous,crystalline,amorphous}. Effect of the co-pressing temperature on the relative thickness of the layers. (b) Multi-layer with aluminum alloy (Al-5056): {11 × crystalline, 10 × amorphous}.
Figure 2 . 11 :
211 Figure 2.11: Stress-strain room temperature behavior of crystalline / amorphous metallic composite (see Figure 2.9b). Effect of the volume fraction, from pure amorphous Zr 57 Cu 20 Al 10 Ni 8 Ti 5 to pure crystalline aluminum [169, p.114].
Figure 2 . 12 :
212 Figure 2.12: Rupture at room temperature of a metallic composite. Crystalline aluminum 1070 matrix and 10 %vol of base Zr alloy amorphous spheroidal inclusions. (a) Final coalescence of multiple events of decohesion between matrix and inclusions. Slice of a reconstructed tomography image [73, p.96]. (b) Postmortem SEM image [73, p.97].
3
Tx ≈ 440 • C and Tg ≈ 380 • C, the forming window is thus Tx -Tg ≈ 60 • C [169, p.113]. See also Figure 2.4b and Figure 2.7b for a qualitative illustration of the kinetics of thermal activation of both transitions.
Figure 2 . 13 : 1 ) 7075 Figure 2 . 14 :
21317075214 Figure 2.13: SEM views of the initial state of the powders at identical scales. (a) Atomized amorphous Zr 57 Cu 20 Al 10 Ni 8 Ti 5 : spheroidal powder. SEM view from [73, p.41]. (b) Electrolytic crystalline copper: dendritic powder.
Figure 2 . 15 :
215 Figure 2.15: Model material composite: crystalline copper matrix, amorphous Zr 57 Cu 20 Al 10 Ni 8 Ti 5 spheroidal inclusions (15 % volume fraction). Cross-sections from 3D tomography reconstructions (refer to Section 2.4. Images from [85, p.18]. (a) As elaborated state. View of a full millimetric sample prepared for compression tests. (b) Postmortem co-deformed state of the same sample. Compression at 400 • C and 2.5•10 -4 s -1 (compression direction: vertical).
Figure 2 . 16 :
216 Figure 2.16: Encapsulated co-extrusion of compacted powders. (a) Schematic of the copper capsule containing the compacted powder (cross-hatched) during the extrusion. The capsule is closed by a steel lid. (b) Hot extruded and initial states. The composite is protected by the capsule and is not visible.
1 mm (Figure 2.16b), the extrusion ratio [18, volume 2 p.118] is thus approximately 5. For low volume fractions of amorphous phase (< 50 %), the extrusion force is typically in the range 2•10 3 -3•10 3 daN. The material stays approximately 10 min at 380 • C, which roughly corresponds to one fourth of the crystallization time at this temperature (Figure 2.4a).
Figure 2 . 17 :
217 Figure 2.17: Overview of tomography principles and scope. (a) Size scales of threedimensional imaging tools. Chart from [234, p.409]. (b) Main step of X-ray attenuation tomography, from 2D images to 3D volume reconstruction. Schematic from [38, p.291].
Figure 2 . 18 :
218 Figure2.18: Energy absorption for distinct elemental media, depending on the energy of the photons. The energy range at beamline ID19 is 10 -2 -2.5•10 -1 MeV. In situ measurements were made at 6.8•10 -2 MeV. Data from[START_REF] Hubbell | X-ray mass attenuation coefficients[END_REF].
Figure 2 . 19 :
219 Figure 2.19: Main phenomenon of interest in the model material: co-deformation of the phases under compressive strain; interface interactions at the mesoscale (potential topological event). Tomography views from [35].
Figure 2 . 20 :
220 Figure 2.20: Typical defects appearing under compressive load in the model material. Additional topological events. (a) Decohesion at the interfaces of the phases. (b) Crystalline matrix decohesion. Tomography views from [85, p.18].
Figure 3 . 1 :
31 Figure 3.1: Two examples of models of the department of Savoie. (a) A phone book. (b) A topographic map.
Figure 3 . 2 :
32 Figure 3.2: Example of an indeterminate problem in classical mechanics. At initial state, the central ball is stationary, the two others move toward it with equal velocity from an equal distance. The final velocities discontinuously depend on the initial distances [82, p.292].
Figure 3 . 3 :
33 Figure 3.3: Example of analogous scale model. Scale model of a heap of 10 3 ballbearings, used to investigate the structure of molecules in liquids. Illustration from [30, plate 15].
Figure 3 . 4 :
34 Figure 3.4: Sketch of the 1757 attempt to compute the orbit of Halley's Comet, with considered gravitational forces. The modeling objective was to predict the next perihelion of the Comet. The Sun is considered fixed, the actions of the Comet on Saturn and Jupiter are neglected. The trajectories of Jupiter and Saturn are computed in parallel with the trajectory of the Comet.
Figure 3 . 5 :
35 Figure 3.5: Rough orders of magnitudes of computing speed for single processing units.The performance of modern machines can no longer be measured on this scale, they are designed to use multiple units in parallel. Mixed data for multiplication and addition of 10-digit and 16-digit numbers, from[88, p.137],[39, p.887],[148, p.33],[63, p.14],[START_REF]Processing power compared[END_REF] and[START_REF] Strohmaier | TOP500 list[END_REF].
Figure 3 . 6 :
36 Figure 3.6: Frequency and consumption of Intel x86 processors: the Prescott consumed much more for little performance improvement. Thermal issues led to the abandonment of the Pentium 4 line, replaced by chips with lower frequency and multiple processors. Illustration from [63, p.14].
Figure 3 . 7 :
37 Figure 3.7: Possible credibility assessment methodology in the modeling process. An alternative route is proposed for the resolution of the conceptual model: an intermediary analogous conceptual model. All the steps to move from one block to the other are potential error sources. Inspired from [203].
Figure 3 . 8 :
38 Figure 3.8: Weak/strong typology of discontinuities, example of a bi-material. (a) Reference configuration of a bi-material. (b) Weak discontinuity: the displacement field is continuous across the interface. The derivatives of the displacement field, e.g. the strain field, may be discontinuous across the interface. (c) Strong discontinuity: the displacement field may be discontinuous across the interfaces. A strong discontinuity does not necessarily imply decohesion.
Figure 3 . 9 :
39 Figure 3.9: Contact/self-contact typology of discontinuity interactions. (a) Contact between two distinct inclusions in a matrix. (b) Self-contact of the boundary of a hole in a matrix.
Figure 3 . 11 :
311 Figure 3.11: Examples of topological events with negligible strain of the solids. (a) The initiation and the branching of the crack are not homeomorphic configurations. The sole propagation of the crack is mathematically not a topological event. Regardless, the phenomenon may be delicate to model. (b) The inclusion fragmentation or healing of the inclusion are not homotopy equivalent. No topological change for the matrix.
Figure 3 . 12 :
312 Figure 3.12: Examples of topological events. (a) Pore opening or closure. (b) Neck creation and breakage: merge and split of the inclusions, merge of the two holes in the matrix.
Figure 6 . 1
61 Figure 6.1
Figure 4 . 1 :
41 Figure 4.1: Key modeling choice: the conceptual topology of the idealized material.In both cases, the unknown field of the system is x. (a) Continuous material: within a domain a continuous constitutive law is valid, typically described by a partial differential equation (PDE), with a differential operator L and a second member f . (b) Discrete material: distinct elementary entities interact with each other. The action f on a given entity i is the summed effects of all its interactions with surrounding entities j.
Figure 4 . 2 :
42 Figure 4.2: Key modeling choice: the kinematical standpoint. (a) Reference configuration, initial computation point position. (b) Lagrangian (or material) standpoint. The position of the material points is explicitly tracked in the deformed state by the motion of the computing points. (c) Eulerian (or spatial) standpoint. The flow of the material is measured at fixed computing points.
Figure 5 . 1 :
51 Figure 5.1: Reading grid for the comparison of some numerical strategies based on a Lagrangian kinematical standpoint. The outline of Chapter 5 is based on this graph, from which the corresponding sections to the modelization choices are cross-referenced.Positioning of the meshless methods set and the discrete element method (DEM), the main numerical tool used in this work. See also Figure6.1 for a pairwise comparison of a selection of methods.
Figure 5 . 2 :
52 Figure 5.2: Large strain in forging process simulation, remeshing procedures. (a) Initial state and first mesh. The geometry is axisymmetric, only one sector is modeled. (b) Deformed state of the first mesh. (c) Remesh: a second mesh is built and replaces the first one. The axisymmetry will be lost, the full part has to be modeled. (b) Intermediary state with the second mesh. (e) Deformed state and a third mesh is built. Illustration from [45, p.127-128].
Figure 5 . 3 :
53 Figure 5.3: Illustration of the PIC from [225, p.247] on a 2D axisymmetric model. Impact of an elastoplastic rod, initially cylindrical, on a rigid surface. Superimposition of the Lagrangian markers and the computing grid for a deformed state.
Figure 5 . 4 :
54 Figure 5.4: Illustration of the principle of the XFEM from [157, p.138]. Arbitrary crack over a mesh. At the circled nodes, enriched shape functions are used.
Figure 5 . 5 :
55 Figure 5.5: Illustration of the kernel function of the SPH [79, p.33]. Fields are locally approximated for each computing point, using the kernel.
Figure 5 . 6 :
56 Figure 5.6: Early MD-like code illustration, with 256 particles in 2D. Discrete constitutive law analogically applied to the study of instabilities in bi-phased liquid (denser phase initially on the top) under the influence of gravity. Typical temporal evolution from initial state. Illustrations from [166, p.7-9].
Figure 5 . 7 :
57 Figure 5.7: Examples of (a) Disk fragmentation under impact [202, p.5]. (b) Crack propagation in composite [217, p.9], with typical path depending on the relative properties of the phases.
Figure 6 . 1 :
61 Figure 6.1: Graphical view of the comparison, proposed in Section 6.1, between discrete element method (DEM), smooth particle hydrodynamics (SPH), element-free Galerkin method (EFG), finite element method (FEM) and lattice model. Emphasize on pairwise and circular relationship. For each pair of methods, a key common feature (in the center) and a key distinct feature (in the periphery) are highlighted. Refer to Figure5.1 for a more comprehensive and systematic lecture grid.
Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Interaction Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 Time Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.1 Velocity Verlet Algorithm . . . . . . . . . . . . . . . . . . . . . . . 8.3.2 Time Step Choice . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.3 Parameter Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . 8.4 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.5 Parallel Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.6 Control Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Principle of the Developed Method 9.1 Finite Inelastic Transformation . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Discretization of Continua . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 Contact Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5 Self-Contact Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . 10 Chosen Numerical Tools 10.1 DEM Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 FEM Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Figure 9
9 Figure 9.2
Figure 8 . 1 :
81 Figure 8.1: Schematic DEM time loop. Cross-references to the corresponding sections. The boundary conditions of the system are examined in Section 8.4. A more comprehensive graph can be found Figure 9.1 on page 100.
Figure 8 . 2 :
82 Figure 8.2: Neighbor detection algorithm. Potential neighbors for the dark particle are only searched for in its own cell and in the neighboring cells (shaded in red).
Figure 8 . 3 :
83 Figure 8.3: From kinematics to forces. (a) Position and velocity of a pair of interacting particles (A, B). (b) Reciprocal forces: f B→A = -f A→B , typically computed from relative position and velocity of the particles.
Figure 8 . 4 :
84 Figure 8.4: Elementary ideal spring-mass system of stiffness k and mass m (see also Equation 8.4).
Figure 8 . 5 :
85 Figure 8.5: Rough orders of magnitude of oscillation time scale of hypothetical spherical objects with Hertzian contacts, for a relative indentation i r = 1 % (Equation8.11). The values for "elastomer" and "ceramic" are not representative of real materials. They are made-up using extreme ρ and E values for the class of material[12, p.5], to illustrate theoretical critical cases reachable with dense materials. Materials as aluminum and silica display very similar tendency and they cannot be distinguished from steel at the chosen scale of the graph.
Figure 8 . 6 :
86 Figure 8.6: Examples of DEM coupling. (a) Solid dynamics: excavation of rocks [94, p.3]. (b) FEM: interaction between a tire and a snow layer [151, p.167]. (c) FVM: effect of the motion of an impeller in a fluid on a bed of particles [34, p.35].
AA
(ref) . The local fields, at the level of the elementary particles can be approximated and reconstructed in local neighborhoods. The components of the local stress tensor are classically computed as follows[44, p.162] [140, Chap. 2]:
Figure 9 . 1 :
91 Figure 9.1: General DEM framework of the introduced features (circled), with crossreference to the corresponding sections. The chosen numerical tools are presented in Section 10.1, see also Table 10.1.
Figure 9 . 2 :
92 Figure 9.2: Principle of finite inelastic transformation modeling. The particles collectively rearrange to cope with the strain. Particles may arbitrarily change neighbors and the overall volume is meant to be conserved.
Figure 9 . 3 :
93 Figure 9.3: Geometry of the interaction of a pair of particles (A,B) and normal forces f . (a) Seed interaction, repulsive normal forces. (b) Crown interaction, attractive normal forces.
Figure 9 . 4 :
94 Figure 9.4: Interaction laws: signed force f versus distance h. Classical DEM conventions are applied: repulsive forces are positive. For each graph, attractive to repulsive force ratio and radii to scale. Refer to Table 9.2 for the constant used. (a) BILIN : compression dominated loads, tunable strain rate sensitivity (Part IV, Equation 9.1). (b) TRILIN : tension-compression loads, fixed strain rate sensitivity (Part V, Equation 9.2).
Figure 9.5: Boundary conditions. (a) Geometry of the mesh/particle interaction. Crown interaction example. (b) Typical boundary conditions for uniaxial loads. Top and bottom planar meshes with prescribed motion, free lateral sides.
Figure 9.6: Principle of the discretization from a segmented image. A segmented image is used as a mask on a dense random packing to individually set the properties of the particles.
Figure 9.7: Principle of the detection of contact between distinct objects. Each particle is assigned to an object at the initial state, here 0, 1 or 2. In the deformed state, four pair interactions (circled) are considered as events between objects 1 and 2. The neighbor changes in object 2 simply account for inelastic strain in this solid; they are not considered as physical contacts.
Figure 9.9: Principle of the detection of self-contact events. Example of a hole in an infinite packing. (a) Initial state. All pairs are considered as "internal" interactions. The outward vectors n of each particle are computed. (b) Deformed state. The newly created pairs {i, j} are classified as "interface" or "internal", based on the respective orientation and magnitude of n_i and n_j. The circled pairs, e.g. {7,21} and {13,24}, are classified as "interfaces". In contrast, the new pair {7,16} is detected as "internal". Particle 11 moved away from the free surface; the magnitude n_11 has reduced.
Figure 9.10: A newly created pair of particles {i, j}, with the respective outward vectors {n_i, n_j}.
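The bookkeeping sketched in Figures 9.9 and 9.10 can be summarized as follows. The outward vector definition matches the appendix description (minus the sum of the branch vectors to the neighbors); the precise form of the three threshold tests is an assumption of this sketch, only their roles matching the parameters N_mag, cos α_en and cos α_ij of Figure 15.4:

import numpy as np

def outward_vector(i, pos, neighbors):
    # n_i = - sum of branch vectors from particle i to its current neighbors
    return -sum(pos[j] - pos[i] for j in neighbors[i])

def classify_new_pair(i, j, pos, n, mag_min, cos_en_min, cos_ij_max):
    # Return "interface" or "internal" for a newly created pair {i, j}
    mag_i, mag_j = np.linalg.norm(n[i]), np.linalg.norm(n[j])
    if mag_i < mag_min or mag_j < mag_min:
        return "internal"  # both particles must sit near a free surface
    e_n = pos[j] - pos[i]
    e_n = e_n / np.linalg.norm(e_n)
    faces_gap = (np.dot(n[i], e_n) / mag_i > cos_en_min and
                 np.dot(n[j], -e_n) / mag_j > cos_en_min)
    opposed = np.dot(n[i], n[j]) / (mag_i * mag_j) < cos_ij_max
    return "interface" if (faces_gap and opposed) else "internal"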
Figure 11.1: Pairwise interaction law BILIN used in this part. Force f versus distance h. Attractive to repulsive force ratio and radii to scale. Classical DEM conventions are applied: repulsive forces are positive. Refer to Section 9.1 for the full description of the law.
Figure 11.2: Normalized flow stress at a strain of 0.3 for 5·10^3 particles versus prescribed strain rate. Influence of the natural period on strain rate sensitivity.
Figure 11.4: Macroscopic equilibrium for single material. Relative error versus strain rate for 5·10^3 particles, strain 0.3. Effect of the natural period.
Figure 11.5: Temporal convergence for 5·10^2 particles. (a) Normalized flow stress versus strain for a natural period t_0 = 1 s. (b) Relative error on the flow stress at a strain of 0.2 versus the ratio of the time step and the natural period ∆t/t_0. The reference for t_0 = 1 s is ∆t/t_0 = 10^-6. The reference for t_0 = 10^-2 s is ∆t/t_0 = 10^-4. The ∆t/t_0 values are highlighted for the phases A and B, used later on.
Figure 11.7: Spatial convergence for single material, from 30 to 1·10^6 particles. (a) Flow stress versus strain, using five distinct initial random packings for each packing size. (b) Relative error versus packing size, in regard to the converged simulation (1·10^6 particles packing).
Figure 11.8: Volume conservation for single material. Relative error on volume versus strain for 5·10^4 particles.
Figure 11.9: Single material packing. Natural period 0.8 s, strain rate 3.16·10^-4 s^-1, 5·10^4 particles.
Figure 11.10: Single materials. Phases A and B. Flow stress and strain rate sensitivity. (a) Flow stress versus strain. Effect of the strain rate. (b) Norton law approximation, based on flow stress values at a strain of 0.3. Strain rate sensitivity of both phases in the range 1·10^-4 - 1·10^-3 s^-1.
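The Norton law fit of Figure 11.10b amounts to a linear regression in log-log space of the flow stresses measured at a strain of 0.3 against the prescribed strain rates. A sketch with made-up values in place of the thesis data:

import numpy as np

def fit_norton(strain_rates, flow_stresses):
    # sigma = K * eps_dot**M  ->  log(sigma) = M * log(eps_dot) + log(K)
    M, log_K = np.polyfit(np.log(strain_rates), np.log(flow_stresses), 1)
    return np.exp(log_K), M

K, M = fit_norton(np.array([1e-4, 3.16e-4, 1e-3]),   # prescribed strain rates (1/s)
                  np.array([20.0, 27.0, 36.0]))      # made-up flow stresses (MPa)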
Figure 12.1: Bi-material parallel configuration. Front view, transverse cross-section. Strain rate 4.64·10^-4 s^-1.
Figure 12.2: Bi-material parallel configuration. Effect of the strain rate on the flow stress. Theoretical reference: mixture law from Equation 12.2. (a) Engineering flow stress versus strain. Fixed volume fraction: 0.5. (b) Linear trend of the engineering flow stress, at a strain of 0.3, with the volume fraction.
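Equation 12.2 is not restated in the caption; assuming the classical iso-strain form for a parallel arrangement loaded along the phase boundaries, which is consistent with the linear trend in volume fraction reported in panel (b), the mixture law reads, with f_A and f_B the volume fractions:

\sigma_{\mathrm{mix}}(\dot{\varepsilon}) \;=\; f_A\,\sigma_A(\dot{\varepsilon}) + f_B\,\sigma_B(\dot{\varepsilon}), \qquad f_A + f_B = 1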
Figure 12.3: Bi-material series configuration. Volume fraction 0.5. Bottom phase: B (high sensitivity). Front view, transverse cross-section.
Figure 12.5: Bi-material spherical inclusion (phase B) configuration. Front view, transverse cross-section.
Figure 12.6: Bi-material spherical inclusion configuration. Effect of the strain rate on the flow stress. Numerical reference: FEM simulations. Unique volume fraction of inclusion (phase B): 0.2. (a) Engineering flow stress versus strain. (b) Engineering flow stress versus strain rate, at a strain of 0.3.
Figure 12.7: Bi-material spherical inclusion configuration. Effect of the strain rate on the shape factor of the inclusion. Numerical reference: FEM simulations. Unique volume fraction of inclusion (phase B): 0.2. (a) Shape factor versus strain. (b) Shape factor versus strain rate, at a strain of 0.3.
Figure 12.9: Bi-material spherical inclusion configuration. Spatial convergence: effect of the number of particles discretizing the inclusion on the shape factor. (a) Shape factor versus strain. Typical results for three (out of a total five computed) random packings. (b) Error on the shape factor versus the number of particles used to discretize the inclusion. Reference for relative error: 1·10^5 particles used to discretize the inclusion. Minimum, maximum and average error for five random packings.
Figure 13.1: Discretization and compression of the full sample. 3D view of the inclusions only, the matrix is hidden. Vertical compression axis.
Figure 13.3: Quantification of the error introduced by the underestimation of the strain rate sensitivity. (a) Norton law fit for numerical and experimental data. (b) Shape factor of an initially spherical inclusion after a strain of 0.3 at various strain rates. Effect of the strain rate sensitivity for the FEM simulation.
Figure 13.4: Estimation of the strain rate from the distance between tracked inclusions.
Figure 13.7: Attempt to apply the temporal evolution measured on Figure 13.6 to the evolution of the shape factor of a single inclusion. In the simulation taking into account a crystallization effect, the initial time offset of 10 min is considered. Nominal strain rate 5·10^-4 s^-1.
Figure 13.8: Simulation of a local configuration at a nominal strain rate of 5·10^-4 s^-1. Initial diameter of the central inclusion 55 µm. (a) Cross-section of the segmented tomography volume. Hidden matrix for the DEM simulation. (b) Evolution of the shape factor S_f of the central inclusion with the strain.
Figure 13.9: Example of a pair of close inclusions [35, p.19]. Cross-sections of the reconstructed and segmented tomography volume. Hidden matrix for the DEM simulation.
Figure 13.10: Example of a hollow inclusion [35, p.19]. Cross-sections of the reconstructed tomography volume. Slice of the inclusion, with hidden matrix, for the DEM simulation.
Figure 14.1: Pairwise interaction law TRILIN used in this part. Force f versus distance h. Attractive to repulsive force ratio and radii to scale. Classical DEM conventions are applied: repulsive forces are positive. Classical mechanical convention will be applied in the discussion: tensile stress will be positive. Refer to Section 9.1 for the full description of the law.
Figure 14.2: Cross-section for uniaxial compression. Packing of 5·10^4 particles at -3.16·10^-4 s^-1. See also Figure 14.6.
Figure 14.4: Stress-strain behavior for TRILIN, using 5·10^4 particles. Under tensile load: stress drop at necking and null stress after rupture.
Figure 14.5: Norton law approximation for TRILIN. (a) Flow stress versus strain rate. Norton law parameters for the average behavior: M = 7.14·10^-2, K = 49.1 MPa·s^M. (b) Sensitivity versus strain rate.
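As a quick worked reading of these fitted parameters, not a value measured on the figure itself, the average flow stress predicted at the intermediate strain rate 3.16·10^-4 s^-1 would be:

\sigma = K\,\dot{\varepsilon}^{\,M} = 49.1\ \mathrm{MPa\cdot s^{M}} \times \left(3.16\cdot 10^{-4}\ \mathrm{s^{-1}}\right)^{0.0714} \approx 28\ \mathrm{MPa}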
Figure 14.6: Cross-section for uniaxial test, tensile and compressive load. Packing of 5·10^4 particles.
Figure 14.7: Relative error of the macroscopic equilibrium versus strain rate.
Figure 14.9: Cross-section for uniaxial tension test. Influence of number of particles on the rupture profile under tensile load at 10^-3 s^-1. A size effect can be observed, with an approximate qualitative threshold between 3·10^4 and 1·10^5 particles.
Figure 14.10: Effect of the geometrical discretization on the flow stress of the packing at a strain rate of 3.16·10^-4 s^-1. (a) Comparative tendencies of the stress-strain curves. (b) Relative error on the flow stress with respect to a packing of 3·10^6 particles.
Figure 14.11: Evolution of the relative error on the volume versus strain. Influence of the packing size at 3.16·10^-4 s^-1.
Figure 15.1: Self-contact detection algorithm. Refer to Section 9.5 for the full description of the algorithm.
Figure 15.2: Test case to quantify the errors of the self-contact detection algorithm, used to choose the self-contact threshold parameters. Two cuboids are crushed and pulled apart. Cross-section for an arbitrary set of parameters.
Figure 15.3: Typical distribution of the magnitude n of the outward vector. Initial state of a cuboid with a spherical hole, corresponding to Figure 15.8.
Figure 15.4: Pareto front of the two error types in the classification of new interactions between particles. The chosen configuration is crossed out. (a) Magnitude threshold N_mag. (b) Angle threshold cos α_en. (c) Angle threshold cos α_ij.
Figure 15.7: Compression-tension of a spherical pore r/a = 0.15. Slice of the sample and visualization of the outward vectors.
Figure 15.8: Compression-tension of a spherical pore r/a = 0.15. Effect of a healing time of the "interface" pairs. Slice and visualization of the outward vectors.
Figure 16.1: Discretization of a set of porosities. (a) 3D reconstruction of the tomography. The largest pore is roughly 100 µm wide. (b) Mesh reconstruction on the particles with large outward vector magnitude n inside the sample.
Figure 16.2: Mesh reconstruction of the pores during a uniaxial compression-tension test.
Figure 16.3: Re-opening of the pores during the tension phase of a uniaxial compression-tension test.
Figure 16.4: Evolution of the relative volume of the pores from the initial state.
Figure 16.5: Discretization of an open cell foam. Perspective view. (a) 3D reconstruction of the tomography. (b) Packing after removal of particles and relaxation.
Figure 16.6: Compression up to a strain of 3.0 of the foam at 1·10^-3 s^-1. Left view, relative density 6.6 %.
Figure 16.7: First percent of strain. Crushing of isolated arms by the plane until the effort is distributed enough to be transmitted to the structure.
Figure 16.8: Slice through the middle of the sample (width = 2r_seed), with representation of the outward vectors.
Figure 16.9: Bending of an isolated arm. Detail from Figure 16.12, on the right side of the sample.
Figure 16.10: Collapse of a cell, by bending of the horizontal arms. Detail from Figure 16.12, a little above the center of the sample.
Figure 16.11: Large rotation, by deformation up to self contact of an "empty" zone. Front view.
Figure 16.12: Right view, relative density 6.6 %, absolute strain rate 10^-3 s^-1. Compression from a strain of 0.05 to 0.85. At a strain of 0.05, the two circled details are zoomed on Figures 16.9 and 16.10.
Figure 16.13: Apparent engineering stress versus strain. Linear scale. Refer to Figure 16.14a for a logarithmic scale.
Figure 16.14: Macroscopic behavior of the foam under large compressive strain. Relative density 6.6 %, strain rate 10^-3 s^-1. (a) Apparent engineering stress versus strain. In dashed lines: asymptotic true flow stress and estimated flow stress for the foam (Equation 16.1). The averaging window is identical for the two proposed sampling intervals. (b) Rough approximations of the relative density versus strain.
Figure 16.15: Discretization of three relative densities. Right view.
Figure 16.16: Limited effect of the relative density on the qualitative deformation. Right view.
Figure 16.18: Minor effect of the chosen discretization. Strain-stress profile for a relative density of 6.6 % at a strain rate of 10^-3 s^-1.
Figure 16.19: Effect of the strain rate on the mechanical behavior. Mechanical instability at low strain rate. Stress versus strain at various relative densities.
Going back to the example of the collapsing cell (Figure 16.10 on page 176), an estimation of the stress field is given in Figure 16.21.
Figure 16.21: Collapse of a cell, by bending of the horizontal arms. The arrows mark the two pillars crushing the cell. The dashed ellipse highlights a bending zone of interest. Estimation of the local stress field. Same geometrical zone as Figure 16.10 on page 176. General view on Figure 16.22.
Figure 16.22: Overall view of the local stress field estimation. Strain 0.45, strain rate 10^-3 s^-1 and relative density 6.6 %.
Figure 17.1: Co-deformation of the model composite material, observed by in situ X-ray tomography. From Figure 2.19 on page 35.
Figure 17.2: Principle of the finite strain modeling. The packing of particles rearranges to mimic inelastic transformation. From Figure 9.2 on page 101.
Figure 17.3: Viscoplastic bi-material. Comparison of the deformed morphology obtained by FEM and DEM. From Figure 12.5 on page 135.
Figure 17.4: Principle of the self-contact detection algorithm. A local "outward vector" is built for each particle. New pairs are classified as "interface" or "internal". From Figure 9.9b on page 109.
Figure 17.5: Test case for the self-contact detection algorithm. The interface is tracked under compressive load and separates under tensile load. From Figure 15.5 on page 165.
Figure 17.6: Discretization and compression of the full sample. 3D view of the inclusions only, the matrix is hidden. Vertical compression axis. From Figure 13.1 on page 140.
Figure 17.9: Conceptual distinction between a genuine Lagrangian tracking of material points in continuous media and the proposed DEM method. (a) Motion of material points in a continuous medium. (b) Rearrangement of particles in the proposed model.
Figure 17.10: Dominant physical phenomenon and potential numerical tools.
Figure A.1: Cross-section of a single material for a strain of 0.3, phase B. Map of the local stress component σ_zz, the measured macroscopic stress at 10^-3 s^-1 is 175 MPa.
Figure A.2: Cross-section of a single material for a strain of 0.3, phase A. Map of the local stress component σ_zz, the measured macroscopic stress at 10^-3 s^-1 is 94 MPa.
Figure B.1: Illustration of an MPI bug. Compression of a dense sample, using the self-contact detection. Top view, no transparency for the particles with at least one "interface" detected. (a) MPI grid 1×1×1. Correct behavior: no interface detected. (b) MPI grid 2×2×1. Erroneous behavior: interfaces detected at the boundaries of the processors. The self-contact variables are not correctly passed between processors.
Source listing, normal_model_zcherry_faistenau.h (LIGGGHTS 3.5.0 contact model): implementation of the BILIN seed/crown pair law. The per-type-pair stiffnesses krep and katt are registered through the property registry, the per-particle seed diameter is read from a property/atom fix, and a velocity on/off switch controls the crown branch. In surfacesIntersect(), a pair closer than the seed radius sum receives the repulsive force krep·(seed_rad_sum - r); a pair in the crown receives the attractive force katt·(seed_rad_sum - r), but only when the normal relative velocity is tensile (vn > 0) or the velocity switch is disabled, and zero force otherwise. Wall/particle interactions follow the same branches, with half the seed radius sum and the particle's own stiffnesses. No damping force is implemented, and the resulting normal force is applied with opposite signs to the two particles of the pair.

B.3 Sources: Interaction Law TRILIN, Contact and Self-Contact
Source listing, normal_model_zcherry_outward.h (LIGGGHTS 3.5.0 contact model): pair law with contact and self-contact detection between free interfaces, based on an outward normal vector computed for each particle as minus the sum of the branch vectors to its neighbors. The companion fix (fix_outward.h / fix_outward.cpp) manages the particle-wise variables, their memory and the MPI communication; the contact model reuses the already computed pair data so that the neighbor loop is not run twice, at the cost of storing both a "current" and a "next" value of each outward variable (cheap, and it avoids re-coding the contact detection inside the fix while giving easy access to contact-wise variables and to a potential particle/wall behavior).
Particle-wise variables: seed, the repulsive diameter of the particle; cluster, the membership to an aggregate used for contact detection between distinct objects; outward and outward_mag, the outward pointing vector and its magnitude from the previous time step; outward_next, the vector contributed to during the current time step; outward_numberNeigh, the number of contacts not across an interface, from the previous time step.
Interaction parameters: mag_out, threshold on the magnitude of the outward vector; cos_ij, threshold on the angle between the two outward vectors; cos_en, threshold on the angle between an outward vector and the position difference; nonlocal, switch for a non-local behavior; wall, multiplicative factor for the wall/particle attraction; heal, time after which an "interface" interaction is turned into a "bulk" interaction; verb, output of testing data. Warning: a (currently) hard-coded parameter additionally excludes the contacts created during the first 500 steps from being classified as interfaces, corresponding to the relaxation time used in the simulations.
The class registers the attractive force threshold f_att and the scalar parameters above, retrieves the per-atom fixes, and in surfacesIntersect() classifies every newly created particle/particle pair: pairs between distinct clusters are recorded as contact events between objects, while pairs within the same cluster are classified as "interface" or "internal" from the magnitude and angle thresholds; error and good counters are kept for testing purposes.

B.3.3 fix_outward.cpp
Source listing, fix_outward.cpp: implementation of the companion fix. post_create() creates the property/atom fixes holding the per-particle data (outward, outward_mag, outward_next, the neighbor counters, the cumulated indentation and its square, and the error/good bookkeeping counters used for testing). forward_all() and reverse_all() perform the forward and reverse MPI communication of every one of these fixes, updatePtrs() refreshes the raw pointers to the per-atom arrays (it has to be called on every processor, otherwise only the master node knows the vectors and the other processors cause segmentation faults), and initial_integrate(), declared in the header, is called at the very beginning of each time step.
Script template for the uniaxial deformation runs (LIGGGHTS input): granular atom style with fixed (f f f) boundaries, the packing read from an ASCII coordinates file, zero initial velocities, and the diameter, mass and seed of every particle set from the template variables. All the per-atom fixes required by the zcherry/outward model are declared (outward vector and magnitude, neighbor counters, cumulated indentation, error/good counters, cluster membership), together with the global properties krep, katt and the magnitude and angle thresholds, and pair_style gran model zcherry/outward (velocity on, nonlocal off, verb off). Control metrics are computed per atom (stress, contact and coordination numbers, kinetic and potential elastic energy) and reduced to macroscopic quantities (pressure, kinetic-to-potential energy ratio, relative indentation, dimensionless stiffness, inertial and energy numbers, strain from the bounding box). Two planar meshes are imported as top and bottom wall/gran boundaries and moved with an exponential displacement so that the prescribed true strain rate is constant; the number of forming steps, output frequencies and averaging windows are derived from the requested strain increments. The script performs an initial relaxation, writes the relaxed state, then applies the prescribed deformation while dumping the per-atom quantities and time-averaged thermo outputs for the computed number of forming steps.

B.4.2 Test Packing
A LAMMPS data file written via write_data (version LIGGGHTS-PUBLIC 3.x), giving the coordinates of a small test packing, followed by the STL facet description (facet, outer loop, vertex lists) of a planar mesh used as a wall.

B.4.4 Packing Script Template
Particle packing by random insertion and isotropic shrinking of the box: granular atom style with periodic boundaries, a cubic region and box created from the initial edge length, nve integration and a fix deform applying the same true shrink rate in the three directions. The material is a Hooke stiffness pair style with absolute damping (normal stiffness kn, normal damping gamman_abs, no friction and no tangential stiffness). Control metrics are printed and monitored during the shrinking: the oscillation period Tosc = 2π(m/kn)^0.5 and the damping time Tamo = 2m/gamman, the packing fraction, the average coordination number, the pressure, and the dimensionless energy, inertial and stiffness numbers.
Reference FEM simulations (Code_Aster): a Python preamble imports numpy, matplotlib and the Code_Aster Table utility, then defines the parameters (initial height H0, list of strain rates, strain increment between output times, global strain steps, Young's modulus, Poisson's ratio, bounds on the time step) and builds the time, strain and face-displacement vectors, with the displacement of the loaded face DISP = H0·(exp(E) - 1) so that the true strain rate is constant. The command file itself starts with DEBUT(PAR_LOT='OUI') (default safe behavior, results are certified, [U1.03.02:3]), reads the mesh with LIRE_MAILLAGE(FORMAT='MED'), and defines the two materials with DEFI_MATERIAU as elastic (ELAS with E and NU) plus a LEMAITRE viscoplastic law degenerated to a Norton law by setting UN_SUR_M = 0.0, with parameters N and UN_SUR_K for phase A and phase B ([U4.43.01], operator DEFI_MATERIAU).
Figure 1: Pairwise interactions. (a) Geometry and kinematics of a pair: relative position and velocities. Seed contact example. (b) Interaction reciprocal forces: F_A→B = -F_B→A. Pairwise forces, computed from the kinematics of the pair (Equation 1).
Figure 2: Definition of the piecewise linear normal interaction force. Dependency on the normal relative velocity ḣ. Slopes and distances to scale. Cases a, b and c from Equation 1.
Figure 3: Boundary conditions. (a) Elementary mesh/particle interaction. Crown contact example. (b) Compression test: top and bottom meshes with prescribed motion; free lateral sides.
Figure 4: Calibration of the strain rate sensitivity. Successive steps toward the master curve. (a) Influence of the natural period on strain rate sensitivity. Normalized flow stress at a strain of 0.3 for 5×10^3 particles versus the prescribed strain rate. (b) Normalization in the (t_0 √ε̇, σ/σ_low) space. Common trend for all natural periods. Sigmoidal fit, see Equation 4. (c) Master curve of strain rate sensitivity. Three flow regimes.
Figure 5: Spatial convergence for single material, from 30 to 1×10^6 particles. (a) Flow stress versus strain, using five distinct initial random packings for each packing size. (b) Relative error versus packing size, in regards to the converged simulation (1×10^6 particles packing).
Figure 6: Volume conservation for single material. Relative error on volume versus strain for 5×10^4 particles.
Figure 7: Single material packing. Natural period 0.8 s, strain rate 3.16×10^-4 s^-1, 5×10^4 particles.
Figure 8: Macroscopic equilibrium for single material. Relative error versus strain rate for 5×10^3 particles, strain 0.3. Effect of the natural period.
Figure 9: Single materials. Flow stress and strain rate sensitivity. (a) Flow stress versus strain. Effect of the strain rate. Phase B. (b) Norton law approximation for phases A and B, based on flow stress values at a strain of 0.3. Strain rate sensitivity of both phases in the range 1×10^-4 - 1×10^-3 s^-1.
Figure 10 :
10 Figure 10: Bi-material parallel configuration. Front view, transverse section. Strain rate 4.64×10 -4 s -1 .
Figure 11 :
11 Figure 11: Bi-material parallel configuration. Effect of the strain rate on the flow stress. Theoretical reference: mixture law from Equation 6. (a) Engineering flow stress versus strain. Fixed volume fraction: 0.5. (b) Linear trend of the engineering flow stress, at a strain of 0.3, with the volume fraction.
Figure 12 :
12 Figure 12: Bi-material series configuration. Volume fraction 0.5. Bottom phase: B (high sensitivity). Front view, transverse section.
Figure 13 :
13 Figure 13: Bi-material series configuration. Effect of the strain rate on the flow stress. Numerical reference: FEM simulations. (a) Engineering flow stress versus strain. Fixed volume fraction: 0.5. (b) Non-linear trend of the engineering flow stress, at a strain of 0.3, with the volume fraction.
Figure 14 :
14 Figure 14: Bi-material spherical inclusion (phase B) configuration. Front view, transverse section. Illustration of the height H and diameter D used to compute the shape factor S f (Equation7).
Figure 15 :
15 Figure 15: Bi-material spherical inclusion configuration. Effect of the strain rate on the flow stress. Numerical reference: FEM simulations. Unique volume fraction of inclusion (phase B): 0.2. (a) Engineering flow stress versus strain. (b) Engineering flow stress versus strain rate, at a strain of 0.3.
Figure 16 :
16 Figure 16: Bi-material spherical inclusion configuration. Effect of the strain rate on the shape factor of the inclusion. Numerical reference: FEM simulations. Unique volume fraction of inclusion (phase B): 0.2. (a) Shape factor versus strain. (b) Shape factor versus strain rate, at a strain of 0.3.
1
1 Number of particles in the inclusion (/)Relative error on shape factor (
Figure 17 :
17 Figure 17: Bi-material spherical inclusion configuration. Spatial convergence: effect of the number of particles discretizing the inclusion on the shape factor. (a) Shape factor versus strain. Typical results for three (out of a total five computed) random packings. (b) Error on the shape factor versus the number of particles used to discretize the inclusion. Reference for relative error: 1×10 5 particles used to discretize the inclusion. Minimum, maximum and average error for five random packings.
Figure 18 :
18 Figure 18: Discretization and compression of the full sample. 3D view of the inclusions only, the matrix is hidden. Vertical compression axis.
Figure 19 :
19 Figure 19: Zoom on a local configuration. Discretization and compression. Cross-section of the matrix and the inclusions. Vertical compression axis.
Main Introduced Notation
Z: tensor of order two
a: particle acceleration
α_en: angle threshold between n and e_n in self-contact detection; Section 9.5
α_ij: angle threshold between both n in self-contact detection; Section 9.5
d: particle diameter
D: density of particle; Section 8.6
∆t: time step
E: continuous medium Young's modulus
e_n: unit vector from a particle to its neighbor
E_k: kinetic energy; Section 8
E_p: potential elastic energy; Section 8
ε: uniaxial macroscopic strain
ε: strain
ε̇: uniaxial macroscopic strain rate
f: signed norm of the interaction force
f: interaction force
f_att: attractive pair force threshold; Section 9
h: distance between two particles
i: indentation of a pair
I: inertial number; Section 8.6
i_r: relative indentation of a pair (indentation/radius)
2 Experimental Background
2.1 Metallic Composites
2.2 Amorphous/Crystalline Composites
2.2.1 Amorphous Metallic Alloys
2.2.2 Amorphous/Crystalline Elaboration
2.3 Design of a Model Material
2.4 X-Ray Tomography
2.5 PhD Objective: Observed Physical Phenomena
3 Simulation Background
3.1 Limits and Contributions of Modeling Approaches
3.1.1 Limits in Model Design
3.1.2 Resolution Strategies
3.1.3 Numerical Resolution
3.1.3.1 Halley's Comet
3.1.3.2 Model Design and Computing Power
3.1.3.3 Computerized Numerical Approach
3.2 Description of the Studied Phenomena
3.3 PhD Objective: Requirements for a Modeling Tool
4 Key Modeling Choices
5 Lecture Grid: Lagrangian Methods
5.1 Overview
5.2 Continuous Constitutive Law
5.2.1 Variational/Global Resolution
5.2.1.1 Basis Function Built on a Mesh
5.2.1.1.1 [FEM] Conforming Mesh
5.2.1.1.2 [MPM] Lagrangian Markers
5.2.1.1.3 [XFEM] Extra Basis Function
5.2.1.2 [EFG] Basis Function Built on a Cloud of Nodes
5.2.2 [SPH] Direct/Local Resolution
5.3 Discrete Constitutive Law
5.3.1 [Lattice Model] Global Matrix Resolution
5.3.2 Particle-Wise Local Resolution
5.3.2.1 [NSCD] Backward Time Scheme
5.3.2.2 [DEM] Forward Time Scheme
6 Comparison of Selected Methods
6.1 FEM, EFG, SPH, DEM and Lattice Model
6.2 Ongoing Challenges

The general introduction of the manuscript briefly stated the engineering and scientific interest of the forming of composite metallic materials.
Figure 4.1: Lecture grid of the Lagrangian methods (FEM, EFG, SPH, DEM, lattice model), classified by the topology of the constitutive law (continuous, resolved from a PDE, versus discrete, resolved from reciprocal actions on material points), by the resolution strategy (variational/global matrix versus direct/local, particle-wise or neighbor-wise) and by the support of the basis functions (mesh of elements versus cloud of nodes).
Table 9.1: Algorithmic ingredients used in the developed models. Cross-reference to the sections describing the choices (Principle) and to the applicative parts (Application).

Algorithmic feature and choice | Principle | Application
Inelastic strain: purely geometric and instantaneous interaction law | 9.1 | IV, V
Prescribed strain: mobile rigid mesh/particle interaction law | 9.2 | IV, V
Material discretization: set particle state variable from 3D image | 9.3 | IV, V
Contact (distinct objects): particle state variable | 9.4 | /
Self-contact: non-local interaction law, pair and particle state variable | 9.5 | V
Table 10.1: Main software solutions.

Tool | Assigned task | Implementation language | License | Personal work
liggghts | DEM solver | C, C++ (MPI) | GPL | develop, document, use, bug report
ovito | DEM postprocess | C++, python | GPL | use, bug report
code_aster | FEM solver | fortran, python | GPL | use, bug report
salome | FEM preprocess | C++, python | LGPL | use
paraview | FEM postprocess | C++, python | BSD-3 | use
python | Swiss army knife | - | PSFL | use
Table 11.1: Numerical parameters for the two phases. Time step ∆t = 5×10⁻⁴ s. Radii r_crown = 0.75 mm, r_seed = 0.5 mm. Stiffness ratio k_rep/k_att = 10.

Phase | k_rep [µN·mm⁻¹] | m [g] | t_0 [s] | M [/] | K [MPa·s^M]
A | 6.23×10⁹ | 1.58×10⁴ | 1×10⁻² | 2.45×10⁻³ | 9.59×10¹
B | 2.65×10⁹ | 4.29×10⁷ | 8×10⁻¹ | 4.90×10⁻¹ | 5.02×10³
(k_rep, m and t_0 are the discrete parameters at the scale of the pairs; M and K describe the continuous macroscopic behavior.)
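As a quick cross-check of the continuous parameters listed above (an illustrative sketch only, not part of the original tool chain), the macroscopic behavior of each phase can be evaluated with the Norton law σ = K ε̇^M that the degenerate Lemaitre parameters reproduce:

# Illustrative evaluation of the Norton law sigma = K * eps_dot**M
# using the phase parameters of Table 11.1 (units: MPa, s).
phases = {
    "A": {"M": 2.45e-3, "K": 9.59e1},   # low strain rate sensitivity
    "B": {"M": 4.90e-1, "K": 5.02e3},   # high strain rate sensitivity
}

def flow_stress(phase, eps_dot):
    """Flow stress [MPa] of a phase at strain rate eps_dot [1/s]."""
    p = phases[phase]
    return p["K"] * eps_dot ** p["M"]

for name in phases:
    for eps_dot in (1e-4, 1e-3):
        print(f"phase {name}, strain rate {eps_dot:.0e} 1/s: "
              f"{flow_stress(name, eps_dot):.1f} MPa")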
Table B.1: Overview of the implemented features. Cross-reference to the source codes, when provided. Indication of the relative implementation effort.

Algorithmic feature | Tool | Source code in section | Implementation effort
liggghts:
Inelastic strain | Interaction law BILIN | B.2 | 1
Inelastic strain | Interaction law TRILIN | B.3 | 1
Prescribed strain | Mesh/particle interaction | B.2, B.3 | 0.1
Contact (distinct objects) | Particle state variable, interaction law | B.3 | 1
Self-contact | Interaction law, non-local behavior, coupling between pair and particle state variable | B.3 | 10
python, numpy:
Material discretization | Image/packing mask | Not provided | 1
Table B.2: Typical parameters used in Part V. The healing behavior is here deactivated by a numerically infinite healing time. Warning: a hard-coded parameter in the interaction law prohibits self-contact detection before the 500th step. Proceed with caution when modifying the number of initial relaxation steps.

Script parameter | Variable | Example value
@epsWin | Width of the averaging window (strain) | 5×10⁻³
@epsOut | Output period (strain) | 5×10⁻²
@diam | r_crown | 1.4
@seed | r_seed | 1
@mass | m | 3.606×10⁷
@krep | k_rep | 1.424×10⁹
@katt | k_att | 7.909×10⁸
@fatt | f_att | 1.186×10⁸
@wall | X_wall | 3
@magOut | N_mag | 2.3
@cosIJ | cos α_ij | 0.4226
@cosEN | cos α_en | 0.1736
@heal | Healing time | 10¹²

Examples of the formats of the packing and the mesh files are given respectively in Appendices B.4.2 and B.4.3. A template for the generation of the packings is given in Appendix B.4.4.

Excerpt of the corresponding fix routine (fix_outward):
  mask |= INITIAL_INTEGRATE;
  mask |= POST_FORCE;
  return mask;
}
void FixOutward::initial_integrate(int vflag) {
  updatePtrs();
  int nlocal = atom->nlocal;
  for (int i = 0; i < nlocal; i++) {
    // Initialize next data
    outward_next[i][0] = 0;
Table 1: Numerical parameters for the two phases. Time step ∆t = 5×10⁻⁴ s. Radii R_crown = 0.75 mm, R_seed = 0.5 mm. Stiffness ratio k_rep/k_att = 10.

Phase | k_rep [µN·mm⁻¹] | m [g] | t_0 [s] | M [/] | K [MPa·s^M]
A | 6.23×10⁹ | 1.58×10⁴ | 1×10⁻² | 2.45×10⁻³ | 9.59×10¹
B | 2.65×10⁹ | 4.29×10⁷ | 8×10⁻¹ | 4.90×10⁻¹ | 5.02×10³
(k_rep, m and t_0 are the discrete parameters at the scale of the pairs; M and K describe the continuous macroscopic behavior.)
It can be arbitrarily defined as the temperature at which the viscosity of the material is smaller than 10 6 MPa • s [226, p.17].
The flow stress is proportional to the strain rate, thus M = 1 in Equation11.1.
Phase contrast, where the phase variation of the beam is measured instead of its intensity, will not be examined.
This short imaging time minimizes the deformation during the scan. At a typical strain rate of 2.5×10⁻⁴ s⁻¹, in 7 s, the variation of strain is lower than 2×10⁻³. The geometrical variation is thus limited between the beginning and the end of the scan.
"We dissect nature along lines laid down by our native languages. The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds-and this means largely by the linguistic systems in our minds. We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way-an agreement that holds throughout our speech community and is codified in the patterns of our language. The agreement is, of course, an implicit and unstated one, but its terms are absolutely obligatory; we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees. [...] it means that no individual is free to describe nature with absolute impartiality but is constrained to certain modes of interpretation even while he thinks himself most free."[241, p.212-214].
"There will always be things we wish to say in our programs that in all known languages can only be said poorly."[168, §26].
"We live an epoch in which our inner lives are dominated by the discursive mind. This fraction of the mind divides, sections off, labels -it packages the world and wraps it up as 'understood'. It is the machine in us that reduces the mysterious object which sways and undulates into simply 'a tree'. Since this part of the mind has the upper hand in our inner formation, as we age, [...] we experience more and more generally, no longer perceiving 'things' directly, [...] but rather as signs in a catalogue already familiar to us. The 'unknown', thus narrowed and petrified, is turned into the 'known'. A filter stands between the individual and life."[186, p.5].
"Une chose m'avait déjà frappé [...]: c'est la grossièreté [...] du mode de raisonnement mathématique quand on le confronte avec les phénomènes de la vie, les phénomènes naturels."[START_REF] Grothendieck | Allons-nous continuer la recherche scientifique?[END_REF].
"The behavior of a given material can be represented by a schematic model only in relation to the envisaged usage and the desired precision of the predictions."[126, p.71].
For example the descriptive geometry[142, p.20-25] and its application to the drawing and the development of surfaces in boilermaking[START_REF] Lelong | Le traçage en structures métalliques[END_REF].
Similar goals have been investigated with numerical models, including at a contemporary period[7].
The prediction was 1758/04/15 ± 1 month, the actual perihelion occurred on the 1758/03/13, after 76 years of revolution. Even though, Jean le Rond d'Alembert considered that their numerical method was not of any help in understanding Halley's Comet, they respected the predicted order of magnitude of the time precision of 0.1 %, while periods range from 74 to 79 years, more than 6 % of variation.
The approximation of π to 707 decimals -manually computed between 1853 and 1873 -was only found in 1946 to be erroneous after the 527th decimal place[START_REF] Ferguson | Value of π[END_REF]. The automation of the computations and the use of machines considerably reduce such errors, although flaws directly stemming from the hardware can still be found on modern computers. See for example the Pentium FDIV bug discovered in 1994, leading to incorrect results in floating-point division at the 5th decimal place[START_REF] Nicely | Pentium FDIV flaw[END_REF], and the Intel Skylake processor bug, recently found, where hyper-threading activation leads to segmentation faults[START_REF] Leroy | How I found a bug in Intel Skylake processors[END_REF].
Including printed tables of random numbers [205, p.625-629], strange-looking collections in a modern context.
Sunway TaihuLight uses SW26010 chips, shown on Figure3.5, operating at moderate frequency by modern standards.
The qualification terminology, introduced in[START_REF] Schlesinger | Terminology for model credibility[END_REF], seems little used in the literature.
"There are two ways to write error-free programs; only the third one works."[168, §40]. In the context of numerical modeling of physical phenomena, the likeliness of bugs in elementary processing units (see footnote 13 on page 43) can be considered negligible with respect to programming errors: "the chips are one of the least likely sources of error; user input, application software, system software, and other system hardware are much more likely to cause errors."[159, Q11].
A standard library as Intel's MKL is only reproducible under drastic conditions[185, p.7].
Infinitesimal transformation theories consider that the geometrical variations between the deformed and reference configurations are negligible with respect to the studied length scale.
For example to model history dependent phenomena, which is not a major focus of our study.
In a numerical context, the effective resolution of a continuous description of a material necessarily relies on a discrete re-formulation. The distinction is drawn here at the level of the conceptual model. Algorithmically similar methods may stem from discrete and continuous topologies of the constitutive law (see also Section 6.1).57
However, Eulerian descriptions of interface motion, within Lagrangian frameworks, are now classical and efficient extensions of Lagrangian methods. They are briefly described in Section 5.2.1.
An updated Lagrangian formulation takes the current -or at least a recently computed -state as reference; a total Lagrangian formulation is always written with respect to the initial state[20, p.335].
Meshless methods, or meshfree methods, regroup numerical approaches as global resolution on cloud of nodes (Sections 5.2.1.2), direct and local resolution on independent points 5.2.2, over-impression of Lagrangian markers in a continuous framework 5.2.1.1.2 and topologically discrete constitutive law 5.3.
Particle methods can include practically any modeling approach handling data at nodes or material points.
In the literature, the claimed filiation of the introduced methods sometimes focus on the modeled phenomena. For example, the work on elasticity in continuous media of Hrennikoff[START_REF] Hrennikoff | Solution of problems of elasticity by the framework method[END_REF] is frequently
In our perspective, state-based peridynamics (statePD) is distinct from bond-based peridynamics (bondPD). The statePD, described in this section, locally integrates a continuous constitutive law to solve a PDE. The bondPD, presented in Section 5.3, relies on a discrete constitutive law, computing interactions between sets of objects.
The natural element method (NEM) was designed partially to overcome such problems: nodes are not interacting within a fixed radius as in EFG, but with neighbors defined by Voronoi tessellation. This approach could be considered as a meshing procedure.
Even without taking into account numerical round-off errors, stemming from floating-point arithmetic[START_REF] Goldberg | What every computer scientist should know about floating-point arithmetic[END_REF].
As an interesting side note, preliminary hints to bound the error on the dynamics of a discrete system representing a continuum have been published[154, p.1533-1536], under restrictive assumptions like infinitesimal strain.
For example, the DEM solver liggghts is a fork from the MD solver lammps, mainly adding new features and reusing the main structure and algorithms.
Refer to footnote 14 on page 69 for disambiguation between bond-based peridynamics (bondPD) and state-based peridynamics (statePD).
The system is solved by minimizing its energy rather than by direct resolution.
A major distinction between MD and DEM is cultural: the DEM community cites the reference paper of Cundall[START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF]. See also Section 5.3.2.2.
For more possibilities, refer for example to[64, p.143].
Classifications are often used in the DEM to subdivide the interaction law in independent contributions, considered to represent distinct physical phenomena: normal and tangential forces; rolling and twisting torques; repulsive and cohesive effects; elastic and damping effects.
Sometimes called the critical time step.
In the corresponding normal mode, adjacent masses move in opposite directions.
The computing effort required to simulate a time step depends on the number of interacting neighbors. As very rough order of magnitude, with a processor frequency of 2.3 GHz on an Intel Xeon E5520, the DEM code liggghts very roughly requires 10 -6 cpu second per particle and per time step, for less than a dozen of effective neighbors per particle.
7 Such tuning of physical parameters are used in other numerical contexts, as FEM dynamic simulations of quasistatic processes. The increase of the prescribed velocity and the density of the material are common in deep drawing models[43, p.469].
"Adapting old programs to fit new machines usually means adapting new machines to behave like old ones."[168, §120]
The branch vector l i is the geometrical vector from the center of the particle to the center of its neighbor i.
In DEM, the algorithms for cohesive behavior are often based on explicit lists of pairs that have cohesive interactions even if they are not overlapping. Although identical numerical results could be obtained using such algorithms, they conceptually define preferential neighbors, which will be avoided in our work.
Computed for a pair {i, j} as the projection of the difference of the velocities v i -v j on the unit vector en pointing from a particle to the other.
Although it would be bold to pretend to apply Saint-Venant's principle[START_REF] Richard Von | On Saint Venant's principle[END_REF].
Isotropic in the case of cubic domains.
The effective density for a given interaction law varies by a few percent with the strain rate, see for example Figure11.8 on page 128.
The branch vector l is the geometrical vector from the center of a particle to the center of a neighbor.
Reproducibility in time and between machines and users. Although, in a strict sense, numerical repeatability seems illusory[START_REF] Revol | Numerical reproducibility and parallel computations: Issues for interval algorithms[END_REF].
May Richard Stallman have mercy for such a blasphemy.111
Proprietary code commercialized by DEM Solutions Ltd.
Proprietary code commercialized by Itasca.
Proprietary code commercialized by ESSS (engineering simulation and scientific software).
Multi-physics coupling, first of which with fluid dynamics, is also a strong emphasis, but is outside the scope of our work.
This was a choice of the developers: "the C++ routines that do the serious computations in LAMMPS are written in a simple C-like style, using data structures that are nearly equivalent to Fortran arrays. This was done to try and avoid any performance hits."[START_REF][END_REF] FAQ 2.6].
In files respectively named normal model *.h, compute *.cpp and fix *.cpp in the code conventions.
Bug fixes have been provided within hours.
In comparison, appealing libraries like for example deal.II[17] require much more development effort to set-up a specific case.
The quality of the documentation is only moderate when the automatic translation from French to English is used, which often alters the readability of the document. The same remark applies to error messages. Along the same line, command and variable names are only meaningful in French and the release notes do not seem to be translated.
The mechanical behavior described by the MFront tool can be used.
A feature request was treated within days.
The salome-meca platform includes code aster as the solving engine and salome for numerous preand post-processing tool.
To some extent, although they are poorly integrated by explicit schemes, constant attractive and repulsive forces behave well enough to be effective.[START_REF] Gustav | Die Fundamentalgleichungen der Theorie der Elasticität fester Körper, hergeleitet aus der Betrachtung eines Systems von Punkten, welche durch elatische Streben verbunden sind[END_REF]
The numerical errors are not negligible: the position of individual particles does notably change with the f /m ratio.
Refer to Equation 2.1 on page 24 for the full tensorial form.
As a side note, although the analysis was not pushed further, our proposal exhibits the role of t 0 2 to compare the behavior of distinct sets of parameters. This parameter seems to have a driving role in the global equilibrium criteria Q described in Section 8.6.
The torque balance is of little interest in this test case, as resulting torques on the meshes are only numerical noise.
Although the behavior is not considered geometrically converged for packings of 500 particles (refer to Section 11.3.3), the relative error due to the rough discretization is limited. A partial repetition of the test with bigger packings led to similar results. The use of little packings allows a wider range of time steps to be studied.
"It must be considered that a numerical method is in itself a model."[111, p.65]
The envelope of the radius of the particle implies an explicit choice of this radius. Two are defined in our model, none of which has intrinsic physical sense.
See also Section 12.1 for a discussion regarding the low strain rate sensitivity phase.
The reproducibility of the test can be questioned, and the fully crystallized mechanical behavior has not been tested. For some samples, the increase of the flow stress is more progressive and starts earlier.
This point will be discussed in Section 14.2. Refer also to Figure14.5b.
This effect will be examined in the end of the section.
Refer also to the effect of the parameter X wall used with the planar meshes (Section 14.2).
ERG materials & aerospace is a manufacturer of open cell metallic foam: www.ergaerospace.com.
Qualitatively, the configuration is similar to the honeycomb loading shown Figure1cin[8, p.2854].
Chosen small probing radius: 0.9 mm. Chosen large probing radius: 50 mm for the initial state and 30 mm in the strain range 0.5 -2.5. At a strain of 3, this metric measures a density of 100 %.
The experimental sample from which the geometry is taken was indeed designed for tensile loads.
We investigate here the overall buckling of the whole structure, not the local buckling of the arms of the foam[42, p.3399].
To balance the computing load evenly between the processors, both meshes move. Their absolute velocity is thus half Ḣ. The result of the computation must be independent of such numerical tricks.
A classical DEM procedure is to linearly remap the position of the particles when the domain is deformed. This procedure has been avoided in our work to let the system freely rearrange, to capture localization stemming from the local behavior of the material.
The sign of the trace of the stress tensor is applied to the equivalent Mises stress, previously defined in Equation2.2 on page 24.
Refer for example to the discussion of the choice of the time step in the DEM in Section 8.3.3.
This method is much cheaper than estimating a local volume around the particle from the position of its neighbor. In addition, the actual instantaneous distance between two particles is meaningless in our objective to model continuous media: we are only interested in the forces carried by a particle.
Appendix A
Local Field
The first qualitative comparison of the discrete element method (DEM) and the finite element method (FEM) local stress fields is promising. Further statistical analysis is required to draw quantitative conclusions.
The local stress field, reconstructed at the level of the particles from Equation 8.12 on page 96, is not meaningful if taken instantaneously and particle-wise. As for the macroscopic stress, the stress is temporally averaged over a sliding window, here using a width of 5×10⁻³ in strain. This procedure has to be executed component- and particle-wise, at run time. The storage of the particle-wise data at all time steps is too impractical for this procedure to be post-treated.
Even temporally averaged, the particle-wise stress is still too noisy and rough to be interpreted as representing a stress field in a continuous medium. It is then spatially averaged in post-treatment, by two means: Implicitly: The estimation of the volume occupied by a particle (necessary in Equation 8. 12) is here executed in a global fashion 1 . The global relaxed volume of the packing is divided by the number of particles.
Explicitly: The stress at a particle is component-wise averaged with the values at the neighboring particles within a cutoff radius, using ovito.
The local stress field component σ_zz is illustrated independently for both phases. Without applying any neighbor averaging, a common trend is the presence of a thin tension zone at the periphery of the sample. The presence of the mesh is also a source of perturbation of the field. As expected in a DEM approach, the forces are carried by preferential chains and the stress is not evenly distributed. While the packing deforms, the force chains change and reorganize.
The neighbor average operates with a shorter cutoff for the phase with the slowest reactivity, B. For both phases, the averaged local stresses tend to be overestimated compared to the macroscopic stress. Depending on the required precision, an averaging length can be chosen. A length of 1.5 r_seed, corresponding to the average over the neighbors detected in the computation, is still very rough. A length between 2 and 3 r_seed, thus allowing the use of second neighbors, is still numerically cheap, while smoothing most of the roughest gradients. As an order of magnitude, using an averaging radius of 3 r_seed, the stress in the inclusion is still lower than in the matrix at 10⁻³ s⁻¹. The macroscopic "hourglass" shape, clearly displayed in FEM, will only be captured at higher strain rates.
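A minimal sketch of the explicit neighbor averaging is given below; it is an illustration only (the actual post-treatment was performed with ovito) and assumes the particle positions and the temporally averaged stresses are available as NumPy arrays:

import numpy as np

def neighbor_average(positions, stress_zz, cutoff):
    """Spatially average a per-particle stress component over all neighbors
    lying within `cutoff` of each particle (the particle itself included).
    positions: (N, 3) array; stress_zz: (N,) array. O(N^2), sketch only."""
    n = len(positions)
    averaged = np.empty(n)
    for i in range(n):
        d2 = np.sum((positions - positions[i])**2, axis=1)
        mask = d2 <= cutoff**2
        averaged[i] = stress_zz[mask].mean()
    return averaged

# Example: cutoff of 3 seed radii (r_seed = 0.5 mm)
# averaged_zz = neighbor_average(pos, s_zz, cutoff=3 * 0.5)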
B.1 Implementation Choices and Issues
Along with a short discussion of some implementation issues, a brief overview of the implemented features is proposed in Table B.1.
The implementation of custom interaction laws (BILIN and TRILIN ), with arbitrary parameters, is rather straightforward in liggghts: an independent file (conventionally named normal *.h) is simply added to the sources.
From the implementation point of view, the mesh/particle interaction are out-of-thebox proposed in liggghts. The efforts should have been limited to the definition of the appropriate interaction behavior in the input scripts. Additional work was necessary to by-pass various errors, including segmentation faults, arising in parallel computation of geometrically large packings. The two operational work-around were:
• The definition of a domain substantially larger than the packing itself.
• The use of fixed boundaries (with the command boundary f ) and not adaptive boundaries (boundary s or m).
The handling of pair-wise state variables (for example the time since the interaction started for the healing behavior in self-contact) is handled directly within the interaction laws. Particle-wise state variables (for example r seed or n) are introduced in interaction laws using fix routines, defined in separate fix files. This is necessary for the proper communication of the variables between processors when the message passing interface (MPI) parallelization is used.
Even though this procedure seemed correct for most cases, an MPI bug was spotted for the self-contact detection in the interaction law TRILIN and has not been corrected yet. Everything behaves as if the n vectors were not properly exchanged between processors.

/* --------------------------------------------------------------------
   file: fix_outward.h    type: liggghts 3.5.0 fix
   -------------------------------------------------------------------- */
    /* ... */
    // Initialize next data
    outward_next[i][0] = 0;
    outward_next[i][1] = 0;
    outward_next[i][2] = 0;
    number_next[i] = 0;
  }
}
B.4 Typical Input Script
A typical template input script (Appendix B.4.1) is proposed, using the interaction law TRILIN and the self-contact algorithm, defined in Section B.3. The parameters to be replaced in the template all start with the symbol @; typical values are given in Table B.2. The paths of the output directory, the initial packing and the boundary condition mesh must be defined respectively in @directory, @file_coord and @mesh.
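As an illustration of how such a template can be instantiated (this helper is not part of the distributed scripts; file names are hypothetical), a plain string substitution is sufficient, replacing longer parameter names first so that, for example, @epsWin is not clobbered by @eps:

# Hypothetical helper: substitute the @-parameters of the template input
# script with numerical values before running the solver.
def instantiate_template(template_path, output_path, values):
    """values: dict mapping parameter names (e.g. '@deps') to values."""
    with open(template_path) as f:
        text = f.read()
    # Replace longer names first so '@eps' does not clobber '@epsWin'.
    for key in sorted(values, key=len, reverse=True):
        text = text.replace(key, str(values[key]))
    with open(output_path, "w") as f:
        f.write(text)

# Example usage with hypothetical file names:
# instantiate_template("input.template", "input.liggghts",
#                      {"@deps": -1e-3, "@strain": -1.0, "@timeStep": 0.1,
#                       "@directory": "./run", "@heal": 1e12})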
Script parameter | Variable | Example value
@deps | ε̇ | ±10⁻³
@strain | ε | ±1.0
@timeStep | ∆t | 0.1
@nRelax | Number of initial relaxation steps | -

Appendix D
IJMS Article
The article "Modeling Large Viscoplastic Strain in Multi-Material with the Discrete Element Method" was submitted to the International Journal of Mechanical Sciences. Minor revisions were requested for publication and the amended version is proposed here. We are currently waiting for the final decision. The article presents the main results from Part IV. TRILIN custom interaction law; defined in Section 9.1 and Equation 9.2; applied in Part V.
ALE arbitrary Lagrangian Eulerian; Chapter 4 [START_REF] Donea | Arbitrary Lagrangian-Eulerian methods[END_REF].
bondPD bond-based peridynamics; Section 5.3.2.2 [START_REF] Silling | A meshfree method based on the peridynamic model of solid mechanics[END_REF].
BSD-3 3-clause Berkeley Software Distribution (BSD) license.
C complied programming language.
C++ complied programming language; isocpp.org.
CA cellular automata; Chapter 4 [START_REF] Pöschel | Computational granular dynamics: models and algorithms[END_REF]Chap. 6].
CD contact dynamics; Section 5.3.2.1.
code aster code d'analyse des structures et thermomécanique pour des études et des recherches; FEM solver; released under GPL [START_REF]Code aster open source -general FEA software[END_REF].
contact mechanical interaction by contact of interfaces of distinct physical objects.
CPU central processing unit.
CZM cohesive zone model; Section 5.2.1.1.1 [START_REF] Elices | The cohesive zone model: advantages, limitations and challenges[END_REF].
DEM discrete element method; Section 8 [START_REF] Cundall | A discrete numerical model for granular assemblies[END_REF]; not to be mistaken for the difEM, sharing the acronym DEM in the literature.
difEM diffuse element method; Section 5.2.1.2 [START_REF] Nayroles | Generalizing the finite element method: diffuse approximation and diffuse elements[END_REF]; referred to as the DEM in the literature.
DSD/SST deforming spatial domain/stabilized space time; Chapter 4 [START_REF] Tayfun | A new strategy for finite element computations involving moving boundaries and interfaces-the deformingspatial-domain/space-time procedure: I. the concept and the preliminary tests[END_REF].
EDM event-driven method; Section 5.3.2.1 [START_REF] Pöschel | Computational granular dynamics: models and algorithms[END_REF]Chap. 3].
EFG element-free Galerkin method; Section 5.2.1.2 [23].
ESRF European synchrotron radiation facility; Section 2.4.
FEM finite element method; Section 5.2.1.1.1 [START_REF] Dhatt | Méthode des éléments finis[END_REF]. |
01761764 | en | [
"spi.meca.mema"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01761764/file/LEM3_JNE_2018_MERAGHNI%201.pdf | Pascal Pomarède
Fodil Meraghni
Laurent Peltier
Stéphane Delalande
Nico F Declercq
Damage Evaluation in Woven Glass Reinforced Polyamide 6.6/6 Composites Using Ultrasound Phase-Shift Analysis and X-ray Tomography
Keywords: Composite materials, Ultrasonics, X-ray tomography, Damage Indicator
The paper proposes a new experimental methodology, based on ultrasonic measurements, that aims at evaluating the anisotropic damage in woven semi-crystalline polymer composites through new damage indicators. Due to their microstructure, woven composite materials are characterized by an anisotropic evolution of damage induced by different damage mechanisms occurring at the micro or mesoscopic scales. In this work, these damage modes in polyamide 6.6/6-woven glass fiber reinforced composites have been investigated qualitatively and quantitatively by X-ray micro-computed tomography (mCT) analysis on composite samples cut according to two orientations with respect to the mold flow direction. Composite samples are initially damaged at different levels during preliminary interrupted tensile tests. Ultrasonic investigations using C-scan imaging have been carried out without yielding significant results. Consequently, an ultrasonic method for stiffness constants estimation based on the bulk and guided wave velocity measurements is applied. Two damage indicators are then proposed. The first consists in calculating the Frobenius norm of the obtained stiffness matrix. The second is computed using the phase shift between two ultrasonic signals respectively measured on the tested samples and an undamaged reference sample. Both X-ray mCT and ultrasonic investigations show a higher damage evolution with respect to the applied stress for the samples oriented at 45 • from the warp direction compared to the samples in the 0 • configuration. The evolution of the second ultrasonic damage indicator exhibits a good correlation with the void volume fraction evolution estimated by mCT as well as with the damage calculated using the measured elastic modulus reduction. The merit of this research is of importance for the automotive industry.
Introduction
Woven reinforced thermoplastic composite materials are gaining interest in automotive applications [START_REF] Volgers | New constitutive model for woven thermoplastic composite materials[END_REF][START_REF] Artero-Guerrero | Experimental study of the impactor mass effect on the low velocity impact of carbon/epoxy woven laminates[END_REF][START_REF] Malpot | Effect of relative humidity on mechanical properties of a woven thermoplastic composite for automotive application[END_REF], owing to the significant weight reduction they allow in comparison with metallic solutions. In addition, compared to a unidirectional reinforcement, a woven fabric provides a better balance and uniformity of the mechanical properties. Woven composites also have a better ability to withstand buckling and impact [START_REF] Atas | An overall view on impact response of woven fabric composite plates[END_REF]. The latter characteristic is of critical interest for the automotive industry [START_REF] Golzar | Prototype fabrication of a composite automobile body based on integrated structure[END_REF]; indeed, it is essential to ensure a maximal resistance of structural parts in the case of impact, in particular for the driver's safety. To assess the in-service strength of an automotive component made of woven fabric composite and subjected to damage accumulation, an efficient methodology for evaluating the damage is mandatory. This assessment should guide decisions to replace, repair or keep a composite component based on its damage tolerance. Non-destructive methods based on ultrasonic investigation naturally emerge as promising damage evaluation techniques thanks to their practicality, efficiency, diversity and their applicability to composite parts in service. Nevertheless, ultrasonic techniques require appropriate signal processing to extract damage indicators that permit estimation of the residual stiffness, which is related to the component integrity.
In order to nondestructively assess all the elastic constants of a material, Markham [START_REF] Markham | Measurement of the elastic constants of fibre composites by ultrasonics[END_REF] studied a method based on immersion ultrasonic measurements of bulk wave velocities and on the well-known Christoffel equation. He applied the method to an undamaged transversely isotropic composite material to determine the stiffness components. The method was then successfully used to measure the stiffness reduction of composites submitted to different levels of impact [START_REF] Marguères | Comparison of stiffness measurements and damage investigation techniques for a fatigued and post-impact fatigued GFRP composite obtained by RTM process[END_REF][START_REF] Marguères | Damage induced anisotropy and stiffness reduction evaluation in composite materials using ultrasonic wave transmission[END_REF] and tensile loading [START_REF] Hufenbach | Ultrasonic determination of anisotropic damage in fibre and textile reinforced composite materials[END_REF][START_REF] Baste | Induced anisotropy and crack systems orientations of a ceramic matrix composite under off-principal axis loading[END_REF]. One advantage of this method is that the tested material is investigated in all directions (i.e. sample orientations); it can therefore provide all the components of the stiffness tensor and, accordingly, a complete anisotropic damage evaluation of the tested sample. The method was also extended to guided waves in several studies because of their established advantages [START_REF] Ong | Determination of the elastic properties of woven composite panels for Lamb wave studies[END_REF][START_REF] Moreno | Phase velocity method for guided wave measurements in composite plates[END_REF]. In industry, one normally does not require a full understanding of all the physical phenomena involved, but rather seeks practical and workable techniques that are reliable. To our knowledge, no damage indicator for the global damage state that is simple to interpret and usable in industry has been reported in the literature. Such a task is carried out here, where an experimental methodology is developed for identifying proper damage indicators.
An alternative method, somewhat similar to the one proposed by Markham, is the polar-scan method [START_REF] Satyanarayan | Ultrasonic polar scan imaging of damaged fiber reinforced composites[END_REF][START_REF] Kersemans | Detection and localization of delaminations in thin carbon fiber reinforced composites with the ultrasonic polar scan[END_REF]. As indicated by its name, a specific point of a sample is scanned for all accessible angles of incidence. For each polar angle, the signal amplitude is recorded and represented in a polar plot. Information such as damage level, stiffness, fiber misalignment, etc. can be extracted from this polar figure. However, this technique requires a more complex apparatus than the one proposed by Markham because of the unavoidable double angular rotations, which limits its application on site (in service). Due to this limitation, the present approach measures the elastic constants using the ultrasonic method based on wave velocity measurements described in the preceding paragraph. The aim of this study is then to propose an ultrasonic methodology to detect and quantify damage growth, even when the cracks that propagate inside the sample cannot be detected by classical ultrasonic imaging techniques. As mentioned, this method can provide the whole stiffness tensor of a sample. However, it is more practical to obtain a single value that efficiently transcribes the global damage state of a tested sample. Two different experimental damage indicators are proposed for this purpose: the first is the norm of the stiffness tensor and the second is the phase angle shift between wave signals propagated through a damaged and an undamaged sample. It is worth noting that the time required to analyze the experimental results in order to obtain the first indicator can be quite long.
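A minimal numerical sketch of the two indicators is given below. It only illustrates the idea and is not the processing chain used in this work: the first function compares the Frobenius norm of the damaged 6×6 stiffness matrix with that of an undamaged reference, and the second estimates a phase shift between two digitized waveforms from the phase of their cross-spectrum at the dominant frequency of the reference signal (one possible estimator among others).

import numpy as np

def stiffness_norm_indicator(C_damaged, C_reference):
    """Relative change of the Frobenius norm of the 6x6 stiffness matrix."""
    return 1.0 - np.linalg.norm(C_damaged) / np.linalg.norm(C_reference)

def phase_shift_indicator(sig_damaged, sig_reference):
    """Phase difference (radians) between two waveforms of equal length,
    taken at the dominant frequency of the reference signal."""
    n = len(sig_reference)
    S_ref = np.fft.rfft(sig_reference * np.hanning(n))
    S_dam = np.fft.rfft(sig_damaged * np.hanning(n))
    k = np.argmax(np.abs(S_ref[1:])) + 1          # dominant frequency bin
    return np.angle(S_dam[k] * np.conj(S_ref[k]))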
The woven reinforced composite material exhibits a specific damage scheme that is highly dependent on both the loading direction and the fiber orientation [START_REF] Karayaka | Deformation and failure behavior of woven composite laminates[END_REF][START_REF] Pandita | Tensile fatigue behaviour of glass plain-weave fabric composites in on-and off-axis directions[END_REF]. In this work, the damage characterization and its evolution were investigated on samples made of co-polyamide 6.6/6 based composites reinforced with woven glass fibers and submitted to tension tests interrupted at different stress levels. Two configurations of material samples were tested: (i) samples oriented at 0° with respect to the warp direction and (ii) samples oriented at 45° from the warp direction, the warp direction corresponding to the mold flow direction. A first damage estimation for the two sample configurations was carried out using the elastic modulus reduction. Various nondestructive tests were performed on the samples after loading, such as ultrasonic C-scan imaging, which was not able to capture the damage zone in all tested samples. To address this aspect, the damage mechanisms induced by the different tensile tests were investigated by X-ray micro-computed tomography (mCT). This provided a qualitative description of the main damage mechanisms and a quantitative estimation of the void volume fraction evolution. The latter was compared to the evolution of the damage indicators obtained with the ultrasonic technique.
The paper is organized as follows: in Sect. 2 the investigated composite material and the tensile tests procedure are presented. Ultrasonic C-scan imaging results obtained on samples after interrupted tensile tests are presented and discussed. X-ray mCT investigation procedure and results, on samples in both 0 • and 45 • configurations are discussed in Sect. 3. Ultrasonic analysis and the estimation of the damage evolution are presented in the 4th section. The evolution of the stiffness components function of the applied stress level is also discussed for both 0 • and 45 • samples. In Sect. 5, the two proposed damage indicators are calculated using the ultrasonic results from the previous section. Both ultrasonic damaged indicators are compared with the estimation of damage calculated using the elastic modulus reduction (presented in Sect. 2) and with the void volume fraction evolution (discussed in Sect. 3) leading to concluding remarks.
Material Description and Preliminary Tests
Material
The composite material considered for this study is referenced as VizilonTM SB63G1-T1.5-S3 and is manufactured by DuPont. The studied composite is produced by a thermocompression molding process and consists of a 2/2 twill weave glass fabric reinforcing a co-polyamide 6.6/6 matrix, with three plies, 1.53 mm thick in total. The overall 3D woven fabric of the studied composite material is shown in Fig. 1. The two different microstructures can be observed in Fig. 3. All the samples are stored in a humidity chamber in order to have the same initial conditions in terms of relative humidity. The latter is set at a level of 50% for all tested composite materials. Initial tensile tests are performed on the material until failure for both the 0° and 45° sample configurations. The experimental stress/strain curves are normalized with respect to the tensile ultimate strength of the 0° sample configuration. The elastic moduli are calculated after the unloading according to a common damage measurement procedure [START_REF] Ladeveze | Damage modelling of the elementary ply for laminated composites[END_REF]. A representation of the elastic moduli calculation is depicted in Fig. 5a, b for the two tested sample configurations.
The samples are all loaded in tension at room temperature and at a constant strain rate of 10⁻⁴ s⁻¹ to avoid the influence of viscosity. Tensile tests are performed in compliance with the standard ISO 5893 on a machine Z 050 designed by Zwick Roell. The strains are measured using a "clip-on" extensometer manufactured by Epsilon Technology Corp.
As shown in Fig. 5, a noticeable difference is observed between the overall stress-strain responses of the two configuration samples, namely 0° and 45°. In fact, for the samples tested at 0°, the behavior in tension is mostly linear and brittle since it is governed by the fiber breakage, whereas the behavior of the samples tested at 45° exhibits a nonlinear ductile response mostly governed by the matrix rheology. This is particularly clear for the evolution of the damage estimated from the elastic modulus reduction when the applied stress increases, as illustrated in Fig. 6. For the samples oriented at 45° from the warp direction, the damage D reaches a value of 0.53 close to the final failure. However, for the 0° case, it only goes up to 0.1. The values of damage for every tested sample are summarized in Table 1. This damage estimation is employed as a first reference to validate the proposed ultrasonic damage indicators detailed in further sections. Indeed, as discussed, Fig. 6 indicates a higher amount of damage induced by the loading for the 45° configuration, which could imply different damage mechanisms with different associated typical scales. In fact, to investigate the sensitivity of the ultrasonic techniques, it is necessary to consider different scales of induced damage. Hence, the typical scale of those mechanisms for the two sample configurations requires further study.
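For reference, the damage values quoted above are assumed to follow the usual stiffness-reduction definition of the common procedure cited above, with E^0 the initial elastic modulus and Ẽ the modulus measured after unloading from a given stress level:

\[
D \;=\; 1 - \frac{\tilde{E}}{E^{0}}, \qquad 0 \le D < 1 .
\]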
Ultrasonic C-Scan in Transmission
Ultrasonic imaging was performed on all tested samples to investigate the overall damage state of the samples cut at 0° and 45°. Classical ultrasonic 2D C-scans in transmission were performed using a 5 MHz frequency transducer; the maximal amplitude of the ultrasonic signal was measured and recorded at each point of the scanned area with a spatial resolution of 0.5 mm and is plotted in Fig. 7 for both considered sample configurations. It is worth noting that the values reported in Fig. 7 are the actual values of the ultrasound signal amplitude, without any averaging process over the scanned area of the tested specimen. As illustrated in Fig. 7a, related to the samples tested at 0° from the warp direction, the attenuation of the ultrasound signal amplitude remains low and a very slight difference, if any, was noticed when increasing the applied stress level from 0 to 92.6% σ_UTS0°.

Fig. 6 Evolution of the damage for the 0° and 45° sample configurations. The damage is estimated from the Young's modulus reduction for different applied stress levels.

For the samples at 45°, Fig. 7b shows the evolution of amplitude attenuation (C-scan images) between the unloaded state and the ultimate state level prior to failure (92.6% σ_UTS45°). Two damage zones were actually observed macroscopically and the related high attenuations in the signals were associated with fiber buckling and local delamination. In addition, the low signal attenuations observed at the border of the samples were clearly induced by a well-known border effect and were not caused by the mechanically induced damage.

As a partial result, one can conclude that the classical ultrasonic C-scan imaging, as applied in this study, is suited to detecting the macroscopic damage in the tested composite samples, but appears neither reliable nor accurate for the detection of microscopic damage and the related stiffness reduction. Note that other detection techniques based on spectrum analysis in the nonlinear ultrasonic domain could be applied for the detection of early damage with more sensitivity and spatial resolution. Among them, one can mention those using vibro-acoustic modulation [START_REF] Eckel | Investigation of damage in composites using nondestructive nonlinear acoustic spectroscopy[END_REF][START_REF] Meo | Detecting damage in composite material using nonlinear elastic wave spectroscopy methods[END_REF] or higher harmonics [START_REF] Ren | Relationship between second-and third-order acoustic nonlinear parameters in relative measurement[END_REF][START_REF] Shah | Non-linear ultrasonic evaluation of damaged concrete based on higher order harmonic generation[END_REF]. To address this limitation, the damage mechanisms at the microscopic scale are investigated using X-ray micro-computed tomography (mCT). The damage characterization at this scale provides a quantitative estimation of the void volume fraction evolution, which is related to the Young's modulus reduction.
Damage Investigation Using X-ray Micro-Computed Tomography
Experimental Procedure
The X-ray micro-computed tomography (mCT) investigations and acquisitions were carried out with an EasyTom (Nano) from RX Solutions. The general principle of mCT is detailed in Fig. 8. The acquisition parameters were chosen in order to analyze a representative volume element (RVE) of the composite material. A voxel resolution of 5.5 micrometers (μm) was adopted for the acquisitions. With this resolution, material volumes of 16.5 × 16.5 × 1.5 mm were reconstructed after the acquisition. The analyzed areas on the samples were close to the middle of the sample, where the extensometer was clipped on during the tensile tests. All the information about the selected parameters is summarized in Table 2. The sample was positioned on a rotating table while X-rays passed through it to a flat panel detector. Images were recorded as grey level maps, for all rotation angles, on a computer, before a 3D reconstruction with the X-Act software.
Table 1 Samples with their defined tensile loading for the 0° and 45° sample configurations, as well as their tensile-induced damage.

The matrix and the fibers of the composite are characterized by distinctive values of X-ray absorption. Hence, they can be easily separated on a grey level map. Air does not significantly absorb X-rays; consequently, the voids cannot be easily distinguished inside the composite samples, but in the 3D image reconstruction they appear as completely black volumes. A median filter was first applied to the raw results to reduce the amount of noise in the images. A despeckle filter was then applied to remove the main part of the remaining interference noise. The different reconstructions were then segmented into three classes (void, matrix and fibers) using a grayscale thresholding. The Avizo software segmentation enables an accurate estimation of the volume fraction of each of the three phases. Figure 9 illustrates an example of a 3D reconstruction of the considered composite material containing an initial porosity.
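A simplified sketch of such a void volume fraction estimation is given below; it is illustrative only (the actual segmentation was performed with the Avizo software) and the grey-level thresholds are hypothetical:

import numpy as np
from scipy.ndimage import median_filter

def phase_volume_fractions(volume, void_max, fiber_min, filter_size=3):
    """Estimate phase volume fractions of a reconstructed mCT volume.
    volume: 3D array of grey levels; voxels darker than `void_max` are voids,
    brighter than `fiber_min` are fibers, the rest is matrix."""
    filtered = median_filter(volume, size=filter_size)   # noise reduction
    void = filtered < void_max
    fiber = filtered > fiber_min
    matrix = ~void & ~fiber
    n = filtered.size
    return void.sum() / n, matrix.sum() / n, fiber.sum() / n

# Example with hypothetical 8-bit thresholds:
# f_void, f_matrix, f_fiber = phase_volume_fractions(reco, void_max=30, fiber_min=160)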
Prior to a discussion on the evolution of void volume fraction, it must be emphasized that the damage mechanisms have been observed for both considered orientations of samples. A characteristic length of the different damage mechanisms was also determined. Those data are critical to understand the damage scheme of the studied material and to validate the results from the ultrasonic study.
Damage Mechanisms
Sets of Samples Oriented at 0 • from the Warp Direction
Some examples of damage mechanisms observed for samples in the 0° configuration are illustrated in Fig. 10. For tensile tests along the warp fiber axis, the main damage mechanisms observed were the formation of cracks near the end of the yarns (Fig. 10b, c, d) and transverse cracks inside the yarns (Fig. 10a, c, for example). Some longitudinal cracks that follow the transverse yarns very locally in the last step of damage were also observed (Fig. 10d). As the loading level increased, the number and the size of defects (cracks and voids) increased too (Tables 3, 4). The cracks inside the yarns have a characteristic thickness of 20 μm, which is of the order of magnitude of the fiber diameter. Indeed, it is worth noting that those cracks are caused by fiber/matrix debonding, which propagates perpendicularly to the loading direction from one fiber to another. The cracks at the extremities of the yarns have a typical thickness of 50 μm for the most damaged samples. These longitudinal cracks can then propagate along the warp yarns, as shown in Fig. 10d.
Sets of Samples Oriented at 45 • from the Warp Direction
For samples in the 45° configuration, all the observed mechanisms are shown in Fig. 11. Firstly, cracks at the extremities of the yarns (visible in Fig. 11a) as well as cracks along the yarns were observed (Fig. 11a, b). On the sample loaded at 91.6% σ_UTS45°, many local fiber breakages were also observed; some of them were already visible in Fig. 11c for the sample tested at 61.1% σ_UTS45°. Those defects were observed around the central region of the sample, along the loading direction, especially near the edges. In Fig. 11d, some micro-buckling of fibers was noted, most notably in the sample loaded at the highest stress level. This is due to the reorientation of fibers during the loading test. Indeed, as the load increases the angle between the fibers tends to decrease, but when the tension is interrupted, the fibers cannot return to their original position because of the section reduction caused by the Poisson effect. This effect of fiber reorientation during off-axis loading was also observed by Vieille and Taleb [START_REF] Vieille | About the influence of temperature and matrix ductility on the behavior of carbon woven-ply PPS or epoxy laminates: Notched and unnotched laminates[END_REF], among others. Pseudo-delamination also appeared locally after roughly 60.1% σ_UTS45° of loading in some specimens, as visible in Fig. 11e. It propagated inside a yarn and followed the fiber orientation until it reached the location where the weft and warp yarns were intertwined together.

Fig. 11 The different damage mechanisms that appear when a sample in the 45° configuration is submitted to tension. These mechanisms consist in transverse cracks, cracks at the extremities of the yarns, fiber breakage, fiber buckling and yarn pseudo-delamination due to the propagation of longitudinal cracks.
When the stress level continued to increase, the crack turned and started propagating along the direction of the other yarn. It was noticed that the pseudo delamination and fiber buckling were the most important damage mechanisms during this study. These mechanisms are actually the only ones that were detected by the ultrasonic C-scan method described in Sect. 2. They both had a characteristic length of 200 μm.
Micro Computed Tomography Based Void Volume Fraction Estimation
As previously mentioned, a grey-level thresholding was performed on all the X-ray mCT 3D reconstructions, and the evolution of the void volume fraction was evaluated from it. Indeed, the increase of void volume fraction can often be used as a good indicator of damage evolution [Madra et al.; Williams et al.].
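As an illustration of this processing step, the sketch below computes a void volume fraction by simple grey-level thresholding of a 3D reconstruction stored as a NumPy array. It is only a minimal outline of the principle: the threshold value and the synthetic volume are illustrative assumptions, not the settings or data used with the Avizo segmentation in this study.

```python
import numpy as np

def void_volume_fraction(volume, threshold):
    """Fraction of voxels whose grey level falls below `threshold`.

    `volume` is a 3D array of reconstructed grey levels; voxels darker
    than the threshold are counted as voids, the rest as material.
    """
    void_mask = volume < threshold
    return void_mask.sum() / void_mask.size

# Illustrative use on a synthetic volume (not the actual mCT data):
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.7, scale=0.1, size=(200, 200, 200))
volume[95:105, :, :] = 0.05  # carve an artificial low-density "crack" band
print(f"void volume fraction: {100 * void_volume_fraction(volume, 0.3):.2f} %")
```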
The evolution of the void volume fraction is plotted for both orientations in Fig. 12. The experimental data for the 0° and 45° configurations are plotted as a function of the stress to failure of the 0° sample (Fig. 12a and b, respectively), and Fig. 12c allows an easy comparison of the two results.
The void volume fraction clearly increased with the tension loading level for both considered orientations. However, a higher growth of damage in the set of samples oriented at 45 • from warp direction was observed. In fact, for the samples oriented at 0 • and loaded at 92.6%σ UTS0 • , a void volume fraction of 1.56% was measured as it can be seen on the mCT-3D reconstruction in Fig. 13. On the other hand, as shown in Fig. 14, for the sample tested at 45 • for a stress level of 91.6%σ UTS45 • , the estimated volume void content was about 5.51%. These results are in agreement with the difference in terms of the behavior between the two sample's orientations discussed in Sect. 2. Indeed, the overall response of the 45 • samples was more ductile compared to the behavior of a sample tested at 0 • , which exhibited a linear elastic brittle behavior.
The evolution of the void volume fraction as a function of the applied stress level (Fig. 12c) exhibited a similar increase as the evolution of the macroscopic damage estimated from the Young's modulus reduction plotted in Fig. 6. Accordingly, these two quantitative estimations of the damage state, namely void volume content and Young's modulus reduction will be two actual measurements of the damage that validate the proposed new damage indicators.
Qualitatively, Fig. 13b shows a clear preferred orientation of the damage accumulation, which is perpendicular to the loading direction. It should also be kept in mind that the mCT observations were performed after unloading. Therefore, some of the opened cracks could be partially closed during the unloading and hence cannot be observed. Consequently, the actual void volume content may be to some extent higher than what was measured in the present study. This is especially the case for the 0° configuration samples. For the 45° configuration, the permanent deformation and the higher matrix permanent strain at high loading levels reduce the crack-closure effect.
Stiffness Components Measurements
Various methods based on ultrasonic wave propagation measurements can be utilized to detect the damage in composite materials. It was shown in Sect. 2 that the widely used ultrasonic C-scan imaging technique may be inefficient to detect the early damage stages for the composite samples oriented at 0° and 45°. As a first approach, a measurement of stiffness components using wave propagation velocities is considered in this section to measure the damage evolution. This stiffness component measurement method is described in the first sub-section. Results on samples oriented at 0° and 45° from the warp direction are then presented in further sub-sections and compared. Because of the samples' low thickness, guided wave propagation is considered for some of the measured signals, propagating in what is called further below the plane 1-2, whereas bulk wave measurements in transmission are treated for the remaining part of the measured signals, propagating in what we will call, further down, the plane 1-3. The procedure is described in the following sub-section. The difference in approach is of course linked to whether waves propagate as guided waves in the plate or pass through as bulk waves.

Principle of the Method

The main idea of this method is to measure the velocity of wave propagation at different incidence angles in different principal planes. Indeed, it can be shown that, for plane waves, the velocity of wave propagation through a solid homogeneous medium is a function of the stiffness and density of the sample via the following equation, also known as Christoffel's equation:

( C_ijkl n_k n_j − ρ V² δ_il ) U_l = 0        (2)

where C is the stiffness tensor, n is the vector normal to the plane of the wave (i.e. n is a vector in the direction of phase propagation), ρ is the material's density, V is the phase velocity, U is the polarization vector of the mechanical wave and δ is the Kronecker delta symbol.

Depending on the incidence angle, various wave modes may appear. Indeed, by solving the Christoffel equation, one can find three different theoretical bulk wave modes: a Quasi-Longitudinal (QL), a fast Quasi-Transversal 1 (QT1) and a slow Quasi-Transversal 2 (QT2). The QL mode has a polarization primarily parallel to the direction of n, whereas the QT modes have a polarization primarily perpendicular to the direction of n.

In order to use Christoffel's equation to describe the relation between the velocities of wave propagation inside the material and its stiffness components, these components must not depend on the position inside the material, i.e. the material must be considered as a homogeneous medium [Dalmaz et al.]. Therefore, the wavelength of the propagating wave must exceed the smallest microstructural features of the undamaged composite, i.e. the fibers.

From an experimental point of view, the velocity of the waves traveling through the sample is obtained by calculating the time delay δt, whether positive or negative, measured using an experimental set-up in transmission, visible in Fig. 15b. The latter provides the difference between the time of flight (ToF) of the wave from the emitter to the receiver with the sample and the time of flight of the ultrasonic waves in water (without the sample). So, a first wave travel time measurement, without the sample, needs to be performed and used as a reference. Then, the two following equations are used to calculate, respectively, the refraction angle θr and the wave phase velocity Vp:

θ_r = atan( sin θ_i / ( cos θ_i − V_0 δt / d ) )        (3)

V_p = V_0 sin θ_r / sin θ_i        (4)

where V_0 is the wave velocity in water and d the sample thickness. The values are calculated for all considered incidence angles in the plane 1-3 (defined in Fig. 15b).

Because of the dimensions of the samples, measurements of bulk waves in transmission in the plane 1-3 were performed but not in the plane 2-3. In order to have sufficient measurements to compute at least seven stiffness constants, additional measurements in the 1-2 plane (azimuth plane, defined in Fig. 15a) were performed. Indeed, because of the small thickness of the sample, bulk wave measurement in transmission cannot be done in this plane, since the sound may pass by the sample rather than through it while rotating the emitter-receiver couple. As a consequence, the ultrasonic waves propagating in this plane are guided waves, which are described by other sets of equations.

Indeed, one must be sure that correct descriptors of the acoustic fields inside the plate are considered. For the first arriving pulse passing through the plate along the shortest (straight) path between two transducers facing each other, plane waves are considered responsible for the measured through-transmitted pulse. However, guided waves or later arriving pulses caused by multiple scattering phenomena cannot be described as plane waves. In the plane 1-3, considered first, such a bulk plane wave approach is acceptable and typically performed; however, when guided waves are used in the plane 1-2, we are obliged to use more complicated expressions for guided waves, described further below. In fact, guided waves, contrary to the bulk waves described above, are known to be dispersive, and accordingly their phase velocities are functions of the frequency and sample thickness, in addition to the material properties. Moreover, this dispersive effect induces a phase velocity and a group velocity of the guided wave modes that differ from one another, which affects the interpretation of the measured signals. Although theoretically they can be linked through the same Christoffel's equation through a plane wave expansion of the acoustic field, experimentally one must take caution not to confuse one with the other.

Specific experimental procedures must be used depending on whether the first or the second velocity is required. Group velocity is usually obtained by ToF measurements on a specific peak in the wave signal for two distances between emitter and receiver (or by time-delay measurement). An experimental set-up different from the first one presented, and depicted in Fig. 15a, is used for acquiring the phase velocities in the plane 1-2. This set-up in immersion, using two transducers in pitch-catch, both at a chosen incidence/reflection angle, can be used to measure the phase velocities [Karim et al.; Kim et al.]. However, when the time of arrival is used in Sect. 5.2, a phase shift is then extracted as is and not converted to velocity. The two experimental set-ups are depicted in Fig. 15.

The phase velocities V_Lwp in the plane of the plate, for Lamb waves, are then obtained with the use of Snell's law with θ_r = 90°, i.e. equation (4) transforms into:

V_Lwp = V_0 / sin θ_i        (5)

For each azimuthal direction of propagation (ψ) in the 1-2 plane, the incidence angle is adjusted in order to find the same guided wave mode at every acquisition. If the time of arrival were used, as in the bulk wave approach, then one has to remember that this time is actually linked to the group velocity of the guided mode rather than to its phase velocity. The relation between the frequencies and the group/phase velocities is usually represented on dispersion curves. They were computed for propagation in the considered composite sample along the direction of the fibers using the commercially available Disperse software and are visible in Fig. 16. The frequency spectrum of every recorded signal is also carefully measured to compare the experimental results with the numerical dispersion curves.

Because guided wave propagation along off-principal planes will be considered, the guided wave equations need to be given in explicit form for a monoclinic stiffness tensor. The latter is obtained by a rotation about axis 3, at a chosen angle, of the orthotropic stiffness matrix. In this case, the equations of motion are defined by the following system of equations (for more details about the guided wave characteristic equations the reader can refer to the book of Nayfeh, Wave Propagation in Layered Anisotropic Media):

C11 ∂²u1/∂x1² + C44 ∂²u1/∂x2² + C55 ∂²u1/∂x3² + 2C14 ∂²u1/∂x1∂x2 + C14 ∂²u2/∂x1² + C24 ∂²u2/∂x2² + C56 ∂²u2/∂x3² + (C12 + C44) ∂²u2/∂x1∂x2 + (C13 + C55) ∂²u3/∂x1∂x3 + (C34 + C56) ∂²u3/∂x2∂x3 = ρ ∂²u1/∂t²        (6)

C14 ∂²u1/∂x1² + C24 ∂²u1/∂x2² + C56 ∂²u1/∂x3² + (C12 + C44) ∂²u1/∂x1∂x2 + C44 ∂²u2/∂x1² + C22 ∂²u2/∂x2² + C66 ∂²u2/∂x3² + 2C24 ∂²u2/∂x1∂x2 + (C34 + C56) ∂²u3/∂x1∂x3 + (C23 + C66) ∂²u3/∂x2∂x3 = ρ ∂²u2/∂t²        (7)

(C13 + C55) ∂²u1/∂x1∂x3 + (C34 + C56) ∂²u1/∂x2∂x3 + (C34 + C56) ∂²u2/∂x1∂x3 + (C23 + C66) ∂²u2/∂x2∂x3 + C55 ∂²u3/∂x1² + C66 ∂²u3/∂x2² + C33 ∂²u3/∂x3² + 2C56 ∂²u3/∂x1∂x2 = ρ ∂²u3/∂t²        (8)

Based on those wave equations, the following guided wave characteristic equations can consequently be obtained:

A = D11 G1 tan(γ α1) − D13 G3 tan(γ α3) + D15 G5 tan(γ α5) = 0,  for the antisymmetric modes        (9)

S = D11 G1 cot(γ α1) − D13 G3 cot(γ α3) + D15 G5 cot(γ α5) = 0,  for the symmetric modes        (10)

with α_q (q = 1, 3, 5) being the solutions of the wave equations when the displacement field is of the form u_j = U_j e^{iξ(x1 + α x3 − ct)}. Here U_j is the wave amplitude, c is the guided wave phase velocity, ξ is the wavenumber and γ = ξ d/2 = π f d/c, where d is the sample's thickness and f the wave frequency. Finally, G_i and D_iq are given by:

G1 = D23 D35 − D33 D25,  G2 = D21 D35 − D31 D25,  G3 = D21 D33 − D31 D23        (11)

D1q = C13 + C34 Vq + C33 αq Wq,  D2q = C55 (αq + Wq) + C56 αq Wq,  D3q = C56 (αq + Wq) + C66 αq Wq        (12)

The amplitude ratios Vq and Wq are given by:

Vq = U2q / U1q  and  Wq = U3q / U1q        (13)

An overdetermined minimization problem can then be defined. This problem is classically solved by considering a least squares approach and using a Levenberg-Marquardt algorithm. The stiffness components are obtained by minimizing the functional F(Cij) defined as:

F(Cij) = Σ_{k=1}^{n} [ V_exp,k − V_num,k(Cij) ]²,  with n the number of experimental velocities        (14)

The numerical velocities are determined by solving the Christoffel equation and the guided wave characteristic equations for a given set of Cij. In order to avoid finding a local minimum solution, it is important to initiate the algorithm with a first guess of Cij that is relatively close to the real solution. In this paper, the initialization values of the algorithm are calculated by a periodic homogenization computation [Lomov et al.]; they are indicated in Table 5. The values of the identified stiffness components depend on this initial guess but also on the number of experimental velocities considered and the number of principal planes investigated. Therefore it is necessary to estimate a confidence interval, ci(i), for each component of the identified stiffness matrix. It was proposed by Audoin et al. to use the covariance matrix φ to obtain statistical information on the deviation from the analytical solutions for each stiffness component. This covariance matrix is calculated as:

φ = ( r^t r / (n − m) ) ( [J]^t [J] )^(−1)        (15)

where [J] = ∂F(Cij)/∂Cij is the Jacobian matrix, r is the vector of the residual values, i.e. the functional evaluated with the identified solution, n is the number of experimental velocities and m is the number of stiffness parameters to be identified. The values of the confidence interval can then be extracted from the diagonal terms of the covariance matrix:

ci(i) = √φ_ii        (16)

The acquisitions were carried out using a custom-designed five-axis immersion scanner fabricated by Inspection Technology Europe BV. The pulses were emitted by the dual pulser-receiver DPR500 made by JSR Ultrasonics. The experimental data were acquired with the Winspect software and were post-processed with Matlab. An immersion Panasonic transducer with a central frequency of 2.25 MHz was used in order to satisfy the homogeneous medium hypothesis. The shape and the spectrum of the emitted pulse are represented in Fig. 17.

Application to Sets of Samples Oriented at 0° from the Warp Direction

The composite is considered as orthotropic prior to loading and remains so even after the damage induced by the tensile loading. The linear elastic behavior of the composite can therefore be described using nine stiffness components. However, considering the dimensions of the sample, only measurements in two principal planes were performed, as illustrated in Fig. 15. These measurements only permitted obtaining seven out of the nine stiffness components. It was observed that along the plane 1-3, the quasi-longitudinal and quasi-transversal 1 modes can be measured. For the plane 1-2, in which guided waves propagate, the frequency and phase velocity of the transmitted signal were measured. Consequently, after comparison with the dispersion curves in Fig. 16, the mode S2 was identified as the one propagating in the composite. Before running the optimization procedure for obtaining the stiffness constants, a computation of the expected velocity as a function of the incidence angle was performed. The quasi-longitudinal, quasi-transversal 1 and quasi-transversal 2 modes propagating in the 1-3 principal plane were calculated for a refraction angle ranging from 0 to 90°. This computation was based on the stiffness matrix obtained by periodic homogenization (Table 5). The computed velocities were compared to those obtained experimentally for the undamaged sample. The same calculation was made for Lamb wave propagation in the 1-2 plane, more particularly for the S2 mode. The comparison of experimental and analytical wave propagation velocities is illustrated in Fig. 18. It confirms that the mode experimentally measured corresponds to the actual wave mode that was assumed to propagate in the composite material.
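The computation of the expected bulk velocities can be sketched as follows: the Christoffel tensor of Eq. (2) is assembled for a given propagation direction and its eigenvalues give ρV². The snippet is a hedged illustration only: it uses the conventional Voigt ordering (4 = 23, 5 = 13, 6 = 12), which may differ from the index convention adopted in this paper, the stiffness values are loosely inspired by the periodic-homogenization column of Table 5, and the density is an assumed placeholder.

```python
import numpy as np

# Voigt-indexed stiffness (GPa assumed) -- illustrative values only.
C_voigt = np.array([
    [20.0,  2.1, 1.5, 0.0, 0.0, 0.0],
    [ 2.1, 20.0, 1.5, 0.0, 0.0, 0.0],
    [ 1.5,  1.5, 4.5, 0.0, 0.0, 0.0],
    [ 0.0,  0.0, 0.0, 2.3, 0.0, 0.0],
    [ 0.0,  0.0, 0.0, 0.0, 1.3, 0.0],
    [ 0.0,  0.0, 0.0, 0.0, 0.0, 1.3],
]) * 1e9          # Pa
rho = 1800.0      # kg/m^3, assumed placeholder density

VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def christoffel_velocities(n):
    """Phase velocities (m/s) of the three bulk modes for propagation direction n."""
    n = np.asarray(n, float)
    n /= np.linalg.norm(n)
    gamma = np.zeros((3, 3))
    for i in range(3):
        for k in range(3):
            gamma[i, k] = sum(C_voigt[VOIGT[(i, j)], VOIGT[(k, l)]] * n[j] * n[l]
                              for j in range(3) for l in range(3))
    eigvals = np.linalg.eigvalsh(gamma)          # eigenvalues equal rho * V^2
    return np.sqrt(np.maximum(eigvals, 0.0) / rho)

# velocities in the 1-3 plane for a few refraction angles
for theta_deg in (0, 15, 30, 45, 60):
    t = np.radians(theta_deg)
    print(theta_deg, np.round(christoffel_velocities([np.sin(t), 0.0, np.cos(t)]), 1))
```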
For the undamaged sample oriented along the warp direction (0°), the experimental stiffness constants were close to the numerically obtained results, as observed in Table 5. The evolution of the stiffness constants is plotted in Fig. 19 with their respective confidence intervals. Furthermore, it was noticed that the global evolution of the stiffness components was close to the results discussed by Hufenbach et al. An important decrease of the components C11 and C13 was noticed, whereas a smaller decrease of C12 and C55 and almost no change of C33, C22 and C44, compared to the other stiffness components, were observed (Fig. 19).
One can then conclude that the components depending on the loading direction (direction 1), namely C11 and C13 were more affected by the damage. The component C12 was also impacted by the damage but at a lower extent.
Application to Sets of Samples Oriented at 45 • from the Warp Direction
In the 45° configuration, another propagation plane, named X-3, is considered (defined in Fig. 20). The latter is oriented at 45° from the plane 1-3 used in the 0° configuration. Indeed, when the sample is oriented at 45° from the warp direction, the axis of the sample corresponds to the axis of the plane X-3. The different planes of propagation are illustrated in Fig. 20.
Fig. 20 Schematic representation of the different propagation planes of ultrasonic waves. For samples in the 0° configuration, the planes 1-3 and 1-2 were used. For the samples in the 45° configuration, the planes X-3 and 1-2 (or azimuth plane) were used
The X-3 plane is not a principal plane of the composite but remains a plane of symmetry in its undamaged state. The same procedure can then be applied. However, when submitted to tension at 45° from the warp direction, the damage induced in the sample can introduce a loss of elastic symmetry. This can be provoked by the fiber reorientations or by unexpected micro cracks perpendicular to the loading direction, even though Fig. 14b indicates that the majority of the damage appears along the two fiber axes (i.e. ±45°). Therefore, to describe the elastic behavior of the samples oriented at 45° with respect to the warp direction, thirteen stiffness components are necessary. The Christoffel equation is consequently modified by considering the C56, C14, C24 and C34 components of the stiffness tensor. However, ultrasonic measurements in four planes are needed to have a good estimation of these thirteen stiffness components. Some non-negligible errors may result from this limitation in the number of measurements. The evolution of the stiffness components is plotted in Fig. 21 with their respective confidence intervals. During the first steps of the loading, the components that are a function of the loading direction (especially C11 and C13) did not change at all, but they dropped drastically after 61.1% σUTS45°. The shear components C44 and C12 were of course impacted from the beginning of the test. The components C33 and C55 did not really change during the tensile tests. This led to a very different damage behavior compared to the other sample configuration. Some minor changes were observed for the C56, C14, C24 and C34 components. However, their confidence interval was very large when compared to the other stiffness components. This is explained by the lack of measurement in two additional planes, as mentioned earlier. The major impact of the loading on the shearing components was clearly detected with the present method. Furthermore, this was in agreement with the damage mechanisms observed in the previous section. Indeed, it is worth recalling that the fiber buckling was caused by the reorientation of the fibers, which was induced by important local shearing in the mesostructure.
Fig. 21 Evolution of seven of the stiffness constants with the increase of the applied stress level for the 45° configuration samples, with their respective confidence intervals
This method can be used to obtain, non-destructively, the evolution of the stiffness components of a structure submitted to various complex loading schemes, without performing post-loading tensile tests. This is a significant advantage for non-destructive damage evaluation.
In the present ultrasonic study, a complete 3D characterization of the stiffness tensor for different damage states was obtained via the wave propagation velocities through the composite material. A distinct anisotropic evolution of damage was observed for each of the two sample configurations. In addition, the sensitivity of the chosen method to detect and quantify different damage schemes was verified. However, it is necessary to note the influence of the crack-closure effect, which may reduce the amount of damage that can be efficiently quantified. Even if the X-ray tomography results indicated the presence of an increasing amount of defects, the stiffness reduction measurements in the present study were only representative of the cracks that remain open after unloading to zero force. As mentioned earlier, the 0° configuration might be concerned by this effect to a higher extent than the 45° configuration.
It is clear that this leads to an excessive amount of information that may be difficult to analyze quickly in terms of global damage state. For this reason, two damage indicators are considered in the next section to provide information that could be more easily interpreted in terms of global damage state of a sample.
Proposition of New Ultrasound Based Damage Indicators
In the previous section it was shown that damage growth can be detected and quantified via measurement of wave velocities. This allowed obtaining an almost complete stiffness tensor for each sample. However, it still remains difficult to compare the damage state between the different samples. The post-processing time to obtain the stiffness constants can be long, so it is interesting to identify a method that can quickly estimate the damage state of the studied material by using the ultrasonic signal directly. In this study, a scalar variable is proposed as damage indicator, applicable in industry as a practical tool. Usually, in the field of ultrasonics, the damage estimation is based on the amplitude attenuation, which is extracted from the raw ultrasonic results (transmitted time signals) presented in the previous section. However, it is not really efficient in the present case because it exhibits important oscillations as the loading increases. Therefore, another damage indicator is required. Two damage indicators are presented in this section. The first is the Frobenius norm of the computed stiffness tensor and the second is based on the phase angle shift between wave signals propagated through a damaged and an undamaged sample.
Frobenius Norm of the Stiffness Tensor Based Damage Indicator
The Frobenius norm of a tensor is defined as follows:
N_f(C) = √( trace( C* C ) )        (17)

with C the stiffness tensor and C* its conjugate transpose.
Taking into account that a proper damage indicator should have a cumulative evolution with the increase of the damage state, the following damage indicator is adopted:
DI1 = | N_f(C) − N_f0(C0) |        (18)
where N_f0 is the Frobenius norm of the undamaged sample's stiffness matrix and C0 is the undamaged sample's stiffness matrix. It is recalled that the Frobenius norm actually computes the norm of the vector of eigenvalues of the matrix C. This is more convenient for this study than the quadratic norm or the largest singular value, which do not take into account all the components of that vector. The resulting evolution of the Frobenius norm is plotted, for the 0° and 45° configurations, in Fig. 22a and b respectively, and Fig. 22c superimposes the two configurations for an easy comparison. A global increase of the indicator was clearly observed in both cases, as expected. Furthermore, it was noticed that the indicator increases moderately faster for the 45° than for the 0° configuration. This aspect is consistent with the different response of the samples when submitted to tension, since the behavior is ductile for 45° and brittle for 0°. The propagation of more cracks in the 45° samples was observed by mCT, as presented in Sect. 3. The indicator's value for the 45° sample loaded at 91.6% σUTS45° is actually even higher than the corresponding one for the 0° sample loaded at 92.6% σUTS0°.
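A minimal sketch of the indicator of Eqs. (17)-(18) is given below, assuming the damaged and undamaged stiffness matrices have already been identified; the 6×6 matrices used in the call are placeholders, not measured values.

```python
import numpy as np

def frobenius_norm(C):
    """N_f(C) = sqrt(trace(C* C)); for a real symmetric C this is the usual Frobenius norm."""
    C = np.asarray(C)
    return np.sqrt(np.trace(C.conj().T @ C).real)

def damage_indicator_1(C_damaged, C_undamaged):
    """DI1 = | N_f(C) - N_f0(C0) |, Eq. (18)."""
    return abs(frobenius_norm(C_damaged) - frobenius_norm(C_undamaged))

# Illustrative call with placeholder stiffness matrices (GPa):
C0 = np.diag([22.2, 21.8, 4.1, 2.3, 1.6, 1.3])
C  = np.diag([18.0, 21.0, 4.0, 2.0, 1.5, 1.2])   # hypothetical damaged state
print(damage_indicator_1(C, C0))
```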
Phase Shift Based Damage Indicator
The global shape of the transmitted signal remains mostly unchanged whatever the damage severity in this frequency range, as can be observed in Fig. 23 (response of ultrasonic signals propagating through the 1-3 plane, at 0° of incidence, for a 0° sample in the undamaged state and after loading at a stress level of 150 MPa). The phase shift of the signal is therefore considered as a pertinent choice to quantify the damage state evolution. The non-evolution of the global shape of the signal is induced by the choice of a frequency at which the composite sample is considered as homogeneous. It is worth mentioning that the time shift corresponds to a change in phase or group velocity depending on the case, when they are different from each other, as visible in Fig. 16. However, the damage indicator determined here is not an extraction of that velocity: the phase shift is used as is. This shift is of course a function of the wave propagation velocity change, which was seen to be effective for quantifying damage, as shown in Sect. 4. In addition, by considering multiple incidence angles, an anisotropic investigation of damage is possible. This was demonstrated in Sect. 5, and is an important advantage of the proposed damage indicator. Therefore, as for the previous indicator, the phase shift is averaged over all incidence angles. An increase of the phase shift of the signals induces an increase of the damage indicator. The phase shift indicator is calculated as follows:

DI2 = (1 / n_max) Σ_{n=1}^{n_max} | ph(n) − ph_0(n) |        (19)

ph(n) = (1 / sp) Σ_t ph(t, n)        (20)

ph(t, n) = atan( Im( H(t, n) ) / Real( H(t, n) ) ) = Im( Log( H(t, n) ) )        (21)

where ph_0(n) refers to the reference (undamaged) measurement at incidence angle n. The evolution of this proposed damage indicator for the two sample configurations, 0° and 45°, is plotted in Fig. 24a and Fig. 24b respectively. In Fig. 24c, the evolution of the damage indicator is plotted for the two sample configurations for comparison purposes. Fig. 24 points out that the damage measured using the phase shift indicator for the sample in the 45° configuration loaded at a stress level of 91.6% σUTS45° reaches a level of 0.9. This is clearly higher than the damage level of 0.3 measured on the sample in the 0° configuration loaded at a stress level of 92.6% σUTS0°. This was also observed with the first damage indicator based on the Frobenius norm, depicted in Fig. 22. It must be emphasized that the proposed damage indicator is a relative estimation of the damage with respect to a reference state, which is generally the undamaged state. The experimental procedure and arrangements must be rigorously identical when performing the ultrasonic wave measurements on the undamaged composite sample and on the damaged one. Indeed, issues with the alignment of the transducers or the measurement triggering could induce errors in the signal comparison and lead to an inaccurate diagnosis.
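The phase-based indicator can be sketched as below. The analytic signal H(t, n) is obtained here with scipy.signal.hilbert and the instantaneous phase is unwrapped before averaging; both choices are assumptions about the signal processing, and the synthetic tone bursts are placeholders rather than measured waveforms.

```python
import numpy as np
from scipy.signal import hilbert

def mean_phase(signal):
    """ph(n): instantaneous phase of the analytic signal, averaged over time (Eqs. 20-21)."""
    analytic = hilbert(signal)                  # H(t, n), assumed analytic-signal definition
    inst_phase = np.unwrap(np.angle(analytic))  # atan(Im/Re), unwrapped
    return inst_phase.mean()

def damage_indicator_2(damaged_signals, reference_signals):
    """DI2: average over incidence angles of |ph(n) - ph0(n)| (Eq. 19)."""
    diffs = [abs(mean_phase(d) - mean_phase(r))
             for d, r in zip(damaged_signals, reference_signals)]
    return float(np.mean(diffs))

# Illustrative use on synthetic 2.25 MHz tone bursts with a small extra delay
fs, f0 = 50e6, 2.25e6
t = np.arange(0, 10e-6, 1 / fs)

def burst(delay):
    return np.sin(2 * np.pi * f0 * (t - delay)) * np.exp(-((t - delay - 2e-6) / 1e-6) ** 2)

reference = [burst(0.0) for _ in range(5)]
damaged = [burst(30e-9) for _ in range(5)]      # hypothetical 30 ns shift
print(damage_indicator_2(damaged, reference))
```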
The two proposed ultrasound-based damage indicators (DI1 and DI2) are respectively plotted in Fig. 25a, b. As illustrated in this figure, the evolution of the Frobenius norm for the two sets of samples is very close up to an applied stress ratio of 30% σUTS0°. Beyond this ratio, the Frobenius norm for the 45° samples evolves faster than that of the 0° samples. For the phase shift based indicator, the difference between the two sample orientations is more pronounced. For an applied stress ratio of 12% σUTS0° on the 45° samples, the value of DI2 reaches 0.32, which is close to the highest value reached for the 0° samples. Globally, the two new ultrasound-based damage indicators reveal a higher increase of the damage state for the samples oriented at 45° from the warp direction.
The estimation of damage based on the elastic modulus reduction determined in Sect. 2 as well as the void volume evolution from the X-ray mCT measurements presented in Sect. 3 are respectively plotted in Fig. 26a,b. This figure presents the experimental results used as validation for the new indicators. They indeed exhibit a similar damage evolution as function of the applied stress ratio with a higher increase for samples in the 45 • configurations. It must be noted that the kinetic growth of damage is better predicted by the proposed phase angle shift indicator when compared to those two validation criteria.
Concluding Remarks
The anisotropic evolution of damage in polymer based woven composite material is strongly affected by fiber orientation and loading direction. This leads to the appearance of various damage mechanisms that influence the structural integrity differently. The development of nondestructive evaluation techniques is therefore necessary to determine the possible degradation of composite components on-site in a very applicable fashion, in service. These techniques should, however simple, still be sensitive to different damage state evolution schemes.
The present study and results aimed at detecting and quantifying the anisotropic evolution of damage with ultrasonic techniques. More specifically, polyamide 6.6/6 samples reinforced with 2/2 twill weave glass fabric, preliminarily damaged by stepwise-increasing interrupted tensile loading, were used. Two sample configurations were considered: (i) samples oriented at 0° and (ii) at 45° from the mold flow direction (corresponding to the warp direction). This choice was based on the knowledge of the different behavior of the two orientations, which ranges from brittle (0°) to ductile (45°). The decrease of the elastic modulus was also measured for every considered stress level and used as a first validation result for the proposed damage indicators. A higher evolution of damage was observed for the samples oriented at 45° from the warp fiber direction.
After highlighting the limitations of classical ultrasonic C-scan imaging to detect early damage state, X-ray mCT has been used to observe the different damage mechanisms such as matrix cracking, micro-buckling and fiber breakage. The void volume fraction was also measured on all considered samples. From the first experimental results, it was observed and confirmed that the largest damage mechanisms appear on samples in the 45 • configuration. Those samples exhibited also a higher increase of void volume fraction than the samples oriented at 0 • from the warp direction.
Measurements of the stiffness components on several samples previously damaged at different levels were also performed based on ultrasonic methods. A clear evolution of the stiffness component values was observed even for the lowest damage state considered. The two studied sample configurations actually exhibited responses that differ from one another. However, it may be difficult to obtain clear information on the global damage state from the full stiffness matrix. It was therefore proposed to consider scalar variables, obtained using results from the same ultrasonic method, as damage indicators. It may be true that scalar variables in general lack extensiveness compared with higher dimensional vector or tensor variables, but if the information they represent is functional as a damage indicator then they can be used in industry. The Frobenius norm of those stiffness matrices was firstly used as a damage indicator. A damage indicator based on the phase angle shift was also proposed to estimate the damage. The information delivered by those two damage indicators showed similarities with the results given by the void volume fraction evolution and the elastic modulus reduction. In other words, the samples oriented at 45° from the warp direction exhibited a higher damage increase with increasing loading level than the samples in the 0° configuration. However, the kinetics of damage growth of the two studied sample configurations was more successfully evaluated by the phase angle shift damage indicator, if referring to the tensile tests and X-ray mCT results.
The presented ultrasonic method provides an appropriate approach to evaluate the global damage state of a composite material reinforced with a complex fabric. Now that the method is effective under laboratory condition on standard samples, the next step is to validate the method on more complex automotive parts. The proposed method requires adaptations for practical applications. It must be robust in order to provide good repeatability of the measurement to avoid measurement errors that could alter the quality of the damage evaluation, such as wrong ultrasonic transducers' alignment and measurement triggering. An investigation of the method's sensitivity with air-coupled transducer could also be considered. Indeed, for more practical applications of the damage evaluation procedure on assembly lines, immersion of samples in water should be avoided to improve the method's applicability.
Fig. 1 Overall 3D mesostructure of the studied composite consisting of 2/2 twill weave fabric reinforced polyamide 6.6/6 matrix
Fig. 5 Schematic representation of the elastic moduli estimation of E0 and En for respectively a 0° and b 45° configuration samples
Fig. 7 Preliminary C-scan results on 2/2 twill weave fabric reinforced polyamide composite previously loaded in tension with different amplitudes. a Samples in 0° configuration; b samples in 45° configuration. For the latter, besides the large damage zones, one can easily distinguish the 45° orientation of the yarns
Fig. 9 a As-received mesostructure of the 2/2 twill weave fabric composite obtained by 3D reconstruction prior to mechanical loading. b Initial voids and process-induced defects are represented by light color and their content is estimated using Avizo software segmentation (about 0.6% volume)
Fig. 12 Evolution of void volume fraction for a 0° and b 45° configuration of sample with increasing loading. In c, data of both configurations are plotted for comparison purpose
Fig. 15 Schematic representation of the two planes of interest: a the plane 1-2 or azimuth plane and b the plane 1-3
Fig. 16 Dispersion curves computed with the Disperse software for guided wave propagation in a 1.53 mm thick studied composite along the fibers direction. a Phase velocities, b group velocities
Fig. 17 a Emitted pulse signal and b spectrum for a 2.25 MHz center frequency transducer. The experimentally measured center frequency is about 2.1 MHz
Fig. 18 Comparison of numerically and experimentally determined propagation wave velocities. The experimental values are represented by points whereas numerical values are represented by continuous lines. a Plane 1-3; b plane 1-2 or azimuth plane
Fig. 22 Evolution of the Frobenius norm of the stiffness matrix for the two configurations of samples, a 0° and b 45°, as a function of the applied stress. In c, data of both configurations are plotted for comparison purpose
Fig. 24 Absolute phase angle evolution with increasing tensile stress for a 0° and b 45° configuration of sample. In c, data of both configurations are plotted for comparison purpose
Fig. 26 Damage indicators used as validation tools. a Damage estimated from the elastic modulus reduction and b void volume fraction. Those indicators are plotted for the sets of samples oriented at 0° and 45° from the warp direction as a function of the normalized applied stress level
Table 2 X-ray mCT acquisition parameters
Detector resolution: 2320 × 2336 px; Exposure: 4 s; Voltage: 90 kV; Voxel size: 5.5 μm; Focus-to-detector distance (FDD): 670 mm; Focus-to-object distance (FOD): 110 mm
Table 3 Void volume fraction of the tested samples oriented at 0° from the warp direction
Table 4 Void volume fraction of the tested samples oriented at 45° from the warp direction
Sample: 7, 8, 9, 10
Defined loading (% σUTS45°): 0, 30.5, 61.1, 91.6
Void volume fraction (%): 0.59, 1.69, 3.04, 5.51
Table 5 Comparison between numerical and experimental results of the stiffness constants for the undamaged glass fiber reinforced woven composite (the numbers between parentheses are the confidence intervals)
C11, C12, C13, C22, C23, C33, C44, C55, C66
Numerical (periodic homogenization): 20, 2.1, 1.5, 20, 1.5, 4.5, 2.3, 1.3, 1.3
Experimental: 22.21 (0.2), 2.57 (0.15), 1.41 (0.76), 21.81 (0.08), -, 4.1 (0.07), 2.33 (0.09), 1.58 (0.21), -
Acknowledgements
The present work was funded by PSA Group and made in the framework of the OpenLab Materials and Processes. This OpenLab involves PSA Group, The Arts et Metiers ParisTech and Georgia Tech Lorraine. |
Daniel Liberzon
Alexandre Dolgui
Claude Jard
Stephane Lafortune
Jean-Jacques J Loiseau
Jose-Luis Santi
Alice Pascale
Ma Camille
Emilienne Mère
PhD Santi Esteva
J Aguilar-Martin
J L De La Rosa
J Colomer
J C Hennet
E Garcia
J Melendez
V Puig
Jose-Luis L Villa
email: jvilla@emn.fr
M Morari
K E Arzen
M Duque
email: maduque@uniandes.edu.co
A Gauthier
email: agauthie@uniandes.edu.co
N Rakoto
email: naly.rakoto@mines-nantes.fr
Eduardo Mojica
email: ea.mojica70@uniandes.edu.co
P Caines
Nicanor Quijano
P Riedinger
N Rakoto -Supervision
M Quijano
C Ocampo-Martinez
H Gueguen
Jean-Sebastien Besse
Naly Rakoto-Ravalontsalama
Eduardo Mojica-Nava
Germán Obando
Modelling, Control and Supervision for a Class of Hybrid Systems
Keywords: method of moments, optimal control, switched systems, distributed optimisation, resource allocation, multi-agent systems
Introduction
The HDR (Habilitation à Diriger des Recherches) is a French degree obtained some years after the PhD. It allows the candidate to apply for University Professor positions and/or for a Research Director position at CNRS. Instead of explaining it in detail, the selection process after an HDR is summarized with a Petri net model in Figure 1. After obtaining the HDR degree, the candidate is allowed to apply for a Research Director position at CNRS, after a national selection. On the other hand, in order to apply for University Professor positions, the candidate should first obtain a national qualification (CNU). Once this qualification is obtained, the candidate can then apply to University Professor positions, with a selection specific to each university. This HDR thesis is an extended abstract of my research work from my PhD thesis defense in 1993 until now. This report is organized as follows.
• Chapter 1 is a Curriculum Vitae.
I have held and currently hold the following administrative responsibilities at Mines Nantes:
• 1997-2000: Last Year's Option AII (Automatique et Informatique Industrielle)
Research Activities
My main topics of research are the following:
1. Analysis and control of hybrid and switched systems
2. Supervisory control of discrete-event systems
These will be detailed in Chapter 2 and Chapter 3, respectively. The following other topics of research will not be presented. However, the corresponding papers can be found in the Complete List of Publications.
• Resource Allocation
• Holonic Systems
• Inventory Control
- Invited Session on Knowledge Based Systems, IEEE ISIC 1999, Cambridge, MA, USA, Sep. 1999 (jointly organized and chaired with Karl-Erik Årzèn).
- Workshop on G2 Expert System, LAAS-CNRS, Toulouse, France, Oct. 1995 (jointly organized and chaired with Joseph Aguilar-Martin).

Funded and Submitted Projects
Complete List of Publications
A summary of the papers, classified per year, from 1994 to 2017, is given in the following
Chapter 2
Analysis and Control of Hybrid and Switched Systems
Modeling and Control of MLD systems
Piecewise affine (PWA) systems have been receiving increasing interest, as a particular class of hybrid system, see e.g. [2], [13], [11], [16], [14], [12] and references therein. PWA systems arise as an approximation of smooth nonlinear systems [15] and they are also equivalent to some classes of hybrid systems, e.g. linear complementarity systems [9]. On the other hand, Mixed Logical and Dynamical (MLD) systems have been introduced by Bemporad and Morari as a suitable representation for hybrid dynamical systems [3]. MLD models are obtained originally from PWA systems, where propositional logic relations are transformed into mixed-integer inequalities involving integer and continuous variables. Then mixed-integer optimization techniques are applied to the MLD system in order to stabilize the MLD system on desired reference trajectories under some constraints. Equivalences between PWA systems and MLD models have been established in [9]. More precisely, every well-posed PWA system can be rewritten as an MLD system assuming that the set of feasible states and inputs is bounded, and a completely well-posed MLD system can be rewritten as a PWA system [9]. Conversion methods from MLD systems to equivalent PWA models have been proposed in [4], [5], [6] and [?]. Vice versa, translation methods from PWA to MLD systems have been studied in [3] (the original one), and then in [8], [?]. A tool that deals with both MLD and PWA systems is HYSDEL [17].
The motivations for studying new methods of conversion from PWA systems into their equivalent MLD models are the following. Firstly, the original motivation for obtaining MLD models is to rewrite a PWA system into a model that allows the designer to use existing optimization algorithms such as mixed integer quadratic programming (MIQP) or mixed integer linear programming (MILP). Secondly, there is no unique formulation of PWA systems; we can always address some particular cases that introduce some differences in the conversions. Finally, it has been shown that the stability analysis of PWA systems with two polyhedral regions is in general NP-complete or undecidable [7]. The conversion to MLD systems may be another way to tackle this problem.
Piecewise Affine (PWA) Systems
A particular class of hybrid dynamical systems is the system described as follows.
ẋ(t) = A_i x(t) + a_i + B_i u(t)
y(t) = C_i x(t) + c_i + D_i u(t)        (2.1)
where i ∈ I, the set of indexes, x(t) ∈ X i which is a sub-space of the real space R n , and R + is the set of positive real numbers including the zero element. In addition to this equation it is necessary to define the form as the system switches among its several modes. This equation is affine in the state space x and the systems described in this form are called piecewise affine (PWA) systems NR HDR 22 [15], [9]. The discrete-time version of this equation will be used in this work and can be described as follows.
x(k + 1) = A_i x(k) + b_i + B_i u(k)
y(k) = C_i x(k) + d_i + D_i u(k)        (2.2)
where i ∈ I is a set of indexes, X i is a sub-space of the real space R n , and R + is the set of positive integer numbers including the zero element, or an homeomorphic set to Z + .
Mixed Logical Dynamical (MLD) Systems
The idea in the MLD framework is to represent logical propositions with the equivalent mixed integer expressions. MLD form is obtained in three steps [3], [4]. The first step is to associate a binary variable δ ∈ {0, 1} with a proposition S, that may be true or false. δ is equal to 1 if and only if proposition S is true. A composed proposition of elementary propositions S 1 , . . . , S q combined using the boolean operators like AND, OR, NOT may be expressed with integer inequalities over corresponding binary variables δ i , i = 1, . . . q. The second step is to replace the products of linear functions and logic variables by a new auxiliary variable z = δa T x where a T is a constant vector.
The variable z is obtained by mixed linear inequalities evaluation. The third step is to describe the dynamical system, binary variables and auxiliary variables in a linear time invariant system. An hybrid system described in MLD form is represented by Equations (2.3-2.5).
x(k + 1) = A x(k) + B_1 u(k) + B_2 δ(k) + B_3 z(k)        (2.3)
y(k) = C x(k) + D_1 u(k) + D_2 δ(k) + D_3 z(k)        (2.4)
E_2 δ(k) + E_3 z(k) ≤ E_1 u(k) + E_4 x(k) + E_5        (2.5)
where x = [x T c x T l ] ∈ R nc × {0, 1} n l are the continuous and binary states, respectively, u = [u T C u T l ] ∈ R mc × {0, 1} m l are the inputs, y = [y T c y T l ] ∈ R pc × {0, 1} p l the outputs, and δ ∈ {0, 1} r l , z ∈ R rc , represent the binary and continuous auxiliary variables, respectively. The constraints over state, input, output, z and δ variables are included in (2.5).
Converting PWA into MLD Systems
In this subsection, two algorithms for converting PWA systems into MLD systems are given. The first case, which consists of several sub-affine systems with switching regions, is explained in detail. The second case, which deals with several sub-affine systems, each of them belonging to a region described by linear inequalities, is a variation of the first case. Each case is applied to an example in order to show the validity of the algorithm.
A. Case I
The PWA system is represented by the following equations:
x(k + 1) = A_i x(k) + B_i u(k) + f_i
y(k) = C_i x(k) + D_i u(k) + g_i
S_ij = { (x, u) | k_1ij^T x + k_2ij^T u + k_3ij ≤ 0 }        (2.6)
where i ∈ I = {1, . . . , n}. The case with jumps can be included in this representation by considering each jump as a discrete affine behavior valid during only one sample time. The switching region S_ij is a convex polytope whose volume, or hypervolume, can be infinite, and the subscripts denote the switching from mode i to mode j. For this purpose we introduce a binary variable δ_i for each index of the set I and a binary variable δ_ij for each switching region S_ij. In order to gain insight into the following equations, we consider a hybrid system whose partition and corresponding automaton are depicted in Figure 2.1. Introductory material on hybrid automata can be found in [1] and [10]. The δ_ij variables are not dynamical and, when the elements k in S_ij are vectors, the binary variable can be evaluated by the next mixed integer inequality
(δ ij = 1) ⇔ (k T 1ij x + k T 2ij u + k 3ij ≤ 0) (2.7)
which is equivalent to:
k_1ij x + k_2ij u + k_3ij − M (1 − δ_ij) ≤ 0
−k_1ij x − k_2ij u − k_3ij + ε + (m − ε) δ_ij ≤ 0        (2.8)
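The role of the big-M reformulation (2.8) can be checked numerically on a scalar example; the bounds M, m and the tolerance ε used below are illustrative assumptions (in practice they come from bounds on x and u).

```python
def indicator_constraints(k1, k2, k3, M, m, eps):
    """Big-M encoding (2.8) of (delta = 1) <=> (k1*x + k2*u + k3 <= 0), scalar case.

    Returns a function that checks both inequality rows for given (x, u, delta).
    """
    def feasible(x, u, delta):
        g = k1 * x + k2 * u + k3
        return (g - M * (1 - delta) <= 0) and (-g + eps + (m - eps) * delta <= 0)
    return feasible

# Illustrative scalar example: region "x <= 2", so g = x - 2, with |g| bounded by 10
check = indicator_constraints(k1=1.0, k2=0.0, k3=-2.0, M=10.0, m=-10.0, eps=1e-6)
for x, delta in [(1.0, 1), (1.0, 0), (3.0, 0), (3.0, 1)]:
    print(x, delta, check(x, 0.0, delta))   # only the consistent (x, delta) pairs are feasible
```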
When the elements k in S ij are matrices, it is necessary to introduce some auxiliary binary variables for each row describing a sub-constraint in S ij in the next form:
δ k = 1(⇔ k 1,k x + k 2,k u + k 3,k ≤ 0) δ ij = ∧ k δ k (2.9)
which is equivalent to:
k_1ij,k x + k_2ij,k u + k_3ij,k − M (1 − δ_ij,k) ≤ 0
−k_1ij,k x − k_2ij,k u − k_3ij,k + ε + (m − ε) δ_ij,k ≤ 0
δ_ij − δ_ij,k ≤ 0
Σ_k (δ_ij,k − 1) − δ_ij ≤ −1        (2.10)
The binary vector x δ = [δ 1 δ 2 . . . δ n ] T is such that its dynamics is given by:
x δi (k + 1) = (x δi (k) ∧ ∧ j̸ =i ¬δij) ∨ ∨ j̸ =i (x δj (k) ∧ δji) (2.11)
where k is an index of time, and ∧, ∨, and ¬, are standard for the logical operations AND, OR, NOT, respectively. This equation can be explained as follows: The mode of the system in the next time is i if the current mode is mode i and any switching region is enabled in this time, or, the current mode of the system is j different to i and a switching region that enables the system to go into mode i is enabled. Considering that the PWA system is well posed, i.e. for a given initial state [x T i T ] T 0 and a given input u 0,τ there exists only one possible trajectory [x T i T ] T 0,x . That is equivalent to the following conditions:
Σ_{i∈I} x_δi = 1,   Π_{i∈I} x_δi = 0        (2.12)
The dynamical equations for x δ vector are equivalent to the next integer inequalities:
x_δj(k) + δ_ji − x_δi(k + 1) ≤ 1,   ∀ i, j ∈ I, i ≠ j
x_δi(k) − Σ_{j≠i} δ_ij − x_δi(k + 1) ≤ 0,   ∀ i, j ∈ I, i ≠ j
−x_δi(k) − Σ_{j≠i} δ_ji + x_δi(k + 1) ≤ 0,   ∀ i, j ∈ I, i ≠ j        (2.13)
The first inequality states that the next mode of the system should be mode i if the current mode is j different to i and a switching region for going from mode j to mode i is enabled. The second inequality means that the next mode of the system should be mode i if the current mode is i and any switching region for going from mode i into mode j different to i is enabled. And the third equation states that the system cannot be in mode i in the next time if the current mode of the system is not mode i and any switching region for going from mode i, (j different to i), into mode i is enabled.
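A literal implementation of the logical update rule (2.11) is straightforward once the δ_ij flags of the current step have been evaluated from the switching regions; the sketch below is purely illustrative.

```python
def update_modes(x_delta, delta):
    """Boolean mode update of Eq. (2.11).

    x_delta: dict {i: bool}, current one-hot mode vector.
    delta:   dict {(i, j): bool}, True when switching region S_ij is enabled.
    """
    modes = list(x_delta)
    new = {}
    for i in modes:
        stay = x_delta[i] and not any(delta.get((i, j), False) for j in modes if j != i)
        enter = any(x_delta[j] and delta.get((j, i), False) for j in modes if j != i)
        new[i] = stay or enter
    return new

# Illustrative call: two modes, currently in mode 1, with S_{1,2} enabled
print(update_modes({1: True, 2: False}, {(1, 2): True, (2, 1): False}))
# -> {1: False, 2: True}
```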
This form for finding x δ (k + 1) causes a problem in the final model because it cannot be represented by a linear equation in function of x, u, δ and Z. For this reason, x δ (k + 1) is aggregated to the δ general vector of binary variables, and finally assigned directly to x δ (k + 1). The dynamics and outputs of the system can be represented by the next equations:
{ x(k + 1) = Ax(k) + Bu(k) + ∑ i∈I (A i x(k) + B i u(k) + f i ) × x δi (k) y(k) = Cx(k) + Du(k) + ∑ i∈I (C i x(k) + D i u(k) + g i ) × x δi (k) (2.14)
If we introduce some auxiliary variables:
{ Z 1i (k) = (A i x(k) + B i u(k) + f i ) × x δi (k) Z 2i (k) = (C i x(k) + D i u(k) + g i ) × x δi (k) (2.15)
which are equivalent to:
Z_1i ≤ M x_δi(k)
−Z_1i ≤ −m x_δi(k)
Z_1i ≤ A_i x(k) + B_i u(k) + f_i − m (1 − x_δi(k))
−Z_1i ≤ −A_i x(k) − B_i u(k) − f_i + M (1 − x_δi(k))        (2.16)

Z_2i ≤ M x_δi(k)
−Z_2i ≤ −m x_δi(k)
Z_2i ≤ C_i x(k) + D_i u(k) + g_i − m (1 − x_δi(k))
−Z_2i ≤ −C_i x(k) − D_i u(k) − g_i + M (1 − x_δi(k))        (2.17)
where M and m are vectors representing the maximum and minimum values, respectively, of the variables Z, these values can be arbitrary large. Using the previous equivalences, the PWA system ( 2.2) can be rewritten in an equivalent MLD model as follows:
x(k + 1) = A rr x(k) + A br x δ (k) + B 1r u(k) + B 2r δ + B 3r ∑ i∈I Z 1i (k) x δ (k + 1) = A rb x(k) + A bb x δ (k) + B 1b u(k) + B 2b δ + B 3b ∑ i∈I Z 1i (k) y r (k) = C rr x(k) + C br x δ (k) + D 1r u(k) + D 2r δ + D 3r ∑ i∈I Z 2i (k) y δ (k) = C rb x(k) + C bb x δ (k) + D 1b u(k) + D 2b δ + D 3b ∑ i∈I Z 2i (k) (2.18) s.t. E 2 x δ (k + 1) δ ij δ k + E 3 Z(k) ≤ E 4 x(k) δ ij δ k + E 1 u(k) + E 5 (2.19)
Using this algorithm, most part of the matrices are zero, because x and y are defined by Z, and x δ is defined by δ. This situation can be avoided by defining the next matrices at the beginning of the procedure:
A = 1 n (A 1 + . . . + A n ), A i = A i -A, ∀i ∈ I B = 1 n (B 1 + . . . + B n ), B i = B i -B, ∀i ∈ I C = 1 n (C 1 + . . . + C n ), C i = C i -C, ∀i ∈ I D = 1 n (D 1 + . . . + D n ), D i = D i -D, ∀i ∈ I (2.20)
Finally, the equality matrices in (2.18) and (2.19) can be chosen as follows:
A_rr = A,  A_br = 0_{nc×n},  B_1r = B,  B_2r = 0_{nc×(n+m+tk)},
B_3r = [ I_{nc×nc}  0_{nc×pc}  I_{nc×nc}  0_{nc×pc}  . . .  I_{nc×nc}  0_{nc×pc} ],
A_rb = 0_{n×nc},  A_bb = 0_{n×n},  B_1b = 0_{n×mc},
B_2b = [ I_{n×n}  0_{n×m}  0_{n×tk} ],  B_3b = 0        (2.21)

C_rr = C,  C_br = 0_{pc×n},  D_1r = D,  D_2r = 0_{pc×(n+m+tk)},
D_3r = [ 0_{pc×nc}  I_{pc×pc}  0_{pc×nc}  I_{pc×pc}  . . .  0_{pc×nc}  I_{pc×pc} ],
C_rb = 0_{n×nc},  C_bb = 0_{n×n},  D_1b = 0_{n×mc},
D_2b = [ I_{n×n}  0_{n×m}  0_{n×tk} ],  D_3b = 0        (2.22)
where n C is the number of continuous state variables, m C the number of continuous input variables, p C the number of continuous output variables, n the number of affine sub-systems, m the number of switching regions and tk the number of auxiliary binary variables. The algorithm for converting a PWA system in the form of (2.1) into its equivalent MLD system can be summarized as follows:
B. Algorithm 1
C. Example 1
Consider the system whose behavior is defined by the following PWA model:
x(k + 1) = A_i x(k),   i ∈ {1, 2}
S_{1,2} = { (x_1, x_2) | (x_1 ≤ 1.3 x_2) ∧ (0.7 x_2 ≤ x_1) ∧ (x_2 > 0) }
S_{2,1} = { (x_1, x_2) | (x_1 ≤ 0.7 x_2) ∧ (1.3 x_2 ≤ x_1) ∧ (x_2 < 0) }
where
A_1 = [ 0.9802  0.0987 ; −0.1974  0.9802 ],   A_2 = [ 0.9876  −0.0989 ; 0.0495  0.9876 ]
The behavior of the system is presented in Figure 2.2. The initial points are (x 10 , x 20 ) = (1, 0.8). We can see that the system switches between the two behaviors, from A 1 to A 2 in the switching region S 1,2 , and from A 2 to A 1 in the switching region S 2,1 , alternatively. The switched system is stable.
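Since the matrices and switching regions of Example 1 are fully specified, the trajectory of Figure 2.2 can be reproduced with a few lines of simulation; the sketch below assumes the initial mode is mode 1 and applies the region tests of (2.6) literally.

```python
import numpy as np

A = {1: np.array([[0.9802, 0.0987], [-0.1974, 0.9802]]),
     2: np.array([[0.9876, -0.0989], [0.0495, 0.9876]])}

def in_S12(x):  # switching region S_{1,2}
    return (x[0] <= 1.3 * x[1]) and (0.7 * x[1] <= x[0]) and (x[1] > 0)

def in_S21(x):  # switching region S_{2,1}
    return (x[0] <= 0.7 * x[1]) and (1.3 * x[1] <= x[0]) and (x[1] < 0)

x = np.array([1.0, 0.8])   # initial point of Example 1
mode = 1                   # assumed initial mode
traj = [x.copy()]
for k in range(300):
    if mode == 1 and in_S12(x):
        mode = 2
    elif mode == 2 and in_S21(x):
        mode = 1
    x = A[mode] @ x
    traj.append(x.copy())

print(np.array(traj)[-1])  # per the text, the switched trajectory converges (stable)
```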
D. Case 2
Consider now the system whose behavior is defined by the following PWA model:
x(k + 1) = A_i x(k) + b_i + B_i u(k),   i ∈ I, x(k) ∈ X_i
y(k) = C_i x(k) + d_i + D_i u(k),   i ∈ I, x(k) ∈ X_i        (2.23)
with conditions X_i ∩ X_{j≠i} = ∅ ∀ i, j ∈ I, ∪_{i∈I} X_i = X, and X_i = { (x, u) | k_1i x + k_2i u + k_3i ≤ 0 },
where X is the admissible space for the PWA system. The regions can be represented using the appropriate δ_i variables instead of the x_δi(k) variables in the definition of Z in (2.16) and (2.17). However, note that the conditions X_i ∩ X_{j≠i} = ∅ ∀ i, j ∈ I and ∪_{i∈I} X_i = X require a careful definition of the sub-spaces X_i in order to avoid a violation of these conditions on their bounds. On the other hand, the MLD representation uses non-strict inequalities and the ε value in (2.8) and (2.9) should be chosen appropriately. Another way to overcome this situation and to ensure an appropriate representation is the use of the following conditions on the bounds of the sub-spaces X_i:
δ ij = δ i ⊗ δ j
which is equivalent to:
{ δ i + δ j -1 ≤ 0 1 -δ i -δ j ≤ 0 or more generally ∑ i∈I δ i -1 ≤ 0 1 - ∑ i∈I δ i ≤ 0 (2.24)
We now modify Equations (2.8), (2.10), (2.16), (2.17), (2.21), and (2.22) as follows:
k_1i x + k_2i u + k_3i − M (1 − δ_i) ≤ 0
−k_1i x − k_2i u − k_3i + ε + (m − ε) δ_i ≤ 0        (2.25)

k_1i,k x + k_2i,k u + k_3i,k − M (1 − δ_i,k) ≤ 0
−k_1i,k x − k_2i,k u − k_3i,k + ε + (m − ε) δ_i,k ≤ 0
δ_i − δ_i,k ≤ 0
Σ_k (δ_i,k − 1) − δ_i ≤ −1        (2.26)
The auxiliary variables Z 1i become:
Z 1i ≤ M δ i (k) -Z 1i ≤ -mδ i (k) Z 1i ≤ A i x(k) + B i u(k) + f i -m(1 -δ i (k)) -Z 1i ≤ -A i x(k) -B i u(k) -f i + M (1 -δ i (k)) (2.27)
where the matrices A i and B i are those previously defined in Equation (2.20).The auxiliary variable Z 2i is now modified according to the following equations:
Z 2i ≤ M δ i (k) -Z 2i ≤ -mδ i (k) Z 2i ≤ C i x(k) + D i u(k) + g i -m(1 -δ i (k)) -Z 2i ≤ -C i x(k) -D i u(k) -g i + M (1 -δ i (k)) (2.28)
where the matrices C i and D i are those that have been defined in Equation (2.20). Finally the matrices from Equation (2.18) can be chosen as follows:
A rr = A, A br = 0 nc×n , B 1rr = B, B 2rb = 0 nc×(n+tk) , B 3rr = [I nc×nc 0 nc×pc I nc×nc 0 nc×pc . . . I nc×nc 0 nc×pc ] nc×n×(nc+pc) C rr = C, C br = 0 pc×n , D 1rr = D, D 1rb = [ ], D 2rb = 0 pc×(n+tk) , D 3rr = [0 pc×nc I pc×pc 0 pc×nc I pc×pc . . . 0 pc×nc I pc×pc ] pc×n×(nc+pc) (2.29)
We give now an algorithm that converts a PWA system in the form of (2.23) into its equivalent MLD system.
E. Algorithm 2
1. Compute matrices A, B, C, D and A i , B i , C i and D i using (2.20).
2. Initialize the matrices $E_1, E_2, E_3, E_4, E_5$.
3. For i = 1 to n include the inequalities using (2.25) or (2.26) that represent the behavior on the n affine regions of the PWA system.
4. For all affine regions include the inequalities in (2.24).
5. For i = 1 to n generate the nc-dimensional Z 1i vector and p c -dimensional Z 2i vector of auxiliary variables Z.
6. For each Z 1i vector introduce the inequalities defined in (2.27). M and m are n c -dimensional vectors of maximum and minimum values of x, respectively.
7. For each Z 2i vector introduce the inequalities defined in (2.28). M and m are p c -dimensional vectors of maximum and minimum values of x, respectively (This completes the inequality matrices).
8. Compute the matrices defined in (2.29) where the binary state variables are removed. 9. End.
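As a concrete illustration of step 3, the following Python sketch (with a hypothetical scalar region not taken from the text) checks numerically that big-M inequalities of the form (2.25) force $\delta_i = 1$ exactly when $k_{1i}x + k_{2i}u + k_{3i} \le 0$, provided $M$ and $m$ bound that expression over the admissible box.

```python
import itertools
import numpy as np

# Hypothetical region X_i = {(x, u) | x + u - 1 <= 0} over the box [-2, 2]^2.
k1, k2, k3 = 1.0, 1.0, -1.0
M, m, eps = 3.0, -5.0, 1e-6       # max/min of k1*x + k2*u + k3 over the box

def bigM_feasible(x, u, delta):
    expr = k1 * x + k2 * u + k3
    ineq_a = expr - M * (1 - delta) <= 0            # first row of (2.25)
    ineq_b = -expr + eps + (m - eps) * delta <= 0   # second row of (2.25)
    return ineq_a and ineq_b

grid = np.linspace(-2.0, 2.0, 9)
for x, u in itertools.product(grid, grid):
    feas = [d for d in (0, 1) if bigM_feasible(x, u, d)]
    in_region = k1 * x + k2 * u + k3 <= 0
    assert feas == ([1] if in_region else [0])
print("big-M encoding reproduces the region indicator on the sample grid")
```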
F. Example 2
Consider the system whose behavior is defined by the following PWA model:
$$x(k+1) = A_i x(k), \quad i \in \{1, 2\}, \qquad i = 1 \text{ if } x_1 x_2 \ge 0, \quad i = 2 \text{ if } x_1 x_2 < 0$$
where $A_1$ and $A_2$ are the two sub-system matrices. The behavior of the system is presented in Figure 2.4. The PWA system with linear constraints has 4 sub-affine systems. Algorithm 2 produces an MLD system with 12 binary variables (4 variables for the affine sub-systems and 8 auxiliary variables), 16 auxiliary variables $Z$ and 94 constraints. The behavior of the equivalent MLD system is shown in Figure 2.5. We can notice that the behavior of the MLD system is exactly the same as that of the original PWA model.
Stability of Switched Systems
A polynomial approach to deal with the stability analysis of switched non-linear systems under arbitrary switching using dissipation inequalities is presented. It is shown that a representation of the original switched problem as a continuous polynomial system allows us to use dissipation inequalities for the stability analysis of polynomial systems. With this method, and from a theoretical point of view, we provide an alternative way to search for a common Lyapunov function for switched non-linear systems. We deal with the stability analysis of switched non-linear systems, i.e., continuous systems with switching signals, under arbitrary switching. Most of the efforts in switched systems research have typically been focused on the analysis of dynamical behavior with respect to switching signals. Several methods have been proposed for stability analysis (see [Liberzon, Switching in Systems and Control], [19], and references therein), but most of them have been focused on switched linear systems. Stability analysis under arbitrary switching is a fundamental problem in the analysis and design of switched systems. For this problem, it is necessary that all the subsystems be asymptotically stable. However, in general, this condition is not sufficient to guarantee stability of the switched system under arbitrary switching. It is well known that if there exists a common Lyapunov function for all the subsystems, then the stability of the switched system is guaranteed under arbitrary switching. Previous attempts at general constructions of a common Lyapunov function for switched non-linear systems have been presented in [20], [21] using converse Lyapunov theorems. Also in [22], a construction of a common Lyapunov function is presented for a particular case where the individual systems are handled sequentially rather than simultaneously, for a family of pairwise commuting systems. These methodologies are presented in a very general framework, and even though they are mathematically sound, they are too restrictive from a computational point of view, mainly because it is usually hard to check the set of necessary conditions for a common function over all the subsystems (such a function might not exist). Also, these constructions are usually iterative, which involves running backwards in time for all possible switching signals, and becomes prohibitive when the number of modes increases.
The main contribution of this topic of stability of switched systems is twofold. First, we present a reformulation of the switched system as an ordinary differential equation on a constraint manifold. This representation opens several possibilities of analysis and design of switched systems in a consistent way, and also with numerical efficiency [C.39], [C.38], which is possible thanks to some tools developed in the last decade for polynomial differential-algebraic equations analysis [8,10]. The second contribution is an efficient numerical method to search for a common Lyapunov function for switched systems using results of stability analysis of polynomial systems based on dissipativity theory [23], [C.39]. We propose a methodology to construct common Lyapunov functions that provides a less conservative test for proving stability under arbitrary switching. It has been mentioned in [26] that the sum of squares decomposition, presented only for switched polynomial systems, can sometimes be made for a system with a non-polynomial vector fields. However, those cases are restricted to subsystems that preserve the same dimension after a recasting process.
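For the switched linear discrete-time case, the search for a common quadratic Lyapunov function reduces to an LMI feasibility problem. The sketch below applies this classical test to the two matrices of Example 1; it assumes that cvxpy with an SDP-capable solver (SCS) is available, and it illustrates only the standard LMI test, not the dissipativity-based method described above. If the problem is reported infeasible, no common quadratic Lyapunov function exists for this pair, which motivates less conservative approaches.

```python
import numpy as np
import cvxpy as cp

A1 = np.array([[0.9802, 0.0987], [-0.1974, 0.9802]])
A2 = np.array([[0.9876, -0.0989], [0.0495, 0.9876]])
n = 2

P = cp.Variable((n, n), symmetric=True)
# Slack variables keep the PSD constraints on explicitly symmetric matrices.
Q1 = cp.Variable((n, n), symmetric=True)
Q2 = cp.Variable((n, n), symmetric=True)

constraints = [
    P >> np.eye(n),                     # P positive definite (normalized)
    A1.T @ P @ A1 - P == -Q1,           # decrease condition for mode 1
    A2.T @ P @ A2 - P == -Q2,           # decrease condition for mode 2
    Q1 >> 1e-6 * np.eye(n),
    Q2 >> 1e-6 * np.eye(n),
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)
if prob.status == cp.OPTIMAL:
    print("common Lyapunov matrix P =\n", P.value)
```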
Optimal Control of Switched Systems
Switched Linear Systems
A polynomial approach to solve the optimal control problem of switched systems is presented. It is shown that the representation of the original switched problem as a continuous polynomial system allows us to use the method of moments. With this method, and from a theoretical point of view, we provide necessary and sufficient conditions for the existence of a minimizer by using particular features of the minimizer of its relaxed, convex formulation. Even in the absence of classical minimizers of the switched system, the solution of its relaxed formulation provides minimizers.
We consider the optimal control problem of switched systems, i.e., continuous systems with switching signals. Recent efforts in switched systems research have typically been focused on the analysis of dynamic behaviors, such as stability, controllability and observability (e.g., [19], [Liberzon, Switching in Systems and Control]). Although there are several studies facing the problem of optimal control of switched systems, both from a theoretical and from a computational point of view [37], [36], [27], [39], there are still some problems not tackled, especially in issues where the switching mechanism is a design variable. There, we see how these difficulties arise, and how tools from non-smooth calculus and optimal control can be combined to solve optimal control problems. Previously, the approach based on convex analysis has been treated in [36], and further developed in [27], considering an optimal control problem for a switched system; these approaches do not take into account assumptions about the number of switches nor about the mode sequence, because they are given by the solution of the problem. The authors use a switched system that is embedded into a larger family of systems, and the optimal control problem is formulated for this family. When the necessary conditions indicate a bang-bang type of solution, they obtain a solution to the original problem. However, when a bang-bang type solution does not exist, the solution to the embedded optimal control problem can be approximated by the trajectory of the switched system generated by an appropriate switching control. On the other hand, in [36] and [34] the authors determine the appropriate control law by finding the singular trajectory along some time interval with non-null measure.
Switched Nonlinear Systems
The nonlinear, non-convex form of the control variable prevents us from using the Hamilton equations of the maximum principle and nonlinear mathematical programming techniques directly. Both approaches would entail severe difficulties, either in the integration of the Hamilton equations or in the search method of any numerical optimization algorithm. Consequently, we propose to convexify the control variable by applying the method of moments to the polynomial expression in order to deal with this kind of problem. We present a method for solving the optimal control problem of an autonomous switched system based on the method of moments developed for optimal control, and in [28], [29], [30] and [32] for global optimization. This method works properly when the control variable (i.e., the switching signal) can be expressed as a polynomial. The essence of the approach is the transformation of a nonlinear, non-convex optimal control problem (i.e., the switched system) into an equivalent optimal control problem with linear and convex structure, which allows us to obtain an equivalent convex formulation more appropriate to be solved by high-performance numerical computing. To this end, it is first necessary to transform the original switched system into a continuous non-switched system for which the theory of moments is able to work; namely, we associate with a given controllable switched system a controllable continuous non-switched polynomial system. Consequently, we propose to convexify the control variables by means of the method of moments, obtaining semidefinite programs. The paper dealing with this approach is given in Appendix 2, paper [J.5].
Chapter 3
Supervisory Control of Discrete-Event Systems
Multi-Agent Based Supervisory Control
Supervisory control, initiated by Ramadge and Wonham [Ramadge, Supervisory control of a class of discrete-event processes], provides a systematic approach for the control of a discrete-event system (DES) plant. The DES plant is modeled by a finite-state automaton [Hopcroft, Introduction to Automata Theory, Languages, and Computation], [43]:
Definition 1 (Finite-state automaton). A finite-state automaton is defined as a 5-tuple
G = (Q, Σ, δ, q_0, Q_m)
where
• Q is the finite set of states,
• Σ is the finite set of events,
• δ : Q × Σ → Q is the partial transition function,
• q_0 ∈ Q is the initial state,
• Q_m ⊆ Q is the set of marked states (final states).
Let Σ * be the set of all finite strings of elements in Σ including the empty string ε. The transition function δ can be generalized to δ : Σ * × Q → Q in the following recursive manner:
$$\delta(\varepsilon, q) = q, \qquad \delta(\omega\sigma, q) = \delta(\sigma, \delta(\omega, q)) \quad \text{for } \omega \in \Sigma^*,\ \sigma \in \Sigma.$$
The notation δ(σ, q)! for any σ ∈ Σ * and q ∈ Q denotes that δ(σ, q) is defined. Let L(G) ⊆ Σ * be the language generated by G, that is,
L(G) = {σ ∈ Σ * |δ(σ, q 0 )!}
Let K ⊆ Σ * be a language. The set of all prefixes of strings in K is denoted by pr(K) with
pr(K) = {σ ∈ Σ * |∃ t ∈ Σ * ; σt ∈ K}. A language K is said to be prefix closed if K = pr(K).
The event set Σ is decomposed into two subsets Σ c and Σ uc of controllable and uncontrollable events, respectively, where Σ c ∩ Σ uc = ∅. A controller, called a supervisor, controls the plant by dynamically disabling some of the controllable events.
A sequence $\sigma_1\sigma_2\ldots\sigma_n \in \Sigma^*$ is called a trace, or a word in language terms. We call a valid trace a path from the initial state to a marked state ($\delta(\omega, q_0) = q_m$ where $\omega \in \Sigma^*$ and $q_m \in Q_m$).
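These definitions can be made concrete with a few lines of Python; the automaton below is a hypothetical two-state example, and the language is enumerated only up to a bounded word length.

```python
from itertools import product

# A small hypothetical automaton G = (Q, Sigma, delta, q0, Qm).
Q = {"q0", "q1"}
Sigma = {"a", "b"}
delta = {("q0", "a"): "q1", ("q1", "b"): "q0"}   # partial transition function
q0, Qm = "q0", {"q0"}

def run(word, state=q0):
    """Extended transition function; returns None when delta(word, q0) is undefined."""
    for sigma in word:
        if (state, sigma) not in delta:
            return None
        state = delta[(state, sigma)]
    return state

def language(max_len):
    """L(G) restricted to words of length <= max_len (prefix-closed by construction)."""
    return {w for n in range(max_len + 1)
              for w in product(sorted(Sigma), repeat=n) if run(w) is not None}

def valid_traces(max_len):
    """Words that lead from q0 to a marked state (valid traces)."""
    return {w for w in language(max_len) if run(w) in Qm}

print(sorted(language(4)))      # (), ('a',), ('a','b'), ('a','b','a'), ('a','b','a','b')
print(sorted(valid_traces(4)))  # (), ('a','b'), ('a','b','a','b')
```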
In this section we will focus on the multi-agent based supervisory control introduced by Hubbard and Caines [Hubbard, Initial investigations on hierarchical supervisory control of multi-agent systems], and on the modified approach proposed by Takai and Ushio [Takai, Supervisory control of a class of concurrent discrete-event systems]. The two approaches have been applied to the supervisory control of the EMN experimental manufacturing cell. This cell is composed of two robotized workstations connected to a central conveyor belt. Three new semi-automated workstations have then been added in order to increase the flexibility of the cell. Indeed, each semi-automated workstation can perform either manual or robotized tasks. These two aspects correspond to the two different approaches of multi-agent product of subsystems for supervisory control purposes. The results can be found in [C.25].
Switched Discrete-Event Systems
The notion of switched discrete-event systems corresponds to a class of DES where each automaton is the composition of two basic automata, but with different composition operators. A switching occurs when there is a change of composition operator, while keeping the same two basic automata. A mode behavior, or mode for short, is defined to be the DES behavior for a given composition operator. Composition operators are supposed to change more than once, so that each mode is visited more than once. This new class of DES includes the DES considered in the context of fault diagnosis, where different modes such as, e.g., normal, degenerated, and emergency modes can be found. The studied situations are the ones where the DES switches between different normal modes, and not necessarily the degenerated and emergency ones.
The most common composition operators used in supervisory control theory are the product and the parallel composition [43], [Wonham, Notes on Discrete Event Systems]. However, many different types of composition operators have been defined, e.g., the prioritized synchronous composition [49] and the biased synchronous composition [Lafortune, The infimal closed controllable superlanguage and its application to supervisory control]; see [Wenck, On composition oriented perspective on controllability of large DES] for a review of most of the composition operators. The multi-agent composition operator [Romanovski, On the supervisory control of multi-agent product systems], [Romanovski, Multi-agent product system: Controllability and non-blocking properties] is another kind of operator, which differs from the synchronous product in the aspects of simultaneity and synchronization.
This new class of DES especially addresses systems that can switch from a given normal mode to another normal mode.
We give here below some examples of switched DES:
• Manufacturing systems where the operating modes are changing (e.g. from normal mode to degenerated mode)
• Discrete event systems after an emergency signal (from normal to safety mode)
• Complex systems changing from normal mode to recovery mode (or from safety mode to normal mode).
We can distinguish, as for switched continuous-time systems, the notion of autonomous switching, where no external action is performed, and the notion of controlled switching, where the switching is forced. The results of this section can be found in [Rakoto-Ravalontsalama, Supervisory control of switched discrete-event systems].
Switchable Languages of DES
The notion of switchable languages has been defined by Kumar, Takai, Fabian and Ushio in [Kumar et al. 2005]. It deals with switching supervisory control, where switching means switching between two specifications. In this paper, we first extend the notion of switchable languages to n languages (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The switching supervisory control strategy is based on the cost associated with each event, and it allows us to synthesize an optimal supervisory controller. Finally, the proposed methodology is applied to a simple example. We now give the main results of this paper. First, we define a triplet of switchable languages. Second, we derive a necessary and sufficient condition for the transitivity of switchable languages (n = 3). Third, we generalize this definition to an n-uplet of switchable languages, with n > 3. And fourth, we derive a necessary and sufficient condition for the transitivity of switchable languages for n > 3.
Triplet of Switchable Languages
We extend the notion of a pair of switchable languages, defined in [Kumar, Maximally Permissive Mutually and Globally Nonblocking Supervision with Application to Switching Control], to a triplet of switchable languages.
Definition 2 (Triplet of switchable languages). A triplet of languages
$(K_1, K_2, K_3)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, 2, 3\}$, is said to be a triplet of switchable languages if the languages are pairwise switchable, that is,
$$SW(K_1, K_2, K_3) := SW(K_i, K_j), \quad i \ne j, \quad i, j = \{1, 2, 3\}.$$
Another expression of the triplet of switchable languages is given by the following lemma.
Lemma 1 (Triplet of switchable languages). A triplet of languages
$(K_1, K_2, K_3)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, 2, 3\}$, is said to be a triplet of switchable languages if the following holds:
$$SW(K_1, K_2, K_3) = \{(H_1, H_2, H_3) \mid H_i \subseteq K_i \cap pr(H_j),\ i \ne j, \text{ and } H_i \text{ controllable}\}.$$
The following theorem gives a necessary and sufficient condition for the transitivity of switchable languages.
Theorem 1 (Transitivity of switchable languages, n = 3). Given 3 specifications $(K_1, K_2, K_3)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, 2, 3\}$, such that $SW(K_1, K_2)$ and $SW(K_2, K_3)$. The pair $(K_1, K_3)$ is then switchable, i.e. $SW(K_1, K_3)$, if and only if
1. $H_1 \cap pr(H_3) = H_1$, and
2. $H_3 \cap pr(H_1) = H_3$.
The proof can be found in [42].
N-uplet of Switchable Languages
We now extend the notion of switchable languages to an n-uplet of switchable languages, with n > 3.
Definition 3 (N-uplet of switchable languages, n > 3). An n-uplet of languages $(K_1, \ldots, K_n)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, \ldots, n\}$, $n > 2$, is said to be an n-uplet of switchable languages if the languages are pairwise switchable, that is,
$$SW(K_1, \ldots, K_n) := SW(K_i, K_j), \quad i \ne j, \quad i, j = \{1, \ldots, n\}, \quad n > 2.$$
As for the triplet of switchable languages, an alternative expression of the n-uplet of switchable languages is given by the following lemma.
Lemma 2 (N-uplet of switchable languages, n > 3). An n-uplet of languages $(K_1, \ldots, K_n)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, \ldots, n\}$, $n > 3$, is said to be an n-uplet of switchable languages if the following holds:
$$SW(K_1, \ldots, K_n) = \{(H_1, \ldots, H_n) \mid H_i \subseteq K_i \cap pr(H_j),\ i \ne j, \text{ and } H_i \text{ controllable}\}.$$
Transitivity of Switchable Languages (n > 3)
We are now able to derive the following theorem that gives a necessary and sufficient condition for the transitivity of n switchable languages.
Theorem 2 (Transitivity of n switchable languages, n > 3). Given n specifications $(K_1, \ldots, K_n)$, $K_i \subseteq L_m(G)$ with $H_i \subseteq K_i$, $i = \{1, \ldots, n\}$. Moreover, assume that each language $K_i$ is at least switchable with another language $K_j$, $i \ne j$. A pair of languages $(K_k, K_l)$ is switchable, i.e. $SW(K_k, K_l)$, if and only if
1. $H_k \cap pr(H_l) = H_k$, and
2. $H_l \cap pr(H_k) = H_l$.
The proof is similar to the proof of Theorem 6 and can be found in [42]. It is to be noted that the assumption that each of the n languages be at least switchable with another language is important, in order to derive the above result.
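On finite language fragments, the pairwise conditions of Theorems 1 and 2 can be checked directly. The following sketch is a simplified illustration with hypothetical languages given as sets of strings; the controllability requirement, which needs the plant model, is not checked here.

```python
def pr(K):
    """Prefix closure of a finite language given as a set of strings."""
    return {w[:i] for w in K for i in range(len(w) + 1)}

def switchable(Hk, Hl):
    """Pairwise switchability test: H_k = H_k & pr(H_l) and H_l = H_l & pr(H_k)."""
    return Hk == (Hk & pr(Hl)) and Hl == (Hl & pr(Hk))

# Hypothetical supervised behaviours (finite fragments).
H1 = {"", "a", "abc"}
H2 = {"", "ab", "abc"}
H3 = {"", "b"}

print(switchable(H1, H2))   # True: every word of each language is a prefix of the other
print(switchable(H1, H3))   # False: "b" is not a prefix of any word in H1
```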
Perspective 1: Control of Smart Grids
According to the US Department of Energy's Electricity Advisory Committee, "A Smart Grid brings the power of networked, interactive technologies into an electricity system, giving utilities and consumers unprecedented control over energy use, improving power grid operations and ultimately reducing costs to consumers."
The transformation from the traditional electric network, with centralized energy production, to a complex and interconnected network will lead to a smart grid. The five main triggers of the smart grid, according to a major industrial point of view, are 1) smart energy generation, 2) flexible distribution, 3) active energy efficiency, 4) electric vehicles, and 5) demand response. From a control point of view, a smart grid is a system of interconnected micro-grids. A micro-grid is a power distribution network where generators and users interact. Generator technologies include renewable energy sources such as wind turbines or photovoltaic cells. The objective of this project is to simulate and control a simplified model of a micro-grid that is part of a smart grid. After a literature review, a simplified model for control will be chosen, and different realistic scenarios will be tested in simulation with MATLAB.
Perspective 2: Simulation with Stochastic Petri Nets
The Air France CDG Airport Hub in Paris-Roissy is dealing daily with 40,000 transfer luggages and 30,000 local luggages (leaving from or arriving at CDG Airport). For this purpose Air France is exploiting the Sorting Infrastructure of Paris Aeroport, and has to propose a Logistical Scheme Allocation for each luggage in order to optimize the sorting and to minimize the number of failed luggages. By failed luggages, we mean a luggage that does not arrive in time for the assigned flight.
The KPI objective for 2017 is to have less than 20 failed luggages out of 1000 passengers.

I. INTRODUCTION

Mixed Logical Dynamical (MLD) models, introduced by Bemporad and Morari in [2], arise as a suitable representation for Hybrid Dynamical Systems (HDS), in particular for solving control-oriented problems. MLD models can be used for solving a model predictive control (MPC) problem of a particular class of HDS, and it is proved in [6] that MLD models are equivalent to PieceWise Affine models. In the paper by Heemels and coworkers, the equivalences among PieceWise Affine (PWA) systems, Mixed Logical Dynamical (MLD) systems, Linear Complementarity (LC) systems, Extended Linear Complementarity (ELC) systems and Max-Min-Plus-Scaling (MMPS) systems are proved; these relations are transcribed here in Fig. 1.
These equivalences are based on some propositions (see [6] for details). A more formal proof can be found in [3], where an efficient technique for obtaining a PWA representation of an MLD model is proposed.
The technique in [3] describes a methodology for obtaining, in an efficient form, a partition of the state-input space. The algorithm in [3] uses some tools from polytopes theory in order to avoid the enumeration of the all possible combinations of the integer variables contained in the MLD model. However, the technique does not describe the form to obtain a suitable choice of the PWA model, even though this part is introduced in the implementation provided by the author in [4]. The objective of this paper is to propose an algorithm of the suitable choice of the PWA description and use the PWA description for obtaining some analysis and control of Hybrid Dynamical Systems.
II. MLD SYSTEMS AND PWA SYSTEMS
A. Mixed and Logical Dynamical (MLD) Systems
The idea in the MLD framework is to represent logical propositions as equivalent integer expressions. The MLD form is obtained in three basic steps [5]. The first step is to associate a binary variable δ ∈ {0,1} with a proposition S, which may be true or false; δ is 1 if and only if proposition S is true. A proposition composed of elementary propositions S_1, …, S_q combined using the Boolean operators AND (∧), OR (∨), NOT (¬) may be expressed as integer inequalities over the corresponding binary variables δ_i, i = 1, …, q.
The second step is to replace the products of linear functions and logic variables by a new auxiliary variable z = δa T x where a T is a constant vector. The z value is obtained by mixed linear inequalities evaluation.
The third step is to describe the dynamical system, binary variables and auxiliary variables in a linear time invariant (LTI) system.
A hybrid system MLD described in general form is represented by (1).
$$\begin{aligned} x(k+1) &= Ax(k) + B_1 u(k) + B_2 \delta(k) + B_3 z(k) \\ y(k) &= Cx(k) + D_1 u(k) + D_2 \delta(k) + D_3 z(k) \\ E_2 \delta(k) + E_3 z(k) &\le E_1 u(k) + E_4 x(k) + E_5 \end{aligned} \tag{1}$$
where $x = [x_C^T\ x_l^T]^T \in \mathbb{R}^{n_c} \times \{0,1\}^{n_l}$ are the continuous and binary states, $u = [u_C^T\ u_l^T]^T \in \mathbb{R}^{m_c} \times \{0,1\}^{m_l}$ are the inputs, $y = [y_C^T\ y_l^T]^T \in \mathbb{R}^{p_c} \times \{0,1\}^{p_l}$ are the outputs, and $\delta \in \{0,1\}^{r_l}$, $z \in \mathbb{R}^{r_c}$ represent the binary and continuous auxiliary variables, respectively. The constraints over the state, input, output, $z$ and $\delta$ variables are included in the third relation of (1).
B. PieceWise Affine Systems
A particular class of hybrid dynamical systems is the system described as follows,
$$\begin{aligned} \dot{x}(t) &= a_i + A_i x(t) + B_i u(t) \\ y(t) &= c_i + C_i x(t) + D_i u(t) \end{aligned} \qquad [x^T\ u^T]^T \in X_i,\ i \in I,\ t \in \mathbb{R}_+ \tag{2}$$
where $I$ is a set of indexes, $X_i$ is a sub-space of the real space $\mathbb{R}^n$, and $\mathbb{R}_+$ is the set of positive real numbers including the zero element.
In addition to this equation it is necessary to define how the system switches among its several modes. This equation is affine in the state $x$, and the systems described in this form are called PieceWise Affine (PWA) systems. In the literature on hybrid dynamical systems, the systems described by the autonomous version of this representation are called switched systems.
If the system vanishes when $x$ approaches zero, i.e. $a_i$ and $b_i$ are zero, then the representation is called a PieceWise Linear (PWL) system.
The discrete-time version of this equation will be used in this work and can be described as follows,
$$\begin{aligned} x(k+1) &= b_i + A_i x(k) + B_i u(k) \\ y(k) &= d_i + C_i x(k) + D_i u(k) \end{aligned} \qquad [x^T\ u^T]^T \in X_i,\ i \in I,\ k \in \mathbb{Z}_+ \tag{3}$$
where $I$ is a set of indexes and $X_i$ is a sub-space of the real space $\mathbb{R}^n$.
III. MLD SYSTEMS INTO PWA SYSTEMS
The MLD framework is a powerful structure for representing hybrid systems in an integrated form. Although E 1 , E 2 , E 3 , E 4 and E 5 matrices are, in general, large matrices, they can be obtained automatically. An example is the HYSDEL compiler [10].
However, some analyses of the system with the MLD representation are computationally more expensive than with the tools developed for PWA representations. Exploiting the MLD and PWA equivalences, it is possible to obtain analysis and control of a system using these equivalent representations. Nevertheless, as underlined in [3], this procedure is more complex than the PWA into MLD conversion, and more assumptions are needed. To our knowledge, the only previous approach has been proposed by Bemporad [3]. We propose here a new approach for translating MLD systems into PWA systems.
The MLD structure can be rewritten as follows,
$$\begin{aligned} x(k+1) &= Ax(k) + B_{1c} u_c(k) + B_{1l} u_l(k) + B_2 \delta(k) + B_3 z(k) \\ y(k) &= Cx(k) + D_{1c} u_c(k) + D_{1l} u_l(k) + D_2 \delta(k) + D_3 z(k) \\ E_2 \delta(k) + E_3 z(k) &\le E_{1c} u_c(k) + E_{1l} u_l(k) + E_4 x(k) + E_5 \end{aligned} \tag{4}$$
Here, the binary inputs are distinguished from the continuous inputs, because they induce switching modes in the system, in general. Supposing that the system is well posed, z(k) has only one possible value for a given x(k) and u(k), and can be rewritten as:
$$z(k) = k_1 x(k) + k_2 u_c(k) + k_3, \qquad m\,[x^T\ u_c^T]^T \le b. \tag{5}$$
Replacing this value in the original equations the system can be represented as,
$$\begin{aligned} x(k+1) &= (A + B_3 k_1)\,x(k) + (B_{1c} + B_3 k_2)\,u_c(k) + B_3 k_3 \\ y(k) &= (C + D_3 k_1)\,x(k) + (D_{1c} + D_3 k_2)\,u_c(k) + D_3 k_3 \\ (-E_4 + E_3 k_1)\,x &+ (-E_{1c} + E_3 k_2)\,u_c \le E_5 - E_2 \delta - E_3 k_3 \end{aligned} \tag{6}$$
If an enumeration technique is used for generating all the feasible binary states of the vector $[u_l^T\ \delta^T]^T$, the first problem is to find a value of $[x^T\ u_c^T]^T$ that is feasible for the problem, which can be obtained by solving the linear programming feasibility problem
$$\text{find } X = [x^T\ u_c^T\ z^T]^T \quad \text{s.t.} \quad E_3 z - E_{1c} u_c - E_4 x \le E_5 - E_2 \delta + E_{1l} u_l. \tag{7}$$
The solution is a feasible value $[x^{*T}\ u^{*T}]^T$. The next problem is to find $k_1$, $k_2$ and $k_3$.
The inequalities can be rewritten as
$$E_3 z \le E_4 x + E_{1c} u_c + \bar{E}_5, \qquad \bar{E}_5 = E_{1l} u_l - E_2 \delta + E_5, \tag{8}$$
where $\bar{E}_5$ includes every constant in the problem, i.e. $u_l$ and $\delta$. On the other hand, the $E_3$ matrix reflects the interaction among the $z$ variables, and we can write
$$F z \le \bar{k}_1 x + \bar{k}_2 u_c + \bar{k}_3. \tag{9}$$
The matrix $F$ represents the interaction among the $z$ variables; if the system is well posed, $F^{-1}$ should exist.
With this last equation, $\bar{k}_3$ is found by solving the linear programming problem
$$\max \bar{k}_3 \quad \text{s.t.} \quad E_3 \bar{k}_3 \le E_5 - E_2 \delta^* + E_{1l} u_l^*. \tag{10}$$
The solution to this problem is $\bar{k}_3$; in this case we assume that all components of $\bar{E}_5$ are the maximum and minimum values of $z$, so that the only solution to the problem is $\bar{k}_3$. With $\bar{k}_3$ we can obtain the other matrices.
For obtaining $\bar{k}_1$ it is necessary to solve $n_x$ (the length of the state vector) linear programming problems
$$\max \bar{k}_i \quad \text{s.t.} \quad E_3 \bar{k}_i \le E_5 - E_2 \delta^* + E_{1l} u_l^* + E_{4i}, \tag{11}$$
where $E_{4i}$ represents column $i$ of the $E_4$ matrix and $\bar{k}_{1i} = \bar{k}_i - \bar{k}_3$ is column $i$ of the matrix $\bar{k}_1$.
For obtaining $\bar{k}_2$ it is necessary to solve $n_u$ (the length of the continuous input vector) linear programming problems
$$\max \bar{k}_i \quad \text{s.t.} \quad E_3 \bar{k}_i \le E_5 - E_2 \delta^* + E_{1l} u_l^* + E_{1ci}, \tag{12}$$
where $E_{1ci}$ represents column $i$ of the $E_{1c}$ matrix and $\bar{k}_{2i} = \bar{k}_i - \bar{k}_3$ is column $i$ of the matrix $\bar{k}_2$.
The matrix $F$ is found by solving $n_z$ (the length of the $z$ vector) linear programming problems
$$\max \bar{k}_i \quad \text{s.t.} \quad E_3 \bar{k}_i \le E_5 - E_2 \delta^* + E_{1l} u_l^* + E_{3i}, \tag{13}$$
where $E_{3i}$ represents column $i$ of the $E_3$ matrix and $F_i = \bar{k}_i - \bar{k}_3$ is column $i$ of the matrix $F$.
Finally, $k_1$, $k_2$, and $k_3$ can be computed as
$$k_1 = F^{-1}\bar{k}_1, \qquad k_2 = F^{-1}\bar{k}_2, \qquad k_3 = F^{-1}\bar{k}_3. \tag{14}$$
With these equations, the algorithm for translating the MLD model into a PWA model is given as follows.

Algorithm 1
1. Find a feasible point for the binary vector, composed of the binary inputs and the binary auxiliary variables.
2. Compute $\bar{k}_3$ using Eq. (10).
3. Compute $\bar{k}_1$, $\bar{k}_2$ and $F$ using Eqs. (11), (12) and (13).
4. Compute $k_1$, $k_2$, and $k_3$ using Eq. (14).
5. Using Eq. (6), compute $A_i$, $B_i$, $f_i$, $C_i$, $D_i$ and $g_i$ and the valid region for this representation.
6. If there exists another feasible point, go to step 1.
7. End.
Some gains in algorithm performance can be obtained if the vector $z$ is evaluated after step 1, using a linear program to find the maximum and the minimum of $z$; if the $z_{min}$ and $z_{max}$ solutions are the same, it is not necessary to carry out steps 3 and 4, and $z = z_{min} = z_{max}$ can be assigned directly.
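The following Python sketch illustrates step 1 and the z-bounding shortcut mentioned above on a hypothetical one-dimensional MLD constraint set (encoding z = δ·x with |x| ≤ 3 and no continuous input); it is not the implementation used for the benchmarks reported below.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MLD constraint rows E2*delta + E3*z <= E4*x + E5 (hypothetical, no input u).
E2 = np.array([-3, -3,  3,  3,  0,  0], dtype=float)
E3 = np.array([ 1, -1,  1, -1,  0,  0], dtype=float)
E4 = np.array([ 0,  0,  1, -1, -1,  1], dtype=float)
E5 = np.array([ 0,  0,  3,  3,  3,  3], dtype=float)

A_ub = np.column_stack([-E4, E3])          # decision vector is [x, z]
bounds = [(None, None), (None, None)]

def solve(delta, c):
    b_ub = E5 - E2 * delta
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

for delta in (0, 1):
    feas = solve(delta, c=[0.0, 0.0])          # step 1: feasibility of the binary point
    zmin = solve(delta, c=[0.0,  1.0]).fun     # shortcut: bound z before steps 3-4
    zmax = -solve(delta, c=[0.0, -1.0]).fun
    print(f"delta={delta}: feasible={feas.status == 0}, z in [{zmin:.1f}, {zmax:.1f}]")
    # delta=0 gives z_min = z_max = 0, so z can be assigned directly;
    # delta=1 leaves z = x free in [-3, 3], so the region matrices must be computed.
```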
IV. EXAMPLES
A. The Three-Tank Benchmark Problem
The three-tank benchmark problem has been proposed as an interesting hybrid dynamical system. This Benchmark was proposed in [7] and [8]. See [13] and references there in for some control results using MLD framework in this system. The algorithm described in the last section is used for obtaining a PWA representation of this system.
This system has three tanks each of them interconnected with another as depicted in Fig. 2.
Fig. 2. Three Tank System
The model is written using binary variables ($\delta_i$) and relational expressions of the form
$$\delta_1 = 1 \leftrightarrow h_1 > h_v, \quad Z_{01} = \delta_1 (h_1 - h_v), \qquad \delta_2 = 1 \leftrightarrow h_2 > h_v, \quad Z_{02} = \delta_2 (h_2 - h_v), \qquad \delta_3 = 1 \leftrightarrow h_3 > h_v, \quad Z_{03} = \delta_3 (h_3 - h_v),$$
$$Z_1 = (Z_{01} - Z_{03}) V_1, \qquad Z_2 = (Z_{02} - Z_{03}) V_2, \qquad Z_{13} = (h_1 - h_3) V_{13}, \qquad Z_{23} = (h_2 - h_3) V_{23},$$
together with the discretized level dynamics, which are first-order difference equations of the form $h_i(k+1) = h_i(k) + T_s(\cdot)$ balancing the inflows $q_i$ and the inter-tank flows $Z$ through the tank sections $C_i$ and the flow resistances $R$.
The simulation of the system using the MLD framework and a Mixed Integer Quadratic Programming MIQP algorithm running in an Intel Celeron 2GHz processor and 256MB of RAM was 592.2s, using the PWA representation the same simulation was 1.33s. The time for obtaining the PWA model using the technique described in this work is 72.90s and the algorithm found 128 regions. Using the algorithm in [4] the computation time of the PWA form was 93.88s and the total regions found was 100 and the simulation took 5.89s. These results are summarized in Table I.
Here, Computation Time is the time taken by the computer to compute the PWA model from the MLD model, and Simulation Time is the time taken to compute a trajectory given a model, an initial state and an input sequence. The simulation results with the MLD model, and the error between the PWA and MLD simulation results for the same input, are shown in Fig. 3. In this case, at t = 30 s, the simulation with the PWA system in Figure 3.b produces a switching to an invalid operation mode.
B. Car with Robotized Manual Gear Shift
The example of a Hybrid Model of a Car with Robotized Manual Gear Shift was reported in [9] and is used in [3] as example. The car dynamics is driven by the following equation,
$$m\ddot{x} = F_e - F_b - \beta \dot{x} \tag{15}$$
where $m$ is the car mass, $\dot{x}$ and $\ddot{x}$ are the car speed and acceleration, respectively, $F_e$ is the traction force, $F_b$ is the brake force and $\beta$ is the friction coefficient. The transmission kinematics are given by
$$\omega = \frac{R_g(i)}{k_s}\,\dot{x}, \qquad F_e = \frac{R_g(i)}{k_s}\,M,$$
where ω is the engine speed, M the engine torque and i is the gear position.
The engine torque $M$ is restricted to lie between the minimum engine torque $C_e^-(\omega)$ and the maximum engine torque $C_e^+(\omega)$.
The model has two continuous states, position and velocity of car, two continuous inputs, engine torque and breaking force, and six binary inputs, the gear positions. The MLD model was obtained using the HYSDEL tool.
The translation of the MLD model took 155.73 s and the PWA model found 30 sub-models, using the algorithm proposed in this work, and the PWA model using the algorithm proposed in [3] took 115.52 s and contains 18 sub-models. The simulation time with MLD model and a MIQP algorithm for 250 iterations took 296.25s, using the PWA model obtained with the algorithm proposed here took 0.17s, and using the PWA model obtained using the algorithm in [4] the simulation took 0.35s. These results are summarized in Table II
C. The Drinking Water Treatment Plant
The example of a Drinking Water Treatment Plant has been reported in [11] and [12]. This plant was modeled using identification techniques for hybrid dynamical systems, and its behavior includes autonomous jumps.
The plant modeled is based in the current operation of drinking water plant Francisco Wiesner situated at the periphery of Bogotá D.C. city (Colombia), which treats on average 12m 3 /s. The volume of water produced by this plant is near to 60% of consumption by the Colombian capital. In this plant, there exist two water sources: Chingaza and San Rafael reservoirs which can provide till 22m 3 /s of water.
The process mixes inlet water with a chemical solution in order to generate aggregated particles that can be caught in a filter. The dynamic of the filter is governed by the differential pressure across the filter and the outlet water turbidity. An automaton associated to the filter executes a back-washing operation when the filter performance is degraded. Because of process non-linearity, the behavior of the system is different with two water sources, that is the case for the particular plant modeled.
The model for each water source includes a dynamic for the aggregation particle process which dynamical variable is called Streaming Current (SC) and is modeled using two state variables, a dynamic for the differential pressure called Head Loss (HL) with only one state variable, a dynamic for the outlet turbidity (T o ) with two state variables.
The identified model consists of four affine models, two for each water source in normal operation, one model in maintenance operation, one model representing the jump produced at the end of the maintenance operation.
$$x(k+1) = A_i x(k) + B_i u(k) + f_i, \qquad y(k) = C_i x(k) + D_i u(k) + g_i, \qquad i \in \{\text{Chingaza normal, San Rafael normal, maintenance, jump to normal operation}\},$$
where water source is an input variables, maintenance operation is executed if outlet turbidity (T o ) is greater than a predefined threshold, or, Head Loss (HL) is greater than a predefined threshold, or, operation time is greater than a predefined threshold.
The MLD model has 7 continuous states (including two variables for two timers in the automaton), 4 continuous inputs (dosage, water flow, inlet turbidity and pH), 3 binary inputs (water source, back-washing operation and normal operation), 8 auxiliary binary variables, and 51 auxiliary variables. The complete model can be obtained by mail from the corresponding authors.
The simulation results for the same input are shown in Fig. 5: (a) MLD model, (b) error between the MLD model and the PWA model of [4], (c) error between the MLD model and the PWA model obtained in this work.
In this case, at t=168min, the simulation with the PWA system in the Figure 5.b is not valid because there exist no mode in the PWA representation that belongs to the stateinput vector reached in this point. Some other results can be found in [14].
V. CONCLUSIONS

This work presents a new algorithm for obtaining a suitable choice of the PWA description from an MLD representation. The results are applied to the three-tank benchmark problem, to a car with robotized gear shift, and to a drinking water plant; the three examples have been reported in the literature as examples of hybrid dynamical systems modeled with the MLD formalism. The simulation results show that the PWA models obtained have the same behavior as the MLD models. However, in some cases the obtained PWA model does not have a valid solution for some state-input sub-spaces. As a consequence of the enumeration procedure, our PWA models have more sub-models/regions than the algorithm in [3]; however, we show that the procedure does not spend much more computation time because of the simplicity of its formulation, and it ensures the covering of all regions included in the original MLD model.
Ongoing work concerns the analysis of MLD Systems with some results from PWA systems.
INTRODUCTION
Switched nonlinear control systems are characterized by a set of several continuous nonlinear state dynamics with a logic-based controller, which determines simultaneously a sequence of switching times and a sequence of modes. As performance and efficiency are key issues in modern technological system such as automobiles, robots, chemical processes, power systems among others, the design of optimal logic-based controllers, covering all those functionalities while satisfying physical and operational constraints, plays a fundamental role. In the last years, several researchers have considered the optimal control of switched systems. An early work on the problem is presented in [1], where a class of hybrid-state continuous-time dynamic system is investigated. Later, a generalization of the optimal control problem and algorithms of hybrid systems is presented [2]. The particular case of the optimal control problem of switched systems is presented in [3] and [4]. However, most of the efforts have been typically focused on linear subsystems [5]. In general, the optimal control problem of switched system is often computationally hard as it encloses both elements of optimal control as well as combinatorial optimization [6]. In particular, necessary optimality conditions for hybrid systems have been derived using general versions of the Maximum Principle [7,8] and more recently in [9]. In the case of switching systems [4] and [6], the switched system has been embedded into a larger family of systems, and the optimization problem is formulated. For general hybrid systems, with nonlinear dynamics in each location and with autonomous and controlled switching, necessary optimality conditions have recently been presented in [10]; and using these conditions, algorithms based on the hybrid Maximum Principle have been derived. Focusing on real-time applications, an optimal control problem for switched dynamical systems is considered, where the objective is to minimize a cost functional defined on the state, and where the control variable consists of the switching times [11]. It is widely perceived that the best numerical methods available for hybrid optimal control problems involve mixed integer programming (MIP) [12,13]. Even though great progress has been made in recent years in improving these methods, the MIP is an NP-hard problem, so scalability is problematic. One solution for this problem is to use the traditional nonlinear programming techniques such as sequential quadratic programming, which reduces dramatically the computational complexity over existing approaches [6].
The main contribution of this paper is an alternative approach to solve effectively the optimal control problem for an autonomous nonlinear switched system based on the probability measures introduced in [14], and later used in [15] and [16] to establish existence conditions for an infinitedimensional linear program over a space of measure. Then, we apply the theory of moments, a method previously introduced for global optimization with polynomials in [17,18], and later extended to nonlinear 0 1 programs using an explicit equivalent positive semidefinite program in [19]. We also use some results recently introduced for optimal control problems with the control variable expressed as polynomials [20][21][22]. The first approach relating switched systems and polynomial representations can be found in [23]. The moment approach for global polynomial optimization based on semidefinite programming (SDP) is consistent, as it simplifies and/or has better convergence properties when solving convex problems. This approach works properly when the control variable (i.e., the switching signal) can be expressed as a polynomial. Essentially, this method transforms a nonlinear, nonconvex optimal control problem (i.e., the switched system) into an equivalent optimal control problem with linear and convex structure, which allows us to obtain an equivalent convex formulation more appropriate to be solved by high-performance numerical computing. In other words, we transform a given controllable switched nonlinear system into a controllable continuous system with a linear and convex structure in the control variable.
This paper is organized as follows. In Section 2, we present some definitions and preliminaries. A semidefinite relaxation using the moment approach is developed in Section 3. An algorithm is developed on the basis of the semidefinite approach in Section 4 with a numerical example to illustrate our approach, and finally in Section 5, some conclusions are drawn.
THE SWITCHED OPTIMAL CONTROL PROBLEM
Switched systems
The switched system adopted in this work has a general mathematical model described by
$$\dot{x}(t) = f_{\sigma(t)}(x(t)), \tag{1}$$
where $x(t)$ is the state, $f_i : \mathbb{R}^n \to \mathbb{R}^n$ is the $i$th vector field, $x(t_0) = x_0$ are fixed initial values, and $\sigma : [t_0, t_f] \to Q = \{0, 1, 2, \ldots, q\}$ is a piecewise constant function of time, with $t_0$ and $t_f$ as the initial and final times, respectively. Every mode of operation corresponds to a specific subsystem $\dot{x}(t) = f_i(x(t))$, for some $i \in Q$, and the switching signal $\sigma$ determines which subsystem is followed at each point of time in the interval $[t_0, t_f]$. The control input $\sigma(\cdot)$ is a measurable function.
In addition, we consider non-Zeno behavior, that is, we exclude an infinite accumulation of switchings in time. Finally, we assume that the state does not have jump discontinuities. Moreover, for the interval $[t_0, t_f]$, the control functions must be chosen so that the initial and final conditions are satisfied.
Definition 1
A control for the switched system in (1) is a duplet consisting of (i) a finite sequence of modes and (ii) a finite sequence of switching times such that $t_0 < t_1 < \cdots < t_q = t_f$.
Switched optimal control problem
Let us define the optimization functional in Bolza form to be minimized as
$$J = \varphi(x(t_f)) + \int_{t_0}^{t_f} L_{\sigma(t)}(t, x(t))\,dt, \tag{2}$$
where $\varphi(x(t_f))$ is a real-valued function, and the running switched costs $L_{\sigma(t)} : \mathbb{R}_+ \times \mathbb{R}^n \to \mathbb{R}$ are continuously differentiable for each $\sigma \in Q$.
A switched optimal control problem (SOCP) can be stated in a general form as follows.
Definition 2
Given the switched system in (1) and a Bolza cost functional $J$ as in (2), the SOCP is given by
$$\min_{\sigma(t) \in Q} J(t_0, t_f, x(t_0), x(t_f), x(t), \sigma(t)) \tag{3}$$
subject to the state $x(\cdot)$ satisfying Equation (1).
The SOCP can have the usual variations of fixed or free initial or terminal state, free terminal time, and so forth.
A Polynomial representation
The starting point is to rewrite (1) as a continuous non-switched control system as it has been shown in [24]. The polynomial expression in the control variable able to mimic the behavior of the switched system is developed using a variable v, which works as a control variable.
A polynomial expression in the new control variable $v(t)$ can be obtained through Lagrange polynomial interpolation and a constraint polynomial as follows. First, let the Lagrange polynomial interpolation quotients be defined as [25]
$$l_k(v) = \prod_{\substack{i=0 \\ i \ne k}}^{q} \frac{v - i}{k - i}. \tag{4}$$
The control variable is restricted by the set $\Omega = \{v \in \mathbb{R} \mid g(v) = 0\}$, where $g(v)$ is defined by
$$g(v) = \prod_{k=0}^{q} (v - k). \tag{5}$$
General conditions for the subsystems functions should be satisfied.
Assumption 3
The nonlinear switched system satisfies growth, Lipschitz continuity, and coercivity qualifications concerning the mappings
$$f_i : \mathbb{R}^n \to \mathbb{R}^n, \qquad L_i : \mathbb{R}^n \to \mathbb{R}$$
to ensure existence of solutions of (1).
The solution of this system may be interpreted as an explicit ODE on the manifold defined by $\Omega$. A related continuous polynomial system for the switched system (1) is constructed in the following proposition [24].
Proposition 4
Consider a switched system of the form given in (1). There exists a unique continuous state system with polynomial dependence on the control variable $v$, $F(x, v)$, of degree $q$ in $v$, with $v \in \Omega$, as follows:
$$\dot{x} = F(x, v) = \sum_{k=0}^{q} f_k(x)\, l_k(v). \tag{6}$$
Then, this polynomial system is an equivalent polynomial representation of the switched system (1).
Similarly, we define an equivalent polynomial representation for the running cost $L_{\sigma(t)}$ by using the Lagrange quotients as follows.
Proposition 5
Consider a switched running cost of the form given in (2). There exists a unique polynomial running cost $L(x, v)$ of degree $q$ in $v$, with $v \in \Omega$, as follows:
$$L(x, v) = \sum_{k=0}^{q} L_k(x)\, l_k(v), \tag{7}$$
with $l_k(v)$ defined in (4). Then, this polynomial system is an equivalent polynomial representation of the switched running cost in (2).
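Propositions 4 and 5 are easy to check numerically. The sketch below builds the Lagrange quotients (4), the constraint polynomial (5) and the equivalent vector field (6) for a hypothetical two-mode scalar example, and verifies that F(x, v) reproduces each subsystem at the integer values of v.

```python
import numpy as np

# Two-mode example (q = 1): f0(x) = -x, f1(x) = -2*x + 1  (hypothetical subsystems).
f = [lambda x: -x, lambda x: -2 * x + 1]
q = len(f) - 1

def l(k, v):
    """Lagrange interpolation quotient l_k(v) from Equation (4)."""
    return np.prod([(v - i) / (k - i) for i in range(q + 1) if i != k])

def g(v):
    """Constraint polynomial (5); g(v) = 0 exactly on the admissible set {0, ..., q}."""
    return np.prod([v - k for k in range(q + 1)])

def F(x, v):
    """Equivalent polynomial vector field (6)."""
    return sum(f[k](x) * l(k, v) for k in range(q + 1))

x = 0.7
for v in range(q + 1):
    assert np.isclose(F(x, v), f[v](x))   # F reproduces each subsystem at integer v
    assert np.isclose(g(v), 0.0)
print("F(x, 0) =", F(x, 0), " F(x, 1) =", F(x, 1), " F(x, 0.5) =", F(x, 0.5))
```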
The equivalent optimal control problem (EOCP), which is based on the equivalent polynomial representation is described next.
The functional using Equation ( 7) is defined by
$$J = \varphi(x(t_f)) + \int_{t_0}^{t_f} L(x, v)\,dt, \tag{8}$$
subject to the system defined in (6), with $x \in \mathbb{R}^n$, $v \in \Omega$, and $x(t_0) = x_0$, where $l_k(v)$, $\Omega$, and $L$ are defined earlier. Note that this control problem is a continuous polynomial system with the input constrained by the polynomial $g(v)$. This polynomial constraint is nonconvex with a disjoint feasible set, and traditional optimization solvers perform poorly on such problems, as the necessary constraint qualification is violated. This makes the problem intractable by traditional nonlinear optimization solvers. Next, we propose a convexification of the EOCP using the special structure of the control variable $v$, which improves the optimization process.
SEMIDEFINITE RELAXATION USING A MOMENTS APPROACH
Relaxation of the optimal control problem
We describe the relaxation of the polynomial optimal control problem, for which, regardless of convexity assumptions, existence of optimal solutions can be achieved. Classical relaxation results establish, under some technical assumptions, that the infimum of any functional does not change when we replace the integrand by its convexification. In the previous section, a continuous representation of the switched system has been presented. This representation has a polynomial form in the control variable, which implies that this system is nonlinear and nonconvex with a disjoint feasible set. Thus, traditional optimization solvers have a disadvantaged performance, either by means of the direct methods (i.e., nonlinear programming) or indirect methods (i.e., Maximum Principle). We propose then, an alternative approach to deal with this problem. The main idea of this approach is to convexify the control variable in polynomial form by means of the method of moments. This method has been recently developed for optimization problems in polynomial form (see [17,18], among others). Therefore, a linear and convex relaxation of the polynomial problem ( 8) is presented next. The relaxed version of the problem is formulated in terms of probability measures associated with sequences of admissible controls [15].
Let $\Omega$ be the set of admissible controls $v(t)$. The set of probability measures associated with the admissible controls in $\Omega$ is
$$\Lambda = \{\mu = \{\mu_t\}_{t \in [t_0, t_f]} : \operatorname{supp}(\mu_t) \subseteq \Omega, \text{ a.e. } t \in [t_0, t_f]\},$$
where $\mu_t$ is a probability measure supported in $\Omega$. The functional $J(x, v)$ defined on $\Lambda$ is now given by
$$J(x, v) = \varphi(x(t_f)) + \int_{t_0}^{t_f} \int_{\Omega} L(x(t), v)\, d\mu_t(v)\, dt,$$
where $x(t)$ is the solution of
$$\dot{x}(t) = \int_{\Omega} F(x, v)\, d\mu_t(v), \qquad x(t_0) = x_0.$$
We have obtained a reformulation of the problem that is an infinite dimensional linear program and thus not tractable as it stands. However, the polynomial dependence in the control variable allows us to obtain a semidefinite program or linear matrix inequality relaxation, with finitely many constraints and variables. By means of moments variables, an equivalent convex formulation more appropriate to be solved by numerical computing can be rendered. The method of moments takes a proper formulation in probability measures of a nonconvex optimization problem ( [18,23], and references therein). Thus, when the problem can be stated in terms of polynomial expressions in the control variable, we can transform the measures into algebraic moments to obtain a new convex program defined in a new set of variables that represent the moments of every measure [17,18,22]. We define the space of moments as
$$\mathcal{M} = \left\{ m = \{m_k\} : m_k = \int_{\Omega} v^k\, d\mu(v),\ \mu \in \mathcal{P}(\Omega) \right\},$$
where $\mathcal{P}(\Omega)$ is the convex set of all probability measures supported in $\Omega$. In addition, a sequence $m = \{m_k\}$ has a representing measure supported in $\Omega$ only if these moments are restricted to be entries of positive semidefinite moment and localizing matrices [17,19]. For this particular case, when the control variable is of dimension one, the moment matrix is a Hankel matrix with $m_0 = 1$; that is, for a moment matrix of degree $d$, we have
$$M_d(m) = \begin{bmatrix} m_0 & m_1 & \cdots & m_d \\ m_1 & m_2 & \cdots & m_{d+1} \\ \vdots & \vdots & \ddots & \vdots \\ m_d & m_{d+1} & \cdots & m_{2d} \end{bmatrix}.$$
The localizing matrix is defined on the basis of the corresponding moment matrix, whose positivity is directly related to the existence of a representing measure with support in $\Omega$, as follows. Consider the set defined by the polynomial $\beta(v) = \beta_0 + \beta_1 v + \cdots + \beta_\eta v^\eta$. It can be represented in moment variables as $\beta(m) = \beta_0 + \beta_1 m_1 + \cdots + \beta_\eta m_\eta$, or in compact form as $\beta(m) = \sum_{\alpha = 0}^{\eta} \beta_\alpha m_\alpha$. Suppose that the entries of the corresponding moment matrix are $m_\gamma$, with $\gamma \in [0, 1, \ldots, 2d]$. Then every entry of the localizing matrix is defined as $l_\gamma = \sum_{\alpha = 0}^{d} \beta_\alpha m_{\alpha + \gamma}$. Note that the localizing matrix has the same dimension as the moment matrix; that is, if $d = 1$ and the polynomial is $\beta = v + 2v^2$, then the moment and localizing matrices are
$$M_1(m) = \begin{bmatrix} 1 & m_1 \\ m_1 & m_2 \end{bmatrix}, \qquad M_1(\beta m) = \begin{bmatrix} m_1 + 2m_2 & m_2 + 2m_3 \\ m_2 + 2m_3 & m_3 + 2m_4 \end{bmatrix}.$$
More details on the method of moments can be found in [19,26].
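The moment and localizing matrices are simple to assemble. The sketch below reproduces the d = 1 example above for β(v) = v + 2v², using the moments of a point mass at v = 1 as hypothetical data.

```python
import numpy as np

def moment_matrix(m, d):
    """Hankel moment matrix M_d(m) built from moments m = [m0, m1, ..., m_{2d}]."""
    return np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])

def localizing_matrix(m, beta, d):
    """Localizing matrix of the polynomial beta(v) = sum_a beta[a] * v**a."""
    return np.array([[sum(b * m[a + i + j] for a, b in enumerate(beta))
                      for j in range(d + 1)] for i in range(d + 1)])

# Moments of a point mass at v = 1 (so m_k = 1 for all k), enough entries for d = 1.
m = [1.0] * 7
beta = [0.0, 1.0, 2.0]          # beta(v) = v + 2*v**2, as in the example above

M1 = moment_matrix(m, d=1)
L1 = localizing_matrix(m, beta, d=1)
print(M1)   # [[m0, m1], [m1, m2]]           -> [[1, 1], [1, 1]]
print(L1)   # [[m1+2*m2, m2+2*m3],
            #  [m2+2*m3, m3+2*m4]]           -> [[3, 3], [3, 3]]
print(np.linalg.eigvalsh(M1) >= -1e-9, np.linalg.eigvalsh(L1) >= -1e-9)
```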
Because $J$ is a polynomial in $v$ of degree $q$, the criterion $\int L\, d\mu$ involves only the moments of $\mu$ up to order $q$ and is linear in the moment variables. Hence, we replace $\mu$ with the finite sequence $m = \{m_k\}$ of all its moments up to order $q$. We can then express the linear combination of the functional $J$ and the space of moments as follows:
$$\min_{v \in \Omega} J(x, v) \;\longrightarrow\; \min_{\mu \in \mathcal{P}(\Omega)} \int_{\Omega} J(x, v)\, d\mu(v) = \min_{m_k \in \mathcal{M}} \int_{t_0}^{t_f} \sum_i \sum_k L_i(x)\,\alpha_{ik}\, m_k\, dt, \tag{9}$$
where the $\alpha_{ik}$ are the coefficients resulting from the factorization of Equation (4). Similarly, we obtain the convexification of the state equation
$$\dot{x}(t) = \int_{\Omega} F(x, v)\, d\mu(v) = \sum_i \sum_k f_i(x)\,\alpha_{ik}\, m_k. \tag{10}$$
We have now a problem in moment variables, which can be solved by efficient computational tools as it is shown in the next section.
Semidefinite programs for the EOCP
We can use the functional and the state equation in moment form to rewrite the relaxed formulation as an SDP. First, we need to redefine the control set to be coherent with the definition of the localizing matrix and the representation results. We treat the polynomial $g(v)$ as two opposite inequalities, that is, $g_1(v) = g(v) \ge 0$ and $g_2(v) = -g(v) \ge 0$, and we redefine the compact set to be $\Omega = \{g_i(v) \ge 0,\ i = 1, 2\}$. We also define a prefixed order of relaxation, which is directly related to the number of subsystems.
Let $w$ be the degree of the polynomial $g(v)$, which is equivalent to the degree of the polynomials $g_1$ and $g_2$. Considering its parity, if $w$ is even (odd) then $r = w/2$ ($r = (w+1)/2$). In this case, $r$ corresponds to the prefixed order of relaxation. We use a direct transcription method to obtain an SDP to be solved through a nonlinear programming (NLP) algorithm [27]. Using a discretization method, the first step is to split the time interval $[t_0, t_f]$ into $N$ subintervals as $t_0 < t_1 < t_2 < \ldots < t_N = t_f$, with a time step $h$ predefined by the user. The integral term in the functional is implicitly represented as an additional state variable, transforming the original problem in Bolza form into a problem in Mayer form, which is a standard transformation [27]. Therefore, we obtain a set of discrete equations in moment variables. In this particular case, we have used a trapezoidal discretization, but a more elaborate discretization scheme could be used. Thus, the optimal control problem can be formulated as an SDP.
Consider a fixed $t$ in the time interval $[t_0, t_f]$ and let Assumption 3 hold. We can state the following SDP of relaxation order $r$ (SDP$_r$).
Semidefinite program SDP$_r$: for every $j = \{1, 2, \ldots, N\}$, a semidefinite program SDP$_r$ can be described by
$$J_r = \min_{m(t_j)} \frac{h}{2} \sum_{j=0}^{N-1} L(x(t_j), m(t_j)) \quad \text{s.t.} \quad x(t_{j+1}) = x(t_j) + h \sum_i \sum_k f_i(x(t_j))\,\alpha_{ik}\, m_k(t_j), \quad x(t_0) = x_0, \tag{11}$$
$$M_r(m(t_j)) \succeq 0, \qquad M_0(g_1\, m(t_j)) \succeq 0, \qquad M_0(g_2\, m(t_j)) \succeq 0.$$
Notice that in this case the localizing matrices are linear. Consider the two-subsystem case, that is, $g = v^2 - v$, which leads to the polynomials $g_1 = v^2 - v$ and $g_2 = v - v^2$, so that $w = \deg g = 2$. The localizing matrices are $M_0(g_1 m) = m_2 - m_1$ and $M_0(g_2 m) = m_1 - m_2$. This happens because we are using the minimum order of relaxation, $r = w/2$ or $r = (w+1)/2$ depending on its parity. It is also known that the optimum $J_r$ is not always an optimal solution. However, in this case, a suboptimal solution is obtained, which corresponds to a lower bound on the global optimum $J$ of the original problem. If we are interested in searching for an optimal solution, we can use a higher order of relaxation, that is, $r > w/2$, but the number of moment variables will increase, which can make the problem numerically inefficient. However, in many cases, low order relaxations provide the optimal value $J$, as shown in the next section, where we use a criterion to test whether the SDP$_r$ relaxation achieves the optimal value $J$ for a fixed time. Still, the suboptimal solutions of the original problem obtained in the iterations can be used. In order to solve a traditional NLP, we use the characteristic form of the moment and localizing matrices. We know that the moment matrices, and hence the localizing matrices, are symmetric positive definite, which implies that every principal subdeterminant is positive [21]. Then, we use the set of subdeterminants of each matrix as algebraic constraints.
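For the two-subsystem case, the constraints above collapse to m₁ = m₂ and 0 ≤ m₁ ≤ 1, so the transcribed problem reduces to a box-constrained NLP in the first moments. The sketch below solves this reduced problem for a hypothetical pair of scalar subsystems; it is a simplified illustration of the transcription, not the general SDP_r.

```python
import numpy as np
from scipy.optimize import minimize

# Two subsystems (q = 1, hypothetical): f0(x) = x (unstable), f1(x) = -x (stable).
f0, f1 = lambda x: x, lambda x: -x
L = lambda x: x ** 2                      # common running cost L0 = L1 = x^2
x0, h, N = 1.0, 0.05, 20

# For r = 1 the moment/localizing constraints reduce to m2 = m1 and m1 in [0, 1],
# so the decision variables are the first moments m1(t_j).
def cost(m):
    x, J = x0, 0.0
    for mj in m:
        J += h * L(x)                                 # rectangle rule for brevity
        x = x + h * (f0(x) * (1 - mj) + f1(x) * mj)   # Equation (10) for two modes
    return J + L(x)                                   # terminal cost phi(x(t_f))

res = minimize(cost, x0=np.full(N, 0.5), bounds=[(0.0, 1.0)] * N, method="L-BFGS-B")
print("relaxed first moments:", np.round(res.x, 2))   # expected close to 1 (mode 1 active)
```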
Analysis of solutions
Once a solution has been obtained in a subinterval $[t_{j-1}, t_j]$, we obtain a vector of moments $m^*(t_j) = [m_1^*(t_j), m_2^*(t_j), \ldots, m_r^*(t_j)]$. Then, we need to verify whether we have attained an optimal solution. On the basis of a rank condition on the moment matrix [26], we can test whether we have obtained a global optimum at a relaxation order $r$. Also, on the basis of the same rank condition, we can check whether the optimal solution is unique or whether it is a convex combination of several minimizers. The next result is based on an important result presented in [26] and used in [19] for the optimization of 0-1 problems.
Proposition 6
For a fixed time t_j in the interval [t_0, t_f], if the SDP_r (11) is solved with an optimal vector solution m*(t_j) and

r = rank M_r(m*(t_j)) = rank M_0(m*(t_j)),   (12)

then the global optimum has been reached and the problem for the fixed time t_j has r optimal solutions.
Note that the rank condition (12) is only a sufficient condition: the global optimum could be reached at some relaxation of order r even when rank M_r > rank M_0. It should also be noted that, for the particular case of the minimum order of relaxation, the rank condition reads rank M_r(m(t_j)) = rank M_0(m(t_j)) = 1, because M_0(m(t_j)) = 1 and hence rank M_0 = 1. Consequently, when rank M_r(m(t_j)) > 1 the condition fails, that is, several solutions arise, and in this case we obtain a suboptimal switching solution.
Using the previous result, we can state some relations between solutions that can be used to obtain the switching signal in every t j . First, we state the following result valid for the unique solution case.
Theorem 7
If Problem (11) is solved for a fixed t_j ∈ [t_0, t_f] and the rank condition in (12) is verified with r = rank M_r(m*(t_j)) = 1, then the vector of moments m*(t_j) has attained a unique optimal global solution; therefore, the optimal switching signal of the switched problem (3) for the fixed time t_j is obtained as

σ*(t_j) = m_1*(t_j),   (13)

where m_1*(t_j) is the first moment of the vector of moments m*(t_j).
Proof. Suppose that problem (11) has been solved for a fixed t_j, that a solution m*(t_j) has been obtained, and that the rank condition (12) has been verified. From a result presented in [19], it follows that

min_{μ∈P(Ω)} ∫ J(x, v) dμ(v) = min_{m_k} ∫_{t_0}^{t_f} Σ_i Σ_k L_i(x) α_{ik} m_k,

where m*(t_j) = (m_1*, ..., m_r*) is the vector of moments of some measure μ*. But then, as μ* is supported on the control constraint set, it also follows that m*(t_j) is an optimal solution, and because rank M_r(m*(t_j)) = 1, this solution is unique and it is the solution of the polynomial problem (8). Then, we know that every optimal solution v* corresponds to m*(t_j) = v*(t_j), with v* in the control constraint set.
Remark 8
Switched linear systems case. When we have a switched linear system, that is, when each subsystem is defined by a linear system, the results presented in Theorem 7 can be applied directly, because Assumption 3 is satisfied for linear systems: the Lipschitz condition holds globally [28]. We can also notice that if the switched linear system has one and only one switching solution, it corresponds to the first-moment solution of the SDP_r program for all t ∈ [t_0, t_f], that is, m_1(t_j) = σ(t_j) for all t_j ∈ [t_0, t_f]. This can be verified by means of the rank condition (12), which should give rank M_r = 1 for all t ∈ [t_0, t_f]. This result states a correspondence between the minimizer of the original switched problem and the minimizer of the SDP_r, and it can be used to obtain a switching signal directly from the solution of the SDP_r. However, this is not always the case. Sometimes we obtain a non-optimal solution, which arises when the rank condition is not satisfied, that is, when the rank is greater than 1. But we can still use information from the solution to obtain a suboptimal switching solution. In [29], a sum-up rounding strategy is presented to obtain a suboptimal switched solution from a relaxed solution in the case of mixed-integer optimal control. We use a similar idea but extended to the case where the relaxed solution can take any integer value instead of only binary values.
Consider the first moment m_1(·): [t_0, t_f] → [0, q], which is a relaxed solution of the NLP problem at t_j when the rank condition is not satisfied. We can state a correspondence between the relaxed solution and a suboptimal switching solution, which is close to the relaxed solution on average and is given by

σ(t_j) = ⌈m_1(t_j)⌉  if  ∫_{t_0}^{t_j} m_1(τ) dτ − δt Σ_{k=0}^{j−1} σ(t_k) > 0.5 δt,  and  σ(t_j) = ⌊m_1(t_j)⌋  otherwise,   (14)

where ⌈·⌉ and ⌊·⌋ denote the ceiling and floor functions, respectively.
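The rounding rule (14) is straightforward to apply once the relaxed first moment is available on a time grid. The following minimal Python sketch (an assumed helper, not the authors' code) implements it with a simple rectangle-rule approximation of the integral.

```python
# Sum-up rounding as in (14): build an integer switching signal whose running
# average tracks the relaxed first moment m1 on a uniform grid with step dt.
import math

def sum_up_rounding(m1_values, dt):
    """m1_values[j] = m1(t_j); returns the rounded switching signal sigma."""
    sigma = []
    integral = 0.0     # approximation of the integral of m1 from t_0 to t_j
    applied = 0.0      # dt * sum of the sigma(t_k) already chosen (k < j)
    for m1 in m1_values:
        integral += m1 * dt              # rectangle rule, chosen here for brevity
        if integral - applied > 0.5 * dt:
            s = math.ceil(m1)
        else:
            s = math.floor(m1)
        sigma.append(s)
        applied += s * dt
    return sigma

print(sum_up_rounding([0.2, 0.7, 0.7, 0.3, 0.5], dt=0.1))   # e.g. [0, 1, 1, 0, 0]
```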
A SWITCHED OPTIMIZATION ALGORITHM
The ideas presented earlier are summarized in the following algorithm, which is implemented in Section 4.2 on a simple numerical example presented as a benchmark in [30]. The core of the algorithm is the inter-relationship of three main ideas:
(i) The equivalent optimal control problem. The EOCP is formulated as in Section 2, where the equivalent representation of the switched system and the running cost are used to obtain a polynomial continuous system.
(ii) The relaxation of the EOCP - the theory of moments. The EOCP is transformed into an SDP of relaxation order r, which can be solved numerically in an efficient way; we obtain an equivalent linear convex formulation in the control variable.
(iii) The relationship between the solutions of the original switched problem and the SDP solutions. The solutions of the SDP_r for each t_j ∈ [t_0, t_f] are obtained, and through an extraction algorithm the solutions of the original problem are recovered.
Algorithm SDP r -SOCP
The optimal control pseudo-code algorithm for the switched systems is shown in Algorithm 1.
In the next section, we present a numerical example to illustrate the results presented in this work.
Numerical example: Lotka-Volterra problem
We present an illustrative example of a switched nonlinear optimal control problem reformulated as a polynomial optimal control problem. Then, this reformulation allows us to apply the semidefinite relaxation based on the theory of moments. We illustrate an efficient computational treatment to study the optimal control problem of switched systems reformulated as a polynomial expression.
We deal with the Lotka-Volterra fishing problem. Basically, the idea is to find an optimal strategy on a fixed time horizon that brings the biomass of both predator and prey fish to a prescribed steady state. The system has two operation modes and a switching signal as the control variable. The optimal integer control shows chattering behavior, which makes this problem a benchmark for testing different types of algorithms‡.
The Lotka-Volterra model, also known as the predator-prey model, is a system of coupled nonlinear differential equations in which the biomasses of two fish species are the differential states x_1 and x_2, the binary control is the operation of a fishing fleet, and the objective is to penalize deviations from a steady state. The optimal control problem is described as follows:
min_u ∫_{t_0}^{t_f} (x_1 − 1)² + (x_2 − 1)² dt
s.t.  ẋ_1 = x_1 − x_1 x_2 − 0.4 x_1 u,
      ẋ_2 = −x_2 + x_1 x_2 − 0.2 x_2 u,
      x(0) = (0.5, 0.7)^⊤,  u(t) ∈ {0, 1},  t ∈ [0, 12].
The problem can be represented by the approach described earlier. Consider a subsystem f_0 when the control variable takes the value 0, and a subsystem f_1 when it takes the value 1. This leads to two operation modes and a switching control variable σ(·): [0, 12] → {0, 1}. Thus, by means of the algorithm SDP_r-EOCP, an SDP program can be stated. First, we define the order of relaxation as r = w/2 = 1; the constraint control set as {v : g_i(v) ≥ 0} with g_1(v) = v² − v and g_2(v) = v − v²; the moment matrix of order r = 1, M_1(m); and the localizing matrices M_0(g_1 m) and M_0(g_2 m). Using this set together with the moment and localizing matrices, we express the problem in moment variables, obtaining the positive semidefinite program SDP_r. Solving the SDP_r program for each t ∈ [0, 12] with a time step h, we obtain an optimal trajectory, and the moment sequence allows us to calculate the switching signal.
Figure 1 shows the trajectories, the relaxed moment solution, and the switching signal obtained for a relaxation order r = 1. It can be appreciated that when the relaxed problem has a unique optimal solution, that is, when the rank condition is satisfied, the relaxed solution is exactly integer and corresponds to the switching signal, which confirms the validity of Theorem 7. It is also shown that when the rank condition is not satisfied, the proposed algorithm gives a suitable solution that is close to the relaxed solution on average. The algorithm shows that even if no globally optimal solution is certified, a local suboptimal solution is found; for the intervals where there is no optimal solution, a suboptimal solution is obtained from the relaxed solution. In comparison with traditional algorithms, where a global suboptimal solution based on a relaxation is found, the proposed algorithm is able to detect whether an optimal solution is found in a time interval, which implies that if the system is composed of convex functions, a globally optimal solution is found. The computational efficiency relies on semidefinite solution methods.
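For reference, the following minimal Python sketch (assuming forward-Euler integration and a user-supplied switching signal; it is not the authors' implementation) evaluates the cost of a candidate switching signal for the fishing problem above. A helper like this can be used to compare switching signals extracted from the relaxation against simple baselines.

```python
# Evaluate the Lotka-Volterra fishing objective for a candidate switching signal
# sigma(t) in {0, 1}, using forward Euler with step h on [0, t_f].
import numpy as np

def lotka_volterra_cost(sigma, h=0.01, t_f=12.0, x0=(0.5, 0.7)):
    n_steps = int(round(t_f / h))
    x1, x2 = x0
    cost = 0.0
    for j in range(n_steps):
        u = sigma(j * h)                      # switching signal value in {0, 1}
        cost += ((x1 - 1.0)**2 + (x2 - 1.0)**2) * h
        dx1 = x1 - x1 * x2 - 0.4 * x1 * u
        dx2 = -x2 + x1 * x2 - 0.2 * x2 * u
        x1, x2 = x1 + h * dx1, x2 + h * dx2
    return cost

# Example: never fish vs. always fish
print(lotka_volterra_cost(lambda t: 0), lotka_volterra_cost(lambda t: 1))
```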
CONCLUSIONS AND FUTURE WORK
In this paper, we have developed a new method for solving the optimal control problem of switched nonlinear systems based on a polynomial approach. First, we transform the original problem into a polynomial system, which is able to mimic the switching behavior with a continuous polynomial representation. Next, we transform the polynomial problem into a relaxed convex problem using the method of moments. From a theoretical point of view, we have provided sufficient conditions for the existence of the minimizer by using particular features of the relaxed, convex formulation. Even in the absence of classical minimizers of the switched system, the solution of its relaxed formulation provides minimizers. We have introduced the moment approach as a computationally useful tool to solve this problem, which has been illustrated by means of a classical example used in switched systems. As future work, the algorithm can be extended to the case in which both an external control input and the switching signal must be obtained.
Introduction
These methods exploit the communication capabilities of the agents to coordinate their decisions based on the information received from their neighbours. Fully decentralised methodologies have important advantages, among which we highlight the increased autonomy and resilience of the whole system, since dependence on a central authority is avoided.
In this paper, we propose a distributed resource allocation algorithm that does not require a central coordinator. An important characteristic of our method is the capability of handling lower bounds on the decision variables. This feature is crucial in a large number of practical applications, e.g. in Conrad (1999), Pantoja and Quijano (2012), and Lee et al. (2016), where it is required to capture the non-negativity of the resource allocated to each entity. We use a Lyapunov-based analysis in order to prove that the proposed algorithm asymptotically converges to the optimal solution under some mild assumptions related to the convexity of the cost function and the connectivity of the graph that represents the communication topology. In order to illustrate our theoretical results, we perform some simulations and compare our method with other techniques reported in the literature. Finally, we present two engineering applications of the proposed algorithm. The first one seeks to improve the energy efficiency in large-scale air-conditioning systems. The second one is related to the distributed computation of the Euclidean projection onto a given set.
Our approach is based on a continuous-time version of the centre-free algorithm presented in Xiao and Boyd (2006). The key difference is that the method in Xiao and Boyd (2006) does not allow the explicit inclusion of lower bounds on the decision variables, unless they are added by means of barrier functions (either logarithmic or exact; Cherukuri & Cortés, 2015). The problem of using barrier functions is that they can adversely affect the convergence time (in the case of using exact barrier functions) and the accuracy of the solution (in the case of using classic logarithmic barrier functions), especially for large-scale problems (Jensen, 2003). There are other methods that consider lower bound constraints in the problem formulation. For instance, Dominguez-Garcia, Cady, and Hadjicostis (2012, 2009) use consensus steps to refine an estimation of the system state, while in our approach, consensus is used to equalise a quantity that depends on both the marginal cost perceived by each agent in the network and the Karush-Kuhn-Tucker (KKT) multiplier related to the corresponding resource's lower bound. In this regard, it is worth noting that the method studied in this paper requires less computational capability than the methods mentioned above. Finally, there are other techniques based on game theory and mechanism design (Kakhbod & Teneketzis, 2012; Sharma & Teneketzis, 2009) that decompose and solve resource allocation problems. Nonetheless, those techniques need each agent to broadcast a variable to all the other agents, i.e. a communication topology given by a complete graph is required. In contrast, the method developed in this paper only uses a communication topology given by a connected graph, which generally requires less infrastructure.
The remainder of this paper is organised as follows. Section 2 shows preliminary concepts related to graph theory. In Section 3, the resource allocation problem is stated. Then, in Section 4, we present our distributed algorithm and the main results on convergence and optimality. A comparison with other techniques reported in the literature is performed in Section 5. In Section 6, we describe two applications of the proposed method: (i) the optimal chiller loading problem in large-scale airconditioning systems, and (ii) the distributed computation of Euclidean projections. Finally, in Sections 7 and 8, arguments and conclusions of the developed work are presented.
Preliminaries
First, we describe the notation used throughout the paper and present some preliminary results on graph theory that are used in the proofs of our main contributions.
In the multi-agent framework considered in this article, we use a graph to model the communication network that allows the agents to coordinate their decisions. A graph is mathematically represented by the pair G = (V, E), where V = {1, . . . , n} is the set of nodes, and E ⊆ V × V is the set of edges connecting the nodes. G is also characterised by its adjacency matrix A = [a_ij]. The adjacency matrix A is an n × n non-negative matrix that satisfies: a_ij = 1 if and only if (i, j) ∈ E, and a_ij = 0 if and only if (i, j) ∉ E. Each node of the graph corresponds to an agent of the multi-agent system, and the edges represent the available communication channels (i.e. (i, j) ∈ E if and only if agents i and j can share information). We assume that there are no edges connecting a node with itself, i.e. a_ii = 0, for all i ∈ V, and that the communication channels are bidirectional, i.e. a_ij = a_ji. The last assumption implies that G is undirected. Additionally, we denote by N_i = {j ∈ V : (i, j) ∈ E} the set of neighbours of node i, i.e. the set of nodes that are able to receive/send information from/to node i.
Let us define the n × n matrix L(G) = [l i j ], known as the graph Laplacian of G, as follows:
l_ij = Σ_{j∈V} a_ij  if i = j,   and   l_ij = −a_ij  if i ≠ j.   (1)
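As an illustration of Equation (1), the following minimal Python sketch (not part of the original paper) builds L(G) from an adjacency matrix and checks that L(G)1 = 0.

```python
# Build the graph Laplacian L(G) = D - A from a symmetric 0/1 adjacency matrix
# with zero diagonal, following Equation (1).
import numpy as np

def graph_laplacian(A):
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Path graph on 4 nodes: 1-2-3-4
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
L = graph_laplacian(A)
print(L @ np.ones(4))   # L(G) 1 = 0, so L(G) is singular
```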
Properties of L(G) are related to connectivity characteristics of G as shown in the following theorem. We remark that a graph G is said to be connected if there exists a path connecting any pair of nodes. From Equation ( 1), it can be verified that L(G)1 = 0, where 1 = [1, . . . , 1] ⊤ , 0 = [0, . . . , 0] ⊤ . A consequence of this fact is that L(G) is a singular matrix. However, we can modify L(G) to obtain a nonsingular matrix as shown in the following lemma.
Lemma 2.1:
Let L_r^k(G) ∈ R^{(n−1)×n} be the submatrix obtained by removing the kth row of the graph Laplacian L(G), and let L_k(G) ∈ R^{(n−1)×(n−1)} be the submatrix obtained by removing the kth column of L_r^k(G). If G is connected, then L_k(G) is positive definite. Furthermore, the inverse matrix of L_k(G) satisfies (L_k(G))^{−1} l_r^k = −1, where l_r^k is the kth column of the matrix L_r^k(G).
Proof: First, notice that L(G) is a symmetric matrix because G is an undirected graph. Moreover, notice that according to Equation (1), L(G) is diagonally dominant with non-negative diagonal entries. The same holds for L_k(G) since it is a submatrix obtained by removing the kth row and column of L(G). Thus, to show that L_k(G) is positive definite, it is sufficient to prove that L_k(G) is nonsingular.
According to Theorem 2.1, since G is connected, L(G) has exactly n -1 linearly independent columns (resp. rows). Let us show that the kth column (resp. row) of L(G) can be obtained by a linear combination of the other columns (resp. rows), i.e. the kth column (resp. row) is not linearly independent of the rest of the columns (resp. rows).
Since L(G)1 = 0, notice that l_ik = −Σ_{j∈V, j≠k} l_ij for all i ∈ V, i.e. the kth column can be obtained as a linear combination of the rest of the columns. Furthermore, since L(G) is a symmetric matrix, the same occurs with the kth row. Therefore, the submatrix L_k(G) is nonsingular since its n − 1 columns (resp. rows) are linearly independent. Now, let us prove that (L_k(G))^{−1} l_r^k = −1. To do so, we use the fact that (L_k(G))^{−1} L_k(G) = I, where I is the identity matrix. Hence, by the definition of matrix multiplication, we have that
Σ_{m=1}^{n−1} l̂^k_im l^k_mj = 1 if i = j,  and  Σ_{m=1}^{n−1} l̂^k_im l^k_mj = 0 if i ≠ j,   (2)

where l^k_ij and l̂^k_ij are the elements located in the ith row and jth column of the matrices L_k(G) and (L_k(G))^{−1}, respectively. Thus,

Σ_{m=1}^{n−1} l̂^k_im l^k_mi = 1, for all i = 1, ..., n − 1.   (3)
Let l_{r,m}^k be the mth entry of the vector l_r^k. Notice that, according to the definition of L_k(G) and since L(G)1 = 0, l^k_mi = −Σ_{j=1, j≠i}^{n−1} l^k_mj − l_{r,m}^k. Replacing this value in Equation (3), we obtain

−Σ_{j=1, j≠i}^{n−1} Σ_{m=1}^{n−1} l̂^k_im l^k_mj − Σ_{m=1}^{n−1} l̂^k_im l_{r,m}^k = 1, for all i = 1, ..., n − 1.

According to Equation (2), Σ_{j=1, j≠i}^{n−1} Σ_{m=1}^{n−1} l̂^k_im l^k_mj = 0. This implies that Σ_{m=1}^{n−1} l̂^k_im l_{r,m}^k = −1, for all i = 1, ..., n − 1. Therefore, (L_k(G))^{−1} l_r^k = −1.
Theorem 2.1 and Lemma 2.1 will be used in the analysis of the method proposed in this paper.
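The following minimal Python sketch (a numerical illustration only, not a substitute for the proof) checks the two claims of Lemma 2.1 on a small path graph.

```python
# Numerical check of Lemma 2.1 on the path graph 1-2-3-4: removing the k-th row
# and column of L(G) leaves a positive definite matrix, and the identity
# (L_k(G))^{-1} l_r^k = -1 holds.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian, as in Equation (1)

k = 2                                        # remove node k (0-based index)
idx = [i for i in range(L.shape[0]) if i != k]
L_k = L[np.ix_(idx, idx)]                    # L_k(G): drop k-th row and column
l_rk = L[idx, k]                             # k-th column of L_r^k(G)

print(np.linalg.eigvalsh(L_k))               # all eigenvalues positive -> PD
print(np.linalg.solve(L_k, l_rk))            # equals [-1, -1, -1]
```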
Problem statement
In general terms, a resource allocation problem can be formulated as follows (Patriksson, 2008;Patriksson & Strömberg, 2015):
min_x φ(x) := Σ_{i=1}^n φ_i(x_i)   (4a)
subject to  Σ_{i=1}^n x_i = X   (4b)
            x_i ≥ x̲_i, for all i = 1, ..., n,   (4c)
where x i ∈ R is the resource allocated to the ith zone;
x = [x_1, …, x_n]^⊤; φ_i : R → R is a strictly convex and differentiable cost function; X is the available resource; and x̲_i is the lower bound of x_i, i.e. the minimum amount of resource that has to be allocated to the ith zone.
Given the fact that we are interested in distributed algorithms to solve the problem stated in Equation ( 4), we consider a multi-agent network, where the ith agent is responsible for managing the resource allocated to the ith zone. Moreover, we assume that the agents have limited communication capabilities, so they can only share information with their neighbours. This constraint can be represented by a graph G = {V, E} as it was explained in Section 2.
If the individual inequality constraints (4c) are disregarded, the KKT conditions establish that at the optimal solution x* = [x*_1, ..., x*_n]^⊤ of the problem given in Equations (4a)-(4b), the marginal costs φ′_i(x_i) = dφ_i/dx_i must be equal, i.e. φ′_i(x*_i) = λ, for all i = 1, …, n, where λ ∈ R. Hence, a valid alternative to solve (4a)-(4b) is the use of consensus methods. For instance, we can adapt the algorithm presented in Xiao and Boyd (2006), which is described as follows:

ẋ_i = Σ_{j∈N_i} (φ′_j(x_j) − φ′_i(x_i)), for all i ∈ V.   (5)
This algorithm has two main properties: (i) at equilibrium, φ′_i(x*_i) = φ′_j(x*_j) if the nodes i and j are connected by a path; (ii) Σ_{i=1}^n x*_i = Σ_{i=1}^n x_i(0), where x_i(0) is the initial condition of x_i. Therefore, if the graph G is connected and the initial condition is feasible (i.e. Σ_{i=1}^n x_i(0) = X), then x asymptotically reaches the optimal solution of (4a)-(4b) under (5). However, the same method cannot be applied to solve (4) (the problem that considers lower bounds on the resource allocated to each zone), since feasibility issues related to the constraints (4c) arise.
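As an illustration of these two properties, the following minimal Python sketch (assuming quadratic costs φ_i(x_i) = c_i x_i² and forward-Euler integration; it is not the authors' code) simulates protocol (5) on a small path graph: the total resource is preserved while the marginal costs equalise.

```python
# Forward-Euler simulation of the consensus protocol (5): each node moves
# according to the marginal-cost differences with its neighbours.
import numpy as np

def consensus_allocation(A, grad, x0, h=0.01, steps=20000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)                                  # marginal costs phi_i'(x_i)
        x = x + h * (A @ g - A.sum(axis=1) * g)      # sum_j a_ij (g_j - g_i)
    return x

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # path graph 1-2-3
grad = lambda x: 2 * np.array([1.0, 2.0, 4.0]) * x             # phi_i = c_i x_i^2
x = consensus_allocation(A, grad, x0=[1.0, 0.0, 0.0])          # feasible: X = 1
print(x, x.sum())    # marginal costs equalised, total resource preserved
```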
In the following section, we propose a novel method that extends the algorithm in Equation ( 5) to deal with the individual inequality constraints given in Equation (4c).
Centre-free resource allocation algorithm
Resource allocation among a subset of nodes in a graph
First, we consider the following subproblem: let G = {V, E} be a graph comprising a subset of active nodes V_a and a subset of passive nodes V_p, such that V_a ∪ V_p = V. A certain amount of resource X has to be split among those nodes so as to minimise the cost function φ(x), subject to each passive node being allocated its corresponding lower bound x̲_i. Mathematically, we formulate this subproblem as:

min_x φ(x)   (6a)
subject to  Σ_{i=1}^n x_i = X   (6b)
            x_i = x̲_i, for all i ∈ V_p.   (6c)
Feasibility of ( 6) is guaranteed by making the following assumption.
Assumption 4.1: At least one node is active, i.e. V a ̸ = ∅.
According to the KKT conditions, the active nodes have to equalise their marginal costs at the optimal solution. Therefore, a consensus among the active nodes is required to solve (6). Nonetheless, classic consensus algorithms, such as the one given in Equation (5), cannot be used directly. For instance, if all the nodes of G apply (5) and G is connected, the marginal costs of both passive and active nodes are driven to be equal in steady state. This implies that the resource allocated to passive nodes can violate the constraint (6c). Besides, if the resource allocated to passive nodes is forced to satisfy (6c) by setting x*_i = x̲_i, for all i ∈ V_p, there is no guarantee that the new solution satisfies (6b). Another alternative is to apply (5) to active nodes only (in this case, the neighbourhood of node i ∈ V_a in Equation (5) has to be taken as {j ∈ V_a : (i, j) ∈ E}, and the initial condition must satisfy Σ_{i∈V_a} x_i(0) = X − Σ_{i∈V_p} x̲_i). However, the sub-graph formed by the active nodes is not necessarily connected even though G is connected. Hence, the marginal costs of active nodes are not necessarily equalised at equilibrium, which implies that the obtained solution is sub-optimal. In conclusion, modification of (5) to address (6) is not trivial. In order to deal with this problem, we propose the following algorithm:
ẋ_i = Σ_{j∈N_i} (y_j − y_i), for all i ∈ V,   (7a)
x̂̇_i = (x_i − x̲_i) + Σ_{j∈N_i} (y_j − y_i), for all i ∈ V_p,   (7b)
y_i = φ′_i(x_i) if i ∈ V_a,  and  y_i = φ′_i(x_i) + x̂_i if i ∈ V_p.   (7c)
In the same way as in (5), the variables {x_i, i ∈ V} in Equation (7) correspond to the resource allocated to both active and passive nodes. Notice that we have added auxiliary variables {x̂_i, i ∈ V_p} that allow the passive nodes to interact with their neighbours while taking into account the constraint (6c). On the other hand, the term Σ_{j∈N_i}(y_j − y_i) in Equations (7a)-(7b) leads to a consensus among the elements of the vector y = [y_1, …, y_n]^⊤, which are given in Equation (7c). For active nodes, y_i only depends on the marginal cost φ′_i(x_i), while for passive nodes, y_i depends on both the marginal cost and the state of the auxiliary variable x̂_i. Therefore, if the ith node is passive, it has to compute both variables x_i and x̂_i. Furthermore, it can be seen that, if all the nodes are active (V_a = V), then the proposed algorithm becomes the one stated in Equation (5).
Notice that the ith node only needs to know y_i and the values {y_j : j ∈ N_i} to compute Σ_{j∈N_i}(y_j − y_i) in (7a)-(7b). In other words, −L(G)y = [Σ_{j∈N_1}(y_j − y_1), ..., Σ_{j∈N_n}(y_j − y_n)]^⊤ is a distributed map over the graph G (Cortés, 2008). This implies that the dynamics given in Equation (7) can be computed by each node using only local information. In fact, the message that the ith node must send to its neighbours is composed solely of the variable y_i.
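The following minimal Python sketch (again assuming quadratic costs and forward-Euler integration; the function name protocol7 and all numerical values are our own illustrative choices, not the authors' code) simulates Equation (7) with one passive node: that node is driven to its lower bound, while the active nodes equalise their marginal costs over the remaining resource.

```python
# Forward-Euler simulation of protocol (7): passive is a boolean mask for V_p,
# x_lb contains the lower bounds (scalar or per-node array).
import numpy as np

def protocol7(A, grad, x0, passive, x_lb, h=0.002, steps=100000):
    x = np.array(x0, dtype=float)
    x_hat = np.zeros_like(x)                  # auxiliary states, used on V_p only
    deg = A.sum(axis=1)
    for _ in range(steps):
        y = grad(x) + np.where(passive, x_hat, 0.0)                # Eq. (7c)
        lap_y = deg * y - A @ y                                    # (L(G) y)_i
        x_new = x - h * lap_y                                      # Eq. (7a)
        x_hat = x_hat + h * np.where(passive, (x - x_lb) - lap_y, 0.0)  # Eq. (7b)
        x = x_new
    return x, x_hat

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # path graph 1-2-3
grad = lambda x: 2 * np.array([1.0, 2.0, 4.0]) * x             # phi_i = c_i x_i^2
passive = np.array([False, False, True])                       # node 3 in V_p
x, x_hat = protocol7(A, grad, [1.0, 0.0, 0.0], passive, x_lb=0.2)
print(x)   # node 3 -> 0.2; nodes 1 and 2 share the remaining 0.8 optimally
```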
Feasibility
Let us prove that, under the multi-agent system proposed in Equation (7), x(t) satisfies the first constraint of the problem given by Equation (6) for all t ≥ 0, provided that Σ_{i=1}^n x_i(0) = X.
Lemma 4.1: The quantity Σ_{i=1}^n x_i(t) is invariant under Equation (7), i.e. if Σ_{i=1}^n x_i(0) = X, then Σ_{i=1}^n x_i(t) = X, for all t ≥ 0.
Proof: It is sufficient to prove that the time derivative of Σ_{i=1}^n x_i is zero. Notice that d/dt Σ_{i=1}^n x_i = Σ_{i=1}^n ẋ_i = 1^⊤ ẋ, where ẋ = [ẋ_1, ..., ẋ_n]^⊤. Moreover, according to Equation (7), 1^⊤ ẋ = −1^⊤ L(G)y. Since G is undirected, 1^⊤ L(G) = (L(G)1)^⊤ = 0^⊤. Therefore, Σ_{i=1}^n x_i remains constant.
The above lemma does not guarantee that x(t) is always feasible because of the second constraint in Equation ( 6), i.e. x i = x i , for all i ∈ V p . However, it is possible to prove that, at equilibrium, this constraint is properly satisfied.
Equilibrium point
The next proposition characterises the equilibrium point of the multi-agent system given in Equation (7).
Proposition 4.1:
If G is connected, the system in Equation (7) has an equilibrium point x*, {x̂*_i, i ∈ V_p}, such that: φ′_i(x*_i) = λ, for all i ∈ V_a, where λ ∈ R is a constant; and x*_i = x̲_i, for all i ∈ V_p. Moreover, x̂*_i = λ − φ′_i(x*_i), for all i ∈ V_p.
Proof: Let x*, {x̂*_i, i ∈ V_p} be the equilibrium point of Equation (7). Since G is connected by assumption, it follows from Equation (7a) that y*_i = λ, for all i ∈ V, where λ is a constant. Thus, y*_i = φ′_i(x*_i) if i ∈ V_a, and y*_i = φ′_i(x*_i) + x̂*_i if i ∈ V_p. Hence, φ′_i(x*_i) = λ, for all i ∈ V_a, and x̂*_i = λ − φ′_i(x*_i), for all i ∈ V_p. Moreover, given the fact that Σ_{j∈N_i}(y*_j − y*_i) = 0, it follows from Equation (7b) that x*_i = x̲_i, for all i ∈ V_p.
Remark 4.1: Proposition 4.1 states that, at the equilibrium point of (7), the active nodes equalise their marginal costs, while each passive node is allocated an amount of resource equal to its corresponding lower bound. In conclusion, if Σ_{i=1}^n x*_i = X, then it follows from Proposition 4.1 that x* minimises the optimisation problem given in Equation (6). Additionally, notice that the values {x̂*_i, i ∈ V_p} are equal to the KKT multipliers associated with the constraint (6c).
Convergence
Let us prove that the dynamics in Equation (7) converge to x*, {x̂*_i, i ∈ V_p}, provided that each φ_i(x_i) is strictly convex. Proposition 4.2: Assume that φ_i(x_i) is a strictly convex cost function, for all i ∈ V. If G is connected, Σ_{i=1}^n x_i(0) = X, and Assumption 4.1 holds, then x(t) converges to x* under Equation (7), where x* is the solution of the optimisation problem stated in Equation (6), i.e. x* is the same as given in Proposition 4.1. Furthermore, x̂_i converges to x̂*_i, for all i ∈ V_p.
Proof: According to Lemma 4.1, since Σ_{i=1}^n x_i(0) = X, x(t) satisfies the first constraint of the problem stated in Equation (6) for all t ≥ 0. Therefore, it is sufficient to prove that the equilibrium point x*, {x̂*_i, i ∈ V_p} (which is given in Proposition 4.1) of the system proposed in Equation (7) is asymptotically stable (AS). In order to do that, let us express our multi-agent system in error coordinates, as follows:
ė = −L(G)e_y,
ê̇_i = e_i − (L(G)e_y)_i, for all i ∈ V_p,
e_{y_i} = φ′_i(x_i) − φ′_i(x*_i) if i ∈ V_a,  and  e_{y_i} = φ′_i(x_i) − φ′_i(x*_i) + ê_i if i ∈ V_p,   (8)

where L(G) is the graph Laplacian of G; e_i = x_i − x*_i and e_{y_i} = y_i − y*_i, for all i ∈ V; ê_i = x̂_i − x̂*_i, for all i ∈ V_p; e = [e_1, …, e_n]^⊤; e_y = [e_{y_1}, ..., e_{y_n}]^⊤; and (L(G)e_y)_i represents the ith element of the vector L(G)e_y.
Since Assumption 4.1 holds, V_a ≠ ∅. Let k be an active node, i.e. k ∈ V_a, and let e^k, e_y^k be the vectors obtained by removing the kth element from the vectors e and e_y, respectively. We notice that, according to Lemma 4.1, e_k(t) = −Σ_{i∈V, i≠k} e_i(t), for all t ≥ 0. Therefore, Equation (8) can be expressed as

ė^k = −L_k(G) e_y^k − l_r^k e_{y_k},
e_k = −Σ_{i∈V, i≠k} e_i,
ê̇_i = e_i − [L_k(G) e_y^k + l_r^k e_{y_k}]_i, for all i ∈ V_p,
e_{y_i} = φ′_i(x_i) − φ′_i(x*_i) if i ∈ V_a,  and  e_{y_i} = φ′_i(x_i) − φ′_i(x*_i) + ê_i if i ∈ V_p,   (9)
where L_k(G) and l_r^k are defined in Lemma 2.1. In order to prove that the origin of the above system is AS, let us define the following Lyapunov function (adapted from Obando, Quijano, & Rakoto-Ravalontsalama, 2014):

V = (1/2) (e^k)^⊤ (L_k(G))^{−1} e^k + (1/2) Σ_{i∈V_p} (e_i − ê_i)².   (10)
The function V is positive definite since G is connected (the reason is that, according to Lemma 2.1, L_k(G) and its inverse are positive definite matrices if G is connected). The derivative of V along the trajectories of the system stated in Equation (9) is given by

V̇ = −(e^k)^⊤ e_y^k − (e^k)^⊤ (L_k(G))^{−1} l_r^k e_{y_k} − Σ_{i∈V_p} e_i (e_i − ê_i).

Taking into account that (L_k(G))^{−1} l_r^k = −1 (cf. Lemma 2.1), we obtain

V̇ = −(e^k)^⊤ e_y^k + e_{y_k} Σ_{i∈V, i≠k} e_i − Σ_{i∈V_p} e_i (e_i − ê_i)
   = −Σ_{i=1}^n e_i (φ′_i(x_i) − φ′_i(x*_i)) − Σ_{i∈V_p} e_i ê_i + Σ_{i∈V_p} e_i (ê_i − e_i)
   = −Σ_{i=1}^n (x_i − x*_i)(φ′_i(x_i) − φ′_i(x*_i)) − Σ_{i∈V_p} e_i²,

where φ′_i is strictly increasing given the fact that φ_i is strictly convex, for all i ∈ V. Therefore, (x_i − x*_i)(φ′_i(x_i) − φ′_i(x*_i)) ≥ 0, for all i ∈ V, and thus V̇ ≤ 0. Since V̇ does not depend on {ê_i, i ∈ V_p}, it is negative semidefinite. Let S = {{e_i, i ∈ V}, {ê_i, i ∈ V_p} : V̇ = 0}, i.e. S = {{e_i, i ∈ V}, {ê_i, i ∈ V_p} : e_i = 0, for all i ∈ V}.
Given the fact that G is connected and V ≠ V_p (by Assumption 4.1), ė = 0 if and only if e_y = 0 (see Equation (8)). Therefore, the only solution that stays identically in S is the trivial solution, i.e. e_i(t) = 0, for all i ∈ V, and ê_i(t) = 0, for all i ∈ V_p. Hence, we can conclude that the origin is AS by applying LaSalle's invariance principle.
In summary, we have shown that the algorithm described in Equation (7) asymptotically solves the subproblem in Equation (6), i.e. (7) guarantees that the resource allocated to each passive node is equal to its corresponding lower bound, while the remaining resource X − Σ_{i∈V_p} x̲_i is optimally allocated to the active nodes.
Optimal resource allocation with lower bounds
Now, let us consider our original problem stated in Equation (4), i.e. the resource allocation problem that includes lower bound constraints. Let x* = [x*_1, ..., x*_n]^⊤ be the optimal solution of this problem. Notice that, if we knew in advance which nodes satisfy the constraint (4c) with equality at the optimal allocation, i.e. I := {i ∈ V : x*_i = x̲_i}, we could mark these nodes as passive and reformulate (4) as a subproblem of the form (6). Based on this idea, we propose a solution method for (4), which is divided into two stages: in the first one, the nodes that belong to I are identified and marked as passive; in the second one, the resulting subproblem of the form (6) is solved by using (7).
Protocol (7) can also be used in the first stage of the method, as follows: in order to identify the nodes that will satisfy (4c) with equality at the optimal allocation, we start by marking all nodes as active and apply the resource allocation process given by (7). The nodes that are allocated an amount of resource below their lower bounds at equilibrium are marked as passive, and then (7) is applied again (in this way, passive nodes are forced to meet (4c)). This iterative process is performed until all nodes satisfy their lower bound constraints. Notice that the last iteration of this procedure corresponds to solving a subproblem of the form (6) in which the set of passive nodes is equal to the set I. Therefore, this last iteration is equivalent to the second stage of the proposed method.
Summarising, our method relies on an iterative process that uses the continuous-time protocol (7) as a subroutine. The main idea of this methodology is to identify, at each step, the nodes whose allocated resource falls below their lower bounds. These nodes are marked as passive, so they are forced to satisfy their constraints in subsequent iterations, while the active nodes seek to equalise their marginal costs using the remaining resource. In the worst-case scenario, the classification of nodes into active and passive requires |V| iterations, where |V| is the number of nodes in the network; this occurs when only one active node becomes passive at each iteration.
The proposed method is formally described in Algorithm 1. Notice that this algorithm is fully decentralised, since Steps 4-6 can be computed by each agent using only local information. Step 4 corresponds to solving Equation (7), while Steps 5 and 6 describe the conditions for converting an active node into a passive one. Let us note that Steps 4-6 have to be performed |V| times since we are considering the worst-case scenario. Therefore, each agent needs to know the total number of nodes in the network. This quantity can be computed in a distributed way by using the method proposed in Garin and Schenato (2010, p. 90). We also notice that the agents have to be synchronised (as usual in several distributed algorithms; Cortés, 2008; Garin & Schenato, 2010; Xiao & Boyd, 2006) in order to apply Step 4 of Algorithm 1, i.e. all agents must start solving Equation (7) at the same time.
Algorithm 1: Resource allocation with lower bounds
Input:
- Parameters of the problem in Equation (4).
- An initial value x^(0), such that Σ_{i=1}^n x^(0)_i = X.
Output: Optimal allocation x*
1  Mark all nodes as active, i.e. Ṽ_{a,0} ← V, Ṽ_{p,0} ← ∅.
2  x̃_{i,0} ← x^(0)_i, for all i ∈ V.
3  for l ← 1 to |V| do
4      x̃_{i,l} ← x_i(t_l), for all i ∈ V, where x_i(t_l) is the solution of Equation (7a) at time t_l, with initial conditions x(0) = [x̃_{1,l−1}, ..., x̃_{n,l−1}]^⊤, V_a = Ṽ_{a,l−1}, V_p = Ṽ_{p,l−1}, and {x̂_i(0) = 0, ∀i ∈ V_p}.
5      Ṽ_{p,l} ← Ṽ_{p,l−1} ∪ {i ∈ Ṽ_{a,l−1} : x̃_{i,l} < x̲_i}, and Ṽ_{a,l} ← Ṽ_{a,l−1} \ {i ∈ Ṽ_{a,l−1} : x̃_{i,l} < x̲_i}.
6  x* ← [x̃_{1,l}, ..., x̃_{n,l}]^⊤.
7  return x*
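The following minimal Python sketch (a centralised stand-in for the distributed iteration, reusing the protocol7 routine from the sketch above and assuming quadratic costs; not the authors' implementation) illustrates the iterative marking of passive nodes in Algorithm 1, including the early stopping criterion discussed later in this section.

```python
# A centralised sketch of Algorithm 1: nodes whose allocation falls below the
# lower bound are marked passive and protocol (7) is re-run on the rest.
import numpy as np

def algorithm1(A, grad, x_lb, X, h=0.002, steps=100000):
    n = A.shape[0]
    x = np.full(n, X / n)                     # feasible initial allocation
    passive = np.zeros(n, dtype=bool)         # all nodes start active (Step 1)
    for _ in range(n):                        # at most |V| iterations (Step 3)
        x, _ = protocol7(A, grad, x, passive, x_lb, h=h, steps=steps)   # Step 4
        newly_passive = (~passive) & (x < x_lb)                          # Step 5
        if not newly_passive.any():           # early stopping criterion
            break
        passive |= newly_passive
    return x

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
grad = lambda x: 2 * np.array([1.0, 2.0, 4.0]) * x           # phi_i = c_i x_i^2
print(algorithm1(A, grad, x_lb=np.full(3, 0.2), X=1.0))      # ~[0.533, 0.267, 0.2]
```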
According to the reasoning described at the beginning of this subsection, we ideally need to know the steady-state solution of Equation (7) at each iteration of Algorithm 1 (since we need to identify which nodes are allocated an amount of resource below their lower bounds in steady state). This implies that the time t_l in Step 4 of Algorithm 1 goes to infinity. Under this requirement, each iteration would demand infinite time and the algorithm would not be implementable. Hence, to relax the infinite-time condition, we state the following assumption on the time t_l. Assumption 4.2: Let x*_{i,l} be the steady state of x_i(t) under Equation (7), with initial conditions x(0) = [x̃_{1,l−1}, ..., x̃_{n,l−1}]^⊤, V_a = Ṽ_{a,l−1}, V_p = Ṽ_{p,l−1}, and {x̂_i(0) = 0, ∀i ∈ V_p}.¹ For each l = 1, ..., |V| − 1, the time t_l satisfies the following condition: x_i(t_l) < x̲_i if and only if x*_{i,l} < x̲_i, for all i ∈ V. According to Assumption 4.2, for the first |V| − 1 iterations we only need a solution of (7) that is close enough to the steady-state solution. We point out that, if the conditions of Proposition 4.2 are met in the lth iteration of Algorithm 1, then x_i(t) asymptotically converges to x*_{i,l}, for all i ∈ V, under Equation (7). Therefore, Assumption 4.2 is satisfied for large values of t_1, ..., t_{|V|−1}.
Taking into account all the previous considerations, the next theorem states our main result regarding the optimality of the output of Algorithm 1.
Theorem 4.1:
Assume that G is a connected graph. Moreover, assume that φ_i is a strictly convex function for all i = 1, …, n. If t_1, ..., t_{|V|−1} satisfy Assumption 4.2 and the problem stated in Equation (4) is feasible, then the output of Algorithm 1 tends to the optimal solution of the problem given in Equation (4) as t_{|V|} → ∞.
Proof:
The ith component of the output of Algorithm 1 is equal to x̃_{i,|V|} = x_i(t_{|V|}), where x_i(t_{|V|}) is the solution of Equation (7a) at time t_{|V|}, with initial conditions [x̃_{1,|V|−1}, ..., x̃_{n,|V|−1}]^⊤, V_a = Ṽ_{a,|V|}, and V_p = Ṽ_{p,|V|}. Hence, it is sufficient to prove that {x*_{1,|V|}, ..., x*_{n,|V|}} solves the problem in Equation (4). In order to do that, let us consider the following premises (the proof of each premise is written in brackets).
P1: {x̃_{1,l}, ..., x̃_{n,l}} satisfies (4b), for all l = 1, ..., |V| (this follows from Lemma 4.1 and from the fact that Σ_{i=1}^n x̃_{i,0} = X).
P2: x*_{i,l} = x̲_i, for all i ∈ Ṽ_{p,l−1} and for all l = 1, ..., |V| (this follows directly from Proposition 4.2).
P3: Ṽ_{p,l} = Ṽ_{p,l−1} ∪ {i ∈ Ṽ_{a,l−1} : x*_{i,l} < x̲_i}, and Ṽ_{a,l} = Ṽ_{a,l−1} \ {i ∈ Ṽ_{a,l−1} : x*_{i,l} < x̲_i}, for all l = 1, ..., |V| (this follows from Step 5 of Algorithm 1 and from Assumption 4.2).
P4: If for some l, Ṽ_{p,l} = Ṽ_{p,l−1}, then Ṽ_{p,l+j} = Ṽ_{p,l−1}, for all j = 0, ..., |V| − l (this can be seen from the fact that if the set of passive nodes does not change from one iteration to the next, the steady state of Equation (7a) is the same for both iterations).
P5: Ṽ_{a,l} ∪ Ṽ_{p,l} = V, for all l = 1, ..., |V| (from P3, we know that Ṽ_{a,l} ∪ Ṽ_{p,l} = Ṽ_{a,l−1} ∪ Ṽ_{p,l−1}, for all l = 1, ..., |V|. Moreover, given the fact that Ṽ_{p,0} = ∅ and Ṽ_{a,0} = V (see Step 1 of Algorithm 1), we can conclude P5).
P6: Since the problem in Equation (4) is feasible by assumption, |Ṽ_{p,l}| < |V|, for all l = 1, ..., |V| (the fact that |Ṽ_{p,l}| ≤ |V|, for all l = 1, ..., |V|, follows directly from P5. Let us prove that |Ṽ_{p,l}| ≠ |V|, for all l = 1, ..., |V|. We proceed by contradiction: assume that there exists some l such that |Ṽ_{p,l−1}| < |V| and |Ṽ_{p,l}| = |V|. Hence, from P2 and P3, we know that x*_{i,l} ≤ x̲_i, for all i ∈ V; moreover, {i ∈ Ṽ_{a,l−1} : x*_{i,l} < x̲_i} ≠ ∅. Therefore, Σ_{i=1}^n x*_{i,l} < Σ_{i=1}^n x̲_i. According to P1, we know that Σ_{i=1}^n x*_{i,l} = X; thus, X < Σ_{i=1}^n x̲_i, which contradicts the feasibility assumption).
P7: {x*_{1,|V|}, ..., x*_{n,|V|}} satisfies the constraints (4c) (in order to prove P7, we proceed by contradiction: assume that {x*_{1,|V|}, ..., x*_{n,|V|}} does not satisfy the constraints (4c). Since P2 holds, this assumption implies that {i ∈ Ṽ_{a,|V|−1} : x*_{i,|V|} < x̲_i} ≠ ∅. Therefore, Ṽ_{p,|V|} ≠ Ṽ_{p,|V|−1} (see P3). Using P4, we can conclude that Ṽ_{p,|V|} ≠ Ṽ_{p,|V|−1} ≠ ⋯ ≠ Ṽ_{p,0} = ∅, i.e. {i ∈ Ṽ_{a,|V|−j} : x*_{i,|V|−j+1} < x̲_i} ≠ ∅, for all j = 1, ..., |V|. Thus, according to P3, |Ṽ_{p,|V|}| > |Ṽ_{p,|V|−1}| > ⋯ > |Ṽ_{p,1}| > 0. Hence, |Ṽ_{p,|V|}| ≥ |V|, which contradicts P6).
P8: Σ_{i∈Ṽ_{a,l}} x*_{i,l} ≥ Σ_{i∈Ṽ_{a,l}} x*_{i,l+1} (we prove P8 as follows: using P1 and the result in Lemma 4.1, we know that Σ_{i∈V} x*_{i,l} = Σ_{i∈V} x*_{i,l+1} = X. Moreover, according to P5, V can be expressed as V = Ṽ_{a,l} ∪ Ṽ_{p,l}, where Ṽ_{p,l−1} ⊂ Ṽ_{p,l} (see P3). Thus, we have that
Σ_{i∈Ṽ_{a,l}} x*_{i,l} + Σ_{i∈Ṽ_{p,l}, i∉Ṽ_{p,l−1}} x*_{i,l} + Σ_{i∈Ṽ_{p,l−1}} x*_{i,l} = Σ_{i∈Ṽ_{a,l}} x*_{i,l+1} + Σ_{i∈Ṽ_{p,l}, i∉Ṽ_{p,l−1}} x*_{i,l+1} + Σ_{i∈Ṽ_{p,l−1}} x*_{i,l+1}.
Furthermore, since P2 holds, we have that
Σ_{i∈Ṽ_{a,l}} x*_{i,l} + Σ_{i∈Ṽ_{p,l}, i∉Ṽ_{p,l−1}} x*_{i,l} + Σ_{i∈Ṽ_{p,l−1}} x̲_i = Σ_{i∈Ṽ_{a,l}} x*_{i,l+1} + Σ_{i∈Ṽ_{p,l}, i∉Ṽ_{p,l−1}} x̲_i + Σ_{i∈Ṽ_{p,l−1}} x̲_i.
Therefore, Σ_{i∈Ṽ_{a,l}} x*_{i,l} = Σ_{i∈Ṽ_{a,l}} x*_{i,l+1} + Σ_{i∈Ṽ_{p,l}, i∉Ṽ_{p,l−1}} (x̲_i − x*_{i,l}), where x̲_i − x*_{i,l} > 0, for all i ∈ Ṽ_{p,l}, i ∉ Ṽ_{p,l−1} (according to P3). Hence, we can conclude P8).
P9: There exists k such that k ∈ Ṽ_{a,l}, for all l = 1, ..., |V| (in order to prove P9, we use the fact that, if k ∈ Ṽ_{a,l}, then k ∈ Ṽ_{a,l−j}, for all j = 1, …, l (this follows from P3). Moreover, according to P5 and P6, |Ṽ_{a,|V|}| ≠ 0; hence, there exists k such that k ∈ Ṽ_{a,|V|}. Therefore, P9 holds). P9 guarantees that Assumption 4.1 is satisfied at each iteration.
P10: φ′_i(x*_{i,l}) ≥ φ′_i(x*_{i,l+1}), for all i ∈ Ṽ_{a,l} (we prove P10 by contradiction: assume that φ′_i(x*_{i,l}) < φ′_i(x*_{i,l+1}), for some i ∈ Ṽ_{a,l}. According to Proposition 4.2, and since P1 and P9 hold, x*_{i,l} has the characteristics given in Proposition 4.1, for all i ∈ V and for all l = 1, ..., |V|. Hence, φ′_i(x*_{i,l}) has the same value for all i ∈ Ṽ_{a,l−1}, and φ′_i(x*_{i,l+1}) has the same value for all i ∈ Ṽ_{a,l}. Moreover, since Ṽ_{a,l} ⊂ Ṽ_{a,l−1} (according to P3), we have that φ′_i(x*_{i,l}) < φ′_i(x*_{i,l+1}), for all i ∈ Ṽ_{a,l}. Thus, x*_{i,l} < x*_{i,l+1}, for all i ∈ Ṽ_{a,l}, because φ′_i is strictly increasing (this follows from the fact that φ_i is strictly convex by assumption). Therefore, Σ_{i∈Ṽ_{a,l}} x*_{i,l} < Σ_{i∈Ṽ_{a,l}} x*_{i,l+1}, which contradicts P8).
Now, let us prove that {x*_{1,|V|}, ..., x*_{n,|V|}} solves the problem in Equation (4). First, the solution {x*_{1,|V|}, ..., x*_{n,|V|}} is feasible according to P1 and P7. On the other hand, from P9, it is known that there exists k such that k ∈ Ṽ_{a,l}, for all l = 1, ..., |V|. Let φ′_k(x*_{k,|V|}) = λ, where λ ∈ R. Moreover, let us define V_0 = {i ∈ V : x*_{i,|V|} > x̲_i} and V_1 = {i ∈ V : x*_{i,|V|} = x̲_i}. If i ∈ V_0, then i ∈ Ṽ_{a,|V|−1} (given the fact that, if i ∉ Ṽ_{a,|V|−1}, then i ∈ Ṽ_{p,|V|−1}, hence x*_{i,|V|} = x̲_i and i ∉ V_0). Hence, φ′_i(x*_{i,|V|}) = φ′_k(x*_{k,|V|}) = λ (this follows from the fact that φ′_j(x*_{j,l}) has the same value for all j ∈ Ṽ_{a,l−1}, which in turn follows directly from Step 4 of Algorithm 1 and Proposition 4.2).
If i ∈ V_1, then either i ∈ Ṽ_{a,|V|−1} or i ∈ Ṽ_{p,|V|−1}. In the first case, φ′_i(x*_{i,|V|}) = φ′_k(x*_{k,|V|}) = λ (following the reasoning used when i ∈ V_0). In the second case, there exists l such that i ∈ (Ṽ_{p,l} \ Ṽ_{p,l−1}); hence, φ′_i(x*_{i,l}) = φ′_k(x*_{k,l}) (this follows from the fact that, if i ∈ (Ṽ_{p,l} \ Ṽ_{p,l−1}), then i ∈ Ṽ_{a,l−1}). Furthermore, since i ∈ (Ṽ_{p,l} \ Ṽ_{p,l−1}), x*_{i,l} < x̲_i (see P3), and given the fact that φ′_i is strictly increasing, we have that φ′_i(x*_{i,l}) < φ′_i(x̲_i). Moreover, according to P10, φ′_k(x*_{k,l}) ≥ φ′_k(x*_{k,|V|}). Hence, φ′_i(x̲_i) > φ′_k(x*_{k,|V|}) = λ. In conclusion, if i ∈ V_1, then φ′_i(x*_{i,|V|}) ≥ λ. Thus, we can choose µ_i ≥ 0, for all i ∈ V, such that φ′_i(x*_{i,|V|}) − µ_i = λ, where µ_i = 0 if i ∈ V_0. Hence, let us note that ∂φ/∂x_i |_{x_i = x*_{i,|V|}} − µ_i − λ = 0, for all i ∈ V, where ∂φ/∂x_i |_{x_i = x*_{i,|V|}} = φ′_i(x*_{i,|V|}). Therefore, {x*_{1,|V|}, ..., x*_{n,|V|}, µ_1, ..., µ_n, −λ} satisfies the KKT conditions for the problem given in Equation (4). Furthermore, since φ(x) is a strictly convex function by assumption, {x*_{1,|V|}, ..., x*_{n,|V|}} is the optimal solution of that problem.
Early stopping criterion
Notice that, if the set of passive nodes does not change in the kth iteration of Algorithm 1 because all active nodes satisfy the lower bound constraints (see Step 5), then the steady-state solutions x*_{i,k} and x*_{i,k+1} are the same, for all i ∈ V, which implies that the set of passive nodes also does not change in the (k + 1)th iteration. Following the same reasoning, we can conclude that x*_{i,k} = x*_{i,k+1} = ⋯ = x*_{i,|V|}, for all i ∈ V. Therefore, in this case, {x*_{1,k}, ..., x*_{n,k}} is the solution of our resource allocation problem. Practically speaking, this implies that Algorithm 1 does not need to perform more iterations after the kth one. Thus, it is possible to implement a flag z*_i (in a distributed way) that alerts the agents if all active nodes satisfy the lower bound constraints after Step 4 of Algorithm 1. A way to do that is by applying a min-consensus protocol (Cortés, 2008) with initial conditions z_i(0) = 0 if node i is active and does not satisfy its lower bound constraint, and z_i(0) = 1 otherwise. Hence, notice that our flag z*_i (i.e. the result of the min-consensus protocol) is equal to one, for all i ∈ V, only if all the active nodes satisfy the lower bound constraints, which corresponds to the early stopping criterion described above.
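A minimal Python sketch of such a min-consensus flag (synchronous rounds assumed; not the authors' implementation) is given below: a single zero, indicating an unsatisfied lower bound, propagates to every node in at most |V| rounds on a connected graph.

```python
# Min-consensus over the neighbourhood: each node repeatedly replaces its flag
# with the minimum over itself and its neighbours.
import numpy as np

def min_consensus_flag(A, z0, rounds=None):
    n = A.shape[0]
    z = np.array(z0, dtype=float)
    for _ in range(rounds if rounds is not None else n):
        neighbour_min = np.where(A > 0, z, np.inf).min(axis=1)   # min over N_i
        z = np.minimum(z, neighbour_min)
    return z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(min_consensus_flag(A, z0=[1, 1, 0]))   # -> [0, 0, 0]: keep iterating
```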
Simulation results and comparison
In this section, we compare the performance of our algorithm with other continuous-time distributed techniques found in the literature. We have selected three techniques that are capable of addressing nonlinear problems and can handle lower bound constraints: (i) a distributed interior point method (Xiao & Boyd, 2006), (ii) the local replicator equation (Pantoja & Quijano, 2012), and (iii) a distributed interior point method with exact barrier functions (Cherukuri & Cortés, 2015). The first one is a traditional methodology that uses barrier functions; the second one is a novel technique based on population dynamics; and the third one is a recently proposed method that follows the same ideas as the first one, but replaces classic logarithmic barrier functions by exact penalty functions. Below, we briefly describe the aforementioned algorithms.
Distributed interior point (DIP) method
This algorithm is a variation of the one presented in Equation (5) that includes strictly convex barrier functions to prevent the solution to flow outside the feasible region. The barrier functions b i (x i ) are added to the original cost function as follows:
φ_b(x) = φ(x) + ϵ Σ_{i=1}^n b_i(x_i),   b_i(x_i) = −ln(x_i − x̲_i), for all i ∈ V,
where φ b (x) is the new cost function, and ϵ > 0 is a constant that minimises the effect of the barrier function when the solution is far from the boundary of the feasible set. With this modification, the distributed algorithm is described by the following equation:
ẋ_i = Σ_{j∈N_i} (φ′_{b_j}(x_j) − φ′_{b_i}(x_i)), for all i ∈ V,   (11)

where φ′_{b_i}(x_i) = dφ_i/dx_i + ϵ db_i/dx_i, i.e. φ′_{b_i}(x_i) is equal to the marginal cost plus a penalty term induced by the derivative of the corresponding barrier function.
Local replicator equation (LRE)
This methodology is based on the classical replicator dynamics from evolutionary game theory. In the LRE, the growth rate of a population that plays a certain strategy depends only on its own fitness function and on the fitness of its neighbours. Mathematically, the LRE is given by

ẋ_i = Σ_{j∈N_i} (x_i − x̲_i)(x_j − x̲_j)(v_i(x_i) − v_j(x_j)),   v_i = −φ′_i(x_i), for all i ∈ V,   (12)

where v_i is the fitness perceived by the individuals that play the ith strategy. In this case, the strategies correspond to the nodes of the network, and the fitness functions to the negative marginal costs (the minus sign appears because replicator dynamics are used to maximise utilities rather than minimise costs). On the other hand, it can be shown that, if the initial condition x(0) is feasible for the problem given in Equation (4), then x(t) remains feasible for all t ≥ 0 under the LRE.
Distributed interior point method with exact barrier functions (DIPe)
This technique follows the same reasoning as the DIP algorithm. The difference is that DIPe uses exact barrier functions (Bertsekas, 1975) to guarantee satisfaction of the lower bound constraints. The exact barrier function for the ith node is given by:

b^e_i(x_i) = (1/ε) [x̲_i − x_i]^+,

where [·]^+ = max(·, 0), 0 < ε < 1/(2 max_{x∈F} ∥∇φ(x)∥_∞), and F = {x ∈ R^n : Σ_{i=1}^n x_i = 1, x_i ≥ x̲_i} is the feasible region of x for problem (4). Using these exact barrier functions, the augmented cost function can be expressed as:

φ^e_b(x) = φ(x) + Σ_{i=1}^n b^e_i(x_i).
The DIPe algorithm is given in terms of the augmented cost function and its generalised gradient ∂φ^e_b(x) = [∂_1 φ^e_b(x), ..., ∂_n φ^e_b(x)]^⊤ as follows:

ẋ_i ∈ Σ_{j∈N_i} (∂_j φ^e_b(x) − ∂_i φ^e_b(x)), for all i ∈ V,   (13)

where
∂_i φ^e_b(x) = {φ′_i(x_i) − 1/ε}             if x_i < x̲_i,
∂_i φ^e_b(x) = [φ′_i(x_i) − 1/ε, φ′_i(x_i)]   if x_i = x̲_i,
∂_i φ^e_b(x) = {φ′_i(x_i)}                    if x_i > x̲_i.
In Cherukuri and Cortés (2015), the authors show that the differential inclusion (13) converges to the optimal solution of the problem (4), provided that x(0) is feasible.
Comparison
In order to compare the performance of our algorithm with the three methods described above, we use the following simulation scenario: a set of n nodes connected as in Figure 1 (we use this topology to verify the behaviour of the different algorithms in the face of few communication channels, since previous studies have shown that algorithms' performance decreases with the number of communication links); a cost function

φ(x) = Σ_{i=1}^n (e^{a_i(x_i − b_i)} + e^{−a_i(x_i − b_i)}),

where a_i and b_i are random numbers that belong to the intervals [1, 2] and [−1/2, 1/2], respectively; a resource constraint X = 1; and a set of lower bounds {x̲_i = 0 : i ∈ V}.
For each n, we generate 50 problems with the characteristics described above. The four distributed methods are implemented in Matlab employing the solver function ode23s. Moreover, we use the solution provided by a centralised technique as reference. The results on the average percentage decrease in the cost function reached with each algorithm and the average computation time (time taken by each algorithm for solving a problem 2 ) are summarised in Table 1. Results of DIPe for 100 and 200 nodes were not computed for practicality since the time required by this algorithm to solve a 100/200-nodes problem is very high.
We notice that the algorithm proposed in this paper always reaches the maximum reduction, regardless of the number of nodes that comprise the network. The same happens with the DIPe algorithm. This is an important advantage of our method compared to other techniques. In contrast, the algorithm based on the LRE performs far from the optimal solution. This unsatisfactory behaviour is due to the small number of links of the considered communication network. In Pantoja and Quijano (2012), the authors prove the optimality of the LRE in problems involving well connected networks; however, they also argue that this technique can converge to suboptimal solutions in other cases. On the other hand, the DIP method provides solutions close to the optimum. Nonetheless, its performance decreases when the number of nodes increases. This tendency is due to the influence of barrier functions on the original problem. Notice that, the larger the number of nodes, the bigger the effect of the barrier functions in Equation (11).
Regarding the computation time, although the convergence of the proposed method is slower than that of LRE and DIP, it is faster than the convergence of the method based on exact barrier functions, i.e. DIPe. Therefore, among the methods that guarantee optimality of the solution, our technique shows the best convergence speed. The computation time taken by DIPe is affected by the use of penalty terms that generate strong changes in the value of the cost function near the boundaries of the feasible set. The drastic variations of the generalised gradient of exact barrier functions produce oscillations of numerical solvers around the lower bounds (a visual inspection of the results given in Figure 3 of Cherukuri and Cortés (2015) confirms this claim). These oscillations are mainly responsible for the low convergence speed shown by DIPe. On the other hand, LRE and DIP exhibit the fastest convergence. Hence, LRE and DIP are appealing for applications that require fast computation and tolerate suboptimal solutions.
Applications
This section describes the use of the approach developed in this paper to solve two engineering problems. First, we present an application for sharing load in multiple chillers plants. Although this is not a large-scale application (multi-chiller plants are typically comprised of less than ten chillers; Yu & Chan, 2007), it aims to illustrate the essence of the proposed method and shows algorithm's performance in small-size problems. One of the reasons to use a distributed approach in small-/medium-size systems is due to the need of enhancing systems resilience in the face of central failures (e.g. in multiple chiller plants, central failures can occur due to cyber-attacks (Manic, Wijayasekara, Amarasinghe, & Rodriguez-Andina, 2016) against building management systems (Yu & Chan, 2007)). The second application deals with the distributed computation of the Euclidean projection of a vector onto a given set. Particularly, we use the proposed algorithm as part of a distributed technique that computes optimal control inputs for plants composed of a large number of sub-systems. This application aims to illustrate the performance of the method proposed in this paper when coping with large-scale problems.
Optimal chiller loading
The optimal chiller loading problem in multiple chiller systems arises in decoupled chilled-water plants, which are widely used in large air-conditioning systems (Chang, 2005). The goal is to distribute the cooling load among the chillers that comprise the plant in order to minimise the total amount of power used by them. For a better understanding of the problem, below we present a brief description of the system. A decoupled chilled-water plant composed of n chillers is depicted in Figure 2. The purpose of this plant is to provide a water flow f_T at a certain temperature T_s to the rest of the air-conditioning system. In order to accomplish this task, the plant needs to meet a cooling load C_L that is given by the following expression:
C L = m f T (T r -T s ), (14)
where m > 0 is the specific heat of the water, and T_r is the temperature of the water returning to the chillers. Since there are multiple chillers, the total cooling load C_L is split among them, i.e. C_L = Σ_{i=1}^n Q_i, where Q_i is the cooling power provided by the ith chiller, which, in turn, is given by
Q i = m f i (T r -T i ), (15)
where f_i > 0 and T_i are, respectively, the flow rate of chilled water and the water supply temperature of the ith chiller. As shown in Figure 2, we have that f_T = Σ_{i=1}^n f_i. In order to meet the corresponding cooling load, the ith chiller consumes a power P_i that can be calculated using the following expression (Chang, 2005):
P_i = k_{0,i} + k_{1,i} m f_i T_r + k_{2,i}(m f_i T_r)² + (k_{3,i} − k_{1,i} m f_i − k_{4,i} m f_i T_r − 2 k_{2,i}(m f_i)² T_r) T_i + (k_{5,i} + k_{6,i} m f_i + k_{2,i}(m f_i)²) T_i²,   (16)
where k j, i , for j = 0, … , 6, are constants related to the ith chiller. If we assume that the flow rate f i of each chiller is constant, then P i is a quadratic function of the temperature T i . The optimal chiller loading problem involves the calculation of the chillers' water supply temperatures that meet the total cooling load given in Equation ( 14), and minimise the total amount of power consumed by the chillers, i.e. n i=1 P i . Moreover, given the fact that each chiller has a maximum cooling capacity, we have to consider the following additional constraints:
m f_i (T_r − T_i) ≤ Q̄_i, for all i = 1, ..., n,   (17)

where Q̄_i is the maximum capacity (rated value) of the ith chiller.
Summarising, the optimal chiller loading problem can be expressed as follows:

min_{T_1,...,T_n} Σ_{i=1}^n P_i(T_i)
s.t.  Σ_{i=1}^n m f_i (T_r − T_i) = C_L,
      T_i ≥ T_r − Q̄_i/(m f_i), for all i = 1, ..., n.   (18)
Now, let us consider that we want to solve the aforementioned problem in a distributed way by using a multiagent system, in which each chiller is managed by an agent that decides the value of the water supply temperature. We assume that the ith agent knows (e.g. by measurements) the temperature of the water returning to the chillers, i.e. T r , and the flow rate of chilled water, i.e. f i . Moreover, agents can share their own information with their neighbours through a communication network with a topology given by the graph G. If each P i (T i ) is a convex function, then the problem can be solved by using the method proposed in Algorithm 1 (we take, in this case, x i = f i T i ). The main advantage of this approach is to increase the resilience of the whole system in the face of possible failures, due to the fact that the plant operation does not rely on a single control centre but on multiple individual controllers without the need for a centralised coordinator.
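The change of variables can be sketched numerically as follows (Python; the return temperature T_r, the number of chillers, and the cooling load are arbitrary illustrative assumptions, not data from the paper).

```python
# Map the chiller problem (18) into the resource allocation form (4) via
# x_i = f_i * T_i: the equality constraint becomes sum_i x_i = X and (17)
# becomes a per-node lower bound.
m = 4.19               # kW.s/(kg.degC), specific heat of water
T_r = 18.0             # degC, return temperature (assumed value)
f = [65.0] * 3         # kg/s, flow rate per chiller (3-chiller toy example)
Q_max = [1406.8] * 3   # kW, rated capacity per chiller
C_L = 0.8 * sum(Q_max) # kW, required cooling load (assumed 80% of capacity)

X = T_r * sum(f) - C_L / m                                  # total resource
x_lb = [T_r * fi - Qi / m for fi, Qi in zip(f, Q_max)]      # from (17)

print("X =", round(X, 2), " lower bounds =", [round(v, 2) for v in x_lb])
# Each agent then minimises P_i(x_i / f_i) with Algorithm 1 over this budget.
```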
Illustrative example
We simulate a chilled-water plant composed of 7 chillers. The cooling capacity and the water flow rate of each chiller are, respectively, Q̄_i = 1406.8 kW and f_i = 65 kg s⁻¹, for i = 1, ..., 7; the specific heat of the water is m = 4.19 kW s kg⁻¹ °C⁻¹; the supply temperature of the system is T_s = 11 °C; and the coefficients k_{j,i} of Equation ( 16) are given in Table 2. We operate the system at two different cooling loads, the first one is 90% of the total capacity, i.e. C_L = 0.9 Σ_{i=1}^n Q̄_i = 8862.8 kW, and the second one is 60% of the total capacity, i.e. C_L = 0.6 Σ_{i=1}^n Q̄_i = 5908.6 kW. The P_i-T_i curves are shown in Figure 3(a) for both cases; it can be noticed that all functions are convex. The results obtained for the first cooling load are shown in Figure 3(b): the most efficient chillers (see the P_i-T_i curves in Figure 3(a)) are more loaded than the less efficient ones (i.e. chiller 2 and chiller 5). This can be noticed from the fact that their supply temperatures, in steady state, reach the minimum value. Furthermore, the energy consumption is minimised and the power saving reaches 2.6%. The results for the second cooling load, i.e. C_L = 5908.6 kW, are shown in Figure 3(c), where a performance similar to that obtained with the first cooling load can be noticed. However, in this case, it is not necessary that the supply temperatures reach the minimum value to meet the required load. Again, the energy consumption is minimised and the power saving reaches 2.8%. As stated in Section 4, convergence and optimality of the method are guaranteed under the conditions given in Theorem 4.1.
In both cases we use the early stopping criterion given in Section 4.
Although other techniques have been applied to solve the optimal chiller loading problem, e.g. the ones in Chang and Chen (2009), they require centralised information. In this regard, it is worth noting that the same objective is properly accomplished by using our approach, which is fully distributed.
Distributed computation of the Euclidean projection
Several applications require computing the Euclidean projection of a vector in a distributed way. These applications include matrix updates in quasi-Newton methods, balancing of traffic flows, and decomposition techniques for stochastic optimisation (Patriksson, 2008). The problem of finding the Euclidean projection of the vector ξ onto a given set X is formulated as follows:
min_{ξ̂ ∈ X} ‖ξ - ξ̂‖_2^2,   (19)
where ∥ • ∥ 2 is the Euclidean norm. The vector that minimises the above problem, which is denoted by ξ * , is the Euclidean projection. Roughly speaking, ξ * can be seen as the closest vector to ξ that belongs to the set X . In Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015), the authors use a distributed computation of the Euclidean projection to decouple large-scale control problems. Specifically, they propose a discrete time method to address problems involving plants comprised of a large number of decoupled sub-systems whose control inputs are coupled by a constraint. The control inputs are associated with the power applied to the subsystems, and the constraint limits the total power used to control the whole plant. At each time iteration, local controllers that manage the sub-systems compute optimal control inputs ignoring the coupled constraint (each local controller uses a model predictive control scheme that does not use global information since the sub-systems' dynamics are decoupled). Once this is done, the coupled constraint is addressed by finding the Euclidean projection of the vector of local control inputs (i.e. the vector formed by all the control inputs computed by the local controllers) onto a domain that satisfies the constraint associated with the total power applied to the plant.
For a better explanation of the method, consider a plant comprised of n sub-systems. Let ûi (k) ≥ 0 be the control input computed by the ith local controller at the kth iteration ignoring the coupled constraint (non-negativity of ûi (k) is required since the control signals correspond to an applied power). Let û(k) = [ û1 (k), . . . , ûn (k)] ⊤ be the vector of local control inputs, and let u * (k) be the vector of control signals that are finally applied to the sub-systems. If the maximum allowed power to control the plant is U > 0, the power constraint that couples the control signals is given by n i=1 u * i (k) ≤ U . The vector u * (k) is calculated by using the Euclidean projection of û(k) onto a domain that satisfies the power constraint, i.e. u * (k) is the solution of the following optimisation problem (cf. Equation ( 19)):
min_{u(k)} ‖û(k) - u(k)‖_2^2   (20a)
s.t. Σ_{i=1}^n u_i(k) ≤ U,   (20b)
     u_i(k) ≥ 0, for all i = 1, ..., n,   (20c)
where u i (k) denotes the ith entry of the vector u(k).
Notice that u * (k) satisfies the power constraint and minimises the Euclidean distance with respect to the control vector û(k) that is initially calculated by the local controllers. Computation of u * (k) can be performed by using the approach proposed in this paper because the problem stated in Equation ( 20) is in the standard form given in Equation ( 4) except for the inequality constraint (20b). However, this constraint can be addressed by adding a slack variable.
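For reference, the projection in Equation (20) can also be computed centrally with the classical sort-and-threshold algorithm for simplex-type sets; a sketch is given below. This is only a sanity check for a distributed implementation (the paper computes the projection distributively via Algorithm 1); the function name and the random test data are illustrative.

```python
import numpy as np

def project_capped_simplex(u_hat, U):
    """Euclidean projection of u_hat onto {u : u >= 0, sum(u) <= U}, cf. problem (20).

    If clipping the negative entries already satisfies the power budget, that clip is
    the projection; otherwise the budget is tight and the problem reduces to the
    classical projection onto the simplex {u >= 0, sum(u) = U}.
    """
    u_hat = np.asarray(u_hat, dtype=float)
    clipped = np.maximum(u_hat, 0.0)
    if clipped.sum() <= U:
        return clipped
    # Projection onto the simplex of radius U: u_i = max(u_hat_i - theta, 0)
    s = np.sort(u_hat)[::-1]
    cssv = np.cumsum(s) - U
    rho = np.nonzero(s - cssv / (np.arange(len(u_hat)) + 1) > 0)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(u_hat - theta, 0.0)

# Mirror of the illustrative example: 100 random local inputs in [0, 1] kW, budget U = 40 kW
rng = np.random.default_rng(1)
u_hat = rng.uniform(0.0, 1.0, 100)
u_star = project_capped_simplex(u_hat, U=40.0)
print(u_star.sum(), u_star.min())   # sum <= 40 kW, all entries nonnegative
```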
Illustrative example
Consider a plant composed of 100 sub-systems. Assume that, at the kth iteration of the discrete time method presented in Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015), the control inputs that are initially computed by the local controllers are given by the entries of the vector û(k) = [ û1 (k), . . . , û100 (k)] ⊤ , where ûi (k) is a random number chosen from the interval [0, 1] kW. Furthermore, assume that the maximum allowed power to control the plant is U = 40 kW. To satisfy this constraint, the Euclidean projection described in Equation ( 20) is computed in a distributed way using Algorithm 1 with the early stopping criterion described in Section 4.
The results under a communication network with path topology (see Figure 1) are depicted in Figure 4. The curve at the top of Figure 4 describes the evolution of the Euclidean distance. Notice that the proposed algorithm minimises this distance and reaches the optimum value (dashed line), which has been calculated employing a centralised method. On the other hand, the curves at the bottom of Figure 4 illustrate the evolution of the values Σ_{i=1}^{100} u_i(k) (solid line) and min{u_i(k)} (dash-dotted line). These curves show that the constraints of the problem stated in Equation (20) are properly satisfied in steady state, i.e. Σ_{i=1}^{100} u*_i(k) = 40 kW and min{u*_i(k)} = 0 kW. As a final observation, our algorithm exhibits a suitable performance even considering that the communication graph is sparse and the optimal solution is not in the interior of the feasible domain. As shown in Section 5, this characteristic is an advantage of Algorithm 1 over population dynamics techniques such as the one proposed in Barreiro-Gomez, Obando, Ocampo-Martinez, and Quijano (2015) to compute the Euclidean projection in a distributed way.
Discussion
The method developed in this paper solves the problem of resource allocation with lower bounds given in Equation (4). The main advantage of the proposed technique is its distributed nature; indeed, our approach does not need the implementation of a centralised coordinator. This characteristic is appealing, especially in applications where communications are strongly limited. Moreover, fully distributed methodologies increase the autonomy and resilience of the system in the face of possible failures. In Section 5, we show by means of simulations that the performance of the method presented in this paper does not decrease when the number of nodes (which are related to the decision variables of the optimisation problem) is large, or the communication network that allows the nodes to share information has few channels.
In these cases, the behaviour of our approach is better than the behaviour of other techniques found in the literature, such as the DIP method or the LRE. Moreover, it is worth noting that our technique addresses the constraints as hard. This fact has two important consequences: (i) in all cases, the solution satisfies the imposed constraints, and (ii) the objective function (and therefore the optimum) is not modified (contrary to the DIP method that includes the constraints in the objective function, decreasing the quality of the solution as shown in Section 5.4). Another advantage of the method proposed in this paper is that it does not require an initial feasible solution of the resource allocation problem (4). Similarly to the DIPe technique, our method only requires that the starting point satisfies the resource constraint (4b), i.e. we need that n i=1 x i (0) = X. Notice that an initial solution x(0) that satisfies (4b) is not hard to obtain in a distributed manner. For instance, if we assume that only the kth node has the information of the available resource X, we can use (x k (0) = X, {x i (0) = 0 : i ∈ V, i ̸ = k}) as our starting point. Thus, an initialisation phase is not required. In contrast, other distributed methods, such as DIP and LRE, need an initial feasible solution of the problem (4), i.e. a solution that satisfies (4b) and (4c). Finding this starting point is not a trivial problem for systems involving a large number of variables. Therefore, for these methods, it is necessary to employ a distributed constraint satisfaction algorithm (as the one described in Domínguez-García & Hadjicostis, 2011) as a first step.
On the other hand, we notice that to implement the early stopping criterion presented at the end of Section 4, it is required to perform an additional min-consensus step in each iteration. Despite this fact, if the number of nodes is large, this criterion saves computational time, because in most of the cases, all passive nodes are identified during the first iterations of Algorithm 1.
Conclusions
In this paper, we have developed a distributed method that solves a class of resource allocation problems with lower bound constraints. The proposed approach is based on a multi-agent system, where coordination among agents is done by using a consensus protocol. We have proved that convergence and optimality of the method is guaranteed under some mild assumptions, specifically, we require that the cost function is strictly convex and the graph related to the communication network that enables the agents to share information is connected. The main advantage of our technique is that it does not need a centralised coordinator, which makes the method appropriate to be applied in large-scale distributed systems, where the inclusion of centralised agents is undesirable or infeasible. As future work, we propose to use a switched approach in order to eliminate the iterations in Algorithm 1. Moreover, we plan to include upper bound constraints in our original formulation.
Disclosure statement
No potential conflict of interest was reported by the authors.
On Switchable Languages of Discrete-Event Systems with Weighted Automata
Michael Canu and Naly Rakoto-Ravalontsalama
Abstract-The notion of switchable languages has been defined by Kumar, Takai, Fabian and Ushio in [11]. It deals with switching supervisory control, where switching means switching between two specifications. In this paper, we first extend the notion of switchable languages to n languages, (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The use of weighted automata is justified by the fact that it allows us to synthesize a switching supervisory controller based on the cost associated to each event, like the energy for example. Finally the proposed methodology is applied to a simple example.
Keywords: Supervisory control; switching control; weighted automata.
I. INTRODUCTION
Supervisory control initiated by Ramadge and Wonham [15] provides a systematic approach for the control of discrete event system (DES) plants. There has been considerable work in the DES community since this seminal paper. On the other hand, coming from the domain of continuous-time systems, hybrid and switched systems have received growing interest [12]. The notion of switching is an important feature that has to be taken into account, not only in the continuous-time domain but in the DES area too.
As for the non-blocking property, there exist different approaches. The first one is the non-blocking property defined in [15]. Since then, other types of non-blocking properties have been defined. The mutually non-blocking property has been proposed in [5]. Other approaches of mutually and globally non-blocking supervision with application to switching control are proposed in [11]. Robust non-blocking supervisory control has been proposed in [1]. Other types of non-blocking include the generalised non-blocking property studied in [13]. Discrete-event modeling with switching max-plus systems is proposed in [17], an example of mode-switching DES is described in [6], and finally a modal supervisory control is considered in [7].
In this paper we will consider the notion of switching supervisory control defined by Kumar and Colleagues in [11] where switching means switching between a pair of specifications. Switching (supervisory) control is in fact an application of some results obtained in the same paper [11] about mutually non blocking properties of languages, mutually nonblocking supervisor existence, supremal controllable, relative-closed and mutually nonblocking languages. All these results led to the definition of a pair of switchable languages [11].
In this paper, we first extend the notion of switchable languages to n languages, (n ≥ 3). Then we consider a discrete-event system modeled with weighted automata. The switching supervisory control strategy is based on the cost associated to each event, and it allows us to synthesize an optimal supervisory controller. Finally the proposed methodology is applied to a simple example.
This paper is organized as follows. In Section II, we recall the notation and some preliminaries. Then in Section III the main results on the extension of n switchable languages (n ≥ 3) are given. An illustrative example of supervisory control of AGVs is proposed in Section IV, and finally a conclusion is given in Section V.
II. NOTATION AND PRELIMINARIES
Let the discrete event system plant be modeled by a finite state automaton [10], [4] to which a cost function is added.
Definition 1: (Weighted automaton). A weighted automaton is defined as a six-tuple G = (Q, Σ, δ, q_0, Q_m, C) where
• Q is the finite set of states,
• Σ is the finite set of events,
• δ : Q × Σ → Q is the partial transition function,
• q_0 ∈ Q is the initial state,
• Q_m ⊆ Q is the set of marked states (final states),
• C : Σ → N is the cost function.
Let Σ* be the set of all finite strings of elements in Σ, including the empty string ε. The transition function δ can be generalized to δ : Σ* × Q → Q in the following recursive manner:
δ(ε, q) = q,
δ(ωσ, q) = δ(σ, δ(ω, q)) for ω ∈ Σ*.
The notation δ(σ, q)! for any σ ∈ Σ* and q ∈ Q denotes that δ(σ, q) is defined. Let L(G) ⊆ Σ* be the language generated by G, that is, L(G) = {σ ∈ Σ* | δ(σ, q_0)!}. Let K ⊆ Σ* be a language. The set of all prefixes of strings in K is denoted by pr(K), with pr(K) = {σ ∈ Σ* | ∃ t ∈ Σ*; σt ∈ K}. A language K is said to be prefix closed if K = pr(K). The event set Σ is decomposed into two subsets Σ_c and Σ_uc of controllable and uncontrollable events, respectively, where Σ_c ∩ Σ_uc = ∅. A controller, called a supervisor, controls the plant by dynamically disabling some of the controllable events.
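A toy encoding of these notions can make the definitions concrete. The sketch below is not taken from the paper: events are plain strings, the partial transition function is a dictionary, and the automaton and costs are made up for illustration.

```python
from typing import Dict, Tuple, Optional

class WeightedAutomaton:
    def __init__(self, delta: Dict[Tuple[str, str], str], q0: str,
                 marked: set, cost: Dict[str, int]):
        self.delta, self.q0, self.marked, self.cost = delta, q0, marked, cost

    def run(self, word) -> Optional[str]:
        """Generalised transition function delta(word, q0); None if undefined."""
        q = self.q0
        for sigma in word:
            if (q, sigma) not in self.delta:
                return None
            q = self.delta[(q, sigma)]
        return q

    def generates(self, word) -> bool:          # word in L(G)
        return self.run(word) is not None

    def word_cost(self, word) -> int:           # cost function C generalised to strings
        return sum(self.cost[sigma] for sigma in word)

# Two states, two events; 'b' is more expensive than 'a'.
G = WeightedAutomaton(delta={("q0", "a"): "q1", ("q1", "b"): "q0"},
                      q0="q0", marked={"q0"}, cost={"a": 1, "b": 3})
print(G.generates(("a", "b")), G.word_cost(("a", "b")))   # True 4
print(G.generates(("b",)))                                # False: delta('b', q0) undefined
```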
A sequence σ_1 σ_2 ... σ_n ∈ Σ* is called a trace, or a word in terms of language. We call a valid trace a path from the initial state to a marked state (δ(ω, q_0) = q_m, where ω ∈ Σ* and q_m ∈ Q_m). The cost is by definition non-negative. In the same way, the cost function C is generalized to the domain Σ* as follows:
C(ε) = 0,
C(ωσ) = C(ω) + C(σ) for ω ∈ Σ*.
In other words, the cost of a trace is the sum of the costs of the events that compose the trace.
Definition 2: (Controllability) [15]. A language K ⊆ L(G) is said to be controllable with respect to (w.r.t.) L(G) and Σ_uc if pr(K)Σ_uc ∩ L(G) ⊆ pr(K).
Definition 3: (Mutually non-blocking supervisor) [5]. A supervisor f : L(G) → 2^{Σ-Σ_u} is said to be (K_1, K_2)-mutually non-blocking if
K i ∩ L m (G f ) ⊆ pr(K j ∩ L m (G f )), for i, j ∈ {1, 2}. (1)
In other words, a supervisor S is said to be mutually non-blocking w.r.t. two specifications K 1 and K 2 if whenever the closed-loop system has completed a task of one language (by completing a marked trace of that language), then it is always able to continue to complete a task of the other language [5].
Definition 4: (Mutually non-blocking language) [5]. A language H ⊆ K 1 ∪K 2 is said to be (K 1 , K 2 )-mutually non-blocking if H ∩K i ⊆ pr(H ∩K j ) for i, j ∈ {1, 2}.
The following theorem gives a necessary and sufficient condition for the existence of a supervisor.
Theorem 1: (Mutually nonblocking supervisor existence) [5]. Given a pair of specifications K 1 , K 2 ⊆ L m (G), there exists a globally and mutually nonblocking supervisor f such that L m (G f ) ⊆ K 1 ∪ K 2 if and only if there exists a nonempty, controllable, relative-closed, and (K 1 , K 2 )-mutually non-blocking sublanguage of
K 1 ∪ K 2 .
The largest possible language (the supremal element) that is controllable and mutually non-blocking exists, as stated by the following theorem.
Theorem 2: (SupMRC(K 1 ∪ K 2 ) existence) [5]. The set of controllable, relative-closed, and mutually nonblocking languages is closed under union, so that the supremal such sublanguage of K 1 ∪ K 2 , denoted supM RC(K 1 ∪ K 2 ) exists.
Recall that a pair of languages K 1 , K 2 are mutually nonconflicting if pr(K 1 ∩ K 2 ) = pr(K 1 ) ∩ pr(K 2 ) [18]. K 1 , K 2 are called mutually weakly nonconflicting if K i , pr(K j ) (i ̸ = j) are mutually nonconflicting [5].
Another useful result from [5] is the following. Given a pair of mutually weakly nonconflicting languages K 1 , K 2 ⊆ L m (G), the following holds ( [5], Lemma 3). If K 1 , K 2 are controllable then K 1 ∩ pr(K 2 ), K 2 ∩ pr(K 1 ) are also controllable.
The following theorem is proposed in [11] and it gives the formula for the supremal controllable, relativeclosed, and mutually nonblocking languages.
Theorem 3: (SupMRC(K 1 ∪ K 2 )) [11]. For relative-closed specifications
K 1 , K 2 ⊆ L m (G), supM RC(K 1 ∪ K 2 ) = supRC(K 1 ∩ K 2 ).
The following theorem, also from [11] gives another expression of the supremal controllable, relative-closed, and mutually nonblocking languages. Theorem 4: [11] Given a pair of controllable, relativeclosed, and mutually weakly nonconflicting languages K 1 , K 2 ⊆ L m (G), it holds that supM RC(K 1 ∪ K 2 ) = (K 1 ∩ K 2 ).
And finally the following theorem gives a third formula of the supremal controllable, relative-closed, and mutually nonblocking languages.
Theorem 5: [11] For specifications
K 1 , K 2 ⊆ L m (G), supM RC(K 1 ∪ K 2 ) = supM C(supRC(K 1 ∩ K 2 )).
In order to allow switching between specifications, a pair of supervisors is considered, such that the supervisor is switched when the specification is switched. The supervisor f i for the specification K i is designed to enforce a certain sublanguage H i ⊆ K i . Suppose a switching in specification from K i to K j is induced at a point when a trace s ∈ H i has been executed in the f i -controlled plant. Then in order to be able to continue with the new specification K j without reconfiguring the plant, the trace s must be a prefix of H j ⊆ K j . In other words, the two supervisors should enforce the languages H i and H j respectively such that H i ⊆ pr(H j ). Hence the set of pairs of such languages are defined to be switchable languages as follows.
Definition 5: (Pair of switchable languages) [11]. A pair of specifications K 1 , K 2 ⊆ L m (G) are said to be switchable languages if SW (K 1 , K 2 ) := {(H 1 , H 2 )|H i ⊆ K i ∩ pr(H j ), i ̸ = j, and H i controllable}.
The supremal pair of switchable languages exists and is given by the following theorem.
Theorem 6: (Supremal pair of switchable languages) [11]. For specifications
K 1 , K 2 ⊆ L m (G), supSW (K 1 , K 2 ) = (supM C(K 1 ∪ K 2 ) ∩ K 1 , supM C(K 1 ∪ K 2 ) ∩ K 2 ).
III. MAIN RESULTS
We now give the main results of this paper. First, we define a triplet of switchable languages. Second we derive a necessary and sufficient condition for the transitivity of switchable languages (n = 3). Third we generalize this definition to a n-uplet of switchable languages, with n > 3. And fourth we derive a necessary and sufficient condition for the transitivity of switchable languages for n > 3.
A. Triplet of Switchable Languages
We extend the notion of pair of switchable languages, defined in [11], to a triplet of switchable languages. Definition 6: (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if they are pairwise switchable languages, that is, SW (K 1 , K 2 , K 3 ) := SW (K i , K j ), i ̸ = j, i, j = {1, 2, 3}.
Another expression of the triplet of switchable languages is given by the following lemma.
Lemma 1: (Triplet of switchable languages). A triplet of languages (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} are said to be a triplet of switchable languages if the following holds:
SW (K 1 , K 2 , K 3 ) = {(H 1 , H 2 , H 3 ) | H i ⊆ K i ∩
pr(H j ), i ̸ = j, and H i controllable}.
B. Transitivity of Switchable Languages (n = 3)
The following theorem gives a necessary and sufficient condition for the transitivity of switchable languages.
Theorem 7: (Transitivity of switchable languages, n = 3) . Given 3 specifications (K 1 , K 2 , K 3 ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, 2, 3} such that SW (K 1 , K 2 ) and SW (K 2 , K 3 ). (K 1 , K 3 ) is a pair of switchable languages, i.e. SW (K 1 , K 3 ), if and only if 1) H 1 ∩ pr(H 3 ) = H 1 , and 2) H 3 ∩ pr(H 1 ) = H 3 .
Proof: The proof can be found in [3].
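The set inclusions appearing in Definition 5 and in the transitivity conditions of Theorem 7 are easy to test on finite languages. The sketch below is only an illustration on made-up languages represented as sets of event tuples; it does not compute supremal sublanguages and the controllability requirement of Definition 5 is not checked here.

```python
def pr(K):
    """Prefix closure of a finite language (set of tuples of events)."""
    return {w[:i] for w in K for i in range(len(w) + 1)}

def is_switchable_pair(H1, K1, H2, K2):
    """H_i subset of K_i intersected with pr(H_j), i != j (cf. Definition 5)."""
    return H1 <= (K1 & pr(H2)) and H2 <= (K2 & pr(H1))

def transitivity_conditions(Ha, Hb):
    """Conditions 1) and 2) of Theorem 7 for the pair (Ha, Hb)."""
    return Ha == (Ha & pr(Hb)) and Hb == (Hb & pr(Ha))

# Toy languages over events 'a', 'b', 'c'
K1 = {("a",), ("a", "b")}
K2 = {("a", "b"), ("a", "b", "c")}
H1 = {("a",), ("a", "b")}
H2 = {("a", "b")}
print(is_switchable_pair(H1, K1, H2, K2))   # True: every trace of H_i is a prefix of a trace of H_j
print(transitivity_conditions(H1, H2))      # True, with H2 playing the role of H3
```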
C. N-uplet of Switchable Languages
We now extend the notion of switchable languages, to a n-uplet of switchable languages, with (n > 3).
Definition 7: (N-uplet of switchable languages, n > 3). A n-uplet of languages (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}, n > 2, is said to be a n-uplet of switchable languages if the languages are pairwise switchable that is, SW (K 1 , ..., K n ) := SW (K i , K j ), i ̸ = j, i, j = {1, ..., n}, n > 2.
As for the triplet of switchable languages, an alternative expression of the n-uplet of switchable languages is given by the following lemma.
Lemma 2: (N-uplet of switchable languages, n > 3).
A n-uplet of languages (K 1 , . . . , K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}, n > 3 are said to be a n-uplet of switchable languages if the following holds:
SW (K 1 , ..., K n ) = {(H 1 , ..., H n ) | H i ⊆ K i ∩
pr(H j ), i ̸ = j, and H i controllable}.
D. Transitivity of Switchable Languages (n > 3)
We are now able to derive the following theorem that gives a necessary and sufficient condition for the transitivity of n switchable languages.
Theorem 8: (Transitivity of n switchable languages, n > 3) . Given n specifications (K 1 , ..., K n ), K i ⊆ L m (G) with H i ⊆ K i , i = {1, ..., n}. Moreover, assume that each language K i is at least switchable with another language K j , i ̸ = j.
A pair of languages (K k , K l ) is switchable i.e. SW (K k , K l ), if and only if 1) H k ∩ pr(H l ) = H k , and 2) H l ∩ pr(H k ) = H l .
Proof: The proof is similar to the proof of Theorem 6 and can be found in [3]. It is to be noted that the assumption that each of the n languages be at least switchable with another language is important, in order to derive the above result.
IV. EXAMPLE: SWITCHING SUPERVISORY CONTROL OF AGVS
The idea of switching supervisory control is now applied to a discrete-event system modeled with weighted automata. We take as an illustrative example the supervisory control of a fleet of automated guided vehicles (AGVs) that move in a given circuit area.
The example is taken from [9]. A circuit is partitioned into sections and intersections. Each time an AGV moves in a new intersection or a new section, then the automaton will move to a new state in the associated automaton. An example of an area with its associated basic automaton is depicted in Figure 1.
The area to be supervised is the square depicted in Figure 1 (left). The flow directions are specified by the arrows; the four intersections {A, B, C, D} and the associated basic automaton are given in Figure 1 (right). The basic automaton is denoted G_basic = (Q_b, Σ_b, δ_b, ∅, ∅), where the initial state and the final state are not defined. The initial state is defined according to the physical position of the AGV and the final state is defined according to its mission, that is, its target position. A state represents an intersection or a section. Each state corresponding to a section is named XY_i, where X is the beginning of the section, Y its end and i the number of the AGV. For each section there are two transitions: the first transition C_XY is an input transition, which is controllable and represents the AGV moving on the section from X to Y. The second transition is an output transition U_Y, which is uncontrollable and represents the AGV arriving at the intersection Y. For example, the basic automaton depicted in Figure 1 (right) can be interpreted as follows. If AGV i arrives at intersection A, then it has two possibilities: either to go to section AB with the event C_ABi, or to go to section AD with the event C_ADi. If it goes to section AB, then the next state is AB_i. From this state, the uncontrollable event U_AB is true, so that the following state is B_i. And from B_i, the only possibility is to exit to Point F with the uncontrollable event exit_i. Now consider for example that 2 AGVs are moving in the circuit of Figure 1 (left). Assume AGV 1 is in D and AGV 2 is in AB, so that the state is (D_1, AB_2). AGV 1 leaves the area when the event exit_1 is true, so that the system will be in state (E_1, AB_2). And since AGV 1 is out of the considered area, the new state reduces to (E_1, AB_2) = (∅_1, AB_2) = (AB_2).
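The fragment of the basic automaton that is described in the text can be encoded directly as a weighted transition table. Only the transitions explicitly mentioned above are included in the sketch below; the full circuit topology of Fig. 1 and the event costs are not given in the paper, so the remaining transitions and the weights are placeholders.

```python
# Fragment of the basic automaton of Fig. 1 for AGV i = 1
C = {"C_AB1", "C_AD1"}                      # controllable events (enter a section)
U = {"U_AB", "U_AD", "exit1"}               # uncontrollable events
delta = {
    ("A1", "C_AB1"): "AB1",                 # AGV 1 moves onto section AB
    ("AB1", "U_AB"): "B1",                  # ... and reaches intersection B
    ("B1", "exit1"): "E1",                  # leaves the supervised area at Point F
    ("A1", "C_AD1"): "AD1",                 # alternative: move onto section AD
    ("AD1", "U_AD"): "D1",
}
cost = {"C_AB1": 2, "C_AD1": 3, "U_AB": 0, "U_AD": 0, "exit1": 0}   # placeholder weights

def trace_cost(word):
    return sum(cost[sigma] for sigma in word)

# Cost of reaching the exit through B versus stopping at D:
print(trace_cost(("C_AB1", "U_AB", "exit1")))   # 2
print(trace_cost(("C_AD1", "U_AD")))            # 3
```

Such costs are what the switching supervisory control strategy exploits when choosing between the specifications.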
We give here below the synthesis algorithm for calculating the supervisor S c as it aws proposed by Girault et Colleagues in [9]. For more details on the synthesis algorithm, the reader is referred to the above paper.
Algorithm 1 - Synthesis algorithm of S_C [9]
Data: G_w,1, ..., G_w,n
Result: Supervisor S_C
G_w ← {G_w,1, ..., G_w,n}
G_u ← {∅}
forall G_w,i ∈ G_w do
    G_u ← G_u ∪ U_γi(G_w,i)
end
S_C ← S(G_u,1)
G_u ← G_u \ {G_u,1}
while G_u ≠ ∅ do
    x ← get(G_u)
    S_C ← S(S_C || x)
    G_u ← G_u \ {x}
end
V. CONCLUSIONS
The notion of switchable languages has been defined by Kumar and colleagues in [11]. It deals with switching supervisory control, where switching means switching between two specifications. In this paper, we have extended the notion of switchable languages to a triplet of languages (n = 3) and we gave a necessary and sufficient condition for the transitivity of two switchable languages. Then we generalized the notion of switchable languages to an n-uplet of languages, n > 3, and we also gave a necessary and sufficient condition for the transitivity of two (out of n) switchable languages. Finally, the proposed methodology was applied to a simple example of the supervisory control of a fleet of AGVs. Ongoing work deals with a) the calculation of the supremal n-uplet of switchable languages, and b) the optimal switching supervisory control of DES exploiting the cost of the weighted automata for the synthesis strategy.
Figure 1: Selection Phases after the HDR
• 2001-2004: First and Second Year: Control and Industrial Eng. courses at DAP
• 2006-2012: MSc. MLPS (Management of Logistic and Production Systems)
• 2012-present: MSc. MOST (Management and Optimization of Supply Chains and Transport)
C. Courses given abroad
• May 2008: Univ. of Cagliari (Italy): Control of Hybrid Systems (10h) Erasmus
• Apr. 2009: Univ. Tec. Bolivar UTB, Cartagena (Colombia): Tutorial on DES (15h)
• May 2014: Univ. Tec. Bolivar UTB, Cartagena (Colombia): Intro. to DES (15h)
• Dec 2015: ITB Bandung (Indonesia): Simulation with Petri Nets (10h) Erasmus
• Apr. 2017: Univ. of Liverpool (UK): Course 1 (10h) Erasmus
• May 2017: ITB Bandung (Indonesia): Simulation with Petri Nets (10h) Erasmus
Figure 2.1: Partition and Automaton
Figure 2.2: Phase portrait of Example 1 in PWA
Figure 2.4: Phase portrait of Example 2 in PWA
Figure 2.5: Phase portrait of Example 2 in MLD
Figure 3.1: EMN Cell
Figure 4.1: Smart Grid
Figure 4.2: CDG Airport Paris-Roissy
Fig. 1. Equivalence relation between hybrid systems. Every well-posed PWA system can be re-written as an MLD system assuming that the feasible states and inputs are bounded [6, proposition 4*]. A completely well-posed MLD system can be rewritten as a PWA system [6, proposition 5*].
Fig. 3. Simulation Results for the Three-Tank System: (a) MLD Model, (b) Error between MLD and PWA [4], (c) Error between MLD and PWA (This Work).
Fig. 4. Simulation results for robotized gear shift.
The switching signal takes the value 1 if water source 1 and normal operation, 2 if water source 2 and normal operation, 3 if maintenance operation, and 4 for the change from maintenance operation.
Figure 1. States and switching signal for the Lotka-Volterra example.
) and Tan et al. (2013) have developed a decentralised technique based on broadcasting and consensus to optimally distribute a resource considering capacity constraints on each entity in the network. Nonetheless, compared to our algorithm, the approach in Dominguez-Garcia et al. (2012) and Tan et al. (2013) is only applicable to quadratic cost functions. On the other hand, Pantoja and Quijano (2012) propose a novel methodology based on population dynamics. The main drawback of this technique is that its performance is seriously degraded when the number of communication links decreases. We point out the fact that other distributed optimisation algorithms can be applied to solve resource allocation problems, as those presented in Nedic, Ozdaglar, and Parrilo (2010), Yi, Hong, and Liu (2015), and Johansson and Johansson (2009). Nevertheless, the underlying idea in these methods is different from the one used in our work, i.e. Nedic et al. (2010), Yi et al. (2015), and Johansson and Johansson (
Theorem 2.1 (adapted from Godsil & Royle, 2001): An undirected graph G of order n is connected if and only if rank(L(G)) = n - 1.
Figure 1. Single path topology for n nodes.
Figure 2. Decoupled chilled-water plant with n chillers.
Figure 3. (a) P_i-T_i curves for each chiller, (b) evolution of supply temperatures and total power consumed by the chillers, C_L = 8862.8 kW, (c) evolution of supply temperatures and total power consumed by the chillers, C_L = 5908.6 kW.
Fig. 1. An AGV circuit (left) and its basic automaton (right).
• Chapter 2 presents the Analysis and Control of Hybrid and Switched Systems
• Chapter 3 is devoted to Supervisory Control of Discrete-Event Systems
• Chapter 4 gives the Conclusion and Future Work
Promo Course CM PC TD TP-MP PFE Resp Lang.
A1 Automatique 10h 10h 10h Fr.
A2 Optim. 10h 5h Fr.
A2 AII SED 10h 5h 5h Fr.
A3 AII SysHybrides 7.5h 7.5h Fr.
MSc. PM3E Control 7.5h 7.5h Eng.
MSc. MOST Simulation 5h 5h 5h Eng.
MSc. MOST Resp. MSc+UV 90h Eng.
A3 PFE superv. 36h Fr.
Masters PFE superv. 36h Eng.
Total 1 272h 10h 30h 30h 40h 72h 90h
Total 2 283up 15up 36up 30up 40up 72up 90up
Table 1.1: Teaching in 2015-2016
B. Responsibilities (Option AII, Auto-Prod, MSc MLPS, MSc MOST)
1.6 Organization of Invited Sessions
- Invited Session, DES and Hybrid Systems, IEICE NOLTA 2006, Bologna, Italy, Sep. 2006 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Supervisory Control, IFAC WODES, Reims, France, Sep. 2004 (jointly organized and chaired with Toshimitsu Ushio).
- Invited Session, Diagnosis and Prognosis of Discrete-Event Systems, 48th IEEE CDC Shanghai, China, Dec 2009 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Diagnosis of DES Systems, 1st IFAC DCDS 2007, Paris, France, June 2007 (jointly organized and chaired with Shigemasa Takai).
- Invited Session, Hybrid Systems, IEEE ISIC 2001, Mexico City, Mexico, Sep. 2001 (jointly organized and chaired with Michael Lemmon).
• Participant, French "Contrat Etat-Région" 2000-2006, CER STIC 9 / N.18036, J.J. Loiseau PI, Euro 182,940 (US$ 182,940).
• Co-Principal Investigator (with Ph. Chevrel), Modeling and Simulation of ESP Program, Peugeot-Citroen PSA France, Sep. 2000 - Jan 2001, FF 20,000 (US$ 3,000).
• Co-Principal Investigator (with Andi Cakravastia, ITB), LOG-FLOW, PHC NUSANTARA France Indonesia, Project N. 39069ZJ, 2017, Accepted on 31 May 2017.
• Participant, "Industrial Validation of Hybrid Systems", France and Colombia ECOS Nord Project N.C07M03, A. Gauthier and J.J. Loiseau PIs, Jan. 2007 to Dec. 2009 (3 years), Euro 12,000.
• Co-Principal Investigator (with J. Aguilar-Martin), Control and Supervision of a Distillation Process, Conseil Régional Midi-Pyrénées, France, 1994-1995, FF 200,000 (US$ 30,000).
• Participant, European Esprit Project IPCES (Intelligent Process Control by means of Expert Systems), J. Aguilar-Martin PI, 1989-1992, Euro 500,000 (US$ 500,000).
Table 1.2: Number of published papers per year (as of 30 June 2017), where (*) means submitted.
Conf. Book Chap. Book Ed. Journal Total
1994 2
1995 2 1 1
1996 1 1
1997 1
1998 2
1999 1
2000
2001 4 1
2002 1
2003 5 1
2004 6
2005 1
2006 3
2007 4
2008 2
2009 1
2010 1
2011
2012 1
2013 2
2014 4 1
2015 2 1
2016 4
2017 1+3* 1* 1+4*
[C.1] N. Rakoto-Ravalontsalama and J. Aguilar-Martin, Automatic clustering for symbolic evaluation for dynamical system supervision. In Proc. of IEEE American Control Conference (ACC 1992), Chicago, USA, June 1992, vol. 3, pp. 1895-1897.
1. Compute matrices A, B, C, D and A_i, B_i, C_i and D_i using (2.20).
2. Initialize E_1, E_2, E_3, E_4, E_5 matrices.
3. For the m switching regions S_j,i, include the inequalities defined in (2.8) or (2.10) which define the values of the m auxiliary binary variables δ_j,i.
4. Generate 2*n x_δi auxiliary binary dynamical variables associated with the n affine models and m auxiliary binary variables δ_j,i associated with the m S_ij switching regions.
5. For i = 1 to n include the inequalities using (2.13) representing the behavior on the x_δ vector.
6. For i = 1 to n generate the n_c-dimensional Z_1i vector and p_c-dimensional Z_2i vector of auxiliary variables Z.
7. For each Z_1i vector introduce the inequalities defined in (2.16), by replacing A_i and B_i by the A_i and B_i computed in Step 1. M and m are n_c-dimensional vectors of maximum and minimum values of x, respectively.
8. For each Z_2i vector introduce the inequalities defined in (2.17), by replacing C_i and D_i by the C_i and D_i computed in Step 1. M and m are p_c-dimensional vectors of maximum and minimum values of x, respectively (this completes the inequality matrices).
9. Compute the matrices defined in (2.21) and (2.22).
10. End.
3.3.2 Transitivity of Switchable Languages (n = 3)
The results can be found in [C.sub1].
Chapter 4 Conclusion and Future Work
4.1 Summary of Contributions
In this HDR Thesis, I have presented a summary of contributions in Analysis and Control of Hybrid Systems, as well as in Supervisory Control of Discrete-Event Systems.
• Analysis and Control of Hybrid and Switched Systems
-Modeling and Control of MLD Systems
-Stability of Switched Systems
-Optimal Control of Switched Systems
• Supervisory Control of Discrete-Event Systems
-Multi-Agent Based Supervisory Control
-Switched Discrete-Event Systems
-Switchable Languages of DES
I have chosen not to present some work, such as the Distributed Resource Allocation Problem, the Holonic Systems, and the VMI-Inventory Control work. However, the references of the corresponding papers are given in the complete list of publications. My perspectives of research in the coming years are threefold: 1) Control of Smart Grids, 2) Simulation with Stochastic Petri Nets, and 3) Planning and Inventory Control.
Table I. Computation and Simulation Times
Representation   Computation Time (s.)   Simulation Time (s.)
MLD              -                       592.20
PWA-[4]          93.88                   5.89
PWA-This work    72.90                   1.33
Table II. Computation and Simulation Times
Representation   Computation Time (s.)   Simulation Time (s.)
MLD              -                       296.25
PWA-[4]          115.52                  0.35
PWA-This work    155.73                  0.17
The translation from the MLD model into the PWA model took 572.19 s with the algorithm proposed here, generating 127 sub-models. The translation into the PWA model took 137.37 s with the algorithm in [3], generating 14 sub-models. The simulation time for 300 iterations with the MLD model and a MIQP algorithm took 4249.301 s, the same simulation with the PWA model obtained with the algorithm proposed here took 0.14 s, and the same simulation with the PWA model obtained using the algorithm in [4] took 0.31 s. These results are summarized in Table III.
Table III. Computation and Simulation Times
Representation   Computation Time (s.)   Simulation Time (s.)
MLD              -                       4249.30
PWA-[4]          137.37                  0.31
PWA-This work    572.20                  0.14
.t j / 2 , : : : , v .t j / m 1 .t j / D v .t j /. Now, using the equivalence stated in Proposition 5, we know that the solutions of the polynomial Problem (8) are solutions of the switching system; and in this case, it is only one. Hence, we obtain .t j / D v .t j /, which implies that .t j / D v .t j / D m 1 .t j /, where m 1 is the first moment of the vector of moments.
2d Á ,
which implies that
Table. Distributed algorithms' performance (percentage decrease, computation time): number of nodes n, proposed approach, DIP, LRE, DIPe.
available communication links); a nonlinear cost function φ
Table 2. Chillers' parameters.
Figure 4. Evolution of the Euclidean distance and constraint satisfaction using the proposed algorithm. Right y-axis corresponds to the dash-dotted line.
D Appendix 4 -Paper [C.sub1]:
© Informa UK Limited, trading as Taylor & Francis Group
Acknowledgements
VI. ACKNOWLEDGMENT
This work has been supported in part by « Contrat Etat-Région No STIC 9-18036, 2000-2006 », Nantes, France. The authors are thankful to the reviewers for their valuable comments and suggestions.
ACKNOWLEDGEMENTS
This study was supported by Proyecto CIFI 2011, Facultad de Ingeniería, Universidad de Los Andes.
ACKNOWLEDGMENT
Part of this work was carried out when the second author (N.R.) was visiting Prof. Stephane Lafortune at the University of Michigan, Ann Arbor, MI, USA, in Sep. 2013. Grant #EMN-DAP-2013-09 is gratefully acknowledged.
Funding G. Obando is supported in part by Convocatoria 528 Colciencias-Colfuturo and in part by OCAD-Fondo de CTel SGR, Colombia (ALTERNAR project, BPIN 20130001000089). |
01761889 | en | [
"spi.auto"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01761889/file/Robust_Structurally_Contrained_Controllers_ECC_2018.pdf | Design of robust structurally constrained controllers for MIMO plants with time-delays
Deesh Dileep 1 , Wim Michiels 2 , Laurentiu Hetel 3 , and Jean-Pierre Richard 4
Abstract-The structurally constrained controller design problem for linear time invariant neutral and retarded timedelay systems (TDS) is considered in this paper. The closedloop system of the plant and structurally constrained controller is modelled by a system of delay differential algebraic equations (DDAEs). A robust controller design approach using the existing spectrum based stabilisation and the H-infinity norm optimisation of DDAEs has been proposed. A MATLAB based tool has been made available to realise this approach. This tool allows the designer to select the sub-controller inputoutput interactions and fix their orders. The results obtained while stabilising and optimising two TDS using structurally constrained (decentralised and overlapping) controllers have been presented in this paper.
Index Terms-Decentralized control, Time-delay systems, H2/H-infinity methods, linear systems, Large-scale systems.
I. INTRODUCTION
This article contributes to the field of complex interconnected dynamical systems with time-delays. It is common to observe time-delays in these systems due to their inherent properties or due to the delays in communication. It is almost infeasible, if not, costly to implement centralised controllers for large scale dynamical systems (see [START_REF] Siljak | Decentralized Control of Complex Systems[END_REF] and references within). Therefore, decentralised or overlapping controllers are often considered as favourable alternatives. There are many methods suggested by multiple authors for the design of full order controllers that stabilise finite dimensional LTI MIMO systems. The design problem of such a controller is usually translated into a convex optimisation problem expressed in terms of linear matrix inequalities (LMIs). However, determining a reduced dimension (order) controller or imposing special structural constrains on the controller introduces complexity. Since the constraints on structure or dimension prevent a formulation in terms of LMIs. Such problems typically lead to solving bilinear matrix inequalities directly or using other non-convex optimisation techniques. Solutions obtaining full order controllers for higher order plants are not favourable, since lower order controllers are preferred for implementation. Time-delay systems (TDS) can be seen as infinite dimensional LTI MIMO systems. Designing a finite dimensional controller for TDS is hence equivalent to obtaining a reduced order controller. Therefore, in this paper we combine both the problems of determining a reduced order (or fixed structure) controller and imposing constrains on the structure of the controller. Linear time invariant (LTI) neutral (and retarded) timedelay systems are considered in this article. The algorithms from [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF] and [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] with a direct optimisation based approach have been extended in this paper for designing structurally constrained robust controllers. In [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF], the design of stabilising fixed-order controllers for TDS has been translated into solving a non-smooth non-convex optimisation problem of minimising the spectral abscissa. This approach is similar in concept to the design of reduced-order controllers for LTI systems as implemented in the HIFOO package (see [START_REF] Burke | HIFOO -a matlab package for fixed-order controller design and H-infinity optimization[END_REF]). The core algorithm of HANSO matlab code is used for solving the non-smooth non-convex optimisation problems (see [START_REF] Overton | HANSO: a hybrid algorithm for nonsmooth optimization[END_REF]). In many control applications, robust design requirements are usually defined in terms of H ∞ norms of the closedloop transfer function including the plant, the controller, and weights for uncertainties and disturbances. In [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], the design of a robust fixed-order controller for TDS has been translated into a non-smooth non-convex optimisation problem. 
There are other methods available to design optimal H ∞ controllers for LTI finite dimensional MIMO systems based on Riccati equations and linear matrix inequalities (see [START_REF] Doyle | Statespace solutions to standard H 2 and H∞ control problems[END_REF], [START_REF] Gahinet | A linear matrix inequality approach to h control[END_REF], and references within). However, the order of the controller designed by these methods is generally larger than or equal to the order of the plant. Also, imposing structural constrains in these controllers become difficult. There are many methods available to design decentralised controllers for non-delay systems, most of them do not carry over easily to the case of systems with time-delays. In this paper, the direct optimisation problem of designing overlapping or decentralised controllers is dealt with by imposing constrains on the controller parameters. Similar structural constrain methodologies were already mentioned in [START_REF] Siljak | Decentralized Control of Complex Systems[END_REF], [START_REF] Sojoudi | Structurally Constrained Controllers: Analysis and Synthesis, ser. SpringerLink : Bücher[END_REF], [START_REF] Alavian | Q-parametrization and an sdp for hinf-optimal decentralized control[END_REF], and [START_REF] Ozer | Simultaneous decentralized controller design for time-delay systems[END_REF]. This work allows system models in terms of delaydifferential algebraic equations (DDAEs), whose power in modelling large classes of delay equations is illustrated in the next section. In [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], the authors state that such a system description form can be adapted for designing controllers due to the generality in modelling interconnected systems and controllers. In this way, elimination technique can be avoided which might not be possible for systems with delays. In the DDAE form, the linearity of the closed-loop system, with respect to the matrices of the controllers, can be preserved for various types of delays and combinations of plants and controllers. The rest of the paper is organised as follows. Section II formally introduces the problem of time-delay systems and the existing methods available to stabilise and optimise the performance of such systems using centralised fixed-order controllers. Section III presents the proposed concept of structurally constrained controllers and its implementation methodology. Section IV provides some example MIMO problems from literature which are stabilised and optimised using structurally constrained controllers. Section V concludes the paper with a few remarks.
II. PRELIMINARIES
In this article, TDS or plants of the following form are considered,
E p ẋp (t) = A p0 x p (t) + m A i=1 A pi x p (t -h A i ) + B p1 u(t) + B p2 w(t), y(t) = C p1 x p (t), z(t) = C p2 x p (t). ( 1
)
Here t is the time variable, x p (t) ∈ R n is the instantaneous state vector at time t, similarly, u(t) ∈ R w and y(t) ∈ R z are instantaneous controlled input and measured output vectors respectively at time t. We use the notations R, R + and R + 0 to represent sets of real numbers, non-negative real numbers and strictly positive real numbers respectively, and x p ∈ R n is a short notation for (x p1 , ..., x pn ). A, B, C, D and E are constant real-valued matrices, m A is a positive integer representing the number of distinct time-delays present in the state, the inputs, the outputs, the feed-through (inputoutput) and the first order derivative of instantaneous state vector. The time-delays, 0 < h A i ≤ h max , have a minimum value greater than zero and a maximum value of h max . The instantaneous exogenous input and the instantaneous exogenous (or controlled) output are represented as w(t) and z(t) respectively. Even though there are no feed-through components, input delays or output delays, the LTI system description of ( 1) is in the most general form. This can be portrayed with the help of some simple examples. Example 1. Consider a system with non-trivial feed-through matrices.
ψ(t) = Aψ(t) + B 1 u(t) + B 2 w(t) y(t) = C 1 ψ(t) + D 11 u(t) + D 12 w(t) z(t) = C 2 ψ(t) + D 21 u(t) + D 22 w(t) If we consider x p (t) = [ψ(t) T γ u (t) T γ w (t) T ]
T , we can bring this system to the form of (1) with the help of the dummy variables (γ u and γ w ),
[I 0 0; 0 0 0; 0 0 0] ẋ_p(t) = [A B_1 B_2; 0 I 0; 0 0 I] x_p(t) + [0; -I; 0] u(t) + [0; 0; -I] w(t),
y(t) = [C_1 D_11 D_12] x_p(t),
z(t) = [C_2 D_21 D_22] x_p(t).
Example 2. Consider an LTI system with time-delays at the input.
ψ(t) = Aψ(t) + B 10 u(t) + m B i=1 B 1i u(t -h B i ) y(t) = C 1 ψ(t) + D 11 u(t) If we consider x p (t) = [ψ(t) T γ u (t) T ] T ,
we can bring this system to the form of (1) with the help of the dummy variable (γ u ),
[I 0; 0 0] ẋ_p(t) = [A B_10; 0 I] x_p(t) + Σ_{i=1}^{m_B} [0 B_1i; 0 0] x_p(t - h_i^B) + [0; -I] u(t),
y(t) = [C_1 D_11] x_p(t).
Similarly, the output delays can be virtually "eliminated".
Example 3. The presence of time-delays at the first order derivative of the state vector in an LTI system (neutral equation) can also be virtually eliminated using dummy variables.
ψ(t) + m E i=1 E i ψ(t -h E i ) = Aψ(t) + B 1 u(t) y(t) = C 1 ψ(t) + D 11 u(t)
We can bring this example LTI system to the form of (1) with the help of the dummy variables (γ ψ and γ u ), where γ ψ is given by,
γ ψ (t) = ψ(t) + m E i=1 E i ψ(t -h E i ).
More precisely, when defining x p (t) = [γ ψ (t) T ψ(t) T γ u (t) T ] T the system takes the following form consistent with (1):
[I 0 0; 0 0 0; 0 0 0] ẋ_p(t) = [0 A B_1; -I I 0; 0 0 I] x_p(t) + Σ_{i=1}^{m_E} [0 0 0; 0 E_i 0; 0 0 0] x_p(t - h_i^E) + [0; 0; -I] u(t),
y(t) = [0 C_1 D_11] x_p(t).
The system described in (1) could be controlled using the following feedback controller of the prescribed order "n c ",
ẋc (t) = A c x c (t) + B c y(t), u(t) = C c x c (t) + D c y(t). (2)
The case of n c = 0 corresponds to a static or proportional controller of the form u(t) = D c y(t). The other cases of n c ≥ 1 corresponds to that of a dynamic controller as in the form (2), where, A c is a matrix of size n c × n c . The combination of the plant (1) and the feedback controller (2) can be re-written using
x = [x T p u T γ T w x T c y T ] T , (3)
in the general form of delay differential algebraic equation (DDAE) as shown below,
E ẋ(t) = A_0 x(t) + Σ_{i=1}^m A_i x(t - τ_i) + B w(t),
z(t) = C x(t),   (4)
where,
E = [I 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 I 0; 0 0 0 0 0],
A_0 = [A_p0 B_p1 B_p2 0 0; C_p1 0 0 0 -I; 0 0 -I 0 0; 0 0 0 A_c B_c; 0 -I 0 C_c D_c].   (5)
Subsequently,
A_i = [A_pi 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0],
B = [0; 0; I; 0; 0],
C = [C_p2 0 0 0 0].
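The bookkeeping behind (4)-(5) is mechanical and can be automated. The following sketch, which is not part of the distributed MATLAB tool mentioned in the abstract, assembles the closed-loop DDAE matrices with numpy for a plant already in the form (1); the function name and the state ordering x = [x_p; u; γ_w; x_c; y] follow the text above.

```python
import numpy as np

def closed_loop_ddae(Ap0, Api_list, Bp1, Bp2, Cp1, Cp2, Ac, Bc, Cc, Dc):
    """Assemble E, A0, {Ai}, B, C of (4)-(5); plant must be in form (1)."""
    n, nu, nw = Ap0.shape[0], Bp1.shape[1], Bp2.shape[1]
    ny, nc = Cp1.shape[0], Ac.shape[0]
    N = n + nu + nw + nc + ny
    # column offsets of the state blocks and row offsets of the equation blocks
    cxp, cu, cw, cxc, cy = 0, n, n + nu, n + nu + nw, n + nu + nw + nc
    r1, r2, r3, r4, r5 = 0, n, n + ny, n + ny + nw, n + ny + nw + nc

    E = np.zeros((N, N)); A0 = np.zeros((N, N))
    E[r1:r1+n, cxp:cxp+n] = np.eye(n)               # dynamic equations of x_p
    E[r4:r4+nc, cxc:cxc+nc] = np.eye(nc)            # dynamic equations of x_c

    A0[r1:r1+n, cxp:cxp+n] = Ap0; A0[r1:r1+n, cu:cu+nu] = Bp1; A0[r1:r1+n, cw:cw+nw] = Bp2
    A0[r2:r2+ny, cxp:cxp+n] = Cp1; A0[r2:r2+ny, cy:cy+ny] = -np.eye(ny)   # y = C_p1 x_p
    A0[r3:r3+nw, cw:cw+nw] = -np.eye(nw)                                   # gamma_w = w
    A0[r4:r4+nc, cxc:cxc+nc] = Ac; A0[r4:r4+nc, cy:cy+ny] = Bc
    A0[r5:r5+nu, cu:cu+nu] = -np.eye(nu); A0[r5:r5+nu, cxc:cxc+nc] = Cc; A0[r5:r5+nu, cy:cy+ny] = Dc

    Ai = []
    for Api in Api_list:                            # delayed terms only act on x_p
        M = np.zeros((N, N)); M[r1:r1+n, cxp:cxp+n] = Api; Ai.append(M)

    B = np.zeros((N, nw)); B[r3:r3+nw, :] = np.eye(nw)
    Cmat = np.zeros((Cp2.shape[0], N)); Cmat[:, cxp:cxp+n] = Cp2
    return E, A0, Ai, B, Cmat
```

A quick shape check with random matrices of consistent dimensions confirms that the non-zero pattern reproduces (5).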
An useful property of this modelling approach using DDAEs is the linear dependence of closed-loop system matrices on the elements of the controller matrices. To stabilise and optimise the robustness of the closed-loop system, the timeindependent parameter vector of p is defined. We build on the approach of [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] and [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF], that is to directly optimise stability and performance measures as a function of vector p, which contains the parameters of the controller,
p = vec A c B c C c D c . (6)
For a centralised controller, the matrices A c , B c , C c and D c are seldom sparse when computed using the algorithms presented in [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] or [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF]. Static controllers can be considered as a special case of the dynamic controller, for which A c , B c and C c are empty matrices. The vector p would then include only the elements from D c . The objective functions used for the performance evaluation of the closed-loop system will be explained in the following subsections.
A. Robust Spectral Abscissa optimisation:
The spectral abscissa (c(p)) of the closed-loop system (4) when w ≡ 0 can be expressed as follows, c(p; τ ) = sup λ∈C {R(λ) : det∆(λ, p; τ ) = 0}, where,
∆(λ, p; τ ) = λE -A 0 (p) - m i=1 A i (p)e -λτi (7)
and R(λ) is the real part of the complex number λ. The exponential stability of the null solution of (4) determined by the condition c(p) < 0 (see [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]). However, the function τ ∈ (R + 0 ) m → c(p; τ ) might not be continuous and could be sensitive to infinitesimal delay changes (in general, as neutral TDS could be included in ( 1)). Therefore, we define the robust spectral abscissa C(p; τ ) as in the following way,
C(p; τ) := lim_{ε→0+} sup_{τ_e ∈ B(τ, ε)} c(p; τ_e).   (8)
In [START_REF] Sojoudi | Structurally Constrained Controllers: Analysis and Synthesis, ser. SpringerLink : Bücher[END_REF],
B(τ , ) is an open ball of radius ∈ R + centered at τ ∈ (R + ) m , B(τ , ) := { θ ∈ R m : || θ -τ || < }.
The sensitivity of the spectral abscissa with respect to infinitesimal delay perturbations has been resolved by considering the robust spectral abscissa, since this function can be shown to be a continuous function of the delay parameters (and also parameters in p), see [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]. We now define the concept of strong exponential stability.
Definition 1: The null solution of (4) when w ≡ 0 is strongly exponentially stable if there exists a number τ̂ > 0 such that the null solution of
E ẋ(t) = A_0 x(t) + Σ_{i=1}^m A_i x(t - (τ_i + δτ_i))
is exponentially stable for all δτ ∈ R^m satisfying ‖δτ‖ < τ̂ and τ_i + δτ_i ≥ 0, i = 1, ..., m.
In [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF] it has been shown that the null solution is strongly exponentially stable iff C(p) < 0. To obtain a strongly exponentially stable closed-loop system and to maximise the exponential decay rate of the solutions, the controller parameters (in p) are optimised for minimum robust spectral abscissa, that is,
min p -→ C(p). (9)
B. Strong H ∞ norm optimisation
The transfer function from w to z of the system represented by ( 4) is given by,
G_zw(λ, p; τ) := C (λE - A_0(p) - Σ_{i=1}^m A_i(p) e^{-λτ_i})^{-1} B.   (10)
The H_∞ norm of a stable system with the transfer function given in (10) may, like the spectral abscissa, be sensitive to infinitesimal delay perturbations. Therefore, analogously to (8), the strong H_∞ norm is considered,
|||G_zw(jω, p; τ)|||_∞ := lim_{ε→0+} sup_{τ_e ∈ B(τ, ε)} ‖G_zw(jω, p; τ_e)‖_∞.
Contrary to the (standard) H_∞ norm, the strong H_∞ norm continuously depends on the delay parameters. The continuous dependence also holds with respect to the elements of the system matrices, which include the elements in p (see [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF]).
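A rough numerical check of the H_∞ criterion can be obtained by simply evaluating (10) on a frequency grid. The sketch below gives only a lower estimate of the standard H_∞ norm (it can miss sharp peaks and says nothing about the strong H_∞ norm, which requires the dedicated level-set algorithms used in the literature); it is included purely as a sanity check for a given parameter set.

```python
import numpy as np

def hinf_grid_estimate(E, A0, Ai, taus, B, C, wmax=1e3, nfreq=2000):
    """Lower estimate of ||G_zw||_inf by gridding the imaginary axis, cf. (10)."""
    best = 0.0
    for w in np.linspace(0.0, wmax, nfreq):
        M = 1j * w * E - A0
        for Ak, tk in zip(Ai, taus):
            M = M - Ak * np.exp(-1j * w * tk)
        G = C @ np.linalg.solve(M, B)                      # G_zw(j w)
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best
```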
To improve the robustness expressed in terms of a H ∞ criterion, controller parameters (in p) are optimised for minimum strong H ∞ norm. This brings us to the optimisation problem,
min p -→ |||G zw (jω, p; τ )||| ∞ .
To solve the non-smooth non-convex objective function involving the strong H ∞ norm, it is essential to start with an initial set of controller parameters for which the closedloop system is strongly exponentially stable. If this is not the case, a preliminary optimisation is performed based on minimising the robust spectral abscissa.
Weighted sum approach: A simple weighted sum based optimisation approach can also be performed using the two objectives mentioned in the sub-sections II-A and II-B.
The controller parameters (in p) can be optimised for the minimum of a multi-objective function f o (p), that is,
min p f o (p), (12)
where,
f_o(p) = ∞ if C(p) ≥ 0, and f_o(p) = α C(p) + (1 - α) |||G_zw(jω, p)|||_∞ if C(p) < 0.   (13)
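The weighted objective (13) is easy to wrap around any pair of performance evaluators. In the sketch below the robust spectral abscissa and the strong H_∞ norm are assumed to be supplied as user callables (for instance wrappers around an external tool); they are not implemented here, and the derivative-free local search is only a rough stand-in for the nonsmooth solvers (HANSO-type methods) actually used in this work.

```python
import numpy as np
from scipy.optimize import minimize

def f_o(p, robust_abscissa, strong_hinf, alpha=0.5):
    """Weighted-sum objective (13); infinite if the closed loop is not strongly stable."""
    C = robust_abscissa(p)
    if C >= 0.0:
        return np.inf
    return alpha * C + (1.0 - alpha) * strong_hinf(p)

def tune(p0, robust_abscissa, strong_hinf, alpha=0.5):
    """Minimise f_o starting from controller parameters p0 (nonsmooth problem: local result only)."""
    res = minimize(lambda p: f_o(p, robust_abscissa, strong_hinf, alpha),
                   p0, method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
    return res.x, res.fun
```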
III. DESIGN OF STRUCTURALLY CONSTRAINED CONTROLLERS
The direct optimisation-based approach with structural constraints on selected elements within the controller matrices (A_c, B_c, C_c and D_c) is presented in this section. These constrained (fixed) elements are not considered as variables in the optimisation problem. Let us consider a matrix C_M which contains all the controller gain matrices. This matrix is later vectorised to construct the vector p containing the optimisation variables,
C_M = [ A_c  B_c
        C_c  D_c ].
Some of the elements within C M are fixed, and are not to be considered in the vector p. This can be portrayed with the help of an example problem of designing overlapping controllers.
Example 4. Let us consider an example MIMO system for which a second order controller (with an overlapping configuration of C M ) has to be designed as follows,
[ ẋ_c1 ; ẋ_c2 ; u_1 ; u_2 ] = C_M [ x_c1 ; x_c2 ; y_1 ; y_2 ],
C_M = [ a_c11   0       b_c11   b_c12
        0       a_c22   0       b_c22
        c_c11   0       d_c11   d_c12
        0       c_c22   0       d_c22 ].    (14)
This MIMO system has two inputs and two outputs. If b_c12 and d_c12 were zero elements, we would have decentralised sub-controllers: input, output and sub-controller state interactions would all be decoupled. Since b_c12 and d_c12 are not fixed to zero, we have to design overlapping sub-controllers: only the input and sub-controller state interactions are decoupled, while one of the measured outputs is shared between the sub-controllers. In this example, to optimise the overlapping (or decentralised) sub-controllers without losing their structure, we must keep the zero elements fixed. The difference between centralised, decentralised and overlapping configurations can be visualised with the help of Fig. 1.
In general, imposing zero values to specific controller parameters could lead to segments (sub-controllers) within one controller having restricted access to certain measured outputs and/or restricted control of certain inputs.
A. Decentralised and overlapping controllers
As mentioned earlier, it is possible to design decentralised and overlapping controllers using the principle of structural constraints. The structural constraints can be enforced on C_M in Example 4 with the help of a matrix F_M.
f_Mij = { 1, if c_Mij is an optimisation variable
          0, if c_Mij is a fixed element.    (15)
In (15), c_Mij and f_Mij denote the elements of the i-th row and the j-th column of the matrices C_M and F_M, respectively. By definition, the sizes of the matrices C_M and F_M are identical.
p = vec_{F_M}(C_M) = vec_{F_M}( [ A_c  B_c ; C_c  D_c ] )    (16)
Here, vec_{F_M}(C_M) is a vector containing those elements of C_M for which the corresponding element of F_M is one, see (15). The elements in vec_{F_M}(C_M) and vec(C_M) appear in the same order. We obtain the new controller parameter vector p using (16).
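The selection operator vec_{F_M} of (16) amounts to gathering the free entries of C_M in column-stacking order and, conversely, scattering an updated parameter vector back into C_M. The following sketch (function names are mine, not the interface of the published tool) implements exactly that; the column-wise ordering reproduces the vector p given in (19) for Example 4 below.

import numpy as np

def gather_parameters(C_M, F_M):
    # p = vec_{F_M}(C_M): free entries of C_M in column-stacking (vec) order.
    flat_c = C_M.flatten(order="F")
    flat_f = (F_M == 1).flatten(order="F")
    return flat_c[flat_f]

def scatter_parameters(C_M, F_M, p):
    # Write an updated p back into the free entries; fixed entries are preserved.
    flat_c = C_M.flatten(order="F").astype(float)
    flat_f = (F_M == 1).flatten(order="F")
    flat_c[flat_f] = p
    return flat_c.reshape(C_M.shape, order="F")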
For this purpose we define two interaction matrices, M_Cu and M_Cy, which encode the interactions between inputs, outputs and sub-controllers, and a vector nCa containing the orders of all the sub-controllers. M_Cu, M_Cy and nCa are given as inputs to the algorithm for the design of a decentralised or overlapping structurally constrained controller. Letting m_Cuij and m_Cyij denote the elements of the i-th row and the j-th column of M_Cu and M_Cy respectively, we have
m_Cuij = { 1, if the i-th sub-controller handles the j-th input
           0, otherwise
m_Cyij = { 1, if the i-th sub-controller considers the j-th output
           0, otherwise.
Referring back to Example 4, the inputs given to the algorithm for designing (14) are
M_Cu = [ 1  0
         0  1 ],   M_Cy = [ 1  1
                            0  1 ],   nCa = [1  1]^T.    (17)
Therefore, we consider two first-order sub-controllers. We need to fix some elements of the matrix C_M to zero in order to have the same form as the matrix in (14). Subsequently, with the information available in (17), it is also possible to obtain the matrix F_M,
F_M = [ 1  0  1  1
        0  1  0  1
        1  0  1  1
        0  1  0  1 ].    (18)
Using (18), we can construct the new C_M as in (14); this is the structurally (or sparsity) constrained form of the controller matrix. The corresponding vector p for Example 4 is
p = [ a_c11  c_c11  a_c22  c_c22  b_c11  d_c11  b_c12  b_c22  d_c12  d_c22 ]^T.    (19)
We can represent the matrix F M in general form with the help of the matrix of ones (in what follows, J n×n denotes the matrix of size n by n with every entry equal to one). If l is the total number of sub-controllers, then k ∈ {1, ..., l} and n c k is the order of the k th sub-controller. If the total number of inputs is w, then h ∈ {1, ..., w}. Similarly, when the total number of outputs is z, then j ∈ {1, ..., z}. For Example 4 with input as in (17), there are two sub-controllers, two inputs and two outputs, then l = 2, w = 2 and z = 2 respectively. The general representations for matrices J n×n and F M are given below.
F_M = [ diag( J_{n_c1 × n_c1}, ..., J_{n_cl × n_cl} )      [ m_Cykj · J_{n_ck × 1} ]_{k,j}
        [ m_Cuhk · J_{1 × n_ck} ]_{h,k}                     M_Cu^T M_Cy                    ].
Here we use [ · ]_{i,j} to denote the (i, j)-th block of a matrix. For both the overlapping and the decentralised case, A_c takes a block-diagonal form, as shown below.
A_c = diag( A_c1, ..., A_cl ).
The matrices B_c, C_c and D_c will be sparsity constrained, but they need not be block diagonal in structure; this also holds for the decentralised configuration. The sparsity constraints are defined based on the interaction matrices and the orders of the sub-controllers. Consequently, interaction matrices M_Cu and M_Cy which are not of diagonal form will result in controller gain matrices B_c, C_c and D_c which are not of block-diagonal form. However, this does not restrict the implementation of the tool in any way.
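The general recipe above translates directly into code: a block-diagonal ones-pattern for A_c, M_Cy- and M_Cu-driven patterns for B_c and C_c, and a coupling pattern for D_c. The sketch below (the function name and the use of NumPy/SciPy are my own choices, not the interface of the published tool) reproduces the mask F_M of (18) from the data of (17).

import numpy as np
from scipy.linalg import block_diag

def build_structure_mask(M_Cu, M_Cy, nCa):
    # M_Cu: l x w (sub-controller vs input), M_Cy: l x z (sub-controller vs output),
    # nCa: orders of the l sub-controllers.
    A_blk = block_diag(*[np.ones((nk, nk), dtype=int) for nk in nCa])
    # Row block k of the B_c mask: m_Cy[k, j] repeated over the nk states of sub-controller k.
    B_blk = np.vstack([np.outer(np.ones(nk, dtype=int), M_Cy[k]) for k, nk in enumerate(nCa)])
    # Column block k of the C_c mask: m_Cu[k, h] repeated over the nk states of sub-controller k.
    C_blk = np.hstack([np.outer(M_Cu[k], np.ones(nk, dtype=int)) for k, nk in enumerate(nCa)])
    # D_c mask: entry (h, j) is free iff some sub-controller uses input h and output j.
    D_blk = (M_Cu.T @ M_Cy > 0).astype(int)
    return np.block([[A_blk, B_blk], [C_blk, D_blk]])

# Example 4, data of (17): prints the mask F_M of (18).
M_Cu = np.array([[1, 0], [0, 1]])
M_Cy = np.array([[1, 1], [0, 1]])
nCa  = [1, 1]
print(build_structure_mask(M_Cu, M_Cy, nCa))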
B. Other controllers
One can use the concept of structural constraints to design many other controllers. A kind of distributed controller can be considered by including the off-diagonal elements of the A c matrix in the vector p. PID controllers are commonly used as feedback controllers in the industry. It is also possible to structurally constrain the dynamic controller to represent a PID controller and optimise its gains. Let us consider the PID controller mentioned in [START_REF] Toscano | Structured Controllers for Uncertain Systems: A Stochastic Optimization Approach[END_REF].
K(s) = K_P + K_I (1/s) + K_D s / (1 + τ_d s),    (20)
for which a realisation is determined by the controller matrices,
[ A_c  B_c
  C_c  D_c ] = [ 0    0              K_I
                 0   −(1/τ_d) I     −(1/τ_d²) K_D
                 I    I              K_P + (1/τ_d) K_D ]    (21)
Here τ_d is the time constant of the filter applied to the derivative action. Physical realisability is safeguarded by ensuring the properness of the PID controller by means of this first-order low-pass filter (see [START_REF] Toscano | Structured Controllers for Uncertain Systems: A Stochastic Optimization Approach[END_REF]). If we assume τ_d to be a constant, we can convert this into an optimisation problem for the proposed algorithm as given below.
F_M = [ 0  0  1
        0  0  1
        0  0  1 ]   →   C_M = [ 0    0             b_c11
                                0   −(1/τ_d) I     b_c21
                                I    I             d_c11 ]   →   p = [ b_c11  b_c21  d_c11 ]^T.
The new values of the gains of the PID controller can be obtained from the optimised dynamic controller using K_I = b_c11, K_D = −τ_d² b_c21 and K_P = d_c11 − (1/τ_d) K_D.
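A small helper applying these recovery formulas is sketched below; the function name is illustrative, and for a MIMO PID the same expressions apply entrywise to NumPy arrays.

def pid_gains_from_controller(b_c11, b_c21, d_c11, tau_d):
    # Recover (K_P, K_I, K_D) of (20) from the optimised entries of (21).
    K_I = b_c11
    K_D = -tau_d**2 * b_c21
    K_P = d_c11 - K_D / tau_d
    return K_P, K_I, K_D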
IV. EXAMPLE MIMO PROBLEMS

In this section, two MIMO plants with time-delays are used by the proposed algorithm to obtain structurally constrained controllers. Basic information on the structure of these plants is given in Table I.
The results obtained for the closed-loop systems formed by the plants and the decentralised or overlapping controllers are shown in Table II. Only the final results are presented in the table due to space limitations¹. In both example problems, when α = 1 the controllers were optimised for minimum robust spectral abscissa (RSA), whereas when α = 0 the controllers were optimised for minimum strong H∞ norm (SHN). For the examples considered in this paper, we observe that minimisation of the strong H∞ norm comes at the cost of a reduced exponential decay rate (an increase in the value of the robust spectral abscissa). We also observe that the overlapping controllers generally perform better than the decentralised controllers, which is expected since they impose fewer structural constraints on the controller parameters.
V. CONCLUSION
In this paper, a methodology to design structurally constrained dynamic (LTI) controllers was presented. It was concluded that decentralised controllers, overlapping controllers and many other types of controllers can be considered as structurally constrained controllers, for which a generic design approach was presented. The proposed frequency-domain approach was used to design stabilising and robust fixed-order decentralised and overlapping controllers for linear time-invariant neutral and retarded time-delay systems. The approach has been implemented as an improvement to the algorithms in [START_REF] Gumussoy | Fixed-order h-infinity control for interconnected systems using delay differential algebraic equations[END_REF] and [START_REF] Michiels | Spectrum-based stability analysis and stabilisation of systems described by delay differential algebraic equations[END_REF]; the objective functions are therefore in general non-convex. This is addressed by using randomly generated initial values for the controller parameters, along with initial controllers specified by the user, and choosing the best solution among them. The algorithm presented here relies on a routine for computing the objective function and its gradient whenever the objective function is differentiable. For the spectral abscissa, the value of the objective function is obtained by computing the rightmost eigenvalues of the DDAE. The value of the H∞ norm is obtained by a generalisation of the Boyd-Balakrishnan-Kabamba / Bruinsma-Steinbuch algorithm relying on computing imaginary-axis solutions of an associated Hamiltonian eigenvalue problem. Evaluating the objective function at every iteration constitutes the dominant computational cost. By contrast, the derivatives with respect to the controller parameters are computed at a negligible cost from the left and right eigenvectors. Due to this, and because controllers of lower order are desirable in applications, introducing structural constraints does not have a considerable impact on the overall computational complexity of the control design problem.
Fig. 1. Overview of centralised, decentralised, and overlapping configurations. P is the MIMO plant with two inputs and two outputs, whereas C, C_1, and C_2 are the controllers.
TABLE I. Information on the example TDS considered.

Example plant  | Order | No. of inputs | No. of outputs | No. of time-delays
Neutral TDS    | 3     | 2             | 2              | 5
Retarded TDS   | 4     | 2             | 2              | 1
¹ Please refer to http://twr.cs.kuleuven.be/research/software/delay-control/structurallyconstrainedTDS.zip to obtain the tool and more information on the example problems and their solutions.
ACKNOWLEDGEMENTS
This work was supported by the project C14/17/072 of the KU Leuven Research Council, by the project G0A5317N of the Research Foundation-Flanders (FWO - Vlaanderen), and by the project UCoCoS, funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 675080.
01761975 | en | [ "sdu.astr.co", "sdu.astr.im" ] | 2024/03/05 22:32:13 | 2017 | https://hal.science/tel-01761975/file/Thesis.pdf
M. Daniel Rouan, Professeur
William Herschel
D Gratadour
email: damien.gratadour@obspm.fr
D Rouan
L Grosset
A Boccaletti
Y Clénet
F Marin
R Goosmann
D Pelat
P. Andrea Rojas Lobos
Extragalactic observations with Adaptive Optics: Polarisation in Active galactic Nuclei and Study of Super Stellar Clusters
Keywords: galaxies: active, galaxies: Seyfert, star clusters, radiative transfer, techniques: photometric, techniques: polarimetric, techniques: high angular resolution, methods: observational, methods: numerical.
Despite having strong theoretical models, the current limitation in our understanding of the small-scale structures of galaxies is linked to the lack of observational evidence. Many powerful telescopes and instruments have been developed in the last decades; however, one of the strongest tools, namely Adaptive Optics (AO), can only be used on a very limited number of targets. Indeed, for AO to be efficient, a bright star is required close to the scientific target, typically within 30″. This is mandatory for the AO systems to be able to measure the atmospheric turbulence, and this condition is rarely satisfied for extended extragalactic targets such as galaxies. The main part of this thesis work consisted in going deeper in the analysis of the inner tens of parsecs of Active Galactic Nuclei (AGN) by combining different techniques to obtain and to interpret new data. In this context, we developed a new radiative transfer code to analyse the polarimetric data. A second part of my work was dedicated to a high angular resolution study of Super Star Clusters (SSC) in a new system, thanks to data obtained with the AO demonstrator CANARY instrument.

The unified model for AGN states that Seyfert 1 and 2 types are the same type of object, harbouring a luminous accretion disk surrounded by a thick torus, but seen under different viewing angles. This model has been successfully tested for many years by several observers, bringing some evolutions to the initial model, but on the torus, one of its most important pieces, we still lack information because of its limited extension and of the high contrast required. We took advantage of SPHERE's extreme contrast and high angular resolution to propose a near-infrared (NIR) polarimetric observation of the archetypal Seyfert 2 galaxy NGC 1068 in the H and Ks broad bands, revealing a clear double hourglass shape of centro-symmetric pattern and a central pattern diverging from it. We continued these measurements in narrow bands (in the NIR) and at shorter wavelengths (R band) in order to study the wavelength dependency of the measured polarisation, which we are currently analysing. If the features are wavelength-dependent, this will bring new constraints on the properties of the scatterers.

Polarimetry is a powerful tool as it gives access to more information than spectroscopy or imaging alone. In particular, indications on the geometry of the scatterers, the orientation of the magnetic fields or the physical conditions of matter can be revealed thanks to the additional parameters measured (the polarisation degree and the polarisation position angle, for linear polarisation). The counterpart of polarimetric measurements is that the analysis of the data is not straightforward. The use of numerical simulations, and especially radiative transfer codes, is necessary to fully understand the observations. During the first part of my PhD, I developed a radiative transfer simulation code, MontAGN, with the aim of reproducing the features that we observed. This code allowed us to bring some constraining results on the optical depths of the structures. The optical depth of the torus was constrained to a minimum of 20 in the Ks band. Our investigations on the densities in the ionisation cone are consistent with densities of 2.0 × 10⁹ m⁻³, in the range of previously estimated values for this AGN.

The last part of my PhD work was dedicated to SSCs (also known as young massive clusters). These clusters are the most massive examples of star-forming clusters, often about 10 Myr old and still embedded in a dusty shell, and they exhibit extreme observed star formation rates. We took advantage of an instrumental run of the multi-object AO demonstrator CANARY, installed at the William Herschel Telescope in 2013, to obtain images in the H and Ks bands at higher resolution than previously achievable. Data on the galaxy IRAS 21101+5810 were reduced and analysed, constraining the age of the clusters between 10 and 100 Myr and the extinction to about A_V ≈ 3. Photometry was obtained thanks to a new algorithm to estimate the galaxy background, bringing improvements to the fitting of the clusters' luminosity distributions.
Résumé
Observations Extragalactiques avec Optique Adaptative : Polarisation dans les Noyaux Actifs de Galaxie et Étude des Super Amas d' Étoiles Abstract :
L'observation à haute résolution angulaire du centre des galaxies est l'un des défis les plus importants de l'astronomie moderne. Peu de galaxies sont en effet situées suffisamment proches de nous pour permettre une observation détaillée de leur structure et de leur différentes composantes. Un autre facteur limitant vient s'ajouter à travers le besoin critique d'une source de lumière ponctuelle à proximité de toute cible afin de rendre possible une correction par optique adaptative. Celle-ci est requise pour toute observation au sol sans laquelle la turbulence atmosphérique rendrait impossible une résolution meilleure que 0,5-1 seconde d'angle. Cette dernière restriction est relativement difficile à remplir dans le cadre de galaxies, à proximité de laquelle on ne trouve pas nécessairement d'étoile assez brillante, et la prochaine génération d'instruments a besoin de franchir ces limitations par des corrections plus vaste, plus ciblées ou applicables à un plus grand nombre de cibles.
L'observation des noyaux actifs de galaxie est particulièrement sensible à ces facteurs. Ne concernant qu'une fraction des galaxies observables et étant la partie centrale de ces galaxie, peu d'instruments permettent d'accéder à la structure interne de ces noyaux. En 1993, Antonnucci définit le modèle unifié des noyaux actifs, postulat stipulant que les différents types de noyaux actifs sont les mêmes objets intrinsèques, observés depuis différents angles de vue, suite à une observation en 1985 faite avec Miller de NGC 1068. Celle-ci est Depuis, de nombreuses études se sont portées sur les quelques principaux noyaux observables. Le tore de poussière, bloquant la lumière dans le plan équatorial de la galaxie et donc crucial pour ce modèle est néanmoins difficile à détecter du fait de sa taille relativement restreinte (autour de la dizaine à la centaine de parsecs) et du haut contraste requis pour séparer son signal de l'émission du centre du noyaux.
Cette thèse porte sur l'analyse d'observations en infrarouge proche de galaxies actives proches à l'aide d'optique adaptative afin d'accéder à une imagerie haute résolution angulaire de la région centrale de ces objets. Nous nous sommes ainsi appliqués à l'observation avec SPHERE du noyau actif NGC 1068, archétype de la galaxie à noyau actif (NAG) de type 2, très lumineuse et située relativement proche de la Terre, à 14 Mpc environ. En particulier, nous avons utilisé une technique polarimétrique afin de renforcer encore le contraste et faire apparaître une signature possible du tore, élément essentiel du modèle unifié des NAG. Le travail de thèse porte également sur l'analyse d'images infrarouges de galaxies à flambée d'étoiles afin de contraindre les paramètres décrivant les super amas stellaires, jeunes cocons de poussière très massifs abritant une formation d'étoiles très soutenue.
La première partie de ce travail de thèse porte sur les observations de ces deux types d'objets, depuis la réduction des données combinant diverses techniques à leur vi analyse à travers les différentes images obtenues. Ainsi, dans le cas des noyaux actifs, nous avons pu interpréter le signal polarimétrique observé à travers des cartes de degré et d'angle de polarisation. Ces cartes décrivent en effet le double cône d'ionisation, lié aux jets de l'objet central, mais également une région au centre, d'une extension d'environ 60 pc, où le signal trace une polarisation d'orientation constante associée à un degré de polarisation variant, de très faible sur le bord vers assez élevé (autour de 15 %) au centre même de la luminosité du noyau. L'analyse des amas s'est faite, elle, par comparaison des images à différentes longueur d'onde de ces amas, en reliant leur couleur à leurs propriétés.
La seconde partie, la plus conséquente en terme de temps et de volume de travail, a consisté à modéliser ces structures analysées afin de chercher à en reproduire le signal, notamment en polarisation. Un code de transfert radiatif a ainsi été développé, à partir d'une première version créée lors d'un stage, à travers une amélioration et une inclusion de la polarisation. Le code ainsi finalisé permet de simuler des spectres et des images, à la résolution choisie, de sources de lumières entourées de structures variées (tore, disque, cône) contenant poussières et électrons. Nous avons programmé plusieurs simulations, de façon à étudier les épaisseurs et densités des poussières des différentes structures supposées, et avons pu contraindre les dimensions et densité de plusieurs d'entre eux. Nous proposons une interprétation cohérente de la carte de polarisation observée, en terme de double diffusion. Une épaisseur minimale de tore de poussière a ainsi été trouvée, de même qu'une estimation de la densité en électron dans le cône, en accord avec les études précédentes, et en poussière dans les régions externes du tore, dans le cas d'un modèle de noyau actif en accord avec le modèle unifié.
Mots-clés : Galaxies : Seyfert, amas d'étoiles, Techniques : photométrie, polarimétrie, Haute résolution angulaire, Méthodes : observations, numériques, Transfert radiatif.
À la mémoire de Malvina À tous les rêveurs et rêveuses À Myriam viii J'ai attendu avec une impatience mêlée d'appréhension le moment d'écrire ces remerciements à toutes les personnes qui m'ont aidé à parvenir au bout de ce travail de thèse et qui ont permis sa réalisation. Les remerciements sont toujours une partie délicate à rédiger dans une thèse. Il est non seulement difficile d'y exprimer la reconnaissance que l'on a envers toutes les personnes qui ont pu nous soutenir de près ou de loin, mais de plus il me tient particulièrement à coeur de réussir à le faire le plus sincèrement possible. N'étant pas très à l'aise pour l'improvisation ou l'oral de manière générale, je n'ai pu remercier oralement le jour de ma soutenance de thèse les personnes qui m'ont permis d'accomplir ce travail de la manière que je souhaitais. C'est pourquoi je vais tacher de me rattraper ici 1 de la façon la plus honnête possible je l'espère 2 ! Cela représente évidemment un défi pour la personne réservée et pas assez organisée que je suis et je demande pardon à toutes les personnes que je manquerais de citer dans ces remerciements.
Quand il s'agit de souligner le soutien que j'ai pu avoir au cours de cette thèse, il est évident que le rôle le plus central a été tenu par mes deux directeur de thèse. Je me suis rendu compte, à posteriori, qu'ils m'avaient très rapidement fait confiance et respecté mes avis, je mesure cette chance, et ils sont le moteur principal de mon évolution en tant que chercheur pendant ces trois années.
Daniel, merci pour ta présence régulière et constante au cours de ces trois ans. Tu as su me servir de repère stable, même si tu n'es pas forcément de cet avis, et me guider petit à petit d'une façon lente et efficace au cours du temps. Ton expérience inimaginable et ton écoute m'ont énormément apporté tout au long de ces trois ans.
Je te remercie Damien pour l'impact tout aussi positif que tu as eu, d'une autre façon, sur mon travail et mon développement. Ta présence et tes conseils se sont révélés capitaux aux moments clefs de ma thèse et tu as su me remotiver d'une façon que je n'aurais pas soupçonné. Cela m'a beaucoup apporté de côtoyer ton avis, souvent optimiste devant la planification de mon travail et parfois sceptique devant mes résultats, me poussant par là à comprendre plus profondément et mieux expliquer mes raisonnements.
ix Merci à tous les deux pour la qualité de l'encadrement dont j'ai eu la chance de bénéficier et j'espère que vous avez passé autant de bons moments avec moi que j'en ai eu avec vous pendant cette collaboration.
I also would like to thanks particularly my defense jury. Catherine, Almudena, Chris and Annie, you all contributed a lot to this work, first by reading my manuscript despite the limited quality of my English, and secondly by accepting to be part of the jury, allowing me to present my work in such good conditions with great exchanges. I am glad that you, as persons that I deeply respect as great scientists, you were part of the jury that gave me the grade of doctor.
Je tiens également à remercier tout particulièrement Didier. Tu m'as montré le premier la magie du monde de la polarisation par diffusion dès mon année de Master 2 à l'Observatoire et celui des simulations numériques dans le même temps. Sans cela, je n'aurais jamais pris le courage de me lancer dans ce travail de simulation. Tu as par la suite pris le temps de m'encourager et de m'aider lors de chacune de nos interactions à ce sujet, et ta présence à ma soutenance m'a fait vraiment plaisir.
Merci aussi à Frédéric et René, pour votre aide capitale dans mon travail et votre impact sur ma vision de ma position dans notre contexte de simulation de polarisation, sans doute plus important que vous ne le soupçonnez. Merci pour votre hospitalité à Strasbourg qui m'a également touchée.
Si j'ai pu effectuer ma thèse dans d'aussi bonnes conditions, c'est que j'ai pu profiter d'une atmosphère de travail3 très conviviale et agréable. Tout particulièrement, merci Lucien, mon co-bureau de tous temps, de m'avoir accueilli dans ton bureau, pour nos échanges relaxant et pour ton soutien logistique 4 .
De même, merci Clément d'avoir entrepris le même parcours que moi au même moment, me soutenant en traversant les mêmes épreuves 5 en simultané et d'avoir contribué avec moi à la communication inter-bâtiments du pôle HRAA.
Un grand merci également à Marie, co-naufragée des pauses cafés-réunion PicSat, Vincent, Nabih, Marion, Lester, Mathias, Guillaume, Nick, Antoine, Anna, Frédéric, Sylvestre pour leur présence à mes côtés à différentes occasions et tout le bâtiment 5. Vous m'avez très rapidement adopté lors de mon début de stage puis de thèse, tout comme le bâtiment 12 en ce début d'après thèse, et j'ai vraiment apprécié ces trois années passées dans le pôle HRAA de manière générale. Merci donc à tous pour votre contribution à cette ambiance ! J'ai aussi bénéficié de soutien de la part de nombreuses personnes au sein de l'Observatoire de Meudon, que ce soit au cours de pauses, de collaborations administratives, techniques ou autres. Merci tout d'abord à Lisa, Corentin, Alan, Charly, Bastien, Sonny, Sophie et tous les doctorants, jeunes docteurs et astrophysiciens à
x tous postes avec qui j'ai passé beaucoup de temps.
J'ai eu la chance de participer à d'autres activités en dehors de la recherche proprement dites. J'ai ainsi pu découvrir ces activités qui, même si elles n'ont pas directement contribué à mon travail scientifique, font parti de la recherche et ont été importantes pour moi. Merci donc à tous mes collègue de l'organisation de la conférence Elbereth, qui a été une super aventure pour moi, et à ceux du club astro de Meudon, dans lequel je me suis investi pendant mes années de thèse. J'ai également découvert mon goût pour l'enseignement au cours des différents TP et TD que j'ai effectuées pendant ma thèse. Je remercie mes co-encadrants de m'avoir tout d'abord appris les bases puis d'avoir contribué par votre vision de l'enseignement à me donner envie d'enseigner. Merci Yann, Catherine, Pascal, Pierre, Andreas, Charlène, Benoit et les autres.
Finalement, je voudrais remercier tous mes soutiens moraux qui ont maintenu mon moral à un niveau raisonnable pendant ces 3 années de travail malgré ma disponibilité fluctuante et ma distance 6 . Je pense en particulier à Auriane, David, Daniel, Thomas, Maxime, Victoria, Émilie, Orlane, Arièle, Thibaut, Ludmilla, Ariane, Vincent, Édouard, Lauriane, Marie, Cécile, Tristan... Merci évidemment à ma famille au grand complet, qui a toujours été très compréhensive et a toujours été un support extrêmement stable sur lequel j'ai toujours pu m'appuyer. Avec un escalier prévu pour la montée on réussit souvent à monter plus bas qu'on ne serait descendu avec un escalier prévu pour la descente.
Proverbe Shadok (Jacques Rouxel)
Since the birth of instrumental astronomy, with the first refractor observations conducted around 1610 (by Galileo), many extragalactic objects have been observed. Among the first targets, the Andromeda galaxy was observed shortly after the development of refractors, in the first half of the XVIIth century. The Magellanic Clouds, two small irregular galaxies close to the Galaxy and visible to the naked eye, have been observed from the Southern hemisphere for even longer, likely for thousands of years. In the first catalogue of nebulae, compiled by [START_REF] Messier | Catalogue des Nébuleuses & des amas d' Étoiles[END_REF], we find an important number of objects located outside the borders of the Galaxy. However, these extragalactic objects were not known as such, and all astronomical bodies were long considered as belonging to our Galaxy. All non-stellar objects (those not observed as point-like and fixed) were classified into the "nebulae" category, but were not assumed to be located further away from the Earth than the stars.
This situation lasted until the great debate about the scale of the Universe that occurred around 1920, outlined by Shapley & Curtis (1921). The question was about the exact dimensions of the galactic system and about understanding whether the so-called "spirals" nebulae were intra or extra-galactic objects. These were later designated as "island universes" by Curtis. More clues were brought few years later by Hubble (1925), using Cepheids to measure distances of few of these objects, thanks to the relationship between period and magnitude of Cepheids, discovered by Leavitt & Pickering (1912). Cepheids are a particular type of stars, reasonably close to the end of their "life" and that are pulsating at a frequency precisely measurable. This led to the paper of Hubble (1926) on extra-galactic nebulae, namely galaxies, much more distant than previously thought, introducing what will become the Hubble sequence. Starting from this paper, extragalactic objects became a new domain of research. Note that the term "nebula" is today only attributed to extended gaseous (non-galaxy) objects, mainly situated inside our Galaxy. However, "nebula" was still used after the publication of the Hubble seminal paper of 1925 also to designate galaxies, like for example in Seyfert (1943) twenty years later.
This chapter aims at introducing different concepts of extragalactic objects which this thesis' work focused on. We will present the benefits of high angular resolution (HAR) and give the main limitations for the observation of extragalactic sources. Since this thesis work is based on observation in the Near InfraRed (NIR), we will focus particularly on optical and NIR instruments even though some of the concepts developed here could also be applied to instruments dedicated to other wavelength domains observations. We will detail in a second part some basics on the fields that will be investigated in this work: luminous young star clusters and active nuclei of galaxies.
Introduction to High Angular Resolution
A lot of information can be brought using the light collected over the entire objects. On extended targets, large aperture photometry, spectroscopy or polarimetry can be used to provide critical evidences to better understand the physics in astronomical objects. However it is sometimes required to have access to more details on the spatial extent of the brightness distribution of the sources in order to understand better their organisation, their components and the physics taking place on specific locations of these particular objects. And this is especially true for extragalactic sources, which are highly structured (arms, bulge, halo, Active Galactic Nucleus (AGN)...).
The angular resolution of an optical system (in the optical and NIR at least) is characterised by its Point Spread Function (PSF), representing the instrument response to a point-like source. In the case of circular apertures, like ideal telescopes (refractors or reflectors), the response is well known and is given by the Airy disk, induced by the diffraction of light on the circular aperture and represented in figure 1.1, whose first dark ring lies at an angular radius of 1.22 λ/D, with λ the wavelength and D the aperture diameter. This, or the Full Width at Half Maximum (FWHM) of the central peak (there is a factor 1.22 between these two values), is often used as the resolution limit of a telescope¹ and called the "diffraction limit". When represented in the Fourier domain, this limitation can be expressed as the cut-off frequency corresponding to the maximal spatial frequency that the telescope is able to give access to; this limits the highest frequency of the Modulation Transfer Function (MTF) to D/λ. We will detail the shape of the MTF and its characteristics when using Adaptive Optics (AO) in section 2.5.1. One immediate conclusion is that the larger the telescope is (in diameter), the higher the angular resolution will be. Real-life experiments always bring other limitations, degrading this ideal resolution. HAR observations are conducted as close as possible to this theoretical resolution.
Adaptive Optics
After the first morphological studies of Hubble (1926), part of the research on galaxies was dedicated to studying the signal arising from small regions of these galaxies, to better understand the physics at smaller scales. This was achieved thanks to 4 Chapter 1. Introduction to Extragalactic Observations at High Angular Resolution imaging, photometry or spectroscopy, but on apertures or resolution often limited to 1 . This limitation in aperture is intrinsically due to the atmosphere's turbulence.
Impact of Turbulence
When the light travels through the atmosphere, the turbulence induces small local temperature variations that in turn induce a proportional variation of the optical index, modifying slightly the phase so that the wavefront is no longer plane but becomes bumpy. When deformed wavefronts are observed through telescopes, they will be translated in the focal plane into speckles with a typical scale of the order of the diffraction limit λ/D, but scattered on a scale of about one arc-second, as shown in figure 1.2. Long exposures will then produce images at a resolution of this typical value of 1 . Turbulence in the atmosphere is the current main limiting factor for the resolution in astronomy from the ground. Despite the growing diameter of telescopes 2 , bringing theoretically a larger resolving power, the effect of turbulence does not allow to reach it. This effect depends on the quality of the site, measured through a quantity called seeing. The seeing is defined as the typical FWHM of the PSF only due to the atmosphere, integrated over a time larger than typically 100 ms. The corresponding PSF is not an Airy function nor a Gaussian and the value gives an estimate of the maximum angular resolution achievable with a perfect instrument under these atmospheric and dome conditions. Seeing can be linked to the Fried parameter r 0 , often seen as the equivalent telescope diameter giving a similar resolution as the one allowed by the atmosphere. Seeing typically has a value of about 0.5-1 in the best observing conditions. These correspond to r 0 ≈ 10-20 cm in the visible, meaning that increasing the diameter of a telescope beyond 20 cm will not improve the angular resolution.
Note that the Fried parameter and the seeing depend on the wavelength: r_0 varies as λ^{6/5}, and the seeing therefore follows

α_seeing ∝ λ / r_0 ∝ λ^{−1/5}.    (1.2)
Because of this dependency, the image quality at longer wavelengths is relatively better: the seeing decreases slowly with λ while λ/D increases, so that observations in the NIR can get closer to the diffraction limit than in the visible for a given telescope (and become diffraction-limited once λ/D exceeds the seeing).
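The following short numerical sketch (in Python; the 8 m diameter and the 0.8″ seeing at 500 nm are arbitrary illustrative values, not measurements) compares the diffraction-limited FWHM with the seeing in a few bands, showing that uncorrected ground-based images remain seeing-limited:

import numpy as np

RAD_TO_ARCSEC = 180.0 / np.pi * 3600.0

def diffraction_fwhm_arcsec(wavelength_m, diameter_m):
    # FWHM of the Airy core is about lambda/D; the first dark ring lies at 1.22 lambda/D.
    return wavelength_m / diameter_m * RAD_TO_ARCSEC

def seeing_arcsec(wavelength_m, seeing_500nm=0.8):
    # Equation (1.2): the seeing scales as lambda^(-1/5).
    return seeing_500nm * (wavelength_m / 500e-9) ** (-0.2)

for band, lam in (("V", 550e-9), ("H", 1.65e-6), ("Ks", 2.2e-6)):
    print(band, round(diffraction_fwhm_arcsec(lam, 8.0), 3), round(seeing_arcsec(lam), 2))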
This atmospheric limitation is critical when trying to map the light distribution of astronomical objects. This is especially true in the extragalactic domain where objects are very extended, typically on the order of the thousands or millions of parsecs. However, these objects are distant (the Magellanic clouds, roughly separating the intra and extra-galactic objects are located at 100 kpc) and at these distances, even such large objects do have a small angular diameter. Andromeda galaxy is an exception. Standing at about 2 Mpc, it has a diameter of few degrees, which is by far the largest angular diameter for any extragalactic object. Most of the galaxies are observable under an angular diameter under 1 , and it is therefore challenging to achieve high spatial resolution on some of their features, hundreds or thousands times smaller than the whole galaxy.
Methods for Achieving High Angular Resolution
As an answer, observers have been trying to develop and implement instrumental techniques to overcome this limitation in resolution caused by the atmosphere. The first two ideas were interferometry and lucky imaging. Interferometry, consists in recombining beams from different apertures (from two or several telescopes, from holes in a mask, from antennae...) to produce interferences and obtain spatial information with a resolution defined by the length of the base and not by the diameter of the aperture. With this technique, it is possible to reach a similar resolution to that of a unique aperture with a diameter of the scale of the base. It is however still sensitive to atmospheric turbulence when the aperture diameter is larger than the Fried parameter, limiting the amount of targets observable with this technique to bright sources. Lucky imaging corresponds to the strategy of observing with very short exposure times in order to try to catch the short periods during which the turbulence has the smallest amplitude on the surface of the pupil. It is not often the case, but by taking a long series of short images, some of them will have a resolution closer to the diffraction limit. This technique must be combined with shift-and-add processing. In this case as well, bright targets are required to receive enough photons on the detector at each exposure.
Another alternative, started during the 1970's, is to send telescopes to space. In this case, image quality is directly limited by the diffraction of the telescope and by Resolution the quality of the optics. The Hubble Space Telescope (HST) is a good illustration of the advantages of spatial telescopes upon ground based telescopes. From 1994, images obtained through its cameras (once the optics has been corrected of strong aberrations) improved hugely the resolution of images on many extragalactic targets in the visible and NIR. Such space instruments are however limited in size. They are expensive to build and launch, and rockets have limited capacities in size. Note that this limit is challenged thanks to new techniques developed in order to bring to space larger telescopes. One major example is the James Webb Space Telescope (JWST) with an unfolding primary mirror that should be launched in the following year and that will have a diameter of 6.5 m. Furthermore, once in space, fixing errors or problems is difficult (if even possible) and expensive, as demonstrated by the HST. Note that in other wavelength bands, in particular for those which are not observable from the ground because of atmospheric absorptions bands (e.g. the Ultra Violet (UV) or parts of the InfraRed (IR)), space telescopes are still the only way to conduct observations and have proven to be very important facilities.
At the same epoch as the HST, the idea of directly correcting the effect of turbulence on ground-based telescopes was made possible using AO systems, in which the wavefront is measured using a camera, the difference between this wavefront and a perfect wavefront is computed, and the correction is applied by sending commands to a Deformable Mirror (DM). The system works in closed loop, the wavefront analysis being done after the DM. The first AO systems were available on sky around 1990, with for instance COME-ON (CGE Observatoire de Meudon ESO ONERA) in 1989 on the 1.5 m telescope of the Observatoire de Haute Provence (OHP), see [START_REF] Kern | Active telescope systems[END_REF]. After that, AO was installed on a growing number of telescopes, most of the large telescopes being currently equipped.
Principle of Adaptive Optics
The basic principle of AO is to split the beam into two components, as late as possible in the optical path, as shown in figure 1.3. The first sub-beam is sent toward the science instrument. The second sub-beam goes to a sensor, measuring the wavefront. The most common device for this part of the system is the Shack-Hartmann wavefront sensor, splitting the beam into several sub-pupils, all focused on the same detector plane (see figure 1 .4). The difference between the sub-pupil optical axis, precisely calibrated, and the position of the spot from the guide star is proportional to the local slope of the wavefront in this sub-aperture and is therefore used to reconstruct the complete wavefront. Note that other wavefront sensors are growing in maturity and are used to equip large telescope as an alternative to the Shack-Hartmann sensor. Pyramid sensor is one example of such devices (describing their operation is beyond the scope of this work since only instruments with Shack-Hartmann sensors have been used in the context of this thesis, for more information, refer to Ragazzoni & Farinato (1999)).
AO does have some limitations, because most wavefront sensor concepts require a bright reference star close to the scientific target. As the turbulence is measured in the particular direction of this guide source, the further off-axis we go from the star in the field of view, the lower the quality of the correction is. Indeed, the portion of atmosphere crossed by the light from
the guide star will be common with the one crossed by the light coming from the scientific target only if the later is close enough to the guide star. The dependence of the PSF with respect to the spatial position in the Field Of View (FOV), called anisoplanatism, has been studied since Fried (1982). For more information on its effect on Multi-Object Adaptive Optics (MOAO) or on Extremely Large Telescopes (ELT), see for instance [START_REF] Neichel | Adaptive Optics Systems[END_REF] and Clénet et al. (2015) respectively. On extended targets, this appears to be limiting the usability of AO and especially on extragalactic observations where stars density at high galactic latitude is generally low. More complex AO systems have been developed to correct the wavefront in a peculiar direction, on a larger field (Multi-Conjugate AO, see [START_REF] Beckers | Active telescope systems[END_REF] for example), or on multiple objects (MOAO, presented in section 2.1). These AO systems are particularly relevant to extragalactic observations. In any case, the corrected field will always be limited to the availability of bright stars. For this reason, the Laser Guide Star (LGS) mode, where lasers are used to create artificial point sources in the upper atmosphere (usually in the sodium layer, around 90 km above the ground) was developed. The two main ways of creating such artificial stars are to use Rayleigh backscattering (see sections 3.2.2.1 and 3.2.3 for details on scattering) or to excite sodium atoms, that will emit light by deexcitation. By using this source as the reference for turbulence measurements, AO can correct the wavefront on any direction. The only limitation is the tip/tilt error, linked to the first order motion of the image of the targets (due to turbulence), which is not accessible using this method, thus requiring a Natural Guide Star (NGS) to be fully operational. The NGS can however be fainter and selected farther away due to the larger isoplanatic patch of the tip/tilt mode, extending the sky coverage of the AO systems.
All these improvements to the AO concept provide access to an increased number of targets. The last generation of AO allowed the 8-meter class telescopes to achieve better resolution, mostly in the NIR, than the HST, thanks to very efficient correction levels. In particular, SPHERE (Spectro-Polarimetric High-contrast Exoplanet REsearch instrument) on the Very Large Telescope (VLT), used in this work, uses an extreme AO system in order to reach a high contrast to reveal exoplanets and disks around bright stars with a resolution better than 50 mas in the NIR, close to the diffraction limit at these wavelengths (about 40 mas in K band).
This thesis work is based on the use of these new AO facilities. The first part of this work focused on Super Stellar Cluster (SSC) while the second and main part targets AGN.
Super Stellar Clusters
According to their name, SSC are the largest and most massive young stellar clusters, therefore often called YMC for Young Massive Clusters. Star clusters are groups of newly born stars with a local star density much higher than in the surround-ing media, often bound by self gravitation and whose stars share an approximately identical age. They are thought to be created by the collapse of large and dusty clouds, due to a strong gravitational perturbation, triggering locally a high star formation rate when the gas collapses into stars. Some very young clusters are still embedded inside their dust cocoon while the older ones, populated with lower mass stars with a longer life time than the more massive ones, have consumed and/or expelled most of the original gas and dust and are no longer hidden.
To be considered as a SSC, stellar clusters should have a mass within the upper range of cluster masses, typically 10 3 -10 6 M , comparable to Globular Clusters (GC). Their age is often estimated to be less than 100 Myr, which is much younger than the minimal age estimated for GC about 10 Gyr (Portegies Zwart et al. 2010), but the limit depends on authors. Some of them estimate the limit at nearly 10 Myr, because at this age most of the massive stars have already reach the supernova stage (as discussed in Johnson 2001). Because of the high concentration of gas embedded within such a small volume, below 5-15 pc of radius (Bastian 2016), SSCs feature among the largest ever measured Star Formation Rate (SFR) with typically ≈ 20 M yr -1 (Bastian 2016). Such a high concentration of matter requires some particular conditions, able to trigger collapses of large amount of gas. For this reason, SSCs are often associated to interacting starburst galaxies or to Ultra-Luminous in InfraRed Galaxies (ULIRG) (Johnson 2001;Bastian 2016, described in the next section).
Properties of SSC have been described by many excellent reviews. In particular, for a precise description and discussion about the properties of SSC mass functions, refer to Johnson (2001[START_REF] O'connell | The Formation and Evolution of Massive Young Star Clusters[END_REF] or Turner (2009). One can also find a list of the whole zoology of clusters, including SSCs, in Moraux et al. (2016) where most of these definitions come from.
Host Galaxies
Starburst galaxies is an ambiguous term, but corresponds roughly to galaxies showing a SFR about 1 M yr -1 kpc -2 or greater, according to Johnson (2001). Usual SFR within a galaxy would be in the range 0.001-0.01 M yr -1 kpc -2 . ULIRG are characterised by their luminosity, much larger in the IR than in the visible L IR 10 12 L . They are an extreme category of Luminous in InfraRed Galaxy (LIRG), often defined by L IR 10 11 L (see Turner 2009). Such IR luminosity requires large concentrations of dust within the galaxy to explain that the essential of the stellar luminosity of young stars is absorbed by grains and converted in thermal IR. Both starburst and (U)LIRG are generally thought to be the consequence of an interaction between galaxies (with mergers being the stronger example), the gravitational torque inducing the falling matter toward the centre of the galaxy.
Short History
It was first noticed (for example by Sargent & Searle 1970) that some galaxies were undergoing larger star formation than average. Stellar clusters were known for
long, but the first use of the term SSC to designate particularly massive star cluster is found in van den Bergh (1971) for the starburst galaxy M 82. Schweizer (1982) made the hypothesis that such massive stellar clusters were present in NGC 7252 (from the New General Catalogue) and triggered by the recent merger event undergone by the galaxy. The term SSC was then formally re-used by Arp & Sandage (1985) for two clusters in NGC 1569 and by Melnick et al. (1985) for the clusters in NGC 1705.
The understanding of SSCs increased significantly thanks to InfraRed Astronomical Satellite (IRAS, launched in 1983) observations, according to Turner (2009). HAR imaging brought in particular by the HST was also critical, as shown for example by the observation of NGC 1275 by Holtzman et al. (1992), confirming that these objects are extremely compact (< 15 pc, see also Bastian 2016).
Later, Heckman (1998) identified that a significant fraction (about 25 %) of the massive stars in the local universe were formed in a few number of starburst galaxies, underlining the importance of starburst galaxies and of SSCs in the process of birth of massive stars.
Evolution of Super Stellar Clusters
The first discovered clusters were Open Clusters (OC) and GCs. Because they are very different, we had to wait for the first observations of SSC to better understand the link between the different types of known clusters.
The evolution of clusters in general, and SSCs in particular is still a matter of debate. It is mostly understood that SSCs are associated to ultra dense H II regions. Since the discovery of SSCs and because of their mass, close to those of GCs, there has been discussions about whether they are the precursors of GCs, which would be their final evolution stage or if they were formed different (Bastian 2016). It appears now that OCs are not fundamentally different from SSCs, whose they represent the lower mass branch. It seems also unlikely that all these massive clusters survive more than 100 Myr or 1 Gyr. The number of GCs would be much higher if it was the case, and we expect the dissolution of most of these clusters, in particular because of the supernova stage of the most massive stars of the clusters, potentially creating strong winds thought to be able to disrupt clusters (Whitmore et al. 2007).
Furthermore, GCs and SSCs do not show the same luminosity functions. The luminosity of SSCs follows a power-law distribution which is not consistent with the GCs which appear to exhibit luminosity distribution according to a log-normal (Johnson 2001). Many studies (e.g. van den Bergh 1971) were dedicated to understand what may have caused such turn-overs, however because of the various effects that need to be included, as selection bias for example, no clear result is up to now largely accepted.
The fields of clusters and those of SSCs in particular may benefit strongly from the HAR. Improvements of the angular resolution clearly triggered this field and allowed true advances. HAR is therefore a choice tool for SSCs studies.
Active Galactic Nuclei
Active galaxies are a peculiar type of galaxies, harbouring a very bright nucleus with strong emission lines. The high, non thermal luminosity of an AGN is assumed to be emitted by a hot accretion disk, surrounding a super massive black hole, located at the very centre of the nucleus. The basic definition of AGN states that the luminosity of its nucleus should be stronger than the total luminosity of the rest of the host galaxy. This general feature is then split into several subgroups. One famous classification is the distinction by Seyfert (1943) between AGNs showing broad emission lines, and those which only show narrow emission lines. Other differences are used to classify the nuclei as for example their radio emission allowing to distinguish the radio loud and radio quiet AGNs (see e.g. Urry & Padovani 1995).
Interestingly, first identification of the particular properties of AGNs happened before the formal separation between galaxies and nebulae. Fath (1909) discovered bright emission lines, detailed later by Slipher (1917) in NGC 1068 (which will be presented in Chapter 4 as the central target of this thesis work). As mentioned, Seyfert (1943) proposed the first classification of galaxies with intense emission lines, differentiating nuclei showing or not broad lines, what will later become Seyfert galaxies, of type 1 and type 2 respectively.
Toward a Unified Theory
These very different characteristics pushed to the setting of an unified scheme, able to produce the very different features observed in this panel of objects.
Based on observations of different type of AGNs (see, e.g., Rowan-Robinson 1977[START_REF] Neugebauer | [END_REF][START_REF] Lawrence | [END_REF], Antonucci 1984), it was proposed that a circumnuclear obscuring region was responsible for the lack of a broad line region in Seyfert 2, only revealed in AGNs for which the line of sight was close to the polar axis (type-1 view). This was strengthened by the study of Antonucci (1984) on optical polarisation of radio-loud AGNs and Antonucci & Miller (1985) who discovered broad Balmer and Fe II emission lines in the polarised light of NGC 1068, the archetypal Seyfert 2 galaxy. Following this idea, Antonucci (1993) proposed the unified model for radio-quiet AGNs, stating that Seyfert 1 or 2 were the same type of object harbouring a luminous accretion disk surrounded by a thick torus, but seen under different viewing angles. This model was extended to the radio-loud AGNs by Urry & Padovani (1995) a couple of years later.
According to this model, as represented in sketch of figure 1.5 from Marin et al. (2016a), an AGN would be constituted of:
-a Central Engine (CE), more likely to be a super massive black hole with a hot accretion (with its innermost regions being estimated around 10 7 K by Pringle & Rees 1972) disk emitting through unpolarised thermal emission most of the luminosity from ultraviolet to NIR; -an optically thick dusty torus, of typical size 10 to 100 pc, surrounding the Resolution CE and blocking the light in the equatorial plane, with its innermost border at sublimation temperature. This hot dust is the main source in the near/midinfrared and appears as quasi-unresolved source. The cavity inside the torus is assumed to be the region where the broad emission lines are formed, called Broad Line Region (BLR); -an ionisation cone in the polar directions would be directly illuminated by the CE, connecting with the interstellar medium to create the Narrow Line Region (NLR) after few tens of parsecs.
Figure 1.5 -Unscaled sketch of the AGNs unification theory. A type 1 AGN is seen at inclinations 0-60 • while a type 2 AGN is seen at 60-90 • , approximately. Colour code: the central super massive black hole is in black, the surrounding X-ray corona is in violet, the multi-temperature accretion disc is shown with the colour pattern of a rainbow, the BLR is in red and light brown, the circumnuclear dust is in dark brown, the polar ionized winds are in dark green and the final extension of the NLR is in yellow-green. A double-sided, kilo-parsec jet is added to account for radio-loud AGN.
From Marin et al. (2016a).
Validation, Tests and Limitations of the Unified Model
This model has been thoroughly tested for many years by several observers. A significant part of the constraints on these different regions were brought thanks to observations of NGC 1068 and a more detailed picture on the proposed structure and the limits of the interfaces between theses areas can be found in the NGC 1068 description, in section 4.1.1.
More statistical studies were also conducted on the populations of Seyfert galaxies.
For example, Ramos Almeida et al. ( 2016) examined a sample of Seyfert 2 galaxies, aiming at observing the hidden broad emission lines, thanks to spectro-polarimetry. They detected these hidden emissions in a significant fraction of the sample (about 73 %), correcting misclassification of some of these objects, supporting the unified model.
Observations were compared to many simulations that tried to reproduce the main features (see for example Wolf & Henning 1999; Marin 2014, 2016), with the result that a zeroth-order agreement was found with the measurements.
Thus, the unified model of AGNs is currently largely accepted, but the initial scheme has evolved thanks to new constraints and to the influence of other models. The torus extension is now assumed to be smaller than the hundred parsecs predicted by Antonucci (1993); for example, a recent estimate by García-Burillo et al. (2016) is about 5 to 10 pc. Furthermore, some components (in particular the torus, but also for example the NLR) are now envisaged as fragmented media, more physically in line with the fractal structure of the InterStellar Medium (ISM), see for example Elmegreen (1991) and the AGN study of Marin et al. (2015). Moreover, the BLR and the torus, initially described as two different entities by Antonucci (1993), are now considered as a unique structure, with a temperature gradient inducing the properties of the two regions (see for example Elitzur & Ho 2009). This feature is for instance more related to the disk-wind model proposed by Emmering et al. (1992).
One can note that, among the techniques used to increase our understanding of AGNs, polarimetric observations are one of the key methodologies, as demonstrated 30 years ago by the studies of Antonucci (1984) and Antonucci & Miller (1985), which provided the cornerstones for the definition of the unified model of AGNs, extended more recently by Ramos Almeida et al. (2016). We can also underline the growing use of AO on extragalactic sources, starting from Lai et al. (1998), to reach the most hidden inner structures, with extensions of only a few parsecs.
Current Investigations
As demonstrated by the most recent papers still updating the features observed in some Seyfert galaxies, like the new broad line detections of Ramos Almeida et al. (2016), our understanding of AGNs is continuously evolving. Concerning the torus especially, one of the most important pieces of the model, we still lack information due to its limited extension and to the high contrast required to observe it. Indeed, with an extension of a few tens of parsecs of optically thick cold matter, close to the very bright CE and to the core of hot dust, detecting the signature of this dusty torus is particularly challenging.
Goal of this Research Work
This thesis will focus on the use of two of the most recent AO systems described in section 1.2.3 in order to deepen our understanding of AGNs and of SSCs.
The main part of this work is dedicated to polarimetric observations of an AGN and their interpretation through the use of radiative transfer simulations. With this study, we aim at constraining the geometry, composition and density of different AGN structures.
The first part is dedicated to SSCs: we will describe in Chapter 2 one of the first sets of MOAO scientific images, obtained on two galaxies, one of which is out of reach for other AO systems. In particular, our goal is to constrain the properties, such as age and metallicity, of the newly observed clusters. The second and main part will focus on AGNs, from a polarimetric point of view. Chapter 3 will introduce and detail the physics of polarisation and, in Chapter 4, we will describe the new polarimetric observations conducted on one archetypal Seyfert 2 nucleus. Chapter 5 presents the radiative transfer code developed in the frame of this thesis work to interpret the AGN data, and the corresponding analysis will be discussed in Chapter 6, before concluding (Chapter 7).
For a moment, nothing happened. Then, after a second or so, nothing continued to happen.
Douglas Adams
As presented in section 1.3, SSCs are key targets to understand the extreme star formation episodes and to constrain the SFR evolution at different ages of the Universe. These massive and young star clusters are indeed the best examples of regions with extreme star formation, and understanding their properties is key to understanding the evolution of galaxies. However, up to now, the number of accessible galaxies featuring SSCs is still limited. As technology progresses, more and more targets are becoming reachable for HAR observations. This is one of the main objectives of the future instrument MOSAIC (Multi-Object Spectrograph for Astrophysics, Intergalactic-medium studies and Cosmology), planned for the European-ELT. Such new instruments, requiring significant technological breakthroughs, rely on demonstrators, which allow the different concepts that will be introduced to be developed and verified. CANARY was built with the purpose of testing the performance of new AO concepts on sky, some of which are planned for MOSAIC. We will focus on MOAO in the following.
The CANARY Instrument and MOAO
Extragalactic science often requires access to the best possible resolution on a maximum number of targets simultaneously. As opposed to wide-field AO systems, MOAO does not aim at correcting a large field, but only at compensating the turbulence perturbations in particular directions, in order to have turbulence-corrected images on some small regions of the field instead of the entire FOV. The main difference here is that the turbulence measurements are not conducted in the direction of the targets and that the AO loop is in that case "open": the turbulence is measured thanks to other references, and the deformation is applied to various DMs to correct the wavefront for another line of sight. MOAO can use LGSs in order to measure the perturbations; however, at least one NGS is still required for tip/tilt correction.
CANARY was designed to be installed at the Nasmyth focus of the 4.2 m diameter William Herschel Telescope (WHT), located at Roque de los Muchachos in La Palma, Canary Islands (Spain). It was installed for its phase A in 2010. It went through several phases, each one dedicated to a particular evolution of the AO strategy, until summer 2017. Initially, CANARY was designed to be able to perform MOAO correction using four NGSs, one on-axis and three off-axis, between 10 and 60 arcsec. All four sensors were 7 × 7 sub-pupil Shack-Hartmann sensors (see section 1.2.3 and Vidal et al. 2014) and the central beam is separated by a dichroic plate, part of the flux being redirected toward a dedicated NIR camera. A detailed review of the instrument can be found in Gendron et al. (2011).
The instrument then evolved over 7 years and, after 2012, it was equipped with four additional wavefront sensors, allowing the AO system to use four LGSs as well as NGSs farther from the scientific target, up to about 1 arcmin (Gendron et al. 2011). These LGSs used Rayleigh backscattering. Being a demonstrator, CANARY is designed to compute and correct the wavefront in one particular direction, in a limited FOV (8 × 8 arcsec), but farther away from the on-axis NGS than what would be possible with other AO systems.
The purpose of CANARY was mainly performance tests and the characterisation of new AO concepts. It was not designed for performing scientific programs on astronomical sources; however, the team tried, after a particularly successful run, to observe two galaxies in MOAO mode. These two targets, NGC 6240 and IRAS 21101+5810, were observed during summer 2013 thanks to the LGS constellation and one NGS. For the first target, NGC 6240, the NGS was located 30 arcsec away from the centre of the galaxy. It is close enough to allow classical AO systems to give access to this target, and it was already observed by Pollack et al. (2007) using Keck. This was however not the case for the second galaxy, IRAS 21101+5810, for which the closest star bright enough to be used as a guide source is situated 50 arcsec away. The CANARY observations are therefore the only AO-corrected ground-based observations of this galaxy up to now; the only previous HAR images were obtained with the HST (Armus et al. 2009). CANARY was furthermore one of the first instruments able to obtain scientific data on astronomical targets in MOAO mode (Gendron et al. 2011).
Observed Systems
Both galaxies observed with CANARY, NGC 6240 and IRAS 21101+5810, are mergers at different stages. NGC 6240 is composed of two nuclei, bulges without disks, remnants of the two initial galaxies, currently merging at an advanced stage, with tidal tails. The formation of a disk is even expected in this system in less than 10 Myr according to Tacconi et al. (1999). On the other hand, IRAS 21101+5810 is likely to be at the beginning of a merging process, its two components showing asymmetries due to their first gravitational interaction but being mostly intact (see Haan et al.). As opposed to NGC 6240, the time scale for the merging process was not determined for this second pair of galaxies. Both systems belong to a sample of (U)LIRGs: the Great Observatories All-Sky LIRG Survey (GOALS) (e.g. Armus et al. 2009).
NGC 6240 is a well known galaxy and has been studied extensively, but this is not the case of IRAS 21101+5810, only examined through GOALS statistical studies. These galaxies are situated at z = 0.0245 and z = 0.0390 respectively, which corresponds to 116 and 174 Mpc for H 0 = 70 km/s/Mpc, according to Iwasawa et al. (2011) using data from the NASA/IPAC Extragalactic Database (NED) based on measurements by Strauss et al. (1992).
SSCs were already detected in NGC 6240 by Pollack et al. (2007) and we expect to find SSCs in the IRAS 21101+5810 system because of the gravitational interaction undergone by the galaxy. We therefore took advantage of these new CANARY observations to conduct a photometric study of the SSCs present in these systems. This provides new insights on IRAS 21101+5810 and brings new elements to confirm the detections of Pollack et al. (2007) in NGC 6240. Our work is also a good assessment of the CANARY performance, through the comparison with the NIRC2 (Near InfraRed Camera - 2nd generation) data.
NGC 6240
According to Armus et al. (2009), NGC 6240 has an IR luminosity of log10(L_IR/L⊙) = 11.93, and is therefore at the frontier between LIRGs and ULIRGs. Its two components, shown in figure 2.1, have magnitudes in the I band of 11.75 and 14.56 respectively (Kim et al.), and both nuclei would be active according to Komossa et al. (2003). 20 to 24 % of the bolometric luminosity of the AGN would come from the contribution of dust, and the Spectral Energy Distribution (SED) fitting required hot gas, estimated at T = 700 K by Armus et al. (2006). Its distribution, studied thanks to CO emission, is concentrated in a 4 kpc region surrounding the two nuclei. This gas shows indications of disk rotation around the Northern nucleus, with a peak of emission in the central kilo-parsec between the nuclei, where motions are more turbulent, as discussed by Iono et al. Measurements of stellar velocities show a large dispersion. This would be explained by the existence of two different stellar populations: the younger and more recently formed one would superimpose on an older stellar population from the two progenitor galaxies, inducing this apparent wide dispersion in velocities, according to Engel et al. (2010).
In NGC 6240, a precise study of the SSCs was conducted by Pollack et al. (2007). The authors identified about 32 clusters, as shown in figure 2.1. They estimated their masses using photometry, obtaining between 7 × 10^5 and 4 × 10^7 M⊙, and estimated their age to approximately 15 Myr. This would correspond to the end of the starburst episode as evaluated by Tecza et al. (2000), which occurred 15 to 25 Myr ago. The starburst would have happened when the two nuclei were at the closest distance from each other, estimated at nearly 1.4 kpc. According to the same study of Tecza et al. (2000), the K band luminosity of the two nuclei would be dominated by red supergiants that should have been formed during the aforementioned starburst. Haan et al. estimated the luminosities of the two nuclei to L_South = 10^11.29 L⊙ and L_North = 10^10.81 L⊙ respectively. They also measured their radii to R_South = 0.23 ± 0.0 kpc and R_North = 0.2 ± 0.01 kpc, leading to estimates of the mass of their assumed supermassive black holes of M_BH,S = 10^(8.76±0.1) M⊙ and M_BH,N = 10^(8.2±0.07) M⊙, using the relation of Marconi & Hunt (2003). Another estimate, by Medling et al. (2011) for the largest nucleus, gave a mass between 8.3 × 10^8 and 2.0 × 10^9 M⊙, consistent with the previous value. Medling et al. (2014) estimated the dynamical masses of the two nuclei to M_dyn,S = 3.2 ± 2.8 × 10^9 M⊙ and M_dyn,N = 0.2 ± 0.01 × 10^9 M⊙ respectively.
IRAS 21101+5810
As for NGC 6240, Armus et al. (2009) estimated the IR luminosity of IRAS 21101+5810 to be log10(L_IR/L⊙) = 11.81, once again close to the ULIRG limit. The largest component of IRAS 21101+5810, the only one visible in figure 2.2, has I and H band magnitudes of 14.57 and 12.54 respectively, while the other component was measured at m_I = 16.59 and m_H = 14.21 by Kim et al. These authors also estimated the FWHM of the luminosity profiles of these components, obtaining, for the I and H bands respectively, 9.1 and 3.9 kpc for the first one and 1.0 and 1.1 kpc for the second one, both components being separated by 7.5 kpc. The study by Haan et al. also includes measurements in the H band to assess the mass of the assumed central black hole of the principal component. They measured a luminosity of L_bulge = 10^10.42 L⊙ with a radius of R_bulge = 0.39 ± 0.01 kpc. This led to a black hole mass estimate of M_BH = 10^(7.75±0.08) M⊙.
Data Reduction
In addition to the data obtained with CANARY, we also retrieved images of the same two targets from the Keck observatory and from the HST through the dedicated archive, with the purpose of comparing the photometry of the identified SSCs in different bands, from the NIR to the visible, and of analysing their properties through their colour dependency. NGC 6240 and IRAS 21101+5810 were observed with the HST using ACS (Advanced Camera for Surveys), with the F814W and F435W filters, and NICMOS (Near Infrared Camera and Multi-Object Spectrometer) with the F160W filter. The NIRC2 instrument, installed on the Keck 10 m diameter telescope, was also used to observe NGC 6240 in the Kp band using AO correction.
CANARY observations were conducted on three consecutive nights, starting on the 22nd of July 2013. During the last two nights, the standard calibration star CMC 513807 (from the Carlsberg Meridian Catalogue) was observed in the H and Kp bands to perform the photometric calibration. In total, 40 images were recorded in the H band and 28 in the Kp band for NGC 6240; 96 images were obtained in the H band and 62 in the Kp band for IRAS 21101+5810. The seeing during this period was measured between 0.4 and 0.6 arcsec at 500 nm.
The exposure time per image was set to 20 s for NGC 6240 and to 40 s for IRAS 21101+5810. Exposure times for CMC 513807 varied between 0.5 and 5 s. On the CANARY IR camera, the plate scale is 0.03 arcsec per pixel for a detector of 256 × 256 pixels.
NIRC2 data were used by Pollack et al. (2007) for their detection of SSCs in NGC 6240 in Kp band, with an exposure time of 30 s. All the observation parameters are given in table 2.1.
CANARY Reduction
As CANARY is a demonstrator dedicated to performance studies, no standard scientific data reduction pipeline is available for it. A dedicated pipeline was developed to process the data used in this work. Data were reduced using classical reduction methods:
- Computation of a sky image through a median of the science images, acquired following a dithering process on the camera;
- Subtraction of this sky from all images;
- Bad pixel correction;
- Stacking of subsets of a few images through a median;
- Image registration;
- Final image stacking using a clipped average.
The star photometry shows very little variation from one night to the other, below 0.05 magnitude. The measurements are outlined in table 2.2. The zero point (ZP) was then computed from the known magnitude m (from Cutri et al. 2003) and the measured flux F.
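To illustrate this calibration step, a minimal Python sketch is given below. The exact form of the zero-point relation used in the pipeline is not recoverable here; the sketch assumes the standard count-rate convention, consistent with equation 2.5, and the numerical values as well as the function names are placeholders rather than the actual CANARY pipeline.

    import numpy as np

    def zero_point(m_catalogue, star_flux_adu, exptime_s):
        # Standard-star zero point, assuming ZP = m + 2.5 log10(F / t)
        # (count-rate convention, consistent with equation 2.5).
        return m_catalogue + 2.5 * np.log10(star_flux_adu / exptime_s)

    def calibrated_magnitude(flux_adu, exptime_s, zp):
        # Convert a measured flux (in ADU) into a calibrated magnitude.
        return -2.5 * np.log10(flux_adu / exptime_s) + zp

    # Placeholder numbers for CMC 513807 in the H band:
    zp_h = zero_point(m_catalogue=10.5, star_flux_adu=2.1e5, exptime_s=2.0)
    m_cluster = calibrated_magnitude(flux_adu=850.0, exptime_s=20.0, zp=zp_h)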
HST - Nicmos
Archival data were already reduced. However, in order to be able to compare these images to the CANARY sets, we needed to convert their photometry to the Vega magnitude system.
Nicmos images were converted thanks to the header keyword PHOTFNU. It corresponds to the flux density per unit of frequency, F_ν, for a source that would produce 1 ADU per second, and is expressed in Jy s/ADU. This allows translating the ADU fluxes (F) in the images into AB magnitudes, according to the Nicmos manual:
m_AB = -2.5 log10(PHOTFNU × F) + 8.9. (2.2)
We then applied the AB to Johnson translation m_J = m_AB - 1.39 in the H band, from Blanton & Roweis (2007).
HST - ACS
The reduction of ACS data is analogous to what was done on the Nicmos data. The keywords used are however different and we used PHOTFLAM (the inverse sensitivity in erg/cm²/s/Å) combined with PHOTPLAM (the central wavelength of the filter passband) in order to compute the zero point in AB magnitude:
ZP_AB = -2.5 log10(PHOTFLAM) - 5 log10(PHOTPLAM) - 2.408, (2.3)
which is then used to convert the flux F measured in the images into magnitudes:
m_AB = -2.5 log10(F) + ZP_AB, (2.4)
before converting to Johnson magnitudes, with m_J = m_AB - 0.45 in the I band and m_J = m_AB + 0.09 in the B band (the two closest Johnson filters to F814W and F435W respectively, see Blanton & Roweis 2007).
Note that these corrections are given for the filters H, I and B, which differ from the filters F160W, F814W and F435W. The induced absolute photometric error is thus of the order of 0.05 in magnitude.
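As an illustration of the ACS conversion chain, the sketch below reads the PHOTFLAM and PHOTPLAM keywords from an image header and applies equations 2.3 and 2.4, followed by the AB-to-Johnson offsets quoted above. The file name is a placeholder, the exact location of the keywords may differ between products, and the snippet is only indicative of the procedure, not the pipeline actually used.

    import numpy as np
    from astropy.io import fits

    def acs_to_johnson(filename, ab_to_johnson_offset):
        # ab_to_johnson_offset: -0.45 for F814W (I band), +0.09 for F435W (B band).
        with fits.open(filename) as hdul:
            header = hdul[0].header           # keyword location is an assumption
            data = hdul[1].data.astype(float)
        zp_ab = (-2.5 * np.log10(header["PHOTFLAM"])
                 - 5.0 * np.log10(header["PHOTPLAM"]) - 2.408)   # equation 2.3
        with np.errstate(invalid="ignore", divide="ignore"):
            m_ab = -2.5 * np.log10(data) + zp_ab                 # equation 2.4
        return m_ab + ab_to_johnson_offset

    # Example with a placeholder file name:
    # m_I = acs_to_johnson("ngc6240_f814w.fits", ab_to_johnson_offset=-0.45)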
Keck - NIRC2
The NIRC2 handbook gives a zero point value of 24.80 ± 0.07 in the Kp band, depending on the Strehl ratio (this value is 24.74 in the Kp band for a Strehl ratio of 1). Assuming small Strehl variations during the observations, we used the same value for all images and therefore converted the NIRC2 images using: m_J = -2.5 log10(I/t) + 24.80, (2.5) with t the exposure time (in s) and I the intensity (in ADU).
Image Registration
The last processing step to obtain comparable data sets is to interpolate the images to the same plate scale and to rotate them to the same orientation.
We used linear interpolation, with a scale factor deduced from the respective plate scales, and used the orientation of the CANARY images as a reference. As the exact orientation of the CANARY images with respect to North is not known, rotations were conducted using reference positions in the field of view. This was more precise for IRAS 21101+5810, thanks to a star in the field of view (visible in the upper right corner of figure 2.2), than for NGC 6240, especially between images with very different morphological features at various wavelengths.
To cope with the resolution difference between the NIRC2 images (obtained with the 10 m Keck telescope) and the CANARY images (obtained with the 4.2 m WHT), we convolved the NIRC2 images with a 2D Gaussian representing the CANARY PSF (see section 2.5.1 for details) in order to be able to superimpose the image sets.
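A possible implementation of this resolution matching is sketched below, assuming roughly Gaussian PSFs so that the kernel FWHM is the quadratic difference of the two PSF FWHMs; the numerical values in the example are placeholders and this is not the exact procedure used in the pipeline.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def match_resolution(image_hires, fwhm_target_pix, fwhm_hires_pix):
        # Kernel FWHM from the quadratic difference of the two PSF FWHMs
        # (both expressed in pixels of the high-resolution image).
        fwhm_kernel = np.sqrt(fwhm_target_pix**2 - fwhm_hires_pix**2)
        sigma = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        return gaussian_filter(image_hires, sigma=sigma)

    # Example: degrade a NIRC2 image to the CANARY resolution (placeholder widths)
    # nirc2_matched = match_resolution(nirc2_image, fwhm_target_pix=9.0, fwhm_hires_pix=2.0)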
Final Images
Final reduced images are shown for NGC 6240 in figure 2.3 and for IRAS 21101+5810 in figure 2.4. The displayed images were rotated, cropped to the same field of view and, in the case of NGC 6240, the NIRC2 image was convolved to match the CANARY resolution.
Photometry
With the images shown in figure 2.3, it appears that the CANARY data do not bring an improved resolution on NGC 6240. This is fully expected from two instruments with very different diameters, but it allowed us to characterise the on-sky performance of CANARY. Furthermore, the images of IRAS 21101+5810 (see figure 2.4) are the only available data for this target in these bands at such a resolution and will therefore provide a unique diagnostic for the study of SSCs in this system. We aim in the following at constraining the ages of the clusters, as well as their size, composition, metallicity and extinction. Concerning photometry, an approach fitting the cluster intensity distributions was implemented, as opposed to standard aperture photometry. Because the clusters are located close to the bright nucleus and inside the brightest region of the galaxy, aperture photometry will suffer from contamination by the galaxy background. It is therefore necessary to fit a brightness distribution to each cluster, including a proper estimation of the background, to achieve a better accuracy.
PSF Estimation
We used the foreground star in the images of IRAS 21101+5810 as an estimate of the instrumental PSF (see figure 2.5). It provides an accurate resolution calibrator for the CANARY observations, to assess whether the clusters are resolved or not. Note that this PSF could be used to fit the clusters intensity distribution.
The efficiency of the AO correction can be evaluated by comparing the FWHM of the PSF to the theoretical resolution. In the case of the WHT, D = 4.2 m and we obtain in the H band (λ = 1.65 µm) λ/D ≈ 0.081 arcsec, or approximately 2.7 pixels. The FWHM of the star is 4.5 pixels and we are thus reasonably close to the diffraction limit of the instrument.
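These numbers can be checked in a few lines, using the plate scale of 0.03 arcsec per pixel quoted in section 2.3:

    import numpy as np

    wavelength = 1.65e-6   # m (H band)
    diameter = 4.2         # m (WHT)
    plate_scale = 0.03     # arcsec per pixel (CANARY IR camera)

    lam_over_d = np.degrees(wavelength / diameter) * 3600.0   # in arcsec
    print(lam_over_d, lam_over_d / plate_scale)               # ~0.081 arcsec, ~2.7 pixels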
Even the best AO system will not be able to fully compensate the higher spatial frequencies of the turbulence-induced aberrations (see e.g. Rigaut et al. 1991 and figure 2.6), resulting in a PSF close to the Airy profile (diffraction limit) for the central core, but with stronger wings, as discussed in Moffat (1969) and Rigaut et al. (1991). To perform photometric fitting, we should use a Moffat (1969) distribution, which has a Gaussian core with wings stronger than those of the Gaussian distribution. However, the wings of the Moffat distribution make the fit difficult to converge in the case of a varying background. It appears that, when using Moffat fitting, if the background is not perfectly estimated and subtracted, the fit overestimates the flux or even fails to converge. We therefore used Gaussian fitting, which likely slightly underestimates the fluxes but is more reliable and still remains reasonably close to the instrumental PSF (see figure 2.5).
Classical Fitting
Our first fitting algorithm used a second-order polynomial to fit the background. Both the cluster and the background were fitted simultaneously with a combination of a second-order polynomial and a 2D Gaussian, as shown in figure 2.7. Despite obtaining a better accuracy with a Gaussian than with a Moffat distribution, this fitting procedure still failed for some clusters, if the background was not reasonably flat or in the case of low-luminosity clusters. We therefore implemented a new procedure, using the resolution of Poisson's equation inside a contour around the source to evaluate the background, in order to have a better estimate of its contribution.
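A minimal version of this simultaneous fit can be written with the astropy modelling framework, combining a 2D Gaussian and a second-order 2D polynomial; the stamp extraction and the initial guesses are placeholders and the actual procedure applied to the data is more elaborate.

    import numpy as np
    from astropy.modeling import models, fitting

    def fit_cluster(stamp, x0, y0):
        # Fit a 2D Gaussian plus a 2nd-order polynomial background to a small stamp.
        ny, nx = stamp.shape
        y, x = np.mgrid[0:ny, 0:nx]
        model = (models.Gaussian2D(amplitude=stamp.max(), x_mean=x0, y_mean=y0,
                                   x_stddev=2.0, y_stddev=2.0)
                 + models.Polynomial2D(degree=2))
        fitted = fitting.LevMarLSQFitter()(model, x, y, stamp)
        gauss = fitted[0]
        # Analytical flux of the fitted Gaussian: 2*pi*A*sigma_x*sigma_y.
        flux = 2.0 * np.pi * gauss.amplitude.value * gauss.x_stddev.value * gauss.y_stddev.value
        return fitted, flux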
Fitting using Poisson's Equation Resolution
This algorithm was initially proposed by Daniel Rouan and is based on the Dirichlet problem. It consists in a first determination of the cluster's limit on the initial image, using a rough detection procedure. The basic idea here is simply to identify both the peak location and the radius, by estimating the radius where the flux stops decreasing. All the inner pixels are then replaced by the interpolation of a Green function (φ) whose values are constrained on the contour, which is equivalent to solving Poisson's equation ∇²φ = 0. The principle here is simply to find the surface with the lowest possible curvature to replace the missing part. This newly estimated background is then subtracted from the initial image, and the 2D Gaussian can then be precisely fitted to the cluster. A test of the method on simulated data is shown in figure 2.8 and an example of the process on the image of IRAS 21101+5810 is displayed in figure 2.9. To estimate the errors made on the photometry, we ran a series of Gaussian fits on simulated images including a background, the source and noise (see figure 2.8). By comparing the obtained photometry to the theoretical one, we find a relative uncertainty on the photometry of 10-15 %. We will use in the following a conservative uncertainty of 15 %.
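The background interpolation can be sketched as a simple Jacobi relaxation of ∇²φ = 0 inside the mask delimited by the contour, the surrounding pixels acting as Dirichlet boundary conditions. This is only a schematic version of the method described above (and it assumes the mask does not touch the image edges).

    import numpy as np

    def remove_background(image, mask, n_iter=2000):
        # Replace the pixels where mask is True by the solution of the Laplace
        # equation, with the surrounding pixel values as boundary conditions.
        phi = image.astype(float).copy()
        phi[mask] = np.mean(image[~mask])      # rough initial guess inside the contour
        for _ in range(n_iter):
            neighbours = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                                 + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
            phi[mask] = neighbours[mask]       # relax only inside the contour
        return image - phi                     # background-subtracted cluster image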
Colour Maps
Another way of studying the photometry of the clusters is to combine the different images, obtained at various wavelengths, into colour maps. This requires an accurate registration of the different images, but provides colour indices for the different clusters in a single shot. This combination strategy can be used to compare with the photometric results obtained through the previous method, for a more robust estimation of the errors, and to get access to fainter SSCs (SSCs 7 to 11 were more difficult to fit properly).
Images were first converted into magnitude maps, using the process described in section 2.3. All images were clipped to a given threshold, to avoid floating-point overflow. Images were then combined to create maps of the magnitude difference between two given observing bands. The colour maps are shown in figure 2.11. Thanks to these maps, and using the H band image to identify the clusters, it is possible to obtain colour indices for all of the clusters. Table 2.4 summarises all the results, including the colour indices obtained through photometry.
Note that the given error ranges include only the photometric errors, not the calibration errors.
Comparison to Models
GALEV
In order to interpret these results, we simulated the colour indices that clusters would have depending on their age, metallicity and extinction. GALEV (GALaxy EVolution synthesis models) is a code allowing one to simulate the evolution of stellar populations in galaxies globally, but which can also be used to model the stellar populations in individual SSCs. It was developed by Kotulla et al. (2009). Among the parameters that can be set, we will focus on metallicity and extinction (other parameters were set according to general properties of stellar clusters or left to their defaults). Extinction follows the law proposed by Calzetti et al. (1994), for values of E(B-V) between 0.5 and 3 by steps of 0.5. Three values of the metallicity ([Fe/H] ratio) were tested. The first one takes into account the chemical interaction between the ISM and the stellar populations, simulating the metallicity changes through the galaxy/cluster evolution, the second one is fixed to the solar metallicity and the third one was set to [Fe/H] = +0.3. The iron content [Fe/H] is related to the metallicity Z (see Bertelli et al.) through:
log10(Z) = 0.977 [Fe/H] - 1.699. (2.6)
Therefore [Fe/H] = +0.3 corresponds to Z ≈ 0.04, a high metallicity (Z⊙ being 0.02, see Kotulla et al. 2009).
Photometric data were recorded every 4 Myr, with the starburst being set 1 kyr after the beginning of the simulation. The galaxy mass was set to 10^8 M⊙. This is fairly low, but was chosen to limit the contribution from the galaxy background luminosity, in order to remain as close as possible to the intrinsic luminosity of the clusters. Indeed, our measurements on the data will be contaminated by the background emission. Comparisons between simulations and observational data should therefore be conducted carefully.
Interpretation
GALEV gives as an output the absolute magnitude, but since we are using the magnitude difference between two bands, this does not have to be taken into account. Our analysis of the simulations results and the comparison with the photometric data through colour-colour diagrams is based on plotting the colour index between two bands versus another colour index, for example I-H magnitude as a function of B-I.
With this tool, it becomes possible to follow the evolutionary path of a simulated cluster through GALEV. With time, the luminosity of the cluster varies differently at various wavelengths, and its position in the colour-colour diagram therefore evolves with age.
If all the observed clusters are gathered along a given evolutionary path, this will indicate that the clusters probably share roughly the same parameters but have different ages. Note that this is unlikely, because we expect most of the cluster formation to have been triggered by the starburst and therefore the clusters to have similar ages. It is more likely for the SSCs to be aligned along an extinction line, representing the impact of differences in extinction on clusters sharing the exact same parameters (a reddening of their emission distribution). As such, extinction is represented in the diagrams by a segment whose length indicates a change of 1 in extinction. GALEV uses the colour excess, E(B-V), instead of the extinction A_V and, on average, A_V ≈ 3.3 E(B-V) is generally considered a good approximation (see e.g. Kotulla et al. 2009).
Colour-colour diagrams of observed SSCs (red crosses) superimposed on GALEV tracks are shown on figures 2.12 and 2.13.
The main parameters impacting the colour of the clusters are their age, their metallicity and the extinction: can we disentangle them?
Extinction could vary from one SSC to another, depending on its geometrical position around the galaxy nucleus. However, its impact is mainly a linear translation and therefore cannot account for the observed dispersion. All red crosses are consistent with an overall extinction of E(B-V) ≈ 1.0, with small variations for every cluster. This would mean an averaged extinction of A_V ≈ 3 for the whole galaxy. The dispersion of the clusters along the extinction direction can be explained by extinction: as the extinction can vary between clusters, we expect the individual cluster positions on the diagram to be aligned along the extinction direction. This is particularly striking in the I-H as a function of B-I graph of figure 2.12, where all the clusters are aligned. This would be consistent with the positions of the clusters being scattered only due to extinction.
However, the clusters are not aligned along the extinction direction in the graph of I-H as a function of B-I (second panel of figure 2.12). This extension of the clusters perpendicularly to the extinction direction is consistent with an age disparity between the clusters, with all ages smaller than 100-150 Myr. Indeed, the luminosity of the clusters after this time scale becomes more stable and would only result in clusters being aligned along the extinction direction. Because at young ages the cluster colours vary rapidly, as represented by the large distance between the initial cluster position in these diagrams and the first black cross in figure 2.12, it is likely that all the clusters have ages lower than 100 Myr. This is consistent with the estimation that at 100 Myr most of the SSCs (up to 90 %) would have been destroyed (see for example Bastian 2016), in which case we should not be able to observe this high concentration of clusters. Because of their dispersion, the clusters are unlikely to have the exact same age. Note that most of these clusters are also detected in the I and B band images and are therefore not as embedded as what is expected for very young clusters; they are thus unlikely to be younger than 2-3 Myr (Bastian 2016). This interpretation is also consistent with different metallicities, as shown in the graphs of figure 2.13. In the graph of I-H as a function of B-I, the clusters are not as aligned as they were for the chemically consistent metallicity and it is therefore more complicated to disentangle the different parameters that can affect the luminosity of the clusters. However, a dependency on the extinction, with an averaged value of A_V ≈ 3, and an age variation of the clusters, younger than 150 Myr, would still be able to explain the results. It is therefore not possible to clearly disentangle the value of the metallicity in the clusters of IRAS 21101+5810.
Conclusions
Still in a preliminary state, this study requires further investigation. As we now have an estimate of the distance and the luminosity of each cluster, we should be able to constrain their masses. Recent improvements of the photometric method, involving the resolution of Poisson's equation, will allow us to go further in the analysis. It is also planned to use the radiative transfer code MontAGN, developed in the context of this thesis and described in Chapter 5, in order to analyse the effect of dust shells on the luminosity distribution of the clusters.
Despite its limits, this study based on data from a demonstrator highlights the interest of new AO concepts for extragalactic astronomy. This capacity to reach resolutions similar to HST observations, from the ground and without a bright star close to the target, is a major advantage for extragalactic observations, and the planned ELTs that will see first light during the next decade should bring the capacity to observe a significant number of new targets, such as IRAS 21101+5810. We will therefore be able to detect SSCs in a growing number of galaxies and thus obtain more reliable statistics on SSCs in various environments and at various ages, so as to better constrain the occurrence of extreme star formation, the conditions of its triggering and its evolution.
If there is no solution, it is because there is no problem.
Shadok proverb (Jacques Rouxel)
In optical astronomical observations, only the intensity of the received light is recorded. This is typically the case in standard imaging: each photon that hits a pixel of a Charge-Coupled Device (CCD) (or any other type of detector) produces an electron, increasing the count number in the pixel. Some detectors are able to count photons one by one; however, it is more generally an averaged number of photons over a certain integration time which is recorded and used in the analysis, at least from the optical to the NIR. This is why a major part of astrophysics at these wavelengths is based on fluxes and intensities. However, there is more information transported by photons than just their energy. Some techniques, such as interferometry or spectroscopy, use other properties of the light, like its wavelength or coherence level for example, and the polarisation of light is one of these additional pieces of information.
From a wave point of view, polarimetry consists in measuring the oscillation direction of the electric and magnetic fields of the incoming light, giving a more complete description of the received light properties. This can be achieved through classical imaging, with polarimetric images integrated over time, or can be combined with other observing methods to have access to more information at the same time. This can be done for example with some hybrid instrumentation such as spectro-polarimetry (as used by Antonucci & Miller 1985) or sparse aperture masking combined with polarimetry (available for instance on SPHERE, see e.g. Cheetham et al.).
When a wave packet, composed of a certain quantity of photons, propagates, oscillations of the fields can either be correlated over time or distributed in a fully random orientation. Typically, if from a given source, we receive a fraction of the photons with a privileged direction of oscillation for their electric field, this means that there is a physical process favouring this peculiar oscillation direction in this field of view. This process can be sometimes taking place at the photon emission but can also be a process impacting the light during its propagation.
This chapter will focus on the study of the techniques that allow the observer to measure this oscillation and to disentangle the origin of the polarisation of light.
Introduction to Polarisation
Polarisation was discovered in the XVIIth century and was studied, for example, by Christiaan Huygens (around 1670) and Étienne Louis Malus (in 1808). Astronomers have been investigating the polarisation of light from extraterrestrial sources for more than a hundred years. It first began with the Sun and the Moon, which have been known to emit or reflect polarised light. Solar observations particularly benefited from polarisation, with about a hundred years of measurements. Zeeman (1899) discovered that magnetic fields have an impact on emitted light, separating spectral lines into different components with particular polarisations. Astronomers then tried to detect it on other sources, like Lyot (1924) on Venus or later Hiltner (1949) on three stars, some of the first polarimetric sources outside of the Solar system. Polarisation studies have then been extended to extragalactic observations, leading to one of the most important breakthroughs in AGN science with the polarimetric observation by Antonucci & Miller (1985) of NGC 1068, leading Antonucci (1993) to propose the unified model for AGNs.
Formally, light is a propagation of an oscillation of both an electric and a magnetic field along a direction. Both fields are related through Maxwell's equations, which are defined, for an electric field E and magnetic field B, as:
div E = ρ/ε_0 (3.1)
div B = 0 (3.2)
rot E = -∂B/∂t (3.3)
rot B = µ_0 j + µ_0 ε_0 ∂E/∂t (3.4)
with ρ the electric charge density, ε_0 the vacuum permittivity, µ_0 the vacuum permeability and j the current density. In the vacuum, ρ = 0 and j = 0, leading, through the temporal derivative of equations 3.3 and 3.4, to wave equations for both fields:
(1/c²) ∂²E/∂t² - ∆E = 0 (3.5)
(1/c²) ∂²B/∂t² - ∆B = 0 (3.6)
with c = 1/√(µ_0 ε_0) the speed of light in the vacuum. These two equations have solutions of the form:
E(r, t) = E_0 e^(i(k·r - ωt)) (3.7)
B(r, t) = B_0 e^(i(k·r - ωt)) (3.8)
with ω = |k| c the pulsation, k the wave vector indicating the direction of propagation, and E_0 and B_0 the amplitudes of the electric and magnetic fields respectively, at a position r and a time t.
Both fields therefore oscillate simultaneously, the electric field, the magnetic field and the direction of propagation always being orthogonal to each other, as shown in figure 3.1. This comes from Maxwell's equations. By inserting equation 3.7 into equation 3.1, we obtain:
div E = i k. E = 0 (3.9)
and doing the same thing on equation 3.2 for B:
div B = i k. B = 0, (3.10)
proving that the direction of propagation of the light is orthogonal to both E and B.
Equation 3.3 leads to:
i k × E = rot E = -∂B/∂t = iω B. (3.11)
Therefore:
k × E = ω B, (3.12)
leading to the orthogonality of E, B and k.
As the oscillation plane of the electric and magnetic fields is always perpendicular to the direction of propagation of the light, we only need to express the direction of oscillation of the fields to measure the polarisation. As their oscillation is synchronous, only one oscillation is required to fully describe it. By convention, polarisation is related to the electric field. One should note that the fields E and B are conveniently described by complex numbers, carrying some information about the amplitude and the phase of the wave. We also need to consider that these electromagnetic waves are not monochromatic, but correspond to a wave packet, with a minimum bandwidth ∆λ. The correlation time of this packet will be ∆λ c and this is the time scale we must consider to determine whether the two components of the electric field (E∥ and E⊥, which will be detailed in section 3.2.1) are correlated or not. The level of such a correlation determines the degree of polarisation of the wave.
How can we measure this orientation of the electric field ? One simple way is to use a polariser. It often consists in a metal grating, constituted of thin bars along a certain direction. Oscillations in this direction will be absorbed by the polariser as the electrons of the metal will be able to move in this direction in response to the incoming wave. This is the principle of reflection of light on metal, used in many fields of Physics, like radio astronomy.
A polariser therefore allows one to select the polarisation of the absorbed light and, by consequence, of the light going through it. The polariser will have a transmission of 0 % for a given direction of the electric field and 100 % for the one at 90°. Note that what is selected is the direction axis of the oscillation; the orientation of the instantaneous field along this direction, whether it is vertical up or vertical down for instance, is not relevant (and actually could not be disentangled). Because of this particularity, all values of the polarisation orientation are given within a range of only 180°, and not 360°. This thesis will use as a convention the range -90° to +90° for polarisation values.
For an incoming intensity I_0 of fully linearly polarised light and an angle θ between the polarisation of the light and the polariser, the received intensity I follows Malus' law (see figure 3.3):
I = I_0 × cos²θ. (3.13)
This is where things become slightly more complicated: light with a polarisation at ±45° from the polariser has a transmission of 50 % and is therefore far from being cancelled. Measuring the intensity with only two positions of the polariser, 0° and 90° for example, will not be enough to estimate the direction of the polarisation, as we will have no information on the -45°/+45° direction.
In order to fully characterise the polarisation, four measurements are required. If we restrict ourselves to linear polarisation, we need to obtain measurements in at least three directions, but in practice one generally uses four measurements, for instance at 0°, 45°, 90° and -45° (corresponding to 135°). The Stokes formalism is particularly adapted to describe polarisation in this context.
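The need for the 45° measurements can be illustrated numerically with Malus' law: light fully polarised at +45° and at -45° gives exactly the same intensities through polarisers at 0° and 90°, and only the 45°/135° positions lift the degeneracy. The following short sketch simply evaluates equation 3.13 for these configurations.

    import numpy as np

    def malus(i0, pol_angle_deg, polariser_angle_deg):
        # Transmitted intensity for fully linearly polarised light (equation 3.13).
        theta = np.radians(pol_angle_deg - polariser_angle_deg)
        return i0 * np.cos(theta) ** 2

    for pol in (+45.0, -45.0):
        print(pol, [round(malus(1.0, pol, a), 3) for a in (0.0, 45.0, 90.0, 135.0)])
    # +45 -> [0.5, 1.0, 0.5, 0.0] ; -45 -> [0.5, 0.0, 0.5, 1.0]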
Note that most (if not all) polarimetric instruments are based on this scheme, even though they are often more complicated than just a rotating polariser. Different methods are used, like Wollaston prisms (described in more detail in section 3.3), which spatially separate the light into two components with different polarisations instead of blocking one of them.
Stokes Formalism and Scatterings
Measuring Polarisation with Stokes Vector
The Stokes formalism has been introduced by George Gabriel Stokes around 1852, giving his name to the four parameters he used to describe any polarisation state. Polarisation would be intuitively described by the four parameters of the pattern of the electric field on its ellipse: the ellipse's major and minor axes, the inclination of its major axis and the phase of the vector along this pattern. However these parameters are not measurable as such and this is why George G. Stokes proposed this alternative way of representing the polarisation.
In order to describe the electric field we need a reference direction. In the case of scattering, this reference will be the intersection of the scattering plane with the polarisation plane (orthogonal to the direction of propagation of light). In cases of polarisation maps of regions of the sky, the reference will be the North direction, according to I.A.U. (1973) (International Astronomical Union).
Let us call E∥ the projection of E on this reference direction, and E⊥ its projection on the orthogonal axis. The Stokes formalism describes the polarisation of light with four parameters, I, Q, U and V. These parameters are intensity-like quantities built from the electric field components, averaged over time (the asterisk below denoting the complex conjugate). I corresponds to the total intensity and is computed with:
I = E∥E∥* + E⊥E⊥*. (3.14)
The three other components are defined as follows:
Q = E∥E∥* - E⊥E⊥* (3.15)
U = E∥E⊥* + E⊥E∥* = 2 Re(E∥E⊥*) (3.16)
V = i (E∥E⊥* - E⊥E∥*) = 2 Im(E⊥E∥*). (3.17)
Thus, Q measures positively the intensity of light with a polarisation along the reference direction and negatively along 90°. U is the same measure but for the directions at 45° and -45°. V measures the circular polarisation intensity thanks to a de-phasing of π between the two polarisers. A positive V indicates a direct circular polarisation while a negative V corresponds to an indirect one.
If a wave is totally polarised, we will measure I² = Q² + U² + V². If it is not polarised, we will have I > 0 and Q = U = V = 0. In any other case, the wave will be partially polarised and the parameters will satisfy I² > Q² + U² + V². Table 3.1 shows some particular polarisation states. These four parameters are gathered into the Stokes vector:
S = (I, Q, U, V)^T. (3.18)
Note that Stokes parameters can also be defined from the polarisation ellipse described by the electric field movement as a function of time. Let us call a the maximum amplitude of the electric field (the amplitude in case of linear polarisation), θ the polarisation angle and χ the opening angle of the ellipse (tan(χ) being therefore the axial ratio of the ellipse), as illustrated by figure 3.4, following Tinbergen (1996) with different notations to avoid confusions with scattering angles. Stokes parameters can also be expressed as:
S = (I, Q, U, V)^T = (a², a² cos(2χ) cos(2θ), a² cos(2χ) sin(2θ), a² sin(2χ))^T. (3.19)
(Figure 3.4: polarisation ellipse described by the electric field in the (E_x, E_y) plane, with the amplitude a and the angles θ and χ.)
Scattering: Grain Properties
If one wants to model polarimetric observations of any type of source, as long as there is some dust around the source or somewhere between the source and the observer, one needs to take into account scattering, which is one of the most important polarising mechanisms. The Stokes vector formalism is very useful in polarimetric observations as it allows the polarisation to be fully described with only observable quantities; it will however also prove to be well adapted to describing what happens with the scattering of light on dust grains.
We now examine the various kinds of scattering a photon can undergo. In AGN environments, scattering mostly happens on dust grains, whether spherical or not and aligned or not, and on electrons.
Spherical Grains
Scattering on a homogeneous dielectric sphere is analytically solvable using Maxwell's equations and this was first achieved by Mie (1908). Not only does this computation take into account the polarisation, but it also describes how the initial polarisation will affect the resulting scattering geometry. The phase function of the scattering describes the angular dependence of the scattered intensity; it only depends on two parameters, the form factor x and the complex refractive index of the medium m. Before Mie, John Rayleigh developed around 1871 a theory of scattering which is applicable to cases where x ≪ 1 and is therefore a limiting case of Mie scattering.
- The form factor x is defined as follows:
x = 2πa/λ (3.20)
with λ the incoming wavelength and a the radius of the dust grain.
- The complex refractive index of the medium is composed of the characteristic ratio of the speed of light in vacuum to the velocity of light in the medium (constituting the grain) as its real part, n = c/v, and the characteristic dissipation of the wave, k, as its imaginary part:
m = n + ik (3.21)
We can use Rayleigh scattering if x ≪ 1, which is the case if the wavelength is much larger than the grain radius. Rayleigh scattering is still used because it offers the advantage of being independent of the form factor (as long as it is below 0.2) and therefore requires less computing time for very similar results.
From x and m, Mie theory gives the values of the parameters S_1, S_2, Q_ext, Q_sca and Q_back:
- S_1 links the incoming wave and the scattered one through their components perpendicular to the scattering plane, for every scattering angle.
- S_2 links the incoming wave and the scattered one through their components parallel to the scattering plane, for every scattering angle.
- Q_ext is the efficiency factor of extinction.
- Q_sca is the efficiency factor of scattering.
- Q_back is the efficiency factor of backscattering.
- Q_abs, the efficiency factor of absorption, is computed thanks to: Q_abs = Q_ext - Q_sca.
All these quantities depend on the grain type, the wavelength and the grain radius through the form factor. S 1 and S 2 also depend on the scattering angle. In the following, we will not write all the dependences of these quantities in term of form factor and grain type.
Note that the efficiency factors are linked to the cross-sections as follows:
Q_ext = C_ext / (π a²) (3.22)
Q_sca = C_sca / (π a²) (3.23)
Q_back = (dC_sca / dΩ) / (π a²). (3.24)
We can compute the albedo from these quantities using:
albedo = Q_sca / Q_ext. (3.25)
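As a concrete illustration of these quantities, the sketch below evaluates the efficiency factors and the albedo in the Rayleigh limit (x ≪ 1), where closed-form expressions exist. The refractive index used in the example is only indicative; the full Mie computation used in this work relies on a dedicated routine and is not reproduced here.

    import numpy as np

    def rayleigh_efficiencies(m, x):
        # Efficiency factors for a small sphere (x << 1), Rayleigh limit.
        pol = (m**2 - 1.0) / (m**2 + 2.0)           # polarisability term
        q_sca = (8.0 / 3.0) * x**4 * abs(pol)**2
        q_abs = 4.0 * x * pol.imag
        return q_sca + q_abs, q_sca, q_abs          # Q_ext, Q_sca, Q_abs

    # Illustrative silicate-like grain: a = 0.01 micron, lambda = 0.5 micron
    x = 2.0 * np.pi * 0.01 / 0.5
    q_ext, q_sca, q_abs = rayleigh_efficiencies(1.7 + 0.03j, x)
    albedo = q_sca / q_ext                          # equation 3.25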
Non Spherical Grains
Generally, grains do not have a spherical shape. Oblong grains are for example proposed in the torus of AGNs where they may explain some properties. However, we do not have a rigorous theory as the Mie theory to compute the scattering characteristic of non spherical grains.
The model presented in this thesis does not yet allow non-spherical grains to be taken into account. Doing so is however one of the goals for the near future. One proposed way to include non-spherical grains would be to have two populations of the same type of grains, with different spherical radius ranges, to stand for grains with different cross-sections depending on the angle of incidence. This becomes however more complex if we want the grains to be aligned, as in the case of oblong grains aligned by a magnetic field for instance. For these reasons, we will not enter into the details of non-spherical grains in the following.
Electrons
Scattering of light not only happens on dust, but also on electrons, as discovered by Thomson (Conduction of Electricity through Gases). Thomson scattering is also a peculiar case of a more complete description of scattering on electrons proposed by Compton (1923). Compton scattering is required when the energy of photons is close to the mass energy of the particles, electrons in our case, and therefore λ ≈ h/(m_e c), with h the Planck constant and m_e the electron mass. This corresponds to high-energy photons (λ ≲ 10^-12 m). We will restrict our models to Thomson scattering since we deal with infrared and visible photons only.
In terms of phase functions, Thomson scattering on electrons is very similar to Rayleigh scattering. There is however a significant difference, as there is no absorption of light with Thomson scattering. We will therefore adopt here the same description as previously used for scattering on spherical grains, with particular values corresponding to the Rayleigh phase function without absorption:
S_1(α) = 1.0 (3.26)
S_2(α) = cos(α) (3.27)
Q_sca = Q_ext (3.28)
albedo = 1. (3.29)
α is here the principal scattering angle (see section 3.2.3 for description of the angles).
Scattering: Geometry
With the quantities derived from Mie theory, we are able to fully characterise the scattering geometry, through the two phase functions for the two scattering angles. To describe this geometry, we will use the following convention (illustrated on figure 3.5):
-p is the direction of propagation of the light.
-u is the orthogonal to the last scattering plane.
- α is the principal scattering angle, between the old and the new direction of propagation.
- β corresponds to the azimuthal scattering angle. It describes the variation of the vector u from a scattering event to the next one.
Figure 3.5 - Illustration of the scattering angles α and β on the vectors p (indicating the direction of propagation) and u (normal to the last scattering plane) before and after scattering. Note that the changes of the vectors u and p are simultaneous and are only separated here for a better understanding.
p and u are known from the previous propagation of the light; however, α and β are determined thanks to the grain properties, through S_1 and S_2. We can show that the probability density function of the principal scattering angle α follows:
f_cos(α)(cos(α)) = 2 / (x² Q_sca) × 1/2 (|S_2(α)|² + |S_1(α)|²). (3.30)
The second scattering angle β depends on the polarisation of light before the scattering occurs (given in terms of the Stokes parameters I, Q and U) and on α. We can express the probability density function of β knowing α:
f_β(β|α) = 1/(2π) [1 + (1/2 (|S_2(α)|² - |S_1(α)|²)) / (1/2 (|S_2(α)|² + |S_1(α)|²) I) (Q cos(2β) + U sin(2β))]. (3.31)
Figure 3.6 shows examples of phase functions of the scattering angle α for different values of the form factor. We can see that by increasing the form factor we collimate the scattering forward, with a more complex geometry. Because the grain properties depend on the wavelength, we computed all the figures with the same grain radius, only changing the wavelength to control the form factor.
Note that the phase functions and the probability densities computed before are related through a sin(α) factor, as shown in figure 3.7. This is related to the spherical geometry, where the solid angle is larger toward the equator than in the direction of the pole. In the same way, we can display the phase function of the scattering angle β, depending on both the initial polarisation and α. Figure 3.8 shows a typical measurement of the β angle distribution with a form factor of 1 and without restriction on α, which therefore follows the above phase function (note that this is not the conditional probability density function, since there is no knowledge of α). In this case, the distribution is isotropic, as expected for unpolarised incoming light.
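A Monte Carlo sampling of the two scattering angles following equations 3.30 and 3.31, by rejection sampling, could look like the sketch below. The arrays s1 and s2 are assumed to come from a Mie routine, tabulated on a grid of α values; this is a simplified illustration, not the code used in this work.

    import numpy as np

    def sample_scattering_angles(s1, s2, alpha_grid, I, Q, U, rng):
        # s1, s2: complex amplitudes S1(alpha), S2(alpha) on alpha_grid (radians);
        # I, Q, U: Stokes parameters of the incoming packet.
        s11 = 0.5 * (np.abs(s2)**2 + np.abs(s1)**2)
        s12 = 0.5 * (np.abs(s2)**2 - np.abs(s1)**2)

        # Principal angle alpha: pdf proportional to S11(alpha), including the
        # sin(alpha) factor when sampling alpha rather than cos(alpha).
        w = s11 * np.sin(alpha_grid)
        while True:
            i = rng.integers(len(alpha_grid))
            if rng.random() < w[i] / w.max():
                alpha = alpha_grid[i]
                break

        # Azimuthal angle beta: pdf of equation 3.31, bounded by 1 + amplitude.
        ratio = s12[i] / (s11[i] * I)
        amp = np.hypot(Q, U) * abs(ratio)
        while True:
            beta = rng.uniform(0.0, 2.0 * np.pi)
            f = 1.0 + ratio * (Q * np.cos(2.0 * beta) + U * np.sin(2.0 * beta))
            if rng.random() * (1.0 + amp) < f:
                return alpha, beta

    rng = np.random.default_rng(0)
    # alpha, beta = sample_scattering_angles(s1, s2, alpha_grid, 1.0, 0.3, 0.1, rng)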
Scattering: Mueller Matrix
In order to compute the evolution of the polarisation through scattering with the Stokes formalism, we can use the Mueller matrix combined with a rotation matrix. Indeed, two matrices are required to compute the new Stokes vector after the scattering event. The Mie theory gives the values of S_1 and S_2, which are determined for each wavelength, grain type, grain radius and scattering angle. Once again, we will not specify all these dependences for each matrix element. S_1 and S_2 are used to compute the elements of the Mueller matrix:
S_11 = 1/2 (|S_2|² + |S_1|²) (3.32)
S_12 = 1/2 (|S_2|² - |S_1|²) (3.33)
S_33 = Re(S_2 S_1*) (3.34)
S_34 = Im(S_2 S_1*), (3.35)
with the Mueller matrix being defined as:
M = [ S_11   S_12   0      0
      S_12   S_11   0      0
      0      0      S_33   S_34
      0      0     -S_34   S_33 ]. (3.36)
Note that with this notation, the distribution functions of the scattering angles can be rewritten in a simpler way:
f_cos(α)(cos(α)) = 2 / (x² Q_sca) S_11(α) (3.37)
f_β(β|α) = 1/(2π) [1 + S_12(α) / (S_11(α) I) (Q cos(2β) + U sin(2β))]. (3.38)
By applying the Mueller matrix to the Stokes vector, we take into account the change in polarisation introduced by the main scattering angle (α) between the incoming ray and the new direction of propagation:
S_final = M × S_inter, (3.39)
which gives:
(I_f, Q_f, U_f, V_f)^T = M × (I_inter, Q_inter, U_inter, V_inter)^T, (3.40)
with M the Mueller matrix of equation 3.36.
However, as the scattering does not occur in the previous polarisation plane of the photon, we need to translate the old photon's polarisation into the new frame used for the scattering. We use for this purpose a rotation matrix defined as follows:
R(β) = [ 1   0         0        0
         0   cos(2β)   sin(2β)  0
         0  -sin(2β)   cos(2β)  0
         0   0         0        1 ]. (3.41)
The rotation matrix is mandatory to modify the polarisation plane according to the scattering geometry and depends on β, the azimuthal scattering angle. We then switch from S_init to S_final by applying both matrices:
S_final = M × R(β) × S_init (3.42)
and, in detailed form:
(I_f, Q_f, U_f, V_f)^T = M × R(β) × (I_i, Q_i, U_i, V_i)^T, (3.43)
with M and R(β) as given in equations 3.36 and 3.41.
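The update of the Stokes vector at each scattering event (equation 3.42) translates directly into a few lines of code; s1 and s2 are the complex amplitudes at the drawn angle α, and the complex conjugate in S_33 and S_34 follows the standard Mie convention assumed in the reconstruction above. This is a sketch, not the MontAGN implementation.

    import numpy as np

    def scatter_stokes(stokes, s1, s2, beta):
        # Apply equation 3.42: S_final = M x R(beta) x S_init.
        s11 = 0.5 * (abs(s2)**2 + abs(s1)**2)
        s12 = 0.5 * (abs(s2)**2 - abs(s1)**2)
        s33 = (s2 * np.conj(s1)).real
        s34 = (s2 * np.conj(s1)).imag
        mueller = np.array([[s11,  s12,  0.0,  0.0],
                            [s12,  s11,  0.0,  0.0],
                            [0.0,  0.0,  s33,  s34],
                            [0.0,  0.0, -s34,  s33]])
        c, s = np.cos(2.0 * beta), np.sin(2.0 * beta)
        rotation = np.array([[1.0, 0.0, 0.0, 0.0],
                             [0.0,   c,   s, 0.0],
                             [0.0,  -s,   c, 0.0],
                             [0.0, 0.0, 0.0, 1.0]])
        return mueller @ rotation @ np.asarray(stokes, dtype=float)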
If the incident ray is unpolarised, we have in the Mueller matrix computation I_f = S_11 I_i and Q_f = S_12 I_i, because Q_i, U_i and V_i are null. Therefore, whatever the rotation matrix, the polarisation degree will always be p = Q_f/I_f = S_12/S_11 (see section 3.3.2). The polarisation induced by scattering therefore only depends on the scattering angle α for a given form factor, as long as the initial photon is not polarised. As we displayed the phase functions in the previous section, we can also indicate the polarisation of packets on polar diagrams, as shown in figure 3.9.
One can note that the maximum of polarisation is obtained for scattering angles close to 90°. This configuration is not the most likely, as it generally corresponds to a minimum in the phase functions, as seen in figure 3.6, but it is one of the most important points in many analyses of observed polarisation in astrophysics. As the understanding of polarisation becomes really difficult in cases of multiple scatterings, one of the most studied polarisation signals is the one coming from single scattering, which therefore follows the above relation to the scattering angle, as will be seen in the centro-symmetric polarisation patterns of section 3.3.3 for example.
Polarimetric Observations
When observing with optical and NIR instruments featuring polarimetric capacities, one logical way is to use a polariser and to make it rotate between two exposures. It is however possible to separate the beam into two beams with different directions of the electric field. This can be achieved thanks to birefringent materials, like a Wollaston prism. If so, it is possible to record with a unique device the two perpendicular polarisations at the same time. This still needs a rotation if we want a complete description of the polarisation (remember that we need four measurements), but it is faster than the previous method.
With these methods, what we measure is not directly the Stokes parameters, but the averaged modulus of the component of the electric field transmitted by the polariser. There are a few methods to display polarimetric measurements from the measured quantities, and we will introduce some of them in the following. As this work mainly consists in polarimetric imaging, we will mostly base our explanations on 2D images. However, the same techniques can be applied to any polarimetric measurement.
Q, U and V maps
Translating polarimetric imaging data in terms of Stokes parameters corresponds to creating Q and U maps. These maps are comparable to the I maps, the "classical" intensity maps. Following the same method, it is also possible to create V maps to display the circular polarisation; however, instruments with circular polarisation capabilities are rare (at least in the visible and NIR), because it requires measuring the phase difference between the two orthogonal components of the electric vector.
The way to link the observable quantities to the Stokes parameters depends on the instrument, but generally follows the same method, whether observations are obtained at a given polariser angle or with ordinary and extraordinary beams. The definitions of Q and U used previously are not the only ones (see equations 3.15 and 3.16), and we can also define the Stokes parameters in a way directly linked to the components of the electric field (more commonly used in radio astronomy, see Tinbergen 1996):
I = \langle E_\parallel E_\parallel^* \rangle + \langle E_\perp E_\perp^* \rangle = Q_+ + Q_- = I_0 + I_{90} \qquad (3.44)
Q = \langle E_\parallel E_\parallel^* \rangle - \langle E_\perp E_\perp^* \rangle = Q_+ - Q_- = I_0 - I_{90} \qquad (3.45)
U = \langle E_\parallel E_\perp^* \rangle + \langle E_\perp E_\parallel^* \rangle = 2\,\mathrm{Re}\langle E_\parallel E_\perp^* \rangle = U_+ - U_- = I_{45} - I_{-45} \qquad (3.46)
V = i\left(\langle E_\parallel E_\perp^* \rangle - \langle E_\perp E_\parallel^* \rangle\right) = 2\,\mathrm{Im}\langle E_\parallel E_\perp^* \rangle, \qquad (3.47)
where Q_+, Q_-, U_+ and U_- denote the intensities measured with the analyser oriented at 0°, 90°, 45° and -45° respectively. We then group the observations corresponding to the same Stokes parameter and stack them together. There are different ways to do so; some common ones will be presented in section 3.4.
[Figure: orientation on the sky of the +Q, -Q, +U and -U directions with respect to North (N) and East (E).]
Degree and Angle of Polarisation
Once the I, Q, U and V maps are created, it is useful to describe the polarisation measurements more explicitly, in a way easier to interpret. One classical method is to represent the degree and angle of polarisation. The polarisation degree, noted p, indicates the fraction of the received light which is polarised. p = 1 corresponds to fully polarised light: the oscillation direction is always the same for all the received photons. p = 0 indicates that the light is unpolarised and that the oscillation direction is random. Together with the polarisation angle θ, which gives the orientation of the electric-vector maximum, these quantities are computed from the Stokes parameters as:
p = \frac{\sqrt{Q^2 + U^2 + V^2}}{I} \qquad (3.48)
\theta = \frac{1}{2}\,\mathrm{atan2}(U, Q). \qquad (3.49)
Note that this definition of θ is consistent with the expression of the Stokes parameters in equation 3.19. We can also define the linear and circular degrees of polarisation:
p_{lin} = \frac{\sqrt{Q^2 + U^2}}{I} \qquad (3.50)
p_{circ} = \frac{V}{I}. \qquad (3.51)
The degree of linear polarisation is the most commonly used, and the term "degree of polarisation" often refers to this quantity instead of the total degree of polarisation (once again because it is difficult to measure circular polarisation). Following the I.A.U. (1973) convention, the reference for the angle of polarisation is the position angle of the electric-vector maximum, θ, starting from North and increasing through East. Q and U were defined in this work to be compatible with this convention, and every polarisation map is displayed as such.
An important fact is that all operations on polarimetric data have to be executed before computing the degree and angle of polarisation. This includes for instance binning, convolution/deconvolution by a PSF, the sum of different observations, flat-field or dark corrections. Indeed, Stokes parameters are intensity quantities and (usually) follow Gaussian distributions, which is the case of neither the degree of polarisation nor the angle of polarisation. Even the reduced Stokes parameters, Q/I or U/I, do not necessarily follow such Gaussian distributions, as investigated by Tinbergen (1996). We therefore have to perform all operations on the Q and U parameters, but not on the final products of the processing. In the case of maps, it is possible to combine these two pieces of information into short segments representing the polarisation vectors, to be superimposed on any other map, their length being proportional to the degree of polarisation in the selected zone (pixel or group of pixels) and their orientation corresponding to the angle of polarisation.
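To illustrate the correct order of operations, the following minimal Python sketch bins the Stokes maps first and only then derives the degree and angle of linear polarisation (equations 3.49 and 3.50). The function names and the simple block-summing rebinning are illustrative assumptions, not a description of any actual pipeline.

import numpy as np

def bin_map(m, factor):
    """Rebin a 2D map by summing blocks of factor x factor pixels."""
    ny, nx = (s // factor for s in m.shape)
    return m[:ny*factor, :nx*factor].reshape(ny, factor, nx, factor).sum(axis=(1, 3))

def degree_and_angle(i_map, q_map, u_map, factor=1):
    """Bin the Stokes maps first, then derive p_lin and theta (never the reverse)."""
    i_b, q_b, u_b = (bin_map(m, factor) for m in (i_map, q_map, u_map))
    p_lin = np.hypot(q_b, u_b) / i_b
    theta = 0.5 * np.arctan2(u_b, q_b)
    return p_lin, theta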
Q Tangential and Centro-symmetric Patterns
One of the most commonly used alternatives, especially in the field of exoplanets and disks around young stars, is the representation of the polarisation by Q tangential maps. Q and U maps represent the intensity of light oscillating along a particular direction, respectively vertical/horizontal and at ±45° of this pattern. However, we may not be interested in the absolute angle but in the angle relative to the direction of a central source. We introduce in the following the concept of centro-symmetric polarisation, the Q tangential representation being used to separate the centro-symmetric polarisation from the radial one. Light of a central source scattered once by a circum-central medium (dust, electrons) exhibits a characteristic polarisation pattern, symmetric with respect to the central source. In particular, each scattering plane will contain the observer, the source, and the location of the scattering event. The position angle of the polarisation vector will therefore rotate according to the position angle of the scatterer and produces a peculiar pattern of vectors rotating around the central source, as discussed by Fischer et al. (1996) and Whitney & Hartmann (1993) for instance. This pattern is the one observed in figures 3.11, 3.12 and 3.13.
Q tangential (we will adopt the notation Q_φ in this work, but there are other popular notations) is a parameter computed from Q and U, with a rotation according to the position angle φ of the observed position with respect to the central source. It is defined as (see for example Avenhaus et al. 2014):
Q_\phi = Q\cos(2\phi) + U\sin(2\phi). \qquad (3.52)
The other Stokes parameter can as well be transformed into a radial quantity:
U_\phi = -Q\sin(2\phi) + U\cos(2\phi), \qquad (3.53)
with φ being defined from the position of the observation (x, y) and of the central source (x_0, y_0) as:

\phi = \arctan\left(\frac{x - x_0}{y - y_0}\right). \qquad (3.54)

Q_\phi and U_\phi maps trace the intensity with, respectively, tangential and radial polarisation. In the case of a fully centro-symmetric pattern, we should observe a Q_\phi map very close to the intensity map and a null U_\phi map, a situation close to the one presented previously with an optically thin dust shell, where most of the photons are scattered only once, and displayed in figure 3.15. This is equivalent to the maps of the difference angle between the observed pattern and a purely centro-symmetric pattern, obtained by subtracting φ from the θ map, as shown in figure 3.16.
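For reference, a possible Python implementation of equations 3.52 to 3.54 on Q and U maps is sketched below. The mapping between array axes and the (x, y) directions, as well as the orientation of North, depends on the image and is an assumption here; the function name is ours.

import numpy as np

def azimuthal_stokes(q_map, u_map, x0, y0):
    """Compute Q_phi and U_phi maps (equations 3.52 to 3.54) around (x0, y0)."""
    ny, nx = q_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    phi = np.arctan2(x - x0, y - y0)       # position angle, as in equation 3.54
    q_phi = q_map * np.cos(2 * phi) + u_map * np.sin(2 * phi)
    u_phi = -q_map * np.sin(2 * phi) + u_map * np.cos(2 * phi)
    return q_phi, u_phi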
Polarimetric Instruments
To map the polarisation, techniques have been developed to decrease the required observation time. Two devices are commonly used, the Half Wave Plate (HWP) and the Wollaston prism (named after William Hyde Wollaston). The HWP provides a way to transform the polarisation vector by introducing a phase of π on one of the two components of the electric field. Two HWPs are used on the instrument SPHERE: the first one compensates for the instrumental polarisation offset, and the second HWP is used to reverse the sign of the polarisation of the beam and therefore to select the polarisation orientation reaching the camera (see Thalmann et al.).
A Wollaston prism is a birefringent material (as is the HWP) which separates the two components of the polarisation into two separate beams. It is therefore possible to record the two components at the same time after the prism, so that only two exposures are required for a complete polarimetric observation. This is for instance the device used in NaCo (NAOS + CONICA), the first AO-assisted instrument installed on one unit telescope of the VLT at Cerro Paranal. On this instrument, the two separated components are recorded on the same image (with a field reduction). For more details, see Lenzen et al. and Rousset et al.
SPHERE is another AO instrument of the VLT with polarimetric capabilities. A detailed description will be given in Chapter 4. We are interested here in the InfraRed Dual-beam Imager and Spectrograph (IRDIS) and Zurich IMaging POLarimeter (ZIMPOL) sub-systems, both offering polarimetric modes. IRDIS features a beam splitter which allows two images to be recorded at the same time. This is useful to observe in different filters simultaneously, but it can also be combined with polarisers to record two different polarisations at once, with the same advantage as offered by NaCo. ZIMPOL is based on a more original system. Only one detector is used, but a fast periodic transfer of charges in the CCD, at the same rhythm as a half-wave modulation by an electro-optic material, provides the ability to record one of the polarisation components in half of the pixels and the other one in the other half. This still requires four measurements, but allows the observer to measure the polarisation simultaneously, with the same optical path, at a time-scale much shorter than the atmospheric turbulence, and therefore to conduct polarisation measurements at very high precision.
Data Reduction Methods
In section 3.3 we explained the relationship between the Q and U maps and the recorded images. There are however different techniques to compute these maps, and we describe some of them in the following. In the case of images obtained using a birefringent material splitting the beam, one of the most used nomenclatures is to distinguish images by their ordinary or extraordinary properties. The ordinary beam will for instance correspond to the Q_+ polarisation, and the extraordinary one to Q_-. We will keep in the following a description under the names Q_+, Q_-, U_+ and U_-, assuming that there is more than one image taken under the same conditions, in order to keep the description general.
Note that before applying any of these methods, images should have been reduced following standard astronomical procedures (sky subtraction, flat-field correction and realignment are the basics). Because polarimetry consists in combining intensity images, it is very sensitive to the different instrumental noises and biases. In particular, offsets or differential transmission introduced between two polarisation measurements will alter the resulting measured polarisation. For that reason, different reduction methods were developed, each optimised for particular instruments.
Double Differences Method
The double differences method is the mathematically simplest one. Starting with the quantities Q + , Q -, U + and U -, it consists in reconstructing through addition/subtraction the quantities Q and U. Therefore the operations are:
I = \Sigma Q_+ + \Sigma Q_- \qquad (3.55)
Q = \Sigma Q_+ - \Sigma Q_- \qquad (3.56)
U = \Sigma U_+ - \Sigma U_-. \qquad (3.57)
It has the advantage of not being sensitive to a precise sky background estimation, as only intensity differences are considered.
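A minimal Python sketch of this reduction (equations 3.55 to 3.57) is given below; the function name and the use of lists of frames per HWP position are illustrative assumptions.

import numpy as np

def double_difference(q_plus, q_minus, u_plus, u_minus):
    """Double-difference reduction (equations 3.55 to 3.57).

    Each argument is a list or array of frames taken at the same HWP position.
    """
    sq_p, sq_m = np.sum(q_plus, axis=0), np.sum(q_minus, axis=0)
    su_p, su_m = np.sum(u_plus, axis=0), np.sum(u_minus, axis=0)
    i_map = sq_p + sq_m
    q_map = sq_p - sq_m
    u_map = su_p - su_m
    return i_map, q_map, u_map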
Double Ratio Method
Another method uses ratios instead of subtractions. Dividing images is less sensitive to imperfect instrumental and bias calibrations, if used on images recorded under very close conditions (see Avenhaus et al. 2014), and is therefore commonly used for instruments with two beams. It is thus particularly well suited to SPHERE data reduction. This method is well described in Tinbergen (1996) and Quanz et al. (2011) and computes Q and U through:
I = \frac{1}{2}\left(\Sigma Q_+ + \Sigma Q_-\right) \qquad (3.58)
R_Q = \frac{\Pi Q_+}{\Pi Q_-} \qquad (3.59)
R_U = \frac{\Pi U_+}{\Pi U_-}, \qquad (3.60)
with
Q = I \times \frac{R_Q - 1}{R_Q + 1} \qquad (3.61)
U = I \times \frac{R_U - 1}{R_U + 1}. \qquad (3.62)
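The corresponding Python sketch, written with the same conventions as equations 3.58 to 3.62, could look as follows; as above, the function name is ours and no instrumental correction is included.

import numpy as np

def double_ratio(q_plus, q_minus, u_plus, u_minus):
    """Double-ratio reduction (equations 3.58 to 3.62); pixel gains cancel in the ratios."""
    i_map = 0.5 * (np.sum(q_plus, axis=0) + np.sum(q_minus, axis=0))
    r_q = np.prod(q_plus, axis=0) / np.prod(q_minus, axis=0)
    r_u = np.prod(u_plus, axis=0) / np.prod(u_minus, axis=0)
    q_map = i_map * (r_q - 1) / (r_q + 1)
    u_map = i_map * (r_u - 1) / (r_u + 1)
    return i_map, q_map, u_map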
Matrix Inversion
The last method described here uses matrix inversion to compute the I, Q and U maps, inspired by polarisation state analyser methodologies (see Zallat & Heinrich 2007 for instance). Indeed, by observing with a polariser, we apply to the initial Stokes parameters of the incoming light the following transformation matrix W to get the measured intensities:
I_{meas} = W \times S, \qquad (3.63)
with
S = \begin{pmatrix} I \\ Q \\ U \end{pmatrix} \qquad (3.64)
I_{meas} = \begin{pmatrix} I_1 \\ I_2 \\ \vdots \\ I_n \end{pmatrix}. \qquad (3.65)
W depends on the angles θ n of the polariser for each image recorded:
W = \begin{pmatrix} \cos^2(\theta_1) & \cos(\theta_1) & \sin(\theta_1) \\ \cos^2(\theta_2) & \cos(\theta_2) & \sin(\theta_2) \\ \vdots & \vdots & \vdots \\ \cos^2(\theta_n) & \cos(\theta_n) & \sin(\theta_n) \end{pmatrix}. \qquad (3.66)

Because W may not be a square matrix, it is not always invertible. In practice, it will never be the case (having only three polarimetric measurements/images is rare because of the symmetry of the measurements) and we use the pseudo-inverse (W^T W)^{-1} W^T. Therefore, by applying (W^T W)^{-1} W^T on both sides, we can compute S directly:
S = \left(W^T W\right)^{-1} W^T I_{meas}. \qquad (3.67)
We will illustrate this method with a simple setup. If we have eight measurements with the four following positions of the polariser:
I_{meas} = \begin{pmatrix} Q_+ \\ Q_- \\ U_+ \\ U_- \\ Q_+ \\ Q_- \\ U_+ \\ U_- \end{pmatrix}. \qquad (3.68)
We then get the W matrix as follows:
W = \begin{pmatrix} 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \\ 1 & 1 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{pmatrix}. \qquad (3.69)
With this method, it is possible to properly introduce the instrumental effects into the matrix W.
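As an illustration of this least-squares inversion, here is a minimal numpy sketch. The function name is ours, no instrumental term is included, and the angle values used in the commented example are our reading of the W matrix of equation 3.69 (the document does not state them explicitly), so they should be treated as an assumption.

import numpy as np

def stokes_from_frames(frames, theta_deg):
    """Least-squares I, Q, U maps from n frames, following equations 3.63 to 3.67.

    frames    : array of shape (n, ny, nx)
    theta_deg : the n angles theta_n (same convention as equation 3.66)
    """
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    w = np.column_stack([np.cos(theta)**2, np.cos(theta), np.sin(theta)])  # equation 3.66
    w_pinv = np.linalg.pinv(w)                       # (W^T W)^-1 W^T, equation 3.67
    frames = np.asarray(frames, dtype=float)
    n, ny, nx = frames.shape
    s = w_pinv @ frames.reshape(n, -1)               # Stokes parameters, shape (3, ny*nx)
    i_map, q_map, u_map = s.reshape(3, ny, nx)
    return i_map, q_map, u_map

# Example with eight measurements; angles chosen so that W matches equation 3.69:
# i_map, q_map, u_map = stokes_from_frames(frames, [0, 180, 90, 270] * 2)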
Chapter 4
Observation of Active Galactic Nuclei

Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.
Frank Herbert
As a branch of galaxy studies, the study of AGNs is quite recent. The first evidence for the existence of these galaxies with strong luminosities and bright emission lines dates back to the first half of the XXth century, as introduced in section 1.4. But because of the distance of these objects, very few pieces of information were obtained until the 70's. Most of our current understanding of their properties comes from more recent observations and from the establishment of the unified model of AGNs by Antonucci (1993).
As one of the closest and brightest AGNs, NGC 1068 is one of the most studied. This galaxy is located at about 14 Mpc, corresponding to an angular scale of roughly 72 pc for 1″, giving the possibility to reach a higher resolution than with other AGNs. Furthermore, the galaxy is bright, and its nucleus is luminous enough to be used as a decent guide source for wavefront sensors. For these reasons, NGC 1068 was the first target of our polarimetric high angular resolution study of AGNs.
Context of NGC 1068
NGC 1068 was one of the early observed galaxies. It was indeed included in the first addition to the Messier (1781) Catalogue of Nebulae & Clusters of Stars in 1783. It was therefore first known as M 77 (these are the two main identifiers of the galaxy; it now has 81 of them). According to Messier (1781), it would have been first observed around 1780 by Méchain, and it was first classified into the "nebulae" category, before galaxies were known and before the separation between galaxies and actual nebulae was established.
As mentioned in section 1.4, NGC 1068 played a central role in many breakthroughs in research on AGNs. When Seyfert (1943) established the first classification of galaxies (still called "nebulae") with intense emission lines, what would later become Seyfert galaxies, NGC 1068 was among the first galaxies in the list. Fath (1909) was the first to discover some bright emission lines, detailed later by Slipher (1917) in NGC 1068. Later, new observations of this AGN by Antonucci & Miller (1985) led Antonucci (1993) to propose the unified model of AGNs, explaining the apparent subcategories of Seyfert galaxies and of other kinds of radio-quiet AGNs discovered in between.
If NGC 1068 has been the main target of all these observations and triggered these important discoveries, it is because it is one of the closest and brightest AGNs. Bland-Hawthorn et al. (1997) listed and discussed the typical standard parameters we should use for NGC 1068. Concerning the distance, they relied on a previous study by Tully & Fisher (1988) giving a value of 14.4 Mpc. Table 4.1 lists the basic characteristics of NGC 1068.
General Presentation
Modelling the structure of AGNs has been an important effort since the Seyfert (1943) classification, in order to understand the observed differences among the various species of AGNs. As NGC 1068 is one of the ideal laboratories to test assumptions of the unified model of AGNs, the development of instrumentation since the early 90's has brought new constraints on the structure of its central region.
One of the first well-detected structures related to the unified model is the radio jet. NGC 1068 shows a bipolar radio jet, associated with radio emission extending over 500 pc from the centre, as shown in figure 4.1 from Wilson & Ulvestad (1983). Along the jet path, several components were identified (see for example Alloin et al. 2000 or Gallimore et al. 2004). The jet is bent at the location of a cloud (the component C of Gallimore et al. 1996b), located about 25 pc north of the centre, where various authors found evidence for a shock between the jet and the surrounding media, detected in the Mid InfraRed (MIR). Bock et al. (2000) superimposed radio and MIR detections, showing that the jet is surrounded by different clouds. The central radio component S1 (see Gallimore et al. 1996b) would be close to the centre of the AGN, defined here as the brightest UV source. Because it is hidden behind what we expect to be the dusty torus, the exact location of the centre was unclear until the end of the 90's, according to Kishimoto (1999). It is currently accepted that the centre is located at S1 because the H2O maser emissions detected by Gallimore et al. (1996b) are centred on this location (Das et al. 2006). Because of the inclination of the galaxy on the line of sight (estimated to be around 40°, see Packham et al. 1997 and Bland-Hawthorn et al. 1997), most of the maser detections are on the North side, more visible than the south side, which is hidden behind the galactic plane. This is for instance visible in the radio map of figure 4.1 at 4.9 GHz (λ ≈ 61 mm), but will also be observable in maps presented later.

The region surrounding the jet in the few central arcseconds corresponds to the NLR. This region was named after the detection of strong narrow emission lines in the spectra of Seyfert 2 galaxies, as opposed to the broad lines detected in Seyfert 1, originating from the BLR. The NLR is a highly ionised region, with a conical shape as revealed by HST images (Macchetto et al. 1994, see also figure 4.4). The core of the NLR is a polar outflow of ionised matter (Marin et al. 2016a). It is prolonged by an extended NLR of about 7″ (about 500 pc) according to [OIII] line detections by Macchetto et al. (1994). Capetti et al. (1997) revealed that the morphology of this region and its ionisation properties are dominated by the interaction with the radio jet, regions of line emission and radio lobes being spatially concomitant. According to various studies (see e.g. Axon et al. 1998 and Lutz et al. 2000), the electron density in this region would be about 10^8 to 10^11 m^-3. The inclination of the axis of the bicone with respect to the line of sight was investigated by Das et al. (2006), who found an inclination of 85° (North is closer). Thanks to the first AO observations of NGC 1068 with NaCo, Rouan et al. (2004) resolved the nucleus centre in K and L bands into a core of hot dust of diameter 5 and 8.5 pc respectively. With the same instrument, Gratadour et al. (2006) were able to map and deconvolve the inner parsecs in the IR, revealing structures in the NLR, at locations corresponding to features detected in the optical and MIR. The hot core was resolved at Ks with a FWHM of 2.2 pc, confirming the previous detection, and an upper limit of 5.5 pc on the FWHM of the core in L' and M' bands was found.

The torus and the inner parsecs of NGC 1068, hidden behind the torus, are difficult to detect. The H2O and OH maser emissions detected by Gallimore et al.
(1996a) were linked to a maser disk, which would stand between 1 and 2 pc from the centre, surrounded by the BLR (see also Gallimore et al. 2001 and the sketch of figure 4.2). This disk has to be different from the accretion disk, which is closer to the CE. However, according to Elitzur & Ho (2009), the transition between the BLR and the maser disk is likely to be mainly associated with the sublimation limit. They suggested that the outer boundary of the maser disk is the sublimation radius, which is the inner bound of the BLR. The same idea would apply to the transition with the torus, which would be a colder and thicker continuation of the maser disk (see Elitzur & Ho 2009 and Marin et al. 2016a).
At the very centre would stand a very compact and luminous object, likely a supermassive black hole, surrounded by an accretion disk. This disk would emit light from the NIR to the UV. Inverse Compton scattering would occur in a corona in the polar directions, close to the accretion disk, emitting in the X-ray domain (Marin et al. 2016a). All these elements are illustrated in figure 1.5 and detailed in Chapter 1.
Finally, an accretion phenomenon seems to occur on larger scales. Infalling gas was detected by Müller Sánchez et al. (2009a) thanks to integral field spectroscopy in the NIR in the inner 30 pc, streaming toward the nucleus.
Torus
Most of the components of the archetypal AGN structure are observable and detected at a particular wavelength. As the coldest piece of the model, the dusty torus, surrounding and hiding the CE and the BLR, is poorly constrained. It is far less luminous at shorter wavelengths than in the MIR, especially when compared to the surrounding components like the very bright CE. Its extent is another concern and is still a matter of debate. Its typical size would be less than a hundred parsecs (e.g. Packham et al. 2007), which corresponds to an angular size of approximately 1.3″. Very high contrast and high resolution are therefore required for this signature to be detected.
The estimated torus size has significantly decreased over the years. Planesas et al. (1991) detected 8 × 10^7 M_⊙ of H2 gas within a range of 130 pc to the centre, but did not formally identify this with the torus. Using optically thick HCN emission detection, Jackson et al. (1993) studied the velocity of the gas, leading to a velocity gradient compatible with a torus of 180 pc radius, rotating around an axis inclined at 33°. Young et al. (1996) obtained polarised images in J and H bands and estimated the torus to be larger than 200 pc. They found similar results about the torus axis, with an estimated inclination of 32 ± 3°. Packham et al. (1997) interpreted the polarisation as arising from dichroic absorption through the torus and obtained a size of 220 pc, from several series of polarimetric imaging observations in the NIR. A 100 pc structure was resolved by Alloin et al. (2000) in the NIR, compatible with the torus. However, the current estimate is much smaller. Gratadour et al. (2003) performed spectroscopy of the nucleus of NGC 1068 with the Canada France Hawaii Telescope (CFHT) and the instrument PUEO-GriF (see Clénet et al. 2002) in K band. When compared to radiative transfer simulations, it leads to results consistent with the supposed toroidal geometry and a torus radius smaller than 30 pc. A 22 pc upper limit was determined by Packham et al. (2007) (again from polarimetric observations) and a size smaller than 7 pc was derived by Alonso-Herrero et al. (2011) using MIR spectrometry and SED fitting. Raban et al. (2009) detected two components in the centre of NGC 1068 using mid-infrared interferometry. Their first measured signature is consistent with an emission from a compact region of 0.45 × 1.35 pc that they interpret as the obscuring region, composed of silicate dust at T ≈ 800 K; their modelling required a second, colder (T ≈ 300 K) component, an extension of the torus (3 × 4 pc). García-Burillo et al. (2016) observed with ALMA a 10 pc diameter molecular disk, which they interpret as the submillimeter counterpart to the torus, and estimated the torus radius to be 3.5 ± 0.5 pc. Note that the distance used for NGC 1068 has varied from 18 to 14 Mpc between the oldest and most recent papers, playing a small role in these variations.
The assumed morphology of the torus has also evolved. From the uniform, constant-density distribution of the first models, such as the one by Antonucci (1993), it is now commonly accepted that the torus has a more complex geometry. It is likely to be clumpy, as the hydrodynamical stability of uniform structures on scales of tens of parsecs is unlikely (see for example the work of Elmegreen 1991). Furthermore, fragmentation has been invoked to explain some observations in recent work (Mason et al.; Müller Sánchez et al. 2009b; Nikutta et al. 2009; Alonso-Herrero et al. 2011). Current sketches of the torus geometry and shape can be seen in figures 4.2 and 1.5.
Because the geometry is not well known, the estimations of the optical depth show large variations. Gratadour et al. (2003) estimated a likely τ_V = 40 in the mid-plane, while Marin et al. (2012) used τ_V = 750. Note that fragmentation further complicates the estimation, because the optical depth of individual clouds and their number along the line of sight both need to be constrained. Most studies (see e.g. Marin et al. 2015) argue for clouds with an optical depth of nearly 50 in the visible, but the number of clouds is uncertain, and so is the integrated optical depth. Lira et al. estimated a lower limit of 5 clouds. More recently, Audibert et al. (2017) obtained a number of clouds between 5 and 15 (but based on several Seyfert galaxies). Both these studies lead to an integrated optical depth on the line of sight above 200 in the visible. A lower limit is set by the non-detection of broad lines in unpolarised light from NGC 1068; see for example Ramos Almeida et al. (2016), who revealed these hidden lines only thanks to polarimetry. This implies for certain that the torus cannot be optically thin from the UV to the K band.
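As a rough, order-of-magnitude illustration of how these numbers combine (assuming, as above, individual clouds with τ_V ≈ 50 and the lower limit of about 5 clouds intercepting the line of sight):

\tau_V^{\mathrm{LOS}} \approx N_{\mathrm{clouds}} \times \tau_{V,\mathrm{cloud}} \approx 5 \times 50 = 250,

which is indeed above the value of 200 quoted above.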
The orientation of the torus is also uncertain. Because we do not detect broad lines in unpolarised light from NGC 1068 (a Seyfert 2), we expect the inclination to be relatively close to 90°, as it should be viewed edge-on. Young et al. (1996) derived a torus inclination of 42°, while Alloin et al. (2000) argue for an inclination closer to 65°. However, Marin et al. (2016a) warn about the difficulty of obtaining a good inclination estimator.
The magnetic field in the torus was also investigated, for instance by Lopez-Rodriguez et al. (2015), who used AO observations to constrain the torus composition and the magnetic field intensity and orientation, assuming aligned elongated grains.
Previous Polarimetric Studies
Several polarimetric studies were dedicated to NGC 1068. From the breakthrough spectro-polarimetric observation of Antonucci & Miller (1985) to the more recent polarisation maps of Packham et al. (2007), polarimetry has been able to bring strong constraints. Note that polarisation can also be used to constrain the position of the CE. Because it emits the light that will be scattered in the polar directions, leading to a centro-symmetric pattern, we can trace back the location of the source thanks to the PA of the polarisation vectors. This was done by Kishimoto (1999), who accurately derived the location of the nucleus of NGC 1068.
New Observations with SPHERE
SPHERE (for Spectro-Polarimetric High-contrast Exoplanet REsearch, Beuzit et al. 2008) is an instrument including an extreme AO system, installed on the VLT at Cerro Paranal to detect exoplanets and explore disks around stars. It is composed of three systems: IRDIS, the (infrared) Integral Field Spectrograph (IFS) and ZIMPOL. It has been designed to allow polarimetric observations (see Langlois et al. 2014 for IRDIS and Thalmann et al. for ZIMPOL, the two instruments used in this work). This mode was intended to be used on disks around young stars, to explore planetary formation; however, we expected that we could obtain some good results when applying it to an AGN.
Our program was based on the results of the NaCo observation of NGC 1068, deconvolved by Gratadour et al. (2006) (see figure 4.3). We took advantage of the SPHERE Science Verification (SV) program, at the beginning of this PhD work, to propose a follow-up of the high angular resolution observation of the core of an AGN. The idea was to take advantage of the high-contrast imaging capabilities of SPHERE, with a higher AO correction level than NaCo, to obtain a better resolution on the core of NGC 1068. Furthermore, SPHERE can also be used in polarimetric mode. The proposal was accepted and the SPHERE and ESO staff carried it out, obtaining new images with IRDIS in Broad Band (BB). Based on the results of this observation, we then proposed observations with IRDIS in Narrow Band (NB) as well as an observation with ZIMPOL in R band during the following periods. The proposal for IRDIS was accepted for P97 and the ZIMPOL observation was executed during P99. The accepted proposals can be found in Appendix A.
IRDIS Broad Bands
This section details the first observation conducted in the context of this PhD work, using SPHERE-IRDIS during the SPHERE SV program in December 2014. I joined the team while the proposal was being submitted. I was therefore highly involved in the observation preparation, from the observing-time estimation based on our previous NaCo observation to the Observation Block (OB) preparation. We obtained results by the end of 2014 and spent the first half of 2015 reducing and analysing the data, mainly in the H and Ks broad bands (see figure 4.6), leading to the following article (Gratadour et al. 2015). In the context of the initial stage of my PhD work, I participated in the data reduction, going through all the processing steps to the final images.
Introduction
The unified model of active galactic nuclei, which has been largely accepted, explains the extreme energy production in a compact region by the presence of a supermassive black hole (a few million to billions of solar masses), which is continuously fueled through an accretion disk and which strongly irradiates its close environment mainly at short wavelengths. To cope with the diverse observed aspects of activity, a key ingredient is the presence of circumnuclear, optically thick material, arranged in an anisotropic manner and hiding the central core emission when viewed edge-on (Antonucci 1993). As one of the closest active galaxies (15 Mpc), NGC 1068 is the ideal laboratory for studying nuclear activity. While indirect evidence has been found in this object, thereby corroborating the unified model (Antonucci & Miller 1985;Raban et al. 2009), many unknowns remain as to the nature and distribution of the obscuring material (Nenkova et al. 2002). Recently, the complexity of the circumnuclear environment has been revealed by high angular resolution broad band observations in the near-IR (Rouan et al. 2004;Gratadour et al. 2006;Exposito et al. 2011). In this Letter, we present the new elements introduced by polarimetry.
Observations and data processing
The core of NGC 1068 was observed with the SPHERE instrument on the Very Large Telescope, under the science verification program, using the infrared camera IRDIS (Langlois et al. 2014) in its polarimetric mode. SPHERE has been designed for hunting exoplanets through direct imaging and is equipped with the extreme adaptive optics system SAXO (Beuzit et al. 2008), providing diffraction-limited image quality to IRDIS under nominal atmospheric conditions. It requires an R < 11 star for the adaptive optics loop, but bright and compact extragalactic targets can also be used as guide sources as in the case of these observations. NGC 1068 was observed under rather good turbulence conditions (median seeing of about 1″) on 10 and 11 December 2014. The adaptive optics correction quality was fair even though SAXO could not be used at full capacity owing to the faintness of the guide source. The airmass ranged from 1.1 to 1.26 during the observations. The achieved resolution on NGC 1068 (60 mas, i.e. about 4 pc) at H (1.65 μm) and K′ (2.2 μm) reveals new details over more than 600 pc around the central engine. We built a data reduction pipeline to produce, for each band, maps with pixels of 0.01225″ (0.9 pc) of the total intensity, the degree of linear polarization, and the linear polarization angle. This data reduction pipeline includes data pre-processing (detector cosmetics: flat-fielding and bad pixel correction, distortion correction: vertical spatial scale is multiplied by 1.006, sky background subtraction and true north correction as measured on images of 47 Tuc), high accuracy shift-and-add (Gratadour et al. 2005a), and a dedicated polarimetric data reduction procedure. The latter is based on the double-ratio method, which is adequate for dual beam analyzers using half-wave plates (Tinbergen 1996). In such systems the polarization information is contained in the ratio of the two beams but mixed up with the system gain ratio for each pixel. To filter out the
Polarization in the ionization bicone
At H and at K′, very similar polarized intensity images reveal a bright central source and a distinct bicone, whose axis is at position angle (PA) of about 30°. While the northern cone can be associated to the cloud complex of ionized gas that is extensively studied through optical ionization lines (Groves et al. 2004) and near-IR coronal lines (Barbosa et al. 2014), the southern cone appears as a perfectly symmetrical structure with respect to the central engine location. It shows distinct edges, which are highly polarized close to the base, as well as successive well-defined arcs at 125, 170, and 180 pc from the nucleus to the southwest.
The orientation of the bicone with respect to the host galaxy explains that the southern cone is more conspicuous in the near-IR than in the visible (Das et al. 2006): the northern cone is above the disk, while the southern cone, below, is reddened. This is also consistent with the properties of the 21 cm absorption feature observed with the VLA (Gallimore et al. 1994). The highly polarized cone edges are consistent with a simple geometric assumption in which the edge-brightened regions have a scattering angle close to 90° required for maximum polarization (Tadhunter et al. 1999). The southern arcs can be partly associated to patchy clouds detected in the optical with HST/FOC (Macchetto et al. 1994) and to features observed with coronagraphy in the near-IR (Gratadour et al. 2005b). The linearly polarized emission in the bicone is largely dominated by a centro-symmetric component, as shown by the map of polarization angle, and is thus interpreted as scattered emission from the central engine (Simpson et al. 2002). Kinematic models of a global outflow (conical or hourglass shaped) in the narrow line region (NLR), originating in a disk or torus wind, have been successfully compared to spectroscopic data in the optical (Das et al. 2006) and near-IR (Riffel et al. 2014) to explain the observed emission-line velocities in the whole bicone. In these models, the NLR clouds are accelerated to about 1000 km s^-1 up to a distance of 80 pc from the nucleus, keeping a constant velocity beyond (hourglass model) or decelerating to the galaxy's systemic velocity (conical model), consistent with the location of the arc structure. The sharp and regular circular morphology of the near-IR polarized emission suggests a vast bow shock stemming from the interaction of this outflow with the galactic medium. It could explain the increased degree of polarization in these regions owing to the inescapable accumulation of matter, hence of dust, because of the velocity change at this location, resulting in an increase in Mie scattering efficiency. This interpretation is consistent with the lobe structures observed on radio images (see Wilson et al., for instance) and implies the presence of a bow shock.
Evidence for an extended nuclear torus
We subtracted a purely centro-symmetric component from the map of polarization angle, assuming the central emitting component to be at the location of the near-IR peak intensity and with the proper reference angle, fit by minimizing the median residual angle over the bicone. Following this method, we find a PA for the bicone axis of 33° ± 2°. We are only interested in the absolute value of the difference between the directions of the polarization vectors and the centro-symmetric pattern (hence an angle between 0 and 90°). It is the quantity displayed in the left-hand panel of Fig. 2 for the H band.
The result confirms that the polarized intensity in the bicone is completely dominated by scattered light from the central source. Most importantly, it reveals a compact elongated region in the inner arcsecond around the nucleus, showing a clear deviation from the centro-symmetric pattern with linear polarization aligned perpendicular to the bicone axis and extending over 55 pc perpendicular to this axis and about 20 pc parallel to it. The same result is obtained in both bands as shown by the magnified versions of the difference angle map around the nucleus for both the H (left) and K′ (right) bands pictured in the upper part of the right panel of Fig. 2. In addition to the lack of significant residual pattern in the peripheral zones on the difference image at each extremity of the elongated structure, we believe this feature is neither an instrumental artifact nor a shadow effect tracing a region where the polarization is cancelled totally due to multiple scattering in denser material closer to the nucleus. It should be noticed, though, that the bicone shape of this feature is an artefact inherent to the displayed parameter (difference of angle). Indeed, for points on the ionization bicone axis, the apparent transverse component becomes parallel to the centro-symmetric vector so that the difference in angles vanishes along this axis. Most importantly, this subtraction process only enhances a pattern already observable on the polarization angle map without processing, as shown by the image at the bottom of the right-hand panel of Fig. 2.
In this image, the polarization vectors are displayed as bars overlaid on the degree of the linear polarization map. A transverse polarization angle component crossing the nucleus and extending over about 0.8″ with the same PA can be clearly identified. Such a feature, with a central transverse polarization, has already been observed at lower angular resolution (Packham et al. 1997) and has been interpreted since as dichroic absorption of the central light radiation by aligned non-spherical dust grains. Additionally, it is believed that the transition between dichroic absorption and emission could explain the sudden flip of the polarization angle around 10 μm (Packham et al. 2007) toward the nucleus. However, this interpretation appears incompatible with the observed extension of this feature appearing at high angular resolution because it would require a still undetected extended source of emission in the background to be attenuated by this structure.
This feature is similar to the polarimetric disk effect observed in young stellar objects. An original interpretation of this effect (Bastien & Menard 1988) involves a thick disk or flat torus, on which the radiation from the central engine is scattered twice on average, before reaching the observer: once on the upper (or lower) surface of the disk and then near its external edge. Recently, refined radiative transfer modeling (Murakawa 2010) has supported this interpretation. In this model, grains can be spherical or not, and grain alignment is neither required nor expected to have a significant impact. These simulations show that an extended patch of linear polarization aligned in a direction perpendicular to the bicone axis can appear at its base for a thick disk (thickness parameter greater than 0.3) with rather small dust grains (0.25 μm diameter). The comparison of their Fig. 6 with our data, in the case of double scattering, is particularly convincing. Additionally, this model is able to reproduce the highly polarized bicone edges we observe.
In this simulation, the thickness of the disk, viewed almost edge-on, was taken to be 30% of the diameter, which is consistent with the thickness of 15 pc measured at the outer edges of this feature on our data. For the effect to be efficient, the optical depth of the torus cannot be much larger than 1, otherwise multiple scattering would cancel the polarization: this condition is compatible with the range generally considered for NGC 1068. An optical depth on the line of sight of 1.25 in K band is inferred, for instance, to cope with high angular resolution spectroscopic observations (Gratadour et al. 2003). We thus believe that this extended patch of linear polarization aligned in a direction perpendicular to the bicone axis is the first direct evidence of an extended torus at the core of NGC 1068.
The derived parameters (radius of 27 pc and thickness of 15 pc) are consistent with recent results using ALMA data that complement a spectral energy distribution analysis (García-Burillo et al. 2014). Additionally, we derive a PA 118 • for the uniformly polarized structure that is consistent with the alignment of the two patches of hot molecular hydrogen observed in the near-IR (Müller Sánchez et al. 2009) and the orientation of the extended structure observed with VLBA at 5 GHz (Gallimore et al. 2004). Moreover, the map of degree of polarization seems to suggest that the obscuring complex extends way beyond the elongated feature we detect around the core, with two lobes of extremely low polarization levels on each side of the nucleus showing efficient screening from the central radiation.
While the polarization level in this extended patch is rather low, which agrees with the polarized disk model, one thing the latter cannot reproduce is the ridge of higher linear polarization observed at the location of the nucleus with a PA of 56° (center of bottom right panel in Fig. 2). We propose that this ridge be interpreted as the location at which the linearly polarized emission due to dichroic absorption of the central light originates. While this effect was not included in the other simulations we mentioned, the measured degree of polarization (5-7%) is consistent with such an interpretation. The size of the minor axis of this feature is consistent with the size of the dust sublimation cavity, believed to be the main contributor to the near-IR emission from the core, as it would appear convolved by the instrumental PSF. However, its extension (comparable to the thickness of the previously inferred torus) and exact orientation are questionable.
A deeper analysis of our data, including a close comparison of the H and K′ bands, and numerical simulations including dichroic absorption, are required for a better understanding of the nature of this ridge.
Centro-symmetric Subtraction
A critical point in our final interpretation is the subtraction of the centro-symmetric pattern, showing the central region as having a constant polarisation orientation. This subtraction needs to be applied with a centro-symmetric pattern centred on the precise location of the central source of the AGN. However, it does not require any hypothesis, as it is only a different way of displaying the same data. As discussed in section 3.3.3, this is an operation that can be conducted directly on the final angle map. A similar result can also be obtained by computing Q_φ and U_φ as explained in section 3.3.3, a usual way of representing such analysis on polarimetric data with SPHERE and in the exoplanetary field, but more rarely used with AGNs. After the stimulating discussions that followed the article publication, we derived those maps, displayed for the Ks band in figure 4.7. These images confirm the peculiar polarisation in the centre of NGC 1068, with a central polarimetric signal neither tangential nor radial, consistent with a constant polarisation orientation. The tangential pattern of the polarisation in the bicone is also clearly confirmed. An important new feature brought by these images is the intensity of these different structures. While the bicone centro-symmetric structures show, as expected, high fluxes, the very central region where the polarisation orientation is constant also exhibits a significant intensity, making the interpretation of the signal in this region even harder.
IRDIS Narrow Bands
Observations
According to Efstathiou et al. (1997), one significant difference between the polarisation induced by dichroic absorption/emission by aligned grains and the one created by scattering on more or less spherical grains is the switch of the polarisation angle when looking at different wavelengths. The change from dichroic absorption to emission indeed triggers a 90° change in the polarisation orientation when increasing the wavelength. It would occur around λ = 4 µm, according to Efstathiou et al. (1997). However, SPHERE can only observe up to the K band (2.2 µm). Our idea was nevertheless to obtain SPHERE HAR maps of the core of NGC 1068 at several wavelengths, looking for an evolution of the polarisation. We would likely not observe the switch, but we might catch some modifications of the polarisation with respect to the wavelength. We therefore applied for a follow-up observation with SPHERE. Observations were conducted in visitor mode, the only mode allowed for polarimetric observations with IRDIS for this period, by Daniel Rouan and me, between the 11th and the 14th of September 2016. The observation log is outlined in table 4.2.
Data Reduction
Polarimetric images were obtained for four positions of the HWP (1/4 of the observing time per position). All data reduction followed the steps below:
- Sum of the sky images (using the median, see section 4.2.2.4)

We also applied these steps before the polarimetric reduction. All polarimetric data were then reduced following the three methods described in section 3.4: double subtraction, double ratio and inverse matrix. The final image quality obtained in the various bands is compared in table 4.3 using the pseudo-noise method described in the next paragraph. We also tried improved versions of the inverse matrix, allowing elements of the matrix to vary slightly around 1 or -1 to account for instrumental depolarisation, with results very close to the classical inverse matrix. We derived our final images with the inverse matrix method, which gives a slightly lower pseudo-noise (see table 4.3). Note that currently the images are not corrected for aliasing and true North orientation (about 1°). This should be done before publishing final data products, but will not significantly change our analysis.
As discussed in Tinbergen (1996), it is difficult to evaluate the quality of polarimetric maps. The Signal-to-Noise Ratio (SNR), for example, is not well defined, as a higher intensity does not correspond to a higher degree of polarisation. Furthermore, when the degree of polarisation is small, neither the polarisation angle nor the polarisation degree follow Gaussian distributions. In order to evaluate the different generated maps and compare the methods, we based our analysis on the local variations of the degree of polarisation. Thus we make sure to compare the variations in regions where the SNR of the intensity maps is identical, which is required since the SNR affects the determination of the degree of polarisation, as detailed in Clarke et al. We subtracted from each pixel the mean of the four closest pixels, creating a "pseudo-noise" map m_σ, and then looked at the dispersion of these values on all the maps, as follows:
\sigma_{pol} = 1.4826 \times \mathrm{median}\left(\left|\sqrt{0.8}\, m_\sigma - \mathrm{median}\left(\sqrt{0.8}\, m_\sigma\right)\right|\right) \qquad (4.1)
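A direct Python transcription of this estimator could look as follows; the choice of the convolution kernel for the four-neighbour mean and the edge handling are our assumptions, and the function name is only illustrative.

import numpy as np
from scipy.ndimage import convolve

def pseudo_noise(p_map):
    """Pseudo-noise of a polarisation-degree map (equation 4.1).

    Each pixel is compared to the mean of its four closest neighbours; the
    dispersion is then estimated with a robust median absolute deviation.
    """
    kernel = np.array([[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]])
    m_sigma = p_map - convolve(p_map, kernel, mode='nearest')
    r = np.sqrt(0.8) * m_sigma
    return 1.4826 * np.median(np.abs(r - np.median(r)))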
One important parameter for polarimetric observations with SPHERE is the derotator angle. It affects the polarisation measurements and should therefore be carefully planned or taken into account. Figure 4.9 shows the polarimetric efficiency depending on the filter and derotator angle. One can note that the NB filters have not been tested and their polarimetric efficiency is therefore unknown. We expect their efficiency to follow the general trends of the other filters, but we have no means to know exactly what the efficiency is for a given angle. We placed our observations on a graph indicating for each of them the derotator angle (figure 4.10). Most of our observations have been done in the -15° to 15° range, i.e. at the optimal position for BB filters. However, this is not the case for some images taken with the CntK1 and CntK2 filters. CntK2 furthermore shows the lowest polarimetric signal (see fig. 4.18 of section 4.2.2.5). We therefore expected this lack of signal to come from the derotator and compared the final maps computed with all raw images to those obtained with a selection of raw images with an optimised derotator position. Images are shown in figure 4.11 and results are shown in table 4.4. Note that, despite it having a somewhat higher polarimetric signal, we also conducted for the same reasons the same experiment on the CntK1 filter, with similar results (not shown in this thesis).

Table 4.4 - Impact of derotator position on polarisation degree.

Filter      CntK2 full    CntK2 selected
σ (ADU)     0.20695       0.24220
We see no clear improvement when selecting the images in the optimised range of derotator angles. The difference is likely to arise from the exposure time, because it is shorter for the selection, therefore inducing a lower SNR. Furthermore, the images look very similar and the low polarisation signal is unlikely to be entirely due to the derotator. We finally used the reduced image obtained with the complete observation, because we cannot go further into this comparison without measurements of the NB filter efficiencies.
Polarimetric Sky Strategy
Preparing the OB, we expected the sky and science acquisitions to match. However, it appeared that this is not the case. The SPHERE-IRDIS instrument is set to take sky images with only one HWP position (by default 0°). This is consistent with the fact that the sky should not be highly polarised in the NIR (at night). This difference explains the difference in acquisition time between scientific and sky observations, as shown in table 4.2. We however changed the instrument setup to take skies with the four positions of the HWP. This allowed us to test whether the use of only one polariser position for the sky measurement is enough to fully take into account the sky emission in polarimetric measurements. Note that this is not analytically trivial as long as the polarimetric data reduction includes non-linear processes (see for example Clarke et al.). We tried three combinations of skies: all skies combined, 1/4 of all skies stacked, and skies combined by HWP position. The second sky strategy is to be compared to the first method with the same exposure time per created final sky. We compared these three reductions with a fourth final image computed without removing skies, with and without re-centring, because the absence of sky subtraction induces an increased amount of bad pixels (see the 4th row of figure 4.12). Final images for the CntK1 filter are displayed in figure 4.12 and results are shown in table 4.5.
One striking fact is that the apparently best final image is the one without sky subtraction. However, this image shows polarisation degrees very different from the other three methods. The SNR should not affect the value of the resulting polarisation, and the values of this map must be rejected. Once again, this is not trivial because it has not been demonstrated analytically so far. We conducted simulations to confirm this: we created some arbitrary maps with some intrinsic polarisation; we added a sky image generated from a random Gaussian distribution; we reduced these data using another randomly created sky (with the same distribution but another initial seed) or without sky subtraction. Results on the polarisation degree are shown in figures 4.13 and 4.14. Figure 4.15 indicates that the sky correction has very little impact on the measured polarisation angle. This is furthermore confirmed by the reduced images of figure 4.12. However, that is not the case for the polarisation degree (figures 4.13 and 4.14). The sky-corrected image has the lowest pseudo-noise; however, the uncorrected image has a polarisation degree with an important offset with respect to the initial theoretical value, leading to a wrong estimate. Despite not giving the precise value of the expected degree of polarisation, and therefore not being perfect, the sky-subtracted image distribution is almost centred on this value, giving a fair estimate of the true polarisation.
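A toy version of such a test can be written in a few lines of Python. This is only a sketch of the principle (a uniformly polarised source, a Gaussian sky, one frame per HWP position), not the actual simulation used in this work, and all numbers are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
shape, p_true = (128, 128), 0.05

# Toy polarised source: I = 1000, Q = p_true * I, U = 0
i_src = 1000.0 * np.ones(shape)
q_src = p_true * i_src
frames = {'Q+': 0.5 * (i_src + q_src), 'Q-': 0.5 * (i_src - q_src),
          'U+': 0.5 * i_src, 'U-': 0.5 * i_src}

def add_sky(img):
    """Add a Gaussian sky background to a frame."""
    return img + 200.0 + 5.0 * rng.standard_normal(shape)

obs = {k: add_sky(v) for k, v in frames.items()}                    # raw frames with sky
skysub = {k: v - add_sky(np.zeros(shape)) for k, v in obs.items()}  # independent sky model

def median_degree(fr):
    """Median degree of linear polarisation from the four frames."""
    i = fr['Q+'] + fr['Q-']
    q, u = fr['Q+'] - fr['Q-'], fr['U+'] - fr['U-']
    return np.median(np.hypot(q, u) / i)

print('true p                 :', p_true)
print('without sky subtraction:', round(median_degree(obs), 4))     # biased estimate
print('with sky subtraction   :', round(median_degree(skysub), 4))  # close to p_true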
Polaro-imaging Results
Following the above remarks, what we consider as the best estimates for the intensity and polarimetric maps of NGC 1068 in the three NB CntH, CntK1 and CntK2 are shown respectively in figures 4.16, 4.17 and 4.18.
Classical Imaging Results
The intensity images of NGC 1068 in the NB H2 and in the corresponding continuum CntK1 are shown in figure 4.19. The ratio NB/continuum is shown in figure 4.20. Note that the conspicuous features seen in figure 4.20 are mainly ghost reflections of light, most likely occurring on the H2 filter. This unfortunate behaviour has been identified by the SPHERE consortium and does not allow us to go deeper into the analysis of this observation.
ZIMPOL
In order to further investigate the wavelength dependency of the polarisation in NGC 1068, we planned to use the impressive performances of ZIMPOL in the visible to obtain complementary polarimetric maps. The luminosity of NGC 1068 appears to be even more limiting in the visible than in the NIR, restricting us to R band observations only. We proposed this program in P99 but had the opportunity to obtain these data
Other Targets
With those ZIMPOL observations in the R NB conducted in late 2016, we reached the best high resolution polarimetric observations achievable on NGC 1068. The ability to directly trace the torus using polaro-imaging motivated us to push forward our investigations towards other AGNs, to look for a similar behaviour, or for differences that could be linked to the inclination of the torus on the line of sight. Indeed, observing a similar behaviour of the NIR polarisation in other Seyfert galaxies would bring extremely strong arguments in favour of the unified model and would allow us to build upon the analysis done on NGC 1068.
Three objects, NGC 1097, NGC 1365 and NGC 1808, were selected on the basis of their rather small distances (respectively 16, 18 and 11 Mpc) and of the R-magnitude of the central core, as measured using HST archives. Because of their distance, the resolution that should be obtained is between 0.65 and 1.1 pc/pixel. All sources have been observed using polarimetry at B, B y , V, R y and H bands by Brindle et al. (1990[START_REF] Brindle | [END_REF]. Furthermore, NGC 1097 has already been observed with NaCo using the nucleus as a guide source for AO (Prieto et al. 2005). The last object, ESO 323-G077, was selected as one of the brightest polar scattering dominated Seyfert 1, a type of AGN which is expected to be observed with a line of sight close to the vertical extend of the torus (Smith et al. 2004), a particularly interesting case in the study of AGNs geometry.
We therefore included these targets in proposals for ESO (European Southern Observatory) P98 and P99, using SPHERE-IRDIS and looking for high resolution on the dusty molecular torus, similarly to what we obtained on NGC 1068, with the aim of constraining the extent and opacity of the torus. However, these targets are all fainter in the visible than NGC 1068 and the AO performance will be lower. After the last observation run on NGC 1068 during P97, we decided to propose for P100 to use NaCo instead of SPHERE to guarantee the feasibility of these observations, and we are currently discussing the continuation of the program.
NaCo would not allow us to reach the same resolution on the inner structure as what was achieved with SPHERE. However, its wave-front sensors allow the AO loop to be closed safely on fainter targets. Thanks to this trade-off, a high enough resolution should be reached to look for the signature of the torus at the tens-of-parsecs scale in these other targets. Note that we required for that purpose a high precision on the polarisation angle, a condition that is satisfied by NaCo according to Witzel et al. (2011). With the constraints of the previous polarisation observations in the visible and using our radiative transfer model, it will be possible to test the validity of our interpretation or bring new constraints on the geometry and inner structures of those AGNs.
Furthermore, the inclinations of NGC 1097 and NGC 1365 have been derived with different methods that do not give consistent values (Storchi-Bergmann et al. 1997; Risaliti et al. 2013; Marin et al. 2016a). This issue could be tackled thanks to the additional and independent polarisation observations with NaCo. Prieto et al. (2005) found nuclear spiral structures in NGC 1097 and claimed that a torus should be smaller than 10 pc. Adding the polarisation information at high resolution thanks to NaCo data could confirm or give another perspective on these results. The role of the magnetic field, as discussed in Beck et al. (2005) from radio data, could also be considered in our analysis since aligned grains are expected to affect polarisation through dichroic absorption or emission.
Finally, we also selected in our target list one polar-scattering-dominated AGN. Despite being further away and therefore observed at a lower spatial resolution, polarisation maps of this type of object could bring strong constraints on the inner geometry of AGNs, especially on the hypothesis of the low range of inclinations assumed for these particular Seyfert 1s.

Because of the difficulty of interpreting polarimetric data, radiative transfer codes are essential tools. Several codes have been developed since computational power allowed such simulations to be envisaged; however, as far as we know, there is no radiative transfer code able to fulfil the needs of all wavelength ranges. In order to analyse the polarimetric observations presented in Chapter 4, we needed to conduct simulations to assess our interpretations and improve them. For the purpose of this thesis, centred on AGN observations at high angular resolution in the NIR, we focused on radiative transfer from the IR to the visible, including the proper treatment of polarisation.
Few codes corresponded at least partly to these criteria at the time we were looking to extend the study in this direction. We identified: RADMC-3D (Dullemond 2012), STOKES (Goosmann & Gaskell 2007; Marin et al. 2012, 2015), SKIRT (Stalevski et al. 2012), MCFost (Pinte et al. 2006) and Hyperion (Robitaille 2011). While RADMC-3D, STOKES, MCFost and Hyperion are available to the community, this was not the case of SKIRT when we started this study (early 2015). Furthermore, RADMC-3D did not include fully usable polarisation at this time. We conducted some short tests on MCFost and Hyperion, but it appeared that we would need to spend some time learning how these codes work in detail to be able to build the precise cases we would be investigating.
For our purpose we needed a code able to simulate polarisation in the NIR, especially in the H and K bands, therefore between 1.6 and 2.2 µm, whereas STOKES was designed for use below 1.0 µm. However, we have had the opportunity to work in collaboration with the group Hautes Énergies of the Observatoire Astronomique de Strasbourg that develops the STOKES code, and this collaboration allowed us to improve the interpretation of the data. Their help was also beneficial for the simulation work, where their experience and the common tests we defined allowed us to correct bugs and improve several aspects of our simulations.
Because of our particular requirements, we decided to carry on the development of our own simulation code optimised for AGN observations in the infrared. MontAGN has been developed in the Laboratoire d'Etudes Spatiales et d'Instrumentation en Astrophysique (LESIA), first by Jan Orkisz during an internship from September 2014 to February 2015. Its development started before we obtained our first polarimetric observations (Chapter 4) and its aim was to interpret AGN HAR observations, initially without taking polarisation into account. We decided to continue developing this code and to extend it to polarimetry, with the goal of making it public, a good way to improve the reliability of results through the cross use of codes. This would also allow us to assess whether our assumptions on double scattering with a proper torus geometry were valid. It required an irreducible amount of time to build this code, probably more than getting used to one of the other codes, but we end up with a fully controlled code with very specific options.
Overview
MontAGN (acronym for "Monte Carlo for Active Galactic Nuclei") is a radiative transfer simulation code, written entirely in Python 2.7. It uses the libraries Numpy and Matplotlib (Hunter 2007). Its first version only included simple scatterings and no polarisation. It has since been upgraded to its current version as a part of this thesis work and is intended to be released as open source in the near future.
As a radiative transfer code, MontAGN aims at following the evolution of photons, from their emission by one of the sources defined in the simulation to their exit from the volume of the simulation, where they are recorded. This code uses photon packets instead of propagating single photons. This allows a choice between two propagation techniques, so that simulations can be run in two modes:
- If re-emission is disabled, the energy of the packet is modified to take into account, at each event, the fraction of the initial photons that continue to propagate in the medium. Therefore, at each encounter with a grain, the absorbed fraction of photons is removed so that only the scattered photons propagate. Murakawa (2010) used the same technique in Young Stellar Object (YSO) simulations.
- If re-emission is enabled, the code takes into account the absorption and re-emission by dust, as well as the temperature equilibrium adjustment at each absorption. The absorbed photon packet is re-emitted at another wavelength with the same energy but a different number of photons, to conserve the energy in the cell. The re-emission wavelength is not only dependent on the new cell temperature but also on the difference between the old and the new temperature. By following these steps, we take into account that the previous re-emissions were not emitted with the final emission function, following the method of Bjorkman & Wood (2001). We therefore use the differential spectrum between the cell emissivities before and after the absorption of the packet to emit the new photon packet at a proper wavelength (see section 5.5.2 and figure 5.6, extracted from the former article).
When re-emission is disabled, every packet propagates until it exits the medium. This allows us to get much better statistics at the end of the simulation, as every packet is taken into account. But it also requires a significant number of packets in each pixel of the final images, since a pixel may collect only packets with very few photons (because of several scatterings), a situation that may not be representative of the actual pixel polarisation. This point will be discussed in section 5.7.
Pseudo-code
We give here a general overview of the algorithm of MontAGN. All the non-trivial steps are detailed in the following sections.
General MontAGN algorithm:
- Draw of a τ that the packet will be able to penetrate: computed by inversion from the optical depth penetration definition P(τ_x > τ) = e^{-τ}.
- MontAGN 10: Determination of the next event of the packet: interaction if τ is too low to allow the packet to exit the cell, or exit from the cell (in that case, decrease of τ by the optical depth of the cell and go to MontAGN 18).
- MontAGN 11: Determination of the properties of the encountered grain/electron: size and therefore albedo, Q_abs and Q_ext.
- MontAGN 12: Case 1 (no re-emission): decrease of the energy of the packet according to the albedo (and go to MontAGN 16). Case 2 (re-emission): determination from the albedo of whether the packet is absorbed. If not, go to MontAGN 16.
- MontAGN 13: Determination of the new temperature of the cell: determined from the previous one by balancing incoming and emitted energy.
- MontAGN 14: If the new temperature is above the sublimation temperature of a type of grain: update of the sublimation radius of this type of grain.
- MontAGN 15: Determination of the new wavelength and direction of propagation of the packet: directions determined randomly; wavelength determined from the cell temperature and from the difference between the new and old temperatures if updated. Go to MontAGN 18.
- MontAGN 16: Determination of the scattering angles and of the new directions of propagation and polarisation: from the grain properties and the wavelength, determination of α, S_1 and S_2; determination of β; computation of the new vectors p and u (see figure 3.5).
- MontAGN 17: Construction of the matrices and application to the packet's Stokes vector: from the angles, S_1 and S_2, construction of the Mueller matrix and of the rotation matrix (as described in section 3.2.4); application to the Stokes vector of the packet.
- MontAGN 18: Determination of whether the packet exits: if the photon is still in the simulation box, go back to MontAGN 10.
- MontAGN 19: Computation of the packet properties in the observer's frame: from p and u, determination of the orientation of the polarisation frame and correction of the Stokes parameters using a rotation matrix.
- MontAGN 20: Recording of the packet's properties: written in the specified files.
- MontAGN 21: Are there other packets to launch? If yes, go back to MontAGN 06.
- MontAGN 22: End of simulation.
- MontAGN 23: Computation of the displays if requested.
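To make this step list more concrete, the following minimal Python sketch reproduces the overall control flow of the loop in the no-re-emission mode. It is only an illustration: the function and variable names (run_packets, random_direction), the isotropic phase function and the homogeneous medium are simplifying assumptions and do not correspond to the actual MontAGN routines.

import numpy as np

rng = np.random.default_rng(0)

def random_direction():
    # Isotropic unit vector (toy stand-in for the sampled scattering directions).
    u = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(1.0 - u * u)
    return np.array([s * np.cos(phi), s * np.sin(phi), u])

def run_packets(n_packets, kappa=1.0, box_size=10.0, albedo=0.6, max_scatterings=50):
    # Toy version of the main loop without re-emission:
    # draw tau, move the packet, weight by the albedo, scatter, record on exit.
    recorded = []
    for _ in range(n_packets):
        pos = np.zeros(3)                        # packet emitted at the central source
        direction = random_direction()
        energy, n_scat = 1.0, 0
        while n_scat <= max_scatterings:
            tau = -np.log(rng.random())          # optical depth the packet can penetrate
            pos = pos + (tau / kappa) * direction    # homogeneous medium: d_tau = tau / kappa
            if np.max(np.abs(pos)) > box_size:       # the packet leaves the simulation box
                recorded.append((pos, direction, energy, n_scat))
                break
            energy *= albedo                     # keep only the scattered fraction of photons
            direction = random_direction()       # isotropic toy phase function
            n_scat += 1
    return recorded

packets = run_packets(1000)
print(len(packets), "packets recorded")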
Options Available
MontAGN is usable with many options. This section contains a short description of options available for simulations.
- Thermal re-emission: The main option is the enabling or disabling of the re-emission by dust. When disabled, all packets continue to propagate until they exit the medium or reach the scattering limit. When enabled, absorbed packets are re-emitted at another wavelength, with an update of the temperature of the cell. The final model contains the new adjusted temperatures of the grid.
- Pre-existing model: It is possible to use an already existing model by providing it as a model class object containing the principal simulation parameters.
- Force the wavelength: If the goal of the simulation is to obtain monochromatic images of a source, it is more efficient to emit only at the observed wavelength, and it is possible to specify this to MontAGN. Note that this should not be used without good reason together with the re-emission mode, because the computed temperature might be affected. Also note that even in no-re-emission mode, the final map will only show light coming originally from the sources and not from the dust. This option should therefore not be used at MIR or longer wavelengths either.
- Grain properties: Dust grains all have a default size distribution. All grains are considered as spherical dielectrics, with a Mathis, Rumpl and Nordsieck distribution (MRN, see Mathis et al. 1977), ranging from 0.005 to 0.25 µm with a power law of index -3.5. All these parameters are tunable using different keywords, for each type of dust. Up to now, silicate and graphite grains are available, with a distinction between parallel and perpendicular graphites. Note that electrons are also available, using the same parameters despite not being dust.
-Parallelisation: As it is based on a repetitive Monte-Carlo process, this radiative transfer code is easy to parallelise. It is possible with MontAGN to specify how many processes should be launched for the simulation. Note that currently this mode is only supported for simulations without dust re-emission.
- Maximum number of scatterings: In the case without re-emission, the only way to consider that a packet has finished propagating is its exit from the medium. However, in the case of high optical depth, this could happen after many scatterings, leading to very long simulations with a low packet energy at the end. To avoid this configuration, we limit the number of scatterings a packet can undergo. After this number of interactions, the packet is no longer considered and we switch to the next packet.
-Structures available: When filling the 3D grid with dust densities, some structures already implemented are available: radial and spherical power laws, clouds, shells, constant density cylinders and torus geometries.
A complete list of parameters and keywords, associated to these options or to other functionality is available on section 5.10.
Tools
The MontAGN code contains many routines with very different functions. However, some of them have common points and some techniques are often used. Instead of being described in the corresponding sections, they are introduced here.
Simulation of Random Variables
As a Monte Carlo code, MontAGN is highly dependent on the generation of random numbers. It uses a home-made seed generator created by Guillaume Schworer (gen_seed) to initialise the seed, using both the current time and the thread number. This initialisation is particularly critical in the case of parallelised uses of MontAGN, to ensure that every process has a different initial seed.
In order to generate random numbers, one has to know the cumulative distribution function F of the corresponding random variable X. The cumulative distribution characterises any distribution of a random variable by giving, for any value x of the variable X, the probability for X to be smaller than x:

F(x) = P(X ≤ x).   (5.1)
As a function representing a probability, it is non-decreasing: it starts at 0, its minimum, and increases toward 1 (see figure 5.1 for an example).
The probability density f is also a useful function, defined as the derivative of the cumulative distribution. It represents the probability of obtaining a random number in a certain range around the targeted value. The larger the probability density, the higher the chance of drawing this value of the random variable.
For example, a uniform distribution between 0 and 1 will have:

F(u) = 0 if u ≤ 0,   (5.2)
F(u) = u if 0 ≤ u ≤ 1,   (5.3)
F(u) = 1 if u ≥ 1,   (5.4)
f(u) = 0 if u < 0,   (5.5)
f(u) = 1 if 0 ≤ u ≤ 1,   (5.6)
f(u) = 0 if u > 1.   (5.7)
In order to get a value of a random variable, one simple method is to invert its cumulative distribution function. If F is the cumulative distribution function of a random variable X, then F(X) is a random variable following a uniform distribution U. We can write:
U = F(X)   (5.8)

and therefore:

X = F^{-1}(U).   (5.9)
From a uniformly distributed random variable, we can obtain a random variable following any distribution as long as its cumulative distribution function can be inverted. This is achievable in principle since cumulative distribution functions are monotonic; however, there is not always a simple analytic expression for the inverse function.
Numerically, a uniform distribution is the basis of the random number generator and we therefore use this method to simulate some of the distributions. For distributions that are not easily invertible, we prefer von Neumann's rejection method.
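As an illustration of the inversion method (a standalone example, not MontAGN code), the exponential law used later for optical depths, P(τ_x > τ) = e^{-τ}, has the cumulative distribution F(τ) = 1 - e^{-τ}, whose inverse gives τ = -ln(1 - U), equivalent in law to -ln(U) for a uniform U:

import numpy as np

rng = np.random.default_rng(42)

def sample_exponential(n):
    # Inverse-transform sampling: F(tau) = 1 - exp(-tau)  =>  tau = -ln(1 - U).
    u = rng.random(n)
    return -np.log(1.0 - u)

taus = sample_exponential(100000)
print("sample mean:", taus.mean())   # close to 1, the mean of an exponential of rate 1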
Von Neumann's Rejection Method
This method, first described in von Neumann (1951), consists in finding an envelope g of the function f we would like to use as a random generator, in the cases where f cannot be easily inverted. The envelope must be strictly positive and greater than the function, but the closer it is to the function, the more efficient the method. The envelope needs to be a simple function, so that it can be used as a random number generator instead of the initial function. We have:
∀x, g(x) ≠ 0, f(x) ≤ M g(x).   (5.10)
From the envelope g, we get a value y of the random variable Y (the random variable associated with g), for example using the inversion method described in the previous section. Once y is obtained, we evaluate for this particular value the ratio between the function and the envelope:

(1/M) f(y)/g(y).   (5.11)
We then make a test to verify whether the value is accepted or not: we take a number u randomly between 0 and 1 (uniform distribution U); if u is smaller than the ratio function/envelope at the selected abscissa, the value is accepted; if not, it is rejected and we need to take a new random number from the envelope:

If u ≤ (1/M) f(y)/g(y), then x = y.   (5.12)

This is illustrated in figure 5.2.
The von Neumann method ensures that the probability density of the function is respected, and only requires knowledge of the envelope, of the function, and of the inverse of the cumulative distribution of the envelope. Its efficiency, defined as the ratio of the accepted values to the tested ones, is directly related to the match between the envelope and the function, determined from their relative integrals. For example, in the case illustrated by figure 5.2, the efficiency would be about 50 %.
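A minimal, generic implementation of this rejection scheme is sketched below (illustrative only; the actual MontAGN envelopes are those listed in table 5.1). Here the Rayleigh scattering-angle density, proportional to (1 + cos²α) sin α, is sampled under a constant envelope:

import numpy as np

rng = np.random.default_rng(1)

def rejection_sample(f, g_sampler, g_pdf, m, n):
    # Von Neumann rejection: accept y drawn from g when u <= f(y) / (m * g(y)).
    out = []
    while len(out) < n:
        y = g_sampler()
        if rng.random() <= f(y) / (m * g_pdf(y)):
            out.append(y)
    return np.array(out)

# Target density on [0, pi]: f(a) = (1 + cos^2 a) sin a  (bounded by about 1.09)
f = lambda a: (1.0 + np.cos(a) ** 2) * np.sin(a)
# Constant envelope: g = 1/pi on [0, pi] and m = 2*pi, so that m * g = 2 >= f everywhere
g_sampler = lambda: np.pi * rng.random()
g_pdf = lambda a: 1.0 / np.pi

angles = rejection_sample(f, g_sampler, g_pdf, m=2.0 * np.pi, n=10000)
print("mean scattering angle:", angles.mean())   # close to pi/2 by symmetry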
Initialisation
The first important step in the simulation is the initialisation. It consists of all the operations that need to be executed before entering the main loop of the code, which launches the photon packets. The largest part consists in organising the different variables into the appropriate class objects for the code to be as efficient as possible. There are two critical phases: the first one is the construction of the 3D grid sampling the dust densities; the second one is the pre-computation of all the parameters required for photon propagation. We could compute all these values at each propagation step, but this would take a long time. The solution used in MontAGN, with the lowest possible approximation, is to compute tables of these values for each parameter before the simulation. These tables can then be interpolated when needed, an operation much faster than carrying out the full calculations.
Density Grid
MontAGN uses dust particle densities to fill its grid. The grid is composed of cubic cells, and the vector in each cell has as many values as the number of dust species plus two. These additional values are the temperature of the cell and the number of packets absorbed within the cell. Currently, the grid has six values stored in each cell, with the four available species densities (in particles per cubic metre) corresponding to silicates, ortho and para graphites, and electrons. Despite electrons not being dust, we process them in the same way, as explained in section 3.2.2. This number of species can easily be increased if required.
The filling with the proper dust densities is achieved through regularly sampled density functions. Several functions are pre-existing, from constant densities to complex power-law combinations (see the list in section 5.1.2). It is possible to combine two or more functions to construct complex structures, which is an important advantage of this method. The counterpart is that one needs a good sampling rate to have a fair representation of the dust structures. This often requires a grid with many cells, which takes space in the computer's memory.
Another geometrical aspect is the concept of sublimation radii in MontAGN. In order to represent the ionisation sphere surrounding high luminosity objects, MontAGN allows a sublimation radius to be set for each dust species. These radii are adjusted whenever the temperature is updated, to match the temperature increase. This guarantees a good physical representation of the path that photons can follow without interaction because of sublimation. In terms of the algorithm, any cell located inside the sublimation radius of a given dust species is considered as empty of that species (if the radius actually crosses the cell, only the inner part is treated this way).
The last geometrical feature is the ionisation cone. We can define such cones in the case of objects with polar jets. In these regions, all packets propagate directly to the boundary of the cone, which allows faster simulations. However, in the majority of our simulations, we prefer to use instead a cone constituted of electrons, which is more physically coherent.
Pre-computed Elements
Before the simulation starts, tables of albedo, phase function, Mueller matrix, absorption and extinction coefficients are generated for a range of grain sizes and wavelengths thanks to Mie theory. For that purpose, we use the bh_mie module initially written by Bohren & Huffman (1983), combined with grain data from Draine (1985).
A detailed description of the role of each of these elements is given in section 3.2.1. These tables then just need to be interpolated to get the precise value for given values of grain size and wavelength during the simulation. This interpolation makes the execution of the code faster while remaining a fair approximation.
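The principle of tabulating the optical properties once and interpolating during the run can be illustrated with a toy example; the analytic law below simply stands in for the Mie-derived Q_ext values and is not real dust data:

import numpy as np

# Pre-computed at initialisation: a coarse table of an optical property versus wavelength.
wavelengths = np.logspace(-1, 1, 50)           # 0.1 to 10 microns
q_ext_table = 1.0 / (1.0 + wavelengths ** 2)   # arbitrary smooth law, NOT real dust data

def q_ext(lam):
    # During the run, interpolate the table instead of recomputing Mie theory.
    return np.interp(lam, wavelengths, q_ext_table)

print(q_ext(1.6), q_ext(2.2))   # approximate values at H- and K-band wavelengths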
Emission
At emission, the source is selected according to the respective luminosities of the sources. A tag in the packet parameters allows us, if needed, to distinguish which source it comes from. Note that even with a high temperature in the grid, a dust cell will never be considered by the code as a source, because its emission is fully integrated in the re-emission process described in section 5.5.2. The inclusion of these dust regions as sources in the other mode of the code, without re-emission, is an interesting point for future implementations.
For photon propagation, MontAGN uses the Stokes vector formalism to represent the polarisation of photon packets:

S = (I, Q, U, V)^T,   (5.13)

with I the intensity, Q and U the two components of linear polarisation and V the circular polarisation (see section 3.2.1 for more details). In MontAGN we normalise the Stokes vector (I = 1) to simplify its propagation.
The packet's wavelength is determined using the source's spectral energy distribution, and the packet is initially not polarised (i.e. its Stokes vector is set to [1, 0, 0, 0]). The packet starts to propagate in a direction p with a polarisation reference direction u, both randomly picked. It would technically be possible to change these parameters, but this is not yet available as an input parameter. However, we can manually impose a particular polarisation at emission, or a particular direction-of-propagation function. This has been done in MontAGN for some tests, presented in section 5.8, and should be implemented soon.
Photons Propagation
Packet propagation mainly consists in computing two distances. The algorithm estimates the distance to the next crossing of a wall of the current cell (d_wall) and the distance that the packet will be able to travel without interaction (d_τ) for a given optical depth τ at the packet's wavelength λ.
In these chapters, we describe dust and electron structures in terms of their optical depth. Note that this quantity is close to, but different from, the extinction A_λ, which is formally defined as:
A_λ = -2.5 log_10(F_obs,λ / F_no ext,λ).   (5.14)
There is therefore a factor 2.5/ln(10) between optical depth and extinction:

A_λ = (2.5 / ln(10)) τ_λ ≈ 1.0857 τ_λ.   (5.15)
What matters for the radiative transfer of photons through dust clouds is not the distance travelled but the optical depth of the medium crossed. It is a way to express the distance in units of "probability of interaction". We can compute the optical depth τ from a distance d by integrating, over the photon path, the number density of particles n times their attenuation cross-section σ_at:
τ = ∫_0^d σ_at n dl.   (5.16)

In the case of a dust MRN distribution (with exponent α), the optical depth can be expressed as a function of the extinction coefficient Q_ext, the geometrical cross-section σ_cc and the grain radius a as:
τ = ∫_0^d [ ∫_{a_min}^{a_max} n Q_ext σ_cc a^α da / ∫_{a_min}^{a_max} a^α da ] dl = ∫_0^d [ ∫_{a_min}^{a_max} n Q_ext π a² a^α da / ∫_{a_min}^{a_max} a^α da ] dl.   (5.17)
As the optical depth is directly linked to the interaction probability, we first determine an optical depth τ that the packet will be allowed to travel through. This value is computed at the first encounter of the packet with a non-empty cell and decreases with the travelled distance as long as τ > 0. We derive τ from a random number U, uniformly distributed between 0 and 1, as follows:

τ = -log(U).   (5.18)
Because we use constant dust properties in a cell, the integration of the optical depth from equation 5.17 over the packet path is trivial, and the corresponding distance can be computed in each crossed cell, with number densities n_i of the dust species i, using:

d_τ(λ) = τ Σ_i [ ∫_{a_min,i}^{a_max,i} a_i^α da_i / ∫_{a_min,i}^{a_max,i} n_i Q_ext,i π a_i² a_i^α da_i ].   (5.19)
If the packet is located at a radius smaller than the sublimation radius, or if it is in the funnel, the cells will be considered empty as explained in section 5.3.1.
If the distance to the cell border is the shortest, the packet is moved to this location and the value of τ is decreased according to the distance and the densities of the different dust species:
τ = τ - d_wall Σ_i [ ∫_{a_min,i}^{a_max,i} n_i Q_ext,i π a_i² a_i^α da_i / ∫_{a_min,i}^{a_max,i} a_i^α da_i ].   (5.20)
If d_τ is the shortest, an interaction happens with a grain of the cell. Note that Robitaille (2011) uses the same strategy in the Hyperion code. The grain type is first determined randomly, with a probability proportional to the value of ∫ n_i Q_ext,i(λ, a_i) π a_i² da_i. The radius of the grain is obtained using the dust grain size distribution, and from this radius and the wavelength the albedo is determined by interpolation. Two cases need to be distinguished here, depending on whether thermal re-emission is enabled or not.
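The bookkeeping of equations 5.18 to 5.20 can be summarised by the following simplified sketch (single homogeneous cell, made-up numbers; not the MontAGN implementation):

import numpy as np

def propagation_step(tau, d_wall, attenuation):
    # Compare the distance to the next cell wall with the distance allowed by the
    # remaining optical depth; 'attenuation' is the optical depth per unit length of the cell.
    d_tau = tau / attenuation
    if d_tau < d_wall:
        return "interaction", d_tau, 0.0                      # interact inside the cell
    return "cell exit", d_wall, tau - d_wall * attenuation    # cross the wall, decrease tau

rng = np.random.default_rng(3)
tau = -np.log(rng.random())                                   # equation 5.18
event, distance, tau_left = propagation_step(tau, d_wall=0.4, attenuation=2.0)
print(event, distance, tau_left)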
Interaction: Scattering without Re-emission
Without re-emission, an interaction is always a scattering. The phase functions of the scattering, the angles themselves and the Mueller matrix are determined by interpolation from the grain radius and the wavelength, as described in section 3.2.3. This way of proceeding is slow compared, for instance, to using a single function averaged over the grain size distribution. However, it is more realistic and, despite this disadvantage, the simulations still have an acceptable duration for our studies.
In order to obtain the scattering angles from the phase functions, we need to invert them to be able to generate random numbers. This is fairly easy in the case of Rayleigh scattering, therefore for x ≪ 1. However, this is not possible in the other cases, and we use for this purpose the rejection method of von Neumann, described in section 5.2.2. Figure 5.5 gives examples of Mie phase functions and their envelopes used in MontAGN. The normalised phase function is the ratio of the phase function to the envelope and corresponds to the acceptance test of the random variable. The closer this function is to 1, the more efficient the method is.
Envelopes need to be adapted to the phase functions. As the latter are highly dependent on the form factor, we need to modify the envelope accordingly. For low values of the form factor, the phase function is very close to the Rayleigh phase function and we can therefore use this one as an envelope (with an efficiency close to 100 %). For intermediate form factors, we use a third-degree polynomial fitted to the phase function, which becomes asymmetric with a higher probability of forward scattering. At high values of the form factor, we prefer a Henyey-Greenstein (H-G) function, a good representation of the main forward lobe of the phase function. Because of the difficulty of reproducing all the secondary lobes, the efficiency decreases for high form factors (see section 5.8).
Table 5.1 - Envelope phase functions used in MontAGN simulations

  Envelope function         | Form factor
  Rayleigh phase function   | x < 0.1
  Polynomial of degree 3    | 0.1 < x < 2.5
  H-G function              | x > 2.5
In order to update the polarisation properties, we apply to the Stokes vector the Mueller matrix and the rotation matrix:

S_final = M × R × S_init,   (5.21)

with M the Mueller matrix constructed from Mie theory and R the rotation matrix, both depending on the scattering angles. A complete description of these matrices, of the phase functions and of the Stokes formalism can be found in Chapter 3. We also obtain the new direction of propagation p as well as a new polarisation reference u.
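As an illustration of equation 5.21, the following snippet applies a rotation matrix and a Rayleigh-type Mueller matrix to an unpolarised Stokes vector; this analytic matrix is a simplified stand-in for the Mie-derived, tabulated matrices actually used by the code:

import numpy as np

def rotation_matrix(beta):
    # Rotation of the Stokes reference frame by an angle beta.
    c, s = np.cos(2 * beta), np.sin(2 * beta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def rayleigh_mueller(alpha):
    # Mueller matrix of Rayleigh scattering at angle alpha (unnormalised).
    c = np.cos(alpha)
    return 0.75 * np.array([[1 + c ** 2, c ** 2 - 1, 0, 0],
                            [c ** 2 - 1, 1 + c ** 2, 0, 0],
                            [0, 0, 2 * c, 0],
                            [0, 0, 0, 2 * c]])

s_init = np.array([1.0, 0.0, 0.0, 0.0])                       # unpolarised packet
s_final = rayleigh_mueller(np.pi / 2).dot(rotation_matrix(0.3)).dot(s_init)
s_final = s_final / s_final[0]                                # renormalise to I = 1
print(s_final)   # a 90 degree scattering fully polarises an unpolarised packet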
We need to take into account that not all the photons of the packet have been scattered during this interaction. Because of the albedo, photons have a probability

P(abs) = 1 - albedo   (5.22)

of being absorbed. The fraction of scattered photons will therefore be proportional to the albedo. The energy of the packet, also proportional to the number of photons in the packet, is weighted by the albedo:

E_exit = albedo × E_incoming.   (5.23)
A new optical depth is randomly determined after each scattering, as explained before, and the propagation continues.
Interaction: Temperature Update and Re-emission
If re-emission is enabled, we first need to check whether the packet is absorbed or not. For that purpose, we take a random number U uniformly between 0 and 1: if U > albedo, the packet is absorbed, otherwise it is scattered. If scattered, the process is exactly the same as described in the previous section 5.5.1, but without the albedo weighting of the energy (which therefore always keeps the same constant value). If absorbed, the packet contributes to the cell temperature as follows.
This temperature update algorithm is described for instance by Lucy (1999) and Bjorkman & Wood (2001). It consists in balancing the received and emitted energy of each cell, therefore assuming Local Thermodynamic Equilibrium (LTE). The incoming energy is simply, for each cell (at position (i,j,k)), the number of packets absorbed by the cell multiplied by the energy of each packet (a constant for every packet in this mode):

E_in^{i,j,k} = N_{i,j,k} × E_packet.   (5.24)
N_{i,j,k} is here the number of absorbed packets in the cell (i,j,k) and E_packet is the energy of the packets.
The emitted energy depends on the thermal emissivity of the dust, usually defined as:

j_ν = κ_ν ρ B_ν(T),   (5.25)

where B_ν(T) is the Planck emission of a black body at temperature T per unit of frequency, κ_ν the dust absorptive opacity and ρ the density of dust.
We use here an alternative definition, replacing κ_ν by its value in terms of Q_abs and of the geometrical cross-section σ_cc: κ_ν = Q_abs σ_cc n / ρ, with n the number density. We get, as a function of wavelength:

j_λ = Q_abs(λ) σ_cc n B_λ(T),   (5.26)

with B_λ(T) the Planck emission of a black body at temperature T per unit of wavelength. The emitted energy is, for a cell of volume dV_{i,j,k} during a time Δt:

E_em^{i,j,k} = 4π Δt dV_{i,j,k} ∫ Q_abs(λ) σ_cc n B_λ(T) dλ.   (5.27)
Note that we can also write these two expressions in a different way, as the energy of a packet can be expressed as a function of the source luminosity L and of the total number of emitted packets N_tot as:

E_packet = L Δt / N_tot.   (5.28)
Because we use constant densities and temperature in each cell, the first integral becomes a simple multiplication by the volume of the cell. The integral over λ can then be integrated numerically and we only need to solve E_in^{i,j,k} - E_em^{i,j,k} = 0:

N_{i,j,k} E_packet - 4π V_{i,j,k} σ_cc n_{i,j,k} Δt ∫ Q_abs(λ) B_λ(T') dλ = 0.   (5.29)

By solving equation 5.29 numerically for T', we get the new temperature T' of the cell after each packet absorption. Note that as the number of absorptions N_{i,j,k} increases, the Planck emission function increases as well. This implies that the new temperature always increases within a cell (the Planck emission is a strictly monotonically increasing function of T). Now that we have computed the new temperature of the cell, we need to correct the emission function of the cell. Of course, we adjust the dust emissivity to the new temperature:
j'_ν = κ_ν B_ν(T').   (5.30)
However, all the previous packets have been emitted with different successive emissivities. If we want a correct SED at the end of the simulation, we also need to correct for these offsets in the wavelength selection of re-emitted packets. This can be done by selecting the new wavelength based on the difference of emissivities and not on the new emissivity. We therefore use as emissivity:
Δj_ν = j'_ν - j_ν = κ_ν (B_ν(T') - B_ν(T)).   (5.31)

Remember that, as the temperature increases, this difference is always positive. It corresponds to the shaded region of figure 5.6. We can now continue the propagation with the new wavelength of the packet determined from this emissivity difference. The direction of propagation and the polarisation reference are reset as for the initial unpolarised emission, and a new optical depth is determined as previously. We also flag the packet to indicate that it has undergone a re-emission.
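A strongly simplified sketch of this temperature update is given below for a single grey grain species (frequency-independent Q_abs = 1), for which the wavelength integral of the Planck function reduces to the Stefan-Boltzmann law; the numbers are arbitrary and the real code solves equation 5.29 with the tabulated, wavelength-dependent Q_abs instead:

import numpy as np

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def new_cell_temperature(n_abs, e_packet, n_density, sigma_cc, volume, dt):
    # Grey-dust version of equation 5.29: the absorbed energy N * E_packet balances
    # 4 pi V n sigma_cc dt * integral(B_lambda(T) dlambda) = 4 V n sigma_cc dt sigma_SB T^4.
    absorbed = n_abs * e_packet
    emitted_per_t4 = 4.0 * volume * n_density * sigma_cc * dt * SIGMA_SB
    return (absorbed / emitted_per_t4) ** 0.25

t_new = new_cell_temperature(n_abs=120, e_packet=1e20, n_density=1e-6,
                             sigma_cc=np.pi * (0.1e-6) ** 2, volume=(3.0e14) ** 3, dt=1.0)
print("new cell temperature [K]:", t_new)
# The new packet wavelength would then be drawn from the difference spectrum
# kappa_nu * (B_nu(T_new) - B_nu(T_old)) of equation 5.31.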
Output and Recording
When the photon packet exits the simulation box, it is recorded with most of its parameters, whose list is given in table 5.2, including its final direction of propagation.

(Figure 5.6 caption: the spectrum of the previously emitted packets is given by the emissivity at the old cell temperature (bottom curve); to correct the spectrum from the old temperature to the new temperature (upper curve), the photon packet should be re-emitted using the difference spectrum (shaded area); from Bjorkman & Wood 2001.)

If the model is axisymmetric, MontAGN allows the signal obtained from simulations to be increased in two ways. First, in the case of cylindrical symmetry, all the photons exiting the simulation with the same inclination angle θ but different azimuthal angles φ can be considered as equivalent. We can therefore extract photons (within a θ range if selected) for all azimuthal angles φ and then apply a rotation around the vertical axis to the position of the last interaction of the packet.
If the model is also symmetric with respect to the equatorial plane, photons recorded above the equatorial plane are equivalent to those below it. At the end, we can add the photons of the four quadrants (up-left, up-right, bottom-left and bottom-right) as long as we correctly change their polarisation properties according to the symmetry. It is however mandatory to observe from an inclination angle of 90° to use this last method.
Note that it is possible to limit the number of absorptions a photon packet undergoes, in order to keep only the most energetic packets and have faster simulations. As this information is kept, it is also possible to access maps of the averaged number of interactions undergone in a particular direction.
Summing Packets
As we record packets with different numbers of photons, we need to take these differences into account when creating the observed maps. The maps referred to as "averaged" (for example the averaged number of scatterings), as well as the Q and U maps, include a process correcting for the relative fraction of photons in the packets: in every pixel, all packets i of a given quantity are summed with a factor α_i proportional to the fraction of photons in the given packet. The averaged number of scatterings n̄ is therefore computed from the number of scatterings n_i of each packet as:

n̄ = Σ_i α_i n_i / Σ_i α_i.   (5.38)

Because this factor is strictly proportional to the energy of the packet, we directly use the recorded energy of the packets to weight the summed quantities.
Note that this difference in photon number should not be considered when comparing different wavelengths. For the same energy in two packets at different wavelengths, the longer the wavelength, the greater the number of photons. However, as the number of emitted packets versus wavelength follows the source's SED, the packets are the quantities to consider for establishing the final SED (in this case, the packets are equivalent to single photons).
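The per-pixel weighting of equation 5.38 amounts to an energy-weighted average; a toy numpy version (not the MontAGN map-making code) is:

import numpy as np

# Packets recorded in a single pixel: number of scatterings and remaining energy
# (the energy is proportional to the fraction of photons left in each packet).
n_scatterings = np.array([1, 2, 5, 1])
energies = np.array([0.6, 0.36, 0.08, 0.6])

# Equation 5.38: energy-weighted averaged number of scatterings in the pixel.
n_mean = np.sum(energies * n_scatterings) / np.sum(energies)
print("averaged number of scatterings:", n_mean)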
Packets Significance
One major concern in the simulation strategy without re-emission is that photons are conserved until they exit the simulation box, whatever their number of interactions. We must verify to what extent the detected photons are relevant. As explained before, the energy of each packet, representing the number of photons in the packet, is decreased according to the albedo at each interaction. It is therefore common at the end to record packets of photons with energies 10^4 times lower than the initial energy. However, if a high energy packet reaches a pixel previously populated only with low energy ones, it will totally dominate the pixel, so that the Q and U values and the polarisation parameters will be essentially linked to this packet. Indeed, most of the photons in the pixel will belong to this packet.
As opposed to simulations including absorption, where all the recorded photons have the same probability, this is no longer the case when re-emission is disabled. A pixel of an output image could have stacked many low energy packets before a high energy one, more representative of the actual polarisation, reaches it. But it is also possible that no high energy packet reaches this pixel in the course of the simulation, leading to a wrong estimate, since in the actual case the number of photons emitted by the source is enormous (> 10^56 s^-1) and several will escape after very few scatterings.
To analyse our results, we first need to be able to disentangle photons representative of the actual polarisation from those which are not. Limiting the number of scatterings is a good way to ensure that, below a given limit on the number of photons, packets are no longer recorded. We also analysed maps of the "effective number" of packets per pixel, as shown in figure 5.8. These maps are generated by dividing the energy received in each pixel by the highest packet energy recorded in that pixel.
If a single packet dominates the pixel's information, the effective number of packets in the pixel will be close to 1, indicating a non-reliable pixel. As the energy of packets decreases with the number of scatterings, regions with a low number of scatterings are more likely to be reliable. All regions of singly scattered photons, with the exception of the central pixels, are reliable as long as photons cannot reach the pixels of this region directly without being scattered. These regions correspond to the "North and South" regions of the images in figure 5.9.
For the same reasons, all the regions with photons scattered twice are very likely to be representative, because only singly scattered packets have higher energies. For example, the central belt region in figure 5.9 is very unlikely to receive singly scattered photons because of the high optical depth of the torus (around 20 at this wavelength). Therefore, despite having a fairly low effective number of packets, this region is also quite reliable in terms of intensity and polarisation.
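The "effective number of packets" diagnostic described above amounts to dividing the total energy received in a pixel by the largest single-packet energy; a schematic version (illustrative values only) is:

import numpy as np

def effective_number(pixel_energies):
    # Energy received in the pixel divided by the highest packet energy:
    # values close to 1 indicate a pixel dominated by a single packet, hence unreliable.
    pixel_energies = np.asarray(pixel_energies, dtype=float)
    return pixel_energies.sum() / pixel_energies.max()

print(effective_number([0.9, 1e-3, 2e-3]))       # about 1: one packet dominates
print(effective_number([0.3, 0.25, 0.2, 0.28]))  # about 3.4: several comparable packets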
Validity Tests
Before using the code to simulate high angular resolution images of AGN and interpreting the observations with these simulations, we first conducted tests to verify all the different sub-parts of the code. All tests concerning the temperature update are still under investigation. We have not yet used these capabilities of the code to interpret any data, and we therefore concentrated on the pure scattering/polarisation capacities directly involved in the NGC 1068 SPHERE data analysis (Chapter 6).
Phase Function
We conducted some tests on the phase functions. We first displayed the averaged phase function of the MontAGN code by naively adding all the different scattering angles given by the scattering routine (for the classical MRN distribution of silicates, with photons in the visible/NIR leading to form factors below 0.3). This distribution function has been weighted by a factor sin(α) to correct for the solid angle effect, and is displayed in figure 5.10. We get, as expected, a function very close to the Rayleigh one. We also measured the phase functions resulting from different form factors, as displayed in figure 5.11. Simulations were achieved by launching 10^5 packets for each form factor, with a grain radius of 100 nm.
We finally measured the efficiency of the von Neumann rejection method for a range of form factors using 10^5 packets, and compared it to the measurements of Didier Pelat with his simulation of Mie scattering using the same rejection method. These results correspond to the four middle columns of table 5.3 and were obtained using the dielectric characteristics of water; they therefore slightly differ from MontAGN's silicate ones.
MontAGN stops generating phase functions at form factors larger than about 2 for grains of radius 10 nm, and at x = 20 for grains of radius 100 nm, because of the limits of the wavelength domain. Table 5.3 shows good agreement up to x = 1, while beyond that the efficiencies diverge. This is likely due to the difference in dielectric constants, which changes the phase functions and therefore the corresponding envelope efficiency.
Optical Depth
Because of the complexity of the multigrain framework in the estimation of the optical depth, we verified the behaviour of each grain type in terms of optical depth, both separately and in a mixture. The latter is composed of 25 % silicates, 25 % ortho graphites, 25 % para graphites and 25 % electrons. In order to conduct the tests, we used a cylinder whose radius is set to 60 AU (Astronomical Units). Its height is 20 AU and its density is computed to give particular values of the optical depth at 1.6 µm along this height, using equation 5.19. The initial direction of propagation of the photons was set along the symmetry axis of the cylinder, so that they encounter the selected optical depth. It is therefore easy to predict the number of packets N_out that should exit without interaction out of N_tot = 100,000 launched packets:
N_out = N_tot × P_escape(τ) = N_tot e^{-τ}.   (5.39)
Results are displayed in table 5.4, which indicates a good agreement between the measured and theoretical escaped fractions of packets, with an error typically below 1 %.
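Equation 5.39 can be checked with a few lines of Monte Carlo, comparing the fraction of packets whose drawn optical depth exceeds the column against e^{-τ}; this is a standalone check, not the actual MontAGN test setup:

import numpy as np

rng = np.random.default_rng(7)
n_tot = 100000
for tau in (0.5, 1.0, 2.0, 5.0):
    escaped = np.sum(-np.log(rng.random(n_tot)) > tau)   # packets whose drawn tau exceeds the column
    print(tau, escaped / n_tot, np.exp(-tau))            # measured versus theoretical escape fraction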
Temperature
As this thesis work does not include studies of the dust temperature in AGN or SSC environments, by lack of time we only conducted a few tests on the temperature functionalities of MontAGN. Many tests concerning the temperature are currently ongoing; however, the multiple grain populations complicate the algorithm and this functionality is not yet available.
Angle Corrections and Polarisation Propagation
One major concern in our studies is the orientation of the polarisation vectors. We therefore need to ensure that our simulations correctly reproduce the changes of polarisation through the scatterings, and that we apply the right rotation matrix correction at the end to translate the polarisation into the observer's frame.
Analysing multiple scattering is challenging because it is hardly achievable analytically. This is one of the major motivations for developing MontAGN to simulate polaro-images of AGNs. For validation tests, we consider simpler cases, mainly with one scattering.
One typical case is a central source surrounded by a dust shell of low optical depth. This ensures a low probability of more than one scattering per packet, leading to more predictable polarimetric patterns. We should indeed obtain a perfectly centro-symmetric polarisation, as shown in the maps of section 3.3. This is a well known feature, studied by Fischer et al. (1996) and Whitney & Hartmann (1993), observed for instance by the HST (Capetti et al. 1995) and reproduced by many simulations such as those of Murakawa (2010) and Marin et al. (2012).
We first try to obtain a similar pattern from the simulation of a simple dust structure. We consider a star at the centre of a cocoon of radial optical depth τ_V as our source and run a simulation with packets at 2.2 µm. The optical depth is low enough to obtain mainly direct or singly scattered photons. The resulting maps are shown in figure 5.14.
We obtained results in excellent agreement with the expected signal of a centro-symmetric polarisation. This is a good indication that the polarisation propagation behaves according to the physics. Furthermore, it also indicates that the Q-U orientation correction using the rotation matrix works correctly (see section 5.6).
To verify the evolution of polarisation, we can also look at the local changes of the packet properties during scattering events. Figure 5.15, for example, represents the polarisation degree of the packet just after being scattered, as a function of the scattering angle α. As expected, the polarisation is maximum at 90° and null for perfect forward and backward scattering.
Similarly, we can plot the polarisation degree after more than one scattering. In these cases, however, the incident light is already polarised after the first scattering and we therefore expect a more complex function, as displayed in figure 5.16. Of course, a scattering at 90° still fully polarises the packets; however, we see a globally increasing polarisation degree at smaller angles as the number of scatterings increases. This is due to the polarisation already present in the incoming light after the first scattering in the case of a second scattering, and after all the previous scatterings in the other cases.
As explained in section 3.2.3, the conditional probability density function of β depends on α and on the incoming polarisation, and we can verify that the β distributions follow these rules. We measured β distributions during a MontAGN simulation, with imposed α and incoming Q and U. Some results are shown in figure 5.17, which illustrates the effect of both the angle α and the polarisation on the β distribution. We can verify the rotation of the β distribution induced by the polarisation. The value α = π/2, for which the polarisation is maximum, seems to collimate the values of β toward precise values (depending on the incoming Q and U).
A final test of the MontAGN simulation was to create a map of a target seen from an inclined line of sight. For that purpose, we use a rather simple AGN model, constituted of a torus with a decreasing density jump beyond 10 pc and of an ionisation cone. This model will be detailed in Chapter 6. We selected an inclination such that the line of sight crosses the outer part of the torus. We can see on this image the direct light from the centre, half hidden by the denser part of the torus, situated above the source. The two ionisation cones are clearly visible, above and under the source, as well as the inner border of the torus, illuminated by the central source. Although the line of sight crosses the outer torus, we can see photons from the hidden ionisation cone, because of the low optical depth of this external region of the torus. As this image is consistent with the expectations, we expect reliable results in other situations with a more complex structure.
First Results
As a first experiment using MontAGN, we decided to consider structures, each with a constant dust density, to represent a simple AGN model. In this way, the analysis of the results is easier, which is critical when using a newborn code. Furthermore, the detection of errors is easier. This allowed us to obtain some first results, guiding our research toward the more complex models of Chapter 6, but also to improve the code through comparison with results obtained with the same geometry using the STOKES radiative transfer code. Because the two codes were not designed for the same wavelength studies, we selected an intermediate wavelength range to conduct this pre-work: 800 nm to 1 µm.
This need for a comparison to validate our code was the starting point of our collaboration with René Goosmann and Frédéric Marin from the high energy team of the Observatoire Astronomique de Strasbourg, developers of STOKES. Indeed, the first comparison allowed us to detect some bugs and errors in the MontAGN code, as well as the need to update the display function of STOKES. These first steps were the topic of our contributions to the SF2A 2016 conference. The two proceedings that arose from this conference appear on the next pages, summarising the work achieved mainly by Frédéric Marin and myself, from determining the initial model properties to the first analysis of the outcome.
Introduction
Polarimetry is a powerful tool as it gives access to more information than spectroscopy or imaging alone, especially about scattering. In particular, indications on the geometry of the distribution of scatterers, the orientation of the magnetic field or the physical conditions can be revealed thanks to two additional parameters: the polarisation degree and the polarisation position angle. Polarimetry can put constraints on the properties of scatterers, for example spherical grains versus oblate grains (Lopez-Rodriguez et al. 2015), and therefore constrain the magnetic field orientation and the optical depth of the medium. The downside is that the analysis of polarimetric data is not straightforward. The use of numerical simulations, and especially radiative transfer codes, is a strong help to understand such data (see for instance Bastien & Menard 1990; Murakawa et al. 2010; Goosmann & Matt 2011). It allows us to assess and verify interpretations by producing polarisation spectra/maps for a given structure, which can then be compared to observations. STOKES and MontAGN are two numerical simulations of radiative transfer, both using a Monte Carlo method, built to study polarised light travelling through dusty environments (whether stellar or galactic). In both cases, one of the main goals in developing such codes was to investigate the polarisation in discs or tori around the central engine of AGN. While STOKES was designed to work at high energies, from the near infrared (NIR) to X-rays, MontAGN is optimised for longer wavelengths, typically above 1 µm. Therefore they cover a large spectral range with a common band around 0.8-1 µm. The two approaches are quite different since STOKES is a geometry-based code using defined constant dust (or electron, atom, ion ...) three-dimensional structures, while MontAGN uses a Cartesian 3D grid sampling describing dust densities.
In this first research note, we present our first comparison between the two codes. We opted for a common toy model that we implemented in the two simulation tools in order to produce polarisation maps to be compared with each other. The second proceedings of this series of two will focus on the results of the codes when applied to a toy model of NGC 1068.

STOKES

STOKES was first presented in Goosmann & Gaskell (2007) and Goosmann et al. (2007). The code was continuously upgraded to include an imaging routine, a more accurate random number generator and fragmentation (Marin et al. 2012, 2015), until eventually pushing the simulation tool to the X-ray domain (Goosmann & Matt 2011; Marin et al. 2016). STOKES is a radiative transfer code using Mueller matrices and Stokes vectors to propagate the polarisation information through emission, absorption and scattering. Photons are launched from a source (or a set of sources) and then propagate in the medium until they are eventually absorbed or exit the simulation sphere. The optical depth is computed based on the geometry given as an input. At each encounter with a scatterer, the photon's absorption is randomly determined from the corresponding albedo; if it is absorbed, another photon is launched from the central source. In the scattering case, the new direction of propagation is determined using the phase functions of the scatterer and the Stokes parameters are modified according to the deviation. For a detailed description of the code, see the papers of the series (Goosmann & Gaskell 2007; Marin et al. 2012, 2015).
MontAGN
Following the high angular resolution polarimetric observation of NGC 1068 conducted by Gratadour et al. (2015), MontAGN (acronym for "Monte Carlo for Active Galactic Nuclei") was developed to study whether our assumptions on the torus geometry were able to reproduce the observed polarisation pattern through simulations in the NIR. MontAGN has many common points with STOKES. Since the two codes were not designed for the same purpose, the main differences originate from the effects that need to be included in the two wavelength domains, which differ between the infrared and the shorter wavelengths. STOKES includes Thomson scattering, not available in MontAGN, while MontAGN takes into account the re-emission by dust as well as the temperature equilibrium adjustment at each absorption to keep the cell temperatures up to date, a feature not present in STOKES.
In MontAGN, photons are launched in the form of frequency-independent photon packets. If absorption is enabled, when a photon packet is absorbed it is immediately re-emitted at another wavelength, depending on the dust temperature in the cell. The cell temperature is changed to take this incoming energy into account. The re-emission depends on the difference between the new temperature of the cell and the old one, to correct the previous photon emissions of the cell at the former temperature (following Bjorkman & Wood 2001). If re-emission is disabled, all photon packets are just scattered, but we apply the dust albedo as a factor to the energy of the packet in order to keep only the non-absorbed fraction of photons (see Murakawa et al. 2010). This allows us to get much better statistics at the end of the simulation as every photon is taken into account. But it also requires many photons in each pixel at the end, as a pixel may collect only photons with a weak probability of existence, a situation that is not representative of the actual pixel polarisation.
Simulation
We set up a model of dust distribution compatible with the two codes. At the centre of the model, a central, isotropic, point-like source emits unpolarised photons at a fixed wavelength (0.8, 0.9 and 1 µm; only images at 0.9 µm are shown in this publication). Around the central engine is a flared dusty disk with a radius ranging from 0.05 pc to 10 pc. It is filled with silicate grains and has an optical depth in the V-band of about 50 along the equatorial plane (see Fig. 1). Along the polar direction, a bi-conical, ionised wind with a 25° half-opening angle with respect to the polar axis flows from the central source up to 25 pc. The wind is filled with electrons in STOKES and with silicate grains at much lower density in MontAGN. The conical winds are optically thin (τ_V = 0.1). In another model, we added to these structures a cocoon of silicate grains surrounding the torus, from 10 pc to 25 pc, outside the wind region, to account for a simplified interstellar medium. See the second proceedings of this series (Marin, Grosset et al., hereafter Paper II) for more information about the models. Re-emission was disabled in MontAGN for these simulations. With more than 5×10^6 photons sampled, we obtain for both models an overall good agreement between the two codes. In the polar outflow region, the similarities are high between the two codes, revealing high polarisation degrees (close to 100 %) despite the differences in composition (see Fig. 2). This is expected from singly scattered light at an angle close to 90° (see Bastien & Menard 1990), which is confirmed by the maps of the averaged number of scatterings (see Fig. 3, right). However, in the central region, where the torus is blocking the observer's line of sight, the results of MontAGN and STOKES slightly differ (see the equatorial detection of polarisation at large distances from the centre in Fig. 2, right). We interpret this polarisation as arising from the differences in the absorption method between the two codes. Because in MontAGN all photons exit the simulation box, we always get some signal even if it may not be representative of the photons reaching this particular pixel. If only photons with low probability, i.e. with low photon packet energy after multiple scatterings, are collected inside a pixel, the polarisation parameters reconstructed from these photons will not be reliable. This is why we need to collect an important number of photons per pixel.
Otherwise, the polarisation structure revealed by polaro-imaging is very similar between the two codes and leads to distinctive geometrical highlights that will be discussed in the second research note of this series (Paper II).
Introduction
Understanding the role, morphology, composition and history of each AGN component is a non-trivial goal that requires a strong synergy between all observational techniques. The role of polarimetric observations was highlighted in the 80s thanks to the discovery of broad Balmer lines and Fe ii emission sharing a very similar polarization degree and position angle with the continuum polarization in NGC 1068, a type-2 AGN (Antonucci & Miller 1985). The resemblance of the polarized flux spectrum to the flux spectrum of typical Seyfert-1s led to the idea that Seyfert galaxies are all the same, at zeroth order (Antonucci 1993). Observational differences would arise from a different orientation of the nuclei between pole-on and edge-on objects; this is due to the presence of an obscuring dusty region situated along the equatorial plane of the AGN that blocks the direct radiation from the central engine for observers looking through the optically thick circumnuclear material. This is the concept of the so-called "dusty torus", first conceived by Rowan-Robinson (1977) and later confirmed by Antonucci & Miller (1985).
Since then, a direct confirmation of the presence and structure of this dusty torus has been an important objective for the AGN community. The closest evidence for a dusty torus around the central core of NGC 1068 was first obtained by Jaffe et al. (2004) and Wittkowski et al. (2004), using mid-infrared (MIR) and near-infrared (NIR) interferometric instruments coupled to the European Southern Observatory's (ESO's) Very Large Telescope Interferometer (VLTI). Jaffe et al. (2004) were able to spatially resolve the MIR emission from the dusty structure and revealed that 320 K dust grains are confined in a 2.1 × 3.4 pc region, surrounding a smaller hot structure. The NIR data obtained by Wittkowski et al. (2004) confirm the presence of this region, with the NIR fluxes arising from scales smaller than 0.4 pc. Since then, long-baseline interferometry became a tool used to explore the innermost AGN dusty structure extensively at high angular resolution (typically of the order of milli-arcsec, see e.g., Kishimoto et al. 2009, 2011).
Coupling adaptive-optics-assisted polarimetry and high angular resolution observations in the infrared band, Gratadour et al. (2015) exploited the best of the two aforementioned techniques to obtain strong evidence for an extended nuclear torus at the center of NGC 1068. Similarly to previous optical (Capetti et al. 1995) and infrared (Packham et al. 1997;Lumsden et al. 1997) polarimetric observations, Gratadour et al. (2015) revealed an hourglass-shaped biconical structure whose polarization vectors point towards the hidden nucleus. By subtracting a purely centro-symmetric component from the map of polarization angles, an elongated (20 × 50 pc) region appeared at the assumed location of the dusty torus. If the signal traces the exact torus extension, high angular resolution polarization observations would become a very powerful tool to study the inner core of AGN.
In this lecture note, the second of the series, we will show the preliminary results obtained by running Monte Carlo radiative transfer codes for an NGC 1068-like AGN. Our ultimate goal is to reproduce the existing UV-to-infrared polarimetric observations using a single coherent model in order to constrain the true three-dimensional morphology of the hard-to-resolve components of close-by AGN.
2 Building an NGC 1068 prototype
Our primary model is powered by a central, isotropic, point-like source emitting an unpolarized spectrum with a power-law spectral energy distribution F_* ∝ ν^-α with α = 1. Along the polar direction, a bi-conical, ionized wind with a 25° half-opening angle with respect to the polar axis flows from the central source to 25 pc. The wind is assumed to be ionized and therefore filled with electrons. It is optically thin (optical depth in the V-band along the polar direction τ_V = 0.1). Along the equatorial plane, a flared disk sets in at 0.05 pc (a typical dust sublimation radius, see Kishimoto et al. 2007) and ends at 10 pc. The half-opening angle of the dust structure is fixed to 30° (Marin et al. 2012) and its V-band optical depth is of the order of 50. The dust is composed of 100% silicates with grain radii ranging from 0.005 µm to 0.25 µm, together with a size distribution proportional to a^-3.5 (a being the grain radius).
The observer's viewing angle is set to 90° with respect to the symmetry axis of the model. More than 10^7 photons were sampled to obtain polarimetric images with a pixel resolution of 1 pc (9 milli-arcsec at 14.4 Mpc). For this proceedings note, we selected the images computed at 1 µm and used the Monte Carlo code STOKES.
Results of our baseline model
The polarimetric maps for our baseline model are shown in Fig. 1. The left image shows the polarization position angle superimposed to the 1 µm polarized flux. The polarization vectors show the orientation of polarization but are not proportional to the polarization degree (which is shown in the right image).
The polar outflows, where electron scattering occurs at a perpendicular angle, show the strongest polarization degree (up to 90-100%), associated with a centro-symmetric polarization angle pattern. This is in perfect agreement with the polarization maps taken by the optical and infrared polarimeters in the 90s and recently updated by Gratadour et al. (2015). At the center of the model, the photon flux is heavily suppressed by the optically thick material, but scattered radiation from the cones to the surface of the torus leads to a marginal flux associated with a weak polarization degree (< 8%). Such a degree of polarization is in very good agreement with the values observed by Gratadour et al. (2015) at the location of the nucleus (5-7%). However, compared to the results of the previous authors, the polarization position angle from the modeling is almost centro-symmetric rather than aligned/anti-aligned with the circumnuclear dusty structure. It is only at the highest point of the torus morphology that a ∼50% polarization degree associated with a higher flux can be found, due to a lesser amount of dust facing the wind-scattered photon trajectories. When the whole picture is integrated, the resulting polarization degree is of the order of 60%.
Accounting for the ISM
To test multiple configurations, we ran a second series. Based on the previous setup, we included interstellar matter (ISM) around the model. The ISM dust grains share the same composition as the dust in the circumnuclear region.
A striking difference between the ISM-free (Fig. 1) and the ISM-included (Fig. 2) polarimetric maps is the shape of the outflowing winds seen in transmission through the dust. The perfect hourglass shape observed when the AGN was in vacuum is now disturbed. The global morphology is more similar to a cylinder, with no flux gradient observed as the photons propagate from the central engine to the far edges of the winds. The overall flux distribution is more uniform throughout the winds, which seems to be in better agreement with what has been observed in the same band, at least on sub-arcsec scales (Packham et al. 1997). At better angular resolutions, polarized flux images are still needed. The polarization position angle has retained its centro-symmetric pattern, but the polarization angle is more chaotic at the location of the torus, due to additional dust scattering. The degree of polarization at the center of the AGN is the same as previously, but the integrated map shows a slightly smaller polarization degree (58%) due to depolarization by multiple scattering.
Discussion
Running our radiative transfer codes at 1 µm, we found that a NGC 1068-like model produces the centrosymmetric polarization angle pattern already observed in the optical and infrared bands. Disregarding additional dilution by other sources, the polarization position angle pinpoints the source of emission. Including the ISM in the model does not change the results but tends to decrease the final polarization degree. It also changes the flux repartition in the outflowing winds that act like astrophysical mirrors, scattering radiation from the hidden nucleus. Compared to what has been shown in Gratadour et al. (2015) in the H (1.6 µm) and K (2.2 µm) bands, we find very similar levels of linear polarization at the center of the model, where the central engine is heavily obscured by dust. However, we do not retrieve the distinctive polarization position angle of the torus found by the authors. According to our models, the pattern is at best centro-symmetric rather than directed perpendicular to the outflowing wind axis.
Additional work is needed to explore how such a distinctive pattern can arise at the location of the torus. In particular, adopting the most up-to-date morphological and composition constraints from the literature is mandatory to build a coherent NGC 1068 model. Including effects such as polarization by absorption (dichroism) will be necessary. Comparing our results with past linear and circular polarization measurements (e.g, Nikulin et al. 1971; Gehrels 1972; Angel et al. 1976) will drive our models towards the right direction. Finally, the broadband coverage of the codes, from the X-rays to the far infrared, will allow us to robustly test our final model against spectroscopic and polarimetric observations in many wavebands.
The authors would like to acknowledge ... PNCG, PNHE ?
MontAGN Manual and Future Improvements
This section gives the required information to use MontAGN as a simulation tool. The following document describes the main simulation function, as well as some useful functions implemented in the MontAGN code. This is intended to be a first draft of what will become the MontAGN manual when it is made available to the community. As the code is continuously evolving, the manual will probably undergo a number of evolutions before its first release, but it already gives a good overview of how to use the code and of the options available.
The final objective is to stabilise the code in a more ergonomic version, easily usable by the public. This goal should be reached in the near future. We have however already started to think about the future developments that could improve MontAGN within a reasonable amount of time.
As one of the main limitations is the execution time, one of the most important directions of improvement is to increase the speed of the code. There are different ways to do so; one idea is to port MontAGN to a Graphical Processor Unit (GPU) architecture. As the code repeats the same calculation a great number of times, its adaptation to GPU should be reasonably easy. Another way is to translate the core of the code from Python to C or Fortran. Doing so, the execution speed should increase significantly. Because it was thought from the start that this improvement would be needed, the architecture of MontAGN was built as close as possible to a C++ architecture.
One interesting feature which is not yet included in MontAGN is the possibility to use aligned non-spherical grains. As two types of graphites are already implemented, the use of elongated grains can be achieved by setting different populations, with different sizes, of the same dust species. The main limitation is that, for these grains to have an impact, we need to allow them to be aligned along a particular direction. This is important for AGN studies, as such grains are likely to exist and to be aligned by magnetic fields (see for example Efstathiou et al. 1997).
Finally, the last fairly accessible improvement consists in modifying the source selection in order to include the dust thermal emission. This process is already included in the energy equilibrium used in the re-emission routine; however, when re-emission is disabled, these regions are no longer considered as emitters. Thanks to this upgrade, it would be possible to first use the re-emission mode to compute the temperature map, and then to conduct faster simulations without re-emission but including emission by dust. This improvement would be significant especially when looking at long wavelengths.
1 Using MontAGN
Basic commands
MontAGN can be run by calling the main function of the code :
In [1] : montAGN() (note that the first letter is not capitalised, to avoid confusion with the code's name).
It is required however to load the functions into the terminal, using run, import or exec on the file 'montagn.py' :
In [0] : run montagn
The function has no required input parameter but can be used with many keywords as well as a parameter file. It gives as an output a model class object, as defined in section 4.
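For instance (an illustrative call written for this text, not reproduced from the manual; the keyword values are arbitrary but only use keywords documented in section 1.2), the returned object can be stored for later inspection:
In [2] : mod = montAGN(nphot=1000, filename='quicktest')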
Keywords
The keywords available on the main MontAGN function are listed here, following this organisation :
keyword values available (default value) Explanations.
Keywords available for other useful function included in MontAGN will be described in the corresponding sections.
paramfile 'string' () Name of the parameter file to be loaded.
ask 0 or 1 (1) Ask the user (1) or not (0) for the simulation parameters. If not, one should give the wanted parameters, otherwise the defaults will be used.
usemodel integer from the list below (0) Keyword unused if ask=1.
0 uses the parameters tunable in the file montagn_launch.py
For star models :
1 uses parameters to create a dust cocoon surrounding a central star
2 uses parameters to reproduce the model of Murakawa (2010)
3 uses parameters to reproduce the model of Murakawa (2010) with a lower optical depth
For AGN models :
11 uses the parameters of the 'camembert box' model of the Mie simulation
12 uses the parameters of the simple AGN model of NGC 1068
13 uses the parameters of the Murakawa (2010) model adapted to AGN
14 uses the parameters of the common Strasbourg STOKES model
14.0 for the full model
14.1 for the model without ISM
14.2 for the model with the torus only
add 0 or 1 (0) Add (1) or not (0) the new data at the end of the output file (if it already exists). If add=0 and the file exists, it will be replaced.
usethermal 0 or 1 (1) Enable (1) or disable (0) the thermal part of the MontAGN code. If enabled, the simulation will consider the temperature adjustment of each cell as well as re-emission by dust, with a slower execution as a downside. If disabled, packet energies are weighted by the albedo at each scattering to account for the absorbed fraction of the packet's photons.
nphot positive integer ( ) Number of packets to be launched. If not given, the user will be asked for it at the beginning of the simulation. A typical execution time is 0.2 s per packet (it depends on the optical depth).
filename 'string' ('test') root name of the output files (between quotes ") for packets, dust densities and temperature (if usethermal=1).
ndiffmax positive integer (10) Maximum number of interactions that a packet will be allowed to undergo. If it reaches this limit, it will stop propagating and will not be considered in the output.
dthetaobs positive integer (5) Indicate the angular half-width range recorded in one file. dthetaobs=5 will therefore create 19 files (inclinations from 0° to 180° in steps of 10°), each recording packets within a width of 10°, centred on the angle value of the file.
-force_wvl float (in m) ( ) If specified, impose a unique wavelength on all the packets launched in the code. Please consider only using wavelengths allowed by the spectra and grain properties (between 20 nm and 2 mm currently).
-grain_to_use list of 0 or 1 for [sil,gra_o,gra_p,elec] ([0,0,0,0]) Indicate whether grain properties should be computed (1) or not (0). Note that if a grain is used in the simulation, its properties will always be computed.
nang positive odd integer (999) Indicate the angular resolution (number of steps) of the phase functions for scatterings. Has to be an odd number!
nsize positive integer (100) Indicate the size resolution (number of steps) for grains. As tables are currently not well sampled in grain size, it is unnecessary to give a high value.
rgmin list of floats for [sil,gra_o,gra_p] (in m) ([0.005e-6,0.005e-6, 0.005e-6]) Minimal radius of grains of different species.
rgmax list of floats for [sil,gra_o,gra_p] (in m) ([0.25e-6,0.25e-6, 0.25e-6]) Maximal radius of grains of different species.
alphagrain list of floats for [sil,gra_o,gra_p] ([-3.5,-3.5,-3.5]) Exponent of the power law of the MRN dust distribution for grains of different species.
nsimu positive integer (1) Indicate the identity of the simulation in terms of thread. Mostly used for parallel usage, should be set to 1 otherwise.
display 0 or 1 (1) Allows (1) or not (0) the display of the density maps, final images, etc. Disabling it is recommended in parallel usage.
cluster 0 or 1 (0) 1 is mandatory when launched on clusters for management of files and directories.
cmap 'string' ('jet') Determine the colormap to be used for displays.
vectormode 0 or 1 (0) If display=1, create some additional final maps with polarisation vectors plotted above.
Example of execution
Launch a simulation with 20,000 photon packets, with pre-defined model 1 (dust shell, see keyword usemodel). Outputs will be saved in 'test_***.dat' (see output section 2). The file will be replaced if it already exists.
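The corresponding call (reconstructed here as an illustration using the keywords of section 1.2; the exact command is an assumption) would be of the form:
In [2] : montAGN(usemodel=1, nphot=20000, filename='test')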
Launch directly from terminal
It is also possible to launch the code directly from a terminal (bash or other) by executing the file 'montagn_paral.py' :
> python montagn_paral.py --usemodel=1 --nphot=20000 --filename='test'
- Most of the main options described in section 1.2 are also available; see the parallelised section 1.5 for more information.
- Launching the code this way automatically disables the display.
- Filenames will be composed from the root filename, the thread number and the inclination angle : 'filename_001_***.dat' for the first thread, etc.
Typical execution time
The typical execution time depends on the mode used :
- with temperature computation and dust re-emission : 300 packets <-> 1 min
- without temperature computation and dust re-emission : 100 packets <-> 1 min
Other parameters
Other parameters are available, especially concerning the dust structures to be sampled onto the 3D grid. These parameters are to be entered in a parameter file (see keyword paramfile) :
-res_map float () Grid resolution (in m).
-rmax_map float () Grid size (in m).
rsubsilicate float () Estimated sublimation radius of the silicate grains (in m). Be careful: as this radius can only increase, be sure not to choose a value above the actual value.
rsubgraphite float () Estimated sublimation radius of the graphite grains (in m). Be careful, as this radius can only increase, be sure not to choose a value above the actual value.
When filling the 3D grid with dust densities, some structures already implemented are available (radial and spherical power laws, clouds, shells, constant density cylinders and torus geometries), through parameters :
denspower [float(col[1]),float(col[2]),float(col[3]),[float(col[4]),float(col[5]),float(col[6]),float(col[7])]] () [[Radial power index, Radial typical profile size (in m), Vertical decay size (in m), [Density of grains (in particles / m^3) at the radial typical profile size (one for each grain type)]]] .
spherepower
[float(col[1]),float(col[2]),[float(col[3]),float(col[4]),float(col[5]),float(col[6])]] () [[Radial power index, Radial typical profile size (in m), [Density of grains (in particles / m^3) at the radial typical profile size (one for each grain type)]]] .
torus
[float(col[1]),[float(col[2]),float(col[3]),float(col[4]),float(col[5])],float(col[6]),float(col[7]),float(col[...]
torus_const
[float(col[1]),[float(col[2]),float(col[3]),float(col[4]),float(col[5])],[float(col[6]),float(col[7]),float(col[...]
cylinder
[float(col[1]),float(col[2]),[float(col[3]),float(col[4]),float(col[5]),float(col[6])]] () [[cylinder radius (in m), cylinder height (in m), [density of grains (in particles / m^3)]]] .
Parallel
MontAGN can be launched in parallel. If so, the code will create as many threads as requested, creating as many times the usual number of output files. All the threads are fully independent in terms of seed, ensuring independent results from one thread to any other.
To launch MontAGN in parallel, the following command should be entered in a terminal in the MontAGN directory, executing the file 'montagn_paral.py' :
> python montagn_paral.py --options
- Most of the options are the same as those of the classical montAGN() call. Some of them cannot be used in this mode, the display is automatically disabled and a new option can be used (--nlaunch).
- As for any usual function called from a terminal, options should be passed using '--', with the corresponding parameter if required.
- The new option --nlaunch defines the number of threads that will be created and launched in parallel. If it is not given, only one thread will be created, leading to a classical MontAGN simulation.
Example
> python montagn_paral.py --usemodel=2 --nphot=20000 --filename='test' --nlaunch=2
will launch two threads of 20,000 photon packets each, using pre-defined model number 2, and will save the results in the files 'test_001_***.dat' and 'test_002_***.dat' (see Output section 2).
Available options :
- ask can not be used
- --usemodel=0, 1, 2, ...
- --add
- --usethermal
- --nphot=##
- --filename="
Displays
The two main displaying functions included in MontAGN are display_SED() and plot_image(). Both of these functions use the _xxx_phot.dat files to construct their maps or graphs.
The last display function is plot_Tupdate(), which allows one to compute maps of useful information about the temperature changes within the cells. It therefore uses and requires the file _T_update.dat.
3.1 display_SED
display_SED(), according to its name, is a function allowing one to show the SED of the packets received under the selected restrictions. It gives as an output two maps, a linear and a log scale one.
In [2] : display_SED(filename)
resimage ( ) Resolution of the final images. Depends on keyword resunit. If resunit is not given, or set to 'pixel', resimage gives the number of pixels that will be used on one axis to display the maps (for example 51 for an image of 51x51). Else, resimage determines the resolution of the image in resunit.
resunit 'string' ('pixel') Gives the unit of the value entered with resimage.
diffn positive integer or null ( ) If given, uses only packets with this value of number of scatterings.
ndiffmax positive integer ( ) If given, define the maximum value of number of scattering for packets to be used.
cmap 'string' ('jet') Determine the colormap to be used for displays.
enpaq float (3.86e26) Initial energy of the photon packets. Used to compute the probability map.
coupe 0 or 1 (0) Allows one to extract cuts along the central lines (1) and to compute histograms of polarisation within these cuts.
vectormod 0 or 1 (0) If display=1, create some additional final maps with polarisation vectors plotted above.
sym 0, 1, or 2 (0) Uses or not symmetries to compute the Q and U maps.
0 cylindrical symmetry used
1 symmetry according to the equatorial plane is assumed as well as cylindrical symmetry
2 symmetry of all four quadrants is assumed as well as cylindrical symmetry
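As a reference for how such maps relate to the recorded Stokes parameters (a minimal sketch written for this text, not the actual plot_image internals), the degree and angle of linear polarisation are obtained from the I, Q and U images as follows:

import numpy as np

def polarisation_maps(I, Q, U):
    # Degree of linear polarisation and position angle from Stokes maps.
    p = np.sqrt(Q**2 + U**2) / np.where(I > 0, I, np.nan)
    theta = 0.5 * np.arctan2(U, Q)   # in radians, measured from the Q > 0 axis
    return p, theta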
MontAGN class objects
MontAGN has a class object called model which keeps all the important parameters about the simulation. It is therefore possible to have access to these parameters, at the end of a simulation or before its start. A model object contains :
Your theory is crazy, but it's not crazy enough to be true.
Niels Bohr
In this chapter, we will use simulation tools and especially the MontAGN code, described in Chapter 5, with the goal to go further in the interpretation of the polarimetric observations conducted on NGC 1068 and detailed in Chapter 4 and Gratadour et al. (2015). In particular, we aim at reproducing the central region of low degree of polarisation at a constant position angle with a ridge of higher polarisation degree at the very centre of the AGN.
As said before, polarisation allows us to have access to more information. What is important here is that polarisation gives clues on the history of the successive matter-light interactions. Photons can be emitted already polarised if the source is for instance composed of elongated and aligned grains (see e.g. Efstathiou et al. 1997).
But light can also become polarised if scattered or if there is unbalanced absorption by aligned grains. Polarisation is therefore a powerful tool to study the properties of scatterers such as dust grains or electrons in AGN environments. Polarimetric measurements can also help to disentangle the origin of polarisation between spherical and oblate grains, as discussed in Lopez-Rodriguez et al. (2015) and put constraints on the properties of the magnetic fields around the source, as it is likely to align non spherical dust grains. This would be seen as polarised light through dichroic emission and absorption (see Packham et al. 2007). Differences between oblong and spherical grains will be discussed in section 6.3.
Observational Constraints for Simulations
We will base our models on the unified model of AGNs, proposed by Antonucci (1993), presented in section 1.4 and detailed in the case of NGC 1068 in section 4.1.1. For such a complex environment, simplified models are easier to handle to test basic ideas, and we thus approximated the nucleus of NGC 1068 by its essential components: the CE, the ionisation cone and the torus (see section 4.1.1).
Our assumption was that the optically thick torus would block direct light from the hottest dust and that we would see light scattered twice. The first scattering would take place in the ionisation cone, allowing some photons to be redirected toward the outer part of the equatorial plane and then undergo a second scattering (see blue path on figure 6.1 for illustration). If we detect such photons, the signature would be a polarisation pattern aligned with the equatorial plane. This is an analogous phenomenon to the one observed in the envelope of YSOs as proposed by Bastien & Menard (1990) and Murakawa (2010). In this configuration, spherical grains can alone produce these signatures in polarisation images. This should constrain the optical depth of the different components as we will investigate hereafter.
The torus is more likely to be clumpy, as discussed in section 4.1.2; however, we will first limit our models to uniform structures, each with a constant density, to ease simulations and analysis. We have tested a range of optical depths between τ_V = 20 and 100 for the torus median plane. We based this choice on the current estimations of about 50 in the visible, for example as derived by Gratadour et al. (2003). Note that more recently, Lira et al. (2013) or Audibert et al. (2017) found higher values of optical depth, and that both these authors derived them assuming clumpy structures.
The difference in polarisation arising from the choice of uniform or clumpy structures was investigated, in the UV-visible, by Marin et al. (2015). Their conclusion was that fragmentation, mainly of the torus but they also tested it in the NLR, has a significant impact on the observed polarisation (angle and degree). It however becomes much less significant when looking at AGNs with an inclination of about 90°, as shown by their figure 12 in particular. In the case we studied, using fragmentation or not should therefore not affect the overall observed polarisation degree, and our results should therefore also be consistent with a fragmented medium. One point that should however be investigated further is whether this could significantly affect the polarisation locally when mapped at high angular resolution.
Structures Geometry
First Toy Models
We first developed a set of three models to compare the results of simulations through both the MontAGN and STOKES codes (Goosmann & Gaskell 2007; Marin et al. 2012, 2015) in a simple case. Detailed information about the comparison of both codes can be found in Grosset et al. (2016), and first analyses and conclusions are also available in Marin et al. (2016b), both shown in section 5.9. As the two codes are not aimed to work in the same spectral range, namely the infrared for MontAGN and the NIR, visible, UV and X-rays for STOKES, we selected intermediate and overlapping wavelengths: 800, 900 and 1000 nm. Our baseline basic models included only silicate grains in the dusty structures and electrons in the ionisation cone. All grain data, including the graphites used in other models, come from Draine (1985). In all cases, the grains were set to dielectric spheres with a radius ranging from 0.005 to 0.250 µm and a power law index of -3.5, following the MRN distribution as defined in Mathis et al. (1977).
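Such an MRN size distribution can be sampled by inverse transform; the following short Python sketch (an illustration written for this text, not the MontAGN routine) draws grain radii following n(a) ∝ a^-3.5 between the two bounds used here:

import numpy as np

def sample_mrn_radii(n, a_min=0.005e-6, a_max=0.250e-6, alpha=-3.5, seed=0):
    # Inverse-transform sampling of n(a) da ∝ a**alpha da on [a_min, a_max] (radii in m).
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    k = alpha + 1.0                      # -2.5 for the MRN exponent
    lo, hi = a_min**k, a_max**k
    return (lo + u * (hi - lo)) ** (1.0 / k)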
The three models are based on simple geometrical features with constant silicate or electron densities:
- A flared dusty torus, ranging from 0.05 pc to 10 pc, a value typical of what is currently estimated (Antonucci 1993; Kishimoto 1999; García-Burillo et al. 2016). It has a half-opening angle of 30° from the equatorial plane and an optical depth of 50 at 500 nm in the equatorial plane.
- An ionised outflow, containing only electrons, in a bi-cone with a half-opening angle of 25° from the symmetry axis of the model, from 0.05 pc to 25 pc, with an optical depth of 0.1.
- An outer dusty shell ranging from 10 pc to 25 pc, outside of the ionisation cone, with a radial optical depth of 0.5 at 500 nm.
Model 1 is built with these three components, model 2 with only the torus and the outflow, and model 3 is reduced to the torus only. See figure 6.2 for the density maps of each model. We launched 10^7 packets for each of these models, disabling re-emission and considering only the aforementioned wavelengths. All inclination angles were recorded. We only show in figures 6.3 and 6.4 the λ = 800 nm maps at an observing angle of 90° (edge-on).
Packets are selected for computing the maps over a certain range of inclinations. In this work, we used the range ±5°, and the images computed at 90° therefore include photons exiting with an inclination in the range [85°, 95°]. As shown in figure 6.3, the results of the two codes are in fairly good agreement, as expected from Grosset et al. (2016) and Marin et al. (2016b). The few observed differences likely arise from the divergence in the simulation strategy. Because MontAGN (in the configuration used here) propagates photon packets sometimes having suffered strong absorption, the pixels with a low number of photons do not contain reliable information (see section 5.7). However, as the packet number increases, the maps converge toward a more realistic result, close to the output of STOKES. In the central regions, we only have few photons recorded. This is expected because photons escaping here have been scattered several times, and their number decreases strongly at each interaction because of the low albedo. For this reason, the central regions differ slightly between the STOKES and MontAGN results, especially in their outer parts.
In model 1, we get two different regions: first, a central region mainly composed of photons that have been scattered twice; the second region is separated into two polar areas of single scattering. These appear in blue in the maps of figure 6.4. As expected in this configuration (discussed in Fischer et al. 1996 and Whitney & Hartmann 1993 for example), we observe at north and south two well-defined regions of centro-symmetric polarisation position angle. In these two zones, photons are scattered with an angle close to 90°, which ensures a high degree of polarisation and a polarisation position angle orthogonal to the scattering plane, leading to this characteristic pattern (see section 3.3.3 for details).
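For reference, for the Rayleigh/Thomson-like phase function relevant here, the degree of linear polarisation produced by a single scattering of unpolarised light at angle θ is
$$ p(\theta) = \frac{1 - \cos^2\theta}{1 + \cos^2\theta} , $$
which reaches 100 % at θ = 90° and explains the high polarisation degrees of the two polar regions.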
The central belt is mostly dominated by the photons scattered twice. Therefore it traces the regions that photons can hardly exit without scattering, because of the optical depth of the medium: these are the areas obscured by the torus. However, there is no clear privileged direction of the polarisation angle in this central band. This is likely due to the surrounding medium, allowing the first scattering to take place in any direction around the torus, a conclusion that led us to define model 5 (described in the next section).
Model 2 only differs by the lack of a dust shell surrounding the torus found in the first model. The signal clearly shapes the ionisation cone as a region of single scattering, and the torus region. In this second area, the centre is not tracing photons scattered twice, but photons that undergo multiple scattering (5 or more). The role of the shell is therefore critical for the central signal we observe with model 1. It provides an optically thin region where photons can be scattered, without going through the optically thick torus (see sketch on figure 6.1). This is confirmed in model 3, which reproduces quite well the results of model 2 in the torus region despite the lack of an ionisation cone in which the photons can undergo their first scattering.
Double Scattering Models
As these simple models were not able to reproduce well enough the observed horizontal polarisation in the central inner parsecs of Gratadour et al. (2015), we then added some features. Namely we increased the density of the torus to an optical depth of nearly 150 at 800 nm (model 4) and replaced in a second model the outer shell by an extension of the torus with lower density (model 5). See figure 6.5 for the density maps.
We used for these models two different dust compositions: only silicates on the one hand, and a mixture of graphites (parallel and orthogonal) and silicate grains on the other hand. The ionisation cone contains only electrons in both cases. In the case of silicates and graphites, we used the following number ratio: 37.5 % silicates, 41.7 % orthogonal graphites (electric field oscillating in a plane perpendicular to the graphite plane) and 20.8 % parallel graphites (electric field oscillating in a plane parallel to the graphite plane), based on Goosmann & Gaskell (2007). Other ratios could be considered, for example by using the work of Jones et al. (2013). The pure silicate and mixture models share the same τ_R. We ran each of these models with 10^7 photon packets launched, without re-emission and considering only the wavelengths 800 nm and 1.6 µm. All inclination angles were recorded. We show in figures 6.6 and 6.7 maps derived from the MontAGN simulations. These correspond to maps of the averaged number of scatterings with polarisation vectors (p = 1 is represented by a length of one pixel), for models with the two dust compositions. One should first notice that the dust composition slightly affects the results, mostly in the maps of figure 6.7. This is likely due to the difference in optical depth introduced by the difference in composition, as will be discussed later in section 6.3.
The centro-symmetric regions of model 4 are similar for both dust compositions at 800 nm and 1.6 µm, and are similar to the results of model 1. As the optical depth of the torus should not affect this part of the maps, this is consistent. However, the results found with model 4 are very close to those observed with model 1, even in the central belt. In this region, most of the photons undergo two scatterings at 800 nm for model 4; the only difference arises from the model with silicates, where we see that a slightly larger fraction of photons have undergone more than two scattering events. At 1.6 µm, there are almost no photons coming from the regions shadowed by the torus on the outer central belt with the pure silicates model. Again this is an effect of optical depth (see section 6.3). Model 5 (second and fourth images of figures 6.6 and 6.7) differs more from the previous models. By adding a spatially limited region where photons can be scattered a second time, hidden by the torus from the emission of the source, we see a large region of light scattered twice. Polarisation in this area in the 800 nm map shows a clear constant horizontal polarisation. This is not visible on the 1.6 µm map, where the optical depth of the torus is not high enough to block photons coming directly from the AGN centre.
First Interpretations
In the ionisation cone, we are able to reproduce the observed centro-symmetric pattern at large distance from the centre. This is expected from photons scattered only once (see e.g. Marin et al. 2012). The maps of the average number of scatterings confirm that in all these regions, we see mainly photons scattered a single time.
More interesting is the central region. We are able to reproduce with model 5 (figure 6.6) a horizontal polarisation pattern, at least at 800 nm. This is comparable to what was obtained in the case of YSOs by Murakawa (2010) (see for example their figure 6). The case of YSOs is however very different from AGN in terms of optical depth, which is about 6.0×10^5 in the K band in the equatorial plane. However, the geometry of the dust distribution is somewhat similar, with two jets in the polar directions and a thick dusty environment surrounding the central source. We based our interpretation of the pattern observed in NGC 1068 on the effect called "roundabout effect" by Bastien & Menard (1990) and used by Murakawa (2010) to describe the photon paths in YSOs. In AGN environments, despite having an optically thick torus, we do not expect such high optical depths. Studies tend to argue for optical depths lower than 300 in the visible, as discussed in section 6.5.2.
As said before, to be able to see horizontal polarisation with only spherical grains, it is necessary to have a region, beyond the torus, with lower optical depth at the considered wavelength (typically under 10), for the photons to be scattered a second time toward the observer. This is realistic because one can expect the external part of the torus to be diluted into the interstellar medium, with a smooth transition. Raban et al. (2009) used two components with different extensions and temperatures to fit their observations. Furthermore, the theory of disk winds (Emmering et al. 1992) being at the origin of the torus, described for example in Elitzur & Ho (2009), would support such dilution.
The ionisation cone is an important piece as it is required for the first scattering. However, while comparing the results of models 1 and 5, it seems important to have a collimated region for this first scattering to happen, because all photons will then have almost the same scattering plane, which therefore leads to a narrow range of polarisation position angles. This is not the case in model 1, where photons could interact not only in the ionisation cone but in the whole outer shell.
Discussion on Composition
Oblong Aligned or Spherical Grains
One major composition difference that has been investigated for nearly 30 years is whether polarisation is due to dichroic absorption or emission through elongated aligned grains, or comes from scattering. Because of their shape, non spherical grains would emit or absorb photons in a non uniform fashion as a function of their polarisation. If they are randomly distributed, one should expect the difference in polarisation to be averaged and not significant. However, if aligned, polarisation can arise directly from absorption or emission of these grains. This model was discussed in Efstathiou et al. (1997), who investigated through simulations the effect of aligned dust grains on polarisation.
A critical feature of dichroism is the switch in polarisation angle induced by the transition from absorption to emission as a function of the wavelength described by Efstathiou et al. (1997). This should theoretically drive a 90 • orientation modification of the polarisation angle since the preferred direction of absorption at short wavelengths will become a direction of emission at longer wavelengths. Such a transition was observed on NGC 1068 by Bailey et al. (1988) with a polarisation PA of approximately 120 • under 4 µm and a PA of 46±8 • above. This was interpreted as arising from the presence of elongated aligned grains by Efstathiou et al. (1997).
In this case, this alignment could be induced by strong magnetic fields around the nucleus (see for example Bailey et al. 1988;Efstathiou et al. 1997). This allowed Aitken et al. (2002) and Lopez-Rodriguez et al. (2015) to constrain this hypothetical magnetic field using polarimetric simulations and observations. However, the precise modality of this alignment process in this configuration on a rather large scale (60 pc as observed in Chapter 4) is still to be established. For instance if it is due to magnetic field, it would require a toroidal component because we expect the polarisation pattern to be parallel to the magnetic field in the case of dichroic absorption as discussed in Aitken et al. (2002) and Lopez-Rodriguez et al. (2015). Bastien & Menard (1990) and later Murakawa (2010) have shown that scattering on spherical grains can reproduce some of the features attributed to elongated grains. In particular, the horizontal pattern reproduced thanks to model 5 was also obtained by Murakawa (2010) (see figure 6 of the paper). The wavelength dependency of the scattering on spherical grains is however still to be investigated in the case of multiple scattering, but could hardly reproduce the wavelength switch with simple scattering. For these reasons, the inclusion of aligned elongated grains is an important future improvement for our model.
Dust Composition
Dust composition also has an effect on the polarisation maps. The evolution of optical depth with wavelength depends on the dust species. This is triggered by the difference in extinction coefficients shown in figure 6.8. Because we used different dust compositions, despite fixing the optical depth at a given wavelength, the optical depths will evolve differently. For instance, with τ_V ≈ 50, we have τ_H ≈ 2.7 in the pure silicates case and τ_H ≈ 11.6 with the mixture of graphites and silicates. This also explains why there are almost no photons in the equatorial belt of the upper panel of figure 6.7 (pure silicates), as opposed to the case with silicates and graphites. The extended torus region indeed has, in the pure silicate model, a low optical depth at 1.6 µm, and photons have a low interaction probability in this area, making the signal in the equatorial belt hard to detect. This will be discussed and used later to study the role of the optical depth in the outer torus.
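The conversion involved here simply rescales the optical depth with the extinction cross section of the adopted mixture, for a fixed dust column:
$$ \tau(\lambda) = \tau_V \, \frac{C_{\mathrm{ext}}(\lambda)}{C_{\mathrm{ext}}(V)} , $$
so that the same column of dust with τ_V ≈ 50 yields τ_H ≈ 2.7 for pure silicates but τ_H ≈ 11.6 for the silicate-graphite mixture.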
NLR Composition
When Antonucci & Miller (1985) first observed broad emission lines in the polarimetric spectra of NGC 1068, they estimated that the scattering was likely to happen on free electrons in the NLR. It was thought before that dust scattering may be the dominant mechanism as explained by Angel et al. (1976) because of the shape of the spectra. The presence of electrons in a highly ionised gas is currently mostly accepted, with a higher density closer to the centre, despite discussions on the presence of dust components in the NLR as for example by Bailey et al. (1988) and Young et al. (1995), invoked to explain observational features especially in the NIR.
We assessed two versions of our first two models. The first version is the one shown in section 6.2.1 with the densities displayed in figure 6.2, containing electrons in the ionisation cone. The second version uses the same dust densities in the torus and the outer shell, but we replaced the electrons by silicate grains in the ionisation cone, with a density allowing the same optical depth in the V band. This provides the ability to evaluate the respective impact of the two types of scattering. Images generated with these two versions are shown in figure 6.9 for model 1 and figure 6.10 for model 2.
We observe that in the cone, the flux due to scattering on electrons is stronger than the one coming from scattering on silicate grains, in both figures 6.9 and 6.10. This difference stems directly from the absorption properties. Silicates through Rayleigh scattering and electrons through Thomson scattering have the same scattering phase functions. However, electrons have an absorption coefficient of 0 while that of silicates is close to 1 and depends on the wavelength. This increased flux coming from the ionisation cone is the effect that allows photons to be scattered twice in the central belt, because it sends more photons toward this region.
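In other words, the single-scattering albedo, which sets the fraction of a packet surviving each interaction, is
$$ \omega = \frac{C_{\mathrm{sca}}}{C_{\mathrm{sca}} + C_{\mathrm{abs}}} , $$
equal to 1 for electrons (Thomson scattering is conservative) and below 1, with a wavelength dependence, for silicate grains.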
Note that the linear degree of polarisation is also slightly lower in the cone if it comes from dust scattering. It is however tough to compare this feature to observations, because we need to include the dilution of polarisation by thermal emission as well as by the ISM and galactic influence.
Optical Depth
Models
In order to measure the impact and significance of the optical depth, we ran a series of models based on model 5, differing only by the optical depth of their components. Among the three main components, the ionisation cone, the torus and its extended region, we kept the density of two of them fixed and changed that of the last one. First we took models with varying optical depth in the torus. In a second batch, we changed the optical depth of the cone, and lastly we studied a variation of the extended torus. The results are shown in figures 6.11, 6.12 and 6.13, respectively. Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle. All these models contain the same mixture of silicates and graphites as in models 4 and 5. 2×10^7 packets were launched per model, all at 1.6 µm where the optical depth is set. Some additional maps are available in figures 6.14 and 6.15. These maps were computed from the same model, at 1.6 µm, in which the optical depth of the cone is fixed to 0.1, 0.8 in the extended torus and 20 in the torus (in H band).
With the first series of simulations (figure 6.11), we obtain results similar to the previous cases. For a low optical depth in the torus, photons can travel without interaction through the torus and produce a centro-symmetric pattern, even in the central region. By increasing the density, the significance of these photons decreases until the double scattering effect becomes predominant (around τ ≈ 20). Beyond this point, increasing the optical depth does not seem to change significantly the observed features.
With the second and third series, the density of the two other structures must be in a given range to obtain the constant horizontal polarisation pattern we are looking for. If these structures have an optical depth too low or too high, the central belt shows multiple scatterings as seen in both figures 6.12 and 6.13. In extreme cases with very high optical depth in the outer torus, we have no photons in this region, a case close to the simulations of model 2 (see figure 6.4). These two conditions seem to be critical to reproduce the features we are expecting.
A last remark is that the photons detected from the ionisation cone are no longer just scattered once at high optical depth in the cone (figure 6.12). However, the centro-symmetric pattern seems to be somewhat similar for double scattering in the cone.
Thick Torus
In order for the photons scattered twice to be dominant in the central region, it is necessary to have a high enough optical depth in the equatorial plane. If not, photons coming directly from the source, with single or no scattering, will contaminate the observed polarisation and even become dominant, as said in section 5.7. The switch from one regime to the other occurs when the probability for a photon to emerge from the dust having suffered at most one scattering is of the same order as the probability of a photon being scattered twice. Note that because electrons do not absorb photons, they are much more efficient than dust grains for a given optical depth. Even with electrons in the cone, photons have to be redirected in the correct solid angle toward the equatorial plane and be scattered again, without being absorbed, in the observer's direction. The favourable cases to observe this signal seem to be for optical depths higher than 20 in the equatorial plane, as determined from figure 6.11.
This conclusion is in agreement with the observation that no broad line emission is detected in total flux when the AGN is viewed edge-on. With a lower optical depth torus, we would have been able to detect such broad lines directly through the torus. This is not the case, as demonstrated for example by Antonucci & Miller (1985) on NGC 1068 and Ramos Almeida et al. (2016) on a larger sample, who revealed these hidden lines only thanks to polarimetry.
Optical Depth of Scattering Regions
Another important parameter is the density of the matter in the cone and in the extended part of the torus. If both areas have low density, photons have a lower probability to follow the roundabout path and are less likely to produce the horizontal pattern, as seen in figure 6.12. But if the optical depth is too high in the cone and/or in the outskirts of the torus, typically of the order of 1-2, photons have a low probability to escape from the central region and will never be observed. We can see on the last two panels of figure 6.12 that the averaged number of scatterings in the cone is larger than 1, and that the signal arising from the photons scattered twice becomes weaker in the central belt. The range of values which favours the constant polarisation signal in the centre is around 0.1-1.0 in the cone and 0.8-4.0 in the extended region of the torus. This corresponds for the outer torus to a density about 20 times lower than in the torus.
This underlines the significance of these two parameters and especially of the extension of the torus. When comparing these results to the previous models, we noticed a lack of photons in this configuration. The only factor that can explain this lack of photons at 1.6 µm is the optical depth difference in the outer shell, which is of the order of 0.5 at 800 nm and much lower in the NIR, to be compared to the 0.8 lower limit determined previously. The structure, composition and density of the torus (including its outer part) are therefore all intervening to determine the polarisation pattern in the median plane. To be observable, the horizontal polarisation requires a proper combination of the parameters within a rather narrow range.
Based on the previous simulations, we tried to reproduce the SPHERE observations of NGC 1068. As we observed a constant polarisation orientation in the core of NGC 1068 both in the H and Ks bands, we need in our simulation an optical depth of at least 20 at 2.2 µm to be able to generate such polarisation features. We changed our previous model 5 to adapt it to these larger wavelengths. Here, the torus has an optical depth τ_Ks = 19, which gives τ_H = 35 and τ_V = 169. Note that this is in the range of the current estimations of the density of dust in the torus, as will be discussed in section 6.5.2. We used for this simulation a mixture of silicate and graphite grains to rely on a realistic dust composition for an AGN, inspired from Wolf & Henning (1999). We adapted their mass ratios to number ratios according to the PhD thesis of Guillet, leading to 57 % silicates, 28.4 % parallel graphites and 14.2 % orthogonal graphites.
With these parameters, the maps of figure 6.16 are now able to reproduce some of the observed features. Namely, the constant horizontal polarisation in the central part is similar to the one observed on NGC 1068 at 1.6 and 2.2 µm. We are however not yet able to reproduce the ridge that appears at the very centre of the SPHERE observations.
Consequences for Observations
In figure 6.16, one can see that there are very few differences between the H band and Ks band images. The optical depth of 169 in the V band required for these images is acceptable, being in the range of present estimations. Marin et al. (2015) use values of optical depth in the range 150-750 in the visible, while Gratadour et al. (2003) derived a value of about 50; more recent studies (e.g. Audibert et al. 2017) found integrated values of optical depth of about 250, based on fits of the NIR and mid-IR spectral energy distributions of samples of Seyfert 1 and 2 galaxies. We have to keep in mind that all these optical depths are derived assuming a clumpy structure, so that we should be careful when comparing them to optical depths of continuous dust distributions, the masses of dust being very different. It is in our plans to explore the effect of a clumpy structure, a capability already implemented in MontAGN.
A part of the polarisation may arise from elongated aligned grains as studied by Efstathiou et al. (1997) and introduced in section 6.3.1. If so, the required optical depth could change. This might explain the observed ridge on the polarimetric images of NGC 1068 and aligned elongated grains are therefore something we should investigate further.
Our interpretation does not require special properties for the grains (elongated and/or aligned): indeed, spherical dust grains in the torus coupled to electrons in the ionisation cone are able to reproduce the polarisation orientation in the central belt. We constrained in this case the optical depth of the structures to a rather narrow range of values. In the cone, the optical depth lies between 0.1 and 1. A value of 0.1 gives an electron density of 2.0×10^9 m^-3, which is consistent with the estimated range in AGN (10^8-10^11 m^-3 for NGC 1068 according to Axon et al. 1998; Lutz et al. 2000).
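This value can be checked with a one-line estimate for Thomson scattering, assuming a uniform electron density along the full ~25 pc cone:
$$ n_e \simeq \frac{\tau}{\sigma_T \, L} \approx \frac{0.1}{6.65\times10^{-29}\,\mathrm{m}^2 \times 25\,\mathrm{pc}} \approx 2\times10^{9}\,\mathrm{m}^{-3} , $$
in agreement with the quoted value.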
The only feature that our simulations do not seem to reproduce is the ridge at the very centre of NGC 1068 (Gratadour et al. 2015). Investigating a model with non spherical grains could potentially solve this problem through dichroism.
Wavelength Dependency
As introduced before, the considered scattering effects (scattering on dust, on electrons, dichroic absorption/emission) do not have the same wavelength dependency, and observations in different bands can bring strong constraints on the dominant mechanism. This was the motivation for conducting the IRDIS NB observations shown in section 4.2.2.5. From these and the conclusions drawn from the simulations, we will try here to interpret the wavelength dependency of the features observed on NGC 1068.
Concerning the NLR structures, South-West of the nucleus, the different images from CntH to CntK2 show a rather constant degree of polarisation, about 10-15 %. As demonstrated in section 4.2.2.3, this signal is unlikely to come from the depolarisation induced by the derotator. Furthermore, these structures are also detected on the intensity maps, and the SNR should therefore be sufficient to have fair confidence in the obtained values. However, we cannot draw strong conclusions as long as the filters used have not been tested in polarimetric mode.
Assuming that this lower polarisation is due to physical processes taking place in NGC 1068, we can constrain the composition and temperature of the observed structures. CntK2 is the longest wavelength filter, as shown in figure 4.8, centred around 2.275 µm (against 2.100 and 1.575 µm for CntK1 and CntH respectively). For this reason we expect images with this filter to show more thermal emission from dust than the other filters, potentially accounting for the dilution of the polarised signal. But as we identified these parts of the double hourglass structure in the H, Ks, CntH, CntK1 and CntK2 polarimetric maps with a rather constant polarisation as a function of wavelength, this would be more consistent with electron scattering. This is in agreement with the results obtained in the previous section and with the studies of Axon et al. (1998) and Lutz et al. (2000).
As opposed to the hourglass structures, the central feature varies with wavelength. Its polarisation degree is rather high on the CntH image (about 10-15 %), lower on the CntK1 map (< 10 %) and it is not detected on the CntK2 image. However, this structure at the very centre is visible in the polarised intensity maps from CntH to CntK2, and is therefore not completely hidden. This would indicate the presence of dust in this region, at a temperature high enough for the dust to become a major emitter at 2.275 µm (roughly above 600 K), or with an optical depth close to 1-2 in the CntK2 band.
Because the emissivity and optical depth at 1.575 and 2.275 µm are close, we could theoretically constrain precisely the dust temperature or the density in this structure. This, however, requires a precise knowledge of the scattered flux and will be conducted later. It would also be risky to draw strong conclusions from data with non-characterised filters. Furthermore, the ZIMPOL observation in the narrow R band brought new constraints, with the central region being detected with a degree of polarisation of about 5 %, slightly lower than those obtained with IRDIS.
On these images, obtained in the R band with ZIMPOL, we can roughly identify the inner parts of the bicone structures, situated about 1″ from the centre, visible as 5-10 % polarisation degree regions on the R maps.
Despite this low polarisation degree, the polarisation position angle map shows more precise patterns, once again indicating a clear polarising mechanism taking place in these very luminous regions without dominating the total intensity. At large distance (≈ 1″), we seem to observe once again the previously analysed centro-symmetric pattern. In order to investigate further, we are currently simulating images at 646 nm with MontAGN; the first obtained image is shown in figure 6.17.
Then, shalt thou count to three. No more. No less. Three shalt be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, nor either count thou two, excepting that thou then proceed to three. Five is right out.
Monty Python -the Holy Grail
It will still require a significant amount of work from committed scientists to fully understand the actual structure of the inner region of an AGN, or to trace back the proper evolution of SSCs from their birth in extreme star formation regions to their late stages, with a clear understanding of the impact of the period of their formation. It is exciting to be facing these unresolved problems and to participate in the construction of the new tools that will hopefully be able to give the answers to these questions, or at least to come closer to the solution.
In this last chapter, we will outline the results obtained in this thesis on two types of extragalactic objects, thanks to new adaptive optics techniques giving access to high angular resolution. In a first section, the preliminary results obtained with MOAO on SSCs will be summarised, and in a second section we will present the major results of this work on AGNs, obtained by comparing extreme-AO corrected polaro-imaging in the NIR with radiative transfer simulations featuring polarimetric capabilities, using the code MontAGN developed in-house. We managed to simulate the inner dust structures of an AGN, with a special focus on NGC 1068, and found that the comparison with observations brings clear constraints on the geometry and the density of the dust and electron structures. Finally, we will detail the perspectives opened by this work as well as the future work planned to improve the simulations and go further in the understanding of these objects.
At the scale of this work, our starting point in the SSC domain was to constrain the properties of young clusters, with questions such as: Are SSCs the progenitors of GCs? Are SSCs an extremely massive version of OCs? Do we lack information due to the incompleteness of our sample?
Concerning AGNs, the holy grail would be to build an AGN model that reproduces all the observations, at all wavelengths and for all types of objects. Obviously, our purpose was less ambitious, and we aimed at constraining the torus geometry of Seyfert 2 galaxies. Would this constrained structure also stand for other types of AGN? Would it be possible to determine the composition of the torus?
Observations and Simulations
This work is mostly based on original high angular resolution data obtained thanks to the latest generation of AO systems, one of them being a demonstrator giving a taste of the next generation of instruments. The results obtained here demonstrate the capabilities of these new techniques and illustrate the kind of scientific discoveries they make achievable.
CANARY is a demonstrator of the new MOAO technology and was used on two extragalactic targets: IRAS 21101+5810 and NGC 6240.
While the images of IRAS 21101+5810 brought unprecedented information, the observation of NGC 6240 with CANARY (on the WHT, a 4.2 m diameter telescope) did not improve on the already existing images by NIRC2 on the Keck telescope (10 m diameter). We did however take advantage of these two sets of images to assess the quality of the data obtained with MOAO through CANARY. This study, led by Damien Gratadour, shows that despite using a new AO system, still under development and on a smaller-aperture telescope, the results are close to the expectations for this system, reaching the same resolution as achieved by SINFONI (Spectrograph for INtegral Field Observations in the Near Infrared) on the VLT (note that SINFONI is an integral field spectrograph and is not optimised to obtain the highest angular resolution through BB imaging). A comparison of these images is shown in figure 7.1. These performances give confidence in the future development of MOAO and in its ability to achieve on sky an excellent resolution on several targets simultaneously.
SPHERE is the new exoplanet hunter installed since late 2014 on the VLT. It features an extreme AO system, giving access to high contrast capabilities and to HAR, which are required to image exoplanets or disks around young stars. However, it proves powerful as well at unveiling the inner region of an AGN.
Our three observing runs on NGC 1068 with SPHERE highlighted the gain brought by very good AO corrections when observing nearby galaxies. This idea is not new, as our team already applied this method to NGC 1068 with PUEO-CFHT (Rouan et al. 1998), then with NaCo (Rouan et al. 2004 and Gratadour et al. 2005). The difference between the previous NaCo observations and the new SPHERE data illustrates perfectly the improvements achieved with this 2nd generation instrument on the VLT. Despite the fact that NGC 1068 was a difficult target, its magnitude being just at the SPHERE sensitivity limit, the SPHERE images show a very satisfactory resolution as well as a high contrast (weak wings on the PSF), critical for the study of the inner structures. Furthermore, the use of polarimetry revealed an important piece of information, out of reach without this technique. Analysing polarimetric data is more complex, but thanks to modelling, it is possible to derive new conclusions and therefore improve our understanding of AGN inner regions.
Last but not least, we developed a radiative transfer code to reproduce the observed polarimetric features directly from the inferred geometry of the inner region of the AGN. This simulation code was entirely developed at LESIA, starting from an initial release by Jan Orkisz and completely updated toward polarimetric capabilities in the context of this thesis. Radiative transfer codes have already proven very powerful for interpreting complex sets of data, and when applied to the polarimetric studies of this work, MontAGN allowed us to go deeper in the fundamental understanding of the inner parsecs of an AGN.
Super Stellar Clusters
Thanks to the other set of data from CANARY, we were able to conduct a preliminary study of the SSCs in IRAS 21101+5810, the first for this galaxy. We implemented a new fitting algorithm, able to provide photometry for targets lying on a complex background. The tests conducted were satisfactory and the routine was able to fit properly most of the clusters, a significant fraction of them being out of reach of more classical methods. In order to compare the results obtained with this tool, we also created colour maps of IRAS 21101+5810.
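For illustration, the sketch below shows the kind of fit involved, on synthetic data: a 2D Gaussian cluster on top of a locally planar background, fitted with scipy. This is only a simplified stand-in; the actual routine estimates the background by solving Poisson's equation (see figures 2.8 and 2.9) and uses the instrument PSF rather than a free Gaussian width.

import numpy as np
from scipy.optimize import curve_fit

def cluster_model(coords, amp, x0, y0, sigma, bg0, bgx, bgy):
    """2D Gaussian cluster on top of a locally planar background."""
    x, y = coords
    gauss = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
    background = bg0 + bgx * x + bgy * y
    return (gauss + background).ravel()

def fit_cluster(stamp, guess_xy, guess_sigma=2.0):
    """Fit one cluster in a small postage stamp; returns the flux and best-fit parameters."""
    ny, nx = stamp.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = [stamp.max() - np.median(stamp), guess_xy[0], guess_xy[1],
          guess_sigma, np.median(stamp), 0.0, 0.0]
    popt, _ = curve_fit(cluster_model, (x, y), stamp.ravel(), p0=p0)
    amp, x0, y0, sigma = popt[:4]
    flux = 2.0 * np.pi * amp * sigma**2   # integrated flux of the Gaussian component
    return flux, popt

# Example on synthetic data: a faint cluster on a tilted background.
ny, nx = 32, 32
y, x = np.mgrid[0:ny, 0:nx]
truth = cluster_model((x, y), 50.0, 15.0, 17.0, 2.5, 10.0, 0.3, -0.2).reshape(ny, nx)
stamp = truth + np.random.normal(0.0, 1.0, truth.shape)
flux, params = fit_cluster(stamp, guess_xy=(15, 17))
print("Recovered flux:", flux)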
We then compared these results with the star formation simulator GALEV, which gave us colour indices for star clusters at different evolutionary stages and with different sets of parameters.
Thanks to the combination of these different tools, we obtained constraints on the extinction, about A_V ≈ 3 on average for the system, and on the ages of the clusters, which are most compatible with ages roughly between 3 and 100 Myr, with a small dispersion in age and extinction between the clusters. Furthermore, the results are compatible with a metallicity following the evolution of the SSCs through feedback, but we cannot exclude other metallicities.
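To make the role of extinction explicit, the following sketch converts an assumed A_V into colour excesses using indicative broad-band extinction ratios (Cardelli-like values, rounded; not the exact coefficients used with GALEV):

# Approximate ratios A_band / A_V for a Cardelli-like extinction law (R_V ~ 3.1).
# These coefficients are indicative only, not the exact ones used with GALEV.
EXTINCTION_RATIO = {"B": 1.30, "I": 0.59, "H": 0.17, "K": 0.11}

def colour_shift(band1, band2, a_v):
    """Reddening of the (band1 - band2) colour index for a given A_V (in magnitudes)."""
    return (EXTINCTION_RATIO[band1] - EXTINCTION_RATIO[band2]) * a_v

a_v = 3.0  # average extinction inferred for IRAS 21101+5810
for b1, b2 in (("B", "I"), ("I", "H"), ("H", "K")):
    print("E(%s-%s) for A_V = %.1f : %+.2f mag" % (b1, b2, a_v, colour_shift(b1, b2, a_v)))
# An A_V of ~3 reddens B-I by ~2 mag but H-K by only ~0.2 mag, which is why the
# optical colours are the most sensitive to extinction in the colour-colour diagrams.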
By going deeper in the analysis, we should be able to better constrain the ages of the clusters, which might be linked to the starburst time-scale and therefore to the dynamical scale of the interaction between the two components. We also aim at reducing our uncertainties by better constraining the algorithm. An improvement, for example of the source estimation, would give access to a better background estimation and thus to an even more reliable photometry. This would also bring more statistics on the SSC population, whose understanding is still limited by the low number of observable systems.
Thanks to these tools, we will be able to analyse a growing number of SSCs. Indeed, we already obtained images of a third system, IRAS 17138-1017, that we will analyse through the same process. On longer time-scales, we will have access to a significantly larger sample of targets thanks to the next generation of instruments. The JWST should be launched next year and will provide a turbulence-free 6.5 m telescope able to image with an unprecedented angular resolution many extragalactic targets, among which several starburst galaxies likely to host SSCs. Furthermore, as demonstrated by CANARY, new AO systems will soon be able to increase the sky coverage for such targets. This, combined with the horizon of the ELTs, bringing the telescopes' diameters to almost 40 m, will give us a significant amount of work.
If we can reach SSCs situated in galaxies at larger distances thanks to these new capabilities, we will be able to trace the evolution of the characteristics of SSCs as a function of redshift. With these new pieces of information, we should be able to improve our models of star and cluster formation, toward a model valid for all the different types of clusters.
This would for example help to address the question of the multiple stellar populations highlighted by Krause et al. (2016). GCs show evidence of multiple populations, while SSCs do not seem to show the same feature (see Moraux et al. 2016). Multiple scenarios were developed to try to explain these observed differences, but so far no clear answer to this question has been found (Bastian 2016). If this point were solved, we could then move to deeper investigations, like for instance precise studies of the impact of the clusters on the evolution of galaxies on long time-scales.
Active Galactic Nuclei
With our high angular resolution investigation of the inner parsecs of AGN, we were successful in reproducing the observed patterns with simulations of a simple model. We applied this to NGC 1068, in which the observed polarisation pattern is compatible with the idea of photons scattered twice, proposed by Bastien & Menard (1990) and later validated by Murakawa (2010) on YSOs. This was the basis of our analysis of our observations of NGC 1068 (Gratadour et al. 2015). The code MontAGN allowed us to simulate NIR polarimetric observations of an AGN featuring an ionisation cone, a torus and an extended envelope.
Despite limiting ourselves to a simple case where the various structures have a constant density of dust or electrons, we were able, with spherical grains only, to constrain the optical depth of the different dust structures so as to obtain a polarisation pattern similar to the one observed in NGC 1068.
We highlighted the important role of both the ionisation cone and the dusty torus. The cone allows photons to be scattered toward the equatorial plane. In order for this redirection to be efficient, we found that electrons are much more satisfactory than dust grains, because they are non-absorbing, a hint that these cones are more likely populated by electrons, as suspected by Antonucci & Miller (1985). We estimate the optical depth in the ionisation cone, measured vertically, to be in the range 0.1-1.0 in the first 25 parsecs from the AGN. This is confirmed by the first results from our series of observations from R to K NBs.
We found that, for the light coming directly from the central region of the AGN to be blocked, the torus should have an in-plane integrated optical depth greater than 20 in the considered band, so τKs ≳ 20 in the case of NGC 1068. Furthermore, the torus should not only be constituted of an optically thick dense part, but also of an almost optically thin extended part. We find satisfactory results with an outer part of optical depth 0.8 < τH < 4.0 and an extension ranging from 10 to 25 pc from the centre. We considered structures with constant density, which is unrealistic, but we expect similar results for a more continuous torus with a density decreasing with the distance to the centre. This is a direction in which we aim to carry our study in the near future.
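As a back-of-the-envelope check of these numbers (illustrative values only, with hypothetical densities, not the MontAGN machinery), the transmitted fraction of the direct light through an optical depth τ is e^(-τ), and for a constant-density structure τ is simply the column density times the extinction cross-section:

import numpy as np

PC = 3.086e16  # one parsec in metres

def optical_depth_dust(n_grains_m3, grain_radius_m, q_ext, path_pc):
    """tau = n * sigma_ext * L for spherical grains, with sigma_ext = Q_ext * pi * a^2."""
    sigma_ext = q_ext * np.pi * grain_radius_m**2
    return n_grains_m3 * sigma_ext * path_pc * PC

def optical_depth_electrons(n_e_m3, path_pc):
    """Thomson optical depth: tau = n_e * sigma_T * L (wavelength independent)."""
    sigma_thomson = 6.6524e-29  # m^2
    return n_e_m3 * sigma_thomson * path_pc * PC

# Attenuation of the direct light for a few in-plane optical depths of the torus.
for tau in (1.0, 5.0, 20.0):
    print("tau = %5.1f  ->  transmitted fraction = %.2e" % (tau, np.exp(-tau)))
# tau ~ 20 leaves only ~2e-9 of the direct flux: the central engine is effectively hidden.

# Hypothetical densities (0.1 micron grains, Q_ext ~ 1), chosen only to illustrate the scales:
print("dust tau over 1 pc     :", optical_depth_dust(2e-2, 1e-7, 1.0, 1.0))   # ~19
print("electron tau over 25 pc:", optical_depth_electrons(1e10, 25.0))        # ~0.5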
This work opens several perspectives, as most of the studies conducted here will be investigated further. On short time-scales, the publication of the simulation code MontAGN will require work on the input and output management in order to make the code easier to use for non-developers. This work will be the occasion to implement the last short-term improvements planned. These include for example new dust geometries, like progressive dust tori or fragmented media, both already partially implemented. With these new tools, we will be able to conduct more realistic simulations and possibly reproduce the few features that we could not obtain through our simulations up to now, especially the ridge at the very centre of the observed polarisation maps.
There are other improvements that we will be able to conduct on longer time-scales. One major improvement will be to include elongated aligned grains and potentially other dust species, like Polycyclic Aromatic Hydrocarbons (PAH) or nano-diamonds for example. This will give us the opportunity to study the polarisation patterns created by spherical grains and by elongated grains under the same conditions. We will therefore be able to address the question of the mechanism responsible for the observed polarisation of the light from AGNs.
One particularly significant limitation of our studies is the time required by the simulations. Thanks to the long experience of the HAR team of LESIA in GPU usage for scientific computations (in particular on AO systems), we identified that MontAGN can be ported to GPU. We therefore plan to make this change, which should accelerate the code execution enough to open new opportunities for its usage.
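The reason the port is attractive is that the packets are independent and can be advanced in large batches. The sketch below is a minimal illustration of such batched photon stepping, using NumPy as a stand-in for a GPU array library and reducing the physics to isotropic scattering in a uniform sphere (the real code samples Mie phase functions and tracks Stokes vectors):

import numpy as np  # a GPU array library could be substituted for np in this sketch

def propagate_batch(n_packets, kappa, rng, max_steps=1000):
    """Propagate many photon packets at once through a uniform, isotropically scattering medium.

    kappa is the inverse mean free path; packets stop once they leave the unit sphere.
    """
    pos = np.zeros((n_packets, 3))
    mu = 2.0 * rng.random(n_packets) - 1.0
    phi = 2.0 * np.pi * rng.random(n_packets)
    sint = np.sqrt(1.0 - mu**2)
    direction = np.stack([sint * np.cos(phi), sint * np.sin(phi), mu], axis=1)
    alive = np.ones(n_packets, dtype=bool)

    for _ in range(max_steps):
        if not alive.any():
            break
        # Sample a free path length for every packet still inside the medium.
        step = -np.log(rng.random(alive.sum())) / kappa
        pos[alive] += direction[alive] * step[:, None]
        # Packets leaving the unit sphere escape and stop scattering.
        alive &= np.linalg.norm(pos, axis=1) < 1.0
        # Isotropic re-emission for the surviving packets.
        n_alive = alive.sum()
        mu = 2.0 * rng.random(n_alive) - 1.0
        phi = 2.0 * np.pi * rng.random(n_alive)
        sint = np.sqrt(1.0 - mu**2)
        direction[alive] = np.stack([sint * np.cos(phi), sint * np.sin(phi), mu], axis=1)
    return pos

rng = np.random.default_rng(0)
positions = propagate_batch(100_000, kappa=5.0, rng=rng)
print("mean escape radius:", np.linalg.norm(positions, axis=1).mean())  # slightly above 1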
One other work that will be undertaken is the interpretation and publication of the newly obtained data on NGC 1068. Observation from IRDIS in September 2016 and from ZIMPOL conducted in early 2017 are currently analysed and we have already drawn some conclusions from early comparisons to our models.
Description of the proposed programme
A -Scientific Rationale:
Polarimetric observation is one of the key methodologies to investigate the nature of AGN, as demonstrated 30 years ago by Antonucci and Miller, who measured broad line emission in the polarized light of NGC 1068 (Antonucci and Miller 1985), the archetypal Seyfert 2 galaxy, which was catalogued as such because its standard spectrum exhibited only narrow lines. Assuming that a dusty torus could hide the broad line region in Seyfert 2, Antonucci (1993) proposed the unified model for AGN, stating that Seyfert 1 and 2 types are the same type of object harboring a luminous accretion disk surrounded by a thick torus, but seen under different viewing angles. For many years, the AGN unified model has been tested with simulations (Marin et al. 2012) and confronted with a number of observations (see Gratadour et al. 2005 and Packham et al. 2007 for instance). As one of the closest Seyfert 2 galaxies, at a distance of 14.4 Mpc, NGC 1068 is an ideal laboratory to investigate further this model of AGN and its interactions with its host galaxy. Multi-wavelength observations at a few parsec scale allowed the discovery of several features close to the Central Engine of this object. These include a structured radio jet (Gallimore et al. 1996) and a fan-shaped Narrow Line Region seen in UV and visible (Capetti et al., 1997). Near-IR is also very well suited to explore the AGN environment because it allows one to observe through dense dusty regions and traces the hot dust, especially the one at the internal edge of the torus, directly heated by the luminous accretion disk. The near-IR range also contains several lines of interest, among which are ro-vib lines of H2, recombination lines of H+ (Brγ) and FeII lines, tracing high energy phenomena like jets and accretion disks (Barbosa et al. 2014). One master piece of the model is the presence of a dusty torus surrounding the central engine. Several hints point to its existence, however no clear imaging has been obtained so far. Last year we took advantage of the excellent performance of SPHERE to propose, in the frame of the Science Verification program, a polarimetric observation of NGC 1068 at the highest possible angular resolution. The Ks and H images, with and without a coronagraph, showed a very precisely defined hourglass-shaped polarization pattern centered on the bright core that fits very well the visible cone of the NLR and, even more important, the polarization angle maps traced a thin and compact elongated structure perpendicular to the bi-cone axis that we interpreted as the direct signature of the torus (Gratadour et al. 2015): see Figure 1. The goal of the proposal is to go further in this approach of using polaro-imaging to probe the physical conditions in the torus of NGC 1068. One issue is to disentangle the origin of the peculiar polarization of the putative torus, either due to dichroic absorption/emission by aligned elongated grains or to Rayleigh scattering on more or less spherical grains. In the first case, the magnetic field intensity and orientation could be constrained. In the second case, the pattern we observed is compatible with scattering on spherical grains, as in the case of disks in YSO (Murakawa 2010), if the optical depth is of the order of unity, an important piece of information. The behavior of the polarization (degree and direction) with respect to wavelength could therefore indicate the actual dominant mechanism and give access to important parameters. We thus propose polaro-images in several narrow-band filters (continuum and Brγ).
We also wish to benefit from the excellent correction by SPHERE to look for the emission of the hot molecular hydrogen in the very central region, a challenging observation given the important continuum, but it could locate the region where molecular hydrogen is protected enough to survive the intense UV field. Observing a similar behavior of the near-IR polarization in other Seyfert galaxies would be extremely interesting, so we will propose in P98 a pilot program where a few nearby AGNs will be observed in broad-band polaro-imaging.
B -Immediate Objective:
A first aim of the observations is to complete the near-IR polarimetric information in the very central nuclear region of NGC 1068 by obtaining four narrow-band polarimetric images, three in the continuum (filters Cnt H, Cnt K1 and Cnt K2) and one in Brγ. The basic idea is to look for a trend in the degree and direction of the polarization vector that could indicate polarization by dichroic absorption or dichroic emission by elongated grains, and even a potential switch from one mechanism to the other (Efstathiou et al. 1997). Indeed, the polarization vector measured at 10 microns by Packham (Packham et al. 2007) is perpendicular to the one we measured at K, an indication of such a possible switch. In the case of the continuum NB filters, the dominant source of photons is the hot dust at the internal edge of the torus, practically a point source, while in the case of the Brγ filter, the ionized gas could fill a much larger volume, so that a comparison between the different maps could be indicative of the structuration of the medium. Note that we already developed a numerical model of radiative transfer, including scattering, that will be used to interpret the data set. The second aim is to try to detect and measure the extent of the molecular hydrogen that should be the main component, in mass, of the torus. The difficulty, already faced in previous observations with NACO (Gratadour 2005) and SINFONI (Müller Sànchez et al., 2009), is that the continuum due to very hot dust is extremely high and dilutes the flux of the line. On the image obtained by Müller Sànchez et al. (Fig. 2) there is a clear hint of an elongated structure in the H2 v=1-0 S(1) ro-vib line, aligned with the one we delineate in polaro-imaging, however the very central region cannot be mapped because of the high contrast. We expect that thanks to the use of the coronagraph in the 1-0 S(1) line, the most prominent central source of this continuum will be hidden, so that the molecular torus will be detectable. If successful, this observation would bring strong constraints on the mass of molecular gas and on its distribution, two key parameters in the unified model of AGNs.
Attachments (Figures)
8. Justification of requested observing time and observing conditions
Lunar Phase Justification:
The Lunar phase requirement is linked to the VLT active optics, which could encounter problems, according to the SPHERE manual, when the Moon is less than 30 degrees from the targets.
Time Justification: (including seeing overhead)
The ESO Exposure Time Calculator is not available for the DPI mode of SPHERE. However, it is offered for the CI mode and the corresponding output can be used as an estimator for the DPI mode. Furthermore, as the same target has been observed with success during SPHERE SV, we were able to refine this estimate using the available broad-band polarimetric data and the narrow-band filter specifications. To meet the strong SNR requirements, critical in polarimetric imaging, we request 3200s on target per band in DPI mode (DIT=16 x NDIT=20 x NEXPO=5 x 2 positions of the Half-Wave Plate) and 3072s in CI mode (DIT=64 x NDIT=16 x NEXPO=3). This should allow us to reach a SNR of about 100 (i.e. 1 percent error on the polarization angle) on the extended emission close to the central source along the torus axis. As we aim at measuring a small variation (a few percent) in polarization angle from one narrow band to another, reaching this SNR is critical to our program. Since one of our main goals is to look for a trend in the degree and direction of the polarization vector on the very central source and its close environment, we need to avoid saturation on the core, which explains the rather low DIT in our DPI mode observations as compared to the one calculated for the CI mode, for which the DIT can be larger thanks to the use of the coronagraph. Using the overhead formula given in the SPHERE Manual, the 3200s per band in DPI mode and 3072s in CI mode translate into totals of 6810s per band in DPI mode and 6258s in CI mode including skies and overheads. Hence, assuming 4 narrow-band filters in DPI mode and two filters in CI mode, we obtain a total time of about 11h. We thus request 3 half nights to complete this program; adding 6 acquisition templates (6x1320s) leads to a total observing time request of 13.3h.
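For reference, the time budget above can be checked with a few lines; this is only a sketch reproducing the arithmetic, with the per-band totals including skies and overheads taken from the figures quoted above rather than recomputed from the SPHERE manual:

# DPI mode: DIT x NDIT x NEXPO x 2 half-wave-plate positions, per narrow band.
dpi_on_target = 16 * 20 * 5 * 2          # = 3200 s
ci_on_target = 64 * 16 * 3               # = 3072 s

# Per-band totals including skies and overheads, as quoted above.
dpi_total_per_band = 6810.0
ci_total_per_band = 6258.0

n_dpi_bands, n_ci_bands = 4, 2
science = n_dpi_bands * dpi_total_per_band + n_ci_bands * ci_total_per_band
acquisition = 6 * 1320.0                 # six acquisition templates

print("on-target per band (DPI, CI):", dpi_on_target, ci_on_target)
print("science time : %.1f h" % (science / 3600.0))                   # ~11.0 h
print("total request: %.1f h" % ((science + acquisition) / 3600.0))   # ~13.2-13.3 h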
8a. Telescope Justification:
SPHERE, with its extreme AO system, has already proven to be the only instrument allowing us to reach the required contrast close to the very central source of NGC 1068. While SPHERE has been designed to hunt for exoplanets, it shows excellent results on NGC 1068 because of the rather limited extension of the core (used as the guide source for the AO system), which completely dominates the extended emission at R. The performance achieved during SV significantly surpasses what was obtained with NaCo in the past decade, which fully justifies the use of SPHERE for this program.
8b. Observing Mode Justification (visitor or service):
For SPHERE DPI observations, only visitor mode is offered. Additionally, several AO experts are involved in the team and will be able to provide critical advice for efficiently closing the AO loops on this extended target and optimizing the AO performance to reach maximum contrast.
8c. Calibration Request:
Abstract
Despite having strong theoretical models, the current limitation in our understanding of the small-scale structures of galaxies is linked to the lack of observational evidence. Many powerful telescopes and instruments have been developed in the last decades; however, one of the strongest tools, namely Adaptive Optics (AO), can only be used on a very limited number of targets. Indeed, for AO to be efficient, a bright star is required close to the scientific target, typically within 30 arcsec. This is mandatory for the AO system to be able to measure the atmospheric turbulence, and this condition is rarely satisfied for extended extragalactic targets such as galaxies. The main part of this thesis work consisted in going deeper in the analysis of the inner tens of parsecs of Active Galactic Nuclei (AGN) by combining different techniques to obtain and interpret new data. In this context, we developed a new radiative transfer code to analyse the polarimetric data. A second part of my work was dedicated to a high angular resolution study of Super Star Clusters (SSC) in a new system, thanks to data obtained with the AO demonstrator CANARY.
Keywords
Galaxies: Seyfert, star clusters, Techniques: photometric, polarimetric, high angular resolution, Methods: observational, numerical, Radiative transfer.
Figure 1.1 - Example of an Airy disk, in logarithmic scale.
Figure 1.2 - Two examples of PSF, diffraction limited and turbulence limited. D refers to the telescope diameter and r0 to the Fried parameter, defined in the following text. Image from G. Rousset.
Figure 1.3 - Basic Adaptive Optics principle. Image credits: Lawrence Livermore National Laboratory and NSF Center for Adaptive Optics.
Figure 1.4 - Principle of a Shack-Hartmann wavefront sensor, from Kern et al. (1989).
Figure 2.1 - NGC 6240 in Kp band by NIRC2 - Keck. Many SSC are distinguishable on this image as small clouds spread around the two nuclei. The angle between the y axis and North is about 160°.
Figure 2.2 - IRAS 21101+5810 in H band by CANARY - WHT. The angle between the y axis and North is about 130°.
Figure 2.3 - NGC 6240. Images represent intensity maps (in log10(ADU)) obtained in F435W (B) and F814W (I) bands with ACS - HST; F160W (H) band with Nicmos - HST; H and Kp bands with CANARY - WHT; Kp band with NIRC2 - Keck.
Figure 2.4 - IRAS 21101+5810. Images represent intensity maps (in log10(ADU)) obtained in F435W (B) and F814W (I) bands with ACS - HST; F160W (H) band with Nicmos - HST; H and Kp bands with CANARY - WHT.
Figure 2.5 - Image and profile of the star in the image of IRAS 21101+5810, with CANARY in H band, a good calibrator for the instrument PSF.
Figure 2.6 - Examples of MTF on a 3.6 m telescope: MTF normalised to 1 in cases without correction: S1(f); with AO correction: S2(f); ideal telescope MTF with central obscuration: S3(f). In K band, uncorrected Strehl of 0.05, corrected Strehl of 0.28, r0 = 12 cm, wind speed of 3 m.s-1. From Rigaut et al. (1991).
Figure 2.7 - Example of a fit of a 2D Gaussian on one of the clusters in the H band image of IRAS 21101+5810. Left panel shows the initial image, with the cluster to be fitted in the centre; right panel displays the obtained fit, with a colour bar scaled to the previous map (with a constant background).
Figure 2.8 - Example of a fit using Poisson equation solving with a 2D Gaussian on simulated data. First panel shows the initial image with varying background. We added on the second panel a point-like source, with a Gaussian distribution. Last panel shows the resulting background estimation from our fitting method. Image from Daniel Rouan.
Figure 2.9 - Example of a fit of a SSC thanks to Poisson equation solving with a 2D Gaussian. Upper left panel shows the initial image, upper right displays the image with the background estimation through the Green function, bottom left shows the final fitted cluster and bottom right panel corresponds to the image after subtraction of the fitted cluster.
Figure 2.10 - Reference location of the detected SSC in IRAS 21101+5810 on the H CANARY image (see figure 2.4). Positions of the centre of the galaxy and of the major clump component are marked A and B respectively. The star (S) is also indicated. Circles only indicate the position of the SSC and their size has no photometric significance.
Figure 2.11 - Colour maps for IRAS 21101+5810: F435W-F814W, H-F160W, F814W-H, F814W-F160W, H-K and F160W-K respectively.
Figure 2.12 - Colour-colour diagrams for IRAS 21101+5810. First panel corresponds to I-H as a function of B-I and the second one to H-K as a function of B-I, both for a metallicity including chemical evolution. Black crosses represent GALEV simulation data points, spaced every 100 Myr, for two different extinctions, while red crosses show the measured photometry of the clusters in IRAS 21101+5810, with their associated error bars (through colour maps). The black segment represents the effect of a variation of 1 in E(B-V).
Figure 2.13 - Same as figure 2.12 for solar metallicity (first column) and for a fixed metallicity of [Fe/H]=+0.3 (last column). First row corresponds to I-H as a function of B-I and the second one to H-K as a function of B-I.
Figure 3.1 - Illustration of light propagation. Here, k is along z, while E is along x and B along y. Note that the illustration represents a precise photon propagating and that therefore the z axis also corresponds to an evolution of time.
Figure 3.2 - Different types of polarisation: top: two different linear polarisations; bottom left: circular polarisation; bottom right: elliptical polarisation (all viewed from the observer). The electric field's vector moves along the traced curves.
Figure 3.3 - Malus' law.
Table 3.1 - Examples of Stokes parameters for some polarisations.
The Stokes parameters can be written in the form of a vector, called the Stokes vector, with symbol S = (I, Q, U, V)^T.
Figure 3.4 - Determination of Stokes parameters from ellipse parameters.
Figure 3.6 - Examples of α phase functions in the case of Mie scattering, for x = [0.1, 0.3, 1.0, 3.0, 5.0, 10.0]. Second image is a zoom on the central part.
Figure 3.8 - Examples of β phase function in the case of Mie scattering, for x = 1.0. Note that this is not the conditional probability density function because there is no knowledge of α.
Figure 3.9 - Examples of polarisation (S12/S11) depending on the scattering angle α in the case of Mie scattering, for x = 1.0.
Q+, Q-, U+ and U- are equivalent to I0, I90, I45 and I-45 and are the measured intensities depending on the polariser angle. The corresponding positions are shown in figure 3.10. V is not easily translated into a sum of measured intensities; it requires two measurements with a phase introduced between the two components, to measure the positive and negative circular polarisation.
Figure 3.10 - Example of measured Stokes parameters depending on the position of a polariser, first image from a source point of view and second image as viewed by a detector.
Figure 3.11 - Maps of I, Q, U and V in the case of a star surrounded by a dust shell of τK = 0.05 simulated in K band.
Figure 3.12 - Maps of p_lin and θ in the case of a star surrounded by a dust shell of τK = 0.05 simulated in K band.
Figure 3.13 - Maps of I with polarisation vectors in the case of a star surrounded by a dust shell of τK = 0.05 simulated in K band.
Figure 3.14 - Centro-symmetric map of φ.
Figure 3.15 - Maps of Qφ, Uφ in the case of a star surrounded by a dust shell of τK = 0.05 simulated in K band.
Figure 3.16 - Maps of differences to the centro-symmetric pattern, in the case of a star surrounded by a dust shell of τK = 0.05 simulated in K band. The first image takes the absolute value of this difference while the second one shows the relative difference.
Figure 4.1 - VLA (Very Large Array) maps of the central region of NGC 1068 at 4.9 GHz with resolution 0.4 × 0.4 arcsec. Contours plotted are at -0.1 % (dotted), 0.1 %, 0.2 %, 0.3 %, 0.4 %, 0.6 %, 1 %, 1.5 %, 2 %, 3 %, 5 %, 7 %, 10 %, 15 %, 20 %, 30 %, 50 %, 70 % and 90 % of the peak brightness of 0.273 Jy (beam area)^-1. Insets (a), (b) and (d) show details of the brighter regions at 15.0 GHz. From Wilson & Ulvestad (1983).
the NIR images. A sketch of these regions was drawn by Lopez-Rodriguez et al. (2016) and is displayed on the right panel of figure 4.2.
Figure 4.2 - Sketch of the central 100 × 100 pc (right panel) with a zoom-in of the central few parsecs (left panel) of NGC 1068 (both panels are not on linear scale). In the central few parsecs (left panel): the central engine (central black dot), the ionised disk/wind (dark grey region), the maser disk (light grey region), the obscuring dusty material (brown dots) and the radio jet (black solid arrow) are shown. In the 100 × 100 pc scale (right panel): the central few parsecs are represented as a brown ellipsoid with the Position Angle (PA) of the obscuring dusty material. The inner radio jet is shown as a black arrow, bending after the interaction with the North knot (grey circle), towards the NW knot (grey circle). The [OIII] ionization cones (yellow polygons) and the NIR bar (light-blue ellipse) are shown. North is up and East is left. From Lopez-Rodriguez et al. (2016).
Figure 4.3 - M band deconvolved images of NGC 1068 with two different deconvolution techniques, from Gratadour et al. (2006).
Figure 4.4 - A revised version of the kinematic model of the NLR by García-Burillo et al. (2014), which accounts for the molecular outflow (denoted in the figure as CO outflow) detected by the Atacama Large Millimeter/submillimeter Array (ALMA) in the CND (seen in projection at N and S, for northern and southern knots) and farther north in the molecular disk. The figure shows a cross-cut of the NLR as viewed from inside the galaxy disk along the projected direction of the radio jet (PA ≈ 30°; shown by the green line).
Figure 4.5 - Polarisation image of the central 25 x 15 arcsec² in H band. The zero of the coordinates corresponds to the centroid of the nuclear region in this band. A 2.4 arcsec length polarisation vector corresponds to 10 % polarisation. From Packham et al. (1997).
Fig. 1. Output of our data reduction pipeline. North is up, east to the left. A) Total intensity image (color bar in arbitrary units); B) degree of linear polarization; C) polarized intensity (color bar in arbitrary units); D) polarization angle (in degrees). The total intensity image has been histogram-equalized between the 2 values in the color bar; i.e., each byte in the color map occurs with equal frequency between the 2 specified values.
latter, Stokes parameters are obtained by computing the ratio of the intensity in one of the beams for two positions of the halfwave plate over the same ratio for the other beam. Four positions of the halfwave plate are thus required to retrieve the degree of polarization and the polarization angle. The total intensity is obtained by summing all the images from both beams. These maps are shown in Fig. 1 for the H band with the addition of the polarized intensity image (Panel C), obtained from the product of the total intensity (Panel A) by the degree of polarization (Panel B). The total intensity image has been histogram-equalized to help to show the nice spiral structure of the diffuse emission at low contrast; it is not representative of the real intensity image, entirely dominated by the central source if displayed on a linear or logarithmic scale.
Fig. 2. Left panel: result of the difference between the polarization angle map and a purely centro-symmetric pattern. Right panel: the top left image is a magnified version of the left panel, around the central source, and top right is the result of the same processing on the K' band angle map. The bottom right image is a magnified version of the degree of linear polarization found in Fig. 1, on which we overlaid the direction of the polarization vectors, showing the clear patch of aligned vectors in this region in the data before the subtraction of the centro-symmetric pattern.
Figure 4.6 - SPHERE-IRDIS BB filters. From the SPHERE User Manual.
Figure 4.7 - NGC 1068 in Ks band, Qφ and Uφ maps.
a Dual Polarisation Imager (DPI) program of NB observations in a few continuum bands (see proposal in Appendix A and filters CntH, CntK1 and CntK2 in figure 4.8). The program also included an observation of H2 very close to the nucleus, with and without coronagraph, in Classical Imaging (CI).
Figure 4.8 - SPHERE-IRDIS NB filters. From the SPHERE User Manual.
Figure 4.9 - Measurements of the instrumental polarisation efficiency (not accounting for the telescope) for four broad band filters. For best use of the DPI mode, one should avoid the pink zone where the efficiency drops below 90 % (> 10 % loss due to crosstalks). For that, one should make sure the derotator angle stays < 15° or > 75°. From the SPHERE User Manual.
Figure 4.10 - Derotator position for NB observations, CntH, CntK1 and CntK2. The red bands indicate the zones that one should avoid.
Figure 4.11 - NGC 1068 in CntK2 NB. First image was created without selection, using all images. Second image was generated from images selected on their derotator angle.
Figure 4.12 - Results of reducing CntK1 NB images with different sky strategies. First column shows the polarisation degree maps and second the polarisation angle. First row corresponds to the treatment with all skies combined, second one was obtained using 1/4 of the skies, third with the skies corresponding to the HWP positions, fourth and fifth without sky subtraction. Fifth images were not re-centred.
Figure 4.14 shows the histogram of the degree of polarisation of these images.
Figure 4.13 - Impact of the sky background on the degree of polarisation. First image is the initial polarisation degree generated, second one represents the polarisation degree observed, with sky subtraction, and the third image shows the polarisation degree as measured without subtracting the sky.
Figure 4.15 indicates that the sky correction has very little impact on the measured polarisation angle. This is furthermore confirmed by the reduced image of figure 4.12. However, that is not the case for the polarisation degree (figures 4.13 and 4.14). The sky-corrected image has the lowest pseudo-noise, however the uncorrected image has a polarisation degree with an important offset with respect to the initial theoretical value, leading to a wrong estimate. Despite not giving the precise value of expected degree
Figure 4.14 - Histogram of figure 4.13 on the impact of the sky background on the degree of polarisation. The blue histogram is the initial polarisation degree generated, the green one represents the polarisation degree observed, with sky subtraction, and the red histogram shows the polarisation degree as measured without subtracting the sky.
Figure 4.15 - Impact of the sky background on the degree of polarisation. First image is the initial polarisation angle generated, second one represents the polarisation angle observed, with sky subtraction, and the third image shows the polarisation angle as measured without subtracting the sky.
Figure 4.16 - NGC 1068 in CntH NB. 1st row shows the total intensity recorded (in log10(ADU)); 2nd row: left shows the polarised intensity (in ADU) and right the degree of polarisation (in %); 3rd row: left shows the Qφ map and right the Uφ map (both in ADU); 4th row: left shows the polarisation position angle (in °) and right the difference angle to centro-symmetric (in °).
Figure 4.17 - NGC 1068 in CntK1 NB. 1st row shows the total intensity recorded (in log10(ADU)); 2nd row: left shows the polarised intensity (in ADU) and right the degree of polarisation (in %); 3rd row: left shows the Qφ map and right the Uφ map (both in ADU); 4th row: left shows the polarisation position angle (in °) and right the difference angle to centro-symmetric (in °).
Figure 4.18 - NGC 1068 in CntK2 NB. 1st row shows the total intensity recorded (in log10(ADU)); 2nd row: left shows the polarised intensity (in ADU) and right the degree of polarisation (in %); 3rd row: left shows the Qφ map and right the Uφ map (both in ADU); 4th row: left shows the polarisation position angle (in °) and right the difference angle to centro-symmetric (in °).
Figure 4.19 - NGC 1068 in NB H2 and CntK1, classical imaging. The total intensity recorded (in log10(ADU)) is shown for the H2 band and for the CntK1 band respectively.
Figure 4.20 - NGC 1068 map of NB H2 over CntK1.
Figure 4.21 - NGC 1068 in R NB. Top left image shows the total intensity recorded (in log10(ADU)), top right shows the degree of polarisation (in %), bottom left shows the polarisation position angle (in °) and bottom right the difference angle to centro-symmetric (in °).
Figure 4.22 - Degree of polarisation (in %) in NGC 1068. The left panel shows the image in CntK1 with IRDIS and the right panel displays the image in R NB by ZIMPOL, corresponding to the centre of the FOV of IRDIS.
Figure 5.1 - Cumulative distribution (left) and probability density (right) in the case of a uniform distribution between 0 and 1.
Figure 5.2 - Illustration of the method of Von Neumann. The purple curve is the one we want to use as a probability function, the blue one is the envelope. The yellow line indicates a value of the parameter where the envelope is efficient while the green one is placed on a particularly inefficient region of the envelope.
Figure 5.3 - Illustration of the grid used in the MontAGN code. It displays the different cubic cells constituting the grid, the different sublimation radii of the dust species and the ionisation cone.
Figure 5.4 - Example of density grid used in MontAGN simulations. Slice of sampled densities in the cells corresponding to y = 0 in a simple AGN model with constant densities in three different structures.
Figure 5.5 - Examples of α phase functions with their corresponding envelope functions, in the case of Mie scattering, for x = [0.1, 0.3, 1.0, 3.0, 5.0, 10.0]. The normalised phase function corresponds to the ratio envelope over phase function.
Figure 5.6 - Temperature correction frequency distribution. Shown are the dust emissivities, jν = κν Bν(T), prior to and after the absorption of a single photon packet. The spectrum of the previously emitted packets is given by the emissivity at the old cell temperature (bottom curve). To correct the spectrum from the old temperature to the new temperature (upper curve), the photon packet should be re-emitted using the difference spectrum (shaded area), from Bjorkman & Wood (2001).
Figure 5.7 - Illustration of the reference angles used in the MontAGN code.
Figure 5.8 - Example of a map of the effective number of packets in the case of a star at the centre of a dust cocoon.
Figure 5.9 - Example of a map of the averaged number of scatterings and the corresponding map of the effective number of packets in the case of an AGN surrounded by a dust torus and with an ionisation cone.
Figure 5.10 - Averaged phase (left) and probability density (right) functions as measured with MontAGN, in the case of Mie scattering on a classical MRN distribution of silicates (not normalised, 4 × 10^5 photons were sent).
Figure 5.11 - Examples of α phase functions measured with MontAGN, in the case of Mie scattering, for x = [0.1, 0.3, 1.0, 3.0, 5.0, 10.0] on silicates (based on a grain size of 100 nm). 1 × 10^6 packets were launched.
Figure 5.12 - Example temperature maps generated by MontAGN. Left shows the maximum temperature inside a cell, as a function of radius and height. Right shows the number of updates that occurred in the cells at each position in radius and height. 5 × 10^6 packets were launched. The central star has a temperature of 5700 K.
Figure 5.13 - Example of a temperature profile computed with MontAGN for a constant density structure (r^0, right panel). 5 × 10^6 packets were launched and the central star has a temperature of 5700 K. Left is shown a profile obtained using another computation by Daniel Rouan with comparable parameters. The blue curve corresponds to a case similar to the graph obtained with MontAGN, with a constant density structure (r^0) including scattering.
Figure 5.14 - MontAGN centro-symmetric test maps. First map shows the intensity (in log scale) with polarisation vectors over-plotted. Bottom left image indicates the measured angle of polarisation θ and bottom right shows the difference angle to the centro-symmetric pattern. 10^5 packets at 2.2 µm were launched, the central star has a temperature of 5700 K and the shell has τKs ≈ 0.05.
Figure 5.15 - Polarisation degree depending on the main scattering angle α as measured using MontAGN on the first scattering. The polarisation degree is obtained through S12/S11, 4 × 10^5 packets were launched, incoming light is unpolarised.
Figure 5.16 - Polarisation degree depending on the main scattering angle α as measured using MontAGN. The polarisation degree is obtained through S12/S11, 4 × 10^5 packets were launched. The four profiles correspond respectively to the second, third, fourth and fifth or more scattering.
Figure 5.18 shows this simulation.
Figure 5.17 - Profiles of the distribution of scattering angle β for x = 1. First row corresponds to α = π/2 and second one to α = π/4. First column was measured with Q = 1 and U = 0 and second one corresponds to Q = U = 0.5. 10^6 packets were launched.
Figure 5.18 - Example of simulation with a viewing angle of 150°. Left map describes the model used, with a torus in orange, with a fainter part starting at 10 pc from the centre. Two ionisation cones are represented in green along the vertical axis. Right image shows the number of received packets per pixel, showing the shapes of the structures. ≈ 5 × 10^7 packets were launched.
STOKES was initially developed by R. W. Goosmann and C. M. Gaskell in 2007 in order to understand how reprocessing could alter the optical and ultraviolet radiation of radio-quiet AGN.
Fig. 1. Grain density (in kg/m^3) set for both models. Note that in STOKES the polar outflow is constituted of electrons, at a density allowing us to have the same optical depth. Left: first model: "model I". Right: second model with the dust shell: "model II".
Fig. 1. 1 µm polarimetric simulations of NGC 1068. Left figure shows the color-coded polarized flux in arbitrary units with the polarization position angle superimposed on the image. Right figure shows the color-coded degree of polarization (from 0, unpolarized, to 1, fully polarized).
Fig. 2. Same as Fig. 1 with the addition of optically thin interstellar matter around the model.
give to MontAGN an already existing model class object (see 4). This allows for example to use a previously computed temperature map, or to avoid computing again an already existing density grid. If not specified, MontAGN will create a new model class object. thetaobs [list of integers within the range [0,180]] ([0,90,180]) Keyword that triggers which inclination images to display at the end of the simulation.
1.3.1 Simple launch
Example of launching MontAGN with a model of a dust cocoon around a central star from an ipython session:
In [0]: run montagn
In [1]: model=montAGN(ask=0,usemodel=1,add=0,nphot=20000,filename='test')
containing the spectrum of the central object (Star, AGN...)
centrobject [string] () Label of the central source ('AGN', 'star'...)
[outer radius (in pc), [density coefficient of grains (in kg/m^3)], ratio of the disk height at the disk boundary to the disk outer radius, envelope mass infall (in Msol/yr), mass of the star (in Msol)]
[[disc outer radius (in m), [density coefficient of grains in the torus (in particles/m^3)], [density coefficient of grains in the envelope (in particles/m^3)], [density coefficient of grains in the cone (in particles/m^3)]]]
Figure 6.1 - Example of two photon paths in an AGN environment: the blue path will have an integrated optical depth of about 1-4 while the red one will be about 50-200.
Figure 6.2 - Vertical slices of the grain density of silicates (in log10(kg/m^3), first column) and the density of electrons (in log10(e-/m^3), second column) set for model 1 (first row), model 2 (second row) and model 3 (third row).
Figure 6.3 - Maps of the polarisation degree with an inclination angle of 90° at 800 nm for models 1, 2 and 3. First column shows maps from STOKES, second column corresponds to MontAGN maps. First row is for model 1, second row for model 2 and third row shows model 3. Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle.
Figure 6.4 - Maps of the observed averaged number of scatterings with an inclination angle of 90° at 800 nm for models 1, 2 and 3 with MontAGN. Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle (p = 1 is represented by a length of a pixel size).
Figure 6.5 - Grain density of silicates (in log10(kg/m^3), first column) and electron density (in log10(e-/m^3), second column) set for model 4 (first row) and model 5 (second row) (shown here for the silicates-electron composition).
Figure 6.6 - Maps of the observed averaged number of scatterings with an inclination angle of 90° at 800 nm with MontAGN. Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle. The first two images are for models with silicates and electrons, the last two images for models with silicates, graphites and electrons. The first and third images correspond to model 4 and the second and fourth to model 5.
Figure 6.8 - Extinction coefficient Qext as a function of wavelength, grain radius r and grain type (data from Draine 1985).
Figure 6.9 - Maps obtained from model 1 with MontAGN with an inclination angle of 90° at 500 nm. The first column corresponds to an ionisation cone with electrons, the second one with silicates, both after launching 1.5 × 10^7 packets. The first row shows the total intensity (in log10(J)), the second row the linearly polarised intensity (in J), with polarisation vectors represented, and the last row shows the linear degree of polarisation (in %).
Figure 6.10 - Maps obtained from model 2 with MontAGN with an inclination angle of 90° at 500 nm. First column corresponds to an ionisation cone with electrons, second one with silicates, both after launching 1.5 × 10^7 packets. First row shows the total intensity (in log10(J)) and second row shows the linear degree of polarisation (in %).
Figure 6.11 - Maps of the observed averaged number of scatterings with an inclination angle of 90° at 1.6 µm with MontAGN. The optical depth of the cone is fixed to 0.1, that of the extended torus to 0.8 and that of the torus is respectively 5, 10, 20, 50 and 100 (in H band).
Figure 6.12 - Same as figure 6.11. The optical depth of the cone is respectively 0.05, 0.1, 0.5, 1.0 and 10.0, those of the torus and its extended part are fixed to 20 and 0.8 (in H band).
Figure 6.13 - Same as figures 6.11 and 6.12. The optical depth of the cone is 0.1, that of the extended torus respectively 0.4, 0.8, 4.0, 8.0 and 80.0. That of the torus is fixed to 20 (in H band).
Figure 6.16 - Maps of the observed averaged number of scatterings with an inclination angle of 90° at 1.6 (top) and 2.2 µm (bottom). Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle.
derived τV = 40. Assuming clumpy structures, Alonso-Herrero et al. (2011) obtained an optical depth of about 50 in V per cloud by fitting MIR SEDs of NGC 1068. Lira et al. (2013); Audibert et al. (
Figure 6.17 - Map of the observed averaged number of scatterings with an inclination angle of 90° at 646 nm. Polarisation vectors are shown, their length being proportional to the polarisation degree and their position angle representing the polarisation angle.
Figure 7.1 - Comparison of CANARY performances on Kp band images of NGC 6240. Left panel shows the Keck - NIRC2 image, at the centre is the VLT - SINFONI image and right panel corresponds to the WHT - CANARY map. Image from Damien Gratadour.
Fig. 1: Left panel: NGC 1068 nucleus, degree of linear polarization in H band. North is up, east to the left. Right panel: result of the difference between the polarization angle map and a purely centro-symmetric pattern in H band.
Chapter 2
Super Stellar Clusters

Contents
2.1 The CANARY Instrument and MOAO
2.2 Observed Systems
2.2.1 NGC 6240
2.2.2 IRAS 21101+5810
2.3 Data Reduction
2.3.1 CANARY Reduction
2.3.2 HST - Nicmos
2.3.3 HST - ACS
2.3.4 Keck - NIRC2
2.3.5 Images Registration
2.4 Final Images
2.5 Photometry
2.5.1 PSF Estimation
2.5.2 Classical Fitting
2.5.3 Fitting using Poisson's Equation Resolution
2.5.4 Colour Maps
2.6 Comparison to Models
2.6.1 GALEV
2.6.2 Interpretation
2.7 Conclusions
Table 2.1 - Summary of data used in our study.
Object and band | Instrument | Exp. time (s) | Plate scale (arcsec/pixel) | Field (pixels) | Date
NGC 6240 H | CANARY | 800 | 0.03 | 256 × 256 | 22-25/07/2013
NGC 6240 Kp | CANARY | 560 | 0.03 | 256 × 256 | 22-25/07/2013
NGC 6240 F814W (I) | ACS | 720 | 0.05 | 4228 × 4358 | 11/02/2006
NGC 6240 F435W (B) | ACS | 1260 | 0.05 | 4228 × 4358 | 11/02/2006
NGC 6240 F160W (H) | Nicmos | 192 | 0.075948 × 0.075355 | 387 × 322 | 12/02/1998
NGC 6240 Kp | NIRC2 | 660 | 0.009952 | 1024 × 1024 | 03/08/2001
IRAS 21101+5810 H | CANARY | 3840 | 0.03 | 256 × 256 | 22-25/07/2013
IRAS 21101+5810 Kp | CANARY | 2480 | 0.03 | 256 × 256 | 22-25/07/2013
IRAS 21101+5810 F814W (I) | ACS | 830 | 0.05 | 4228 × 4358 | 03/11/2005
IRAS 21101+5810 F435W (B) | ACS | 1425 | 0.05 | 4228 × 4358 | 03/11/2005
IRAS 21101+5810 F160W (H) | Nicmos | 2496 | 0.075948 × 0.075355 | 387 × 322 | 10/06/2008
CMC 513807 H | CANARY | 0.5 × 2 | 0.03 | 256 × 256 | 24/07/2013
CMC 513807 Kp | CANARY | 5 × 2 | 0.03 | 256 × 256 | 24/07/2013
CMC 513807 H | CANARY | 1 × 2 | 0.03 | 256 × 256 | 25/07/2013
CMC 513807 Kp | CANARY | 5 × 2 | 0.03 | 256 × 256 | 25/07/2013
Table 2.3 - Photometry of the detected SSCs in the intensity maps of IRAS 21101+5810. Values are given in apparent units of 10^-9 F_Vega,Band. An "x" indicates that the photometry could not be obtained with a proper fit. The relative uncertainty is 15%. For SSC locations, see figure 2.10.
Object | F435W (B) | F814W (I) | F160W (H) | H | Kp
A | ≈0 | ≈0 | 35.3203 | 42.1078 | 11.3307
B | 0.198861 | 1.40011 | 12.3392 | 159.846 | 22.2334
SSC1 | 0.0783366 | 2.05192 | 14.2731 | 14.4328 | 22.6999
SSC2 | <0.01 | 0.191088 | 3.14487 | 3.31517 | 9.59219
SSC3 | <0.01 | 0.239019 | 2.50334 | 3.81237 | 8.24234
SSC4 | 0.445026 | 5.73229 | 13.0449 | 8.4024 | 28.3546
SSC5 | 0.017639 | 0.996541 | 10.803 | 8.56428 | 13.2121
SSC6 | 0.0603434 | 1.06526 | 6.55562 | 6.58728 | 2.58787
SSC7 | x | x | x | x | x
SSC8 | x | x | x | x | x
SSC9 | x | x | x | x | x
SSC10 | x | x | x | x | x
SSC11 | x | x | x | x | x
Star | 2.29142 | 19.2562 | 47.5067 | 27.956 | 41.8263
Following this last method, we fitted the SSCs present in the IRAS 21101+5810 image. Results are shown in table 2.3. Identification of the cluster numbering can be found in figure 2.10.
Table 2.4 - Colour indices of the detected SSCs in IRAS 21101+5810. For SSC position references, see figure 2.10.

From photometry:
Object | F814W-F160W | F435W-F814W | H-K
A | x | x | 1.64 ± 0.33
B | 2.27 ± 0.33 | 2.12 ± 0.33 | 0.64 ± 0.33
SSC1 | 2.12 ± 0.33 | 3.55 ± 0.33 | 0.50 ± 0.33
SSC2 | 3.10 ± 0.33 | >3.20 ± 0.33 | 1.21 ± 0.33
SSC3 | 3.01 ± 0.33 | >3.45 ± 0.33 | 1.29 ± 0.33
SSC4 | 0.42 ± 0.33 | 2.77 ± 0.33 | 0.84 ± 0.33
SSC5 | 2.34 ± 0.33 | 4.38 ± 0.33 | 0.22 ± 0.33
SSC6 | 1.98 ± 0.33 | 3.12 ± 0.33 | -1.01 ± 0.33
SSC7 | x | x | x
SSC8 | x | x | x
SSC9 | x | x | x
SSC10 | x | x | x
SSC11 | x | x | x
Star | 0.40 ± 0.33 | 2.31 ± 0.33 | -0.14 ± 0.33

From colour maps:
Object | F814W-F160W | F435W-F814W | H-K
A | 4.16 ± 0.13 | 4.03 ± 0.22 | 1.04 ± 0.09
B | 1.99 ± 0.15 | 2.17 ± 0.12 | 0.33 ± 0.12
SSC1 | 2.73 ± 0.16 | 3.70 ± 0.15 | 0.54 ± 0.02
SSC2 | 3.10 ± 0.20 | 4.50 ± 0.5 | 1.10 ± 0.30
SSC3 | 3.33 ± 0.11 | 2.97 ± 0.15 | 0.48 ± 0.04
SSC4 | 2.51 ± 0.18 | 4.05 ± 0.30 | 0.67 ± 0.07
SSC5 | 2.82 ± 0.09 | 3.43 ± 0.09 | 0.46 ± 0.05
SSC6 | 2.74 ± 0.10 | 3.44 ± 0.06 | 0.58 ± 0.02
SSC7 | 2.58 ± 0.18 | 3.07 ± 0.09 | 0.57 ± 0.08
SSC8 | 2.69 ± 0.07 | 2.87 ± 0.07 | 0.45 ± 0.05
SSC9 | 2.64 ± 0.04 | 4.62 ± 0.04 | 0.47 ± 0.03
SSC10 | 3.06 ± 0.09 | 3.76 ± 0.16 | 0.59 ± 0.03
SSC11 | 2.80 ± 0.07 | 2.35 ± 0.19 | 0.34 ± 0.15
Star | 0.61 ± 0.13 | 4.50 ± 0.07 | 0.10 ± 0.11
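Since the photometric values of tables 2.3 and 2.4 are Vega-normalised fluxes, a colour index between two bands reduces to a flux ratio. A minimal illustrative sketch of that conversion (the function name and the first-order error propagation are ours, not taken from the thesis):

import numpy as np

def colour_index(f1, f2, df1=0.0, df2=0.0):
    """Colour m1 - m2 from two Vega-normalised fluxes f1 and f2.

    For Vega-normalised fluxes the zero points cancel, so the colour is
    simply -2.5 log10(f1 / f2); the uncertainty is propagated to first order.
    """
    colour = -2.5 * np.log10(f1 / f2)
    dcolour = 2.5 / np.log(10) * np.sqrt((df1 / f1) ** 2 + (df2 / f2) ** 2)
    return colour, dcolour

# Example with the SSC6 fluxes of table 2.3 (units of 1e-9 F_Vega):
# H - Kp = -2.5 log10(6.58728 / 2.58787), close to the -1.01 of table 2.4.
print(colour_index(6.58728, 2.58787))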
Chapter 3
Polarisation

Contents
3.1 Introduction to Polarisation
3.2 Stokes Formalism and Scatterings
3.2.1 Measuring Polarisation with Stokes Vector
3.2.2 Scattering: Grain Properties
3.2.3 Scattering: Geometry
3.2.4 Scattering: Mueller Matrix
3.3 Polarimetric Observations
3.3.1 Q, U and V maps
3.3.2 Degree and Angle of Polarisation
3.3.3 Q Tangential and Centro-symmetric Patterns
3.3.4 Polarimetric Instruments
3.4 Data Reduction Methods
3.4.1 Double Differences Method
3.4.2 Double Ratio Method
3.4.3 Matrix Inversion
Table 4.1 - Basic information about NGC 1068 (from Bland-Hawthorn et al. 1997)
Main names M 77 / NGC 1068
Right ascension α 02 h 42 min 40.771 s
Declination δ -00 • 00 47.84
Distance 14.4 Mpc
Inclination of the galaxy 40 •
Galaxy type Sb
Nuclear type Seyfert 2 (direct light)
Seyfert 1 (if observed through polarised light)
Bolometric luminosity 2 × 10 11 L
Table 4.2 - SPHERE-IRDIS observation log, 11th to 14th of September 2016.
Filter CntH CntK1 CntK2
Mode DPI DPI DPI
Coronograph no no no
Night 1 st night 3 rd night 1 st night
Observation time (min) 85.33 85.33 106.67
Sky time (min) 32.0 42.67 42.67
Filter H2 CntK1 H2 CntK1
Mode CI CI CI CI
Coronograph yes yes no no
Night 2 nd night 2 nd night 3 rd night 3 rd night
Observation time (min) 51.20 51.20 29.60 29.60
Sky time (min) 51.20 51.20 25.60 25.60
- Subtraction of the final sky/skies from the scientific images
- Flat-field correction of the scientific images
- Image re-centring per HWP position
- Sum of the scientific images per HWP position
- Image re-centring
Table 4.3 - Efficiency of the three polarimetric reduction methods, giving the measured pseudo-noise σ_pol in ADU.
Filter CntH CntK1 CntK2
Double differences method 0.05904 0.08924 0.26746
Double ratio method 0.06081 0.09099 0.20141
Inverse Matrix method 0.05751 0.08565 0.20695
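As an illustration of the first two reduction methods compared in table 4.3, here is a minimal sketch of how a normalised Stokes q image can be formed from the two orthogonal beams recorded at HWP positions 0° and 45°. The array names are ours and the sign conventions of the actual SPHERE-IRDIS pipeline may differ; this is not the pipeline implementation.

import numpy as np

def double_difference_q(i_ord_0, i_ext_0, i_ord_45, i_ext_45):
    """Normalised Stokes q from the double-difference combination."""
    q = 0.5 * ((i_ord_0 - i_ext_0) - (i_ord_45 - i_ext_45))
    i_tot = 0.5 * ((i_ord_0 + i_ext_0) + (i_ord_45 + i_ext_45))
    return q / i_tot

def double_ratio_q(i_ord_0, i_ext_0, i_ord_45, i_ext_45):
    """Normalised Stokes q from the double-ratio combination."""
    r = np.sqrt((i_ord_0 / i_ext_0) / (i_ord_45 / i_ext_45))
    return (r - 1.0) / (r + 1.0)

Both estimators remove, to first order, the differential transmission between the two beams; the double ratio additionally cancels multiplicative flat-field-like terms, which is why the two methods give slightly different pseudo-noise levels.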
4.2.2.3 Derotator
Table 4.5 - Impact of the skies used on the polarisation efficiency.
Filter CntH CntK1 CntK2
all skies 0.05751 0.08565 0.20695
1/4 skies 0.05417 0.10326 0.21883
HWP skies 0.06890 0.09366 0.21687
no sky 0.05791 0.03216 0.01160
no sky & no centring - 0.020317 -
while those on polarisation angle are displayed on figure 4.15.
Table 4.6 - Properties of the ZIMPOL N_R filter, from the SPHERE User Manual.
Filter | λ_min (nm) | λ_max (nm) | FWHM (nm) | λ_0 (nm)
N_R | 617.5 | 674.2 | 56.7 | 645.9
Table 5.2 - Output parameters recorded in MontAGN simulations
Output parameter Symbol unit
Photon number i
Out inclination angle θ °
Out azimuthal angle φ °
Normalised Stokes U parameter U %
Normalised Stokes Q parameter Q %
Normalised Stokes V parameter V %
Out polarisation orientation angle φ_QU °
Last interaction position x x pc
Last interaction position y y pc
Last interaction position z z pc
Number of interaction n inter
Number of re-emission n reem
Wavelength λ m
Label of the packet label
Energy of the packet E J
Emission time of the packet δt s
Total emission time (only if first packet) ∆t s
Table 5.3 - Efficiency of the von Neumann rejection method applied to the envelope phase functions used in MontAGN simulations
x | Rayleigh | Poly. n | n | H-G | MontAGN (r = 100 nm) | MontAGN (r = 10 nm)
0.01 100 % 100 % 3 66.67 % 99.99 % 99.99 %
0.03 99.96 % 99.99 % 3 66.67 % 99.95 % 99.94 %
0.1 99.57 % 99.93 % 3 66.67 % 99.93 % 99.92 %
0.3 96.13 % 99.41 % 3 66.70 % 99.38 % 99.41 %
1 66.88 % 92.62 % 3 71.23 % 89.79 % 88.94 %
2 27.20 % 89.56 % 5 72.72 % 51.59 % 50.75 %
3 14.19 % 56.40 % 7 66.45 % 39.27 % -
4 8.56 % 44.04 % 8 64.92 % 66.23 % -
10 2.51 % 13.13 % 9 21.29 % 53.06 % -
30 <1 % 2.69 % 13 10.86 % - -
100 <1 % <1 % 18 5.22 % - -
300 <1 % <1 % 20 2.09 % - -
1000 <1 % <1 % 24 0.76 % - -
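To make the efficiencies of table 5.3 concrete, here is a minimal sketch of von Neumann (acceptance-rejection) sampling of a scattering angle from a phase function. The flat envelope used here is only illustrative; the polynomial and Henyey-Greenstein envelopes actually used in MontAGN, and its treatment of the solid-angle weighting, are more elaborate.

import numpy as np

def sample_scattering_angle(phase, n_samples=1, n_grid=1801, rng=np.random.default_rng()):
    """Draw scattering angles theta in [0, pi] proportionally to phase(theta)
    by von Neumann rejection under a flat envelope."""
    theta_grid = np.linspace(0.0, np.pi, n_grid)
    p_max = phase(theta_grid).max()              # height of the flat envelope
    out = []
    while len(out) < n_samples:
        theta = rng.uniform(0.0, np.pi)          # candidate from the envelope
        if rng.uniform(0.0, p_max) <= phase(theta):
            out.append(theta)                    # accepted with prob. phase/envelope
    return np.array(out)

# Henyey-Greenstein phase function as a test case (g = 0.6)
hg = lambda th, g=0.6: (1 - g**2) / (1 + g**2 - 2 * g * np.cos(th)) ** 1.5
angles = sample_scattering_angle(hg, n_samples=1000)

The efficiency quoted in the table is simply the fraction of candidate draws that get accepted, which drops when the envelope is a poor match to the peaked phase functions of large size parameters x.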
Table 5.4 - Fraction of packets that escaped without interaction through a dust structure of optical depth τ with MontAGN.
τ | N_out/N_tot | N_sil/N_tot | N_grao/N_tot | N_grap/N_tot | N_el/N_tot | N_mixt/N_tot
0.1 0.90483 0.90590 0.90502 0.90468 0.90535 0.90465
0.2 0.81873 0.82012 0.81974 0.82040 0.82066 0.82045
0.5 0.60653 0.60695 0.60402 0.60964 0.60761 0.60443
1 0.36787 0.36943 0.36946 0.36931 0.36974 0.36682
2 0.13533 0.13594 0.13474 0.13638 0.13608 0.13494
5 0.00673 0.00672 0.00664 0.00725 0.00628 0.00639
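The N_out/N_tot column of table 5.4 appears to be the analytic expectation for the fraction of packets crossing an optical depth τ without interaction, exp(-τ). A two-line check of the tabulated silicate run (an illustrative script, not part of MontAGN):

import numpy as np

tau = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
n_sil = np.array([0.90590, 0.82012, 0.60695, 0.36943, 0.13594, 0.00672])

expected = np.exp(-tau)                       # compare with the N_out/N_tot column
print(np.round(expected, 5))
print(np.round((n_sil - expected) / expected, 4))   # relative deviation of the run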
temperature profile of a cocoon of silicates around a single star, as compared to other computations. MontAGN temperature maps are displayed on figure 5.12 and the temperature profile is shown in figure 5.13.
Table 1 - Output parameters recorded in MontAGN simulations
Output parameter | Symbol | Unit
Photon number | i |
Out inclination angle | θ | °
Out azimuthal angle | φ | °
Normalised Stokes U parameter | U | %
Normalised Stokes Q parameter | Q | %
Normalised Stokes V parameter | V | %
Out polarisation orientation angle | φ_QU | °
Last interaction position x | x | pc
Last interaction position y | y | pc
Last interaction position z | z | pc
Number of interactions | n_inter |
Number of re-emissions | n_reem |
Wavelength | λ | m
Label of the packet | label |
Energy of the packet | E | J
Emission time of the packet | δt | s
Total emission time (only if first packet) | Δt | s
Chapter 6
Observation Analysis through Simulations

Contents
6.1 Observational Constraints for Simulations
6.2 Structures Geometry
6.2.1 First Toy Models
6.2.2 Double Scattering Models
6.2.3 First Interpretations
6.3 Discussion on Composition
6.3.1 Oblong Aligned or Spherical Grains
6.3.2 Dust Composition
6.3.3 NLR Composition
6.4 Optical Depth
6.4.1 Models
6.4.2 Thick Torus
6.4.3 Optical Depth of Scattering Regions
6.5 Application to NGC 1068 Observations
6.5.1 NGC 1068 Model
6.5.2 Consequences for Observations
6.5.3 Wavelength Dependency

- usethermal: whether or not to use the re-emission (1)
- sources: list of sources ()
- Temp_mode: temperature method used ()
- nRsub_update: whether or not to use the sublimation radius evolution (0)
- af: aperture of the funnel ()
- graininfo: structure containing information about dust species
- Rsub: list of sublimation radii
- energy: energy of a packet
- Dt: time length of the simulation
- ndiffmax: maximum number of scatterings allowed (50)
- map.dust: list of dust distributions used ()
- map.grid: grid containing all dust densities as well as temperatures ([])
- map.N: number of cells along an axis (the grid is 8 × N^3) (int(Rmax/res)+1)
- map.Rmax: size of the grid ()
- map.res: resolution of the grid ()
- usemodel: indices of model to be used ()
- thetaobs: inclination of observation, for final plotting ([90])
- thetarec: list of recorded angles (not used any more) ([0,45,90,135,180])
- dthetaobs: angle interval for registration into files (5)
Chapter 7
Conclusions and Prospectives

Contents
7.1 Observations and Simulations
7.2 Super Stellar Clusters
7.3 Active Galactic Nuclei
The core of NGC 1068 will be used as the guide for Adaptive Optics.

11. List of targets proposed in this programme
Run | Target/Field | α(J2000) | δ(J2000) | ToT | Mag. | Diam. | Additional info | Reference star
A | NGC 1068 | 02 42 40.771 | -00 00 47.84 | 15.0 | 12.66 | 2 min | |

12. Scheduling requirements
Abstract: Despite the existence of precise models, our knowledge of the small-scale structures of galaxies is still limited by the lack of observational evidence. Instrumental progress has made it possible to reach high angular resolution with the new generations of telescopes, but this is restricted to a small number of extragalactic targets because of the requirements of Adaptive Optics (AO). Indeed, in order to allow an efficient measurement of the wavefront, AO requires a bright, point-like source close to the scientific target, typically within 30 arcsec. The main part of this thesis deals with the analysis of the central ten parsecs of Active Galactic Nuclei (AGN) using different observational and numerical techniques. In this context we developed a radiative transfer code allowing us to analyse polarimetric data. The second part of this work is dedicated to the analysis of near-infrared images of starburst galaxies, in order to constrain the parameters describing super stellar clusters, young and very massive dust cocoons hosting sustained star formation, using data obtained with the CANARY instrument, a demonstrator of new AO technologies.

Keywords: Galaxies: Seyfert, star clusters; Techniques: photometry, polarimetry, high angular resolution; Methods: observations, numerical, radiative transfer.
A larger telescope diameter will allow to receive an increased quantity of light, which still is an important improvement by itself
Strehl ratio is used to assess the quality of the seeing, defined in section 1.2.1. A value of 1 would indicate a perfect diffraction limited observation.
We assume here that there is only one scattering. This is a particularly common case when the dust is optically thin. In case of multiple scattering, the pattern will be more complex
Note that this subtraction is an exception to the impossibility to execute operation on final polarimetric data because it just changes the reference of the angle.
PUEO stands for Probing the Universe with Enhanced Optics, and is a Hawaiian owl.
Note that this is only true if the signal at the given wavelength mainly originates from the centre (hottest dust or CE); it is the opposite when looking at wavelengths where the emission comes mostly from direct dust re-emission (typically in the MIR), see section 5.5.2.
The new temperature calculation involves the wavelength through Q_abs.
Observatoire Astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l'Université, 67000 Strasbourg, France
2 LESIA, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Universités, UPMC Univ. Paris 06, Univ. Paris Diderot, Sorbonne Paris Cité
3 LUTH, Observatoire de Paris, CNRS, Université Paris Diderot, 92190 Meudon, France
© Société Française d'Astronomie et d'Astrophysique (SF2A) 2016
* The half-opening angle of our model is smaller than what is observed (Goosmann & Matt 2011). We will change this value when the comparison between MontAGN and STOKES is completed; see Paper I.
† Polarization degrees quoted in the text are for the scattering-induced polarization only. Dilution by the interstellar polarization, host galaxy starlight and starburst light will drastically reduce the observed polarization degree.
Acknowledgements
Acknowledgements. This paper is based on observations at the Very Large Telescope (VLT) of the European Southern Observatory (ESO) in Chile. We thank Julien Girard and Dimitri Mawet for carrying out the observations and Maud Langlois for helpful discussions of how to calibrate polarimetric observations with SPHERE, along with Chris Packham and Enrique Lopez-Rodriguez for very useful and lively discussions regarding data interpretation. We have also already begun to plan some follow-up observing programs. With the collaboration built with the High Energy team of Observatoire Astronomique de Strasbourg, we decided to extend this work toward new targets, to have access to a range of inclinations for AGNs and to be able to study in detail the influence of the geometry on the observed polarisation. This program is made difficult by the fact that, except for NGC 1068, other AGNs are difficult to target with AO on ground-based telescopes because they are fainter than NGC 1068 and are not point-like sources. It is therefore very challenging to obtain polarimetric images of these objects with a sufficient resolution and without risks.
However, such new observations could help to give strong constraints on the inner structures of AGNs, mostly constrained at HAR through the observations of NGC 1068. We aim for example at assessing the origin location of the polarimetric signal from polar scattering dominated AGNs. These Seyfert 1 galaxies are expected to be observed with a line of sight close to the vertical extent of the torus (Smith et al. 2004).
- MontAGN 01
Reading of input parameters and conversion into corresponding class objects: general important parameters given as keywords. Parallelisation if requested.
-MontAGN 02
Read of the parameters for dust structures geometry: Given as keyword, asked or from parameter file. Dust densities, grains properties and structures associated. Available structures are described in section 5.1.2.
-MontAGN 03
Creation of the 3D grid and filling with dust densities: Compute the density of each grain type for each cell. Temperature initialisation.
-MontAGN 04
Computation of tables of all propagation elements: Mueller matrices, albedo, Q_abs, Q_ext and phase functions, for each grain type and for the different wavelengths/grain radii.
-MontAGN 05
Start of the simulation:
-MontAGN 06 Random selection of one source: Based on relative luminosity.
-MontAGN 07
Determination of the photons wavelength and initial directions of propagation and of polarisation: Wavelength randomly determined from the source's SED. Stokes vector initialised. p and u computed, in general randomly. See section 3.2.3 for illustration of p and u.
-MontAGN 08
Propagation of the packet until reaching a non empty cell: Determination of the cell density. If null, direct determination of the next cell, back to the beginning of MontAGN 08. If radius smaller than sublimation radius, the cell is considered empty.
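A highly simplified sketch of the logic described in steps MontAGN 06-08, for a toy grid with a callable density field. All names, and the sampling of the interaction point through an optical depth drawn as -ln(1-u), are illustrative assumptions; the actual MontAGN implementation is considerably more complete.

import numpy as np

def propagate_packet(sources, grid, kappa, rng=np.random.default_rng()):
    """Launch one photon packet and move it until the sampled optical depth
    is consumed or it leaves the grid (toy version of MontAGN 06-08)."""
    # MontAGN 06: pick a source with probability proportional to its luminosity
    lum = np.array([s["luminosity"] for s in sources])
    src = sources[rng.choice(len(sources), p=lum / lum.sum())]

    # MontAGN 07: wavelength drawn from the source SED, isotropic direction
    wavelength = rng.choice(src["sed_wavelengths"], p=src["sed_weights"])
    mu, phi = 2 * rng.random() - 1, 2 * np.pi * rng.random()
    direction = np.array([np.sqrt(1 - mu**2) * np.cos(phi),
                          np.sqrt(1 - mu**2) * np.sin(phi), mu])
    position = np.array(src["position"], dtype=float)

    # MontAGN 08: step through cells until the drawn optical depth is reached
    tau_target, tau = -np.log(1.0 - rng.random()), 0.0
    step = grid["cell_size"]
    while tau < tau_target and np.linalg.norm(position) < grid["r_max"]:
        rho = grid["density"](position)   # zero in empty or sublimated cells
        tau += rho * kappa * step
        position += direction * step
    return position, direction, wavelength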
azimuthal angle φ (ranging from 0 to 360°), as shown in figure 5.7. Furthermore, we compute from the orientation vectors p and u the angle to the polarisation reference direction from the Q-U orientation in the frame of the packet (Eq. 5.32), with the u_i and p_i being defined for i in [x, y, z] (Eq. 5.33). We will use this angle to express the Stokes parameters Q and U in the frame of the observer with the proper orientation. This, once again, is achieved thanks to a rotation matrix comparable to those of section 3.2.4 (Eq. 5.34).
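In the usual convention, such a change of reference frame mixes Q and U with twice the frame angle, while I and V are unchanged. A minimal sketch of that rotation, with the caveat that the exact sign convention adopted in MontAGN (Eq. 5.34) is not reproduced here:

import numpy as np

def rotate_stokes_qu(q, u, phi_qu):
    """Express Stokes Q and U in a frame rotated by phi_qu (radians).

    This is the standard 2x2 block of the Mueller rotation matrix:
    the (Q, U) pair rotates by an angle 2*phi_qu.
    """
    c, s = np.cos(2.0 * phi_qu), np.sin(2.0 * phi_qu)
    q_rot = c * q + s * u
    u_rot = -s * q + c * u
    return q_rot, u_rot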
All the packet parameters are currently written to a few data files. This allows limiting the CPU memory used while simulations are running. Until now, parameters are recorded in 19 files, each centred on a particular value of the inclination observation angle between 0 and 180°. If a packet has an exit inclination within ±5° of the file's central value, it will be recorded in this file.
From these data, we can extract SEDs or maps with different selection criteria. The two main routines allowing this will be described in section 5.10. Any polarisation measurement goes through combining all the photon packets into Q and U parameters or maps. All selected packets are summed, weighted by their energy (this point will be detailed in section 5.6.1), and can be combined into polarisation angle and degree maps as presented in section 3.3.2, using the standard relations between (I, Q, U) and the degree and angle of linear polarisation (Eq. 5.37).
It is possible to apply any selection on the displayed packets. Altitude and azimuthal angles as well as wavelength are the most used selections, but we can also select only packets at a given number of scatterings, or only the re-emitted ones. In case of maps and once the packets have been selected, the packets' last position of interaction are used to compute their position in the images, according to the observer's inclination angle.
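The combination referred to above (Eq. 5.37) amounts to summing the energy-weighted Stokes parameters of the selected packets and applying the standard definitions of the degree and angle of linear polarisation. A minimal sketch, with illustrative array names:

import numpy as np

def polarisation_maps(i_map, q_map, u_map):
    """Degree (in %) and angle (in degrees) of linear polarisation from maps
    of the energy-weighted sums of I, Q and U."""
    with np.errstate(invalid="ignore", divide="ignore"):
        degree = 100.0 * np.hypot(q_map, u_map) / i_map
        angle = 0.5 * np.degrees(np.arctan2(u_map, q_map))
    return degree, angle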
Concluding remarks
We compared the MontAGN and STOKES codes between 0.8 and 1 µm for similar distribution of matter and found that many of the polarimetric features expected from one code are reproduced by the second. The only difference so far resides on the detection of polarisation at large distances from the centre of the model, where we need a higher sampling in order for MontAGN to match STOKES results. The next step will be to improve the models, and develop the MontAGN code by including more effects like electron scattering or non spherical grains (ortho-and para-graphite, namely). However we already get a fairly good agreement between the codes which give us confidence to pursue our exploration of the near-infrared signal of AGN together with MontAGN and STOKES. Note that the comparison allowed to detect flaws and bugs, a positive outcome. We intend to explore our first results in Paper II and push the codes towards more complex geometries. Once a complete agreement will be found in the overlapping band (0.8 -1 µm), we will run a large simulation ranging from the far-infrared to the hard X-rays for a number of selected radio-quiet AGN. Our targets include the seminal type-2 NGC 1068, as well as a couple of other nearby AGN with published polarimetric data. Forthcoming new infrared polarimetric observations using SPHERE will complement our database and be modelled with MontAGN and STOKES.
The authors would like to acknowledge financial support from the Programme National Hautes Energies (PNHE).
The main outputs are the files containing all the exited photon data. There are however some other files that can be created, all starting with the string set by filename. They are all detailed here:
- _xxx_phot.dat files contain all the information about photons at inclination angle xxx (in degrees). The list of all recorded information is in section 2.1. These files are the ones used to reconstruct images, SEDs, etc.
- _dust_xy.dat contains a density map of dust grains in the xy plane.
- _dust_xz.dat contains a density map of dust grains in the xz plane.
Furthermore, if the re-emission (usethermal=1) is enabled, some additional files are created:
- _T_update.dat contains the list of all the temperature updates that occurred, with the corresponding cells.
- _T_#.dat contains a sliced temperature map at a specific time (# = 0, for example, is the initial map; # = 3 corresponds to the final map).
Photons files
Table 1 lists all the information recorded in the _xxx_phot.dat files, with the corresponding unit where needed:
plot_image
The second one, plot_image(), is the most elaborate display function and computes maps according to requirements. It has one input parameter, filename, the name of the file from which to load the data to display, and many options, detailed below. Note that if thetaobs is set to a particular angle while executing montAGN, the code will automatically call plot_image() for the corresponding inclination angle.

In [ ]: plot_image(filename) will read the data in file 'filename_phot.dat'

Keywords:
- outname 'string' ('test'): Root name of the images that will be recorded (if rec=1).
- suffixe 'string' (''): Suffix to be placed at the end of the name of recorded files.
- thetaobs float within the range [0,180] ( ): Inclination angle selected for the display of maps.
- dtheta positive float < 180 (5): Tolerance interval around the given angle thetaobs for photon packet selection. If [] is specified, all photon packets within the file will be used.
- obj 'AGN' or 'star' ('AGN'): Allows selecting the object type, to have an idea of the final scale (AU or pc).
- path 'string' (''): Indicates the path to the files to be read. Will read the file path/filename.
- dat 0 or 1 (1): If set to 1, automatically adds '.dat' at the end of 'filename'.
- resimage integer ( )
- rec 0 or 1 (0): If set to 1, will record the displayed images, using 'saveformat' format and 'outname' name for files.
- saveformat 'pdf' or 'png' ('pdf'): Defines the file format of recorded images.
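A usage example combining some of the keywords listed above (the simulation name is hypothetical):

# Display the maps of simulation 'torus_run1' seen at 90 degrees inclination
# (+/- 5 degrees) and record them as PNG files.
plot_image('torus_run1', thetaobs=90, dtheta=5, rec=1,
           saveformat='png', outname='torus_run1_i90')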
plot_Tupdate
In [5]: plot_Tupdate('filename') displays the temperature maps extracted from 'filename_T_update.dat'

Keywords:
- path 'string' (''): Indicates the path to the files to be read. Will read the file path/filename.
- dat 0 or 1 (1): If set to 1, automatically adds '.dat' at the end of 'filename'.
- unity 'pc' or 'AU' ('pc'): Indicates the unit to be used on the axes.
- rec 0 or 1 (0): If set to 1, will record the displayed images, using 'saveformat' format and 'outname' name for files.
- saveformat 'pdf' or 'png' ('pdf'): Defines the file format of recorded images.
- size positive integer (100): Defines the number of pixels to be used on each axis to display profiles and maps.

By submitting this proposal, the PI takes full responsibility for the content of the proposal, in particular with regard to the names of CoIs and the agreement to act according to the ESO policy and regulations, should observing time be granted.
1. Title: Flushing out the nuclear torus of NGC 1068 with the exoplanet hunter
Category: B-9

We propose here to complete this measurement with narrow-band polaro-images to look for wavelength-dependent effects. We also propose coronagraphic imaging in the 1-0 S(1) H2 line to look for the molecular extension of the torus. In the coming semesters we aim at obtaining polaro-imaging of a sample of a few close AGN in order to extend this kind of study to other targets with different characteristics.
Special remarks:
This program is an updated and extended form of proposal 60.A-9361(A) for SPHERE SV, which had been granted 5.5h with UT3+SPHERE and was partially executed.
6. Principal Investigator: Lucas Grosset, lucas.grosset@obspm.fr, F, LESIA
01762094 | en | [
"sdv.mhep.csc",
"sdv.sp.pharma"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01762094/file/9%20CUISSET.pdf | Pierre Deharo
Jacques Quilici
Laurence Camoin-Jau
Thomas Johnson
Clémence Bassez
Guillaume Bonnet
Marianne Fernandez
Manal Ibrahim
Pierre Suchon
Valentine Verdier
Laurent Fourcade
Pierre-Emmanuel Morange
Jean-Louis Bonnet
Marie-Christine Alessi
Thomas Cuisset
Benefit of Switching Dual Antiplatelet Therapy After Acute Coronary Syndrome According to On-Treatment Platelet Reactivity: The TOPIC-VASP Pre-Specified Analysis of the TOPIC Randomized Study
Introduction
After acute coronary syndrome (ACS), adequate platelet inhibition is crucial to minimize the risk of recurrent ischemic events [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF]. "Newer P2Y12 blockers" (i.e., prasugrel and ticagrelor) have a more pronounced inhibitory effect on platelet activation and have proved their superiority over clopidogrel, in association with aspirin [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. The clinical benefit provided by these drugs is related to a significant reduction in recurrent ischemic events, despite an increased incidence of bleeding complications [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. The TOPIC (Timing Of Platelet Inhibition after acute Coronary syndrome) study recently showed that switching from ticagrelor or prasugrel plus aspirin to fixed dose combination (FDC) of aspirin and clopidogrel, 1 month after ACS, was associated with a reduction in bleeding complications, without increase of ischemic events at 1 year (4).
Platelet function testing has been used for years to assess individual response to antiplatelet agents. Indeed, platelet reactivity has been strongly associated with clinical outcomes after ACS [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF][START_REF] Kirtane | Is there an ideal level of platelet P2Y12receptor inhibition in patients undergoing percutaneous coronary intervention?: "Window" Analysis From the ADAPT-DES Study (Assessment of Dual AntiPlatelet Therapy With Drug-Eluting Stents)[END_REF][START_REF] Parodi | High residual platelet reactivity after clopidogrel loading and long-term cardiovascular events among patients with acute coronary syndromes undergoing[END_REF][START_REF] Stone | ADAPT-DES InvestigatorsPlatelet reactivity and clinical outcomes after coronary artery implantation of drug-eluting stents (ADAPT-DES): a prospective multicentre registry studyLancet[END_REF]. High on-treatment platelet reactivity (HTPR), defining biological resistance to dual antiplatelet therapy (DAPT) is frequent on clopidogrel and has been associated with an increased risk of cardiovascular events, including stent thrombosis [START_REF] Cuisset | Predictive values of post-treatment adenosine diphosphate-induced aggregation and vasodilator-stimulated phosphoprotein index for stent thrombosis after acute coronary syndrome in clopidogrel-treated patients[END_REF][START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF]. In contrast, HTPR is rarely observed with use of newer P2Y12 blockers (prasugrel, ticagrelor). Instead, biological hyper-response is frequently noticed and associated with bleeding events on P2Y12 blockers [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Low on-treatment platelet reactivity (LTPR) has been proposed to define hyperresponse to P2Y12 blockers [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Therefore, the objective of the present analysis was to investigate the impact of LTPR on clinical outcomes after ACS and the relation between initial platelet reactivity and benefit of the switched DAPT strategy tested in the TOPIC study.
Methods
Study design and patients
The design of the TOPIC randomized study has been previously published [START_REF] Cuisset | Benefit of switching dual antiplatelet therapy after acute coronary syndrome: the TOPIC (timing of platelet inhibition after acute coronary syndrome) randomized studyEur[END_REF]. Briefly, this was an open-label, single-center, controlled trial randomizing patients admitted for ACS and treated with aspirin and a new P2Y12 inhibitor. One month after the ACS, eligible patients were then randomly assigned in a 1:1 ratio to receive a FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT) or continuation of aspirin plus the established new P2Y12 blocker (unchanged DAPT). Inclusion criteria were admission for ACS requiring early percutaneous coronary intervention (PCI) within 72 h, treatment with aspirin and a newer P2Y12 blocker at discharge, no major adverse event 1 month after the ACS, and >18 years of age. Exclusion criteria were history of intracranial bleeding; contraindication to use of aspirin, clopidogrel, prasugrel, or ticagrelor; major adverse event (ischemic or bleeding event) within a month of ACS diagnosis; thrombocytopenia (platelet concentration lower than 50×10 9 /l); major bleeding (according to the Bleeding Academic Research Consortium [BARC] criteria) in the past 12 months; long-term anticoagulation (contraindication for newer P2Y12 blockers); and pregnancy. During the randomization visit, patients had to present fasting and biological response to P2Y12 blocker was assessed by % platelet reactivity index vasodilator-stimulated phosphoprotein (PRI-VASP). On the basis of PRI-VASP, patients were classified as LTPR (PRI-VASP ≤20%), normal response (20% < PRI-VASP ≤50%), or HTPR (PRI-VASP >50%) [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Due to expected very low rates of HTPR, we decided to pool normal response and HTPR in the non-LTPR cohort (PRI-VASP >20%).
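A small helper illustrating the classification thresholds described above; the function name is ours, while the cut-offs are those defined in the study.

def classify_pri_vasp(pri_percent):
    """Classify on-treatment platelet reactivity from the PRI-VASP (%).

    LTPR:   PRI-VASP <= 20%
    normal: 20% < PRI-VASP <= 50%
    HTPR:   PRI-VASP > 50% (pooled with 'normal' as non-LTPR in this analysis)
    """
    if pri_percent <= 20:
        return "LTPR"
    return "normal" if pri_percent <= 50 else "HTPR"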
Randomization
All patients received treatment with aspirin and a newer P2Y12 inhibitor for 1 month after the ACS. One month after the ACS, eligible patients were then randomly assigned in a 1:1 ratio to receive an FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT) or continuation of aspirin plus continuation of newer P2Y12 blocker (unchanged DAPT with same treatment than before randomization). The randomization was performed independently of platelet inhibition status, with the investigators blinded to PRI-VASP results. The randomization sequence was computer generated at Timone Hospital, and patients' allocations were kept in sequentially numbered sealed envelopes. Group allocation was issued by the secretarial staff of the research department at Timone Hospital.
Treatment
During the index admission, a 300-mg loading dose of aspirin was given to patients who were treatment-naive before the study. All patients were pre-treated with a loading dose of ticagrelor 180 mg or prasugrel 60 mg before PCI. Regarding the PCI, the use of second-and third-generation drug-eluting stents was recommended. At the discretion of the attending physician, patients were discharged on ticagrelor 90 mg twice a day or prasugrel 10 mg daily in addition to aspirin. At 1-month patients were randomly assigned to either continue with the standard regimen of 75 mg of aspirin plus newer P2Y12 blocker (unchanged DAPT) or receive a single tablet FDC of aspirin 75 mg plus clopidogrel 75 mg (switched DAPT). To reduce the risk of bleeding, use of radial access, proton-pump inhibitors, and access site closure devices (when PCI was undertaken via the femoral artery) were recommended but not mandatory. Other cardiac medications were given according to local guidelines.
Follow-up and endpoint assessments
The primary endpoint of this analysis aimed to evaluate the impact of on-treatment platelet reactivity on clinical outcomes in both groups (unchanged and switched DAPT). The primary endpoint was a composite of cardiovascular death, unplanned hospitalization leading to urgent coronary revascularization, stroke, and bleeding episodes as defined by the BARC classification ≥2 at 1 year after ACS [START_REF] Mehran | Standardized bleeding definitions for cardiovascular clinical trials: a consensus report from the Bleeding Academic Research ConsortiumCirculation[END_REF]. This combination of both ischemic and bleeding events was defined as net clinical benefit. Each of the components was also evaluated independently, as well as the composite of all ischemic events and all BARC bleeding episodes. Factors associated with LTPR status were determined. Unplanned revascularization was defined as any unexpected coronary revascularization procedure (PCI or coronary artery bypass graft surgery) during the follow-up period. Stroke diagnosis was confirmed by a treating neurologist. Computed tomography or magnetic resonance imaging was used to distinguish ischemic from hemorrhagic stroke.
All data were collected prospectively and entered into a central database. Clinical follow-up was planned for 1 year after the index event or until the time of death, whichever came first. After collection, data were analyzed by a physician at our institution dedicated to study follow-up.
Platelet inhibition evaluation
Platelet reactivity was measured using the VASP index. Blood samples for VASP index analysis were drawn by atraumatic venipuncture of the antecubital vein. Blood was taken at least 6 h after ticagrelor intake and 12 h after prasugrel intake. The initial blood drawn was discarded to avoid measuring platelet activation induced by needle puncture; blood was collected into a Vacutainer (Becton Dickinson, New Jersey) containing 3.8% trisodium citrate and filled to capacity. The Vacutainer was inverted 3 to 5 times for gentle mixing and sent immediately to the hemostasis laboratory. VASP phosphorylation analysis was performed within 24 h of blood collection by an experienced investigator using the CY-QUANT VASP/P2Y12 enzyme-linked immunosorbent assay (Biocytex, Marseille, France) [START_REF] Schwarz | EigenthalerFlow cytometry analysis of intracellular VASP phosphorylation for the assessment of activating and inhibitory signal transduction pathways in human platelets-definition and detection of ticlopidine/clopidogrel effectsThromb[END_REF]. Briefly, after a first step of parallel whole blood sample activation with prostaglandin E1 (PGE1) and PGE1+adenosine diphosphate (ADP), platelets from the sample are lysed, allowing released VASP to be captured by an antihuman VASP antibody, which is coated in the microtiter plate. Then, a peroxidase-coupled antihuman VASP-P antibody binds to a phosphorylated serine 239-antigenic determinant of VASP. The bound enzyme peroxidase is then revealed by its activity on tetramethylbenzidine substrate over a pre-determined time. After stopping the reaction, absorbance at 450 nm is directly related to the concentration of VASP-P contained in the sample. The VASP index was calculated using the optical densities (OD, 450 nm) of the samples incubated with PGE1 or PGE1+ADP according to the formula below. Maximal platelet reactivity was defined as the maximal PRI reached during the study.
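Assuming the standard definition used with the CY-QUANT VASP/P2Y12 assay, the index follows from the two optical densities as:

$$\mathrm{PRI\text{-}VASP}\,(\%) = \frac{OD_{\mathrm{PGE1}} - OD_{\mathrm{PGE1+ADP}}}{OD_{\mathrm{PGE1}}} \times 100$$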
Ethics
The ethics committee at our institution approved the study protocol, and we obtained written informed consent for participation in the study. We honored the ethical principles for medical research involving human subjects as set out in the Declaration of Helsinki. The data management and statistical analysis were performed by the research and development section, Cardiology Department, Timone Hospital (Marseille, France).
Statistical analysis
All calculations were performed using the SPSS version 20.00 (IBM Corporation, Armonk, New York) and GraphPad Prism version 7.0 (GraphPad Software, San Diego, California). Baseline characteristics of subjects with and without LTPR were compared. Because randomization was not stratified by LTPR status, baseline characteristics were compared among subjects with and without LTPR by treatment assignment. Continuous variables were reported as mean ± SD or as median (interquartile range) (according to their distribution), and categorical variables were reported as count and percentage. Standard 2-sided tests were used to compare continuous variables (Student t or Mann-Whitney U tests) or categorical variables (chi-square or Fisher exact tests) between patient groups. Multivariate regression models were used to evaluate the linear association between LTPR status (dependent variable) and clinical characteristics (independent variable) using binary logistic regression. The primary analysis was assessed by a modified intention-to-treat analysis. Percentages of patients with an event were reported. We analyzed the primary and secondary endpoints by means of a Cox model for survival analysis, with time to first event used for composite endpoints, and results reported as hazard ratio (HR) and 95% confidence interval (CI) for switched DAPT versus unchanged DAPT. Survival analysis methods were used to compare outcomes by treatment assignment (unchanged DAPT vs. switched DAPT) and by presence or absence of LTPR. Hazard ratios (HRs) were adjusted to the factors independently associated with LTPR status. Areas under the receiver-operating characteristic curve were determined using MedCalc Software version 12.3.0 (Ostend, Belgium). According to the receiver-operating characteristic curve, the value of PRI-VASP exhibiting the best accuracy was chosen as the threshold. This study is registered with ClinicalTrials.gov (NCT02099422).
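The analyses were run in SPSS, GraphPad Prism and MedCalc; purely as an illustration of two of the key steps (an ROC-derived threshold and a Cox proportional-hazards model), here is a sketch in Python using the scikit-learn and lifelines packages. The file and column names are hypothetical, and the Youden index is shown in place of the "best accuracy" criterion actually used.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_curve

# Hypothetical columns: 'pri_vasp', 'event', 'time_to_event', 'ltpr', 'arm'
df = pd.read_csv("topic_vasp.csv")

# Threshold chosen on the ROC curve (Youden index as an example criterion)
fpr, tpr, thresholds = roc_curve(df["event"], df["pri_vasp"])
best_threshold = thresholds[np.argmax(tpr - fpr)]

# Cox model: time to first event versus LTPR status and randomization arm
cph = CoxPHFitter()
cph.fit(df[["time_to_event", "event", "ltpr", "arm"]],
        duration_col="time_to_event", event_col="event")
print(best_threshold)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])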
Results
Baseline
Between March 2014 and May 2016, 646 patients were enrolled; 323 patients were randomly assigned to the switched DAPT group, and 323 patients were randomly assigned to the unchanged DAPT group. Follow-up at 1 year was performed for 316 (98.1%) patients in the switched DAPT group and 318 (98.5%) in the unchanged DAPT group (Figure 1). The median follow-up for both groups was 359 days, and the mean follow-up was 355 days in the switched DAPT group versus 356 days in the unchanged DAPT group. The characteristics of the studied cohort are summarized in Table 1. Patients with LTPR had lower body mass index (BMI) and were less often diabetic (Table 1). Platelet reactivity testing was performed for all patients, and results were available for 644 (99.7%) patients.
Values are n (%) or mean ± SD.
Platelet inhibition
In the whole cohort, 1 month after ACS, mean PRI-VASP was 26.1 ± 18.6%, corresponding to 27.3 ± 19.4% in the switched arm versus 25.0 ± 17.7% in the unchanged arm (p = 0.12). A total of 305 patients (47%) were classified as LTPR, corresponding to 151 (47%) patients in the switched arm and 154 (48%) patients in the unchanged arm (p = 0.84). Patients on ticagrelor had a significantly lower platelet reactivity and higher incidence of LTPR than did patients on prasugrel (mean PRI-VASP: 22.2 ± 18.7% vs. 29.1 ± 18.0%; p < 0.01; and 167 [55%] vs. 139 [45%]; p < 0.01, respectively) (Figure 2).
Factors associated with LTPR status
LTPR patients were older (p = 0.05), had lower BMI (p < 0.01), were less often diabetic (p = 0.01), and were more often on ticagrelor (p < 0.01). In multivariate analysis, BMI (p < 0.01), diabetes (p = 0.01), and ticagrelor treatment (p < 0.01) remained associated with LTPR.
Clinical outcomes
Results of the TOPIC study have been previously published and showed a significant reduction in the primary composite endpoint on switched DAPT strategy driven by a reduction in bleeding complications (9.3% vs. 23.5%; p < 0.01) without differences in ischemic endpoints (9.3% vs. 11.5%; p = 0.36).
Effect of LTPR on clinical outcomes in both randomized arms
Unchanged arm
At 1-year follow-up, in the unchanged arm the rate of primary endpoint occurred in 51 (33.1%) patients defined as LTPR and in 34 (20.1%) patients defined as no LTPR (p = 0.01) (Table 2 and Figure 3). Bleeding events defined as BARC ≥2 occurred in 28 (18.2%) LTPR patients and in 20 (11.8%) non-LTPR patients (p = 0.19) (Table 3 and Figure 4), while bleeding events defined as all BARC occurred in 41 (26.6%) LTPR patients and in 35 (20.7%) non-LTPR patients (p = 0.39) (Table 4). Any ischemic endpoint occurred in 23 (14.9%) LTPR patients and in 14 (8.3%) non-LTPR patients (p = 0.04) (Table 5, Figure 5). Abbreviations as in Table 2.
Switched arm
Differently from the unchanged group, at 1-year follow-up, in the switched arm, the rate of primary endpoint was not significantly different and occurred in 18 (11.9%) LTPR patients and in 25 (14.6%) non-LTPR patients (p = 0.45) (Table 2, Figure 3). Bleeding events defined as BARC ≥2 occurred in 8 (5.3%) LTPR patients and in 5 (2.9%) non-LTPR patients (p = 0.29) (Table 3, Figure 4), while bleeding events defined as all BARC occurred in 19 (12.6%) LTPR patients and in 11 (6.4%) non-LTPR patients (p = 0.046) (Table 4). Any ischemic endpoint occurred in 10 (6.6%) LTPR patients and in 20 (11.7%) non-LTPR patients (p = 0.11) (Table 5, Figure 5).
Impact of LTPR on benefit of switching strategy
Patients with LTPR
In LTPR patients, the rate of primary endpoint at 1 year was significantly lower after switching and occurred in 18 (11.9%) patients in the switched arm and in 51 (33.1%) patients in the unchanged arm (p < 0.01) (Table 2, Figure 3). This benefit on primary endpoint was related to lower incidence of both bleeding and ischemic complications. Indeed, the rate of bleeding BARC ≥2 occurred in 8 (5.3%) LTPR patients in the switched arm and in 28 (18.2%) LTPR patients in the unchanged arm (p < 0.01) (Table 3, Figure 4). Also, the rate of all BARC bleeding occurred in 19 (12.6%) patients in the switched arm and in 41 (26.6%) patients in the unchanged arm (p < 0.01) (Table 4). Finally, the rate of any ischemic endpoint occurred in 10 (6.6%) LTPR patients in the switched arm and in 23 (14.9%) LTPR patients in the unchanged arm (adjusted HR: 0.39; 95% CI: 0.18 to 0.85; p = 0.02) (Table 5, Figure 5).
Patients without LTPR
In patients without LTPR the rate of primary endpoint at 1 year was not significantly different but was numerically lower in patients in the switched group compared with the unchanged group: 25 (14.6%) patients versus 34 (20.1%) patients, respectively (p = 0.39) (Table 2, Figure 3). However, the risk of bleeding was, as LTPR patients, significantly lower in the non-LTPR patients after switching. Indeed, the rate of bleeding BARC ≥2 occurred in 5 (2.9%) non-LTPR patients in the switched arm and in 20 (11.8%) non-LTPR patients in the unchanged arm (p < 0.01) (Table 3 and Figure 4) and the rate of all BARC bleedings occurred in 11 (6.4%) patients in the switched arm and in 35 (20.7%) patients in the unchanged arm (p < 0.01) (Table 4). Finally, any ischemic endpoint occurred in 20 (11.7%) patients in the switched arm and in 14 (8.3%) patients in the unchanged arm (adjusted HR: 1.67; 95% CI: 0.81 to 3.45; p = 0.17) (Table 5, Figure 5).
Discussion
The main finding of our study is that the benefit of a switching DAPT strategy on bleeding prevention is observed regardless of a patient's biological response to newer P2Y12 blockers. Indeed, the switched strategy allows reduction of bleeding complications without apparent increase in ischemic complications in both the LTPR and the non-LTPR groups. However, benefit of switched DAPT was greater in LTPR patients, who had impaired prognosis with unchanged DAPT but similar rate of adverse events with a switched DAPT strategy.
In patients treated with DAPT, the relationship between platelet reactivity and clinical outcomes has been extensively investigated in clopidogrel-treated patients [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. Indeed, resistance to clopidogrel is frequent and defined by an HTPR [START_REF] Stone | ADAPT-DES InvestigatorsPlatelet reactivity and clinical outcomes after coronary artery implantation of drug-eluting stents (ADAPT-DES): a prospective multicentre registry studyLancet[END_REF][START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Aradi | Bleeding and stent thrombosis on P2Y12inhibitors: collaborative analysis on the role of platelet reactivity for risk stratification after percutaneous coronary intervention[END_REF]. Newer P2Y12 blockers are characterized by stronger and more predictable platelet inhibition in comparison with clopidogrel (2,3). Both ticagrelor and prasugrel proved, in large randomized trials, their clinical superiority over clopidogrel after ACS [START_REF] Wiviott | TRITON-TIMI 38 InvestigatorsPrasugrel versus clopidogrel in patients with acute coronary syndromesN[END_REF][START_REF] Wallentin | Ticagrelor versus clopidogrel in patients with acute coronary syndromesN[END_REF]. Although resistance to newer P2Y12 blockers is infrequently observed, significant rates of hyper-responders emerged [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. This status, defined as LTPR, has been later associated with increased risk of bleeding events on DAPT [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF][START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF][START_REF] Aradi | Bleeding and stent thrombosis on P2Y12inhibitors: collaborative analysis on the role of platelet reactivity for risk stratification after percutaneous coronary intervention[END_REF][START_REF] Bonello | Relationship between post-treatment platelet reactivity and ischemic and bleeding events at 1-year follow-up in patients receiving prasugrelJ[END_REF]. Our study confirmed that biological hyper-response to DAPT is frequent on newer P2Y12 blockers, with 47% of the patients defined as LTPR, using the definition validated by our group on a large cohort of ACS patients [START_REF] Cuisset | Clinical implications of very low ontreatment platelet reactivity in patients treated with thienopyridine: the POBA study (predictor of bleedings with antiplatelet drugs)[END_REF]. We also confirmed the significant association between LTPR and bleeding complications. 
Moreover, we observed that patients defined as LTPR on newer P2Y12 blockers had worse outcomes if they were maintained on their original "unchanged" DAPT regimen, whereas after switching a similar benefit was observed between LTPR and non-LTPR patients.
Surprisingly, we noticed a trend toward a higher incidence of ischemic complications in LTPR patients who remained on unchanged DAPT. In the switched arm, LTPR was associated with a nonsignificant reduction in ischemic events, which is in line with stronger platelet inhibition levels. We might hypothesize that hyper-responders maintained on newer P2Y12 blockers were exposed to ischemic complications following DAPT change or nonadherence due to side effects such as minor bleeding or ticagrelor-associated dyspnea, although a play of chance cannot be excluded.
Despite the strong prognostic value of platelet function testing, strategies aiming to tailor DAPT according to individual platelet inhibition failed to prove significant clinical benefit [START_REF] Price | GRAVITAS InvestigatorsStandard-vs high-dose clopidogrel based on platelet function testing after percutaneous coronary intervention: the GRAVITAS randomized trialJAMA[END_REF][START_REF] Trenk | A randomized trial of prasugrel versus clopidogrel in patients with high platelet reactivity on clopidogrel after elective percutaneous coronary intervention with implantation of drug-eluting stents: results of the TRIGGER-PCI (Testing Platelet Reactivity In Patients Undergoing Elective Stent Placement on Clopidogrel to Guide Alternative Therapy With Prasugrel) studyJ[END_REF][START_REF] Collet | ARCTIC InvestigatorsBedside monitoring to adjust antiplatelet therapy for coronary stentingN[END_REF][START_REF] Cayla | ANTARCTIC investigatorsPlatelet function monitoring to adjust antiplatelet therapy in elderly patients stented for an acute coronary syndrome (ANTARCTIC): an open-label, blinded-endpoint, randomised controlled superiority trialLancet[END_REF]. All these studies included mostly patients treated with clopidogrel, or prasugrel last, and aimed to adjust the molecule or the dose according to platelet function. Three of 4 trials aimed to correct poor response to clopidogrel (14-16), whereas only 1 trial did adjust the DAPT regimen according to hyper-response in elderly patients only (>75 years of age) treated with a 5-mg dosage of prasugrel [START_REF] Cayla | ANTARCTIC investigatorsPlatelet function monitoring to adjust antiplatelet therapy in elderly patients stented for an acute coronary syndrome (ANTARCTIC): an open-label, blinded-endpoint, randomised controlled superiority trialLancet[END_REF]. However, it seems that ticagrelor is associated with higher rates of hyper-response than prasugrel is. Consequently, it is possible that platelet function testing may have a role in the management of selected patients treated with ticagrelor after ACS who are at risk of developing hyper-response (i.e., older patients, with low BMI, nondiabetic). Because no large study assessing the benefit of treatment adaptation based on platelet function has been conducted on ticagrelor so far, it is possible that higher rates of hyper-response make relevant the use of platelet function testing in this setting. The next challenge could be to identify which patients will benefit from platelet testing and treatment adaptation in case of hyper-response. However, in our study, benefit of switching DAPT was observed also in non-LTPR patients, which could mitigate the usefulness of platelet testing and reserve it to selected candidates after ACS (such as nondiabetics and lower BMI).
Moreover, despite the fact that the recommended DAPT duration after ACS is 12 months [START_REF] Roffi | Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: Task Force for the Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology[END_REF], there is evidence that shorter DAPT duration could be safe after ACS in selected patients [START_REF] Naber | LEADERS FREE InvestigatorsBiolimus-A9 polymer-free coated stent in high bleeding risk patients with acute coronary syndrome: a Leaders Free ACS sub-studyEur[END_REF] and therefore benefit of the switched strategy would be less substantial, whereas P2Y12 blockers could be stopped after 1 to 3 months. Nevertheless, this strategy of short DAPT after ACS does not apply to all patients but is reserved to very high bleeding risk ACS patients [START_REF] Roffi | Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology2015 ESC Guidelines for the management of acute coronary syndromes in patients presenting without persistent ST-segment elevation: Task Force for the Management of Acute Coronary Syndromes in Patients Presenting without Persistent ST-Segment Elevation of the European Society of Cardiology[END_REF]. Nevertheless, reduced platelet inhibition potency from 1 to 12 months could maintain ischemic protection while reducing the risk of bleeding as demonstrated in TOPIC study [START_REF] Cuisset | Benefit of switching dual antiplatelet therapy after acute coronary syndrome: the TOPIC (timing of platelet inhibition after acute coronary syndrome) randomized studyEur[END_REF].
The effect of switching from a newer P2Y12 inhibitor to clopidogrel on platelet inhibition has been assessed in crossover studies [START_REF] Gurbel | Response to ticagrelor in clopidogrel nonresponders and responders and effect of switching therapies: the RESPOND studyCirculation[END_REF][START_REF] Kerneis | Switching acute coronary syndrome patients from prasugrel to clopidogrelJ[END_REF][START_REF] Deharo | Effectiveness of switching 'hyper responders' from prasugrel to clopidogrel after acute coronary syndrome: the POBA (Predictor of Bleeding with Antiplatelet drugs) SWITCH studyInt[END_REF][START_REF] Pourdjabbar | CAPITAL InvestigatorsA randomised study for optimising crossover from ticagrelor to clopidogrel in patients with acute coronary syndrome[END_REF]. These studies have shown that switching to clopidogrel is associated with a reduction of platelet inhibition and an increase in rates of HTPR. Therefore, the concern may be that some of the patients switched will have insufficient platelet inhibition on clopidogrel and will be exposed to increased risk of ischemic recurrence. However, the reduced potency of DAPT offered by our switching strategy, 1 month after ACS in patients free of adverse events, was not associated with an increased risk of ischemic events, compared with an unchanged DAPT strategy (4). There is also evidence that 80% of stent thrombosis will occur within the first month after stent implantation [START_REF] Palmerini | Long-term safety of drug-eluting and bare-metal stents: evidence from a comprehensive network meta-analysisJ[END_REF]; it is likely that after this time point the impact of resistance to clopidogrel on stent thrombosis incidence is less critical. Finally, the large ongoing TROPICAL-ACS (Testing Responsiveness To Platelet Inhibition On Chronic Antiplatelet Treatment For Acute Coronary Syndromes) study will provide important additional information about both the concept of evolutive DAPT with switch as well as the value of platelet function testing to guide it [START_REF] Sibbing | TROPICAL-ACS InvestigatorsA randomised trial on platelet function-guided deescalation of antiplatelet treatment in ACS patients undergoing PCI. Rationale and design of the Testing Responsiveness to Platelet Inhibition on Chronic Antiplatelet Treatment for Acute Coronary Syndromes (TROPICAL-ACS) trialThromb Haemost[END_REF]. This trial will randomize 2,600 ACS patients to standard prasugrel treatment or de-escalation of antiplatelet therapy at 1 week with a switch to clopidogrel. This de-escalation group will undergo platelet testing 2 weeks after switching with a switch back to prasugrel in case of low response [START_REF] Sibbing | TROPICAL-ACS InvestigatorsA randomised trial on platelet function-guided deescalation of antiplatelet treatment in ACS patients undergoing PCI. Rationale and design of the Testing Responsiveness to Platelet Inhibition on Chronic Antiplatelet Treatment for Acute Coronary Syndromes (TROPICAL-ACS) trialThromb Haemost[END_REF].
Study limitations
First, this was an open-label study. Nevertheless, all events for which medical attention was sought were adjudicated by a critical events committee unaware of treatment allocation; however, self-reported bleeding episodes and treatment discontinuations for which patients did not consult a health care professional remained subjective. In case of adverse event reporting or treatment modification, letters from general practitioners and medical reports were collected and analyzed. Second, this is a post hoc analysis of a randomized trial, with its inherent bias. Third, we used only the PRI-VASP assay to assess platelet inhibition. However, it is recognized as the most reliable assessment of platelet inhibition, being the only test that specifically measures P2Y12 receptor activity [START_REF] Tantry | Working Group on On-Treatment Platelet ReactivityConsensus and update on the definition of on-treatment platelet reactivity to adenosine diphosphate associated ischemia and bleedingJ[END_REF]. Fourth, by protocol we did not reassess platelet inhibition after switching and therefore could not determine the frequency and prognosis of patients defined as HTPR after switching. Last, the initial sample size calculation was made to compare the switched versus unchanged strategies; the platelet reactivity analysis was therefore underpowered for clinical outcomes and can only be considered hypothesis generating.
Conclusions
Our data suggest that in patients on aspirin plus ticagrelor or prasugrel without evidence of an adverse event in the first month following an ACS, switching the DAPT strategy to aspirin plus clopidogrel is beneficial regardless of biological platelet inhibition status. However, switching DAPT is particularly effective in hyper-responders. Indeed, hyper-response is associated with worse clinical outcomes under unchanged DAPT, and this excess risk was corrected by the switched DAPT strategy. Therefore, platelet testing could facilitate tailoring of DAPT 1 month after a coronary event, biological hyper-response being one more argument to switch DAPT. Further randomized evaluations are necessary to validate antiplatelet regimen adaptation in case of biological hyper-response to P2Y12 blockers.
Perspectives
WHAT IS KNOWN? "Newer" P2Y12 blockers (i.e., prasugrel and ticagrelor) have a more pronounced inhibitory effect on platelet activation and have proved their superiority over clopidogrel in association with aspirin. The TOPIC study suggested that switching from ticagrelor or prasugrel plus aspirin to a fixed-dose combination (FDC) of aspirin and clopidogrel (switched DAPT) 1 month after ACS was associated with a reduction in bleeding complications, without an increase in ischemic events at 1 year.
WHAT IS NEW? Biological hyper-response to a newer P2Y12 blocker is frequent and affects almost one-half of ACS patients. The benefit of a switched DAPT strategy is observed regardless of a patient's biological response to newer P2Y12 blockers. However, the benefit of switched DAPT is greater in hyper-responders, who have an impaired prognosis with unchanged DAPT; in this subgroup, switching the DAPT strategy significantly reduces the risk of bleeding and ischemic events at 1 year.
WHAT IS NEXT?
The next challenge will be to identify which patients will benefit from platelet testing and treatment adaptation in case of hyper-response to a newer P2Y12 blocker after ACS.
Abbreviations: BMI = body mass index; BMS = bare-metal stent(s); BVS = bioresorbable vascular scaffold; CAD = coronary artery disease; DES = drug-eluting stent(s); EF = ejection fraction; HDL = high-density lipoprotein; LDL = low-density lipoprotein; LTPR = low on-treatment platelet reactivity; RAS = renin-angiotensin system; PPI = proton pump inhibitors; STEMI = ST-segment elevation myocardial infarction; NSTEMI = non-ST-segment elevation myocardial infarction; UA = unstable angina.
Table 1. Clinical Characteristics and Treatment at Baseline (values are n (%) or mean ± SD)

                        Whole Cohort (N = 646)   LTPR (n = 306)   Non-LTPR (n = 340)   p Value
Male                    532 (82)                 247 (81)         285 (84)             0.18
Age, yrs                60.1 ± 10.2              60.9 ± 10.3      59.3 ± 10.1          0.05
BMI, kg/m2              27.2 ± 4.5               26.3 ± 4.0       28.0 ± 4.7           <0.01
Medical history
  Hypertension          313 (49)                 148 (48)         165 (49)             0.52
  Type II diabetes      177 (27)                 68 (22)          109 (32)             <0.01
  Dyslipidemia          283 (44)                 137 (45)         146 (43)             0.35
  Current smoker        286 (44)                 126 (41)         160 (47)             0.08
  Previous CAD          197 (31)                 89 (29)          108 (32)             0.26
Treatment
  Beta-blocker          445 (69)                 221 (72)         224 (66)             0.05
  RAS inhibitor         486 (75)                 224 (73)         262 (77)             0.21
  Statin                614 (95)                 292 (95)         322 (95)             0.41
  PPI                   639 (99)                 303 (99)         336 (99)             0.81
Antiplatelet agent                                                                     <0.01
  Ticagrelor            276 (43)                 167 (55)         109 (32)
  Prasugrel             370 (57)                 139 (45)         231 (68)
Table 2. Primary Endpoint Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value).
Table 3. Bleeding BARC ≥2 Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value).
Table 4. Bleeding All BARC Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value).
Table 5. Any Ischemic Endpoint Incidence According to Treatment Arm (columns: Events; Adjusted HR; 95% CI; p Value).
Acknowledgments
The authors thank their nurse team and technicians for their help in executing this study.
After an intensive period of four years, today is the day: writing this note of thanks is the finishing touch on my dissertation. It has been a period of intense learning for me, not only in the scientific arena, but also on a personal level. At the end of this, I should take some time to express my gratitude to those without whom this thesis would never have been possible.
First and foremost I offer my sincerest gratitude to my two enthusiastic supervisors, Prof. Benoît Hackens and Prof. Thierry Ouisse. They have taught me, both consciously and unconsciously, how good experimental physics is done. I appreciate all their contributions of time, ideas, and funding that made my Ph.D. experience productive and stimulating. The joy and enthusiasm they have for research was contagious and inspiring for me, even during tough times in the Ph.D. pursuit. My Ph.D. has been an amazing experience and I thank Benoît and Thierry wholeheartedly, not only for their tremendous academic support and the systematic guidance they put into training me in the scientific field, but also for the patience and kindness that helped me overcome tough situations. Thank you for picking me four years ago and offering me this great opportunity to work in two great labs. I would then like to take a moment to thank my thesis committee members. Thank you to Prof. Sylvain Dubois and Prof. Matthieu Verstraete for being my rapporteurs.
Also, thank you to Prof. Jean-Christophe Charlier and Prof. Vincent Bayot for being the examiners of my thesis. I greatly appreciate the time you invested in my thesis and your valuable insights for its revision. I feel honoured that you accepted to be on my committee.
I am grateful to the LMGP members with whom I interacted during the first stage of my thesis: Didier Chaussende, Jean-Marc Dedulle and Eirini Sarigiannidou for good advice on my research, Odette Chaix for the Raman analysis, Herve Roussel for the XRD analysis, Joseph La Manna, Serge Quessada and Lucile Parent-Bert for technical support, and Benjamin Piot (LNCMI, CNRS) for helping me with low-temperature measurements in Grenoble. The members of the Cristallogenese group in LMGP have contributed immensely to my personal and professional time in Grenoble. Special thanks to our close team members: Nikolaos Tsavdaris, Kanaparin Ariyawong, Yunji Shin, Sven Tengeler, Demitrios Zevgitis, Shiqi Zhang and Hoang-Long Le Tran. From our first Christmas party at Yunji's home to the laser game, rock climbing, weekend hiking, and our magic table always stocked with snacks and beverages at the lab, I spent so many joyful and lighthearted moments with all of you. Your great support and company eased my adaptation to a new life in Europe. And to my friends in Grenoble, my dear roommates Xuexin Ma and Yunxia Yao: I will never forget those moments filled with laughter and tears lighting up this snowy mountain city.
Similarly profound gratitude goes to those with whom I interacted during my research time at UCLouvain. To the members of BSMA: Prof. Luc Piraux for low-temperature measurements, Delphine Magnin for SEM analysis, Cécile D'Haese for AFM analysis, and Sergey Basov and Pedro Sá for helping me change helium and for the endless gas and water pressure checks. Special thanks to Pascale Lipnik for the TEM analysis; thanks to your efforts, I finally got TEM results on my samples. To the members of MOST: Prof. Sophie Hermans for supporting all the chemical experiments, Jean-François Statsyns for technical support, and Koen Robeyns for the crystallography analysis. Special thanks to Nadzeya Kryvutsa for all the advice and support on the chemistry side; to be honest, I cannot imagine the time spent at Building Lavoisier without you. To the members of ICTEAM: Ferran Urena Begara for valuable suggestions on the Raman analysis. To the members of NAPS: Aurélie Champagne, for all your ideas and discussions on this topic. To the members of Winfab: Sébastien Faniel for help with e-beam lithography and metal deposition, Ester Tooten for the laser machining, and Milout Zitout for the wafer processing. Thank you all for helping my work in Louvain-la-Neuve.
I am hugely appreciative of our fun group 'Fridge': Damien Cabosart, Thanh Nhan Bui, Andra-Gabriela Iordanescu, Boris Brun-Barrière, Sébastien Toussaint, Nicolas Moreau, Hui Shang, and our previous group members Frederico Matins and Cristiane Nascimento Santos. Starting from my FRIA presentation, they have been a source of friendship as well as good advice and collaboration. Nhan, thank you for having helped me so much with the endless day-and-night cycles of cooling down and warming up, pumping vacuum and refilling helium, and for the advice on electronic measurements; Damien, hey, three years' officemate, thank you for your help with lithography training, for advice on the 2D exfoliation and, of course, for the fun jokes; Seb, from all the memories of the March Meeting to every daily conversation we have shared, you are like a walking encyclopaedia to me; and Andra, you are so earnest and generous-hearted, thanks for all your warm-hearted encouragement and support and all the sweets to cheer me up. To Maman Belge, Claire Schollaert-Nguyen, Pierre Nguyen, Meng Liao, Xiaomeng Zhang, Xiaoya Li, Shouwei Zhang, Zimin Li and Xiaoqin Gao: thank you as well.
Introduction
MAX phases are layered early transition metal ternary carbides and nitrides, so called because they are composed of M, an early transition metal, A, a group A element, and X, which is C and/or N. The MAX phase structure is composed of near close-packed planes of M atoms, with the X atoms occupying all the octahedral sites between them. They combine some of the best properties of ceramics and metals. Their physical properties (stiffness, damage and thermal shock resistance, high thermal and electrical conductivity), along with the fact that they are readily machinable, make them extremely attractive in terms of potential technological applications.
In 2011, it was discovered that by immersing Al-containing MAX phases in hydrofluoric (HF) acid it was possible to selectively etch the Al, resulting in two-dimensional (2D) materials that were labelled MXenes, to denote the removal of the A-group element and to make the connection to another conducting 2D material, graphene. These new members of the 2D materials family are stronger, more chemically versatile and more electrically conductive than many other 2D materials. As such, they are highly interesting for new applications, e.g. specialized in vivo drug delivery systems, hydrogen storage, or as replacements of common materials in, e.g., batteries, sewage treatment, and sensors. The list of potential applications of these new materials is long.
In this thesis, as its title indicates, we present our work on the synthesis, structural characterization and electron transport of MAX phases and their 2D derivatives, MXenes. The manuscript is organized as follows:
Chapter 1 A general introduction to MAX phases and their derivative MXenes is given, including the lattice and electronic structure, and experimental and theoretical work on synthesis, characterization and properties.
For MAX phase:
Chapter 2 Motivated by the theoretically expected anisotropic properties of these layered materials, we grow bulk single crystals, which is a natural way to obtain samples in which the anisotropy of the physical properties can be experimentally probed. Knowledge of the low-temperature behaviour of single crystals is also vital, because it provides insight into the intrinsic physical properties of MAX phases. Using a high-temperature solution growth and slow-cooling technique, several MAX phase single crystals have been successfully grown. Structural characterization confirms the single-crystalline character of the samples.
Chapter 3 Experimentally, a set of data is obtained from single crystals of V 2 AlC and Cr 2 AlC as a function of temperature and magnetic field. The resistivities of single-crystalline Cr 2 AlC and V 2 AlC are much lower than those of polycrystalline samples. In particular, we obtain a substantial anisotropy ratio between the in-plane and c-axis resistivities, in the range of a few hundred to a few thousand. From magnetoresistance and Hall effect measurements, the in-plane transport behaviour of MAX phases is studied. Theoretically, a general model is proposed for describing the weak-field magneto-transport properties of nearly free electrons in 2D hexagonal metals, which is then modified to be applicable to the transport properties of layered MAX phases.
For MXene:
Chapter 4 We report on a general approach to etch V 2 AlC single crystals and mechanically exfoliate multilayer V 2 CT x MXenes (T = termination group: -OH, -F, =O). We then investigate the structure of the obtained MXenes by means of XRD, SEM, TEM, Raman and AFM techniques. The second part of this chapter discusses the process aimed at obtaining Ti 2 CT x MXene from Ti 2 SnC single crystals.
Chapter 5 We then pursue the electrical device fabrication process and proceed with electrical measurements, performed down to low temperature, with the aim of extracting electronic transport parameters. We obtain some first-hand data on V 2 CT x MXenes, including the average resistivity, the field-effect mobility µ FE and the Hall mobility µ H , which contributes to the understanding of this class of materials.
Chapter 1
From MAX phases to MXene:
Background, History and Synthesis
This introductory chapter aims to give a brief overview of MAX phases and their derived two-dimensional (2D) material, MXene.
In the first part, the history of MAX phases is given, including the salient properties and the status of current understanding of the MAX phases. Their potential applications are also highlighted. The up-to-date theoretical and experimental understandings on the electronic transport properties of MAX phases are reviewed as well.
The second part is dedicated to the discovery of 2D MXene, encompassing both theoretical and experimental studies of relevance to their synthesis, properties and potential applications in the field of transparent conductors, environmental treatment and energy storage.
Structure of MAX phases
MAX phases adopt a hexagonal layered structure with space group P6 3 /mmc and two formula units per cell. They consist of alternating, nearly close-packed layers of [M 6 X] octahedra interleaved with layers of pure A-group atoms. The [M 6 X] octahedra are connected to each other by shared edges. The main structural difference among the so-called 211, 312 and 413 phases is the number of M layers between every two A layers. As can be seen in Figure 1.2, two M layers are intercalated between two A layers in the 211 phase, and three and four M layers between two A layers in the 312 and 413 phases, respectively. The figure also shows the corresponding unit cells with their nanolamellar structure, in which the presence of metallic M-A bonds and covalent M-X bonds results in a combination of metallic and ceramic properties.
MAX phases are typically good thermal and electrical conductors, as well as thermodynamically stable at high temperatures and oxidation resistant. MAX phases exhibit extreme thermal shock resistance, damage tolerance and are easily machinable. All these characteristics make MAX phases promising candidates for various industrial applications.
Synthesis of MAX phases
In the last two decades, substantial effort has been devoted to the synthesis and characterization of MAX phases. Here we summarize the methods applied to synthesize MAX phase powders, bulk materials and thin films. As can be seen from Table 1.1, up to now MAX phases have usually been produced in a highly polycrystalline form, except in a limited number of reported cases dealing either with thin single-crystalline layers [5], or single-crystalline platelets [START_REF] Etzkorn | V 2 AlC, V 4 AlC 3-x (x=0.31), and V 12 Al 3 C 8 : Synthesis, Crystal Growth, Structure, and Superstructure[END_REF]. Producing bulk single crystals is a natural way to obtain samples in which the anisotropy of the physical properties can be experimentally probed, and which can also be used for developing technological processes leading in turn to the production of macroscopic two-dimensional MXene samples with acceptable area. Until recently, the difficulty of producing single-crystal MAX phases prohibited a thorough investigation of their physical properties. In 2011, our LMGP team found a way to produce such single crystals [START_REF] Mercier | Morphological instabilities induced by foreign particles and Ehrlich-Schwoebel effect during the two-dimensional growth of crystalline Ti 3 SiC 2[END_REF][START_REF] Mercier | Raman scattering from Ti 3 SiC 2 single crystals[END_REF], and this opened the door to new lines of research in a field which already triggers an intense international research activity. More details about the crystal growth of MAX phases will be discussed in Chapter II.
Electronic properties of MAX phases
From various aspects, MAX phases exhibit interesting and unusual properties. The present thesis mainly focuses on their electrical properties. Herein, we summarize some of the magneto-electronic transport properties of MAX phases in polycrystalline form. In order to highlight the electrical conduction mechanism in the MAX phases, Hall effect, Seebeck effect and magnetoresistance measurements are presented.
Resistivity
In general, most MAX phases are excellent metal-like electrical conductors; in particular, the conductivity of MAX phases is higher than that of the corresponding binary transition metal carbides or nitrides. Some of them, such as Ti 3 SiC 2 and Ti 3 AlC 2 , are better conductors than Ti itself. Up to now, the resistivity of the most studied MAX phases has been found to be metallic-like: in the phonon-limited regime, the resistivity ρ increases linearly with increasing temperature. This behaviour can be described by a linear fit according to the relation
ρ(T) = ρ_0 [1 + β(T - T_RT)]    (1.1)
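As a minimal illustration of how Eq. 1.1 is used in practice, the sketch below fits ρ_0 (the resistivity at the reference temperature T_RT) and β (the temperature coefficient) to a set of (T, ρ) points by linear least squares. The numerical values are purely illustrative placeholders and not measured data.

```python
import numpy as np

# Illustrative (T, rho) data in K and micro-ohm.m; not measured values.
T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
rho = np.array([0.12, 0.18, 0.24, 0.30, 0.36])

T_RT = 300.0  # reference (room) temperature in K

# Eq. 1.1 is linear in T: rho = rho_0 + rho_0 * beta * (T - T_RT)
slope, intercept = np.polyfit(T - T_RT, rho, 1)
rho_0 = intercept          # resistivity at T_RT
beta = slope / rho_0       # temperature coefficient of resistivity (K^-1)

print(f"rho_0 = {rho_0:.3f} micro-ohm.m, beta = {beta:.2e} K^-1")
```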
Table 1.1: Synthesis of various forms of MAX phases.
Powder: Mechanical Alloying (MA), e.g. Ti 3 SiC 2 [START_REF] Li | Fabrication of highly dense Ti 3 SiC 2 ceramics by pressureless sintering of mechanically alloyed elemental powders[END_REF]; Self-propagating high-temperature synthesis (SHS), e.g. Ti 3 AlC 2 [START_REF] Lopacinski | Synthesis of Ternary Titanium Alumimum Carbides Using Self-Propagating High-Temperature Synthesis Technique[END_REF] and Ti 2 AlC [START_REF] Yeh | Effects of TiC and Al 4 C 3 addition on combustion synthesis of Ti 2 AlC[END_REF].
Bulk materials (polycrystalline MAX and its composites): (in-situ) solid state reaction / pressureless sintering, e.g. Ti 3 SiC 2 [START_REF] Yang | Reaction in Ti 3 SiC 2 powder synthesis from a Ti-Si-TiC powder mixture[END_REF][13] and Ti 3 SnC 2 , Ti 2 SnC [START_REF] Dubois | A New Ternary Nanolaminate Carbide: Ti 3 SnC[END_REF]; hot isostatic pressing (HIP) sintering, e.g. Hf 2 PbC, Zr 2 PbC [START_REF] El-Raghy | Synthesis and characterization of Hf 2 PbC, Zr 2 PbC and M 2 SnC (M=Ti, Hf, Nb or Zr)[END_REF] and V 2 AlC [START_REF] Barsoum | The M n+1 AX n phases: A new class of solids[END_REF]; hot pressing (HP) sintering, e.g. Cr 2 AlC [START_REF] Lin | In-situ hot pressing/solid-liquid reaction synthesis of bulk Cr 2 AlC[END_REF] and Nb 2 AlC [START_REF] Zhang | Reactive Hot Pressing and Properties of Nb 2 AlC[END_REF]; spark plasma sintering (SPS), e.g. Ti 3 SiC 2 [START_REF] Zhang | Rapid fabrication of Ti 3 SiC 2 -SiC nanocomposite using the spark plasma sintering-reactive synthesis (SPS-RS) method[END_REF] and Ti 2 AlC [START_REF] Shi | Fabrication, Microstructure and Mechanical Properties of TiC/Ti 2 AlC/TiAl3 in situ Composite[END_REF]; slip casting (SC), e.g. Ti 3 AlC 2 [START_REF] Sun | Surface Chemistry, Dispersion Behavior, and Slip Casting of Ti 3 AlC 2 Suspensions[END_REF].
Thin film: physical vapor deposition (PVD), e.g. Ti 3 GeC 2 , Ti 2 GeC [START_REF] Högberg | Epitaxial Ti 2 GeC, Ti 3 GeC 2 , and Ti 4 GeC 3 MAX-phase thin films grown by magnetron sputtering[END_REF]; chemical vapor deposition (CVD), e.g. Ti 3 SiC 2 [START_REF] Jacques | Pulsed reactive chemical vapor deposition in the C-Ti-Si system from H 2 /TiCl 4 /SiCl 4[END_REF]; solid-state reactions, e.g. Ti 2 AlN [START_REF] Hoglunda | Topotaxial growth of Ti 2 AlN by solid state reaction in AlNaTi(0001) multilayer thin films[END_REF]; thermal spraying, e.g. Ti 2 AlC [START_REF] Sonestedt | Oxidation of Ti 2 AlC bulk and spray deposited coatings[END_REF].
Hall effect measurements give access to the charge carrier concentration and to the type of majority charge carriers. Furthermore, most MAX phases exhibit a quadratic, positive, non-saturating magnetoresistance, where the MR is defined as Δρ/ρ = [ρ(B) - ρ(B=0)] / ρ(B=0).
The combination of a small R_H, the linearity of the Hall voltage with magnetic field and, finally, the parabolic, non-saturating magnetoresistance strongly suggests that most MAX phases are compensated conductors [START_REF] Finkel | Lowtemperature transport properties of nanolaminates Ti 3 AlC 2 and Ti 4 AlN 3[END_REF][START_REF] Finkel | Electronic, thermal, and elastic properties of Ti 3 Si (1-x) Ge x C 2 solid solutions[END_REF][START_REF] Finkel | Magnetotransport properties of the ternary carbide Ti 3 SiC 2 Hall effect, magnetoresistance, and magnetic susceptibility[END_REF]. Assuming that electron-like and hole-like states contribute approximately equally to the electrical conductivity allows the transport data to be interpreted with the traditional two-band model. The electrical conduction is then carried by both electrons and holes and is described by the relation
σ = 1/ρ = e (n µ_n + p µ_p)    (1.2)
where e is the electronic charge, n and p are the electron and hole densities, and µ_n and µ_p are the electron and hole mobilities.
The Hall coefficient R_H in the low-field limit is defined by
R_H = (p µ_p^2 - n µ_n^2) / [e (p µ_p + n µ_n)^2]    (1.3)
The magnetoresistance behaviour is described by the following expression
MR = Δρ / ρ(B=0) = α B^2    (1.4)
where B is the magnetic field and α is the magnetoresistance coefficient. For two types of carriers, the two-band model is required and α is given by
α = n p µ_n µ_p (µ_n + µ_p)^2 / (n µ_n + p µ_p)^2    (1.5)
In these equations, n, p, µ_n and µ_p are unknown. Assuming that MAX phases are compensated conductors leads to n = p, which allows Eqs. 1.2, 1.3 and 1.5 to be simplified to:
σ = 1/ρ = e n (µ_n + µ_p)
R_H = (µ_p - µ_n) / [e n (µ_p + µ_n)]    (1.6)
α = µ_n µ_p    (1.7)
If we further assumed µ_n = µ_p, Eq. 1.6 would imply R_H = 0. The relatively low measured values of R_H therefore indicate, through Eq. 1.6, that the two mobilities are close to each other. The electron-like and hole-like mobilities were determined by Barsoum and coworkers [START_REF] Finkel | Lowtemperature transport properties of nanolaminates Ti 3 AlC 2 and Ti 4 AlN 3[END_REF][START_REF] Finkel | Electronic, thermal, and elastic properties of Ti 3 Si (1-x) Ge x C 2 solid solutions[END_REF][START_REF] Finkel | Magnetotransport properties of the ternary carbide Ti 3 SiC 2 Hall effect, magnetoresistance, and magnetic susceptibility[END_REF][START_REF] Scabarozi | Electronic and thermal properties of Ti 3 Al(C 0.5 , N 0.5 ) 2 , Ti 2 Al(C 0.5 ,N 0.5 ) and Ti 2 AlN[END_REF]; the results are summarized in Table 1.2.
From the results listed, MAX phases can be considered as compensated conductors, and the traditional two-band model gives the order of magnitude of the mobilities and charge carrier densities: charge carrier densities are typically in the range of 1.1-6.3 × 10^27 m^-3, and both n and p are temperature independent, consistent with the metal-like character. Mobilities are in the range of (0.55-9) × 10^-3 m^2/V·s.
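To make the use of Eqs. 1.6-1.7 concrete, the short sketch below inverts a set of measured quantities (ρ, R_H, α) into n, µ_n and µ_p under the compensated assumption n = p. The closed-form inversion follows directly from those two equations; the input numbers are illustrative placeholders chosen only to land in the ranges quoted above, not values taken from Table 1.2.

```python
import numpy as np

E_CHARGE = 1.602e-19  # elementary charge in C

def two_band_compensated(rho, r_h, alpha):
    """Invert Eqs. 1.6-1.7 (with n = p) for n, mu_n and mu_p.

    rho   : resistivity in ohm.m
    r_h   : Hall coefficient in m^3/C
    alpha : magnetoresistance coefficient in T^-2 (= mu_n * mu_p)
    """
    sigma = 1.0 / rho
    # sigma = e n (mu_n + mu_p) and R_H = (mu_p - mu_n) / (e n (mu_p + mu_n))
    # => mu_p - mu_n = R_H * sigma
    d = r_h * sigma
    # alpha = mu_n * mu_p = ((mu_n + mu_p)^2 - (mu_p - mu_n)^2) / 4
    s = np.sqrt(4.0 * alpha + d**2)   # mu_n + mu_p
    mu_p = 0.5 * (s + d)
    mu_n = 0.5 * (s - d)
    n = sigma / (E_CHARGE * s)        # = p for a compensated conductor
    return n, mu_n, mu_p

# Illustrative inputs only (orders of magnitude typical of the text):
n, mu_n, mu_p = two_band_compensated(rho=0.3e-6, r_h=-2.0e-11, alpha=5.0e-5)
print(f"n = p = {n:.2e} m^-3, mu_n = {mu_n:.2e} m^2/Vs, mu_p = {mu_p:.2e} m^2/Vs")
```

With these illustrative inputs the routine returns n = p of about 1.5 × 10^27 m^-3 and mobilities of about 7 × 10^-3 m^2/V·s, i.e. within the ranges quoted in the paragraph above, which is a useful self-consistency check of the compensated two-band picture.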
Electronic band structure of MAX phases
The electronic structure of a large number of MAX phases has been calculated using density functional theory (DFT). In general, the simulation results show that there is no gap between the valence band and the conduction band of MAX phases, which is consistent with the metal-like conductivity demonstrated experimentally. At the Fermi level, the total density of states (TDOS) is mainly dominated by M 3d states, suggesting that the 3d states of the M element dominate the electronic conductivity of MAX phases.
In this section, we will focus on the electronic band structure and DOS of 211 phase (Cr 2 AlC and V 2 AlC) and one 312 phase (Ti 3 SiC 2 ), which are the main phases involved in the present experimental work.
Cr 2 AlC
The crystallographic, electronic, dielectric function and elastic properties of Cr 2 AlC were studied by means of pseudo-potential plane-waves method using the density functional theory. The energy band structure of Cr 2 AlC is shown in Figure 1.4(a) [START_REF] Jia | Ab initio calculations for properties of Ti 2 AlN and Cr 2 AlC[END_REF].
Table 1.2: Transport parameters of MAX phases reported by Barsoum and coworkers. Columns: MAX phase; ρ_0 (µΩ·m); R_H (10^-11 m^3/C); n (10^27 m^-3); p (10^27 m^-3); µ_n (10^-3 m^2/V·s); µ_p (10^-3 m^2/V·s); α (10^-5 T^-2); Ref.
V 2 AlC
For the case of V 2 AlC, as shown in Figure 1.5, the metallic nature can be observed from the bands crossing the Fermi level. Also, around the Fermi level, weak energy dispersions appear along the M-L and Γ-A directions, which leads to an anisotropic conductivity in this compound (i.e., a weaker conductivity along the c-axis).
Previous discussions on Cr 2 AlC have demonstrated that the hybridized M 3d -C 2p states dominate the bonding of this type of MAX phase. The calculated V 2 AlC confirmed such conclusions again, and demonstrates that Al p states also contribute to the orbital bonding. Further analysis of PDOS shows that the hybridization peaks V 3d -C 2p are located at a lower range with respect to that of V 3d -Al 3p, suggesting a stronger V-C bond but a weaker V -Al bond. Actually, the valence DOS can roughly be divided into three major regions: (i) from -12 eV to -10 eV with mainly C 2s states, which is mainly because of the localized or tightly bound electrons; (ii) from -6.5 to -1.0 eV with strongly hybridized V 3d and C 2p or Al 3p states; and finally (iii) from -1.0 eV to 0 eV with mainly V 3d states. The Fermi level is situated halfway between the peak and the valley, which leads to the stability and conductivity of these materials.
Comparing the DOS of Cr 2 AlC and V 2 AlC, a difference in bonding between the two structures can be inferred from the weakly hybridized M 3d - A 3p states, which are closer to the Fermi level for V 2 AlC (-2 eV) than for Cr 2 AlC (-2.5 eV); this indicates that the V-Al bond is slightly weaker than the Cr-Al bond. Meanwhile, similarities can also be noted: the M 3d - C 2p bonds are stronger than the M 3d - A 3p bonds, which is mainly responsible for the high modulus and strength of both M 2 AlC MAX phases. The metal-like behaviour of the electronic properties is attributed to the M 3d states near the Fermi level.
From the electronic band structures of both compounds, it is clearly indicated that, for both Cr 2 AlC and V 2 AlC, the bands are much less dispersive (or even non-crossing) along the c-axis than along the basal plane, suggesting that the electronic properties of both MAX phases are anisotropic. The Fermi velocity (∂E/∂k) along the basal plane is higher than that along the c-axis. Herein, we would like to discuss Ti 3 SiC 2 as a representative of the 312 MAX phases, whose thermopower S is almost zero over a wide range of temperature [START_REF] Yoo | Materials science: Ti 3 SiC 2 has negligible thermopower[END_REF]. First of all, it is essential to distinguish two independent types of M element, Ti(1) and Ti(2), as in Figure 1.6. The Fermi surfaces of two particular bands are shown in Figure 1.7(a) [START_REF] Chaput | Anisotropy and thermopower in Ti 3 SiC 2[END_REF]. The Fermi surface of the upper band is very flat and is located around the c/2 plane. Since the velocities are normal to the Fermi surface, they are mainly oriented along the c-axis.
This explains the predominant role of this band in the c-axis component and its relatively minor role in the basal plane. On the other hand, the normals to the Fermi surface of the lower band have large components in the ab plane and therefore contribute to the basal-plane component. Evidence for the anisotropic thermopower of Ti 3 SiC 2 has also been given by Magnuson [38]: in the polycrystalline bulk sample the Seebeck coefficient is about zero, whereas a positive value in the range of 4-6 µV/K is measured on a (000l)-oriented thin film, as shown in Figure 1.7(b).
Figure 1.6: Crystal structure [START_REF] Magnuson | Electronic-structure origin of the anisotropic thermopower of nanolaminated Ti 3 SiC 2 determined by polarized x-ray spectroscopy and Seebeck measurements[END_REF] and electronic band structure of Ti 3 SiC 2 [START_REF] Zhou | Electronic structure and bonding properties in layered ternary carbide Ti 3 SiC 2[END_REF].
Figure 1.7: (a) Fermi surfaces of two particular bands of Ti 3 SiC 2 [START_REF] Chaput | Anisotropy and thermopower in Ti 3 SiC 2[END_REF] and (b) calculated and measured Seebeck coefficients of Ti 3 SiC 2 : a (000l)-oriented thin film (triangles) and a polycrystalline sample (squares) [START_REF] Magnuson | Electronic-structure origin of the anisotropic thermopower of nanolaminated Ti 3 SiC 2 determined by polarized x-ray spectroscopy and Seebeck measurements[END_REF].
PhD work on MAX phase
The primary objective of the present PhD thesis is to deal with the synthesis of single crystal MAX phases (Cr 2 AlC and V 2 AlC) and the characterization of their intrinsic properties. Thermodynamics and kinetics of crystal nucleation and growth were deduced from the synthesis of different single crystal MAX phases (See Chapter II).
The main physical properties, mainly magneto-electronic transport properties of the MAX phases were also studied and the consistency of results obtained in experiments and from simulations was checked. More specifically, probing the anisotropic properties was also achieved thanks to the crystalline samples (See Chapter III).
MXene
2D materials
2D family
Since the pioneering work on graphene, worldwide enthusiasm for 2D materials, including inorganic graphene analogues (IGAs), has rapidly grown. These IGAs include hexagonal boron nitride (h-BN), transition metal oxides and hydroxides, transition metal dichalcogenides (TMDs), etc. Because the thickness is significantly smaller than the other two dimensions, which results in dramatic changes in electronic structure and lattice dynamics, these 2D materials exhibit unique properties compared with their three-dimensional counterparts.
Top-down Approaches
Mechanical Cleavage
Micromechanical cleavage, as originally used to peel graphene off graphite, can be extended to other layered materials with weak van der Waals (vdW) forces or hydrogen bonds between layers. It shows that 2D layers can be readily exfoliated from 3D crystals mechanically by cleaving the crystals against another surface. Micromechanical cleavage has been applied to the isolation of h-BN, MoS 2 and NbSe 2 from their layered phases. The resulting 2D sheets are stable under ambient conditions and exhibit high crystal quality, with thicknesses ranging from 1 to 10 atomic layers.
Micromechanical cleavage has proven an easy and fast way of obtaining highly crystalline atomically thin nanosheets. However, this method produces a large quantity of thicker sheets, while the thinner or monolayer ones only reside in a very minor proportion; thus this method is not scalable to mass production for potential engineering applications. Also, the size of the 2D sheets produced by this technique is limited by the size of the parent 3D crystal and only work for weakly bonded layered materials.
Chemical Exfoliation
As an alternative method, chemically derived exfoliations, such as liquid-phase exfoliation, and ion-intercalation induced exfoliation, have been demonstrated to effectively isolate single layer and few layers from those thicker structures in large quantities. Chemical exfoliation of 3D layered material is used for the production of a wide range of 2D materials chemistries, as varied as graphene and its oxide [START_REF] Hernandez | High-yield production of graphene by liquid-phase exfoliation of graphite[END_REF], h-BN [START_REF] Zhi | Large-Scale Fabrication of Boron Nitride Nanosheets and Their Utiliza-tion in Polymeric Composites with Improved Thermal and Mechanical Properties[END_REF], TMDs [START_REF] Zeng | An Effective Method for the Fabrication of Few-Layer-Thick Inorganic Nanosheets[END_REF] metal oxide and hydroxide [START_REF] Li | Positively Charged Nanosheets Derived via Total Delamination of Layered Double Hydroxides[END_REF][START_REF] Wang | Recent Advances in the Synthesis and Application of Layered Double Hydroxide (LDH) Nanosheets[END_REF].
The principle of chemical exfoliation is to break the bonds between the layers by chemical, chemical-thermal treatment, or chemical reaction assisted by sonication procedures. Most of the chemical exfoliation processes are conducted in the aqueous environment, relying upon strong polar solvents, reactive reagents, or ion intercalation, which is versatile and up-scalable. The fabricated 2D materials can be re-dispersed in common organic solvents, or in various environments and substrates, which is not feasible for mechanical cleavage methods. These methods step up a wide range of potential large-scale preparations and applications of 2D materials for nano devices, composites, or liquid phase chemistry.
Bottom-up Approaches
Chemical Vapour Deposition (CVD)
The main technique that uses a bottom-up approach to synthesize 2D materials is CVD. 2D layers of graphene [START_REF] Reina | Large Area, Few-Layer Graphene Films on Arbitrary Substrates by Chemical Vapor Deposition[END_REF], h-BN [START_REF] Song | Large Scale Growth and Characterization of Atomic Hexagonal Boron Nitride Layers[END_REF], and MoS 2 [START_REF] Lee | Synthesis of Large-Area MoS 2 Atomic Layers with Chemical Vapor Deposition[END_REF] have been successfully obtained by using this technique. Compared to the mechanical and chemical exfoliation, the main asset of CVD method is the feasibility of scaling-up. More than 30 inch screen size graphene film used as transparent electrodes can be successfully fabricated by roll-to-roll production [START_REF] Bae | Roll-to-roll production of 30-inch graphene films for transparent electrodes[END_REF]. CVD also allows for the fabrication of electronic devices such as transistors [START_REF] Mark | Transfer-Free Batch Fabrication of Single Layer Graphene Transistors[END_REF]. Yet, compared with the simple and easily-handled chemical exfoliation methods, the CVD strategy is costly.
Surface-assisted Epitaxial Growth
Surface-assisted epitaxial growth can be regarded as a modification of the CVD method, in which the substrate surface serves as a seed crystal rather than as a template or a catalyst. This method can alternatively be considered as molecular beam epitaxy (MBE) growth. Epitaxial growth has been successfully applied to fabricate one-atom-thick Si sheets (silicene), and the unreactive metal Ag, with its sixfold surface symmetry, provides a promising substrate to facilitate the growth of hexagonal silicene or Si nanoribbons [START_REF] De Padova | Multilayer Silicene Nanoribbons[END_REF].
Application of 2D materials
By definition, all 2D materials have a high aspect ratio of lateral dimensions to a thickness of a few atoms, which typically results in very high specific surface areas. This unique property suggests that 2D materials can be applied in energy storage systems, a keystone of today's technologies.
Considerable research efforts have been dedicated to exploring and developing new anode materials for lithium-ion batteries (LIBs), with the aim of obtaining materials with higher capacities and lifetimes than current graphite or lithium titanate anodes.
As host materials for metal-ion batteries, 2D materials have a unique morphology that enables fast ion diffusion and provides ion insertion channels [START_REF] Liu | Two-Dimensional Nanoarchitectures for Lithium Storage[END_REF].
Another general merit is that most 2D materials show properties that differ from those of their 3D counterparts, which opens up wide-ranging applications: reinforcement of polymer composites to produce lightweight, high-strength and conductive composites [START_REF] Stankovich | Graphene-based composite materials[END_REF]; and transparent flexible electronic devices, thanks to their good electronic properties and flexibility [START_REF] Eda | Large-area ultrathin films of reduced graphene oxide as a transparent and flexible electronic material[END_REF].
MXenes
Structure of MXenes
Herein, a new member of the 2D materials family is introduced. MXenes, derivatives of the MAX phases, are a group of two-dimensional materials consisting of a few atomic layers of transition metal carbides, carbonitrides and nitrides, containing either a single M element (mono-M) or more than one M element (solid-solution M or double-M element). The name MXene was coined to emphasize the loss of the A element from the parent MAX phase and to highlight their 2D nature, similar to graphene. The ideal MXene composition can be described by M n+1 X n ; in general, at least three different formulas have been discovered: M 2 X, M 3 X 2 and M 4 X 3 . The structure is shown in Figure 1.9. They can be made in different forms: mono-M elements, a solid solution of at least two different M elements, or ordered double-M elements. Chemically exfoliated MXene exhibits surface groups and is denoted M n+1 X n T x , where T x describes the surface groups -OH, =O and -F. MXene sheets are usually stacked, and ions and/or molecules may be positioned in between the sheets without strong chemical bonding (intercalants). These intercalated MXenes are described by M n+1 X n T x -IC, where IC denotes the intercalants.
Since the first MXene, Ti 3 C 2 T x , was reported in 2011 [START_REF] Naguib | Two-Dimensional Nanocrystals Produced by Exfoliation of Ti 3 AlC 2[END_REF], over 19 MXenes have been synthesized to date, while another 25 have been predicted according to density functional theory (DFT) calculations. The elemental composition of these MXenes is also shown in Figure 1.9; Ti 3 C 2 T x and Ti 2 CT x remain the most studied MXenes to date.
MXenes originate from MAX phases. That is to say, the MAX phase can be described as 2D layers of early transition metal carbides and/or nitrides stuck together by the A-element layers. Because the M-A bond is metallic, it is difficult to separate the MX layers by simple mechanical shearing of MAX phases, in the way graphene is derived from graphite. Nevertheless, as the M-X bond has a mixed covalent/metallic/ionic character and is stronger than the M-A bond, selective etching makes it possible to remove the A-element layers without destroying the M-X bonds, which is the basic principle used to access MXenes. In plane view, the structure of MXene sheets is hexagonal with space group P6 3 /mmc (see Figure 1.10). Taking the MXene Ti 3 C 2 as an example, the unit cell has lattice parameters a = b = 3.05 Å, while the lattice parameter c is 19.86 Å for ideal Ti 3 C 2 [START_REF] Shi | Structure of Nanocrystalline Ti 3 C 2 MXene Using Atomic Pair Distribution Function[END_REF]; surface groups and intercalants expand the separation between sheets and increase the lattice parameter c [START_REF] Wang | Atomic-Scale Recognition of Surface Structure and Intercalation Mechanism of Ti 3 C 2 X[END_REF].
Synthesis of MXenes
Etching with hydrofluoric acid
Attempts at etching the A layers from MAX phases have been made by heating the MAX phase under vacuum or in molten salts at high temperatures, which results in the selective loss of the A element; however, 3D M n+1 X n rock-salt structures were also formed, due to the detwinning of the M n+1 X n layers at elevated temperature [START_REF] Farbera | The Topotactic Transformation of Ti 3 SiC 2 into a Partially Ordered Cubic TiiC0.67Si0.06) Phase by the Diffusion of Si into Molten Cryolite[END_REF]. Moreover, the use of strong etchants, such as Cl 2 gas, at temperatures over 200 °C results in the etching of both the A and M atoms, yielding carbide-derived carbons (CDCs).
In 2011, pioneering work was done by selectively etching Al from Ti 3 AlC 2 using aqueous HF at room temperature (RT), as can be seen in Figure 1.11(a) [START_REF] Naguib | Two-Dimensional Nanocrystals Produced by Exfoliation of Ti 3 AlC 2[END_REF].
In this process, the Al atoms are replaced by O, OH and/or F atoms. The removal of the Al layers dramatically weakens the interactions between the M n+1 X n layers, which allows them to be separated. This simple selective etching of Al from the parent MAX phases was then successfully extended and yielded multi-layered Ti 2 CT x , Ta 4 C 3 T x , (V 0.5 Cr 0.5 ) 3 C 2 T x , Ti 3 CN x T x , Nb 2 CT x , V 2 CT x , Nb 4 C 3 T x , Mo 2 TiC 2 T x , Mo 2 Ti 2 C 3 T x , etc. [START_REF] Ghidiu | Conductive two-dimensional titanium carbide /'clay/' with high volumetric capacitance[END_REF][START_REF] Yang | Two-Dimensional Nb-Based M 4 C 3 Solid Solutions (MXenes)[END_REF][START_REF] Anasori | Two-Dimensional, Ordered, Double Transition Metals Carbides (MXenes)[END_REF]. The typical accordion-like morphology of multilayered MXenes is shown in Figure 1.11(b, c).
This preferential etching of the M-A bond in Al-containing MAX phases may be summarized as follows:
M_{n+1}AlX_n + 3 HF → M_{n+1}X_n + AlF_3 + 1.5 H_2    (1.8)
M_{n+1}X_n + 2 H_2O → M_{n+1}X_n(OH)_2 + H_2    (1.9)
M_{n+1}X_n + 2 HF → M_{n+1}X_nF_2 + H_2    (1.10)
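As a rough illustration of what the stoichiometry of reactions 1.8 and 1.10 implies, the sketch below estimates the minimum HF consumption per gram of Ti 3 AlC 2 if both reactions went to completion with full F termination. The atomic masses are standard values; the estimate deliberately ignores the large excess of HF used in practice, so it is only a lower bound for illustration.

```python
# Minimal stoichiometric estimate for HF etching of Ti3AlC2 (reactions 1.8 + 1.10).
# Standard atomic masses in g/mol; "5 mol HF per mol MAX" assumes full F termination.
M_TI, M_AL, M_C, M_H, M_F = 47.867, 26.982, 12.011, 1.008, 18.998

m_max = 3 * M_TI + M_AL + 2 * M_C          # molar mass of Ti3AlC2
m_hf = M_H + M_F                           # molar mass of HF
hf_per_mol = 3 + 2                         # Eq. 1.8 uses 3 HF, Eq. 1.10 another 2 HF

grams_hf_per_gram_max = hf_per_mol * m_hf / m_max
print(f"Minimum HF consumption: {grams_hf_per_gram_max:.2f} g HF per g Ti3AlC2")
```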
Since smaller particle sizes can effectively reduce the required etching duration and/or HF concentration, the as-synthesized MAX phase powders are usually subjected to attrition or ball milling and/or sieving prior to chemical exfoliation.
Etching with fluoride solution
Milder fluoride-based etching routes have also been reported; in particular, ammonium bifluoride (NH 4 HF 2 ) proved to be a milder etchant [START_REF] Feng | Fabrication and thermal stability of NH 4 HF 2 -etched Ti 3 C 2 MXene[END_REF].
Delamination and intercalation
In principle, as the strong M-A bonds are replaced by weaker bonds, intercalation and delamination of multi-layered stacked MXenes into single or few layers is possible, which is essential for exploring its 2D nature.
Multilayered MXenes have relatively stronger interlayer interactions than graphite or TMDs, so simple mechanical exfoliation provides a low yield of single layers.
There are only two reports of Scotch tape exfoliation of multilayer MXene into single flakes [START_REF] Lai | Surface group modification and carrier transport properties of layered transition metal carbides (Ti 2 CT x , T: -OH, -F and -O)[END_REF][START_REF] Xu | MXene Electrode for the Integration of WSe 2 and MoS 2 Field Effect Transistors[END_REF].
The rest of the reports are mainly focused on the chemical intercalants which have been used successfully in other 2D materials. Figure 1.12 shows the schematic diagram of intercalation mechanism [START_REF] Mashtalir | Intercalation and delamination of layered carbides and carbonitrides[END_REF]. To date, intercalation of Ti 3 C 2 T x with a variety of organic molecules, such as hydrazine, urea and dimethyl sulphoxide (DMSO) was reported. Hydrazine can also intercalate Ti 3 CN and (Ti,Nb) 2 C. Intercalation of Ti 3 C 2 T x and Mo 2 TiC 2 T x with DMSO followed by sonication in water led to a colloidal solution of single-and few-layer MXenes [START_REF] Mashtalir | Intercalation and delamination of layered carbides and carbonitrides[END_REF][START_REF] Mashtalir | Amine-Assisted Delamination of Nb 2 C MXene for Li-Ion Energy Storage Devices[END_REF][START_REF] Mashtalir | The effect of hydrazine intercalation on the structure and capacitance of 2D titanium carbide (MXene)[END_REF]. By the use of isopropylamine or large organic base molecules such as tetrabutylammonium hydroxide (TBAOH), choline hydroxide, or n-butylamine, V 2 CT x , Nb 2 CT x and Ti 3 CNT x were delaminated [START_REF] Naguib | Largescale delamination of multi-layers transition metal carbides and carbonitrides "MXenes[END_REF].
However, TBAOH does not delaminate Ti 3 C 2 T x , possibly due to its large size. Either DMSO or an alkali metal halide salt has been used for delaminating Ti 3 C 2 T x .
MXenes can also be intercalated with different metal cations by introducing aqueous solutions of ionic compounds. When etching with a fluoride salt and an acid, metal cations intercalate spontaneously between the layers. The possibility of intercalating MXenes with various organic molecules and metal cations goes beyond delaminating MXenes on a large scale. This phenomenon is of great significance for a range of MXene applications, from polymer reinforcements to energy storage systems.
Properties of MXenes
The main reason why the electronic properties of MXenes are of special interest is because they can be tuned by changing the MXene elemental composition and/or their surface terminations. The band structure and electron density of states (DOSs) of MXenes have been extensively studied by DFT, indicating that MXenes properties range from metallic to semiconducting. Bare MXene mono layers are predicted to be metallic, with a high charge carrier density near the Fermi level [START_REF] Khazaei | Novel Electronic and Magnetic Properties of Two-Dimensional Transition Metal Carbides and Nitrides[END_REF][START_REF] Xie | Prediction and Characterization of MXene Nanosheet Anodes for Non-Lithium-Ion Batteries[END_REF][START_REF] Tang | Are MXenes Promising Anode Materials for Li Ion Batteries? Computational Studies on Electronic Properties and Li Storage Capability of Ti 3 C 2 and Ti 3 C 2 X 2 (X = F, OH) Monolayer[END_REF]. Interestingly, the electron DOS near the Fermi level (N(E f )) for bare individual MXene layers is higher than in their parent MAX phases. Some MXenes with heavy transition metals (Cr, Mo,W) are predicted to be topological insulators [START_REF] Khazaei | Novel Electronic and Magnetic Properties of Two-Dimensional Transition Metal Carbides and Nitrides[END_REF][START_REF] Weng | Largegap two-dimensional topological insulator in oxygen functionalized MXene[END_REF] [START_REF] Anasori | Control of electronic properties of 2D carbides (MXenes) by manipulating their transition metal layers[END_REF], which means that it would be possible to tune the electronic structure of MXenes by varying termination group and transition metal.
Ferromagnetic and anti-ferromagnetic properties have been predicted for some pristine MXenes, whereas magnetism disappears with surface terminations in some cases [START_REF] Si | Half-Metallic Ferromagnetism and Surface Functionalization-Induced Metal Insulator Transition in Graphene-like Two-Dimensional Cr 2 C Crystals[END_REF]. Only two MXenes, Cr 2 CT x and Cr 2 NT x , have been predicted to remain magnetic even with surface terminations, yet their magnetic nature is not clear and no experimental confirmation has been reported so far. On the electrochemical side, volumetric capacitances in basic electrolytes have been demonstrated to be 300-400 F·cm^-3, which is higher than the best carbon-based EDLCs [START_REF] Lukatskaya | Cation Intercalation and High Volumetric Capacitance of Two-Dimensional Titanium Carbide[END_REF]. Also, no change in capacitance was reported after 10000 cycles for a Ti 3 C 2 T x electrode, indicating excellent cyclability [START_REF] Ghidiu | Conductive two-dimensional titanium carbide 'clay' with high volumetric capacitance[END_REF].
MXene-based composites have opened a new pathway in various energy storage systems, due to the possible synergistic effect in agglomeration prevention, facilitating electronic conductivity, improving electrochemical stability, enhancing pseudo-capacitance and minimizing the shortcomings of individual components.
Other application
Energy storage is the primary and most studied application for MXenes; yet, owing to their rich chemistries and diverse structures, there are potentially many other applications of MXenes, such as hydrogen storage media [START_REF] Hu | MXene: A New Family of Promising Hydrogen Storage Medium[END_REF], photocatalysis [START_REF] Ran | Ti 3 C 2 MXene co-catalyst on metal sulfide photo-absorbers for enhanced visible-light photocatalytic hydrogen production[END_REF][START_REF] Zhang | Ti 2 CO 2 MXene: a highly active and selective photocatalyst for CO 2 reduction[END_REF], biosensors [START_REF] Rakhi | Novel amperometric glucose biosensor based on MXene nanocomposite[END_REF][START_REF] Liu | A novel nitrite biosensor based on the direct electrochemistry of hemoglobin immobilized on MXene-Ti 3 C 2[END_REF] and sewage purification [START_REF] Manawi | Can carbon-based nanomaterials revolutionize membrane fabrication for water treatment and desalination?[END_REF][START_REF] Guo | Heavy-Metal Adsorption Behavior of Two-Dimensional Alkalization-Intercalated MXene by First-Principles Calculations[END_REF].
Summary
For most MXenes, theoretical predictions about their electrical, thermoelectric, magnetic and other properties still need to be verified experimentally, especially for single- and few-layer MXenes. Understanding and controlling the surface chemistry is of great importance for tailoring the material structure and properties. Despite the advancement of these 2D layered MXenes and their composites in Li-ion batteries (LIBs), supercapacitors, transparent conductors, environmental protection, electrocatalysis, etc., more work remains to be done, in particular aiming at understanding the exact mechanisms behind certain properties.
PhD work on MXenes
The second part of the PhD thesis is focused on the V 2 C MXenes derived from large V 2 AlC MAX phase crystals by using a combination of chemical modification and mechanical exfoliation.
The high-temperature growth reactor is first presented in this chapter.
Then the development of the growth process for various MAX phase single crystals is introduced. The single-crystalline characterization as well as the mechanical cleavage of the as-grown MAX phase single crystals are also discussed.
Single Crystal Growth
Growth setup
A modified Czochralski (Cz) puller, named "memere", has been adapted to MAX phase crystal growth. Figure 2.1 shows a photo of the puller. It consists of three main parts:
(1) The Heat unit. It is composed of an induction coil, a capacitor box and a power generator.
(2) The reactor includes the growth chamber, gas lines (only Ar 99.9% was used in this work), and recycling cooling water line. Rotation units are located at the top of the chamber, as well as an optical non contact infrared temperature sensor to focus on the crucible through a quartz window to measure the temperature.
(3) The computer based monitor which contains a control panel (not listed in the figure) to tailor the growth conditions by programming heating and cooling sequences.
It is also equipped with safety elements that raise an alarm in the absence of cooling water in the generator and chamber. In addition, a pressure meter is used to monitor the pressure in the chamber in order to avoid overheating.
It is worth mentioning that two quartz windows are implanted at the side of the chamber, among which, one is used for the temperature measurement of crucible and the other is for the observation when rotating the graphite into the solution.
The process can be described in three main steps: first, meltdown of the alloy (e.g. Cr and Al, or V and Al); second, dissolution of carbon in the molten solution; and third, crystallization of the MAX phase single crystal in the crucible.
The crucible kit
The crucible kit is composed of crucibles and insulators. The solution growth process is run at high temperature, therefore the crucible material must be thermally stable for long operation times at high temperature. The requisite characteristics for the crucible material are: resistance to high temperature and thermal shock, high electrical and thermal conductivity, and easy machinability. The most suitable material is therefore graphite, as it fulfils all the criteria listed above. Thus, in the previous growth experiments on Ti 3 SiC 2 [START_REF] Mercier | Morphological instabilities induced by foreign particles and Ehrlich-Schwoebel effect during the two-dimensional growth of crystalline Ti 3 SiC 2[END_REF], a graphite crucible could be used, the crucible itself then acting as the carbon source at high temperature.
For Cr 2 AlC, this is not a problem because the Cr-Al charge is fully molten before reaching 1400 °C. In the case of V 2 AlC, however, this point becomes critical, since a more refractory transition metal such as vanadium requires higher temperatures to be dissolved in the melt before the carbon can be introduced. This imposes the choice of another material for the crucible. In this thesis, for the growth of Al-containing MAX phases, an alumina crucible is therefore used instead of a graphite one, even though the highest temperature it can withstand is 1700-1800 °C. The alumina crucible is placed inside an outer graphite crucible, and the electrical resistance of the graphite material leads to its Joule heating under the induction field. On top of the outer graphite crucible, several pieces of a titanium-zirconium alloy are placed between the graphite lip and the top insulation as oxygen-absorbing material: they prevent heterogeneous nucleation in the solvent caused by floating oxide particles generated from the native oxide layers on the metals. Zr is known to be a very effective de-oxidizing metal, while Ti exhibits a wide range of solubility for both oxygen and nitrogen. The temperature limit of the Ti-Zr alloy is around 1520 °C; hence, even though it is placed far from the heating zone, it starts to melt when the growth temperature reaches 1800 °C.
Reactor geometry
Growth procedure
Growth is achieved by maintaining the solution for several hours at high temperature, under a partial Ar pressure p Ar = 1.5 bar. The different stages of growth are summarized in Figure 2.4 for the high-temperature solution growth and slow-cooling method. In general, the crucible is first heated manually up to a temperature T 1 (higher than 1000 °C) that can be detected by the pyrometer. The period from T 1 to T 2 corresponds to the melting of the alloy; T 2 depends on the binary phase diagram (Cr-Al, V-Al, etc.). Here, we set T 2 to 1600 °C for the V-C system and 1650 °C for the Cr-C system. The time period from t 1 to t 2 varies from 40 min to 1 h depending on the quantities of raw materials. Once the crucible is heated to T 2 , a graphite rod is dipped into the melt with continuous rotation to dissolve carbon into it. The amount of carbon dissolved should be precisely calculated based on the ternary phase diagram. Once all the carbon is dissolved in the melt, the graphite rod is pulled out.
Once the high-temperature melting of all raw materials is completed, the next critical step is the slow cooling. The fewer the nuclei, the better. Here, a pre-cooling step (t 3 -t 4 ) was introduced to reduce the number of nuclei.
Besides, since no seed crystal is introduced, spontaneous nucleation in the flux is the key factor. A way to reduce nucleation and increase the crystal size is to start at a temperature at which the flux can be completely converted into a liquid phase, and then to decrease the temperature slowly, so that the first nucleated crystals reduce subsequent nucleation by Ostwald ripening. This is essentially the strategy followed for the growth. A fairly slow cooling is then started, which can last several hours or days (under proper monitoring conditions). In the end, when the system has cooled down to an acceptable temperature, the solidified sample can be taken out of the chamber. It is worth mentioning that the chamber pressure should be monitored strictly during the carbon dissolution step, which takes place at the highest temperature, because the vapour of the low-melting-point metal condenses on the observation window, preventing the pyrometer from working in proper conditions. If the indicated temperature is much lower than the real one, the feedback power keeps increasing, leading to a destructive result. To collect the crystals from the flux "cupcake" (see Figure 2.5(a)), dilute HCl is applied. Alternatively, after a few weeks in air, the solidified flux hydrolyses and turns into powder, as can be seen in Figure 2.5(b,c).
Cr 2 AlC crystal growth
V 2 AlC crystal growth
As for the Ti-Si-C and Cr-Al-C systems, the V-Al-C system has also been evaluated in detail, so that the isothermal sections as well as the liquidus surface have been calculated and/or measured with a reasonable accuracy. In any case, one has to find a temperature and a flux composition for which the desired MAX phase is in equilibrium with a liquid phase. Figure 2.9(a) shows the V-Al-C ternary phase diagram. For the V-Al-C system, this can be achieved, for example, at V mole fractions below 0.2 and temperatures above 1500 °C. As done for the Cr-Al-C system, and using the V-Al binary phase diagram (see Figure 2.9(b)), we tried several growth processes with vanadium atomic ratios from 10% to 20%; the optimized value is around 15%, which was kept for all the following experiments.
In the case of V 2 AlC, we typically maintain the melt at 1650 °C for 1 h, and then introduce and dissolve the carbon. In contrast to the case of Cr 2 AlC, the carbon solubility is not high enough. We tried to increase the temperature up to 1700-1750 °C, which did not succeed because of the thermal limit of the crucible. We also tried to preheat the crucible before growth and to maintain the flux at high temperature for a long time (e.g. 90 min) in order to increase the carbon solubility. Depending on the size of the crucible, if we use 8 g of V and 24 g of Al, the amount of dissolved carbon is around 0.44 g.
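As a rough consistency check, the atomic fractions corresponding to the masses quoted above (8 g V, 24 g Al, about 0.44 g C) can be recomputed from standard molar masses; this is a minimal sketch, not part of the growth procedure itself.

# Sketch: atomic fractions of the V-Al-C charge quoted above.
# Masses in g (from the text), molar masses in g/mol (standard values).
masses = {"V": 8.0, "Al": 24.0, "C": 0.44}
molar_mass = {"V": 50.94, "Al": 26.98, "C": 12.01}

moles = {el: m / molar_mass[el] for el, m in masses.items()}
total = sum(moles.values())
for el in ("V", "Al", "C"):
    print(f"x_{el} = {moles[el] / total:.3f}")

# V fraction of the metallic flux (ignoring carbon), to be compared with
# the targeted ~15 at.% mentioned above.
metal_total = moles["V"] + moles["Al"]
print(f"V fraction of the metal flux = {moles['V'] / metal_total:.3f}")

Running this gives a vanadium fraction of the metallic flux close to 0.15, consistent with the optimized composition stated above.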
After a slow cooling stage of 7 h from 1650 °C to 1100 °C, a solidified flux was obtained. Unlike Cr 2 AlC, the solidified flux did not turn into powder after a few weeks, so that in order to extract the crystals from the flux, we had to dip it in HCl for a few hours. Since the alumina crucible is not as easily machinable as the graphite one, the whole crucible was put into concentrated HCl. From Figure 2.10(a), showing the crucible and the crystals inside, we can see that the number of V 2 AlC crystals is not as high as for Cr 2 AlC, while their size and thickness are also smaller than those of the Cr 2 AlC crystals.
Other MAX phase crystal growth
The solution growth is clearly limited by the carbon solubility in the flux. This makes it, for instance, very difficult to grow phases such as Ti 2 SnC by this technique, because the carbon solubility is so low that it cannot even be measured in a reliable way. The most favourable case is that of Cr-Al-C, because a very high carbon solubility can be achieved and, better still, at a very acceptable temperature. For a given crucible size, the final platelet area mainly depends on the carbon solubility: in our experimental set-up, some 10 -4 cm 2 for Ti 2 SnC, some 0.25 cm 2 for Ti 3 SiC 2 , about 1 cm 2 for V 2 AlC and several cm 2 for Cr 2 AlC, as can be seen in Figure 2.11.
Structural characterization
Characterization details
Crystal identification was first achieved by X-ray diffraction, using a Siemens-Bruker D5000 diffractometer with a copper anode; diffraction of the Cu K α1 and K α2 lines was measured. The data were collected in the 2θ range 10°-115°, with a step size of 0.05° and a counting time of 2 s per step. The pole figures were recorded using a setup comprising a four-circle goniometer (Schultz geometry), an incident beam of 1 mm diameter, a nickel filter for attenuating the Cu K β radiation and a point scintillation detector. The illuminated sample surface consisted of a 4 mm × 3 mm ellipse. The penetration depth corresponding to 99% of the diffracted beam was about 8-10 µm. For each pole figure, complete φ-scans (-180° up to 180°) were performed at different χ values (0° up to 90°). Our goals, by using XRD analysis, are the identification of the phases present in the as-grown sample and the determination of the crystal quality.
Micro-Raman measurements were performed at room temperature on different crystals using a He-Ne laser (λ = 632.8 nm) as the exciting line of a Jobin Yvon/Horiba LabRam spectrometer equipped with a liquid-nitrogen-cooled CCD detector. Experiments were carried out with a laser power below 1 mW, focused to a spot of about 1 µm 2 under the microscope in order to avoid sample heating.
The chemical composition of the crystals was analysed by Energy Dispersive X-ray Spectroscopy (EDS) using a BRUKER Silicon Drift Detector (SDD) mounted on a Quanta 250 FEI Field Emission Gun (FEG) Scanning Electron Microscope (SEM) operated at 15 keV. Observation of the surface is achieved with an optical microscope. The rocking-curve full width at half maximum (FWHM) measured on the (2023) plane is around 0.26°, i.e. of the order of 0.1°. In comparison with standard industrial semiconductor materials, with FWHM values of 0.05°-0.1°, our single crystals are clearly not of such high quality. We attribute this to the fact that none of the platelets is perfectly flat and to the presence of defects, so that this value can be partly explained in terms of the relaxation of long-range residual strain in samples with a very high aspect ratio. Our data indicate that the platelets are single crystals at least over an area of several mm 2 and a depth of the order of 10 µm.
Cr 2 AlC crystal characterization
Raman spectrum
Fast identification can be achieved by Raman spectroscopy, since the spectrum is determined by the crystal structure. All samples exhibit similar spectra, and a typical Raman spectrum is reported in Figure 2.14(a). Among the four Raman-active vibration modes predicted by the factor group analysis of Cr 2 AlC (2E 2g + E 1g + A 1g , see Figure 2.14(b) [START_REF] Volker Presser | Erratum: First-order Raman scattering of the MAX phases: Ti 2 AlN, Ti 2 AlC 0.5 N 0.5 , Ti 2 AlC, (Ti 0.5 V 0.5 ) 2 AlC, V 2 AlC, Ti 3 AlC 2 , and Ti 3 GeC 2[END_REF]), two well-defined Raman peaks appear at 248 and 336 cm -1 , in agreement with values reported in the literature. Another, weaker mode, which appears as a shoulder at 238 cm -1 in the spectrum shown, can be identified with the mode reported at 237, 263 and 250 cm -1 in various works [START_REF] Wang | Raman active phonon modes and heat capacities of Ti 2 AlC and Cr 2 AlC ceramics: firstprinciples and experimental investigations[END_REF][START_REF] Spanier | Vibrational behavior of the M n+1 AX n phases from first-order Raman scattering (M=Ti,V,Cr,A=Si,X=C,N)[END_REF][START_REF] Su | Synthesis of Cr 2 AlC thin films by reactive magnetron sputtering. Fusion Engineering and Design[END_REF]. The orientation of our platelets (basal plane perpendicular to the c axis) prevents the observation of the E 1g modes. The two intense peaks are therefore assigned to E 2g and A 1g . The broad band close to 564 cm -1 can be attributed to a combination band of two modes of the first-order spectrum.
Surface morphology of as grown crystals
Crystal growth in the melt can be considered as a two-step process: nucleation and growth. Nucleation can be homogeneous, in the absence of foreign particles or crystals in the solution, or heterogeneous, in the presence of foreign particles in the solution.
Both types are known as primary nucleation. Secondary nucleation takes place when nucleation is induced by the presence of crystals of the same substance. No seed crystal is introduced in our growth process, so it is assumed that primary nucleation occurs spontaneously in the melt during cooling. The actual nucleation and growth mechanism of the Cr 2 AlC crystals is still unclear, as crystallization occurs in the whole volume of the liquid despite the applied temperature gradient. We assume some heterogeneous nucleation induced by foreign particles or drops [START_REF] Mercier | Morphological instabilities induced by foreign particles and Ehrlich-Schwoebel effect during the two-dimensional growth of crystalline Ti 3 SiC 2[END_REF].
Besides, growth and solidification of other compounds also take place during the cooling stage. Depending on the growth parameters (temperature, time and cooling rate), the Cr 2 AlC crystals can be forced to grow very close to one another, or in competition with other phases. As a consequence, the final growth stage involves the usual solidification phenomena, including but not limited to dendritic growth occurring at the surface of the crystals during the final solidification. The accumulation of solute and heat ahead of the interface can lead to circumstances in which the liquid in front of the solidification front is supercooled. The interface thus becomes unstable and, in appropriate circumstances, solidification gives rise to dendrites. A dendrite tends to branch because the interface instability applies at all points along its growth front; the branching gives it the tree-like character from which the term dendrite originates. The crystal surface may therefore exhibit a wide variety of morphologies at the microscopic scale, which are roughly self-organized.
The crystal surface structure can also provide evidence of the growth mechanism.
One of the most commonly used models is that provided by Kossel [100]. This model envisions the crystal surface as made of cubic units (see Figure 2.16) which form layers of atomic height limited by steps. These steps contain a number of kinks along their length. The area between steps is referred to as a terrace, and it may contain single adsorbed growth units, clusters, or vacancies. According to this model, growth units attached to the surface will form one bond, whereas those attached to the steps and kinks will form two and three bonds, respectively. Growth will then proceed by the attachment of growth units to kink sites in steps. Then, a new step will be formed by the nucleation of an island of monolayer height on the crystal surface. However, when the nucleation rate is faster than the time required for a step to cover the whole crystal surface, 2D nuclei will form all over the surface and on top of other nuclei. These nuclei will spread and coalesce, forming layers. Growth terraces are clearly observed on the surface of our as-grown crystals, and SEM analysis has been conducted in order to investigate the nature of the growth.
Figure 2.17 shows the typical terrace morphology of the as-grown surface, in particular the peninsulas grown on the terraces as features of 2D growth instabilities (highlighted with a red line in Figure 2.17(b)). As already observed for other materials, it can therefore be inferred that the Ehrlich-Schwoebel (ES) effect gives rise to these Bales-Zangwill instabilities. The mechanisms and models of these 2D growth instabilities have been discussed in detail in the previous work of our group on the growth of Ti 3 SiC 2 [START_REF] Mercier | Morphological instabilities induced by foreign particles and Ehrlich-Schwoebel effect during the two-dimensional growth of crystalline Ti 3 SiC 2[END_REF].
In such experiments, the presence of foreign particles induces the formation of elongated peninsulas or islands on the terraces, followed by a highly anisotropic growth parallel to the step edges. The instabilities often occur for larger terrace widths, in comparison with the stable and narrow terraces. We explain the observed broadening as follows: initially, the crystals are supposed to grow totally flat inside the solution, since they are not subject to any constraint as long as they do not meet each other or the opposite crucible wall. However, upon cooling and during the final solidification stage of the flux, they are submitted to a high stress and a subsequent deformation, enhanced by their small thickness and the ease with which they can be plastically bent. We expect such an induced curvature to affect mainly the peak width along χ, whereas a coalescence of crystal planes with the same orientation of the c-axis but rotated with respect to one another around this axis, or any rotational disorder or mosaicity in the basal plane, would rather lead to a broadening of the φ-scan peaks. This explanation is also made plausible by the fact that in the case of much thicker Cr 2 AlC crystals (around 400 µm thick), such a wide broadening along χ is not observed.
Raman spectrum
Raman measurements performed on different flat and large crystals led to the typical spectrum shown in Figure 2.19 (top spectrum). All four Raman-active optical modes predicted by the factor group analysis of V 2 AlC are observed at 158, 239, 259 and 362 cm -1 in the backscattering configuration used, with the laser beam propagating along the c axis, in good agreement with the spectra reported by Spanier et al. [START_REF] Spanier | Vibrational behavior of the M n+1 AX n phases from first-order Raman scattering (M=Ti,V,Cr,A=Si,X=C,N)[END_REF] and Presser et al. [START_REF] Volker Presser | First-order Raman scattering of the MAX phases: Ti 2 AlN, Ti 2 AlC0.5N0.5[END_REF]. The similarity between the Raman spectra of Cr 2 AlC and V 2 AlC, both compounds belonging to the same 211 MAX phase family with the same crystal structure, and the V 2 AlC mode assignment reported in [START_REF] Spanier | Vibrational behavior of the M n+1 AX n phases from first-order Raman scattering (M=Ti,V,Cr,A=Si,X=C,N)[END_REF] allow us to assign the different modes to E 2g , E 1g , E 2g and A 1g , respectively.
In our spectra, only the mode E 1g at 239 cm -1 slightly differs in intensity and position from one crystal to another one. Theoretically, this mode should not be observed in this backscattering configuration according to the Raman selection rules. Its observation probably results from some polarization leakage due to a slight misorientation between the laser beam and the crystal c axis, its amplitude varying with the misorientation angle. Together with the E 2g modes, this mode is also observed in polarized spectra collected using crossed polarization configuration (Figure 2.20, bottom), which is not predicted by the selection rules. This confirms that the crystal is not strictly horizontal.
As expected for an A 1g symmetry mode, the mode at 362 cm -1 is not observed in this configuration.
The model of growth mediated by a screw dislocation was further refined by Burton, Cabrera, and Frank, giving rise to what is known as the BCF theory [START_REF] Burton | The Growth of Crystals and the Equilibrium Structure of their Surfaces[END_REF]. In our experiments, the unpredictable growth process can be attributed to the absence of a seed crystal and of a designed temperature gradient. Hence, several nucleation and growth mechanisms may act together. Investigation of the surface by atomic force microscopy shows that the underlying surface morphology is characteristic of a step-flow growth process, with well-defined steps and terraces (Figure 2.22(a)). On the flat parts, the step height is most often equal to c, the lattice base vector along the c-axis (c = 1.31 nm), and those regions are separated by highly bunched steps. Locally, the terrace width is generally regular. The physical origin of the step-flow process is still unknown, and we never found particular surface structures indicating mechanisms such as, e.g., spiral growth, nor a particular morphological pattern which might be responsible for the birth of the step and terrace structure.
It is worth noting that even if the HCl etching step used for dissolving the solidified flux does not appreciably modify the crystals, it nevertheless induces a partial etching of the step edges, which is clearly visible in Figure 2.22(a) and which is substantially aggravated in regions exhibiting a higher defect density (Figure 2.22(b)). In some unusual cases, the surface can even exhibit microscopic etch pits which might indicate the presence of threading dislocations, as observed for other materials with appropriate etchants [START_REF] Thi | Critical assessment of birefringence imaging of dislocations in 6H silicon carbide[END_REF]. Immersing the crystals in concentrated HF (40%) for 5 days does not appreciably change their macroscopic appearance, but modifies their surface. However, we did not obtain any delamination of the MX planes by removal of the A planes. This is most probably due to the fact that HF penetration is greatly favoured in a highly polycrystalline structure, in contrast to the case of our single crystals.
Surface morphology of platelets cleaved along the basal plane
An outstanding illustration of the MAX phase nano-lamellar structure and of the weakness of the A atomic bonds is given by the ease with which the platelets can be cleaved along the basal plane, using a process almost similar to producing graphene from graphite. Sticking a platelet onto a strong adhesive tape (gorillaTM), folding the tape onto the crystal and separating the folded parts again is enough for cleaving the platelets in the basal plane, a fact which would be clearly impossible to realize with materials characterized by a strong covalent bonding. As for graphite, the separation can occur not only because basal atomic planes are not tightly bound, but because the detached layer can be deformed without systematically breaking, so that all the "weak" detachment force is applied at the separation line, and not over the full surface (in the latter case, even bonding through van der Waals forces would be too strong to allow plane separation).
We already observed such a cleavage with Cr 2 AlC and Ti 3 SiC 2 , but it is in general not possible to obtain a separation over the full sample area for those two materials.
This can be ascribed to the fact that the Cr 2 AlC samples are rougher and thicker, so that the adhesion of the tape depends on the position and is not homogeneous. As a consequence, only some top regions of the Cr 2 AlC sample or regions devoid of surface dendrites sufficiently adhere to the tape to lead to a partial mechanical cleavage. V 2 AlC platelets are thinner and flatter, so that the adhesion is more homogeneous and so more efficient. In some cases, this allows us to obtain cleavage over the full crystal area, as exemplified by the photograph of Figure 2.23. The remaining part of this section is devoted to the AFM observation of the produced cleaved surfaces.
The surface usually exhibits terraces often ending with a triangular shape (Figure 2.23). Most terraces are separated by unit or half-unit steps, and sometimes by higher steps, but then still equal to an integer number of half-units. This strongly supports the assumption that local cleavage always occurs at the A atomic planes, since those planes appear twice in a unit cell. In some regions, only very few steps appear on areas as large as 50 µm × 50 µm, while in other regions the cleavage results in more disturbed patterns (Figure 2.23).
In some regions, half-unit sharp triangular tears, either terraces or grooves, all start from a single line, and basically exhibit two different kinds of shape, as shown in Figure 2.24, for which we chose an image including both kinds of patterns. The tears are either roughly symmetrical in the direction of the pulling or asymmetrical. The crack orientation is determined by the competition of several factors, since the tearing angle corresponds to the minimum energy variation and depends on a number of other parameters, such as the initial curvature, the film thickness and the width of the flap to be delaminated [START_REF] Kruglova | How Geometry Controls the Tearing of Adhesive Thin Films on Curved Surfaces[END_REF]. We attribute the origin of the asymmetrical shapes to the fact that, once a crack is initiated along a direction with a weaker number of bonds, it goes on in that direction. Focusing on the symmetrical patterns, such tears are usually observed during the pulling of a simple adhesive tape [START_REF] Kruglova | How Geometry Controls the Tearing of Adhesive Thin Films on Curved Surfaces[END_REF] or of a graphene sheet [START_REF] Sen | Tearing Graphene Sheets From Adhesive Substrates Produces Tapered Nanoribbons[END_REF], and we also observed them with Cr 2 AlC samples. More specifically, it has been shown that if the ratio between the flap width W (i.e. the basis of our triangles) and the flap thickness t is much larger than the ratio between the adhesive energy per unit surface τ and the energy of the edge to be cracked γ, then the tear angle θ behaves as sin θ ∝ (2Bτ) 1/2 /(2γt), where B is the bending modulus, so that it is independent of W [START_REF] Novoselov | Two-dimensional atomic crystals[END_REF].
Even if the atomic bonding between the A atoms and the M sublayer atoms is weaker than the M-M or X-X bonding coming into play for cracking the flap edge, the corresponding ratio is certainly much smaller than 2W/c, since we observed W values larger than 0.1 µm. This implies that locally the angle of the symmetrical tears should be essentially independent of the flap width.
Ti 2 SnC crystal characterization
Structural characterization was also conducted on the as-grown Ti 2 SnC single crystals, in order to allow a comparison with the as-etched crystals.
Conclusion
Using the high-temperature solution growth and slow-cooling technique, several MAX phase single crystals were successfully grown.
The solution growth is clearly limited by the carbon solubility in the flux. The most favourable case is that of Cr-Al-C, because a very high carbon solubility can be achieved and, better still, at a very acceptable temperature. For a given crucible size, the final platelet area mainly depends on the carbon solubility: some 10 -4 cm 2 for Ti 2 SnC, some 0.25 cm 2 for Ti 3 SiC 2 , about 1 cm 2 for V 2 AlC and several cm 2 for Cr 2 AlC.

Chapter 3
Transport properties of MAX phases
In this chapter, the in-plane and out-of-plane magneto-electronic transport properties of MAX phases are investigated experimentally and theoretically. For the in-plane transport, we first investigated the in-plane resistivity as a function of temperature and magnetic field, followed by Hall effect measurements. Thermal transport measurements were also conducted on MAX phase single-crystal samples. For the out-of-plane transport, the out-of-plane resistivity was measured and a substantial anisotropy ratio was observed. Results on the out-of-plane magneto-electronic transport are also discussed. Theoretically, a general model is proposed for describing the weak-field magneto-transport properties of nearly free electrons in two-dimensional hexagonal metals. It was then modified to be applicable to the transport properties of layered MAX phases.
Experiment details
Measurement configuration
In-plane transport
Four probes method
The four-probe method was adopted to eliminate the contact resistance for these low-resistivity materials [START_REF] Valdes | Resistivity Measurements on Germanium for Transistors[END_REF]. Figure 3.1(a) is a schematic of the four-probe method.
ρ = R S / l = R w t / l = 1/σ    (3.1)
where ρ is the resistivity, R is the resistance obtained from the measurement (R = V/I), S is the cross-section area perpendicular to the current direction, l is the distance between the two side contacts, w is the sample width and t is the sample thickness.
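A minimal sketch of Eq. (3.1) is given below; the numerical values are illustrative placeholders, not measured data.

# Sketch of Eq. (3.1): resistivity from a four-probe measurement.
def four_probe_resistivity(V, I, w, t, l):
    """rho = R * (w * t) / l, with R = V / I, all quantities in SI units."""
    R = V / I                # measured resistance, ohm
    S = w * t                # cross-section perpendicular to the current, m^2
    return R * S / l         # resistivity, ohm.m

# Placeholder inputs: 2 uV measured at 1 mA on a 2 mm x 50 um bar, probes 4 mm apart.
rho = four_probe_resistivity(V=2.0e-6, I=1.0e-3, w=2.0e-3, t=50e-6, l=4.0e-3)
print(f"rho = {rho:.3e} ohm.m")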
Van der Pauw method
For the van der Pauw geometry, the sheet resistance R S is obtained from the two measured four-terminal resistances R A and R B through

exp(-π R A /R S ) + exp(-π R B /R S ) = 1    (3.2)
Thus, the resistivity can be calculated as ρ = R S × t, where t is the sample thickness. We note that, for practical purposes, the numerical integral giving the correction factor f can be replaced by an analytical approximation, which is plotted in Figure 3.3. The corrected resistivity is then calculated as
Hall bar bridge
ρ c (corrected) = R × (L d / t) × (1 - f) -1     (3.3)
where R is the measured resistance, and L, d and t are the dimensions defined in Figure 3.2(a). Once ρ c is measured, the resistivity anisotropy ratio ρ c /ρ ab is obtained by using the in-plane resistivity measured on a van der Pauw sample or a Hall bar processed from a crystal issued from the same growth run, in order to minimize sample quality variations.
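The van der Pauw relation of Eq. (3.2) is transcendental and is conveniently solved numerically; the sketch below does so and then forms the anisotropy ratio. All numerical inputs (resistances, thickness, out-of-plane resistivity) are placeholders, not measured values.

# Sketch: solve Eq. (3.2) for the sheet resistance, then rho_ab = R_S * t
# and the anisotropy ratio rho_c / rho_ab.
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(R_A, R_B):
    f = lambda R_S: np.exp(-np.pi * R_A / R_S) + np.exp(-np.pi * R_B / R_S) - 1.0
    R_max = max(R_A, R_B)
    return brentq(f, 1e-6 * R_max, 1e3 * R_max)   # root bracketed on a wide interval

R_A, R_B = 1.2e-3, 0.9e-3        # ohm, the two van der Pauw configurations (placeholders)
t = 50e-6                        # m, sample thickness (placeholder)
rho_c = 1.5e-4                   # ohm.m, corrected out-of-plane value from Eq. (3.3) (placeholder)

R_S = sheet_resistance(R_A, R_B)
rho_ab = R_S * t
print(f"R_S = {R_S:.3e} ohm/sq, rho_ab = {rho_ab:.3e} ohm.m")
print(f"anisotropy ratio rho_c/rho_ab = {rho_c / rho_ab:.0f}")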
Sample fabrication
In-plane transport
Single crystals of Cr 2 AlC, V 2 AlC, Ti 3 SiC 2 and Ti 2 SnC were successfully grown.
Among them, the Ti 3 SiC 2 and Ti 2 SnC crystals remain quite small, so the data reported here are restricted to Cr 2 AlC and V 2 AlC crystals of macroscopic size. The crystal plane is always perpendicular to the c-axis. The typical areas of the crystals are in the range of a few cm 2 for Cr 2 AlC and around 1 cm 2 for V 2 AlC, with thicknesses t of order 200-300 µm for Cr 2 AlC and 30-50 µm for V 2 AlC. For all the geometries discussed above, the samples have to be prepared according to the following criteria:
1) the sample should have a flat surface with a uniform and homogeneous thickness; 2) all contacts should be located at the edge of the sample.
As demonstrated in the previous chapter, the as-grown surface of V 2 AlC is flat and its initial thickness does not vary more than a few percent over the crystal area while that of Cr 2 AlC crystals needed to be firstly polished so as to obtain a good thickness homogeneity. The samples are processed in two different ways: some are cut with a diamond wire saw so as to form parallelepiped-shaped samples with well-defined dimensions. Some others are defined by laser cut, which allows one to produce more complicated shapes, such as Hall bars with well-defined and aligned lateral arms.
Out-of-plane
For the out-of-plane transport, a metallic mask was machined by laser patterning.
(See Figure 3.5(a)). The sample was placed in the high-vacuum chamber of an e-gun Vacotec evaporation system available in the WINFAB cleanroom, Louvain-la-Neuve. We then performed a first metal deposition of a 5 nm-thick Ti/Cr layer, used as an adhesion layer, followed by a second metallization with a 500 nm Cu layer. Regarding the measured transport properties, the single-crystalline samples exhibit a lower resistivity than their polycrystalline counterparts, resulting from the absence of the grain boundaries which limit the mean free path at low temperature. Due to the imperfections of our as-grown crystals, it is highly probable that these values do not represent a lower intrinsic limit, and further improvement of the material quality should result in an additional drop of the resistivity.
MAX phase transport properties
In-plane resistivity
More specifically, the sample preparation method can also affect the resistivity substantially: the rectangular and van der Pauw samples obtained by diamond-saw cutting are considered to create fewer defects than those made by laser cutting, which is why the lowest resistivity is achieved on the van der Pauw samples in both cases. We also plot one "back and forth" curve for the cooling-down and warming-up procedure. By using Matthiessen's rule, we can extract the ideal resistivity (ρ i ), which is defined as
ρ i (T) = ρ(T) -ρ residual .
Herein, if we do not consider the form factor (that is, if all contacts are well connected at the edge of the sample and the current passes through all the planes of the layered structure), ρ i is an intrinsic characteristic which is supposed to depend only on the electron-phonon scattering mechanism. Figure 3.9 shows the ideal resistivity in both cases, indicating a noticeable uniformity for the Cr 2 AlC samples. As usually observed in MAX phases, ρ i varies linearly with temperature in the range 150-300 K, and the slope of the linear variation can be calculated. The slopes obtained for the V 2 AlC samples are clearly smaller than those of the Cr 2 AlC samples; such behaviour can be attributed to the differences between the two MAX phases in terms of 1) electron-phonon coupling, 2) electronic band structure and 3) charge carrier densities.
Out-of-plane resistivity
For Cr 2 AlC (upper figures), though the correction does not merge all curves, the maximum difference between the five samples at low T does not exceed a factor of 1.5. Most likely, this variation is attributed to the fact that the crystals are all but perfect, so that some variability between samples issued from different runs is to be expected. The anisotropy ratio is plotted as a function of temperature in order to get more insight into the resistivity anisotropy. It is very substantial, in the range of a few hundred. The anisotropy ratio increases as T decreases as long as phonon scattering prevails, which does not follow the same trend of the temperature coefficient as reported in the case of oriented-grown thin films and bulk Ti 2 AlC. This anisotropy is a combination of the anisotropy of the Fermi surface and that of the scattering mechanisms, but a full understanding of the temperature dependence of the anisotropy coefficient is still lacking.
Plotting the same data for a V 2 AlC sample shows that the anisotropy ratio is even higher, of a few thousand (Figure 3.10, bottom). This huge anisotropy ratio still needs to be verified on more samples; here, we found that the sample with the larger thickness has a lower anisotropy ratio, which may be due to the higher level of defects caused by growth stress in the case of the thinner sample. The anisotropies obtained for Cr 2 AlC and V 2 AlC are a spectacular illustration of the impact of the nano-lamellar structure of the MAX phases on electrical transport, and strongly support the assumption of a strong spatial confinement in the transition metal planes. The larger resistivity anisotropy found in V 2 AlC, compared with Cr 2 AlC, may come from a different topology and anisotropy of the Fermi surface itself, which could be described as more "tube-like" in one case than in the other.
In-plane transport
In this part, three MAX phases will be discussed: Cr 2 AlC, V 2 AlC and Ti 3 SiC 2 .
Magnetoresistance
Figure 3.11 shows the magnetoresistance (MR) variation with temperature for magnetic fields from 0 to 11 T. As in the case of the polycrystalline phases, we observe a magnetoresistance of a few per cent in all cases, higher for V 2 AlC and Ti 3 SiC 2 than for Cr 2 AlC. In general, the MR rapidly decreases with increasing temperature. In standard metals, the Lorentz force caused by an applied magnetic field changes the electron trajectories and gives rise to a positive MR which increases quadratically with the strength of the field.
A power function y = Ax B is used to fit the MR curves, and the fitting curves at low T are plotted in Figure 3.12. In the cases of V 2 AlC and Ti 3 SiC 2 , unlike Cr 2 AlC, the exponents of the power-law fits are around 1.4. For a conventional metal, the change in isothermal resistivity in an applied magnetic field (B) normally obeys a functional relation known as Kohler's law [START_REF] Luo | Kohler's rule and relaxation rates in high-Tc superconductors[END_REF]. Semiclassical transport theory based on the Boltzmann equation predicts Kohler's law to hold if there is a single type of charge carrier and the scattering time τ is the same at all points on the Fermi surface. According to Kohler's law, the MR at different temperatures can be scaled by the expression:
∆ρ/ρ 0 = f(Hτ) = F(B/ρ 0 )    (3.4)
where ρ 0 is the zero-field resistivity at a given temperature. This relation follows from the fact that the scattering rate 1/τ(T) ∝ ρ(T). In the low-field limit, most metals exhibit a quadratic dependence of the MR, so that ∆ρ/ρ 0 ∝ τ 2 B 2 . Hence, a plot of ∆ρ/ρ 0 versus B/ρ 0 is expected to collapse onto a single temperature-independent curve if the charge carrier density is constant, regardless of the topology and geometry of the Fermi surface.
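The construction of such a Kohler plot from field sweeps measured at several temperatures can be sketched as follows; the data below are synthetic placeholders built to obey Eq. (3.4) exactly, so the curves collapse by construction.

# Sketch: Kohler plot from rho(B) sweeps at several temperatures (Eq. 3.4).
import numpy as np
import matplotlib.pyplot as plt

B = np.linspace(0, 11, 50)                        # tesla
rho0 = {4: 4.0e-8, 50: 8.0e-8, 150: 2.0e-7}       # ohm.m, zero-field resistivities (placeholders)
mu_eff = 0.01                                     # m^2/Vs, toy effective mobility at 4 K

fig, ax = plt.subplots()
for T, r0 in rho0.items():
    MR = (mu_eff * (rho0[4] / r0) * B) ** 2       # toy MR scaling as (tau B)^2
    ax.plot(B / r0, MR, label=f"{T} K")           # Kohler axes: MR versus B/rho_0
ax.set_xlabel(r"$B/\rho_0$ (T/$\Omega\,$m)")
ax.set_ylabel(r"$\Delta\rho/\rho_0$")
ax.legend()
plt.show()
# If Kohler's rule holds, all curves collapse onto a single line in these axes;
# a violation shows up as temperature-dependent branches.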
In general, this rule is applicable to single-band metals with a temperature-independent charge carrier density. Thus, this condition is satisfied most easily if there is only a single temperature-dependent scattering time. Interestingly, this rule, although derived from semiclassical Boltzmann theory, was found to be well obeyed in a large number of metals, including metals with two types of carriers, the pseudogap phase of the underdoped cuprate superconductors [START_REF] Chan | In-Plane Magnetoresistance Obeys Kohler's Rule in the Pseudogap Phase of Cuprate Superconductors[END_REF] as well as some other Q1D metals [START_REF] Narduzzo | Possible Coexistence of Local Itinerancy and Global Localization in a Quasi-One-Dimensional Conductor[END_REF].
Violations are generally believed to result from a change of the charge carrier density with temperature or from the fact that the anisotropic carrier scattering rates do not have the same T scaling on different sections of the Fermi surface.
The Kohler plots of the three MAX phase samples are shown in Figure 3.14.
For Ti 3 SiC 2 , Kohler's scaling is reasonably well obeyed below 50 K, with small deviations from full scaling thereafter, which is consistent with reports on (000l)-oriented Ti 3 SiC 2 thin films [116]. The same holds for V 2 AlC. In contrast, for Cr 2 AlC, Kohler's rule is violated at all T. This difference in Kohler's scaling between Ti 3 SiC 2 and Cr 2 AlC is striking if we consider the very similar ρ(T) behaviour of both crystals. Such a violation of Kohler's rule suggests several possibilities: (i) the charge carrier density is not temperature-independent, the electronic structure varying with temperature due to the formation of density waves; (ii) there is more than one type of carrier and their mobilities have different temperature dependences, in which case a simple Boltzmann-type approach with its associated scattering-time approximation is not valid; (iii) the scattering times associated with the magnetoresistance are distinct and have different temperature dependences.
Hall coefficients
Hall measurements were also conducted on the three MAX phase single crystals. Figure 3.15 gives an example of the Hall resistivity of V 2 AlC, measured versus magnetic field at different temperatures. One can notice that the Hall resistivity varies linearly with the magnetic field, and this behaviour is temperature independent, indicating that the systems are in the weak-field limit. It is worth mentioning that within a sweep of the magnetic field, four values were measured and averaged, and this average value is used in all the discussion afterwards. For each sample, the sign of R H is always positive. For Cr 2 AlC and Ti 3 SiC 2 , these results are in qualitative agreement with the results found for the polycrystalline phases, but for V 2 AlC, negative R H values were always reported for polycrystals [3].
It is also worth noticing that though different samples can lead to different R H values, there seems to be no substantial variation of R H with T (see Figure 3.16). We ascribe the variability among samples to a change in their quality, which, according to our simplified 2D model, can lead to substantial variations of R H in most cases. The small value of R H makes it difficult to measure, and we notice that a very slight temperature variation during a magnetic field sweep (less than 1 K) may seriously affect the final value of R H . We suspect that some fluctuations, as well as those reported in the literature, might be due to this artifact.
Charge carrier density and mobility
Using the Cr 2 AlC temperature data, we have extracted the carrier densities, mobilities and α using the conventional two-band model (as plotted in Figure 3.17). We obtain a decrease in carrier density with T. This is a quite anomalous behaviour for isotropic bands, which can be explained in terms of a variation of the ratio between τ n and τ p in the frame of the 2D model proposed and discussed in the next section. Also, as predicted by this 2D model (and as reported in the case of polycrystalline phases), we find apparent densities in the range of 10 27 m -3 . The apparent mobilities also correspond to what would be predicted from typical values of the relaxation time in metals, and lie in the range from 120 to 50 cm 2 /Vs when T is varied from 4 K to 200 K.
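For reference, a minimal sketch of the compensated two-band extraction (n = p) is given below. It uses the standard weak-field relations mu_p - mu_n = R_H/rho, mu_n*mu_p = alpha (with alpha the low-field MR coefficient) and n = 1/(e*rho*(mu_n + mu_p)); the input numbers are placeholders with plausible orders of magnitude, not the measured Cr 2 AlC values.

# Sketch: compensated two-band extraction (n = p) from rho, R_H and alpha.
import numpy as np

e = 1.602e-19
rho = 2.0e-7          # ohm.m (placeholder)
R_H = 3.0e-11         # m^3/C (placeholder)
alpha = 1.0e-4        # T^-2, i.e. about 1% MR at 10 T (placeholder)

diff = R_H / rho                          # mu_p - mu_n
s = np.sqrt(diff**2 + 4.0 * alpha)        # mu_p + mu_n
mu_p, mu_n = (s + diff) / 2.0, (s - diff) / 2.0
n = 1.0 / (e * rho * s)

print(f"mu_n = {mu_n*1e4:.1f} cm^2/Vs, mu_p = {mu_p*1e4:.1f} cm^2/Vs")
print(f"n = p = {n:.2e} m^-3")
# With these placeholder inputs, the mobilities come out near 100 cm^2/Vs and
# n = p in the 10^27 m^-3 range, i.e. the orders of magnitude quoted above.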
Thermal transport measurement of Cr 2 AlC
Thermal transport measurements were conducted on the as-grown Cr 2 AlC crystals by my previous group at the Shanghai Institute of Ceramics, China. The Seebeck coefficient from room temperature to high temperature was measured using an Ulvac ZEM-3, while the low-temperature Seebeck coefficient and thermal conductivity were measured on a Physical Property Measurement System (PPMS, Quantum Design, Inc.). Unlike the electrical conductivity, which is higher for the single-crystalline samples than for their polycrystalline counterparts, no significant improvement of the thermal conductivity below room temperature was observed compared to the results reported for polycrystalline samples [3]. Among measurements taken on several samples, the average thermal conductivity at RT is around 20 W/(K·m) (see Figure 3.18). Moreover, the electronic contribution to the thermal conductivity can be inferred from the Wiedemann-Franz relation κ = σLT, where L = 2.45×10 -8 W·Ω·K -2 is the theoretical value for the contribution to thermal conduction from the charge carriers only. The electronic contribution to the thermal conductivity is plotted in Figure 3.19(a).
Note that experiments have shown that the value of L, while roughly constant, is not exactly the same for all materials. It is established that the Wiedemann-Franz relation is generally valid at high temperatures and at low (i.e. a few kelvin) temperatures, but may not hold at intermediate temperatures [START_REF] Rosenberg | The Solid State[END_REF].
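The Wiedemann-Franz estimate itself is a one-line calculation; the sketch below uses an assumed room-temperature resistivity (placeholder, not the measured value) to illustrate how the electronic and phonon shares of the measured ~20 W/(K·m) would be separated.

# Sketch: electronic thermal conductivity from Wiedemann-Franz, kappa_e = L*T/rho.
L0 = 2.45e-8          # W.Ohm/K^2, Sommerfeld value of the Lorenz number
T = 300.0             # K
rho = 7.0e-7          # ohm.m, assumed room-temperature resistivity (placeholder)
kappa_total = 20.0    # W/(K.m), measured average quoted above

kappa_e = L0 * T / rho
print(f"kappa_e  ~ {kappa_e:.1f} W/(K.m)")
print(f"kappa_ph ~ {kappa_total - kappa_e:.1f} W/(K.m)  (phonon share, by difference)")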
To investigate the phonon contribution to the thermal conductivity, in the light of the calculated and measured values, we should consider the Debye temperature T D of Cr 2 AlC, which is 735 K [START_REF] Jia | Ab initio calculations for properties of Ti 2 AlN and Cr 2 AlC[END_REF]. Our measurements are only conducted below RT; therefore, the following discussion concerns the low-temperature limit (T ≪ T D ). In this regime, the thermal conductivity of a perfect infinite crystal is finite at low temperatures only because of Umklapp processes (U-processes). For a U-process, at least one of the initial phonons must have an energy comparable to ħω D . At T ≪ T D , the number of such phonons is
n = 1/(e βħω - 1) ≅ 1/(e T D /T - 1) ≈ e -T D /T
As T decreases, the number of phonons that can take part in U-processes falls exponentially. The thermal conductivity is inversely proportional to the number of U-processes, so the effective relaxation time for thermal scattering goes as τ ∼ e T D /T . Thus, as T decreases, κ increases until the mean free path l becomes limited by scattering from the imperfections and boundaries of the crystal. Below this temperature, l becomes T-independent and κ is determined solely by the temperature dependence of the specific heat C v . So, at very low temperatures, the thermal conductivity is determined by C v and should go as T 3 .
However, in Cr 2 AlC, as indicated in Figure 3.19(b, c), a deviation from the Debye T 3 law was observed at low T: the thermal conductivity increased linearly with increasing temperature. Also, as the temperature kept increasing, the thermal conductivity, after reaching a maximum, did not fall exponentially with temperature as e T D /T . At higher temperatures, we also observed a violation of the 1/T power-law decrease of κ with increasing T.
Regarding the Seebeck coefficient, shown in Figure 3.20, the curve does not display a monotonic sign over the whole temperature range, which is not in agreement with the Hall coefficient measurements on Cr 2 AlC. This is therefore another hint of a departure from the conventional isotropic two-band model. Compared to the polycrystalline sample, almost identical trends were obtained: at around 50 K, there is a knee point on both curves. The dominant carrier type of the single-crystalline sample changes from electron-like to hole-like in the temperature range 200-250 K, while for the polycrystalline sample this occurs between 250 K and 300 K. The values in both cases are strikingly comparable. It is worth noticing that, due to the high thermal conductivity and low Seebeck coefficient, it is practically difficult at low T to build a thermal gradient large enough to obtain a measurable thermopower, leading to a noisy signal at low T.

Out-of-plane transport

For the out-of-plane magnetoresistance, we consider the case in which the contribution from the in-plane resistivity component is non-negligible and can be extracted from the overall magnetoresistivity, as shown in Figure 3.23. The derived MR is between 5% and 8%, a reasonable value comparable to that obtained from our in-plane measurements. Hence, we use a parabolic fit to describe the in-plane contribution:

MR o = MR - (αB 2 - A)
The fitting parameter α and the intercept A are plotted as a function of temperature in Figure 3.24, where a systematic evolution of both α and A with increasing temperature can be noticed. However, the physical meaning of the intercept value, as the amplitude of the offset from the zero point, remains unknown.
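The parabolic extraction described above can be sketched as follows; the "measured" sweep is a synthetic placeholder, and the choice of fitting the parabola only above 5 T (where the residual part is assumed to have saturated) is an assumption of this sketch.

# Sketch: extract the residual out-of-plane MR, MR_o = MR - (alpha*B^2 - A).
import numpy as np

B = np.linspace(0, 11, 60)
MR = 6.0e-4 * B**2 - 0.01 - 0.03 / (1.0 + np.exp((B - 2.5) / 0.8))   # toy data

mask = B > 5.0                                     # assumed high-field window
coeffs = np.polyfit(B[mask]**2, MR[mask], 1)       # linear fit of MR vs B^2
alpha, A = coeffs[0], -coeffs[1]                   # slope = alpha, intercept = -A
MR_o = MR - (alpha * B**2 - A)                     # residual magnetoresistance
print(f"alpha = {alpha:.2e} T^-2, A = {A:.3f}")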
We used different fitting equations for the residual magnetoresistance (MR) demonstrated in Figure 3.25. Among them, it seems that a Boltzmann-type equation fits better than the others. Fixing the onset magnetic field at 2.5 T, the residual magnetoresistance reads

MR o = A 1 / (1 + exp((x - 2.5)/dx))
where the parameter A 1 is equal to the intercept A in the parabolic fit of the in-plane contribution. We now try to understand our results by referring to the literature in which analogous phenomena were reported in lamellar structures. Considering the stacking of two-dimensional layers in MAX phases, we first referred to the out-of-plane magnetotransport of two-dimensional electron gases in heterostructures [START_REF] Podor | Parabolic negative magnetoresistance in twodimensional electron gas in InGaAs/InP[END_REF] and graphene [START_REF] Jobst | Electron-Electron Interaction in the Magnetoresistance of Graphene[END_REF] in the temperature range from 20 mK to 4.2 K. In magnetic fields lower than about 1 T and before the onset of Shubnikov-de Haas (SdH) oscillations, a large negative magnetoresistance was observed, which followed a quadratic dependence on the magnetic field.
The observed negative magnetoresistance can be explained in terms of electron-electron interactions and weak localization in two dimensions. Yet, in our case, we did not observe a logarithmic dependence of the out-of-plane resistivity on temperature, which excludes weak localization.
The data discussed here are somewhat analogous to the anisotropic magnetoresistance in the high-T c superconductors [START_REF] Xing | Out-of-plane transport mechanism in the high-T c oxide Y-Ba-Cu-O[END_REF][START_REF] Yoo | Out-of-plane transport properties of Bi 2 Sr 2 CaCu 2 O 8 single crystals in normal and mixed states[END_REF][START_REF] Hussey | Out-of-plane magnetoresistance of La 2-x Sr x CuO 4 : Evidence for intraplanar scattering in the c-axis transport[END_REF], which also exhibit strikingly large anisotropic electronic properties, as our MAX phase samples do. Interlayer tunnelling or weak coupling has been widely proposed for the c-axis conduction, and the presence of Josephson coupling between layers has been experimentally demonstrated in Bi-based superconductor single crystals.
In our experiment, B is in principle parallel to I, so that the macroscopic Lorentz force is zero and Lorentz-force-driven charge motion is not expected. However, if the carrier trajectories are not straight along the c-axis, and if there is some misalignment between B and I, an out-of-plane magnetoresistance may be produced even in the absence of a macroscopic Lorentz force.
It is worth mentioning another point related to defect scattering: as demonstrated in [START_REF] Crommie | Thermal-conductivity anisotropy of single-crystal Bi 2 Sr 2 CaCu 2 O 8[END_REF], the electrical conductivity anisotropy of Bi-based superconductor materials is of order 10 4 and is strongly temperature dependent, which is also the case for the V 2 AlC sample, and defect scattering plays an important role in the out-of-plane transport mechanism. At this stage, these hypotheses still need to be further investigated with more experimental results.
V 2 AlC
Similar phenomena were detected in the V 2 AlC sample as well. Here we simply list the measurement results (Figures 3.26-3.28) for reference. The anisotropy ratio of the V 2 AlC sample is around 10 3 . Unlike Cr 2 AlC, the onset magnetic field of the negative MR for V 2 AlC is lower, at 1.8 T. Besides, the negative MR can still be observed up to 50 K.
After extraction, we clearly find that the MR value for V 2 AlC is much larger than that of the Cr 2 AlC sample, which might imply higher mobilities leading to a lower resistivity.
Theoretical calculation of MAX phase transport properties
Introduction of a 2D hexagonal metal with nearly free electrons
As emphasized by N. W. Ashcroft and N. D. Mermin many years ago, nearly-free-electron-like Fermi surfaces are essential in understanding the real Fermi surfaces of many metals [START_REF] Ashcroft | Solid State Physics[END_REF]. Here we intend to argue that MAX phases are probably no exception. Determining the Fermi line of nearly free electrons in a purely 2D hexagonal metal is elementary. The origin of the hole and electron bands can be seen by plotting several Brillouin zones (BZ) along with the Fermi circles corresponding to the filling of the free-electron states up to the Fermi energy in each zone. We focus on cases where the free-electron circle extension is larger than that of the corresponding BZ. Then, restricting oneself to the first hexagonal BZ, one obtains free-electron pockets at the corners of the hexagon and a hole band centred at the origin (see Figure 3.29). The dispersion curves are readily obtained by solving the following secular equation (Eq. 3.5):
| E 0 -E    U       U       U       U       U       U     |
| U       E 1 -E    U       0       0       0       U     |
| U        U      E 2 -E    U       0       0       0     |
| U        0       U      E 3 -E    U       0       0     |
| U        0       0       U      E 4 -E    U       0     |
| U        0       0       0       U      E 5 -E    U     |
| U        U       0       0       0       0      E 6 -E  |  = 0    (3.5)
where E is the unknown energy and the E i are the free-electron parabolic dispersion relations E i = ħ 2 (k - k i ) 2 /2m, the k i being the relevant reciprocal lattice vectors. An example of dispersion curves is given in Figure 3.30. For a large number of electrons per unit cell, the first band is filled with electrons. As in the case of the graphene tight-binding treatment, it is easy to demonstrate that for nearly free electrons the first band touches the second at the so-called Dirac points K. The second band is the hole band. It is separated from the third band (the upper electron band) by an energy gap close to 3U at the K points. Here we restrict our analysis to the case where the first band is totally filled (with two electrons per unit cell), and we define N as the number of electrons per unit cell populating all higher, partially filled bands. In order to compare the model to the case of the M 2 AX phases, or to compute 3D carrier densities from the 2D model, we assume that we have four 2D planes per unit cell (as a consequence, to get the number of electrons per plane in partially filled bands one must divide N by a factor of four).
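A numerical sketch of the secular problem of Eq. (3.5) is given below: the 7x7 Hamiltonian couples the plane wave k and the six plane waves k - G i (G i being the shortest reciprocal lattice vectors of the 2D hexagonal lattice) through a single Fourier component U of the potential. The lattice parameter, U and the use of the free-electron mass are illustrative assumptions, not fitted MAX phase values.

# Sketch: bands of the 2D hexagonal nearly-free-electron model, Eq. (3.5).
import numpy as np

hbar, m_e = 1.0546e-34, 9.109e-31
a = 3.0e-10                                   # m, in-plane lattice parameter (placeholder)
U = 0.5 * 1.602e-19                           # J, 0.5 eV coupling (placeholder)
g = 4.0 * np.pi / (np.sqrt(3.0) * a)          # length of the shortest G vectors

G = np.array([[g * np.cos(n * np.pi / 3), g * np.sin(n * np.pi / 3)] for n in range(6)])

def bands(k):
    """Eigen-energies (eV) of the 7x7 Hamiltonian at in-plane wavevector k."""
    basis = np.vstack(([0.0, 0.0], G))        # G = 0 plus the six shortest G's
    H = np.zeros((7, 7))
    for i in range(7):
        H[i, i] = hbar**2 * np.sum((k - basis[i])**2) / (2.0 * m_e)
        for j in range(i + 1, 7):
            # states are coupled by U only if they differ by a shortest G vector
            if abs(np.linalg.norm(basis[i] - basis[j]) - g) < 1e-3 * g:
                H[i, j] = H[j, i] = U
    return np.linalg.eigvalsh(H) / 1.602e-19

# Dispersion along Gamma -> K (corner of the hexagonal Brillouin zone)
K_point = np.array([4.0 * np.pi / (3.0 * a), 0.0])
for x in np.linspace(0.0, 1.0, 6):
    print(np.round(bands(x * K_point)[:3], 2))   # three lowest bands, in eV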
Taking into account the coupling between consecutive planes along the c-axis shows that each band is split in two, the dispersion of the split bands along c being of the form E = E 0 ± (β 2 + γ 2 + 2βγ cos(kc)) 1/2 . Combining the in-plane and out-of-plane energy contributions gives rise to Fermi surfaces such as that of Figure 3.32 (where we assumed that the splitting is different for the hole and electron pockets). Although not perfect, the similarity with DFT calculations is prominent, even if so simple a model can obviously not reproduce the smallest details of the Fermi surface. This approach also neglects an additional, quite small splitting, which dissociates each of the hole and electron Fermi surfaces into almost indistinguishable surfaces, and which can be attributed to the fact that two consecutive A planes are inequivalent.
In order to further simplify the calculation of the transport parameters, we move one last step further and replace the 3D structure by a fully 2D model, neglecting any energy variation along c (so that our model can only predict the values of the in-plane transport coefficients). This gives Fermi lines such as the one presented in Figure 3.33. For some phases, such as the 312 compounds, the tight-binding splitting is such that the electron bands are repelled to energies high enough for them to be totally unoccupied [START_REF] Chaput | Thermopower of the 312 MAX phases Ti 3 SiC 2 , Ti 3 GeC 2 , and Ti 3 AlC 2[END_REF].
Although only a restricted number of Fermi surface calculations are presently available in the literature, many support the nearly-free-electron explanation of the Fermi surface shape. Our purpose is therefore to use the 2D model as a reasonable approximation of the Fermi surface shape, because it allows one to simplify the calculation of the transport parameters to a considerable extent, and thus to discuss the physical aspects which govern the transport properties in a very convenient way. Obviously, this approximation restricts the analysis to in-plane transport.
Transport formalism
Here, we briefly summarize the principle used for calculating the transport coefficients. As usual, we start from Boltzmann's equation, where F is the force, including the electrical and Lorentz contributions, f 0 is the equilibrium distribution and τ is the relaxation time:
(F/ħ) · ∇ k (f 0 + ∆f) + ∆f/τ = 0    (3.6)
From this equation, the out-of-equilibrium part ∆ f of the distribution function is expressed as
∆f = -(1 - (eτ/ħ)(v × B) · ∂/∂k) -1 eτ v·E (∂f/∂E)    (3.7)
where v is the velocity, B the magnetic field and E the electric field. Expanding the denominator gives (Jones-Zener approximation) [START_REF] Ziman | Electrons and phonons[END_REF]

∆f = -(1 - (eτ/ħ)(B v y ∂/∂k x - B v x ∂/∂k y ) + second-order terms) eτ v·E (∂f/∂E)    (3.8)
The first term inside the parenthesis gives rise to the direct conductivity σ xx (without the magneto-resistance contribution), the second term (first order in B) gives rise to the transverse conductivity σ xy , and the third term (second-order term in B) is at the origin of the magneto-resistance. ∆ f is used to compute any current component by estimating
j = e ∫ D(k) v(k) ∆f d 3 k    (3.9)
where D(k) is the density of states. Applying the electric field E along x, computing j x gives σ xx and the magneto-resistance, while j y leads to R H . Besides, a considerable simplification is obtained for a 2D system, since any transport integral can then be put in the form of a circulation along the Fermi line. Grouping all terms and ∂f/∂E under the form of a function g(k), any transport integral can be put in the polar form
∫∫ g(k x , k y ) (∂f/∂E) dk x dk y ≅ ∮ g(k F ) dk F /(ħv F ) = ∮ g(k F ) (1/(ħv F )) √(k F 2 + (∂k F /∂θ) 2 ) dθ    (3.10)
Roughly speaking, this means that for interpreting magneto-transport we just have to examine what the holes and electrons do along the Fermi line as a function of time.
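A minimal numerical illustration of Eq. (3.10) is given below: for a circular Fermi line, the conductivity obtained from the circulation along the Fermi line must reduce to the Drude value n e^2 τ/m. The values of k_F and τ are placeholders, and the free-electron mass is assumed.

# Sketch of Eq. (3.10): sigma_xx from a circulation along a circular Fermi line,
# compared with the Drude result for the same 2D carrier density.
import numpy as np

e, hbar, m = 1.602e-19, 1.0546e-34, 9.109e-31
k_F, tau = 1.0e10, 1.0e-14                      # m^-1, s (illustrative)

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
v_F = hbar * k_F / m                            # isotropic velocity on the circle
v_x = v_F * np.cos(theta)
dk = k_F * (theta[1] - theta[0])                # line element along the circle

sigma_line = e**2 / (2.0 * np.pi**2 * hbar) * np.sum(tau * v_x**2 / v_F * dk)

n_2D = k_F**2 / (2.0 * np.pi)                   # spin-degenerate 2D carrier density
sigma_drude = n_2D * e**2 * tau / m
print(f"line integral: {sigma_line:.4e} S/sq, Drude: {sigma_drude:.4e} S/sq")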
From the calculations of the direct conductivities and transverse conductivities for each band of index i, one can obtain the overall resistivity and Hall coefficient values from summations of
ρ ab = Σ i σ i xx / [(Σ i σ i xx ) 2 + (Σ i σ i xy ) 2 ]    (3.11)

R H = Σ i σ i xy / [(Σ i σ i xx ) 2 + (Σ i σ i xy ) 2 ] × 1/B    (3.12)
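The band summation of Eqs. (3.11)-(3.12) is straightforward to implement; in the sketch below the per-band conductivities are arbitrary placeholders standing for a hole band and an electron band.

# Sketch of Eqs. (3.11)-(3.12): combining per-band conductivities.
import numpy as np

B = 1.0                                         # tesla
sigma_xx = np.array([4.0e6, 1.0e6])             # S/m, hole and electron bands (placeholders)
sigma_xy = np.array([+2.0e3, -1.5e3])           # S/m, opposite signs for holes and electrons

Sxx, Sxy = sigma_xx.sum(), sigma_xy.sum()
rho_ab = Sxx / (Sxx**2 + Sxy**2)
R_H = Sxy / (Sxx**2 + Sxy**2) / B

print(f"rho_ab = {rho_ab:.3e} ohm.m, R_H = {R_H:+.3e} m^3/C")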
Predicted results for a simple 2D hexagonal metal
Herein, a simple model is introduced, which does not directly mimic the MAX phase Fermi surface, so as to emphasize and isolate the physical phenomena at play. The lattice parameters are those of Ti 2 AlC; these phenomena are then shown to govern in the same way the transport properties associated with more complex Fermi lines (such as that of Figure 3.33).
As a first example, we fill our 2D system with just a few electrons per unit cell so as to obtain a system with a single type of carriers (Figure 3.35(1)). Still, the transport calculations give a substantial magneto-resistance while R H is quite small, equal to 1.178 × 10 -11 m 3 C -1 , as in Figure 3.35(1d). An isotropic one-band model would lead to an apparent hole density n = 1/(eR H ) = 5.30 × 10 29 m -3 , extremely far from the assumed value of 2.30 × 10 28 m -3 . In order to interpret the results for the Hall coefficient and magneto-resistance, we first recall that in a semi-classical approximation the wave vector changes with time according to ħ ∂k/∂t = e v × B when submitted to the Lorentz force, and holes with an energy E F cycle clockwise along the Fermi line until they are scattered to another part of it by a collision. If one focuses on a concave part, in contrast to this clockwise rotation, the hole velocity, which is perpendicular to the Fermi line, clearly rotates counter-clockwise with time; this means that in real space the holes are turning counter-clockwise, so that during a fraction of the time between two collisions they truly exhibit an electron-like behaviour (see Figure 3.35(1a)). It is only at the corners with a convex shape that their velocity rotates clockwise with time, so that they turn clockwise in real space, as free holes would do. This phenomenon is discussed in [START_REF] Banik | Hall coefficient of a holelike Fermi surface[END_REF][START_REF] Ong | Geometric interpretation of the weak-field Hall conductivity in twodimensional metals with arbitrary Fermi surface[END_REF], and it is quite understandable that this feature can dramatically affect the value of the Hall coefficient.
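The single-band apparent density quoted above follows directly from the computed Hall coefficient; the two-line check below reproduces the arithmetic.

# Quick check of the apparent density n = 1/(e R_H) quoted above.
e = 1.602e-19
R_H = 1.178e-11                        # m^3/C, value given by the model calculation
print(f"n_apparent = {1.0 / (e * R_H):.2e} m^-3")   # ~5.3e29, versus the true 2.3e28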
An elegant way to describe this is provided by Ong's theorem for 2D metals [START_REF] Ong | Geometric interpretation of the weak-field Hall conductivity in twodimensional metals with arbitrary Fermi surface[END_REF]. It states that the transverse conductivity σ xy is given by

σ xy 2D = (2e 3 /h 2 ) A l B    (3.13)
where A l is the algebraic area spanned by the mean free path as the wavevector cycles with time over one full orbit. If the mean free path λ = v k τ k is a constant (for pure impurity scattering), one obtains a circle, with the sign corresponding to the type of carriers [START_REF] Ong | Geometric interpretation of the weak-field Hall conductivity in twodimensional metals with arbitrary Fermi surface[END_REF]. However, λ may vary along the Fermi line (for instance if the relaxation time τ k is isotropic and the velocity is not constant along the Fermi line, see Figure 3.35(1b)(2b)).
For holes, the oriented area now exhibits outer flaps affected by a negative sign. If the flaps get bigger than the central part, the Hall coefficient can even reverse its sign, so that pure holes can be seen as electrons [START_REF] Ong | Geometric interpretation of the weak-field Hall conductivity in twodimensional metals with arbitrary Fermi surface[END_REF]. In other words, since the holes partially behave as electrons and lead to seemingly compensated transport, pure holes can obviously give rise to a magneto-resistance. This is the case for our hexagonal 2D model (See Figure 3.35(1d)).
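To make Ong's construction concrete, the following sketch evaluates Eq. (3.13) numerically for a hypothetical hexagonally warped Fermi line: it builds the mean-free-path curve along the orbit and computes the signed (algebraic) area it sweeps. The warping amplitude, mean free path and Fermi wavevector are illustrative assumptions, not parameters of the model used in this chapter, and the sketch does not track the traversal direction that fixes the carrier sign.

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34
k0, w = 8.0e9, 0.15            # assumed mean Fermi wavevector (1/m) and hexagonal warping
l0, B = 20e-9, 1.0             # assumed constant mean free path (m) and field (T)

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
r = k0 * (1.0 + w * np.cos(6.0 * theta))
kx, ky = r * np.cos(theta), r * np.sin(theta)

# Outward unit normal to the Fermi line (direction of the group velocity).
tx, ty = np.gradient(kx, theta), np.gradient(ky, theta)
norm = np.hypot(tx, ty)
nx, ny = ty / norm, -tx / norm

# Mean-free-path curve l(k) = l0 * n_hat (pure impurity scattering case, |l| constant).
lx, ly = l0 * nx, l0 * ny

# Signed area swept by l over one full orbit (shoelace formula in integral form).
A_l = 0.5 * np.trapz(lx * np.gradient(ly, theta) - ly * np.gradient(lx, theta), theta)

sigma_xy_2D = 2.0 * e**3 / h**2 * A_l * B     # Eq. (3.13), per 2D sheet
print(A_l, sigma_xy_2D)
```

Letting |l| vary along the orbit (isotropic τ but non-constant velocity) would deform this curve and produce the outer flaps discussed above.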
It is worth noticing that using the extraction procedure assuming the existence of two types of carriers with n = p (as described by the traditional two-band model in Chapter I), the parameters of Figure 3.35(1) would lead to almost equal mobilities (96.2 and 93.5 cm²/V·s, respectively) and a concentration n = p = 7.36 × 10²⁷ m⁻³. An intermediate conclusion can therefore be drawn, as already indicated in [START_REF] Banik | Hall coefficient of a holelike Fermi surface[END_REF]: the evidence of a small R_H value and a substantial magneto-resistance is by no means enough to prove the presence of two kinds of carriers. As in Figure 3.35(1), a system involving just one type of carrier can even mimic compensated transport with almost perfectly balanced electron and hole properties.
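For reference, here is a sketch of the conventional compensated two-band inversion invoked above (n = p assumed): given the zero-field resistivity, the weak-field Hall coefficient and the parabolic magneto-resistance coefficient, it returns the apparent common density and the two mobilities. It is the standard textbook inversion, shown only to make the procedure explicit; the input values are placeholders of a plausible order of magnitude.

```python
import numpy as np

e = 1.602176634e-19

def two_band_extraction(rho0, R_H, mr_coeff):
    """Compensated two-band inversion (n = p assumed).

    rho0     : zero-field resistivity (Ohm.m)
    R_H      : weak-field Hall coefficient (m^3/C)
    mr_coeff : parabolic MR coefficient alpha, with (rho(B)-rho0)/rho0 = alpha*B**2
    Model:  alpha = mu_n*mu_p   and   R_H = (mu_p - mu_n)/(e*n*(mu_p + mu_n)).
    """
    sigma0 = 1.0 / rho0
    diff = R_H * sigma0                        # mu_p - mu_n
    total = np.sqrt(diff**2 + 4.0 * mr_coeff)  # mu_p + mu_n
    mu_p = 0.5 * (total + diff)
    mu_n = 0.5 * (total - diff)
    n_app = sigma0 / (e * total)
    return n_app, mu_n, mu_p

# Placeholder inputs, not measured values.
print(two_band_extraction(rho0=2.5e-7, R_H=3.0e-11, mr_coeff=1.0e-4))
```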
Furthermore, with a higher N, we now have both hole and electron bands, and the considerations developed above apply to both kinds of carriers (See Figure 3.35(2)). Increasing the value of U makes the hole Fermi line smoother and tends to reduce the flap size. However, as long as the Fermi line appreciably deviates from a circle, the mean free path curve, even if it is devoid of flaps (as that of the electrons in Figure 3.35(2c)), also departs from a circle and leads to a substantial modification of R_H, as well as to a magneto-resistance.
The above discussion raises the following question: for such a simple system, can we rely on the electron and hole densities extracted in the frame of a conventional two-band model, and do they bear any similarity to the real values?
In order to answer it, it is instructive to plot the electron and hole densities as a function of N (Figure 3.36), as well as the n_app = p_app values which would be extracted using a two-band model. Figure 3.36 demonstrates that the extracted values are quite different from those of the hexagonal 2D nearly-free-electron model. The Hall coefficient is quite small and, depending on N, it can be either positive or negative. This example also shows that in spite of considerable variations of the real densities at the Fermi level, n_app can remain remarkably stable, whereas both the sign and the magnitude of R_H may vary.
Another message can be taken from
Predicted results for MAX phases
We simply use the lattice parameters of the MAX phases and select a set of energy parameters aimed at recovering the Fermi surface of a given MAX phase. Here we take the case of Ti 2 AlC as an example and draw general conclusions which can be applied to other MAX phases. The contributions of each band are calculated (split hole bands and split or single electron pockets) and summed. Figure 3.37 gives the results obtained for a Fermi line mimicking the full Fermi surface of Ti 2 AlC, obtained for equal electron and hole relaxation times τ_n = τ_p = 10⁻¹⁴ s, a common order of magnitude for metals [START_REF] Ashcroft | Solid State Physics[END_REF]. The valley degeneracies are g_n = 1 for the electron band and g_p = 2 for each hole band. The results indicate that, in spite of a minor contribution of the electrons to ρ_ab (as postulated in [START_REF] Mauchamp | Anisotropy of the resistivity and charge-carrier sign in nanolaminated Ti 2 AlC: Experiment and ab initio calculations[END_REF]), there is a substantial contribution of all carriers to R_H, and holes alone would lead to a considerably higher value. Any band, considered alone, exhibits a substantial magneto-resistance, with quite similar relative variations. But the measured magneto-resistance is due to holes, since they give the major contribution to the overall resistivity.
A two-band isotropic model would give n_app = p_app = 3.94 × 10²⁷ m⁻³ (keeping g_n = 1 and g_p = 2), and mobilities µ_p = 148 cm²/V·s and µ_n = 125 cm²/V·s, respectively (polycrystalline samples give values around 10²⁷ m⁻³ [3]). A one-band model would give only holes, with p_app = 4.60 × 10²⁸ m⁻³. The values given by this 2D nearly-free-electron model are p = 2.02 × 10²⁸ m⁻³ and n = 3.15 × 10²⁷ m⁻³.
This case makes it quite obvious that neither a conventional two-band model nor a one-band model can give reasonable estimates of the true densities.
The small value of R_H and the apparent compensation must not be attributed to the hole and electron densities compensating one another, since they are indeed quite different. Firstly, the velocities remain of the same order of magnitude over most of the Fermi line (close to the free-electron velocities). Secondly, with phonon scattering we can reasonably expect an almost isotropic scattering time. If τ_n and τ_p are similar, the ratio between the in-plane and c-axis resistivities remains substantial, in the range of a few hundred to a few thousand.
From the MR and Hall effect measurements, the in-plane transport behaviour of the MAX phases has been studied. As in the case of the polycrystalline phases, we observe a magneto-resistance of a few per cent; among the measured compounds, only the MR curve of Cr 2 AlC exhibits parabolic-like behaviour, as reported for polycrystalline MAX phases. One can notice that the Hall resistivity varies linearly with magnetic field and that this behaviour is temperature independent, indicating that the systems are in the weak-field limit. R_H is small, as previously noticed for polycrystalline samples. The extracted mobility is in the range from 50 to 120 cm²/V·s, the same order of magnitude as for polycrystalline samples.
Thermal transport measurements were also conducted on Cr 2 AlC samples. Just as the electronic conductivity is higher than that of the polycrystalline counterpart, a higher thermal conductivity is verified. The Seebeck coefficient displays a non-monotonic indication of the charge-carrier sign, and a similar behaviour was observed in the Hall coefficient measurement. This discrepancy can still be partly explained by a compensation between holes and electrons. Attempts at out-of-plane magnetoelectronic transport measurements were also performed. Though it is interesting to observe an anomalous magnetoresistance in the absence of a Lorentz force, the mechanism of field-induced transport along the c-axis is still unclear.
Theoretically, a general model was proposed for describing the weak field magnetotransport properties of nearly free electrons in two-dimensional hexagonal metals.
It was then modified to be applicable to the transport properties of layered MAX phases. We argue that the values of the in-plane Hall coefficient and of the in-plane parabolic magneto-resistance are due to the specific shape of the Fermi surface of almost two-dimensional hole and electron bands. While the contribution of the electron pockets to the in-plane resistivity is often predicted to be minor, both holes and electrons should contribute substantially to the overall value of the in-plane Hall coefficient.
The Fermi surface of MAX phases has never been experimentally probed. Yet, it is only the experimental verification of its shape for various MAX phases which can ultimately prove or invalidate some of our assumptions. The striking similarity between some polycrystalline and single-crystalline extracted data is yet to be explained.
MXenes are derived from MAX phases, a class of layered ternary carbides and/or nitrides introduced in the previous chapters. The motivations for extracting MXenes are multiple, as explained in the introduction; in particular, V 2 CT x showed the highest metal uptake among multilayer MXenes when tested as an electrode for lithium batteries [START_REF] Naguib | New Two-Dimensional Niobium and Vanadium Carbides as Promising Materials for Li-Ion Batteries[END_REF]. Herein, we selected V 2 CT x as a representative of the M 2 XT x MXenes.
V 2 CT x MXene Synthesis
Etching
Wet hydrofluoric acid (HF) treatment of MAX phases was the first efficient method to synthesize MXenes; it allows selective dissolution of the Al layers from the MAX phases [START_REF] Naguib | Two-Dimensional Nanocrystals Produced by Exfoliation of Ti 3 AlC 2[END_REF]. Our process to synthesize V 2 CT x flakes is illustrated in Figure 4.2. First, we found that, in the case of mm-size V 2 AlC single crystals, chemical dissolution of the Al layers was extremely slow. In order to hasten the etching procedure, as-grown V 2 AlC single crystals were scratched heavily with a sharp blade to create a defective surface, and then sonicated in a 45% HF solution at 80 °C for 12 h. Subsequently, a dark green supernatant with V 2 CT x precipitates was collected, confirming the etching of V 2 AlC. In the present experiment, it is worth noting that some attempts were made to improve the chemical etching process. First, the as-grown crystals were directly immersed into the HF without being broken into pieces, aiming at large-scale MXenes. The effect of the crystal size on the etching process will be discussed in the next section.
Moreover, dimethyl sulfoxide (DMSO), an effective intercalant for Ti 3 C 2 T x MXenes, as evident from the observed shift of the major XRD peaks towards lower angles in Ti 3 C 2 T x samples [START_REF] Mashtalir | Intercalation and delamination of layered carbides and carbonitrides[END_REF], was also introduced in our experiment, since other organic intercalants such as thiophene, ethanol, acetone, tetrahydrofuran, formaldehyde, chloroform, toluene and hexane were found to be unsuitable. The as-etched V 2 CT x MXene was added to DMSO and stirred. After 24 h of intercalation, the MXene was centrifuged and the supernatant poured out to remove most of the DMSO; the sample was then washed with DI water, followed by 4 h of sonication in ethanol. The effect of DMSO on our V 2 CT x will also be discussed in the next section. However, the prospect of intercalating DMSO is severely limited by its exclusive effectiveness towards delaminating Ti 3 C 2 T x , and by the difficulty of removing it thoroughly due to its high boiling point. Furthermore, the lamellar thickness can be undesirably increased by remnant DMSO molecules, which join the delaminated sheets together. To date, the exact mechanism explaining how and why DMSO interacts only with multi-layer Ti 3 C 2 T x remains unclear [START_REF] Mashtalir | The effect of hydrazine intercalation on the structure and capacitance of 2D titanium carbide (MXene)[END_REF].
Substrate preparation
In order to investigate the electronic properties of V 2 CT x MXene, the multilayer sheets should be isolated onto a substrate. Si/SiO 2 substrate is often used in the case of 2D crystals, as it offers the possibility to tune charge carrier densities according to the voltage applied on the backgate formed by the doped Si.
In this thesis, Si/SiO 2 substrates were used for all device fabrication. Three-inch-diameter doped silicon wafers (ρ ~ 0.001-0.005 Ω·m) with a thermally grown 90 nm- or 300 nm-thick oxide layer on both the polished front and the back surfaces were used, ordered from MTI Corporation.
In order to electrically contact the backgate, we first chemically etched the oxide on the wafer back side and then metallized a 300 nm-thick aluminum layer.
A grid of metallic marks [Ti(5 nm)/Au(60 nm)] with a combination of symbols and numbers was defined by optical lithography, metal deposition and lift-off on the front surface to locate the flakes in the following process steps. Then, the wafers were diced into 6 × 8 mm² pieces in order to fit the chips used for electrical measurements.
Mechanical exfoliation
Our original focus is to isolate single large flakes of V 2 CT x MXenes, that can be examined and measured, particularly for electrical, optical and thermal properties.
As for the other 2D inorganic graphene analogues discussed in Chapter 1.2.1, we will show that mechanical cleavage, as originally applied to peel graphene off graphite, allows few-layer MXene flakes to be mechanically isolated from as-etched V 2 CT x MXenes.
First, as-etched V 2 CT x MXene was placed on the sticky side of a medium-tack blue tape. This residue-free tape promised to leave little to no sticky residue on the substrate (See Figure 4.4(a)). Then, the tape was folded more than 10 times so as to peel off multilayer V 2 CT x sheets. We then proceeded with a transfer onto a fresh piece of tape. As is generally known, the interlayer bonds are stronger than in graphite, even after the etching process; in order to maximize the shear force, it is necessary to use fresh tape to continue the exfoliation process.
Then, the exfoliated V 2 CT x sheets stuck on the tape were directly transferred onto a piece of Si/SiO 2 substrate (See Figure 4.4(b)). The flakes can be identified by their optical microscopy contrast with SiO 2 , similarly to graphene. Meanwhile, the positions of the flakes can be located using the metallic mark system.
Material characterization
4.1.4.1 X-ray diffraction
X-ray diffraction is an analytical technique used to characterize the atomic structure of materials. In a θ-2θ configuration, θ corresponds to the angle between the (hkl) planes and the incident (and diffracted) X-ray beam. The detector is placed at an angle 2θ to the incident X-ray. The results are usually presented as intensity vs 2θ. According to Bragg's law:
n\lambda = 2 d_{hkl} \sin\theta \qquad (4.1)
the interplane atomic spacing d hkl can be calculated from the peak position. For MXene phase, the (002) plane is one of the most intense peaks observed, whose 2θ position is therefore used to determine the c-lattice parameter.
In the present experiments, X-ray diffraction on V 2 AlC before and after HF etching was performed on a Mar345 image plate using Mo Kα radiation (λ = 0.071074 nm) (Rigaku Ultra X18S rotating anode, Xenocs Fox 3D mirror). Data images were integrated with the Fit2D software. Crystals were picked up using grease. For the following discussion, the data were converted to the Cu Kα wavelength in order to compare the 2θ values with the literature.
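As an illustration of Eq. (4.1) and of the wavelength conversion just mentioned, the short script below converts a (002) peak position measured with Mo Kα into the interplanar spacing d_002, the c-lattice parameter (c = 2 d_002 for the (002) reflection), and the equivalent Cu Kα 2θ. The example peak position is an assumed value, not a measured one.

```python
import numpy as np

LAMBDA_MO = 0.071074   # nm, Mo K-alpha (value used in the experiment)
LAMBDA_CU = 0.154184   # nm, Cu K-alpha

def d_from_two_theta(two_theta_deg, wavelength_nm, n=1):
    """Interplanar spacing from Bragg's law n*lambda = 2*d*sin(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return n * wavelength_nm / (2.0 * np.sin(theta))

def convert_two_theta(two_theta_deg, lam_from, lam_to):
    """Convert a peak position to the 2-theta it would have at another wavelength."""
    d = d_from_two_theta(two_theta_deg, lam_from)
    return 2.0 * np.degrees(np.arcsin(lam_to / (2.0 * d)))

# Assumed (002) peak position measured with Mo K-alpha, for illustration only.
two_theta_mo = 3.4
d_002 = d_from_two_theta(two_theta_mo, LAMBDA_MO)
print("d_002 = %.4f nm, c = %.4f nm" % (d_002, 2.0 * d_002))
print("equivalent Cu K-alpha 2-theta = %.2f deg"
      % convert_two_theta(two_theta_mo, LAMBDA_MO, LAMBDA_CU))
```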
Micro Raman
The unpolarized Raman spectra were collected on a LabRAM HR800 spectrometer (Horiba Jobin-Yvon, France) equipped with an air-cooled CCD array detector in the backscattering configuration. An argon-ion laser (514.5 nm) was used as the excitation source, with an objective of numerical aperture 1.25; the laser spot was focused to 2 µm. Gratings with 1800, 2400 and 150 grooves/mm were used to set the spectral resolution.
SEM, TEM with EDS and AFM
The microstructure morphology and elemental composition of the V 2 CT x MXene sheets were characterized by a field-emission scanning electron microscope (JSM-7600F, JEOL, Japan) equipped with Energy Dispersive Spectroscopy (EDS). A TEM (LEO922), also equipped with an EDS detector, was used to observe the microstructure of the sample and investigate the element distribution. Flake thickness and surface topology were measured by AFM using a Bruker Multimode Nanoscope VIII in PeakForce or standard tapping mode.
Structural characterization of as-etched V 2 AlC single crystal
SEM and EDS
Work towards isolating large single V 2 CT x MXene flakes began with the idea that starting the process with large V 2 AlC single crystals would lead to the eventual separation of large flakes. As-grown V 2 AlC crystals were first employed as raw material for HF etching. As can be seen in Figure 4.5, showing the SEM image of V 2 AlC crystals after etching in 45% HF for two weeks, the as-produced samples were neither fully delaminated nor as large as desired. No typical accordion-like morphology of MXenes is observed, which, combined with the EDS results, indicates that the Al was not completely removed from the parent MAX phase. It is evident that delamination has not occurred. Yet, comparing the EDX results before and after etching, it is undeniable that the etching process acted on the V 2 AlC single crystal, though the reaction kinetics are limited by the etchant concentration, the crystal size and the reaction temperature.
AFM
An AFM image of the steps of as-etched V 2 AlC shows that the step height is around 1.3 to 1.4 nm, very close to the value of the c-lattice parameter of V 2 AlC. Apparently, increasing the crystal size of the parent MAX phase makes it impossible to extract MXene flakes through chemical etching alone, and the etching process therefore had to be eased. The XRD patterns show a shift of the (002) peak from 2θ = 13.5° to 7.4°, corresponding to c-lattice parameters of 13.107 Å and 23.872 Å for V 2 AlC and V 2 CT x , respectively. Such a phenomenon has been attributed to water intercalation in between the layers after delamination in an aqueous solution. Indeed, water intercalation is common and has been observed in other MXenes such as Nb 2 CT x and Nb 4 C 3 T x [START_REF] Byeon | Lithium-ion capacitors with 2D Nb 2 CTx (MXene) -carbon nanotube electrodes[END_REF][START_REF] Ghidiu | Synthesis and characterization of two-dimensional Nb 4 C 3 (MXene)[END_REF]. The value of the c-LP is comparable to that reported in previous works on V 2 CT x powders, yet the (006) and (103) peaks seem to have vanished in the present patterns [START_REF] Chen | CO 2 and temperature dual responsive "Smart" MXene phases[END_REF]. In our experiment, DMSO was also introduced as an intercalant. Figure 4.9(a) shows the collected V 2 CT x (VC-p) samples with and without DMSO treatment. We can see from the patterns that, contrary to Ti 3 C 2 T x , there is no obvious further shift of the (002) peak, i.e. no expansion of the interplane distance of V 2 CT x upon DMSO treatment. Although we did not try any other intercalant molecule, we would like to mention that a relatively large-molecule organic base, namely tetrabutylammonium hydroxide (TBAOH), was recently demonstrated to be an effective delamination agent. Note that in early trials of the etching experiments, the etched samples were terminated by a mixture of -F and -OH functional groups, as shown in Figure 4.13. We then improved the cleaning, drying and collection of the as-etched samples, and it became rare to detect -F in the EDS spectra.
SEM and EDX
More and more evidence shows that hydroxide groups bond directly to the MXene, with a layer of water hydrogen-bonded to the hydroxide surface, while fluoride moieties bond to the MXene surface. This is confirmed by direct measurement of the surface termination groups of V 2 CT x MXenes using NMR spectroscopy [START_REF] Harris | Direct Measurement of Surface Termination Groups and Their Connectivity in the 2D MXene V 2 CT x Using NMR Spectroscopy[END_REF]. However, recent computational studies revealed more complexity in the locations and orientations of the surface groups depending on the species and elemental compositions.
Xie et al. [START_REF] Xie | Role of Surface Structure on Li-Ion Energy Storage Capacity of Two-Dimensional Transition-Metal Carbides[END_REF] suggested that =O and/or -OH terminated MXenes were the most stable, because -F terminations were readily replaced by -OH groups, which could occur when they were washed and/or stored in H 2 O. During the high temperature treatments and/or metal adsorption processes, -OH was converted to =O terminations.
Atomically resolved investigations of the structural, elemental and chemical properties of single and double MXene sheets have also revealed native point defects such as Ti vacancies and oxidized titanium adatoms (TiO x ) [START_REF] Karlsson | Atomically Resolved Structural and Chemical Investigation of Single MXene Sheets[END_REF]. By analogy, we can infer that the absence of -F groups in our end product is not surprising.
TEM
The as-collected V 2 CT x (VC-p) was dispersed in DI water and in isopropanol (IPA). In comparison with the DI-water-dispersed sample, which exhibits well-defined and clean edges, the IPA-dispersed flakes have uneven edges and easily roll up. The a-LP of this flake obtained from SAED, inheriting the hexagonal basal-plane symmetry of the parent V 2 AlC phase, is measured to be ~2.9 Å, similar to that of the V 2 AlC precursor [START_REF] Shi | Synthesis of single crystals of V 2 AlC phase by high-temperature solution growth and slow cooling technique[END_REF]: as found for all other MXenes synthesized to date, removing Al did not alter the in-plane structure. In addition, no Al signal was detected in the EDS spectrum.
Raman spectrum
The Raman spectra of the untreated V 2 AlC and the HF-treated V 2 CT x were compared. The E 2g (ω 1 ), E 2g (ω 2 ), E 1g (ω 3 ) and A 1g (ω 4 ) modes are labelled after fitting the spectra with Gaussian profiles. The peak at 257.1 cm⁻¹ is assigned to the E 1g vibration, which corresponds to in-plane (shear) modes of the V atoms. This peak should still be observed in the V 2 CT x spectrum after etching of V 2 AlC.
We observed a reduced peak intensity and broadened shape, which is probably due to the increased interlayer spacing of the MXene structure. This change in the Raman spectra can be analogous to the relatively low G band intensity in monolayer graphene.
The peak at 358.6 cm⁻¹ is assigned to the A 1g vibration, which reflects the out-of-plane vibrations of the V atoms. Therefore, if the Al interlayer is removed, the corresponding peak in the MXene is expected to red-shift, broaden and change shape, in analogy with the 2D band of graphene. The E 2g modes ω 1 (155.7 cm⁻¹) and ω 2 (239.5 cm⁻¹) are in-plane vibrations of the V and Al atoms. After etching, the Al atoms are supposed to be replaced by lighter atoms (such as F, O or H); hence these peaks, which involve V-Al coupled vibrations, are more affected than the E 1g ones, and their intensities are reduced dramatically. The broadening of these peaks in the Raman spectrum could also result from the higher degree of disorder of the structure due to the introduction of various surface terminations. All these features of the Raman spectrum of V 2 CT x corroborate the removal of the Al layer, in agreement with the other characterization methods, such as XRD. However, it is also noticeable that the flake produced from the sample obtained after DMSO treatment is rougher along the edge and surface, which might be due to the stirring and sonication time.
Ti 2 CT x MXene
As discussed in Chapter I, the bonds between M n+1 X n and A are weaker than the M-X bonds, and are thus chemically more reactive. Therefore, selectively etching only the A layers from the MAX phase is possible. An effective way to extract the A layer without destroying the layered morphology is chemical etching. Alternative methods such as direct mechanical exfoliation have never been proven successful in delaminating 2D MXene layers. It is well known that mechanical exfoliation is closely related to the mechanical properties of the material, including the elastic constants C ij and the bulk mechanical moduli (K, G, and E).
From the calculated elastic coefficients and mechanical properties of the MAX phase compounds (See Table 4.1), we can see that the elastic constants of MAX phases with large-atomic-radius A elements are smaller than those with small-atomic-radius A elements. If we compare C 33 of Ti 2 SnC to that of Ti 3 AlC 2 , the clearly lower C 33 should ease mechanical deformation along the c-axis. Checking the atomic radii of Al (143 pm) and Sn (158 pm) [144], the Ti-Sn bond energy is weaker than that of Ti-Al. For this reason, HF etching combined with mechanical exfoliation was also applied to the as-grown Ti 2 SnC, aiming at synthesizing Ti 2 CT x MXenes.
Ti 2 SnC etching
The average size of the as-grown Ti 2 SnC crystals is around 10-100 µm, with a hexagonal morphology (as shown in Figure 4.24). Ti 2 CT x MXene was synthesized by immersing Ti 2 SnC in a 45% concentrated HF solution for 72 h at room temperature. The low crystal yield limited the amount of raw material available for the experiment; hence, after rinsing with DI water, the amount of particles was not sufficient to form a suspension comparable to that of V 2 CT x .
Another method, henceforth referred to as drop casting, consisted of filling a pipette with the Ti 2 CT x MXene solution and dispensing a drop on a glass slide or Si/SiO 2 substrate that had been cleaned in ethanol. The substrate was then left to dry in air. This was intended as a quick and easy method that might result in a dispersion of colloidal MXene particles. The goal was to evidence a potential route for etching non-Al-containing MAX phases.
Theoretically, when the water in the drop dried, a thin layer of flakes might be left behind. To help with flake adhesion, positively charged hydrophilic glass slides were used, since MXene flakes are negatively charged and it was desirable for the aqueous solution to spread along the slide surface. Finally, analyses consisting of optical microscopy, Raman spectroscopy and ToF-SIMS spectrometry were performed.
Optical microscopy of HF-etching Ti 2 SnC
Unlike the dry method, drop casting resulted in extremely uneven coverage of the drop area with MXene particles as well as a very large amount of solution residues on the Si/SiO 2 substrate, as can be seen in Figure 4.25(a,b). A single droplet of MXene solution was placed on top of the glass substrate and allowed to dry, as shown in Figure 4.25(c,d).
Raman spectroscopy of HF-etching Ti 2 SnC
Conclusion
In summary, large-scale V 2 CT x MXene flakes were obtained by conventional HF etching of V 2 AlC single crystals. Mechanical delamination of multilayered V 2 CT x flakes into few-layer flakes and their transfer onto Si/SiO 2 substrates were also achieved. Structural characterization demonstrated an enlarged interplane distance, while prior DMSO intercalation seems to have no effect on this type of MXene. Typical 2D-material morphology was observed in the SEM and TEM images. From the EDS results, we concluded that -OH terminations on V 2 CT x are dominant and the most energetically favourable, compared to -F and -O functional groups. Isolated V 2 CT x flakes on the Si/SiO 2 substrate are suitable for the fabrication of electrical devices and for measurements, which will be discussed in the next chapter.
Chapter 5
Electron transport in MXene devices
In this chapter, the protocols employed for the fabrication of V 2 CT x MXene devices suitable for electrical transport measurements are first introduced. Then, the methods used to measure the carrier transport properties of 2D V 2 CT x MXene are discussed. Measurements were performed from room temperature down to low temperature, and first-hand electron transport data were obtained for this new 2D material; we discuss the salient features emerging from these data. The contacts were defined by electron-beam lithography in two exposure steps. The second one aimed at patterning the contact pads themselves (200 × 200 µm²) as well as the paths to these contacts. Due to the larger working area and lower resolution requirements, a smaller magnification and a higher beam current were chosen to optimize the exposure time (E beam ~30 keV, spot 5, magnification ×400, dose ~300 µC/cm²).
Following this, the sample was developed in a MIBK/IPA (1:3) solution for 90 seconds and then rinsed in IPA for 30 seconds. After development, the sample was placed in the high-vacuum chamber (~2 × 10⁻⁷ mbar) of an e-gun Vacotec metal evaporation system. A first deposition of a 5 nm-thick Ti adhesion layer, followed by a second metallization with a Au layer, was then performed. The deposition rate for both layers was ~1 Å/s.
The last step (lift-off) consists in immersing the sample in acetone. The device is ready after rinsing in IPA and drying with a nitrogen gun.
Thermal annealing under an inert gas flow
Prior to the bonding, the sample to be annealed was placed in the chamber of a Rapid Thermal Annealing (RTA) system (ULVAC Mila-5000) in the Winfab cleanroom.
First, the annealing chamber was purged for 10 min under an inert atmosphere (an Ar/H 2 flow in our experiment). Then the sample was heated using a near-infrared lamp from room temperature to 500 °C over 1 hour and maintained at 500 °C for 4 hours, always under the Ar/H 2 flow. Afterwards, the sample was cooled back down to room temperature.
Due to the lack of references on contact fabrication on MXene samples, no standard protocol was available for this process, and the effect of the annealing on the contact performance was still unclear. The conditions reported for Ti 3 C 2 T x MXene samples vary from 140 °C to 300 °C for 15 min to 30 min [START_REF] Lipatov | Effect of Synthesis on Quality, Electronic Properties and Environmental Stability of Individual Monolayer Ti 3 C 2 MXene Flakes[END_REF][START_REF] Miranda | Electronic properties of freestanding Ti 3 C 2 T x MXene monolayers[END_REF].
The bonding
The last fabrication step is to contact the metallic leads of the device to the pins of the sample holder (Dual-in-line non-magnetic holder, 2 x 7 metallic pins) which is adapted for electrical (magneto) transport measurements in the cryostat.
First, all the pins of the chip were short-circuited using a gold wire (diameter 0.025 mm, purity 99.99%, GoodFellow) bonded with silver epoxy (Epoxy Technology). The chip was then annealed at 140 °C for 1 h to cure the silver epoxy and make it electrically conductive. This procedure, along with a grounded wrist band, is useful to avoid electric discharge through the device during the bonding step. In fact, according to our previous experience on mesoscopic graphene devices, such a discharge could strongly damage the device. Though the MXene devices seemed more robust, the same procedure was retained.
Then, the backgate of the sample was glued on the chip using silver paint (Agar Scientific) and contacted to one of the sample holder pins using the same gold wire and silver epoxy. After all the metallic leads were contacted to the pins, the sample was again annealed under the same conditions as in the previous step. Afterwards, the chip was plugged into the measurement setup and grounded. Finally, the wires short-circuiting the pins on the chip were carefully cut using tweezers. It is worth mentioning that some very thin flakes were also electrically contacted, as shown in Figure 5.4; however, the size of these flakes did not allow four contacts to be defined. In the present thesis, only devices with four contacts successfully deposited on the flakes will be discussed. In Table 5.1 we list the size and characteristics of the flakes that were used to produce devices, as described below. Coherent transport calculations were performed by other researchers via the non-equilibrium Green's function (NEGF) technique, which has already been applied to low-dimensional systems for electron transport. The simulation results predict that pristine MXenes are highly conductive [START_REF] Khazaei | Nearly free electron states in MXenes[END_REF]. In the first paper on Ti 3 C 2 T x , a small band gap of 0.05 eV with -OH termination and 0.01 eV with -F termination was predicted. Since then, the dependence of the electronic band structure on the chemical termination groups has been widely studied [START_REF] He | New two-dimensional Mn-based MXenes with room-temperature ferromagnetism and half-metallicity[END_REF][START_REF] Hu | Investigations on V 2 C and V 2 CX 2 (X = F, OH) Monolayer as a Promising Anode Material for Li Ion Batteries from First-Principles Calculations[END_REF][START_REF] Anasori | Control of electronic properties of 2D carbides (MXenes) by manipulating their transition metal layers[END_REF][START_REF] Lai | Surface group modification and carrier transport properties of layered transition metal carbides (Ti 2 CT x , T: -OH, -F and -O)[END_REF][START_REF] Guo | Ti 2 CO 2 Nanotubes with Negative Strain Energies and Tunable Band Gaps Predicted from First-Principles Calculations[END_REF][START_REF] Li | Lattice dynamics and electronic structures of Ti 2 C 2 O 2 and Mo 2 TiC 2 O 2 (MXenes): The effect of Mo substitution[END_REF][START_REF] Zhou | Electronic and transport properties of Ti 2 CO 2 MXene nanoribbons[END_REF].
According to the theoretical results, most of the functionalized MXenes are metallic or have small band gaps which could possibly be missed experimentally. Specifically, for the V 2 CT x MXenes, the V 2 C monolayer is predicted to be metallic with an antiferromagnetic configuration, while its derivatives V 2 CF 2 and V 2 C(OH) 2 , in their most stable configuration, are small-gap antiferromagnetic semiconductors [START_REF] Hu | Investigations on V 2 C and V 2 CX 2 (X = F, OH) Monolayer as a Promising Anode Material for Li Ion Batteries from First-Principles Calculations[END_REF][START_REF] Gao | Monolayer MXenes: promising half-metals and spin gapless semiconductors[END_REF]. According to the calculation results of [START_REF] Hu | Investigations on V 2 C and V 2 CX 2 (X = F, OH) Monolayer as a Promising Anode Material for Li Ion Batteries from First-Principles Calculations[END_REF], Figure 5.5 shows the optimized structural geometries of the free-standing V 2 C monolayer and its fluorinated and hydroxylated derivatives.
Electrical characterization of V 2 CT x MXenes
I-V curve
A first electrical characterization was carried out by sweeping the source-drain voltage, V DS , from -4 V to 4 V and recording the current, I DS . Figure 5.8 shows the typical I-V curve obtained at room temperature for Device I. Current was swept through contacts 1 and 4 while the voltage difference between contacts 2 and 3 was measured. An ohmic character is observed, with a constant resistivity of 2.46 × 10⁻² Ω·m, which is two orders of magnitude higher than the value reported for Ti 2 CT x MXenes [START_REF] Lai | Surface group modification and carrier transport properties of layered transition metal carbides (Ti 2 CT x , T: -OH, -F and -O)[END_REF] and one order of magnitude higher than the value reported for Ti 3 C 2 T x [START_REF] Miranda | Electronic properties of freestanding Ti 3 C 2 T x MXene monolayers[END_REF]. The contact resistance in our first trial is relatively high, in the range of a few MΩ, which might be due to PMMA residues left after contact fabrication; some attempts were therefore made to improve the contacts and lower the contact resistance.
Improvements of the contact fabrication procedure were achieved after several attempts at device measurement. Thermal annealing was confirmed to be a way to get rid of organic contaminants; the detailed process is described in the experimental part (Section 5.1.2).
Temperature dependency of resistivity
To investigate the carrier transport properties of the V 2 CT x MXenes, the resistivity as a function of temperature was measured on the four-contact devices after the annealing procedure. Current was swept through contacts 1 and 4 while the voltage between contacts 2 and 3 was measured, as in the previous configuration. At 10 K, the measured resistivities were of the order of 10⁻⁵ Ω·m for device II and 2.20 × 10⁻⁵ Ω·m for device III.
Clearly, the resistance increases with increasing temperature, unlike the temperature dependence reported for Ti 3 C 2 T x [START_REF] Miranda | Electronic properties of freestanding Ti 3 C 2 T x MXene monolayers[END_REF] and Ti 2 CT x [START_REF] Lipatov | Effect of Synthesis on Quality, Electronic Properties and Environmental Stability of Individual Monolayer Ti 3 C 2 MXene Flakes[END_REF], where in both cases the resistance decreased with increasing temperature. However, the change with temperature is rather modest, compared e.g. to that observed in the corresponding MAX phase. In some reported samples, with decreasing temperature the resistance first decreases until about 250 K and then increases slightly all the way down to 2.5 K. The results thus vary dramatically from sample to sample, and different regimes of the resistance variation with temperature have been reported.
In the case of our V 2 CT x MXenes, two regimes are also clearly observed in the plots, with two slopes of linear variation, one steeper than the other.
Metallic behaviour (decreasing resistance with decreasing temperature) is observed in both samples. However, one would expect the resistance to reach a constant value at low temperature due to impurity scattering, which is not observed. Moreover, if the electrical contacts mainly probe the top part of the MXene stack, it could happen that, with decreasing temperature, the inter-stack resistance is gradually reduced. In that case, one could imagine that in the high-temperature regime only the top stack contributes to the measured resistance, while in the low-temperature regime the whole MXene crystal contributes to transport, and the resistance becomes smaller (as in Figure 5.9).
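A minimal toy model of this hypothesis, assuming two conduction channels in parallel (the directly contacted top stack and the rest of the crystal coupled through a temperature-dependent inter-stack resistance), is sketched below. All resistance values and the chosen temperature dependence are arbitrary illustrative assumptions, not fitted to the measured devices.

```python
import numpy as np

def total_resistance(T, R_top=120.0, R_bulk=20.0, R_inter0=5e3, T0=60.0):
    """Two-channel toy model: top stack in parallel with (bulk + inter-stack barrier).

    R_top    : resistance of the directly contacted top stack (Ohm)
    R_bulk   : in-plane resistance of the rest of the crystal (Ohm)
    R_inter0 : inter-stack coupling resistance at room temperature (Ohm)
    T0       : temperature scale over which the coupling 'switches on' (K)
    """
    # Inter-stack resistance assumed to decrease on cooling (arbitrary functional form).
    R_inter = R_inter0 * np.exp(-(300.0 - T) / T0)
    branch = R_bulk + R_inter
    return R_top * branch / (R_top + branch)

for T in (300, 200, 100, 10):
    print(T, "K ->", round(total_resistance(T), 1), "Ohm")
```

At high temperature the large inter-stack barrier leaves only the top stack conducting, while at low temperature the second channel opens and the total resistance drops, qualitatively reproducing the two regimes.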
Another hypothesis is that the anisotropy ratio (ρ ab /ρ c ) in the case of the V 2 CT x MXene follows the same trend as in the V 2 AlC MAX phase (see Figure 3.10). In that case, since the anisotropy factor exhibits a factor-of-two change between room temperature and low temperature, this could explain a transition between different regimes in the temperature dependence of the in-plane resistance of V 2 CT x , especially if the electrical contacts to each MXene layer inside the stack are not homogeneous (i.e. better contacts to some layers).
We compared the resistivity values of the two samples in Figure 5.12. As shown there, the shape of the flakes is irregular, leading to an ill-defined geometry. Considering the maximum and minimum values used for the calculation of the resistivity, the error bar is calculated and shown in Figure 5.12(a). From the data on both measured samples, we find that the average resistivity of V 2 CT x is around 2 × 10⁻⁵ Ω·m. To the best of our knowledge, this is the highest-conductivity MXene device ever reported. More details can be found in Table 5.2. Furthermore, from Figure 5.12(b), the residual resistivity varies quite strongly depending on the amount of impurities and other crystallographic defects. In our present test, this implies that the thinner flake device has a better quality than the thicker one. Therefore, the resistivity of V 2 CT x MXene is highly sensitive to defects and also to the layer thickness.
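The resistivity and its error bar can be estimated as sketched below from the four-probe resistance and the flake geometry, taking the minimum and maximum in-plane dimensions as the bounds. The numbers used here are placeholders standing in for the actual values listed in Table 5.2.

```python
def resistivity_with_bounds(R, thickness, width_min, width_max, length_min, length_max):
    """Four-probe resistivity rho = R * (W * t) / L with min/max geometric bounds.

    R         : four-probe resistance V_23 / I_14 (Ohm)
    thickness : flake thickness from AFM (m)
    width_*, length_* : extreme in-plane dimensions of the irregular flake (m)
    """
    rho_low = R * width_min * thickness / length_max
    rho_high = R * width_max * thickness / length_min
    rho_mid = 0.5 * (rho_low + rho_high)
    return rho_mid, rho_low, rho_high

# Placeholder geometry, not the actual device dimensions.
rho, lo, hi = resistivity_with_bounds(R=300.0, thickness=100e-9,
                                      width_min=4e-6, width_max=6e-6,
                                      length_min=6e-6, length_max=8e-6)
print("rho = %.2e  (+%.2e / -%.2e) Ohm.m" % (rho, hi - rho, rho - lo))
```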
Field effect of V 2 CT x MXene device
Using the Si substrate as a backgate electrode, transport in V 2 CT x MXene device II was investigated by sweeping the backgate voltage, V G . As can be observed in Figure 5.13(b), the transfer characteristic of the V 2 CT x MXene device exhibits p-type behavior, as for Ti 2 CT x [START_REF] Lai | Surface group modification and carrier transport properties of layered transition metal carbides (Ti 2 CT x , T: -OH, -F and -O)[END_REF].
From a linear fit of the data, the field effect mobility, µ FE can be determined from the standard relation
\mu_{FE} = \frac{\Delta I_D}{\Delta V_G}\,\frac{t}{\epsilon V_{DS}}\,\frac{L}{W} \qquad (5.1)
where t and ǫ are the thickness and permittivity of the SiO 2 layer, and L and W are the length and width of the MXene device, respectively. Based on the results shown in Figure 5.13(a), a 1.6 K field-effect mobility µ FE of 22.7 ± 10 cm²/V·s is calculated (the error bar is due to the error on the linear fitting), which does not seem unphysical considering the mobility values reported in the literature: 0.6 cm²/V·s [START_REF] Miranda | Electronic properties of freestanding Ti 3 C 2 T x MXene monolayers[END_REF] and 4.23 cm²/V·s [START_REF] Lipatov | Effect of Synthesis on Quality, Electronic Properties and Environmental Stability of Individual Monolayer Ti 3 C 2 MXene Flakes[END_REF] for Ti 3 C 2 T x , and 10215 cm²/V·s for Ti 2 CT x [START_REF] Lai | Surface group modification and carrier transport properties of layered transition metal carbides (Ti 2 CT x , T: -OH, -F and -O)[END_REF].
From σ = µn 0 e, we calculate that the charge carrier density n 0 of the V 2 CT x MXene is 1.66 × 10²⁰ cm⁻³, while the density of Ti 3 C 2 T x MXenes was measured to be 8 × 10²¹ cm⁻³ [START_REF] Miranda | Electronic properties of freestanding Ti 3 C 2 T x MXene monolayers[END_REF]. The resistivity value of 2 × 10⁻⁵ Ω·m is of the same order of magnitude as reported for other MXene samples (thickness 20-30 nm) and about two orders of magnitude higher than that of the corresponding MAX phase, while the charge carrier concentration deduced from the field-effect measurement is one order of magnitude lower than the reported data.
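A short sketch putting Eq. (5.1) and the carrier-density estimate together is given below. The transfer-curve slope and device geometry are assumed placeholders chosen only to land in the order of magnitude quoted in the text, not the measured parameters of device II.

```python
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
eps_SiO2 = 3.9 * eps0    # permittivity of the SiO2 gate dielectric

def field_effect_mobility(dI_dVg, t_ox, V_ds, L, W):
    """Eq. (5.1): mu_FE = (dI_D/dV_G) * t / (eps * V_DS) * (L / W)."""
    return dI_dVg * t_ox / (eps_SiO2 * V_ds) * (L / W)

def carrier_density(sigma, mu):
    """n_0 = sigma / (mu * e), with sigma in S/m and mu in m^2/(V.s)."""
    return sigma / (mu * e)

# Assumed device parameters (90 nm oxide, hypothetical transfer-curve slope).
mu_fe = field_effect_mobility(dI_dVg=2.0e-8, t_ox=90e-9, V_ds=0.1, L=6e-6, W=5e-6)
n0 = carrier_density(sigma=1.0 / 2.0e-5, mu=mu_fe)
print("mu_FE = %.1f cm^2/Vs" % (mu_fe * 1e4))
print("n_0   = %.2e cm^-3" % (n0 * 1e-6))
```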
Conclusion
We can conclude that we successfully obtained first-hand transport data on V 2 CT x MXenes. The average resistivity of Device II and Device III is 2 × 10⁻⁵ Ω·m, not far from the values reported for other MXene samples. The temperature dependence is relatively small and unusual compared to a normal metal, but it is also sample dependent, so it is probably related to defects and to the corresponding conduction mechanism between layers. The field-effect measurement indicates a field-effect mobility µ FE of 22.7 ± 10 cm²/V·s. The magnetoresistance surprisingly does not show any feature, nor a clear classical parabolic dependence, which would have allowed very reliable information on the charge carriers to be extracted. The obtained µ H is of the order of 10² cm²/V·s, comparable to that of the parent MAX phase.
As demonstrated in the literature [START_REF] Ouisse | Magnetotransport in the MAX phases and their 2D derivatives: MXenes[END_REF], understanding magneto-transport in MXenes is challenging for several reasons. Among them, the possible presence of a large fraction of defects in the 2D sheets, especially when aggressive etchants such as 50% HF are used, also limits the reproducibility of experimental results from sample to sample. In solids in general, and in 2D solids in particular, it is difficult to characterize and quantify point defects.
It is expected that, with further knowledge of the structure of MXenes, their electronic transport properties will be understood more thoroughly.
Conclusion and perspectives
At the end of this thesis, let us look back on the objectives of this project. The primary objective involved the growth of single-crystal MAX phases and the characterization of their intrinsic physical properties, mainly the magneto-electronic transport properties and, more specifically, their anisotropy. The second objective focused on the synthesis of MXenes derived from MAX phase single crystals by combined chemical etching and mechanical exfoliation. MAX devices for in-plane and out-of-plane transport measurements were fabricated, and a full set of experimental data was obtained from single crystals of V 2 AlC and Cr 2 AlC as a function of temperature and magnetic field. In comparison with the resistivity of polycrystalline samples, the resistivity of the Cr 2 AlC single crystal is three times lower, and that of V 2 AlC four times lower. In particular, we obtain a very high ratio between the in-plane and c-axis resistivities, which is very substantial, in the range of a few hundred to a few thousand. From the MR and Hall effect measurements, the in-plane transport behaviour of the MAX phases has been studied. As in the case of the polycrystalline phases, we observe a magneto-resistance of a few per cent; among the measured compounds, only the MR curve of Cr 2 AlC exhibits parabolic-like behaviour, as reported for polycrystalline MAX phases. One can notice that the Hall resistivity varies linearly with magnetic field and that this behaviour is temperature independent, indicating that the systems are in the weak-field limit. R_H is small, as previously noticed for polycrystalline samples. The extracted mobility is in the range from 50 to 120 cm²/V·s, the same order of magnitude as for polycrystalline samples. Thermal transport measurements were also conducted on Cr 2 AlC samples; just as the electronic conductivity is higher than that of the polycrystalline counterpart, a higher thermal conductivity is verified. The sign of the Seebeck coefficient is not consistent with that of the Hall coefficient, a discrepancy that can still be partly explained by a compensation between holes and electrons. Attempts to measure the out-of-plane magnetoelectronic transport were also performed. Though we observed an interesting anomalous magnetoresistance in the absence of a Lorentz force, the mechanism of field-induced transport along the c-axis is still unclear. Theoretically, a general yet simple model was proposed for describing the weak-field magneto-transport properties of nearly free electrons in two-dimensional hexagonal metals. It was then modified to be applicable to the transport properties of layered MAX phases. We argue that the values of the in-plane Hall coefficient and of the in-plane parabolic magneto-resistance are due to the specific shape of the Fermi surface of almost two-dimensional hole and electron bands. While the contribution of the electron pockets to the in-plane resistivity is often predicted to be minor, both holes and electrons should contribute substantially to the overall value of the in-plane Hall coefficient.
Additionally, large-scale V 2 CT x MXene flakes were successfully synthesized by conventional HF etching of V 2 AlC single crystals. Efforts were made to optimize the reaction conditions, and a general protocol for etching V 2 AlC was proposed.
Mechanical delamination of multilayered V 2 CT x flakes into few-layer flakes and their transfer onto Si/SiO 2 substrates were also achieved. Structural characterization demonstrated an enlarged interplane distance, while prior DMSO intercalation seems to have no effect on V 2 CT x MXenes. Typical 2D-material morphology was found in the SEM and TEM images. From the XRD pattern, one observes an evident shift of 2θ from 13.5° to 7.4°, corresponding to c-LPs changing from 13.107 Å to 23.872 Å. Such a phenomenon has been attributed to water intercalation in between the layers after delamination in an aqueous solution. The a-LP of the flake obtained from SAED, inheriting the hexagonal basal-plane symmetry of the parent V 2 AlC phase, is measured to be ~2.9 Å, similar to that of the V 2 AlC precursor.
In the end, we detailed the electrical device fabrication process and proceeded with the electrical measurements. We can conclude that -OH terminations on V 2 CT x are dominant and the most energetically favourable, compared to -F and -O functional groups. Finally, we successfully obtained first-hand transport data on V 2 CT x MXenes: the average resistivity of Device II and Device III is 2 × 10⁻⁵ Ω·m, consistent with the values reported for other MXene samples. The temperature dependence is unusual compared to a normal metal, but it is also sample dependent, so it is probably related to defects and to the corresponding conduction mechanism between layers. The field-effect measurement indicates a field-effect mobility µ FE of 22.7 ± 10 cm²/V·s. The magnetoresistance surprisingly shows neither any feature nor a classical parabolic dependence that would allow more information on the charge carriers to be obtained. The obtained µ H is of the same order of magnitude as that of the parent MAX phase.
At the end of this work, one can also point out a few directions that could be investigated in future studies, in light of the difficulties and hindrances that I have faced, and of the new opportunities that emerged.
The first aspect concerns sample fabrication, including the improvement of single crystal growth. The crystal size matters in all subsequent measurements; in order to understand the anisotropic magneto-resistance, a sufficient crystal size is a prerequisite.
Sample fabrication also involves the fabrication of MXene devices. To the best of our knowledge, among the synthesis methods of MXenes, reports on mechanical exfoliation are very rare, and up to now no single-layer MXene has been obtained by mechanical exfoliation. As explained in the last chapter, understanding magneto-transport in MXenes is undoubtedly challenging. The characterization of the termination groups is still under exploration, which limits the understanding of the relationship between chemical functionalization and electronic properties, let alone the effect of defects introduced by the harsh etchant. In order to obtain high-quality flakes for device fabrication, a dry method would definitely be more favourable than a wet method; micro-mechanical exfoliation could be approached in the future.
Due to the lack of single-crystalline/single-flake transport data on the MAX phases and MXenes, there is still much to be done before a thorough picture of the physical mechanisms governing magnetotransport can be obtained. Future work includes: i) a thorough investigation of the magnetotransport properties of magnetic MAX phases; ii) assessing the correlation between the anisotropy ratio and the crystal quality; iii) out-of-plane magnetoelectronic transport; iv) a deeper understanding of the termination groups of MXenes depending on the fabrication process; v) more experimental endeavours on MXene electronic measurements.
Figure 1.2: Unit cell of representative MAX phase for (a) 211 (b) 312 (c) 413 [4].
Few bands cross the Fermi level along the short H-K, Γ-A and M-L directions (c-axis direction), whereas many bands cross the Fermi level along the Γ-K and Γ-M directions (directions located in the basal plane). Such a result suggests a very strong anisotropy of the electronic properties (which might give zero conductivity along the c-axis direction). The DOS and LDOS of Cr 2 AlC are shown in Figure 1.4(b). The lowest states, around 10 eV below the Fermi level, originate from C 2s states, which are separated from the upper part of the valence band. The upper part of the valence band mainly consists of Cr 3d, C 2p, Al 3s and Al 3p states. The Cr 2 AlC valence band shows the hybridizations of Cr 3d - C 2p (from -7 to -3.7 eV) and Cr 3d - Al 3p (from -3.1 to -1.6 eV), implying strong ionic-covalent Cr-C bonds and weaker Cr-Al ones. At the Fermi level, pure Cr d states are predominant, indicating that, to a first approximation, the electronic properties of Cr 2 AlC are dominated by the Cr d states. The charge density contour is shown in Figure 1.4(c) for the Cr-C bonds in Cr 2 AlC and in the metastable rock salt structure CrC with space group Fm3m. Comparing the charge density of the Cr-C bonds in both phases, the similarity is prominent. The ionic and covalent contributions to the overall bond character in the cubic CrC phase are essentially conserved in Cr 2 AlC, as can be seen from the charge density around Cr and between the Cr and C atoms in both structures. It can be concluded that the bonding is characterized by covalent and ionic contributions and that this character is essentially conserved in the M 2 AC ternaries. The chemical bonding in both materials therefore appears to be rather similar.
Figure 1.4: (a) Energy band structure of Cr 2 AlC [31] (b) Total and local DOS of Cr 2 AlC [32] (c) Charge density contour for the Cr-C bond; upper image: in Cr 2 AlC for a cut in the plane marked in left structure; down image: in rock salt structure CrC for a cut in the (100) plane [33] (d) Primitive Brillouin zone of the hexagonal unit cell.
Figure 1.5: Band structure of V 2 AlC via first principles under different pressures [34] (upper image) and TDOS and PDOS (down image) of V 2 AlC [35].
Figure 1.6(a) illustrates the crystal structure and the electronic orbitals across and in the laminate plane for Ti 3 SiC 2 . In general, Ti(2) 3d states contribute less to the Ti 3d - Si 3p chemical bonding because Ti(2) atoms are located in between two octahedral layers. The electronic band structure of Ti 3 SiC 2 , presented in Figure 1.6(b), exhibits similar anisotropic features as the 211 MAX phases: near the Fermi level there is almost no band crossing along the Γ-A and M-L directions and few along H-K, while there are large numbers of bands crossing along Γ-K and Γ-M. Moreover, it is demonstrated by theoretical calculation that Ti 3 SiC 2 presents hole-like properties along the basal plane and electron-like properties along the c-axis, as experimentally proved by the different signs of the Seebeck coefficient along the two directions.
Figure 1.7: (a) Fermi surfaces for two adjacent bands crossing the Fermi level, indicating hole-like (upper) and electron-like features (down) [START_REF] Chaput | Anisotropy and thermopower in Ti 3 SiC 2[END_REF] and (b) calculated and measured Seebeck coefficients of Ti 3 SiC 2 (a (000l)-oriented thin film of Ti 3 SiC 2 (triangle) and polycrystalline sample (square)) [START_REF] Magnuson | Electronic-structure origin of the anisotropic thermopower of nanolaminated Ti 3 SiC 2 determined by polarized x-ray spectroscopy and Seebeck measurements[END_REF].
Figure 1.8 represents current 2D materials families.
Figure 1.8: Current 2D material library [40].
Figure 1.11: (a) Schematic describing the synthesis process of MXenes from microcrystalline MAX phase by selective HF etching. SEM images of Ti 3 AlC 2 before (b) and after (c) HF etching [55].
Figure 1.12: (a) Schematic representation of the intercalation mechanism (b) Particle size distribution in aqueous colloidal solution; inset shows Tyndall scattering effect in the solution (c) Scanning electron microscope image of d-Ti 3 C 2 single flake on alumina membrane [68].
2000 °C. The carbon content was obtained by partial dissolution of the graphite walls (See Figure 2.2(a)).
Figure 2.1: (a) Photo of the puller with heat unit and reactor (computer control panel not included) (b) Schematic diagram of the experimental set-up used for high temperature solution growth.
Figure 2.2: (a) Graphite crucible (for Ti 3 SiC 2 , Ti 2 SnC) (b) Alumina crucible (for Cr 2 AlC, V 2 AlC) and the raw materials used for growth of Cr 2 AlC.
Figure 2.3 shows the details of the reactor geometry. The crucible can be separated into two parts: the inner alumina crucible, where growth takes place, and the outer graphite crucible, which is directly heated by the electromagnetic field applied by the induction coil.
Figure 2.3: Reactor configuration indicating the melt solvent and crucible structure.
Figure 2.4: Experimental process profile of all the crystal growth in the current work.
Figure 2.6(a) is the Cr-Al-C ternary phase diagram, in which a liquid surface expands along the Al-Cr line of the isothermal sections at temperatures higher than the melting point of Al. At temperatures in the 1400 °C range, and starting from a binary Cr-Al melt with a Cr atomic fraction roughly below χ Cr = 0.4, dissolving a small amount of carbon in the melt allows one to obtain a composition for which the liquid is in equilibrium either
Figure 2.5: (a) Flux "cupcake" taken immediately from the crucible after the cooling process (b) Flux "cupcake" after immersion in dilute HCl for a few hours (c) Flux "cupcake" after standing in air for a few weeks.
Figure 2.7 shows the cross sections of the flux cakes with different molar ratios of Al:Cr:C. It is obvious that by changing the amount of carbon dissolved in the solvent, the number of nuclei and the size of the crystals vary. Figure 2.7(a) shows that if the dissolved amount is too low, there is not enough carbon to sustain continuous growth, which leads to a small number of crystals. On the contrary, excessive carbon results in plenty of nuclei and forms the undesirable by-product Al 4 C 3 , which appears as the gold-colored phase in Figure 2.7(b). Figure 2.7(c, d) gives the optimized example with 7.2 at% of carbon, before and after washing with HCl acid, from which we can see that nucleation starts from the bottom and wall of the crucible.
Figure 2.6: Phase diagrams of (a) Cr-Al-C [92] (b) Cr-C.
Figure 2.7: Cross-section images of various flux cupcakes with different molar ratios of raw materials (a) n(Cr) : n(Al) : n(C) = 37.5% : 56.9% : 5.6% (b) n(Al) : n(Cr) : n(C) = 53.9% : 33.5% : 12.6% (c) n(Cr) : n(Al) : n(C) = 35.6% : 57.2% : 7.2% (d) etching of (c).
Figure 2.8: Photo of as-grown crystals.
Figure 2.8 shows photos of the as-grown Cr 2 AlC crystals, with an average size in the centimeter range. We can see from the photos that the crystals are platelet-like, indicating a highly anisotropic, strongly oriented growth.
Cr 2 AlC. As shown in Figure 2.10(b), this process results in the complete dissolution of the flux, leaving the crystals free and almost unaffected by the acid treatment. Once the solidified flux had been removed, the remaining platelets were cleaned and sonicated in ethanol or propanol.
Figure 2.9: Phase diagrams of (a) V-Al-C [93] (b) V-Al [94].
Figure 2.10: Optical photos of V 2 AlC single crystal growth (a) solidified flux after immersion in diluted HCl acid for several hours and (b) single crystal after washing with ethanol or propanol and rinsing with water.
Figure 2 . 11 :
211 Figure 2.11: Photos of different MAX crystals showing the typical sizes.
2.2.1.1 Crystal structure and identification
Three different kinds of pole figures were obtained, each corresponding to one or two plane orientations with the same interplane distance: (a) (0006) (1 peak) and (1013) (6 peaks); (b) (1019) (6 peaks); (c) (2023) (6 peaks) and (1126) (6 peaks) (see Figure 2.12). In particular, the (1013) pole figure exhibits the expected peaks, all located at the apexes of a hexagon. It is noticeable that the interplane distance of the (0006) planes is very close to that of the (1013) planes (0.214073 nm and 0.214592 nm, respectively), so that for the crystal orientation along the c-axis an additional diffraction peak appears at the origin. For the other crystal orientations, we obtained a set of diffraction peaks scattered over the circle containing the hexagonal distribution expected from a single crystal.
Figure 2.12: X-ray pole figure of a Cr2AlC platelet with an area of several mm2.
Figure 2.13: FWHM of the φ-scan pattern of the (2023) plane.
Figure 2.13 shows the FWHM (full width at half maximum) of the φ-scans of the (2023) plane.
Figure 2.14: (a) Raman spectrum of a Cr2AlC platelet with the excitation laser parallel to the c-axis; (b) assignment of atomic displacements to the different Raman-active vibrational modes in 211 MAX phases [95].
Figure 2.15 shows optical microscope images of various samples observed in the Nomarski mode. Figure 2.15(a) shows the bumpy and tangled surface of the as-grown crystals after cleaning. The dendrite islands can be clearly observed in Figure 2.15(b).
Figure 2.15: Optical microscope views of the surface of the Cr2AlC crystals obtained in the Nomarski mode (differential interference contrast (DIC) microscopy): (a) bumpy and tangled surface of the as-grown crystals after cleaning; (b) dendrite islands; (c-e) dendrite morphology; (f) growth terrace.
From Figure 2.15(f), we can see the transport and diffusion of solute on the crystal surface. However, no clear spiral-growth features were observed.
Figure 2.16: Kossel model of a crystal surface [101].
Figure 2.17: SEM images of the as-grown Cr2AlC crystal surface.
Figure 2.18: XRD pattern of an as-grown V2AlC platelet.
Figure 2.19: X-ray pole figures measured from a V2AlC platelet and obtained from different interplane distances. Diffraction peak indexations are shown in the figures.
Figure 2.21: Optical microscope views of the surface of V2AlC crystals obtained in the Nomarski mode: (a) flat surface; (b) heterogeneous nuclei; (c) dendrite structure; (d) etch pits.
Figure 2.22: AFM topographic images of the surface of V2AlC platelets after dissolution of the solidified flux in concentrated HCl for several hours. A terrace-and-step structure characteristic of a step-flow growth process is visible. In both images all the step heights are equal to c.
Figure 2.23: The left photograph shows the two parts of a V2AlC crystal after cleavage with a strong adhesive tape (the tape has been cut around the cleaved crystal and corresponds to the white color). The right image is an AFM topograph of the cleaved surface in a region with a high density of tears.
Figure 2.25: AFM images of the top surface of a terrace (the color range corresponds to a much smaller height than in the previous figures). The extracted RMS roughness is 0.18 nm for image (a) and 0.06 nm for image (b).
Since Ti2C MXene is to be obtained from Ti2SnC in Chapter IV, some results are briefly presented in Figure 2.26. The hexagonal morphology of the crystal seen in the SEM micrographs, combined with the EDX results, proves that Ti2SnC crystals were obtained. Owing to the lack of reported data on the Raman spectrum of Ti2SnC, we can only refer to our results on other M2AX phases; the peak at 246.8 cm⁻¹ can be assigned to the E2g mode, which is generated by the in-plane vibrations of the M-A atoms.
Figure 2.26: (a) SEM image indicating the hexagonal morphology of the crystal (inset: EDX results showing the atomic ratio of Ti:Sn:C) and (b) Raman spectrum of Ti2SnC.
…cm² for Ti3SiC2, 1 cm² for V2AlC and several cm² for Cr2AlC were obtained with our experimental set-up. Structural characterization by X-ray measurements and Raman spectroscopy confirms the single-crystalline character of the samples. The surface morphology of the V2AlC crystals is flat, without the dendrites observed on the surface of Cr2AlC. Well-defined steps and terraces indicate a step-flow growth process. Emphasis is put on the mechanical cleavage of the samples, which can be achieved in the basal plane thanks to the existence of the weakly bonded Al atomic planes. The nano-lamellar structure of the crystals permits cleaving the platelets parallel to the basal plane simply by sticking and folding a strong adhesive tape on each face. The initial flatness of the crystal surface allowed us to obtain cleaved areas almost equal to that of the entire platelet. Since the transformation of HF-treated polycrystalline V2AlC into MXene nano- or micro-crystals has already been demonstrated, our crystals might be used as a starting basis for forming MXenes with macroscopic area. This will be the main objective of the next step of this work, along with the assessment of the transport properties.
Van der Pauw geometry. A schematic of a square Van der Pauw configuration is shown in Figure 3.1(b) [109]. In our experiment, the Van der Pauw geometry was used for the temperature dependence of the resistivity, not for Hall measurements. Four voltage measurements yield the following four resistances: R21,34 = V34/I21, R32,41 = V41/I32, R43,12 = V12/I43 and R14,23 = V23/I14. The two characteristic resistances can then be written as RA = (R21,34 + R43,12)/2 and RB = (R32,41 + R14,23)/2. Van der Pauw demonstrated that RA and RB determine the sheet resistance RS through the Van der Pauw equation: exp(-πRA/RS) + exp(-πRB/RS) = 1.
Figure 3.1(c) is a typical Hall bar geometry; the Hall coefficient is defined as RH = VH·t/(I·B), where t is the sample thickness.
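As an illustration of how these relations are used in practice, the short Python sketch below (not part of the thesis; the resistances, thickness, Hall voltage, current and field values are placeholders) solves the Van der Pauw equation numerically for the sheet resistance RS, converts it to a resistivity through ρ = RS·t, and evaluates the Hall-bar expression RH = VH·t/(I·B).

    # Minimal sketch (illustrative values, not measured data): solve the Van der Pauw
    # equation exp(-pi*R_A/R_S) + exp(-pi*R_B/R_S) = 1 for the sheet resistance R_S,
    # then rho = R_S * t.  Also evaluates the Hall-bar relation R_H = V_H * t / (I * B).
    import numpy as np
    from scipy.optimize import brentq

    def sheet_resistance(R_A, R_B):
        f = lambda R_S: np.exp(-np.pi * R_A / R_S) + np.exp(-np.pi * R_B / R_S) - 1.0
        upper = np.pi * (R_A + R_B) / (2.0 * np.log(2.0))   # exact solution when R_A == R_B
        return brentq(f, 1e-12, 10.0 * upper)                # Ohm per square

    R_A, R_B = 1.0e-3, 1.3e-3        # example characteristic resistances (Ohm)
    t = 100e-6                        # sample thickness (m), example value
    R_S = sheet_resistance(R_A, R_B)
    print("rho =", R_S * t, "Ohm.m")

    V_H, I, B = 1.0e-6, 10e-3, 5.0    # example Hall voltage (V), current (A), field (T)
    print("R_H =", V_H * t / (I * B), "m^3/C")

For a symmetric square contact arrangement RA ≈ RB and the root reduces to the familiar RS = πRA/ln 2, which is a convenient sanity check on the numerical solution.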
Figure 3.1: Geometry of in-plane measurements: (a) four-probe; (b) Van der Pauw [109]; (c) Hall bar bridge.
Figure 3.2: Geometry of the out-of-plane measurement: (a) contact configuration and (b) derivation of the current lines along the edge.
Figure 3.3: Correction function curve.
Figure 3.4(a,b) shows the samples fabricated by these two different methods. In order to investigate the effect of the high-power laser on the samples, optical micrographs are shown as well; they indicate that the samples did not burn heavily and that not too many defects were generated, apart from a few oxidation layers along the cutting edge.
Figure 3.4: (a) Diamond-saw cutting of V2AlC; (b) laser cutting of Cr2AlC; optical microscopic photos of the heat-affected area of laser cutting of (c) V2AlC and (d) Cr2AlC.
Cr seemed to perform better than Ti for the Cr2AlC sample. The deposition rate was 1 Å/s for Ti/Cr and 3 Å/s for Cu. Meanwhile, a standard silicon sample was also prepared as a reference (see Figure 3.5(b)). Figure 3.5(c) shows an optical microscope view of the contacts on the Cr2AlC sample.
Figure 3.5: (a) Metallization mask for contact deposition, designed for different sizes of MAX phase samples; (b) standard silicon sample; (c) microscopic photo of the as-prepared Cr2AlC sample.
Figure 3.6: Four-probe measurement with lock-in amplifiers in the present experiments.
Figure 3.7: Ratio of in-phase and out-of-phase signals (example: raw curve of the V2AlC out-of-plane resistivity measurement).
Figure 3.8 summarizes the temperature dependence of the in-plane resistivity obtained for both Cr2AlC and V2AlC. First of all, in the 100 K-300 K temperature range, there is a linear dependence of ρ as a function of T, as generally observed for almost all MAX phase compounds [3]. The evolution is qualitatively similar to that already reported for polycrystalline phases, but the resistivity values are much smaller in the case of the single crystals. This typical metal-like behaviour is assumed to be simply due to the large density of states at the Fermi level, as explained in Chapter I. Note that the variations of the absolute in-plane resistivity values measured among different samples are not negligible. Possible causes for these variations include different levels of defects (e.g. related to slightly different synthesis conditions), as well as imperfect electrical contacting of all the planes constituting the structure. In comparison with the resistivity values of polycrystalline samples (1.5×10⁻⁷ Ω·m at 4 K and 7.4×10⁻⁷ Ω·m at 300 K for Cr2AlC; 4×10⁻⁸ Ω·m at 4 K and 2.5×10⁻⁷ Ω·m at 300 K for V2AlC [3]), we find that, although the value varies among single-crystal samples, the resistivity of single-crystalline Cr2AlC is about three times lower, being 2.5×10⁻⁷ Ω·m at 300 K and 5×10⁻⁸ Ω·m at 4 K. In the case of V2AlC, the room-temperature and 4 K resistivities, 1×10⁻⁷ Ω·m and 1×10⁻⁹ Ω·m respectively, are about four times lower than the polycrystalline values. This is also visible in Figure 3.8 (left), which indicates that the contacts are robust during the cooling process and that the curves are reproducible.
Figure 3.8: Temperature dependence of the in-plane resistivity of (a) Cr2AlC and (b) V2AlC.
Figure 3.9: Corresponding ideal resistivity of Cr2AlC and V2AlC.
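The "ideal" resistivity of Figure 3.9 presumably corresponds to subtracting the residual (defect-limited) term from the measured curve, following Matthiessen's rule; the sketch below only illustrates that arithmetic and the residual-resistivity ratio, using the single-crystal end-point values quoted above (it is not code or data from the thesis).

    # Minimal sketch, assuming Matthiessen's rule: rho(T) = rho_residual + rho_ideal(T).
    # The numbers are the Cr2AlC and V2AlC single-crystal values quoted in the text
    # (4 K and 300 K), used here only to illustrate the subtraction.
    samples = {
        "Cr2AlC": {"rho_4K": 5.0e-8, "rho_300K": 2.5e-7},   # Ohm.m
        "V2AlC":  {"rho_4K": 1.0e-9, "rho_300K": 1.0e-7},   # Ohm.m
    }
    for name, r in samples.items():
        rho_ideal_300K = r["rho_300K"] - r["rho_4K"]        # phonon-limited part at 300 K
        rrr = r["rho_300K"] / r["rho_4K"]                   # residual-resistivity ratio
        print(f"{name}: rho_ideal(300 K) = {rho_ideal_300K:.2e} Ohm.m, RRR = {rrr:.0f}")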
Figure 3.10 summarizes the temperature variation of the corrected value ρc versus T.
Figure 3.10: Out-of-plane resistivity (all values are corrected using the correction functions) and anisotropy ratio ρc/ρab of Cr2AlC and V2AlC.
Figure 3.11: Temperature dependency of the magnetoresistance MR of Cr2AlC, V2AlC and Ti3SiC2.
Figure 3.12: Power-law fitting of the MR of V2AlC and Ti3SiC2 in comparison with the parabolic MR of Cr2AlC.
Figure 3.13: Temperature dependence of the exponent of the power-law fitting for the MR of V2AlC and Ti3SiC2.
…(Figure 3.15(b)). By linear fitting, the Hall coefficient RH can be deduced from the slopes of the fitted curves.
Figure 3.16: Temperature dependency curves of the Hall coefficient RH of Cr2AlC, V2AlC and Ti3SiC2.
Figure 3.17: Two-band-model density (a), mobilities (b) and fitting parameter α (c) as a function of temperature for the Cr2AlC Hall bar.
Figure 3.18: (a) Thermal conductivity of the Cr2AlC single crystal and (b) comparison with a polycrystalline sample [3].
Figure 3.19: (a) Electronic contribution and (b,c) plots indicating the phonon contribution to the thermal conductivity of Cr2AlC.
Figure 3.20: Seebeck coefficient of the Cr2AlC single crystal (in comparison with a polycrystalline sample [3]).
Figure 3.21 shows the measurement configuration for the out-of-plane magneto-electronic transport of the MAX phases (modified Montgomery contact configuration [Crommie et al., Thermal-conductivity anisotropy of single-crystal Bi2Sr2CaCu2O8]), where B//c and I//c. According to the Lorentz force, F = qE + qv × B, when B//v no magnetic force acts on the charge carriers; hence, ideally, no magnetoresistance should be observed. Yet this is not the case in our experimental results: for a magnetic field applied perpendicular to the basal (a-b) plane, the out-of-plane (c-axis) resistivity displays a magnetic-field enhancement.
Figure 3.21: Measurement configuration of the c-axis magneto-electronic transport.
Figure 3.22: Out-of-plane resistivity of Cr2AlC (B//c, I//c): (a) magnetoresistivity; (b) zero-field ρc(T).
Figure 3.23: Extraction of the in-plane contribution from the out-of-plane magnetoresistance.
Figure 3.24: Fitting parameters of the in-plane contribution to the out-of-plane magnetoresistivity vs T: (a) α; (b) A and its relation with the corresponding charge mobility.
Figure 3.25: (a) Residual magnetoresistance vs T and (b) various fitting curves.
Figure 3.26: Out-of-plane resistivity of V2AlC (B//c, I//c): (a) ρc(B); (b) magnetoresistance at 1.5 K; (c) zero-field ρc(T).
Figure 3.27: Extraction of the in-plane contribution from the out-of-plane magnetoresistance and the fitting of the residual magnetoresistance.
Figure 3.28: Fitting parameters of the in-plane contribution to the out-of-plane magnetoresistivity vs T: (a) α; (b) A.
The free-electron parabolas are of the form Ei(k) = ħ²|k - ki|²/(2m0), centred in the first BZ (index zero, k0 = 0) and in all the BZs adjacent to it (see Figure 3.29(a)). In these expressions k is the wave vector and the ki are the coordinates of the centres of the BZs. U represents the Fourier component of the periodic potential for a wave vector joining the centres of two adjacent hexagonal BZs. If the cell dimensions are specified, U is the only adjustable parameter. It splits the free-electron curves into a hole band and electron pockets (Figure 3.29(b)). Close to the corners of the hexagon, the secular equation can be reduced to the determinant of a 3×3 matrix, leading to analytical relationships between E and k. The E vs k relation can be plotted for any value of k, and the Fermi surface is computed by finding the line segments which satisfy E = EF, where the Fermi energy EF is determined by finding the level for which the sum of electrons in all bands is equal to the number of electrons per unit cell.
Figure 3.29: Reproduction of adjacent Brillouin zones of a two-dimensional hexagonal lattice with the corresponding free-electron disks (left) and Fermi line of the same lattice in the first Brillouin zone for nearly free electrons (right).
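A minimal numerical sketch of this construction is given below (it is not the code used in the thesis): close to a corner K of the hexagonal Brillouin zone, the three degenerate free-electron plane waves are coupled by the single Fourier component U, and diagonalising the resulting 3×3 matrix reproduces the splitting into a lower hole-like band and higher-lying electron pockets. The lattice parameter a = 0.304 nm and U = 0.25 eV are taken from the caption of Figure 3.35; ħ²/2m0 ≈ 0.0381 eV·nm² for the free-electron mass.

    # Minimal sketch: nearly-free-electron bands of a 2D hexagonal lattice near a
    # Brillouin-zone corner, via the 3x3 secular matrix described in the text.
    import numpy as np

    HBAR2_2M = 0.0380998   # hbar^2 / (2 m0) in eV.nm^2
    a = 0.304              # in-plane lattice parameter, nm (illustrative, from Fig. 3.35)
    U = 0.25               # Fourier component of the periodic potential, eV

    # Reciprocal lattice vectors of the 2D hexagonal lattice
    b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])
    b2 = (2 * np.pi / a) * np.array([0.0,  2.0 / np.sqrt(3.0)])

    # The three plane waves degenerate at the zone corner K = (4*pi/(3a), 0)
    G = [np.zeros(2), b1, b1 + b2]
    K = np.array([4 * np.pi / (3 * a), 0.0])

    def bands(k):
        """Diagonalise the 3x3 nearly-free-electron Hamiltonian at wave vector k (nm^-1)."""
        H = np.full((3, 3), U, dtype=float)                 # off-diagonal coupling U
        for i, Gi in enumerate(G):
            H[i, i] = HBAR2_2M * np.dot(k - Gi, k - Gi)     # free-electron parabolas
        return np.linalg.eigvalsh(H)                        # sorted eigenvalues, eV

    # Scan a short segment through K: the degeneracy at K is lifted by U, splitting the
    # free-electron curves into a lower (hole-like) band and higher electron pockets.
    for dk in np.linspace(-0.15, 0.15, 7) * np.linalg.norm(b1):
        k = K + np.array([dk, 0.0])
        print(f"dk = {dk:+7.3f} nm^-1  ->  E = {np.round(bands(k), 3)} eV")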
Figure 3.31: 1D tight-binding model with 2 atomic planes per unit cell showing a large splitting.
…(Figure 3.33, red line). Comparing it to the projection of a full numerical computation in the basal plane demonstrates that, in the case of Ti2AlC (see Figure 3.33, right), only the small features are not reproduced. In the case of Ti2AlC, the roughest approximation is to consider that the electron pockets form open tubes, which is actually not predicted by DFT calculations. An interesting point is that compiling the published expectations for other Ti-based MAX phases shows that the overall structure of the Fermi surface is always formed by hexagonally shaped hole bands in the center of the zone, and trigonal electron pockets extending over three hexagonal BZs and centred at the corners of the BZ hexagon [Chaput et al.; Mauchamp et al.] (Figure 3.34). Very often, the electron pockets form open tubes too. The number of bulges appearing in the open tubes over one unit cell is a direct function of the integer n appearing in the Mn+1AXn formula. In one extreme case (Ti3AlC2), …
Figure 3.34: Fermi surfaces of Ti3AlC2, Ti3SiC2, Ti3GeC2 [126] and Ti2AlC [125].
…(Figure 3.35(1c)), corresponding to the electron-like parts of the Fermi line. They must be subtracted from the hole-like contribution (the central part of Figure 3.35(1c)).
Figure 3.35: (1) Top, from left to right: (a) Fermi line, (b) polar plot of the velocity along the Fermi line, (c) polar plot of the mean free path and (d) magnetoresistance of a 2D hexagonal system of nearly free electrons with a low number of electrons per unit cell (N = 2, τp = 10⁻¹⁴ s, U = 0.25 eV, four 2D planes per unit cell with c = 1.36 nm and a = 0.304 nm). (2) Bottom, from left to right: (a) Fermi line, (b) radial plot of the velocity and (c) mean free path for a higher value of N (N = 6, τp = τn = 10⁻¹⁴ s, U = 0.75 eV, four 2D planes per unit cell with c = 1.36 nm and a = 0.304 nm).
…[Yoo et al., "Ti3SiC2 has negligible thermopower"] and from the nearly-free-electron model, which can explain a relative insensitivity of n_app to the electron band filling. A small value of RH can indeed be attributed to multiple causes: it may be induced by a compensation between holes and electrons, or by a larger number of only one type of carrier. Yet it is only in the isotropic case that the common observation of a small value of RH indicates the presence of two types of carriers. The mean-free-path curves of the hole band and of one of the two electron pockets almost compensate one another, and what remains is the net contribution to σxy of the second electron pocket (see Figure 3.35(2c)). Since the velocity does not vary considerably with the filling, RH exhibits a plateau, whose value is fixed by the electron pocket per unit cell that remains uncompensated. Yet the hole density substantially decreases, whereas the electron density does the opposite.
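A small numerical illustration of this compensation argument is sketched below, using the standard isotropic two-band expressions σ = e(nµe + pµh) and RH = (pµh² - nµe²)/[e(nµe + pµh)²]; the carrier densities and mobilities are purely illustrative and are not values from the thesis.

    # Minimal sketch (standard isotropic two-band formulas, not thesis code):
    # low-field Hall coefficient and conductivity for coexisting electrons and holes.
    e = 1.602176634e-19  # elementary charge, C

    def two_band(n, p, mu_e, mu_h):
        """n, p in m^-3; mu_e, mu_h in m^2/(V.s). Returns sigma (S/m) and R_H (m^3/C)."""
        sigma = e * (n * mu_e + p * mu_h)
        r_hall = (p * mu_h**2 - n * mu_e**2) / (e * (n * mu_e + p * mu_h) ** 2)
        return sigma, r_hall

    # Nearly compensated case: R_H stays small and its sign is set by the mobility
    # ratio, so a small measured R_H does not by itself fix the carrier densities.
    for p in (1.0e27, 1.1e27, 0.9e27):
        sigma, r_hall = two_band(n=1.0e27, p=p, mu_e=0.005, mu_h=0.004)
        print(f"p = {p:.2e} m^-3 : sigma = {sigma:.3e} S/m, R_H = {r_hall:+.3e} m^3/C")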
Figure 4.1 shows the principle of the removal of Al from the V2AlC 3D structure.
Figure 4.1: Schematic diagram of Al removal from V2AlC.
…(Figure 4.3(a)). The large crystals that were not fully etched at the bottom of the beaker were immersed in fresh HF solution of the same concentration at room temperature for 120 h. The collected suspension was separated by centrifugation at 6000 rpm for 10 min and washed with deionized water several times until the pH of the solution rose above 5. Thereafter, the wet sediment was vacuum dried at 40 °C for 24 h and labelled VC-p, indicating its powder-like morphology (see Figure 4.3(b)). The same rinsing procedure was applied to the larger crystals as well, labelled VC-f, indicating their flake-like morphology (see Figure 4.3(c)); both can be referred to as V2CTx, where Tx stands for the surface termination groups.
Figure 4.2: Schematic diagram of the synthesis process.
Figure 4.3: (a) Dark green supernatant and collection of (b) VC-p (powder-like) and (c) VC-f (flake-like).
Figure 4.4: Blue-tape exfoliation: (a) status of exfoliation after ~10 folding-unfolding cycles of the tape on VC-f; (b) transfer onto a piece of Si/SiO2.
Figure 4.5: (a) SEM image and (b) the corresponding EDS spectrum of the as-etched V2AlC single crystal.
…(Figure 4.6). No clear change of the interlayer spacing was detected after two weeks of etching.
Figure 4.6: AFM image indicating the step height of the V2AlC crystal after etching for 14 d.
Figure 4.8: XRD patterns of V2AlC and V2CTx (VC-p and VC-f).
…V2CTx (see Figure 4.9(b,c)). The (002) peak shifted from 2θ = 8.9° to 4.6° (c-LP from 19.9 Å to 38.6 Å) after 2 h of mixing in TBAOH solution. On doubling the time to 4 h, a very slight further down-shift of the (002) peak position was observed (down to 4.56°). This intercalant also works well for the large-scale delamination of carbonitride MXenes such as Ti3CNTx.
Figure 4.9: (a) XRD patterns of V2CTx MXenes with and without DMSO treatment and (b,c) XRD patterns of Ti3CNTx and V2CTx before and after mixing with TBAOH [71].
Figure 4.10 shows cross-section SEM images of the mechanically disrupted V2AlC single crystals after sonication in 45% HF at 80 °C for 12 h. Apparently, the etching …
Figure 4.10: Cross-section SEM images of the mechanically disrupted V2AlC single crystals after sonication in 45% HF at 80 °C for 12 h.
Figure 4.11: Cross-section SEM images of the as-etched V2CTx after immersion in 45% HF for 120 h.
Figure 4.12(a) shows the distribution of V2CTx MXene grain sizes after sonication, from a few micrometers to hundreds of micrometers. Compared to small delaminated V2CTx particles [Chen et al.], and even to the most-studied case, Ti3C2Tx, with grain sizes of tens of micrometers, Figure 4.12(b) demonstrates that crystals with grain sizes larger than 100 µm could be successfully exfoliated by HF acid. From the magnified image (Figure 4.12(c)) of the area marked in Figure 4.12(b), we can clearly see the delaminated, accordion-like morphology. An EDS spectrum was acquired at low magnification on the sample of Figure 4.12(b). The results indicate that Al is almost completely removed from the etched samples (see Figure 4.12(d)). No fluorine was detected in the sample, which is a probable indication that -OH groups functionalize most of the V2CTx surface.
Figure 4.12: SEM images indicating (a) V2CTx MXenes with various sizes and (b) 100 µm large particles; (c) enlarged view of the area marked in (b) and (d) EDS spectrum of (b).
Figure 4.13: EDS spectrum of an as-etched sample in the early stage of our experiments.
…Ti3C2Tx by aberration-corrected atomic-resolution scanning transmission electron microscopy (STEM) combined with electron energy loss spectroscopy (EELS), which confirmed that the inherited close-packed structure of Ti3C2Tx is preserved, as the atomic positions of two Ti3C2Tx sheets remained laterally aligned, with coverage by O-based surface groups …
Figure 4.14: The considered types of -OH-terminated Ti3C2Tx MXenes: (a) -OH groups placed at the hollow site between three neighbouring carbon atoms; (b) -OH groups placed at the top site of the carbon atom.
Digital photos of V2CTx nanosheets dispersed in water and IPA, showing the typical Tyndall effect, indicate their excellent hydrophilicity and dispersibility (see Figure 4.15). Furthermore, a drop of the colloidal solution of V2CTx MXenes was placed on a lacey-carbon-coated TEM copper grid and allowed to dry under vacuum at 40 °C overnight, ready for TEM observation.
Figure 4.15: Optical photos of dispersed colloidal solutions of V2CTx MXenes.
Figure 4.16 shows TEM micrographs of DI-water-dispersed V2CTx nanosheets. Electron-transparent sheets of different lateral sizes, from hundreds of nanometres …
Figure 4.16: TEM images and SAED pattern of V2CTx nanosheets (DI-water-dispersed colloidal solution).
Figure 4.17 shows TEM images and the corresponding EDS spectrum of IPA-dispersed V2CTx nanosheets (Figure 4.17(c)).
Figure 4.17: TEM images and SAED patterns (a,b) and EDS spectrum (c) of V2CTx nanosheets (IPA-dispersed colloidal solution).
Figure 4.18 shows the morphology of an electron-transparent monolayer V2CTx flake.
Figure 4.18: TEM images of an individual monolayer V2CTx flake.
…Figure 4.19(a) and the corresponding optical images in Figure 4.19(b). In comparison with the clean, flat crystal surface before etching, highly defective surfaces were obtained; in particular, etching holes tend to appear at places where there are more growth defects. The Raman spectrum of the original V2AlC shows peak positions consistent with those previously reported. The observed vibration modes of certain symmetry groups of V2AlC …
Figure 4.19: Raman spectra (a) and corresponding optical images (b) of V2AlC, as-etched V2CTx and an as-exfoliated flake on the Si/SiO2 substrate.
Figure 4.21: AFM images indicating the size and thickness distributions of isolated V2CTx flakes on the Si/SiO2 substrate, starting from the as-etched V2CTx MXene treated with the intercalant DMSO.
Figure 4.22: AFM images of a flake showing tape residue.
Figure 4.23: AFM images of two typical flakes: (a) 5-7 nm thick, smaller than 5 µm; (b) 15-20 nm thick, larger than 10 µm.
Figure 4.24: Optical micrographs of Ti2SnC.
Figure 4.26 shows the Raman spectra of Ti2SnC before and after HF etching. The peaks at 243.2 cm⁻¹ and 365.1 cm⁻¹ in the Ti2SnC spectrum can be assigned to the E2g and A1g modes, where E2g involves the in-plane vibration of the Ti-Sn atoms while A1g involves the out-of-plane vibration of Ti. After etching with HF acid, the E2g peak vanishes owing to the removal of the A element (Sn). Meanwhile, two peaks appear, centred at 403.3 cm⁻¹ and 650.4 cm⁻¹, close to the Raman peaks found for Ti2CTx MXene in the literature, representing vibration modes that can be assigned to nonstoichiometric titanium carbide [Ahmed et al.; Cai et al.].
Figure 4.25: Drop coating with MXene solution on (a,b) a Si/SiO2 substrate and (c,d) a glass substrate.
Figure 4.26: Raman spectra of Ti2SnC before and after HF etching (on the Si/SiO2 substrate).
Figure 4.27: Raman spectrum of Ti2CTx MXene on different substrates.
Figure 5.2: Schematic diagram of the steps for electrical contact fabrication: (a) side view of the sample; (b) spin-coating of the PMMA layer on the sample; (c) EBL writing of the electrical contact pattern in the PMMA layer; (d) development to remove the non-exposed PMMA areas; (e) deposition of the Ti/Au layer; (f) lift-off to remove the rest of the PMMA layer.
Figure 5.3 shows a micrograph of the contacted device at the end of this process. Note …
Figure 5.3: Optical images of the electrically contacted V2CTx MXene device.
Figure 5.4: Two contacts on the V2CTx MXene flakes.
Figure 5.5: Optimized structural geometries of the free-standing V2C monolayer and its fluorinated and hydroxylated derivatives. (a) Side view of the bare V2C monolayer, consisting of a triple layer with V(1)-C-V(2) stacking sequence. (b) Top view of the V2C monolayer with the T5 magnetic configuration. (c-h) Side views of (c) I-V2CF2, (d) II-V2CF2, (e) III-V2CF2, (f) I-V2C(OH)2, (g) II-V2C(OH)2 and (h) III-V2C(OH)2. (i,j) Top views of I-V2CF2 and II-V2CF2 [151].
Figure 5.6 shows the total density of states (TDOS) of the V2C, V2CF2 and V2C(OH)2 monolayers. For bare V2C, the resulting DOS indicates that it is metallic with an antiferromagnetic configuration. In contrast, when V2C is passivated by F or OH, the resulting V2CF2 or V2C(OH)2 monolayer turns into a semiconductor regardless of the specific adsorption configuration of the functional groups. The calculated DOS shows that the C atoms contribute almost nothing to the states near the Fermi level; the metallic electronic character is determined by the free electrons of the V atoms. The termination groups -F or -OH saturate the free electrons of the V atoms, resulting in the non-metallic character of the fluorinated and hydroxylated V2C monolayers. Different adsorption configurations merely result in different band gaps.
Figure 5.6: TDOS of (a) V2C, (b) I-V2CF2, (c) III-V2CF2, (d) I-V2C(OH)2, (e) II-V2C(OH)2 and (f) III-V2C(OH)2. The Fermi levels are set to zero. In each panel, the upper curve and the lower curve correspond to the DOS of the two spin species [Hu et al.].
Figure 5.7: Electronic band structures of (a) V2C, (b) V2CF2, (c) V2C(OH)2 and (d) V2CF(OH) in their high-symmetry configurations. The Fermi level is fixed as the reference of zero energy [157].
Figure 5.8: (a) Optical microscopy (OM), (b) AFM image, (c) dimensions and (d) current-voltage (I-V) characteristic of V2CTx MXene device I measured at room temperature.
Figure 5.9 and Figure 5.10 present the resistance as a function of temperature for the two annealed samples: the high-temperature value is three orders of magnitude below the resistivity of the sample that was not annealed. With decreasing temperature, the resistivity drops from 1.68×10⁻⁵ Ω·m and 2.35×10⁻⁵ Ω·m at 200 K to 1.66×10⁻⁵ …
Figure 5.9: (a,b) AFM profile of the as-contacted flake, (c) optical micrograph of the device and (d) resistivity versus temperature curve of V2CTx MXene device II.
Figure 5.10: (a,b) AFM profile of the as-contacted flake, (c) optical micrograph of the device and (d) resistivity versus temperature curve of V2CTx MXene device III.
Figure 5.11: Resistance versus temperature curves of (a) Ti2CTx [66] and (b) Ti3C2Tx [148].
Figure 5.12: Temperature dependency of the resistivity of devices II and III: (a) resistivity; (b) residual resistance.
Figure 5.13: (a) Schematic diagram of the MXene transistor; (b,c,d) dependence of the source-drain current IDS (for a constant VDS) on the backgate voltage for different MXene devices: (b) V2CTx, (c) Ti2CTx [66], (d) Ti3C2Tx [148].
Figure 5.14: Effect of magnetic fields on the V2CTx MXene devices: (a) magnetoresistance of device II at 1.6 K; (b) magnetoresistance of device III at 1.6 K; (c) magnetoresistance after symmetrization and fitting with MR = αB².
…is undoubtedly challenging for several reasons: (a) when the A layers are etched from the MAX phase, they are replaced by =O, -F and -OH terminations; with different surface terminations, the transport properties certainly vary, and the characterization of the termination groups is still under exploration, which limits the understanding of the relationship between chemical functionalization and electronic properties; (b) the interlayer space is intercalated by cations and water molecules, and the exact stacking arrangement in multilayered particles can vary from flake to flake and from device to device.
The fabrication of MXene-based nanodevices and the cryogenic measurement of their electronic properties were also proposed at the beginning of the thesis. It is now time to examine the degree of completion of this project. First of all, several MAX-phase single crystals have been successfully grown using the high-temperature solution growth and slow-cooling technique, including Ti2SnC, Ti3SiC2, V2AlC and Cr2AlC. Structural characterization by X-ray measurements and Raman spectroscopy confirms the single-crystalline character of the samples. The most favourable case is that of Cr-Al-C, owing to its extremely high carbon solubility. Fortunately, the size of the obtained crystals is only limited by the size of the crucible, leading to relatively large Cr2AlC crystals ready for subsequent device measurements. V2AlC, as a starting basis for forming MXenes with macroscopic area, permits selective etching after HF treatment and cleavage of the platelets parallel to the basal plane.
Figure 3.14: Kohler's scaling for the samples Ti3SiC2, V2AlC and Cr2AlC at different temperatures.
Figure 3.15: (a) Hall resistivity vs magnetic field at different temperatures and (b) the variation of the Hall coefficient with temperature (note: considering the sweep direction, values computed within one sweep of the magnetic field (-11 T to 11 T) are also shown).
Figure 3.30: Dispersion curves of nearly free electrons in a 2D hexagonal lattice (parameters are given in the graph and defined in the text).
Table 1.2 is a summary of the available Hall constants for MAX phases at room temperature. We can see from Table 1.2 that RH of MAX phases, like that of most other metallic conductors, is quite small. The measured Hall coefficients tend to fluctuate around zero and give either slightly positive or slightly negative values.
Table 1.2: Summary of the electrical transport parameters of some MAX phases that have been measured.
The Fermi level is set to 0 eV. One can notice that there are many bands crossing the Fermi level and no gap between the valence band and the conduction band, suggesting that Cr2AlC should show metallic conductivity. Moreover, the bands around the Fermi energy derive mainly from Cr 3d states, indicating that Cr 3d states dominate the conductivity of Cr2AlC. One can also notice that no band is …
1.2.1.2 Synthesis of 2D materials
Generally, synthesis techniques for 2D materials can be classified into two approaches: …
Instead of HF, different etchants were introduced for the etching of the MAX phases. Even with various synthesis conditions and chemical reagents, fluoride compounds act as the decisive factor for producing 2D MXenes. Fluoride salts such as lithium fluoride (LiF), sodium fluoride (NaF), potassium fluoride (KF) and caesium fluoride (CsF) can be added to hydrochloric acid (HCl) or sulphuric acid (H2SO4) to produce the etchant solution [64]. A precise balance of etching conditions for the different acid-salt combinations can potentially yield multi-layered early-transition-metal carbides modified by different surface chemistries and diverse pre-intercalated ions. This one-step method has been successfully applied to produce multi-layered Nb2CTx, Ti2CTx, Cr2TiC2Tx, Mo2TiC2Tx, Mo2Ti2C3Tx, Mo2CTx, etc. [62, 63, 64]. Moreover, bifluoride solutions (e.g., NaHF2, KHF2 or NH4HF2) were …
Application of MXenes
Energy storage application
…no Cr-MXene has been successfully synthesized except for Cr-Ti-containing MXenes.
…on improving their volumetric capacity, i.e. the energy density per volume. Depending on their charge-discharge mechanisms, supercapacitors are classified as either electrical double-layer capacitors (EDLCs) or pseudo-capacitors. In general, pseudo-capacitors possess higher volumetric capacitances and lower cycling stability. MXenes, with their 2D character and large surface areas, present themselves as promising electrode materials for supercapacitors. Ti3C2Tx is the most studied MXene for supercapacitors; the volumetric capacitances of free-standing Ti3C2Tx MXene electrodes in neutral and …
High Seebeck coefficients were predicted for MXenes by DFT calculations [72]. Thermoelectric calculations based on the Boltzmann theory imply that semiconducting MXenes attain very large Seebeck coefficients at low temperatures, which opens novel potential applications for these surface-terminated MXenes.
The mechanical properties of MXenes are also of great interest, as the M-C and/or M-N bonds are some of the strongest known. M2X MXenes are predicted to be stiffer and stronger than their M3X2 and M4X3 counterparts. Though experimental mechanical testing has been conducted only for MXene films and not for single layers, a cylinder with walls made of a 5 µm thick Ti3C2Tx film can support over 4000 times its own weight [81]. MXene/polymer composites have also been developed with improved mechanical and electrical properties. Thin films of MXenes are transparent and their optoelectronic properties can be tuned by the chemical and electrochemical intercalation of cations, which implies that MXene films can be applied in transparent conductive coatings and optoelectronics [82].
1.2.2.4 Battery: Similar to graphene, MXenes are promising candidate electrode materials for lithium-ion batteries (LIBs) and supercapacitors through the intercalation of Li ions into the MXene layers. Owing to their wide chemical variety, they offer more tunable performance than elemental graphene. MXenes are promising LIB anode materials, since they have excellent electronic conductivity, a low operating voltage range, low diffusion barriers favourable for high-rate performance, and exceptional mechanical properties that are invariant to Li adsorption. Unlike typical diffusion-limited battery electrode materials, intercalation of ions in MXenes can paradoxically occur at a high rate without significantly depreciating their energy storage capacities. Theoretical calculations show that MXenes with low formula weights (M2X MXenes) are the most promising in terms of theoretical gravimetric capacity. Encouraged by the first application of HF-etched multilayered Ti2CTx as an anode material in LIBs, other MXene materials, such as Ti3C2Tx, Mo2TiC2Tx, Nb2CTx, V2CTx, Nb4C3Tx and Mo2CTx, have also been examined as potential anodes for LIBs [63, 69, 76, 83].
Supercapacitor: Supercapacitors provide alternative energy storage with rapid power density but low energy density compared to batteries. Research efforts have been made …
…ultrasonic technique. "Large" (>10 µm) and homogeneous V2C MXenes were obtained by following a mechanical exfoliation method. The synthesis and characterization of V2C MXenes are detailed in Chapter IV. MXene-based devices were fabricated and their electronic properties were examined down to low temperature and in high magnetic field, as discussed in Chapter V.

chapter 2 MAX Phase Single Crystal Growth and Characterization

In the present work, several MAX single crystals were grown from the liquid phase by the high-temperature solution growth and slow-cooling technique.
Table 4.1: Calculated elastic coefficients and mechanical properties of the 20 MAX phase compounds (unit: GPa) [145].
Sc2CT2 (T = F, OH and O), Ti2CO2, Zr2CO2 and Hf2CO2 become semiconducting upon surface functionalization. The energy gaps are estimated to be 1.03, 0.45 and 1.8 eV for Sc2CT2 with T = F, OH and O, respectively, 0.24 eV for Ti2CO2 and 0.88 eV for Zr2CO2 [Khazaei et al.].
Table 5.2 lists the electronic transport properties of the MXene devices and a comparison with the MAX phases. Undoubtedly, electronic transport data on this new family of 2D materials are still scarce; even the reported results for the most-studied Ti2CTx and Mo2CTx vary dramatically. If we compare the resistivity of our V2CTx samples, the average …
Resistivity ρ (Ω·m) | Hall mobility µH (cm²/V·s) | Field-effect mobility µEF (cm²/V·s) | Concentration n (cm⁻³) | Ref.
V2CTx          (1.68-2.22)×10⁻⁵    97.3; 145.2    22.7    1.66×10²⁰
Ti3C2Tx (f)    (2.22-3.92)×10⁻⁵    [158]
Mo2CTx (p)     0.02-20             [76]
Mo2CTx (p)     3.37×10⁻⁵           [159]
Ti3C2Tx (s)    0.7 ± 0.2           8±3 ×10²¹    [148]
Ti3C2Tx (m)    sheet resistivity, 1590    4.23    [77]
Ti2CTx (m)     9375                >10000    [66]
V2AlC          2×10⁻⁷              80-120    (2-4)×10²¹
*NOTE: f: film; p: paper; s: single layer; m: multi-layer
Table 5.2: Electronic transport data of the MXene devices.
…DFT calculations. This means that the simplified model will probably slightly overestimate the contribution of electrons to in-plane transport. This can be empirically improved by finding a set of energy parameters which slightly underestimates the extent of those electron pockets [Mauchamp et al.], along with the fitting Fermi line given by the 2D model using U = 0.35 eV and an appropriate splitting between the hole bands and the electron pockets (red line).

chapter 4 MXenes synthesis and characterization
In this chapter, we report a general approach to etch V2AlC single crystals and mechanically exfoliate multilayer V2CTx MXenes. We then detail the structural characterization of the obtained MXenes. To our knowledge, all MXenes reported up to now have been derived from Al-containing MAX phases; the second part of this chapter therefore discusses a process aiming to obtain Ti2CTx MXene from Ti2SnC single crystals.
A simple mechanical disruption was introduced to break apart the large single crystals by using sharp tweezers to scratch the surface. Figure 4.7 shows optical images of the same sample before and after mechanical disruption. Additional macro- and micro-cracks are generated by the mechanical force during the sonication process and lead to sizes acceptable for chemical etching. The effect of these intentional defects on the improvement of preferential etching will be discussed in the following part.
The structure and phase composition before and after HF exfoliation were characterized by XRD. Typical XRD patterns of the raw material V2AlC and of the V2CTx samples are presented in Figure 4.8. It is worth mentioning that the broad peak at 11° in the XRD pattern of VC-p is the signal from the grease on the sample holder. As for other MAX-phase-to-MXene transformations, the degree of order as measured by XRD clearly decreases, and VC-p exhibits a more disordered structure than VC-f. The (002) peaks of both the VC-p and VC-f samples broaden and shift to lower angles after acid treatment, indicating larger d spacings. Yet, in the XRD pattern of VC-f, we can still observe the (002) peak with decreased intensity compared to the original crystal, indicating unreacted crystals. In the XRD pattern of VC-p, one observes an evident shift of 2θ from 13.5° to 7.4°, corresponding to c-LPs changing from 13.107 Å to 23.872 Å.
The Raman spectrum of a V2CTx flake transferred onto the Si/SiO2 substrate exhibits the same peaks at the same positions as those observed on the "bulk MXene" material, in addition to peaks characteristic of phonon modes of the substrate. Yet it is intriguing that, when the layer thickness drops below 10 nm, no Raman signal could be detected on the thin layers.
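The c lattice parameters quoted above follow directly from Bragg's law applied to the (00l) reflections; the sketch below (not from the thesis) reproduces the numbers, assuming Cu Kα radiation (λ = 1.5406 Å), which is an assumption on my part since the radiation source is not stated in this excerpt.

    # Minimal sketch: c lattice parameter from the (002) reflection via Bragg's law,
    # 2*d*sin(theta) = lambda and c = l*d for a (00l) reflection.  Cu K-alpha is assumed.
    import math

    LAMBDA = 1.5406  # X-ray wavelength in Angstrom (Cu K-alpha, assumed)

    def c_from_two_theta(two_theta_deg, l=2):
        theta = math.radians(two_theta_deg / 2.0)
        d = LAMBDA / (2.0 * math.sin(theta))   # interplanar spacing of (00l)
        return l * d                            # c lattice parameter, Angstrom

    for label, tt in [("V2AlC (002)", 13.5), ("VC-p (002)", 7.4)]:
        print(f"{label}: 2theta = {tt} deg -> c = {c_from_two_theta(tt):.3f} A")
    # -> about 13.11 A and 23.87 A, matching the 13.107 A and 23.872 A quoted in the text.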
AFM
The thickness and shapes of the flakes produced by mechanical exfoliation and transferred onto Si/SiO2 were investigated by atomic force microscopy (AFM), as can be seen in Figure 4.20. The AFM height profiles measured along the color lines show that the V2CTx flakes have similar heights of about 10-20 nm and are identified as multilayers.
The lateral sizes of the flakes are several µm. It is worth mentioning that the AFM height of a flake measured relative to the Si/SiO2 substrate overestimates the actual height because of surface adsorbates, such as water molecules, that are trapped under the V2CTx flake; similar observations have been reported previously for other 2D materials as well [Diaz et al.; Ochedowski et al.]. Here, electron beam lithography (EBL) was performed using a SEM (FEI/Philips XL30 FEG) equipped with a Raith laser-interferometer-controlled stage and the Elphy Plus software (remote control for e-beam writing). This equipment is capable of writing patterns with a resolution down to 10 nm.
A metallic mark close to the flake was first located by SEM imaging in TLD mode (~10 keV, magnification ×2000, spot 3). SEM imaging corrections, including wobble, astigmatism and focus, were performed, as the alignment and focus are closely related to the precision of the beam writing. It is worth mentioning that a calibration is necessary to know the relative position of the electron beam with respect to the mark coordinate system in Elphy Plus. Based on the SEM image of the target flake, one can easily draw the flake shape in the Elphy Plus layout editor according to the values of the Cartesian coordinates. Once the drawing of the flake was done in the Elphy Plus file, the electrical contacts were designed using the Layout Editor.
Once the pattern file is ready, the process of electrical contact fabrication is conducted in sequence.
Paul Carmignani Flannery O'connor's Complete
FLANNERY O'CONNOR'S COMPLETE STORIES
à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
"The two circumstances that have given character to my own writing have been those of being Southern and being Catholic." "Art is never democratic; it is only for those who are willing to undergo the effort needed to understand it" "We all write at our own level of understanding" (Mysteries and Manners)
Hence, the thought-provoking consequence that we all read at our own level of understanding; the aim of these lectures is consequently to raise our own level…
Books by Flannery O'Connor
A BIOGRAPHICAL SKETCH
In spite of the author's warning : « If you're studying literature, the intentions of the writer have to be found in the work itself, and not in his life » (126), we'll conform to tradition and follow the usual pattern in literary studies i.e. deal with OC's life in relation to her work, for the simple reason that, as in most cases, the person and the writer are inextricably entwined.
To sum up the gist of what is to follow: « The two circumstances that have given character to my own writing have been those of being Southern and being Catholic. ». The two essential components of O'C's worldview and work are her Southerness and her Catholicism.
Mary Flanneryshe dropped her 1 st name Mary on the grounds that nobody "was likely to buy the stories of an Irish Washerwoman"was born in Savannah, Georgia on 25 March 1925 ; she was the only child of Edward Francis and Regina Cline O'Connor, a Catholic family. Her father, a real estate broker who encountered business difficulties, encouraged her literary efforts ; her mother came from a prominent family in the State.
The region was part of the Christ-haunted Bible belt of the Southern States and the spiritual heritage of the section profoundly shaped OC's writing, as you all know by now.
Her first years of schooling took place in the the Cathedral of St. John the Baptist, across from her home; she attended mass regularly In 1938, after her father had fallen gravely ill, the family moved to Milledgeville, her mother's birthplace, and the place where she lived for the rest of her life. Her father died in 1941.
In 1945 graduated from the Georgia State College for Women with a major in social science. Later she went to the State University of Iowa and got a degree of Master of Fine Arts in Literature.
She attended the Iowa's writer workshop conducted by Paul Engle and Allan Tate, who was to become a lifelong friend. She published her first story "The Geranium" while she was still a student. She won the Rinehart-Iowa Fiction Award for the publication of 4 chapters of what was to become Wise Blood.
Later on, she was to win several times first place in the O. Henry contest for the best short-story of the year.
After graduating, she spent the fall of 1947 as a teaching assistant while working on her first novel, Wise Blood, under the direction of Engle. In early 1948, she moved to Yadoo, an artist colony in upstate New York where she continued to work on her novel. There, she became acquainted with the prominent poet Robert Lowell and influential critic Alfred Kazin. She spent a year in New York with the Fitzgeralds from the spring of 1949 to the Christmas of 1950. Early in 1951 she was diagnosed with lupus erythematosus, a blood disease that had taken her father's life in 1941 ; she accepted the affliction with grace, viewing it as a necessary limitation that allowed her to develop her art. Consequently, she retreated permanently to the South ; for the next 13 years O'C lived as a semi-invalid with her mother at Andalusia, their farm house, a few miles outside Milledgeville, surrounded with her her famous peacocks and a whole array of animals (pheasants, swans, geese, ducks and chickens). She wrote each morning for three hours, wrote letters, and took trips with her mother into town for lunch. She read such thinkers as Pierre Teilhard de Chardin, George Santayana and Hannah Arendt. She held no jobs, subsisting solely on grants (from the National Institute of Arts and Letters and the Ford Foundation for instance), fellowships and royalties from her writing.
Though her lupus confined her to home and she had to use crutches, she was able to travel to do interviews and lecture at a number of colleges throughout the 50's. In 1958, she even managed a trip to Lourdes and then to Rome for an audience with the Pope. An abdominal operation reactivated the lupus and O'C died on August 3 rd , 1964, at the age of 39.
Although she managed to establish her reputation as a major writer by the end of the 50's, she was regardered as a master of the short-story, O'C grew increasingly disheartened in her attempt to make "the ultimate reality of the Incarnation real for an audience that has ceased to believe." This article of faith and her belief in Christ's resurrection account for her way of seeing the universe. However, her work is mostly concenred with Protestant, a paradox she explained in the following way: "I can write about Protestant believers better than Catholic believers-because they express their belief in diverse kinds of dramatic action is obvious enough for me to catch. I can't write about anything subtle." An important milsetone in her literary career and reputation : the publication of Mystery and Manners edited by Sally and Robert Fitzgearld in 1969. Through the 70s, this collection of essays became the chief lens for O'C interpretation.
In 1979, the same people edited a collection of O'C's letters, The Habit of Being.
From beyond the grave, O'C herself became the chief influence on O'C criticism.
To sum up: O'C led a rather uneventful life that was focused almost exclusively on her vocation as a writer and devotion to her Catholic faith. All told, OC's was a somewhat humble life yet quite in keeping with her literary credo : "The fact is that the materials of the fiction writer are the humblest. Fiction is about everything human and we are made out of dust, and if you scorn getting yourself dusty, then you shouldn't try to write fiction. It's not a grand enough job for you". Influences: The Bible ; St Augustine ; Greek tragedy ; G. Bernanos ; T.S Eliot ; W. Faulkner ; G. Greene ; N. Hawthorne ; Søren Kierkegaard ; Gabriel Marcel ; Jacques Maritain ; F. Mauriac ; E. A. Poe ; Nathaniel West.
THE SETTING OF F. O'CONNOR'S FICTION : THE SOUTH AND ITS LITERARY TRADITION
A sketchy background aiming to place OC's work in its appropriate context. But before doing that it is necessary to take up a preliminary problem, a rather complex one i.e. the question of the relationship between fiction and life or the world around us, which will lead us to raise 2 fundamental questions :
the first bears on the function and nature of literature ; the second regards the identification of a given novelist with the section known as the South (in other words, is a novelist to be labelled a Southern writer on account of his geographical origin only ?).
A) Function and nature of literature
A word of warning you against a possible misconception or fallacy (a mistaken belief), i.e. the sociological approach to literature which means that even if OC's fiction is deeply anchored to a specific time and place, even if fiction-writing is according to OC herself "a plunge into reality" beware of interpreting her stories as documents on the South or Southern culture.
Such an approach -called the "sociological fallacy"takes it for granted that literature is a mirror of life, that all art aims at the accurate representation or imitation of life (a function subsumed under the name of mimesis, a term derived from Aristotle). That may be the case, and OC's fiction gives the reader a fairly accurate portrayal of the social structure and mores of the South but it can't be reduced to the status of a document. Consequently, there is no need to embark upon a futile quest for exact parallels between the fictional world and the experiential world. A characteristic delusion exposed by critic J. B. Hubbell 1 : Many Northern and European and, I fear, Southern readers make the mistake of identifying Faulkner's fictitious Yoknapatawpha County with the actual state of Mississippi. It is quite possible that an informed historian could parallel every character and incident in Faulkner's great cycle with some person or event in the history of the state ; and yet Faulkner's world is as dark a literary domain as Thomas Hardy's Wessex and, almost as remote from real life as the Poictesme of James Branch Cabell or the No Man's Land of Edgar Allan Poe. This is the main danger: the sociological dimension somewhat blurs if not obliterates the literary nature and qualitythe literarinessof Southern fiction.
As an antidote to the mistaken or illusory view that literature is a mirror of life, I'd like to advocate the view that the South also is a territory of the imagination. Just as one does not paint from nature but from painting, according to A. Malraux, one does not write books from actual life but from literature. Cf. also OC's statement that: "The writer is initially set going by literature more than by life" (M&M, 45). Moreover, there exists now a fictitious South, a composite literary entity that owes its being to all the novels that were written about or around it.
What was once raw about American life has now been dealt with so many times that the material we begin with is itself a fiction, one created by Twain, Eliot, or… Far from being a mere "transcript of life", literature aims, in W. Faulkner's own words, at "sublimating the actual into the apocryphal" (an apocryphal story is well-known but probably not true), which is just another way of claiming that literature creates its own reality.
By selecting and rearranging elements from reality and composing them into an imaginative pattern the artist gives them a meaningfulness and a coherence which they would otherwise not have possessed. As an imaginative recreation of experience the novel can thus, in and of itself, become a revolt against a world which appears to have no logical pattern. Another element bearing out the literariness of OC's short fiction is the play of intertextuality :
The referent of narrative discourse is never the crude fact, nor the dumb event, but other narratives, other stories, a great murmur of words preceding, provoking, accompanying and following the procession of wars, festivals, labours, time: And in fact we are always under the influence of some narrative, things have always been told us already, and we ourselves have always already been told. (V. Descombes, Modern French Philosophy, 186).
OC's texts take shape as a mosaic of quotations ; they imitate, parody, transform other texts.
Her stories reverberate with echoes from The Bible, or The Pilgrim's Progress, etc., which conclusively proves that one does not write from life or reality only but also from books :
[...] la création d'un livre ne relève ni de la topographie, ni du patchwork des petits faits psychologiques, ni des mornes déterminations chronologiques, ni même du jeu mécanique des mots et des syntaxes. Elle ne se laisse pas circonscrire par l'étroite psychologie de l'auteur, champ de la psychanalyse, ni par son « milieu » ou par son « moment », que repère la sociologie ou l'histoire. L'oeuvre ne dépend de rien, elle inaugure un monde. [...] La tâche de l'artiste n'est-elle pas de transfigurer, de transmuer - l'alchimiste est l'artiste par excellence - la matière grossière et confuse - materia grossa et confusa - en un métal étincelant ? Toute conclusion d'un poète doit être celle de Baudelaire : « Tu m'as donné de la boue et j'en ai fait de l'or. » (Durand, 398) Far from being a mere transcript of the social, economic, historical situation of the South, OC's work is essentially, like all forms of literary creation, a transmutation of anecdotal places and geographic areas into topoi* : « une transmutation de lieux anecdotiques et de sites géographiques en topoi […] toute oeuvre est démiurgique : elle crée, par des mots et des phrases, une terre nouvelle et un ciel nouveau » (Durand, …).
*Topos, topoi : from Greek, literally "place", but the term came to mean a traditional theme (topic) or formula in literature. Between life and literature there intervene language and the imagination. Literature is basically a question of "words, commas and semi-colons", as S. Foote forcefully maintained.
B) Identification of a given novelist with the section
As for the second question - the "southernness" of such and such a novelist - I'd like to quote the challenging opinion of a French specialist, M. Gresset, who rightly maintains that :
Il est à peu près clair maintenant que le Sud est une province de l'esprit, c'est-à-dire non seulement qu'on peut être "Sudiste" n'importe où, mais que le Sud en tant que province, se trouve à peu près où l'on voudra. [...] Être sudiste, ce serait donc non seulement être un minoritaire condamné par l'Histoire, mais être, comme on dit maintenant, un loser. (Emphasis mine)
Consequently, one should be wary of enrolling under the banner of southern literature such and such a writer just because he was born in Mississippi, Tennessee or Georgia. OC set all her books in the South, consequently she is a regionalist, but she managed to endow her fiction with universality, i.e. to turn the particular into the universal.
As a Southerner born and bred, Flannery O'Connor is consistently labeled a regional writer, which in the eyes of many critics amounts to a limitation if not a liability. However, if Flannery O'Connor undoubtedly is of the South, she is also in the South as one is in the human condition, to share its greatness and baseness, its joys and sorrows, its aspirations and aberrations. In other words, her work and preoccupations transcend the limits of regionalism to become universal in their scope and appeal: "O'Connor writes about a South that resides in all of us. Her works force us toward parts of our personal and collective histories we thought we had shed long ago." (R. K. …)
C) What made the South ?
This essential question is no less complex than the former. Among the most often quoted "marks of distinctiveness" are to be found :
Geography: but far from forming a homogeneous geographical region, a unit, the South can be divided into 7 regions - too much diversity for geography to be a suitable criterion. Consequently, specialists resorted to another factor :
Climate: the weather is very often said to be the chief element that made the South distinctive. The South has long been noted for its mild winters, long growing seasons, hot summers and heavy rainfall. Climate has doubtless exerted a strong influence on the section, e.g. it slowed the tempo of living and of speech, promoted outdoor life, modified architecture and encouraged the employment of Negroes, etc. However, climate is a necessary but not a sufficient explanation.
Economy: the South used to be an agricultural region but it underwent a radical process of urbanization and industrialization that led to the Americanization of Dixie: economically, the South is gradually aligning itself with the North.
History: the South suffered evils unknown to the nation at large : slavery, poverty, military defeat, all of them un-American experiences: "the South is the region history has happened to" (R. Weaver, RANAM IX, 7)
To sum up → The South is a protean* entity baffling analysis and definition : it can't be explained in terms of geography, climate or history, etc. yet it is all that and something more : "An attitude of mind and a way of behaviour just as much as it is a territory" (Simkins, IX). *having the ability to change continually in appearance or behaviour like the mythological character Proteus.
Nevertheless, if it can be said there are many Souths, the fact remains that there is also one South. That is to say, it is easy to trace throughout the region (roughly delimited by the boundaries of the former Confederate States of America, but shading over into some of the border states, notably Kentucky, also) a fairly definite mental pattern, associated with a fairly definite social patterna complex of established relationships and habits of thought, sentiments, prejudices, standards and values, and associations of ideas which, if it is not common strictly speaking to every group of white people in the South, is still common in one appreciable measure or another, and in some part or another, to all but relatively negligible ones (W. J. Cash, The Mind of the South, 1969).
Taking into account the literary history of the South and the fiction it gave rise to may help us to answer some of the questions we've been dealing with.
D) A Very Short Introduction to Southern Fiction
A rough sketch of the literary history of a "writerly* region", focussing on the landmarks. *Writerly: of or characteristic of a professional author; consciously literary. The Southern literary scene was long dominated by Local-color fiction or Regionalism (a movement that emphasizes the local color or distinctive features of a region or section of the US).
Local-color fiction was concerned with the detailed representation of the setting, dialect, customs, dress and ways of thinking and feeling which are characteristic of a particular region (the West, the Mississippi region, the South, the Midwest and New England). Among local-colorists, four names stand out: Joel Chandler Harris… A period of paramount importance in the emergence of Southern literature was the debate over slavery and abolition just before the Civil War (1861-1865). The controversy over the peculiar institution gave rise to a genuine Southern literature:
The South found itself unable to accept much of the new literature which emanated from the Northern states. It then began half-consciously building up a regional literature, modeled upon English writers, which was also in part a literature of defense […] The South was more or less consciously building up a rival literary tradition. (Hubbell, 133) Another fact for congratulation to the South is, that our people are beginning to write books - to build up a literature of our own. This is an essential prerequisite to the establishment of independence of thought amongst us. (G. Fitzhugh, 338)
The most influential work of those troubled years was the anti-slavery plea, Uncle Tom's Cabin published in 1852 by Harriet Beecher Stowe (1811-1896), the first American best-seller.
The period gave rise to the plantation novel whose archetype is Swallow Barn by John Pendleton Kennedy (1851).
By the end of the "War between Brothers", the South could boast a number of good writers, but one more half-century was needed for Southern literature to come of age and establish a new tradition. The most prominent voice in the post-war period was M. Twain. Importance of The Adventures of Huckleberry Finn (1884) on the development of American prose: in a single step, it made a literary medium of the American language; its liberating effect on American writing was unique, so much so that both W. Faulkner and E. Hemingway made it the fountainhead of all American literature : All modern American literature comes from one book by Mark Twain called Huckleberry Finn. [...] All American writing comes from that. There was nothing before. There has been nothing as good since. (E. Hemingway, Green Hills of Africa, 26). The present-day popularity of Southern fiction can be accounted for by the fact that Americans have long been fascinated with the South as a land of extremes, the most innocent part of America in one respect and the guiltiest in another; innocent, that is, in being rustic or rural (there's in the latter observation an obvious hint of pastoralism: an idealized version of country life→the South seems to have embodied a certain ideal mixture of ruralism and aristocratic sophistication), yet guilty due to the taint of slavery and segregation.
Anyway, what makes the South a distinctive region is that the South was at one time in American history not quite a nation within a nation, but the next thing to it. And it still retains some of the characteristics of that exceptional status. So does the fiction it gave rise to: "The Southern writer is marginal in being of a region whose history interpenetrates American moral history at crucial points". (F. J. Hoffman, The Modern Novel in America).
Importance of historical experience and consciousness in the Southern worldview ; when the South was colonized, it was meant to be a paradise on earth, a place immune from the evils that beset Europe, a sort of blessed Arcadia (a region of Greece which became idealized as the home of pastoral life and poetry), but the tragedy of the South lies in the fact that it "is a region that history has happened to". And afterwards myth took over from history in order to make up for the many disappointments history brought about:
The Old South emerges as an almost idyllic agricultural society of genteel people and aristocratic way of life; now its history is transformed into the story of a fallen order, a ruined time of nobility and heroic achievements that was vanquished and irrevocably lost. In this way the actual facts of the old South have been translated by myth into a schemata of the birth, the flowering and the passing of what others in an earlier era might have called a Golden Age. (J. K. Davis)
E) Southern literature
It is not easy to sum up in one simple formula the main features of Southern fiction ; this is, besides, a controversial question. As a starting-point→ a tentative definition from a study entitled Three Modes of Southern Fiction : Among these characteristics [of Southern fiction] are a sense of evil, a pessimism about man's potential, a tragic sense of life, a deep-rooted sense of the interplay of past and present, a peculiar sensitivity to time as a complex element in narrative art, a sense of place as a dramatic dimension, and a thorough-going belief in the intrinsic value of art as an end in itself, with an attendant Aristotelian concern with forms and techniques (C. Hugh Holman, Three Modes of Southern Fiction) Against this background, the features that are to be emphasized are the following :
-A strong sense of place (with its corollary: loyalty to place) ; Sense of place or the spirit of place might be said to be the presiding genius of Southern fiction. Whereas much modern literature is a literature without place, one that does not identify itself with a specific region, Southern fiction is characterized by its dependence on place and a special quality of atmosphere, a specific idiom, etc. Novelist Thornton Wilder claims, rightly or wrongly, that: "Americans are abstract. They are disconnected. They have a relation but it is to everywhere, to everybody, and to always" (C. Vann Woodward, The Search for Southern Identity, 22). According to him "Americans can find in environment no confirmation of their identity, try as they might." And again: "Americans are disconnected. They are exposed to all place and all time. No place nor group nor movement can say to them: we are waiting for you; it is right for you to be here." Cf.
Also "We don't seem anchored to place [...] Our loyalties are to abstractions and constitutions, not to birthplace or homestead or inherited associations." (C. Vann Woodward)
The insignificance of place, locality, and community for T. Wilder contrasts strikingly with the experience of E. Welty who claims that: "Like a good many other regional writers, I am myself touched off by place. The place where I am and the place I know […] are what set me to writing my stories." To her, "place opens a door in the mind," and she speaks of "the blessing of being located-contained." Consequently, "place, environment, relations, repetitions are the breath of their
[the Southern States'] being."
The Southern novel has always presupposed a strong identification with a place, a participation in its life, a sense of intense involvement in a fixed, defined society (involvement with a limited, bounded universe, South, 24)
Place is also linked to memory; it plays another important rôle as archives (or record) of the history of the community : one of the essential motifs of Southern fiction is the exploration of the link between place and memory and truth. Here a quotation from E. Welty is in order :
The truth in fiction depends for its life on place. Location is at the crossroads of circumstances, the proving ground of "What happened ? Who's here ? Who's coming ?" and that is the heart's field (E. Welty, 118).
Place : it is a picture of what man has done and imagined, it is his visible past result (Welty, 129) Is it the fact that place has a more lasting identity than we have and we unswervingly tend to attach ourselves to identity ? (119)
-A sense of Time
The Southern novelist evinces a peculiar sensitivity to time as a complex element in narrative art (Three Modes of Sn Fiction); he/she shows a deep-rooted sense of the interplay of past and present. Southern fiction is in the words of Allen Tate "a literature conscious of the past in the present" (Ibid., 37) and "Southern novelists are gifted with a kind of historical perspective enabling them to observe the South and its people in time". Cf. W. Faulkner: "To me no man is himself, he is the sum of his past" (171). Concerning the importance of the past and of remembrance, two other quotations from Allen Tate are in order:
After the war the South again knew the world… but with us, entering the world once more meant not the obliteration of the past but a heightened consciousness of it (South, 36)
The Southerners keep reminding us that we are not altogether free agents in the here and now, and that the past is part master" (South, 57)
-A "cancerous religiosity"* *"The South with its cancerous religiosity" (W. Styron, Lie Down in Darkness)
Cf. OC's statement: "I think it is safe to say that while the South is hardly Christ-centered, it is most certainly Christ-haunted" (M&M, 44). Existence of the Bible-Belt : an area of the USA, chiefly in the South, noted for religious fundamentalism.
-A sense of evil, a certain obsession with the problem of guilt (cf. Lillian Smith's opinion: "Guilt was then and is today the biggest crop raised in Dixie") and moral responsibility bound up, of course, with the race issue, the Civil War, etc. :
There is a special guilt in us, a seeking for something had - and lost. It is a consciousness of guilt not fully knowable, or communicable. Southerners are the more lonely and spiritually estranged, I think, because we have lived so long in an artificial social system that we insisted was natural and right and just - when all along we knew it wasn't (McGill)
-Another distinctive feature: the tradition of the folktale and story-telling which is almost as old as the South itself ; I won't expand on this feature and limit myself to a few quotes: I think there's a tradition of story-telling and story-listening in the South that has a good deal to do with our turning to writing as a natural means of pressing whatever it is we've got bubbling around inside us. (S. Foote)
The South is a story-telling section. The Southerner knows he can do more justice to reality by telling a story than he can by discussing problems or proposing abstraction. We live in a complex region and you have to tell stories if you want to be anyway truthful about it. (F. O'Connor) Storytelling achieved its ultimate height just before the agricultural empire was broken down and the South became industrialized. That's where storytelling actually flowered (E. Caldwell)
In the world of Southern fiction, people, places and things seem to be surrounded by a halo of memories and legends waiting to get told. People like to tell stories and this custom paves the way for would-be novelists. Hence too, the importance of Voice: not an exclusively Southern feature, but most Southern novels are remarkable for the spoken or speech quality of their prose/style : For us prose fiction has always been close to the way people talk - more Homeric than Virgilian. It presumes a speaker rather than a writer. It's that vernacular tone that is heard most often in contemporary Southern fiction.
No wonder then that all these factors should result in the fact that: "The Southerner has a great sense of the complexities of human existence" (H. Crews)
-He is endowed with a sense of distinctiveness and prideful difference. That sense stems from the conviction that the South is a section apart from the rest of the United States. The history of the section shows that such a conviction is well-founded, for it comprises many elements that seem to be atypical in American history at large, cf. C. Vann Woodward's opinion :
In that most optimistic of centuries in the most optimistic part of the world [i.e. the USA at large], the South remained basically pessimistic in its social outlook and its moral philosophy. The experience of evil and the experience of tragedy are parts of the Southern heritage that are as difficult to reconcile with the American legend of innocence and social felicity as the experience of poverty and defeat are to reconcile with the legends of abundance and success (The Burden of Southern History, Baton Rouge, LSU, 1974, 21.)
There are still numerous features that might be put forward to account for the distinctiveness or differentness of the South and Southern literature, but this is just a tentative approach. All these points would require qualification but they will do as general guidelines (cf. the bibliography if you wish to go into more detail).
LECTURE SYMBOLIQUE, ALLEGORIQUE ET PARABOLIQUE
Dans Le Livre à venir, le philosophe M. Blanchot déclare que « la lecture symbolique est probablement la pire façon de lire un texte littéraire » (125). On peut souscrire à cet anathème si l'on a du symbole une conception réductrice qui en fait une simple clé, une traduction, alors qu'en réalité c'est un travail (Bellemin-Noël, 66) et qu'en outre, comme nous le verrons, « la symbolique se confond avec la démarche de la culture humaine tout entière ». Qu'entendons-nous par là ? Tout simplement que, selon la belle formule de G. Durand, « L'anthropologie ne commence véritablement que lorsqu'on postule la profondeur dans les "objets" des sciences de l'homme » (Figures mythiques et visages de l'oeuvre). Comme le précise le philosophe J. Brun :
Les véritables symboles ne sont pas des signes de reconnaissance, ce ne sont pas des messagers de la présence, mais bien des messagers de l'Absence et de la Distance. C'est pourquoi ce sont eux qui viennent à nous et non pas nous qui nous portons vers eux comme vers un but que nous aurions plus ou moins consciemment mis devant nous. Les symboles sont les témoins de ce que nous ne sommes pas ; si nous nous mettons à leur écoute, c'est parce qu'ils viennent irriguer nos paroles d'une eau dont nous serons à jamais incapables de faire jaillir la source. (81).
Les symboles nous redonnent aussi cet état d'innocence où, comme l'exprime magnifiquement P. Ricoeur : « Nous entrons dans la symbolique lorsque nous avons notre mort derrière nous et notre enfance devant nous » (Le Conflit des interprétations).
Tout symbole authentique possède trois dimensions concrètes ; il est à la fois :
-"cosmique" (c'est-à-dire puise sa figuration dans le monde bien visible qui nous entoure) ; -"onirique" (c'est-à-dire s'enracine dans les souvenirs, les gestes qui émergent dans nos
FROM WORDS TO THE WORD (i.e. GOD'S WORD) :
FICTION-INTERTEXTUALITY-VARIATIONS ON INITIATION
By way of introduction to OC's fictional universe, I'd like to discuss three statements : the first 2 by the author herself :
All my stories are about the action of grace 7 on a character who is not very willing to support it (M&M, 25) : hence the reference to initiation
We have to have stories in our background. It takes a story to make a story (Ibid., 202) : hence the reference to intertextuality; and the third from a critic, R. Drake, who pointed out that : "Her range was narrow, and perhaps she had only one story to tell. […] But each time she told it, she told it with renewed imagination and cogency" : hence the reference to variations on the same theme.
Those three observations will lead us to focus on the fundamental and interrelated questions or notions - interrelated, that is, in O'C's work - those of fiction-writing, intertextuality and initiation.
I. Function & Aim of fiction according to OC
"No prophet is accepted in his own country" (Luke 4 : 24) "Writing fiction is a moral occupation" (H. Crews) "Writing fiction is primarily a missionary activity" (O'Connor) Fiction with a religious purpose ("My subject in fiction is the action of grace in a territory held largely by the devil" M&M, 118) based on the use of parables in the Bible : "Therefore speak I to them in parables: because they seeing see not; and hearing they hear not, neither do they understand" (Matt. 13 : 13). OC's short-stories = variations on two key parables :
1. "Behold, a sower went forth to sow ; And when he sowed, some seeds fell by the way side, and the fowls came and devoured them up : Some fell upon stony places, where they had not much earth : and forthwith they sprung up, because they had no deepness of earth : And when the sun was up, they were scorched ; and because they had no root, they withered away. And some fell among thorns ; and the thorns sprung up, and choked them : But other fell into good ground, and brought forth fruit, some an hundredfold, some sixtyfold, some thirtyfold. Who hath ears to hear, let him hear. And the disciples came, and said unto him, Why speakest thou unto them in parables ? He answered and said unto them, Because it is given unto you to know the mysteries of the kingdom of heaven, but to them it is not given" (Matt. 13 : 3-11)
"
The kingdom of heaven is likened unto a man which sowed good seed in his field : But while men slept, his enemy came and sowed tares among the wheat, and went his way. But when the blade was sprung up, and brought forth fruit, then appeared the tares also.
So the servants of the householder came and said unto him, Sir, didst not thou sow good seed in thy field ? from whence then hath it tares ? He said unto them, An enemy hath done this. The servants said unto him, Wilt thou then that we go and gather them up ? But he said, Nay ; lest while ye gather up the tares, ye root up also the wheat with them. Let both grow together until the harvest : and in the time of harvest I will say to the reapers, Gather ye together first the tares, and bind them in bundles to burn them: but gather the wheat into my barn." (Matt. 13 : 24-30)
7. The free and unmerited favour of God, as manifested in the salvation of sinners and the bestowal of blessings.
II. Initiation
"Every hierophany is an attempt to reveal the Mystery of the coming together of God and man".
All the narratives by OC are stories of initiation ("The bestowal of grace upon an unwilling or unsuspecting character" : a process, an operation, a pattern corresponding to what is called initiation) → an investigation into the constituents and characteristics of stories of initiation; but first, what is initiation ? Origins of the term : initium : starting off on the way ; inire : enter upon, begin. Initiation as an anthropological term means "the passage from childhood or adolescence to maturity and full membership in adult society" (Marcus, 189), which usually involves some kind of symbolic rite. "The Artificial Nigger" is a good example thereof.
Held to be one of the most ancient of rites, an initiation marks the psychological crossing of a threshold into new territories, knowledge and abilities. The major themes of the initiation are suffering, death and rebirth. The initiate undergoes an ordeal that is symbolic of physically dying, and is symbolically reborn as a new person possessing new knowledge.
In pagan societies, the initiation marks the entrance of the initiate into a closed and traditionally secret society; opens the door to the learning of ritual secrets, magic, and the development and use of psychic powers; marks a spiritual transformation, in which the initiate begins a journey into Self and toward the Divine Force; and marks the beginning of a new religious faith. Many traditional initiations exist so that the spiritual threshold may be crossed in many alternate ways; and, all are valid: the ritual may be formal or informal; may be old or new; may occur as a spontaneous spiritual awakening, or may even happen at a festival.
2. What is a story of initiation? In general, one can say that there is no single precise and universally applicable definition of stories of initiation in literary theory. There are some attempts to build a concise theory of the initiation-theme in literature : several aspects of initiation can be found in literature.
First of all, initiation as a process in literary descriptions denotes the disillusioning process of the discovery of the existence of evil, which is depicted as a confrontation of the innocent protagonist with guilt and atonement and often carries the notion of a shocking experience. This confrontation usually includes a progress in the protagonist's character or marks a step towards self-understanding. Thus, this type describes an episode which leads the protagonist to gaining insight and experience, this experience being generally regarded as an important stage towards maturity.
The second type differs from the first in focusing on the result of the initiatory experience. This includes the protagonist's loss of original innocence and is often compared to the biblical Fall of Man. Furthermore, this approach generally stresses the aspect of duality in the initiation process, i.e. the loss of innocence as a hurtful but necessary experience as well as the profit of gaining an identity.
The next aspect centers on the story of initiation as describing the process of self-discovery and self-realization, which basically means the process of individuation (the passage to maturity).
From that point of view, an initiation story may be said to show its young protagonist experiencing a significant change of knowledge about the world or himself, or a change of character, or of both, and this change must point or lead him towards an adult world. It may or may not contain some form of ritual, but it should give some evidence that the change is at least likely to have permanent effects.
The aspect of movement in stories of initiation : it plays an important role in many stories dealing with the initiation theme. Often, the inner process of initiation, the gaining of experience and insight, is depicted as a physical movement, a journey. This symbolic trip of the protagonist additionally supports the three-part structure which is usually found in initiation stories. The three-part structure of initiation can briefly be described as the three stages of innocence - experience - maturity. The motif of the journey reflects this structure, as the innocent protagonist leaves home (i.e. the secure place of childhood), is confronted with new situations, places and people on his journey and returns home as a 'new man', in a more mature state of mind.
Also to be taken into account is the aspect of effect: stories of initiation may be categorized according to their power and effect. Three types of initiation help to analyze stories dealing with this topic:
-First, some initiations lead only to the threshold of maturity and understanding, but do not definitely cross it. Such stories emphasize the shocking effect of experience, and their protagonists tend to be distinctly young.
-Second, some initiations take their protagonists across a threshold of maturity and understanding but leave them enmeshed in a struggle for certainty. These initiations sometimes involve self-discovery.
-Third, the most decisive initiations carry their protagonists firmly into maturity and understanding, or at least show them decisively embarked toward maturity. These initiations usually center on self-discovery. For convenience's sake, these types may be called tentative, uncompleted, and decisive initiations.
As one can see, the change in the protagonist's state of mind plays an important role for this definition. To analyze the dimension of effect usually also involves a consideration of the aspect of willfulness of the initiatory experience, as voluntary initiation experiences are more likely to have a direct, permanent effect on the protagonist, whereas forced initiations may be rejected, or rather suppressed, so that the effect may not be clearly distinguishable at first. Crucial, however, is the aspect of permanency of effect ("one may demand evidence of permanent effect on the protagonist before ascribing initiation to a story"), as it may prove difficult to provide evidence of this permanency.
To sum up:
Initiation → involves an ontological (dealing with the nature of being) mutation/metanoia.
Characteristics of initiation in O'C
Almost systematically involves a journey or a trip→a journey of enlightenment (cf. "A Good Man is Hard to Find")
The Initiate may be : an adolescent, an old man/woman ; an intellectual, etc. Key figure: initiation always involves an agent → the messenger, the agent of the change (Negro, preacher, stranger, kids, a plaster figure, etc.) → In most of the stories, a visitor or a visit irrevocably alters the home scene and whatever prevailing view had existed. These visitors take various shapes: a one-armed tramp, three juvenile arsonists (visitors/messengers often go in threes), a deranged escaped convict, etc.
Place of initiation: river, woods, staircase. Landscape often fulfills the function of an actant; it isn't just a decor but it exerts an influence on what happens, on the character's fate, etc. (cf. role of the moon and the sun/son) Catalyst of the initiatory experience : violence ("Only when that moment of ultimate violence is reached, i.e. just before death, are people their best selves", UOC, 38) → assumes many forms: a stroke, a fit, a fall, an attack, a bout, a physical assault, etc.
Violence triggers off the change: This notion that grace is healing omits the fact that before it heals, it cuts with the sword Christ said he came to bring (109) "The Word of God is a burning Word to burn you clean" (Th. 320) Participation of evil in initiation to the divine: « I suppose the devil teaches most of the lessons that lead to self-knowledge » (Th, 79) A case in point : "A Good Man is Hard to Find" Necessary to ponder the paradox of blasphemy as the way to salvation (Th. 343)
Paradox
The interweaving of the sacred and the profane, the pure and the impure, sanctity and taint/ corruption : Il résulte que la souillure et la sainteté, même dûment identifiées […] représentent, en face du monde de l'usage commun, les deux pôles d'un domaine redoutable. C'est pourquoi un terme unique les désigne si souvent jusque dans les civilisations les plus avancées. Le mot grec "souillure" signifie aussi "le sacrifice qui efface la souillure". Le terme agios "saint" signifiait en même temps "souillé" à date ancienne, au dire des lexicographes. La distinction est faite plus tard à l'aide de deux mots symétriques agès "pur" et euagès "maudit", dont la composition transparente marque l'ambiguïté du mot originel. Le latin, expiare "expier" s'interprète étymologiquement comme "faire sortir (de soi) l'élément sacré que la souillure contractée avait introduit". (R. Caillois, L'Homme et le sacré, pp. 39-40) Il y a là la révélation d'une intuition fondamentale, masquée par la religion établie, à savoir que sacré et interdit ne font qu'un et que « l'ensemble de la sphère sacrée se compose du pur et de l'impur » (G. Bataille). Cette double valence se retrouve également dans la sexualité. En effet si, en bonne théologie chrétienne, le spirituel s'oppose au charnel, il est des cas où la chair peut représenter une des voies d'accès au divin : Dieu le père, l'impénétrable, l'inconnaissable, nous le portons dans la chair, dans la femme. Elle est la porte par laquelle nous entrons et nous sortons. En elle, nous retournons au Père, mais comme ceux qui assistèrent aveugles et inconscients à la transfiguration (D. H. Lawrence)
Outer vs. inner dimensions
The outward trip also is an inner journey, a descent into oneself → processus de l'introrsum ascendere des mystiques médiévaux : « la montée spirituelle passe par une "enstase", un voyage intérieur qui s'ouvre sur un espace élargi, au terme de la rencontre de ce qui est en nous et de ce qui est hors de nous » (J. Thomas, 85) → « l'homme s'élève en lui-même, en partant de l'extérieur, qui est ténèbres, vers l'intérieur, qui est l'univers des lumières, et de l'intérieur vers le Créateur » (Rûmi, 99)
III. Intertextuality
We have to have stories in our background. It takes a story to make a story (Ibid., 202) By way of transition → point out the kinship between the two notions of initiate and narrator since both have to do with knowledge: the "initiate" means "he who knows"; so does the term "narrator", which comes from the Latin gnarus : he who knows. Narrator bears a certain relationship to the notions of secret/sacred/mystery/mysticism.
A writer doesn't start from scratch → intertextuality
Textuality or textness: the quality or use of language characteristic of written works as opposed to spoken usage.
What is a text ? The term goes back to the root teks, meaning to weave/fabricate. Text means a fabric, a material: not just a gathering of signs on a page. R. Barthes was the originator of this textual and textile conjunction. We know that a text is not a succession of words releasing a single meaning (the message of the Author-God) but a multi-dimensional space in which a variety of writings, none of them original, blend and sometimes clash: "Every text takes shape as a mosaic of citations, every text is the absorption and transformation of other texts" (J. Kristeva).
The text is a tissue of quotations (cf. a tissue of lies)… (N. 122). A literary work can only be read in connection with or against other texts, which provide a grid through which it is read and structured. Hence, R. Barthes's contention that "The I which approaches the text is itself already a plurality of texts".
Intertextuality : refers to the relations obtaining between a given book and other texts which it cites, rewrites, absorbs, prolongs, or generally transforms and in terms of which it is intelligible.
The notion was formulated and developed by J. Kristeva who stated for instance that "books imitate, parody other books".
A reminder of the essential observation that "learning to write may be a part of learning to read [...] that writing comes out of a superior devotion to reading": this, by the way, was a profession of faith by Eudora Welty in The Eye of the Story. As another Southern novelist, S. Foote, put it: "Reading good writers is one's university". What are the literary works OC's fiction is brought into relation with ?
In the case of O'C the fundamental proto-texts are : the Bible, The Pilgrim's Progress, Teilhard de Chardin, other religions (Buddhism/Vedanta), classical mythology, Dante's Divine Comedy. This topic is discussed in the Ellipses volume, so I'll refer you to it.
The Pilgrim's Progress
A famous novel - the most popular after the Bible in the U.K. - published by John Bunyan, a village tinker and preacher, in 1678. Bunyan's book unfolds the universal theme of pilgrimage as a metaphor/image of human life and the human quest for personal salvation. Bunyan describes the road followed by Christian and the mishaps he has to endure to reach the Celestial City: e.g. he passes through the Valley of the Shadow of Death, the Enchanted Ground, the Delectable Mountains, and enters the "Country of Beulah" ("la Terre épouse"), cf. novel :
In this land also the contract between the bride and the bridegroom was renewed : Yea here, as the bridegroom rejoiceth over the bride, so did their god rejoice over them. Here they had no want of corn and wine ; for in this place they met with abundance of what they had sought for in all their pilgrimage. (pp. 195-196) Then Christian crosses the River, etc., and on his way he comes across various people assuming allegoric and symbolic functions e.g. Mr. Worldly-Wiseman, Faithful, Saveall, Legality who is: a very judicious man [...] that has skill to help men off with such burdens as thine are from their shoulders and besides, hath skill to cure those that are somewhat crazed in their wits with their burdens. (p. 50).
Christian is brought to trial in Vanity Fair: Faithful, Christian's fellow-traveller, is martyred, but Christian escapes death. Christian realizes that "there was a way to Hell, even from the gates of heaven, as well as from the City of Destruction" (205).
The Bible
"It is the nature of American scriptures to be vulgarizations of holy texts from which they take their cues..." (Fiedler, 97).
The Bible plays so important a rôle that it may be considered as one of the dramatis personae ; the Book forms a constant counterpoint to OC's narratives. Its influence is to be found in
Names:
Motes → Matthew 7 : 3-5 : "And why seest thou the mote that is in thy brother's eye, but seest not the beam that is in thy own eye ? Or how sayest thou to thy brother : Let me cast the mote out of thy eye ; and behold a beam is in thy own eye ? thou hypocrite, cast out first the beam out of thy own eye ; and then shalt thou see to cast out the mote out of thy brother's eye".
Hazel → Hazael : God has seen → Haze : a reference to a glazed, impaired way of seeing
Enoch : a character from the Old Testament who was taken up to heaven without dying
Asa : king of Judah
Thomas : meaning "twin"
Parker (Christophoros) : moves from the world of the inanimate to the animal kingdom to humans, to religious symbols and deities, and, ultimately, to Christ.
Obadiah : servant of the Lord→the pride of the eagle // Elihue : God is He→a symbolic transformation of Parker to God
Ruth is an ironic inversion of her biblical counterpart
Symbols/Images
Jesus' removing the demonic spirit from the people to the herd of swine which then ran violently down a steep place into the sea (Mark 5 : 13)
The River → (Luke 8 : 32) Jesus drives the demons out of one man called Legion into the herd of swine and then sends the entire herd over the bank to drown in a lake (51)
Themes and notions
"The True country" in "The Displaced Person" → St Raphael: The prayer asks St. Raphael to guide us to the province of joy so that we may not be ignorant of the concerns of the true country (80)
The Rheims-Douai Version of the Bible→ the Kingdom is not to be obtained but by main force, by using violence upon ourselves, by mortification and penance and resisting our evil inclinations… (88)
P. Teilhard de Chardin
P. Teilhard de Chardin's concept of the "Omega Point" as that particular nexus where all vital indicators come to a convergence in God becomes in OC's fiction a moment where a character sees or comes to know the world in a way that possesses a touch of ultimate insight. Teilhard's The Phenomenon of Man was "a scientific expression of what the poet attempts to do : penetrate matter until spirit is revealed in it" (110). Teilhard's concept of the Omega Point, a scientific explanation of human evolution as an ascent towards consciousness that would culminate forwards in some sort of supreme consciousness […] To be human is to be continually evolving toward a point that is simultaneously autonomous, actual, irreversible, and transcendent: "Remain true to yourselves, but move ever upward toward greater consciousness and greater love ! At the summit you will find yourselves united with all those who, from every direction, have made the same ascent. For everything that rises must converge" (111)
Other denominations
Passing references in "The Enduring Chill" to Buddhism & Vedanta. In Buddhism, the Bodhisattva is an enlightened being who has arrived at perfect knowledge and acts as a guide for others toward nirvana, the attainment of disinterested wisdom and compassion.
Vedanta is a Hindu philosophy that affirms the identity of the individual human soul, atman, with Brahman, the holy power that is the source and sustainer of the universe.
Note that in the short-story, it is a cow, a sacred animal to the Hindu, which is the source of Asbury's undulant fever.
Classical mythology
Peacock → "Io"/"Argus" cf. Who's who in the ancient world→ Short-story : "Greenleaf"
IV. The play upon sameness and difference
« L'écrivain est un expérimentateur public : il varie ce qu'il recommence ; obstiné et infidèle, il ne connaît qu'un art : celui du thème et des variations. » (R. Barthes, Essais critiques, 10) → Variations on the same pattern; hence the keys to the interpretation of any given narrative will serve for all the others: "All my stories are about the action of grace on a character who is not very willing to support it" (M&M, 25).
REALISM + "AN ADDED DIMENSION"
Even if the author emphatically stated that "[A] writer is initially set going by literature more than by life" (M&M, 45), life, in the sense of "the texture of existence that surrounds one" is far from being a negligible factor in the process of literary creation ; on the contrary, it plays a fundamental role as witness the following quotation which, in a way, offsets the former :
There are two qualities that make fiction. One is the sense of mystery and the other is the sense of manners. You get the manners from the texture of existence that surrounds you. The great advantage of being a Southern writer is that we don't have to go anywhere to look for manners ; bad or good, we've got them in abundance. We in the South live in a society that is rich in contradiction, rich in irony, rich in contrast, and particularly rich in its speech. (MM, 103) It is fundamentally a question of degree : the writer […] is initially inspired less by life than by the work of his predecessors (MM, 208) OC's contention → "The natural world contains the supernatural" (MM, 175) → « This means for the novelist that if he is going to show the supernatural taking place, he has nowhere to do it except on the literal level of natural events » (176). In other words :
What-is [ce qui est, le réel] is all he has to do with ; the concrete is his medium ; and he will realize eventually that fiction can transcend its limitations only by staying within them (146)
The artist penetrates the concrete world in order to find at its depths the image of its source, the image of ultimate reality (157)
Reality
The basis of fiction, the point of departure of the novelist :
The first and most obvious characteristic of fiction is that it deals with reality through what can be seen, heard, smelt, tasted, and touched (91)
Writing fiction is [not] an escape from reality. It is a plunge into reality and it's very shocking to the system. If the novelist is not sustained by a hope of money, then he must be sustained by a hope of salvation (78)
Realism
"All novelists are fundamentally seekers and describers of the real, but the realism of each novelist will depend on his view of the ultimate reaches of reality" (40) "The Southern writer is forced from all sides to make his gaze extend beyond the surface, beyond mere problems, until it touches that realm which is the concern of prophets and poets. When Hawthorne said that he wrote romances, he was was attempting, in effect, to keep for fiction some of its freedom from social determinisms, and to steer it in the direction of poetry". ( 46) "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (179)
What is Realism ? (Realism/regionalism, and naturalism) Cf. S. Crane → "that misunderstood and abused word, realism." Realism is a special literary manner (actually a set of conventions and devices) aiming at giving:
The illusion that it reflects life as it seems to the common reader. [...] The realist, in other words, is deliberately selective in his material and prefers the average, the commonplace, and the everyday over the rarer aspects of the contemporary scene.
One cannot read far into OC's fiction without discovering numerous realistic touches e.g. in the depiction of life in the South: way of life, manner of speech (with its numerous elisions and colloquial or even corrupt expressions), turns of phrase, customs, habits, etc. So OC's work evinces the extended and massive specification of detail with which the realist seeks to impose upon the reader an illusion of life. Such an "effet de réalité" is heightened by the consistent use of details or features pertaining to a specific section of the US, which resulted in OC being labelled a regional writer, i.e. one whose work is anchored, rooted in the fabric, the actualities or the concrete particulars of life in a specific area or section of the USA, i.e. the South. OC clearly objects to those writers who "feel that the first thing they must do in order to write well is to shake off the clutch of the region […] the writer must wrestle with it [the image of the South], like Jacob with the angel, until he has extracted a blessing" (M&M, 197-198)
Naturalism
What about Naturalism? It is "a mode of fiction that was developed by a school of writers in accordance with a special philosophical thesis. This thesis, a product of post-Darwinian biology in the mid-nineteenth century, held that man belongs entirely in the order of nature and does not have a soul or any other connection with a religious or spiritual world beyond nature ; that man is therefore merely a higher-order animal whose character and fortunes are determined by two kinds of natural forces, heredity and environment." (Abrams).
OC rejected naturalism since, from her point of view, naturalism ran counter to one of her most essential tenets: I don't think any genuine novelist is interested in writing about a world of people who are strictly determined. Even if he writes about characters who are mostly unfree, it is the sudden free action, the open possibility, which he knows is the only thing capable of illuminating the picture and giving it life (The Added Dimension, 229) True, OC occasionally makes use of animal imagery but she ignores the last two factors mentioned in the above definition. The only naturalistic element studied by OC is aggressive behaviour and a certain form of primitiveness, but her utilization of such material is quite different from Zola's: OC's naturalism is more descriptive than illustrative and conjures up a moral landscape where the preternatural prevails.
A consequence of OC's choice of realism →Importance of the senses and sensory experience : "Fiction begins where human knowledge begins-with the senses" (MM, 42)
The novelist begins his work where human knowledge begins - with the senses ; he works through the limitations of matter, and unless he is writing fantasy, he has to stay within the concrete possibilities of his culture. He is bound by his particular past and those institutions and traditions that this past has left to his society. The Judaeo-Christian tradition has formed us in the west ; we are bound to it by ties which may often be invisible, but which are there nevertheless. (155) BUT what distinguishes OC from other realists is that for her "The natural world contains the supernatural" (M&M 175). The aim of OC's particular realism is to lead the reader to the perception of a second, superior plane or level of reality: "the supernatural, what I called the added dimension." 2nd characteristic: OC's originality lies in the fact that she held realism to be a matter of seeing, a question of vision. The novelist, she wrote, "must be true to himself and to things as he sees them." → Vision or rather anagogical vision is what throws a bridge over the gap between the natural and the supernatural; it is the link, the connection between the two universes.
Anagogical Vision
Starting-point→ OC's statement :
The kind of vision the fiction writer needs to have, or to develop, in order to increase the meaning of his story is called anagogical vision, and that is the kind of vision that is able to see different levels of reality in one image or one situation. (M&M, 72)
Three preliminary observations or reminders
Visible←→Invisible/Mystery « Le monde sensible tout entier est, pour ainsi dire, un livre écrit par le doigt de Dieu… Toutes les choses visibles, présentées à nous visiblement pour une instruction symbolique - c'est-à-dire figurée -, sont proposées en tant que déclaration et signification des invisibles. » → God as first and ultimate author. We find an echo of this worldview and faith in OC's statement : "What [the writer] sees on the surface will be of interest to him only as he can go through it into an experience of mystery itself" (M&M, 41)
Judgment ←→Vision
For the novelist, judgment is implicit in the act of seeing. His vision cannot be detached from his moral sense (M&M, 130)
In the greatest fiction, the writer's moral sense coincides with his dramatic sense, and I see no way for it to do this unless his moral judgment is part of the very act of seeing (Ibid., 31)
The question of anagogical vision is connected with biblical interpretation
The epithet anagogical refers to one of the four traditional modes of interpretation of the Holy Bible (Exegesis: critical explanation or interpretation vs. Hermeneutics: interpretation) i.e.: 1. literal; 2. typological; 3. tropological; 4. anagogical.
literal: applied to taking the words of a text in their natural and customary meaning ;
Prophecy ←→Vision
In the novelist's case, prophecy is a matter of seeing near things with their extensions of meaning and thus of seeing far things close up. The prophet is a realist of distances, and it is this kind of realism that you find in the best modern instances of the grotesque (44) "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (179)
Vision/Anagogical vision (a few statements by way of illustration)
"The novelist must be characterized not by his function but by his vision" (47) "For the writer of fiction, everything has its testing point in the eye, and the eye is an organ that eventually involves the whole personality and as much of the world that can be got into it" (91) "Anything that helps you to see, anything that makes you look. The writer should never be ashamed of staring. There is nothing that doesn't require his attention. […] The writer's business is to contemplate experience, not to be merged in it". (84) Conrad said that his aim as a fiction writer was to render the highest possible justice to the visible universe […] because it suggested and invisible one. « My task which I am trying to achieve is, by the power of the written word, to make you hear, to make you feel-it is, before all, to make you see. That-and no more, and it is everything ». (80) "He's looking for one image that will connect or combine or embody two points ; one is a point in the concrete, and the other is a point not visible to the naked eye, but believed in by him firmly, just as real to him, really, as the one that everybody sees. It's not necessary to point out that the look of this fiction is going to be wild, that it is almost of necessity going to be violent and comic, because of the discrepancies that it seeks to combine" (42) "Now learning to see is the basis for leaning all the arts except music... Fiction writing is very seldom a matter of saying things; it is a matter of showing things (93) [telling vs. showing] "The longer you look at one object, the more of the world you see in it ; and it's well to remember that the serious fiction writer always writes about the whole world, no matter how limited his particular scene" (77) Even when OC seems to be concerned with the relative, the world around us, daily life, and the little disturbances of man, it is always: "A view taken in the light of the absolute" (134).
The anagogical level is that level in which the reader becomes aware that the surface antics and the bizarre twists of the lunatic fringe are much more deeply intertwined with a mystery that is of eternal consequence (UOC, 173)
Indexes
Simple objects endowed with symbolical meaning, hence beware of hats, spectacles, wooden objects, stairwell, things or people going in threes, etc. : the woods = a Christ figure, they appear to walk on water; glasses removed→outward physical sight is replaced by inward spiritual clarity;
The 3 arsonists have their biblical counterparts in Daniel; Hulga's wooden leg → woodenness: impervious to the action of grace;
A car may be a means of salvation, a vehicle for the Spirit: cf. Tom Shiftlet, the preacher in "The Life You Save May Be Your Own": "Lady, a man is divided into two parts, body and spirit... The body, lady, is like a house: it don't go anywhere; but the spirit, lady, is like an automobile, always on the move, always..." Car = pulpit, coffin, means of escape… So be on the lookout for all details, however insignificant or trivial they may seem, because they often trigger off a symbolical reading of the text in which they appear → "Detail has to be controlled by some overall purpose" (M&M, 93): in OC's world the most concrete or material or trivial thing, detail may point to or give access to the most abstract and immaterial dimension, spirituality, the divine.
DISTORTION(S): GOTHIC, GROTESQUE, PROSTHETIC GROTESQUE, FREAKS (= AMERICAN GARGOYLES)
Point of departure → "The prophet is a realist of distances, and it is this kind of realism that goes into great novels. It is the realism which does not hesitate to distort appearances in order to show a hidden truth" (MM, 179). According to OC, distortion is a key device in literary creation, as witness the following quotations from Mystery and Manners : "The problem for such a novelist will be to know how far he can distort without destroying" "His way will much more obviously be the way of distortion" (42) "The truth is not distorted here, but rather, a certain distortion is used to get at the truth" (97) → Why is it so ? 1° Distortion as a strategy : the grotesque. The term originally designated a style of ornament found in Roman ruins and representing motifs in which the human, the animal and the vegetable kingdoms intermingled. Later on, the term was carried over into literature, but the term has taken on specific connotations in American fiction; it became popular thanks to S. Anderson's work, Winesburg, Ohio, in which "freakiness" also means an attitude to truth, a crippling appropriation of truth, as S. Anderson pointed out :
It was the truths that made the people grotesques. The old man had quite an elaborate theory concerning the matter. It was his notion that the moment one of the people took one of the truths to himself, called it his truth, and tried to live his life by it, he became a grotesque and the truth he embraced became a falsehood (S. Anderson, Winesburg, Ohio)
The grotesques are characterized by various types of psychic unfulfilment or limitation owing in part to the failure of their environment to provide them with opportunities for a rich variety of experience and in part to their own inability or reluctance to accept or understand the facts of isolation and loneliness. The grotesques have become isolated from others and thus closed off from the full range of human experience; they are also the socially defeated, human fragments... (Cf. W. Styron in Lie Down in Darkness: "Didn't that show you that the wages of sin is not death, but isolation?"). Cf. the connection between the gothic and the grotesque in the following excerpt:
If, as has been suggested, the tendency of works in the [Gothic] tradition has been not to portray with mimetic fidelity the manners and social surface of everyday life but, rather, to uncover at the heart of reality a sense of mystery, then the grotesque figure becomes the Ulysses of this terra incognita. He is a figure who is in some way distorted from the shape of normalitywhether by a physical deformity (Ahab) or by a consuming intellectual (Usher), metaphysical (Pierre), moral (Ethan Brand, the veiled minister), or emotional (Bartleby) passion; and his discovery often takes a violent shapedestructive of himself or of others" (M. Orvell, Invisible Parade: The Fiction of Flannery O'Connor)
The grotesque paves the way for the realization of "that disquieting strangeness apt to arise at every turn out of the most intimately familiar, and through which our everyday sense of reality is made to yield to the troubling awareness of the world's otherness" (A. Bleikasten, "Writing on the flesh") "The grotesque is a literature of extreme situation, and indeed mayhem, chaos, and violence seem to predominate in the genre" (G. Muller) Why are there so many freaks, grotesques or handicapped people in OC's fiction ?
For one thing, they seem to be a feature of southern country life cf. H. Crews→ The South as the country of nine-fingered people:
Nearly everybody I knew had something missing, a finger cut off, a toe split, an ear half-chewed away, an eye clouded with blindness from a glancing fence staple. And if they didn't have something missing, they were carrying scars from barbed wire, or knives, or fishhooks. But the people in the catalogue [the Sears, Roebuck mailorder catalogue] had no such hurts. They were not only whole, had all their arms and legs and toes and eyes on their unscarred bodies, but they were also beautiful. Their legs were straight and their heads were not bald and on their faces were looks of happiness, even joy, looks that I never saw much of in the faces of the people around me.
Young as I was, though, I had known for a long time that it was all a lie. I knew that under those fancy clothes there had to be scars, there had to be swellings and boils of one kind or another because there was no other way to live in the world (H. Crews, A Childhood, 54) But this sociological factor is far from being the only reason, as two key statements by OC will show :
To be able to recognize a freak, you have to have some conception of the whole man, and in the South the general conception of man is still, in the main, theological. (44) "The freak in modern fiction is usually disturbing to us because he keeps us from forgetting that we share in his state" (MM, 133)
Freaks or partial people (as W. Schaffer, a critic, put it → "partial people seeking spiritual completion point up the sorry state of the human condition. A kind of touchstone of our human condition") :
We can say we are normal because a psychological, sexual, or even spiritual abnormality can, with a little luck, be safely hidden from the rest of the world (Crews, 105). Freaks were born with their traumas. They've already passed their test in life. They're aristocrats (H. Crews, 87)
The freak, through acceptance, can be viewed not as the deviation, the perversion of humanity, but the ideal (107) We all eventually come to our trauma in life, nobody escapes this. A freak is born with his trauma (113) « Son humanité ne fait pas de doute et pourtant il déroge à l'idée habituelle de l'humain » (a French critic, Th. 114)
Grotesque
The same holds true of the grotesque i.e. a reminder of human imperfection. OC uses grotesque characters to usher in the mysterious and the unexpected :
Their [grotesque characters'] fictional qualities lean away from typical social patterns, toward mystery and the unexpected (40)
The Communion of Saints : a communion created upon human imperfection, created from what we make of our grotesque state (228)
Prosthetic grotesque (le grotesque prothétique)
A variant : prosthesis : an artificial body part such as a leg, an arm, a heart or breast. "The horror of prosthesis (which is more than an object, unassimilable either to other objects or to the body itself )" (Crews, 171). A case in point: Hulga's wooden leg.
According to Russian critic, Bakhtine : « Le grotesque s'intéresse à tout ce qui sort, fait saillie, dépasse du corps, tout ce qui cherche à lui échapper » (Crews, 122)
A possible conclusion ?
« Explorer le grotesque c'est explorer le corps » (Crews 118) → in the same way as meaning seems to be body-centered, grounded in the tactile and the tangible, so is salvation a process involving the body → Transcendence is in physicality (A. Di Renzo). OC's fiction is « un univers où le spirituel ne peut être atteint qu'à travers le corps, la matière, le sensoriel » (Thèse sur Crews, 31).
Université de Perpignan-Via Domitia FLANNERY O'CONNOR'S COMPLETE STORIES Three quotations from F. O'Connor to give the tone of this reading of The Complete Stories:
Fitzgerald. (Wright Morris in The Territory Ahead (in Reality and Myth, 316) So, one has to take into account what French philosopher, G. Durand calls « les puissances de transfiguration de l'écriture 2 ». Literature may well express reality, but it also creates a form of reality that does not exist beside, outside, or before the text itself, but in and through the text. Literature does not re-create the world; it brings a new world into being. It is « la nécessité du récit qui secrète son paysage [...] l'oeuvre littéraire crée son espace, sa région, son paysage nourricier » (G. Durand, 393). The most extreme inference one can deduce from such a premiss is that Uncle Tom's Cabin, Sartoris and Gone With the Wind, etc., conjured up a referential, fictitious, legendary, and mythical South, a territory of the imagination.
F. O'Connor set 3. M. Gresset, préface à P. Carmignani, Les Portes du Delta : Introduction à la fiction sudiste et à l'oeuvre romanesque de Shelby Foote, Perpignan, PUP, 1994, 6-7.
Johansen
This being said, let's now deal with the question of the setting of OC's fiction viz. the South. If you attended last year's course on Jordan County, you must know by now how difficult it is to answer the simple question "What is the South?", how difficult it is to merely say how many States comprise the section, since their number ranges from 11 to 17 (the 11 States of the Confederacy; the 15 Slave states, i.e. where slavery was legal; the 17 States below the Mason-Dixon line*). *Mason-Dixon line = a line of demarcation between Pennsylvania and Maryland deriving its name from those of the two topographers who determined that border, hence the nickname Dixie(-land) for the South.
Joel Chandler Harris (1848-1908), who recorded Negro folklore in Uncle Remus: His Songs and Sayings (1880); George Washington Cable (1844-1925), a native of New Orleans, was the depictor of the Creole civilization; Thomas Nelson Page (1832-1922); Kate Chopin (1851-1904), a woman writer of French descent who dealt with Louisiana in The Awakening (1899).
The 2nd crucial period in the formation of Southern literature, i.e. the two decades between 1925 and 1945, was called "The Southern Renascence". It was the most extraordinary literary development of 20th century America, a regional development comparable to the flowering of New England literature one hundred years earlier. The inception of that impressive cultural phenomenon in the South coincided with W. Faulkner's arrival on the literary scene. The South that had the reputation of being an intellectual and literary desert produced in a short span of time an exceptional crop of good writers. Stress the role of female voices in that literary chorus:
- Katherine Anne Porter: a very successful short-story writer: Flowering Judas (1930), Pale Horse, Pale Rider (1939). Her only published novel was Ship of Fools (1962).
- Eudora Welty, whose fame rests mainly on her collections of short stories, also wrote of the Mississippi like Faulkner, but her picture of Southern life is removed from high tragedy; it is the day-to-day existence of people in small towns and rural areas. Delta Wedding (1946), The Ponder Heart (1954), Losing Battles (1970).
- Carson McCullers (1917-1967), another remarkable woman of letters: The Heart Is a Lonely Hunter (1940), The Member of the Wedding (1946), The Ballad of the Sad Café (1951).
- Flannery O'Connor (1925-1964), who found in her native South, in American fiction of the 19th century, and her conservative Catholic Christianity the three sources of all her work: Wise Blood (1952), A Good Man Is Hard to Find (1955), The Violent Bear It Away (1960).
- Shirley Ann Grau: The Hard Blue Sky (1958), The Keepers of the House (1964), The Condor Passes (1971).
- Margaret Mitchell's bestseller, Gone With the Wind (1936), etc.
- Elizabeth Spencer: The Light in the Piazza.
The Southern tradition is carried on nowadays by an impressive array of writers: the 4 Williams (William Faulkner; William Styron; William Goyen; William Humphrey); Shelby Foote; Walker Percy; Fred Chappell; Truman Capote and an impressive number of new voices (C. McCarthy, Robert Olen Butler, Madison Smartt Bell, etc.)
de l'oeuvre, 60), profondeur que traduit précisément le symbolisme.Chez les Grecs, le symbole était un morceau de bois ou un osselet qui avait été coupé en deux et dont deux familles amies conservaient chacune une moitié en la transmettant à leurs descendants. Lorsque, plus tard, ceux-ci rapprochaient les deux fractions complémentaires et parvenaient à reconstituer l'unité brisée dont elles étaient issues, ils redécouvraient ainsi une unité perdue mais retrouvée. (J. Brun, L'Homme et le langage, 81). Ainsi :En grec (sumbolon) comme en hébreu (mashal) ou en allemand (Sinnbild), le terme qui signifie symbole implique toujours le rassemblement de deux moitiés : signe et signifié. Le symbole est une représentation qui fait apparaître un sens secret, il est l'épiphanie d'un mystère. (L'Imagination symbolique, 13).
figuré, c'est-à-dire n'est qu'un symbole restreint. La langue ne ferait donc que préciser le langage symbolique et ce, jusqu'au sens propre. En d'autres termes, la poésie serait première et non la prose utilitaire, le langage-outil, profondément lié à l'apparition parallèle de l'homme-outil...). Mais l'autre moitié du symbole, « cette part d'invisible et d'indicible qui en fait un monde de représentations indirectes, de signes allégoriques à jamais inadéquats, constitue une espèce logique bien à part » . Les deux termes du symbole sont infiniment ouverts. Précisons enfin que le domaine de prédilection du symbolisme, c'est le non-sensible sous toutes ses formes : inconscient, métaphysique, surnaturel et surréel, bref ces choses absentes ou impossibles à percevoir. Dernier point, capital, la fonction symbolique est dans l'homme le lieu de passage, de réunion des contraires : le symbole dans son essence et presque dans son étymologie est "unificateur de paires d'opposés" (Imag. symb. 68). De nombreux spécialistes ont essayé de mettre à jour ce qu'on pourrait appeler le soubassement de la faculté symbolique (du symbolisme imaginaire*) qui habite l'homme et proposé divers systèmes de classification des symboles à partir de critères ou de principes tenus pour déterminants ; G. Bachelard, par exemple, adoptera comme axiomes classificateurs, les quatre éléments -Air, Eau, Feu, Terre -les "hormones de l'imagination" ou catégories motivantes des symboles. G. Dumézil s'appuiera sur des données d'ordre social, à savoir que les systèmes de représentations mythiques dépendent dans les sociétés indo-européennes d'une tripartion fonc-tionnelle : la subdivision en trois castes ou ordres : sacerdotal, guerrier et producteur qui déterminerait tout le système de représentations et motiverait le symbolisme laïc aussi bien que religieux. A. Piganiol a, lui, adopté une bipartition (constellations rituelles pastorales et agricoles) recoupant l'opposition archétypale entre le pâtre Abel et le laboureur Caïn : certaines peuplades pastorales élèvent des autels, rendent un culte au feu mâle, au soleil, à l'oiseau ou au ciel et tendent au
Expérience d'ordre existentiel qui comporte généralement une triple révélation : celle du sacré, celle de la mort, et celle de la sexualité » (Thèse sur O'Connor, 14). R. Guénon distingue l'initiation virtuelle qui signifie une entrée ou un commencement dans la voie, au sens du latin initium, de l'initiation effective, qui correspond à suivre la voie, cheminer véritablement dans la voie, ce qui est le fait d'un petit nombre d'adeptes alors que beaucoup restent sur le seuil (Aperçus sur l'initiation, 198) « L'initiation équivaut à la maturation spirituelle de l'individu : l'initié, celui qui a connu les mystères, est celui qui sait » (M. Eliade, 15) Sacré→secernere : mettre à part→parenté des termes muthos et mustêrion qui présentent la même racine mu (bouche fermée), muô : se taire et mueô : initier.
-
typology: the study of symbolic representation esp. of the origin and meaning of Scripture types (a type: that by which sth is symbolized or figured [symbol, emblem] in Theology a person, object or event of Old Testament history prefiguring some person or thing revealed in the new dispensation ; 8. Hugues de Saint-Victor (Eco, Sém et phil., 162) -tropology: (a speaking by tropes); a moral discourse; a secondary sense or interpretation of Scripture relating to moral. Tropological: an interpretation of Scripture applied to conduct or mo-rals→ sens moral ou psychique anagogical (ana : up in place or time) ← anagoge: spiritual elevation esp. to understand mysteries → Anagogy: a spiritual or mystical interpretation. Anagogical: of words: mystical, spiritual, allegorical. In French sometimes called « sens mystique ou pneumatique ».
Blood, a novel 1952 A Good Man Is Hard to Find, a collection of short-stories, 1955 The Violent Bear It Away, a novel, 1960 Everything That Rises Must Converge, a collection of short-stories published posthumously in 1965 Mystery and Manners, occasional prose, 1969 The Complete Stories, 1971 The Habit of Being, collected letters, 1979 The Presence of Grace and Other Book Reviews by Flannery O'Connor, 1983 F. O'Connor: Collected Works, 1988.
« When you have to assume that your audience does not hold the same beliefs you do, then you have to make your vision apparent by shock-to the hard of hearing you shout, and for the almost-blind you draw large and startling figures » (M&M, 34).
OC quotes a very convincing example on p. 162 : "When I write a novel in which the central action is a baptism, I am very well aware that for a majority of my readers, baptism is a meaningless rite, and so in my novel I have to see that this baptism carries enough awe and mystery to jar the reader into some kind of emotional recognition of its significance. To this end I have to bend the whole novel […] Distortion in this case is the instrument" (162)
2° Physical and moral distortion
Distortion isn't just a narrative or stylistic device serving a pedagogical purpose, and another remote avatar or embodiment of it is to be found in the striking number of physically abnormal characters, i.e. freaks, cripples, handicapped persons peopling OC's fiction, which has been described as "an insane world peopled by monsters and submen" (UOC, 15). She accounted for it by stating, among other things, that:
"My own feeling is that writers who see by the light of their Christian faith will have, in these times, the sharpest eye for the grotesque, for the perverse, and for the unacceptable" (33)
Rôle of mutilation and physical imperfection
In OC's world, a man physically bereft always indicates a corresponding spiritual deficiency. Mutilations and physical imperfections may serve as a clue to or index of a character's function as initiator or initiate; may be a sign of election marking the character as one of the knowing few. Cf. G. Durand: « Le sacrifice oblatif de l'oeil est surdétermination de la vision en voyance ». So, we have our work cut out → dealing with all types of distortions made use of in OC's fiction, which will lead us to an examination of the cognate notions listed above in the title. Before going into details, some general information:
Freak (O.E. frician, 'to dance'?): sudden change of fortune, capricious notion; product of sportive fancy → monstrous individual → Southern themes of alienation, degeneracy, mutilation, dehumanization → staples of Southern fiction which resulted in the coining of the label of "The School of Southern Degeneracy".
Gothic → the Gothic novel: that type of fiction made plentiful use of all the trappings of the medieval period: gloomy castles, ghosts, mysterious disappearances, and other sensational and supernatural occurrences. Later on, the term denoted a type of fiction which does not resort to the medieval setting but develops a brooding atmosphere of gloom or terror and often deals with aberrant psychological states. OC's work is a convincing illustration of Poe's dictum that "The Gothic is not of Germany but of the soul".
Grotesque: Etymologically, grotesque comes from the Greek "kryptos" meaning "hidden, secret". In the late XVth century, grotesque referred to those ornamental and decorative elements
Bear in mind that God, the Spirit, underwent the mystery of the incarnation (the embodiment of God the Son in human flesh as Jesus Christ); the body houses the soul, the Spirit, and as such partakes in the Resurrection of the Flesh.
As to monsters: « Le monstre n'obéit pas à la loi du genre ; il est au sens propre, dégénéré » (D. Hollier). Let's say that the monster is both degenerate and a reminder of what constitutes the "genus" of man, mankind, i.e. our fallen state, our incompleteness and corresponding yearning for wholeness. |
01762224 | en | [
"spi.mat",
"spi.meca",
"spi.meca.msmeca",
"spi.meca.mema",
"spi.meca.solid"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01762224/file/LEM3_FATIGUE_2018_MERAGHNI.pdf | Yves Chemisky
email: yves.chemisky@ensam.eu
Darren J Hartl
Fodil Meraghni
Three-dimensional constitutive model for structural and functional fatigue of shape memory alloy actuators
A three-dimensional constitutive model is developed that describes the behavior of shape memory alloy actuators undergoing a large number of cycles leading to the development of internal damage and eventual catastrophic failure. Physical mechanisms such as transformation strain generation and recovery, transformation-induced plasticity, and fatigue damage associated with martensitic phase transformation occurring during cyclic loading are all considered within a thermodynamically consistent framework. Fatigue damage in particular is described utilizing a continuum theory of damage. The total damage growth rate has been formulated as a function of the current stress state and the rate of martensitic transformation such that the magnitude of recoverable transformation strain and the complete or partial nature of the transformation cycles impact the total cyclic life as per experimental observations. Simulation results from the model developed are compared to uniaxial actuation fatigue tests at different applied stress levels. It is shown that both lifetime and the evolution of irrecoverable strain are accurately predicted by the developed model.
Introduction
Shape memory alloys (SMAs) are metals that have the ability to generate and recover substantial deformation during a thermomechanical cycle. The physical mechanism that drives the shape recovery in the materials is a martensitic phase transformation that results from thermal and/or mechanical inputs, often without the consequence of significant plastic strain generation during formation and recovery of martensitic variants. This unique ability has led to the development of devices for aerospace and medical applications [START_REF] Hartl | Aerospace applications of shape memory alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF][START_REF] Lester | Review and perspectives: shape memory alloy composite systems[END_REF]. The design of such devices has required the development of constitutive models to predict their thermomechanical behavior.
A comprehensive review of SMA constitutive models can be found in works by [START_REF] Birman | Review of Mechanics of Shape Memory Alloy Structures[END_REF], [START_REF] Patoor | Shape Memory Alloys, {P}art {I}: {G}eneral Properties and Modeling of Single Crystals[END_REF], [START_REF] Lagoudas | Shape Memory Alloys -Part {II}: Modeling of polycrystals[END_REF], [START_REF] Paiva | An Overview of Constitutive Models for Shape Memory Alloys[END_REF], and [START_REF] Cisse | A review of constitutive models and modeling techniques for shape memory alloys[END_REF].
Early models describe the behavior of conventional SMAs without considering irrecoverable strains and damage, which is sufficient for the design of devices where operating temperatures, maximum stress levels, and number of actuation cycles are all relatively low. To expand the capabilities of such models, the evolution of transformation-induced plasticity was first considered for conventional SMAs by Bo and Lagoudas (1999b) and then [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF]; these models allow calculations of accumulated irrecoverable strains caused by cycling. The coupling between phase transformation and plasticity at higher stresses has been considered in the literature for the simulation of shape memory alloy bodies under high loads at low temperatures compared to their melting points [START_REF] Hartl | Constitutive modeling and structural analysis considering simultaneous phase transformation and plastic yield in shape memory alloys[END_REF][START_REF] Zaki | An extension of the ZM model for shape memory alloys accounting for plastic deformation[END_REF][START_REF] Khalil | A constitutive model for Fe-based shape memory alloy considering martensitic transformation and plastic sliding coupling: Application to a finite element structural analysis[END_REF]. A model accounting for the effect of retained martensite (martensite pinned by dislocations) has been developed by [START_REF] Saint-Sulpice | A 3D super-elastic model for shape memory alloys taking into account progressive strain under cyclic loadings[END_REF]. To predict the influence of irrecoverable strains in high-temperature SMAs (HTSMAs) where viscoplastic creep is observed, a one-dimensional model accounting for the coupling between phase transformation and viscoplasticity has been developed by Lagoudas et al. (2009a); a three-dimensional extension of this model was developed and implemented via finite element analyses (FEA) by Hartl et al. (2010b), and the cyclic evolution of irrecoverable strains accounting for combined viscoplastic, retained martensite, and TRIP effects was later implemented by [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. The evolution of the pseudoelastic response under low-cycle fatigue of SMAs has also been investigated recently [START_REF] Zhang | Experimental and theoretical investigation of the frequency effect on low cycle fatigue of shape memory alloys[END_REF], where a strain-energy-based fatigue model was proposed and compared against experiments.
These past efforts focused on the prediction of thermomechanical responses for only a small number of cycles (e.g., up to response stabilization). However, active material actuators are often subjected to a large number of repeated cycles [START_REF] Van Humbeeck | Non-medical applications of shape memory alloys[END_REF][START_REF] Jani | A review of shape memory alloy research, applications and opportunities[END_REF], which leads to thermally-induced fatigue in the case of SMAs [START_REF] Lagoudas | Thermomechanical Transformation Fatigue of {SMA} Actuators[END_REF][START_REF] Bertacchini | Thermomechanical transformation fatigue of TiNiCu SMA actuators under a corrosive environment -Part I: Experimental results[END_REF]. During the lifetime of an SMA actuator, the influence of two different classes of fatigue must be considered: (i) structural fatigue is the phenomenon that leads towards catastrophic failure of components, while (ii) functional fatigue describes permanent geometric changes to the detriment of SMA component performance and is associated with the development of irrecoverable strain [START_REF] Eggeler | Structural and functional fatigue of NiTi shape memory alloys[END_REF]. The prediction of functional fatigue evolution allows for calculation of changes expected in a given actuator over its lifetime, while the prediction of structural fatigue evolution allows for determination of the actuator lifetime itself.
While the prediction of functional fatigue relies on the simulation of irrecoverable strains upon cycling (i.e., so-called transformation-induced plasticity (TRIP)) (Bo and Lagoudas, 1999b; [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF]), catastrophic structural fatigue is associated with the development of micro-cracks during transformation. Most SMAs are herein taken to be sufficiently similar to hardening metal materials, so as to apply the theoretical modeling of structural fatigue via thermodynamic approaches developed in recent years [START_REF] Khandelwal | Models for Shape Memory Alloy Behavior: An overview of modeling approaches[END_REF].
Continuum damage mechanics (CDM) has been extensively utilized to predict the fatigue lifetime of metallic materials and structures since its development and integration within the framework of thermodynamics of irreversible processes [START_REF] Bhattacharya | Continuum damage mechanics analysis of fatigue crack initiation[END_REF][START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF][START_REF] Dattoma | Fatigue life prediction under variable loading based on a new non-linear continuum damage mechanics model[END_REF]. The notion of damage itself concerns the progressive degradation of the mechanical properties of materials before the initiation of cracks observable at the macro-scale [START_REF] Simo | Strain-and stress-based continuum damage models-I. Formulation[END_REF].
Contrary to approaches based on fracture mechanics, which explicitly consider the initiation and growth of micro-cracks, voids, and cavities as a discontinuous and discrete phenomenon [START_REF] Baxevanis | Fracture mechanics of shape memory alloys: review and perspectives[END_REF], CDM describes damage using a continuous variable associated with the local density of micro-defects. Based on this damage variable, constitutive equations have been developed to predict the deterioration of material properties [START_REF] Voyiadjis | Advances in damage mechanics : metals and metal matrix composites[END_REF]. CDM enables fatigue life prediction in innovative superalloys [START_REF] Shi | Creep and fatigue lifetime analysis of directionally solidified superalloy and its brazed joints based on continuum damage mechanics at elevated temperature[END_REF] and standard aluminium alloys [START_REF] Hojjati-Talemi | Fretting fatigue crack initiation lifetime predictor tool: Using damage mechanics approach[END_REF] alike.
Relevant models can also be implemented within FEA framework to predict the response of structures with complex shapes [START_REF] Zhang | Finite element implementation of multiaxial continuum damage mechanics for plain and fretting fatigue[END_REF]. Two opposing views exist in the theoretical modeling of continuous damage. If the micro-defects and their associated effects are considered isotropic, a simple scalar variable (i.e.,the damage accumulation) is sufficient to describe the impact of damage on material properties. However, to comply with experimental findings confirming anisotropic evolution of damage in ductile materials [START_REF] Lemaitre | Anisotropic damage law of evolution[END_REF][START_REF] Bonora | Ductile damage evolution under triaxial state of stress: Theory and experiments[END_REF][START_REF] Luo | Experiments and modeling of anisotropic aluminum extrusions under multi-axial loading Part II: Ductile fracture[END_REF][START_REF] Roth | Effect of strain rate on ductile fracture initiation in advanced high strength steel sheets: Experiments and modeling[END_REF], researchers have also developed anisotropic damage continuum models as proposed by [START_REF] Voyiadjis | A coupled anisotropic damage model for the inelastic response of composite materials[END_REF]; [START_REF] Brünig | An anisotropic ductile damage model based on irreversible thermodynamics[END_REF]; [START_REF] Desrumaux | Generalised Mori-Tanaka Scheme to Model Anisotropic Damage Using Numerical Eshelby Tensor[END_REF]. In this latter case, the distribution of micro-defects adopts preferred orientations throughout the medium. To model this behavior, a tensorial damage variable is typically introduced, (i.e., the damage effect tensor) [START_REF] Lemaitre | Anisotropic damage law of evolution[END_REF][START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF]. A set of internal variables that are characteristic of various damage mechanisms can also be considered [START_REF] Ladeveze | Damage modelling of the elementary ply for laminated composites[END_REF][START_REF] Mahboob | Mesoscale modelling of tensile response and damage evolution in natural fibre reinforced laminates[END_REF].
CDM models are also categorized based on the mathematical approach utilized. Strictly analytical formalisms belong to the group of deterministic approaches. These utilize robust thermodynamic principles, thermodynamic driving forces, and a critical stress threshold to derive mathematical expressions linking the damage variable with the material properties and other descriptions of state. The appearance of micro-defects below such stress thresholds is not considered possible and every result represents a deterministic prediction of material behavior. Alternatively, probabilistic approaches define probabilities attributed to the appearance of micro-defects. The damage is often thought to occur at points in the material where the local ultimate strength is lower than the average stress. Considering the local ultimate stress as a stochastic variable leads to calculated damage evolution that is likewise probabilistic. Such probability can be introduced into a thermodynamic model that describes the material properties to within margins of error [START_REF] Fedelich | A stochastic theory for the problem of multiple surface crack coalescence[END_REF][START_REF] Rupil | Identification and Probabilistic Modeling of Mesocrack Initiations in 304L Stainless Steel[END_REF].
The probabilistic models have been built mostly to treat fracture in brittle materials, such as ceramics [START_REF] Hild | On the probabilistic-deterministic transition involved in a fragmentation process of brittle materials[END_REF] or cement [START_REF] Grasa | A probabilistic damage model for acrylic cements. Application to the life prediction of cemented hip implants[END_REF], which demonstrate statistical scatter in direct relation with damage such as crack initiation and coalescence [START_REF] Meraghni | Implementation of a constitutive micromechanical model for damage analysis in glass mat reinforced composite structures[END_REF]. Probabilistic modeling may be a useful tool in fatigue life analysis of SMA bodies, given the scattering observed in the thermomechanical response of nearly identical test samples demonstrated in experimental works [START_REF] Figueiredo | Low-cycle fatigue life of superelastic NiTi wires[END_REF][START_REF] Nemat-Nasser | Superelastic and cyclic response of NiTi SMA at various strain rates and temperatures[END_REF]. Relevant experiments (Scirè Mammano and Dragoni, 2014) determine the number of cycles to failure in NiTi SMA wires by considering samples submitted to a series of cyclic load fatigue tests at increasing strain rates. It is evident from such works that the fatigue life is to some degree uncertain and the use of stochastic models might increase prediction accuracy overall.
Several fatigue failure models for SMAs have been developed based on experimental observations. [START_REF] Tobushi | Low-Cycle Fatigue of TiNi Shape Memory Alloy and Formulation of Fatigue Life[END_REF] have proposed an empirical fatigue life equation, similar to a Coffin-Manson law, that depends on strain amplitude, temperature, and frequency of the cycles. This first model was compared to rotating-bending fatigue tests. A modified Manson-Coffin model was further proposed by [START_REF] Maletta | Fatigue of pseudoelastic NiTi within the stress-induced transformation regime: a modified CoffinManson approach[END_REF][START_REF] Maletta | Fatigue properties of a pseudoelastic NiTi alloy: Strain ratcheting and hysteresis under cyclic tensile loading[END_REF] to predict the fatigue life of NiTi SMAs under the stress-controlled cyclic loading conditions. A third Manson Coffin-like relationship has been proposed by Lagoudas et al. (2009b) to determine the irrecoverable strain accumulation of NiTiCu SMAs as a function of the number of cycles to failure for different stress levels, for both partial and complete transformations.
Energy-based fatigue life models for SMAs have also been developed, and in particular consider the dissipated energy. [START_REF] Moumni | Fatigue analysis of shape memory alloys: energy approach[END_REF] proposed an empirical power law to predict the fatigue life of super elastic NiTi SMAs. [START_REF] Kan | An energy-based fatigue failure model for super-elastic NiTi alloys under pure mechanical cyclic loading[END_REF] has modified the previous model, replacing the power-law equation by a logarithmic one. Those models were compared with fatigue tests performed on NiTi alloys under uniaxial stress-controlled cyclic loading [START_REF] Kang | Whole-life transformation ratchetting and fatigue of super-elastic NiTi Alloy under uniaxial stress-controlled cyclic loading[END_REF]. [START_REF] Song | Damage-based life prediction model for uniaxial low-cycle stress fatigue of super-elastic NiTi shape memory alloy microtubes[END_REF] has recently proposed a damage-based fatigue failure model, considering three damage mechanisms, (i.e. micro-crack initiation, micro-crack propagation, and martensite transformation induced damage). A global damage variable is defined as the ratio of the accumulated dissipation energy at the current number of cycles (N ) with regard to the accumulated dissipation energy obtained at the failure life (N f ). A damage-based fatigue failure model is proposed to predict the fatigue life, that depends on the dissipation energy at the stabilized cycle, and the dissipation energy at the N-th cycle. It is shown that the model predicts the fatigue life of super-elastic NiTi SMA micro-tubes subjected to uniaxial stress-controlled load cycles. blueHigh cycle fatigue criterion have been developed recently for SMAs. The Investigation of SMA cyclic response under elastic shakedown has led to the definition of a Dang Van, which means type endurance limit for SMA materials [START_REF] Auricchio | A shakedown analysis of high cycle fatigue of shape memory alloys[END_REF]. A Shakedown based model for high-cycle fatigue of shape memory alloys has been developed by [START_REF] Gu | Shakedown based model for high-cycle fatigue of shape memory alloys[END_REF]. Non-proportional multiaxial fatigue of pseudoelastic SMAs has been recently investigated by [START_REF] Song | Non-proportional multiaxial fatigue of super-elastic NiTi shape memory alloy micro-tubes: Damage evolution law and lifeprediction model[END_REF], which has led to the definition of a multiaxial fatigue model.
Although past developments allow determination of the fatigue life of shape memory alloy devices for uniaxial, homogeneous cyclic loadings, the present work focuses on the difficult problem of coupling damage evolution to phase transformation, irrecoverable transformation-induced plastic strain, and general three-dimensional thermomechanical states. To permit the introduction of damage into a previously well-defined and widely accepted class of model for SMA phase transformation (Lagoudas et al. (2012)), probabilistic and anisotropic approaches are avoided. Rather, a deterministic and isotropic model for continuum damage mechanics is proposed, which is compatible with the existing models of thermomechanical response of SMA actuators, including those considering generated plastic strain [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Such a model, even once its assumptions are considered, provides the most comprehensive tool for calculating fatigue in SMA actuators to date.
The organization of this work follows. The motivation of the proposed model, including the need for numerical simulations of cyclic loading in SMA bodies, has been provided in Section 1. Observations motivating specific forms of the evolution equations for damage and irrecoverable strains are overviewed in Section 2. The thermodynamical model is developed in Section 3, with the functional form of the various evolution equations related to the physical mechanisms considered being clearly presented. After some comments on model calibration in Section 4, numerical simulations and their comparison with experimental demonstrations of structural and functional fatigue in SMA bodies are presented in Section 5. Final conclusions are provided in Section 6.
Motivating Observations from Previous Studies
Studies of SMA actuation fatigue are not as numerous as those focusing on nearly isothermal (i.e., superelastic) response. This is due to both the relative importance of generally isothermal medical devices and the difficulty of applying high numbers of thermal cycles to SMA actuators. From those SMA actuation fatigue databases that are available in the literature, the experimental studies of actuation fatigue and post-mortem analyses that were carried out on Ni60Ti40 (wt. %) [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF] and on NiTiHf [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] have been selected for consideration herein. In those past studies, a widespread distribution of cracks were found to be present in the SMA components at failure, as shown in Fig. 1. This indicates that a progressive damage mechanism was activated during the lifetime of the SMA body. CDM appears to be particularly adapted for modeling such fatigue damage given the continuous and progressive evolution of multiple defects observed.
NiTiHf alloys have received increased attention in recent years owing to their high potential to be utilized for high-temperature actuators (e.g., those having transformation temperatures typically above 100 °C). They can operate at high stress levels without the development of significant plastic strains [START_REF] Karaca | NiTiHf-based shape memory alloys[END_REF]. From the analysis of NiTiHf actuator fatigue tests, we see that the number of cycles to failure increases with decreasing cyclic actuation work. Actuator work is defined as the scalar product of the constant applied stress and the transverse strain recovered each cycle (see Fig. 2a). Further, the amount of irrecoverable strain generated seems to be positively correlated with the number of cycles to failure (see Fig. 2b). The development of such strains may be important in predicting the lifetime of actuators formed from a number of SMA materials at some stress levels, specifically at higher stress levels (e.g., 300-600 MPa in [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF]). The study of fatigue in Ni60Ti40 actuator components also shows that the number of cycles to failure increases with decreasing cyclic actuation work (see Fig. 2c). However, the experimental results suggest that in this material loaded to lower stresses (e.g., 100-250 MPa in [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF]), failure may not be correlated with the accumulation of plastic strain, though such correlation is regularly considered in ductile metals. This is shown for the lower-stressed Ni60Ti40 samples (Fig. 2d). The consistent and clear negative correlation between cyclic actuation work and fatigue life in both cases motivates the choice of a thermodynamical model to describe the evolution of damage in shape memory alloys. The fact that generated TRIP might also be correlated to failure at higher stresses only motivates consideration of a stress threshold effect for the coupling between damage and TRIP strains, and this will be addressed for the first time herein.
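As a point of reference for the correlations discussed above, the cyclic actuation work can be evaluated directly from the test data. The short sketch below (Python; the array names, numerical values, and the uniaxial simplification are illustrative assumptions, not data or procedures from the cited studies) computes the work of each cycle as the product of the constant applied stress and the actuation strain recovered during that cycle.

```python
import numpy as np

def actuation_work_per_cycle(applied_stress, recovered_strain):
    """Uniaxial actuation work density per cycle (MPa * strain = MJ/m^3).

    applied_stress   : constant bias stress held during thermal cycling (MPa)
    recovered_strain : actuation strains recovered on successive cycles (-)
    """
    return applied_stress * np.asarray(recovered_strain)

# Illustrative values only (not data from the cited studies):
work = actuation_work_per_cycle(200.0, [0.025, 0.024, 0.023])
print(work)  # -> [5.  4.8 4.6] MJ/m^3
```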
Constitutive Modeling Framework for Phase Transformation and Damage
Physical mechanisms associated with cyclic martensitic phase transformation such as transformation strain generation and recovery, transformation-induced plastic strain generation, and fatigue damage accumulation are all taken into account within the thermodynamically consistent constitutive model presented in this section. Fatigue damage is described utilizing a scalar variable following an isotropic continuum theory of damage. The damage growth rate has been formulated as a function of both the stress state and the magnitude of the recoverable transformation strain such that cyclic actuation work is directly and clearly considered. Transformation-induced plasticity is also considered as per the experimental observations described in the previous section, and its generation depends on the stress state, the magnitude of the transformation strain, and a term that couples the plastic strain with damage for stress levels above an assumed material-dependent threshold.
The model is based on the framework of [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF], considering further improvements proposed by [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF] for thermodynamical models describing phase transformation that drives multiple complex phenomena. The proposed model focuses on the generation and recovery of transformation strains that occur as a result of martensitic transformation (forward and reverse); martensitic reorientation is not considered given its relative unimportance in SMA cyclic actuator applications, which are obviously the primary motivating application for this work. Concerning damage evolution, it is herein assumed, based on observations, that microscopic crack initiation and propagation are not explicitly linked to the appearance of large plastic strains [START_REF] Bertacchini | Thermomechanical transformation fatigue of TiNiCu SMA actuators under a corrosive environment -Part I: Experimental results[END_REF][START_REF] Calhoun | Actuation fatigue life prediction of shape memory alloys under the constant-stress loading condition[END_REF], but rather that the process of martensitic phase transformation may be more important. In fact, it has been shown the localized nucleation of martensite around crack tips during forward transformation can decrease the fracture toughness and induce localized propagation of cracks, even under moderate stresses [START_REF] Baxevanis | Fracture mechanics of shape memory alloys: review and perspectives[END_REF].
The evolution of damage must therefore be coupled with the martensitic transformation mechanisms directly. The framework thus adopted follows closely the work of [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF] and [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF] for the development of TRIP strain and [START_REF] Lemaitre | Mechanics of solid materials[END_REF] for the coupling between a physical mechanism such as plasticity or phase transformation with damage.
To summarize, the following internal state variables associated with multiple inelastic strain mechanisms are tracked during both forward and reverse transformations:
• The inelastic transformation strain ε^t, which considers all inelastic strains associated with different physical phenomena occurring during transformation (i.e., it is composed of contributions from crystallographic transformation, plasticity, and damage). This transformation strain is decomposed into two contributions, ε^F and ε^R, representing the inelastic strain induced by forward transformation and by reverse transformation, respectively. The inelastic transformation strain is further split into a part that is recoverable (tt) and a portion that is not (TRIP strain; tp), to obtain four total contributions, ε^{tt-F}, ε^{tp-F}, ε^{tt-R} and ε^{tp-R}, such that:
$$\varepsilon^t = \varepsilon^F + \varepsilon^R = \varepsilon^{tt-F} + \varepsilon^{tp-F} + \varepsilon^{tt-R} + \varepsilon^{tp-R}. \tag{1}$$
• The scalar total martensitic volume fractions induced by forward transformation (into martensite) and by reverse transformation (into austenite) (ξ F , ξ R ),
• The scalar transformation hardening energies induced by forward transformation and by reverse transformation ( g F , g R ),
• The scalar accumulated transformation-induced plastic strain accompanying forward transformation and reverse transformation (p F , p R ),
• The scalar plastic hardening energy induced by forward transformation and reverse transformation (g tp-F , g tp-R ),
• The scalar (i.e., isotropic) damage accumulation induced during forward transformation and reverse transformation (d F , d R ).
Considering the point-wise model as describing a representative volume element of volume
V (Bo and Lagoudas, 1999a) and acknowledging that both forward and reverse transformations can occur simultaneously at various points within such a finite volume, the following two rate variables are introduced : (i) ξF represents the fractional rate of change of the martensitic volume V M induced by forward transformation [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF]:
$$\dot{\xi}^F = \frac{\dot{V}_M^F}{V}. \tag{2}$$
Similarly,(ii) ξR represents the rate of change of the martensitic volume fraction (MVF) induced by reverse transformation:
$$\dot{\xi}^R = -\frac{\dot{V}_M^R}{V}. \tag{3}$$
The rate of the total martensitic volume fraction ξ is then:
$$\dot{\xi} = \dot{\xi}^F - \dot{\xi}^R, \tag{4}$$
which leads to the definition of the total volume fraction of martensite:
$$\xi = \int_0^t \dot{\xi}^F \, d\tau - \int_0^t \dot{\xi}^R \, d\tau = \xi^F - \xi^R. \tag{5}$$
Note that ξ^F and ξ^R always take positive values, which simplifies the thermodynamic definition of the model. The physical limitation related to the definition of the total volume fraction is expressed as:
$$0 \le \xi \le 1 \;\Leftrightarrow\; \xi^R \le \xi^F \le 1 + \xi^R. \tag{6}$$
Similarly, the rates of the various strain measures of the total accumulated plastic strain p, the total transformation hardening energy g t , the total plastic hardening g tp , and the total damage d are taken to be the sums of contributions from both forward and reverse transformations:
$$\begin{aligned}
\dot{\varepsilon}^{t-F} &= \dot{\varepsilon}^{tt-F} + \dot{\varepsilon}^{tp-F}, &
\dot{\varepsilon}^{t-R} &= \dot{\varepsilon}^{tt-R} + \dot{\varepsilon}^{tp-R}, \\
\dot{\varepsilon}^{tt} &= \dot{\varepsilon}^{tt-F} + \dot{\varepsilon}^{tt-R}, &
\dot{\varepsilon}^{tp} &= \dot{\varepsilon}^{tp-F} + \dot{\varepsilon}^{tp-R}, \\
\dot{\xi} &= \dot{\xi}^F - \dot{\xi}^R, &
\dot{g}^t &= \dot{g}^F + \dot{g}^R, \\
\dot{p} &= \dot{p}^F + \dot{p}^R, &
\dot{g}^{tp} &= \dot{g}^{tp-F} + \dot{g}^{tp-R}, \\
\dot{d} &= \dot{d}^F + \dot{d}^R. &&
\end{aligned} \tag{7}$$
In this way, two sets of internal variables respectively related to forward transformation (into martensite) and to reverse transformation (into austenite) are defined:
$$\begin{aligned}
\zeta^F &= \{\xi^F, \varepsilon^{tt-F}, \varepsilon^{tp-F}, g^F, p^F, g^{tp-F}, d^F\}, \\
\zeta^R &= \{\xi^R, \varepsilon^{tt-R}, \varepsilon^{tp-R}, g^R, p^R, g^{tp-R}, d^R\}, \\
\zeta &= \{\zeta^F, \zeta^R\}.
\end{aligned} \tag{8}$$
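A minimal bookkeeping sketch of the two internal-variable sets and of the constraint of Eq. (6) is given below (Python; the class layout and the scalar/uniaxial representation of the tensorial variables are simplifying assumptions made purely for illustration, not part of the published formulation).

```python
from dataclasses import dataclass

@dataclass
class TransformationSet:
    """One of the two internal-variable sets zeta^F or zeta^R of Eq. (8).

    Tensorial strains are reduced to scalars here for brevity.
    """
    xi: float = 0.0      # accumulated volume fraction produced by this branch
    eps_tt: float = 0.0  # recoverable transformation strain contribution
    eps_tp: float = 0.0  # TRIP strain contribution
    g: float = 0.0       # transformation hardening energy
    p: float = 0.0       # accumulated TRIP strain measure
    g_tp: float = 0.0    # plastic hardening energy
    d: float = 0.0       # damage contribution

def total_mvf(zf: TransformationSet, zr: TransformationSet) -> float:
    """Total martensite volume fraction xi = xi^F - xi^R, Eq. (5),
    with the physical bound of Eq. (6) enforced as a consistency check."""
    xi = zf.xi - zr.xi
    if not (-1e-12 <= xi <= 1.0 + 1e-12):
        raise ValueError("Constraint 0 <= xi <= 1 violated: xi = %g" % xi)
    return xi

state_F, state_R = TransformationSet(), TransformationSet()
state_F.xi, state_R.xi = 0.6, 0.2
print(total_mvf(state_F, state_R))  # -> 0.4
```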
To rigorously derive a three-dimensional model for damage accumulation in SMA materials that explicitly couples actuation work to material degradation, the thermodynamics of irreversible processes are utilized. The fundamentals are presented in Annex A.
Thermodynamic derivation of the proposed model
The total Gibbs free energy G is additively decomposed into a thermoelastic contribution G^A from regions of the RVE in the austenitic phase, a thermoelastic contribution G^M from regions of the RVE in the martensitic phase, and a mixing term G^{mix} that accounts for non-thermoelastic processes. Given the state variables chosen for the description of the thermomechanical mechanisms, the Gibbs energy for the overall SMA material is written:
$$G = (1-\xi)\,G^A(\sigma,\theta,d) + \xi\,G^M(\sigma,\theta,d) + G^{mix}(\sigma, \varepsilon^t, g^t, g^{tp}), \tag{9}$$
The part of the Gibbs free energy related to the martensitic transformation only is taken from the model of [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF] given the conventional dependence of elastic response on damage [START_REF] Lemaitre | Engineering damage mechanics: Ductile, creep, fatigue and brittle failures[END_REF], such that
$$G^\beta(\sigma,\theta,d) = -\frac{1}{2(1-d)}\,\sigma : S^\beta : \sigma - \sigma : \alpha\,(\theta-\theta_0) + c^\beta\left[(\theta-\theta_0) - \theta\ln\left(\frac{\theta}{\theta_0}\right)\right] - \eta_0^\beta\,\theta + E_0^\beta, \tag{10}$$
for β = A, M .
Whereas the energy of phase mixing is given as:
$$G^{mix}(\sigma, \varepsilon^t, g^t, g^{tp}) = -\sigma : \varepsilon^t + g^t + g^{tp}. \tag{11}$$
In those expressions above, S is the compliance tensor (4th order), α is the thermal expansion tensor (2nd order), c 0 is a material parameter that approximates as the specific heat capacity (additional terms arising from thermo-inelastic coupling being small [START_REF] Rosakis | A thermodynamic internal variable model for the partition of plastic work into heat and stored energy in metals[END_REF])), η 0 is the initial entropy, E 0 is the initial internal energy, and θ 0 is the initial or reference temperature . Details about the selection of the thermoelastic contribution of the phases, especially considering the term related to heat capacity, are given in [START_REF] Chatzigeorgiou | Thermomechanical behavior of dissipative composite materials[END_REF]. It is assumed, [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF][START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF], that thermoelastic parameters (including specific heat) that enter the expression of the Gibbs free energy for each phase can be regrouped into phase-dependent parameters as experiments warrant (i.e., S(ξ), α(ξ), c(ξ), η 0 (ξ) and E 0 (ξ)), where a linear rule of mixtures is assumed. For example, S(ξ) is the linearly phase-dependent and, written as
$$S(\xi) = S^A - \xi\,(S^A - S^M) = S^A + \xi\,\Delta S, \tag{12}$$
where S^A and S^M denote the compliance tensors of austenite and martensite, respectively, and the operator Δ denotes the difference between any material constant as measured in the pure martensite and pure austenite phases (Δ(•) = (•)^M − (•)^A). Conventionally, standard isotropic forms are assumed sufficient for S and α in polycrystals. Recalling that the transformation strain ε^t includes all deformations associated with martensitic transformation, recoverable and irrecoverable, the following thermodynamical quantities are expressed, recalling the method proposed by [START_REF] Germain | Continuum thermodynamics[END_REF] and invoking (8), (A.15), and (10):
$$\begin{aligned}
\varepsilon &= -\frac{\partial G}{\partial \sigma} = \frac{1}{1-d}\, S : \sigma + \alpha\,(\theta-\theta_0) + \varepsilon^t, \\
\eta &= -\frac{\partial G}{\partial \theta} = \alpha : \sigma + c_0 \ln\left(\frac{\theta}{\theta_0}\right) + \eta_0, \\
\gamma_{loc} &= -\frac{\partial G}{\partial \varepsilon^t} : \dot{\varepsilon}^t - \frac{\partial G}{\partial g^t}\,\dot{g}^t - \frac{\partial G}{\partial \xi}\,\dot{\xi} - \frac{\partial G}{\partial d}\,\dot{d} - \frac{\partial G}{\partial p}\,\dot{p} - \frac{\partial G}{\partial g^{tp}}\,\dot{g}^{tp} \\
&= \sigma : \dot{\varepsilon}^t - \dot{g}^t - \frac{\partial G}{\partial \xi}\,\dot{\xi} - \frac{\partial G}{\partial d}\,\dot{d} - \dot{g}^{tp}, \\
r &= -c_0\,\dot{\theta} - \theta\,\alpha : \dot{\sigma} + \gamma_{loc}.
\end{aligned} \tag{13}$$
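To illustrate how damage enters the elastic response through Eqs. (12)–(13), the following one-dimensional sketch (Python; scalar compliances, a uniaxial reduction, and the chosen numerical values are assumptions made purely for illustration) evaluates the total strain of the first line of Eq. (13) for a given stress, temperature, martensite volume fraction, damage, and transformation strain.

```python
def total_strain_1d(sigma, theta, xi, d, eps_t,
                    E_A, E_M, alpha, theta0):
    """Uniaxial version of Eq. (13)_1 with the rule of mixtures of Eq. (12).

    sigma : axial stress            theta : temperature
    xi    : martensite fraction     d     : isotropic damage (0 <= d < 1)
    eps_t : transformation strain (recoverable + irrecoverable)
    E_A, E_M : austenite / martensite Young's moduli
    alpha : thermal expansion coefficient, theta0 : reference temperature
    """
    S_A, S_M = 1.0 / E_A, 1.0 / E_M
    S = S_A + xi * (S_M - S_A)          # scalar rule of mixtures, Eq. (12)
    eps_el = S * sigma / (1.0 - d)      # damage-degraded elastic strain
    eps_th = alpha * (theta - theta0)   # thermal strain
    return eps_el + eps_th + eps_t

# Illustrative numbers only:
print(total_strain_1d(sigma=200.0, theta=350.0, xi=1.0, d=0.1, eps_t=0.03,
                      E_A=70e3, E_M=40e3, alpha=1e-5, theta0=300.0))
```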
To proceed with the definition of the evolution equations associated with the various physical mechanisms, we consider that the evolution of all inelastic strains is completely related to the underlying process of phase transformation, as assumed for the TRIP effect elsewhere [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF] and captured herein in (A.6).
The following specific evolution equations are then considered, where all the rate quantities are linked with the rate of change of the martensite volume fraction:
$$\begin{aligned}
\dot{\varepsilon}^{tt-F} &= \Lambda^{tt-F}\,\dot{\xi}^F, &
\dot{g}^F &= f^{t-F}\,\dot{\xi}^F, &
\dot{p}^F &= f^{tp-F}\,\dot{\xi}^F, \\
\dot{\varepsilon}^{tp-F} &= \Lambda^{tp-F}\,\dot{p}^F = \Lambda^{tp-F} f^{tp-F}\,\dot{\xi}^F, &
\dot{g}^{tp-F} &= H^{tp-F}\,\dot{p}^F = H^{tp-F} f^{tp-F}\,\dot{\xi}^F, &
\dot{d}^F &= f^{td-F}\,\dot{\xi}^F, \\
\dot{\varepsilon}^{td-F} &= \Lambda^{td-F}\,\dot{d}^F = \Lambda^{td-F} f^{td-F}\,\dot{\xi}^F, &&&&
\end{aligned} \tag{14}$$
and
$$\begin{aligned}
\dot{\varepsilon}^{tt-R} &= \Lambda^{tt-R}\,\dot{\xi}^R, &
\dot{g}^R &= f^{t-R}\,\dot{\xi}^R, &
\dot{p}^R &= f^{tp-R}\,\dot{\xi}^R, \\
\dot{\varepsilon}^{tp-R} &= \Lambda^{tp-R}\,\dot{p}^R = \Lambda^{tp-R} f^{tp-R}\,\dot{\xi}^R, &
\dot{g}^{tp-R} &= H^{tp-R}\,\dot{p}^R = H^{tp-R} f^{tp-R}\,\dot{\xi}^R, &
\dot{d}^R &= f^{td-R}\,\dot{\xi}^R, \\
\dot{\varepsilon}^{td-R} &= \Lambda^{td-R}\,\dot{d}^R = \Lambda^{td-R} f^{td-R}\,\dot{\xi}^R. &&&&
\end{aligned} \tag{15}$$
In the above equations, Λ^{tt-F} represents the evolution tensor (that is, the direction of the strain rate) for the recoverable part of the transformation strain during forward transformation, while Λ^{tp-F} and Λ^{td-F} represent the irrecoverable parts related to plasticity and damage, respectively (during forward transformation). During reverse transformation, the three evolution tensors are denoted as Λ^{tt-R}, Λ^{tp-R} and Λ^{td-R}. The functional forms of these direction tensors and of the magnitude functions f are specified in the section on the choice of functional forms below. The thermodynamic driving forces conjugate to the two sets of internal variables are identified as:
• Forward transformation (set A^F):
$$\begin{aligned}
A_{\xi}^F = A_{\xi} &= \frac{1}{2}\,\sigma : \Delta S : \sigma + \sigma : \Delta\alpha\,(\theta-\theta_0) - \rho\Delta c\left[(\theta-\theta_0) - \theta\ln\left(\frac{\theta}{\theta_0}\right)\right] + \rho\Delta s_0\,\theta - \rho\Delta E_0, \\
A_{\varepsilon}^F &= A_{\varepsilon^{tt-F}} = A_{\varepsilon^{tp-F}} = A_{\varepsilon^{td-F}} = A_{\varepsilon^t} = \sigma, \\
A_{g}^F &= -1, \qquad A_{p}^F = A_p = 0, \qquad A_{g^{tp}}^F = -1, \\
A_{d}^F = A_d &= \frac{1}{2(1-d)^2}\,\sigma : S : \sigma,
\end{aligned} \tag{16}$$
• Reverse transformation (set A^R):
$$\begin{aligned}
A_{\xi}^R = -A_{\xi} &= -\frac{1}{2}\,\sigma : \Delta S : \sigma - \sigma : \Delta\alpha\,(\theta-\theta_0) + \rho\Delta c\left[(\theta-\theta_0) - \theta\ln\left(\frac{\theta}{\theta_0}\right)\right] - \rho\Delta s_0\,\theta + \rho\Delta E_0, \\
A_{\varepsilon}^R &= A_{\varepsilon^{tt-R}} = A_{\varepsilon^{tp-R}} = A_{\varepsilon^{td-R}} = A_{\varepsilon^t} = \sigma, \\
A_{g}^R &= 1, \qquad A_{p}^R = A_p = 0, \qquad A_{g^{tp}}^R = -1, \\
A_{d}^R = A_d &= \frac{1}{2(1-d)^2}\,\sigma : S : \sigma. 
\end{aligned} \tag{17}$$
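The thermodynamic force conjugate to damage in Eqs. (16)–(17) is an elastic-energy-type quantity that grows as damage accumulates. A scalar sketch of its evaluation is given below (Python; the uniaxial reduction of σ:S:σ to S·σ², and the numerical values used in the example, are illustrative assumptions only).

```python
def damage_driving_force(sigma, S, d):
    """Uniaxial form of A_d = (sigma : S : sigma) / (2 (1 - d)^2), Eqs. (16)-(17).

    sigma : axial stress, S : elastic compliance 1/E, d : current damage.
    """
    return S * sigma**2 / (2.0 * (1.0 - d)**2)

# The force increases sharply as d -> 1, reflecting the loss of load-bearing section:
for d in (0.0, 0.3, 0.6, 0.9):
    print(d, damage_driving_force(sigma=300.0, S=1.0 / 70e3, d=d))
```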
Transformation limits
Since phase transformation, TRIP, and damage are assumed to be rate-independent phenomena, a threshold for the activation of such mechanisms that depends primarily on thermodynamic forces should be defined, [START_REF] Edelen | On the Characterization of Fluxes in Nonlinear Irreversible Thermodynamics[END_REF]. Specifically, the evolution of all internal variables ζ should respect the following, where S is a domain in the space of the components of A having boundary ∂S:
$$\dot{\zeta} = 0 \;\rightarrow\; A \in S + \partial S, \qquad \dot{\zeta} \neq 0 \;\rightarrow\; A \in \partial S. \tag{18}$$
Following the methodology introduced by [START_REF] Germain | Cours de Mécanique des Miliex Continus: Tome I-Théorie Générale[END_REF] and referred to as generalized standard materials by [START_REF] Halphen | Sur les matériaux standards généralisés[END_REF], if ∂S is a surface with a continuous tangent plane and if Φ(A) is a function continuously differentiable with respect to A, zero on ∂S, and negative in S, then one can write:
$$A \in S,\;\; \dot{\zeta} = 0; \qquad A \in \partial S,\;\; \dot{\zeta} = \lambda\,\operatorname{grad}\Phi \;\Leftrightarrow\; \dot{\zeta}_\alpha = \lambda\,\frac{\partial\Phi}{\partial A_\alpha}, \quad \lambda \ge 0. \tag{19}$$
Further, if state variables are included as parameters and the domain S remains convex, the second law of thermodynamics is satisfied, as is the principle of maximum dissipation [START_REF] Halphen | Sur les matériaux standards généralisés[END_REF].
Note that the processes of forward and reverse transformations are considered independently, in the sense that dissipation related to the rate of the internal variables defined for forward transformation ζ F and ζ R (cf. ( 16) and ( 17)) should be independently non-negative, i.e.:
$$
A_F : \dot{\zeta}_F \geq 0; \qquad A_R : \dot{\zeta}_R \geq 0. \tag{20}
$$
The two criteria for forward and reverse transformations are based on the ones proposed elsewhere ([START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF]; [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]; Chatziathanasiou et al. (2015)):
$$
\Phi^{F} = \bar{\Phi}^{F} + A^{\xi}_F - f^{t-F}(\xi) + H^{tp}\,p - Y^{t-F}, \qquad
\Phi^{R} = -\bar{\Phi}^{R} + A^{\xi}_R + f^{t-R}(\xi) - H^{tp}\,p + Y^{t-R}. \tag{21}
$$
Those two functions are null on the surfaces ∂S F and ∂S R of the convex domains S F and S R , respectively, if the important functions ΦF (σ) and ΦR (σ) are convex.
Considering (19) and assuming that λ is a positive multiplier, the rate of the internal variables are given as:
$$
\dot{\xi}_F = \lambda\,\frac{\partial \Phi^{F}}{\partial A^{\xi}_F} = \lambda, \qquad
\dot{\varepsilon}^{tt-F} = \lambda\,\frac{\partial \Phi^{F}}{\partial A^{\varepsilon^{tt-F}}} = \dot{\xi}_F\,\frac{\partial \Phi^{F}}{\partial \sigma}, \qquad
\dot{\xi}_R = \lambda\,\frac{\partial \Phi^{R}}{\partial A^{\xi}_R} = \lambda, \qquad
\dot{\varepsilon}^{tt-R} = \lambda\,\frac{\partial \Phi^{R}}{\partial A^{\varepsilon^{tt-R}}} = \dot{\xi}_R\,\frac{\partial \Phi^{R}}{\partial \sigma}. \tag{22}
$$
Comparing (22) with (14) and (15), we see:
$$
\Lambda^{tt-F} = \frac{\partial \Phi^{F}}{\partial \sigma}, \qquad
\Lambda^{tt-R} = \frac{\partial \Phi^{R}}{\partial \sigma}. \tag{23}
$$
3.3. Choice of Functional Forms
Fully recoverable martensitic transformation
The transformation functions ΦF and ΦR are the particular terms in the transformation criteria that consider the shape of the bounding surfaces in the six-dimensional stress hyperspace; here a modified Prager function is chosen that accounts for tension-compression asymmetry but not anisotropy [START_REF] Bouvet | A phenomenological model for pseudoelasticity of shape memory alloys under multiaxial proportional and nonproportional loadings[END_REF][START_REF] Grolleau | Assessment of tension-compression asymmetry of NiTi using circular bulge testing of thin plates[END_REF]. The following formulation closely follows [START_REF] Patoor | Micromechanical Modelling of Superelasticity in Shape Memory Alloys[END_REF], [START_REF] Peultier | A simplified micromechanical constitutive law adapted to the design of shape memory applications by finite element methods[END_REF], and [START_REF] Chemisky | Constitutive model for shape memory alloys including phase transformation, martensitic reorientation and twins accommodation[END_REF]. It predicts that the initiation of SMA forward transformation depends on the stress tensor invariants and asymmetry-related parameters. Specifically,
$$
\bar{\Phi}^{F}(\sigma) = \sqrt{3 J_2(\sigma')}\left[1 + b\,\frac{J_3(\sigma')}{J_2^{3/2}(\sigma')}\right]^{1/n} H^{cur}(\sigma). \tag{24}
$$
The terms $J_2(\sigma)$ and $J_3(\sigma)$ denote the second and third invariants of the deviatoric part $\sigma'$. These are given as:
$$
J_2(\sigma) = \tfrac{1}{2}\,\sigma'_{ij}\sigma'_{ij}, \qquad
J_3(\sigma) = \tfrac{1}{3}\,\sigma'_{ij}\sigma'_{jk}\sigma'_{ki}, \tag{25}
$$
using summation notation for repeated indices. Constants b and n are associated with the ratio between stress magnitudes needed to induce forward transformation under tension and compression loading. Convexity is ensured under specific conditions detailed in [START_REF] Chatziathanasiou | Phase Transformation of Anisotropic Shape Memory Alloys: Theory and Validation in Superelasticity[END_REF].
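As a concrete numerical aid, the short sketch below evaluates the deviatoric invariants of (25) and the tension-compression asymmetric equivalent stress that multiplies H^cur in the reconstructed form of (24). It is only an illustration, not the smartplus implementation; function and variable names are chosen here for clarity.

```python
import numpy as np

def deviator(sigma):
    """Deviatoric part of a 3x3 stress tensor."""
    return sigma - np.trace(sigma) / 3.0 * np.eye(3)

def invariants_J2_J3(sigma):
    """Second and third invariants of the stress deviator, cf. eq. (25)."""
    s = deviator(sigma)
    J2 = 0.5 * np.einsum("ij,ij->", s, s)
    J3 = np.einsum("ij,jk,ki->", s, s, s) / 3.0
    return J2, J3

def prager_equivalent_stress(sigma, b, n):
    """Tension-compression asymmetric equivalent stress entering eq. (24)."""
    J2, J3 = invariants_J2_J3(sigma)
    if J2 <= 0.0:
        return 0.0  # zero deviatoric stress: no transformation driving stress
    return np.sqrt(3.0 * J2) * (1.0 + b * J3 / J2 ** 1.5) ** (1.0 / n)

# Uniaxial check: for pure tension and small b the equivalent stress reduces to sigma_11.
sigma_uni = np.diag([400.0, 0.0, 0.0])
print(prager_equivalent_stress(sigma_uni, b=0.0, n=2.0))  # -> 400.0
```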
The evolution of the maximum transformation strain H cur is represented by the following decaying exponential function (Hartl et al., 2010a):
$$
H^{cur}(\sigma) =
\begin{cases}
H_{min}, & \sigma \leq \sigma_{crit}, \\
H_{min} + (H_{sat} - H_{min})\left(1 - e^{-k(\sigma - \sigma_{crit})}\right), & \sigma > \sigma_{crit}.
\end{cases} \tag{26}
$$
Here σ denotes the Mises stress and H min corresponds to the minimal observable transformation strain magnitude generated during full transformation under tensile loading (or the two way shape memory strain magnitude). The parameter H sat describes the maximum possible recoverable full transformation strain generated under uniaxial tensile loading. Additionally, σ crit denotes the critical Mises equivalent stress below which H cur = H min and the parameter k controls the rate at which H cur exponentially evolves from H min to H sat .
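A minimal sketch of (26) is given below; the function name is illustrative, and the numerical check uses the NiTiHf values reported later in Table 2.

```python
import numpy as np

def H_cur(sigma_mises, H_min, H_sat, k, sigma_crit):
    """Maximum transformation strain magnitude versus Mises stress, eq. (26)."""
    if sigma_mises <= sigma_crit:
        return H_min
    return H_min + (H_sat - H_min) * (1.0 - np.exp(-k * (sigma_mises - sigma_crit)))

# With H_min = 0.005, H_sat = 0.0277, k = 0.0172 MPa^-1 and sigma_crit = 120 MPa
# (Table 2), a 400 MPa load gives H_cur ~ 0.0275, i.e. close to saturation.
print(H_cur(400.0, 0.005, 0.0277, 0.0172, 120.0))
```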
The threshold for forward transformation introduced in (3.2) is not constant and is given as [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]:
$$
Y^{t-F} = Y^{crit-F} + D\,\sigma : \Lambda^{\varepsilon}_F \tag{27}
$$
The variables D and Y crit-F are model constants associated with the differing influences of stress on transformation temperatures for forward and reverse transformation. They are calculated from knowledge of other material constants [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF].
During forward transformation, the transformation strain is oriented in the direction of the applied stress, which motivates the selected J 2 -J 3 form of the direction tensor Λ t-F .
During reverse transformation, it is assumed that the direction of transformation strain recovery is instead governed by the average orientation of the martensite. This is represented in an average sense by the value of the macroscopic transformation strain $\varepsilon^{tt} = \varepsilon^{tt-F} + \varepsilon^{tt-R}$, as normalized by the martensite volume fraction ξ. Specifically, we assume
$$
\Lambda^{t-R} = \frac{\varepsilon^{tt-F}}{\xi_F} + \frac{\varepsilon^{tt-R}}{\xi_R}. \tag{28}
$$
Given the assumed associativity for the reverse transformation strain (see ( 23)), the transformation function ΦR for reverse transformation is then expressed as:
$$
\bar{\Phi}^{R} = \sigma : \left(\frac{\varepsilon^{tt-F}}{\xi_F} + \frac{\varepsilon^{tt-R}}{\xi_R}\right). \tag{29}
$$
After ( 27), the threshold for reverse transformation is expressed as:
$$
Y^{t-R} = Y^{crit-R} - D\,\sigma : \varepsilon^{t}, \tag{30}
$$
where Y crit-R is another material constant (usually taken equal to Y crit-F ).
An evolution equation also links the time rate of changes of the hardening energies ( ġF and ġR ) with those of martensite ( ξF and ξR ), according to ( 14) and ( 15). Then f t-F and f t-R are referred to as the forward and reverse hardening functions, respectively, which define the current transformation hardening behavior. Note that g t , being a contribution to the Gibbs free energy, cannot depend on the time derivative of the martensitic volume fraction but only on the transformation history. As per ( 14) and ( 15), the evolution equation associated with g t changes with the transformation direction such that, given the reversibility of martensitic transformation in SMAs, in the absence of other dissipative mechanisms the Gibbs free energy should take on the same value for the same state of the external variables upon completion of any full transformation loop. If all contributions to the Gibbs free energy with the exception of g t are returned to their original values after a full transformation loop, the following condition must be satisfied to fully return G r to its initial state:
$$
\int_{0}^{1} f^{t-F}\,d\xi + \int_{1}^{0} f^{t-R}\,d\xi = 0. \tag{31}
$$
This necessary condition restricts the choice of hardening function for forward and reverse transformations and constrains the calibration accordingly. The specification of a form for the hardening functions that describe smooth transition from elastic to transformation response is another key contribution of the model proposed by [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]:
$$
f^{t-F}(\xi) = \tfrac{1}{2}\,a_1\left(1 + \xi^{n_1} - (1-\xi)^{n_2}\right) + a_3, \qquad
f^{t-R}(\xi) = \tfrac{1}{2}\,a_2\left(1 + \xi^{n_3} - (1-\xi)^{n_4}\right) - a_3. \tag{32}
$$
Here, n 1 , n 2 , n 3 and n 4 are real exponents in the interval (0, 1] 1 . This form is selected here since it is specifically adapted to the response of polycrystalline SMA systems wherein the transformation hardening can be quite "smooth", especially after the completion of several cycles. Such smoothness is tuned by the adjustment of the parameters {n 1 , n 2 , n 3 , n 4 }.
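As an illustration, the sketch below evaluates the smooth hardening functions of (32) and numerically checks the closure condition (31) for a given parameter set. It assumes the constants a_1, a_2, a_3 are already known from calibration (the calibration relations themselves are not repeated here); all names are illustrative.

```python
import numpy as np

def f_t_F(xi, a1, a3, n1, n2):
    """Forward transformation hardening function, eq. (32)."""
    return 0.5 * a1 * (1.0 + xi ** n1 - (1.0 - xi) ** n2) + a3

def f_t_R(xi, a2, a3, n3, n4):
    """Reverse transformation hardening function, eq. (32)."""
    return 0.5 * a2 * (1.0 + xi ** n3 - (1.0 - xi) ** n4) - a3

def closure_residual(a1, a2, a3, n1, n2, n3, n4, steps=2001):
    """Numerical check of the energy-closure condition (31) over a full cycle."""
    xi = np.linspace(0.0, 1.0, steps)
    forward = np.trapz(f_t_F(xi, a1, a3, n1, n2), xi)    # integral from 0 to 1
    reverse = -np.trapz(f_t_R(xi, a2, a3, n3, n4), xi)   # integral from 1 to 0
    return forward + reverse
```

A residual close to zero indicates that the chosen hardening constants satisfy (31), i.e. the hardening energy returns to its initial value after a full transformation loop.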
Forms related to the evolution equations associated with damage
The damage accumulation functions $f^{td-F}(\bar\Phi^{fwd})$ and $f^{td-R}(\bar\Phi^{rev})$ are based on a linear accumulation law [START_REF] Lemaitre | Mechanics of solid materials[END_REF] written in terms of the integer number N of cycles completed such that
$$
\frac{\Delta d}{D_{crit}} = \frac{\Delta N}{N_f}, \tag{33}
$$
where N f and D crit are the number of cycles and local damage associated with local catastrophic failure, respectively. Note that failure will be defined as the state at which d reaches the critical value D crit .
This linear accumulation law can be written to consider continuous evolutions over time:
$$
\frac{dd}{D_{crit}} = \frac{dN}{N_f} \;\Rightarrow\; \frac{\dot{d}}{D_{crit}} = \frac{\dot{N}}{N_f}. \tag{34}
$$
Considering that fatigue occurs only as a consequence of transformation cycles (full or partial), and that a full cycle corresponds to the evolution of the martensitic volume fraction from 0 to 1 and back to 0, (34) can be rewritten to consider both forward and reverse transformations as (see (14), (15))
$$
\dot{d}_F = \dot{\xi}_F\,\frac{D_{crit}}{2N_f} = \dot{\xi}_F\, f^{td-F}, \qquad
\dot{d}_R = \dot{\xi}_R\,\frac{D_{crit}}{2N_f} = \dot{\xi}_R\, f^{td-R}. \tag{35}
$$
In this way, the damage accumulation functions are defined. While it is postulated in Section 3 that damage may evolve actively during forward transformation, here we propose a general formulation that considers damage evolution during both forward and reverse transformation. Future experimental studies will be needed to ascertain the relative importance of forward versus reverse transformation as mechanisms for damage evolution.
From previous experimental studies, it has been shown that the fatigue life N f of SMA actuators is correlated to the cyclic mechanical work they perform [START_REF] Calhoun | Actuation fatigue life prediction of shape memory alloys under the constant-stress loading condition[END_REF].
During isobaric uniaxial fatigue testing (the main response motivating this more general study), the actuation work per unit volume done in each half cycle by a constant uniaxial stress σ distributed homogeneously over a specimen generating uniaxial strain ε^t is the product σε^t. As an empirical measure, this so-called actuation work neglects the small inelastic permanent strains generated during a single transformation cycle, such that σε^t ≈ σε^{tt-f}. It was shown that a power law was sufficient to capture the number of cycles to failure per
$$
N_f = \left(\frac{\sigma\,\varepsilon^{tt-f}}{C_d}\right)^{-\gamma_d}. \tag{36}
$$
Examining (24) in such a case of uniaxial loading and assuming small values of b, we see $\bar\Phi^{F}_{uniax} = \sigma H^{cur}(\sigma)$. For the full transformation considered in these motivating studies, $H^{cur}(\sigma) = \varepsilon^{tt-f}$ by definition of (26), and thus $\bar\Phi^{F}_{uniax} = \sigma\varepsilon^{tt-f}$. Motivated by this relationship, in this work we then make a generalized equivalence between the effectiveness of the past power law and its applicability in three dimensions (pending future multi-axial studies).
Finally, noting that in the case of full transformation under proportional loading (e.g., in the uniaxial case), it can be shown that ΦF = ΦR . This allows us to then introduce
$$
N_f = \left(\frac{\bar{\Phi}^{F}}{C_d}\right)^{-\gamma_d} - N_f^{0} = \left(\frac{\bar{\Phi}^{R}}{C_d}\right)^{-\gamma_d} - N_f^{0}. \tag{37}
$$
Here, $N_f^{0}$ is a parameter linked to the actuation work required for a static failure ($N_f = 0$), while $C_d$ and $\gamma_d$ are parameters characterizing the dependence of the number of cycles to failure on the actuation work. Combining (35) and (37), and currently assuming that damage accumulates equally during forward and reverse transformation, the final forms of the damage functions are:
$$
f^{td-F}(\bar{\Phi}) = \frac{D_{crit}}{2}\left[\left(\frac{\bar{\Phi}^{F}}{C_d}\right)^{-\gamma_d} - N_f^{0}\right]^{-1}, \qquad
f^{td-R}(\bar{\Phi}) = \frac{D_{crit}}{2}\left[\left(\frac{\bar{\Phi}^{R}}{C_d}\right)^{-\gamma_d} - N_f^{0}\right]^{-1}. \tag{38}
$$
Such forms, obtained from observations of isobaric uniaxial experiments, substantially define the evolution of damage and are applicable to a wide range of thermomechanical loadings. Obviously, a large experimental effort is required to validate this critical extension from one-dimensional (uniaxial) to three-dimensional conditions, where effects such as non-proportional loading or partial transformation must be considered; in this work only isobaric actuation cycles will be considered in the discussions of experimental validation.
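To make the damage law concrete, the sketch below evaluates the cycles-to-failure power law (37) and the per-half-cycle damage increment (38); accumulating f_td·|dξ| over full transformation cycles reproduces the linear law (33)-(35). The structure and names are illustrative only.

```python
def cycles_to_failure(phi_bar, C_d, gamma_d, N_f0):
    """Power-law estimate of the number of cycles to failure, eq. (37)."""
    return (phi_bar / C_d) ** (-gamma_d) - N_f0

def f_td(phi_bar, D_crit, C_d, gamma_d, N_f0):
    """Damage increment per unit of transformed volume fraction, eq. (38)."""
    return 0.5 * D_crit / cycles_to_failure(phi_bar, C_d, gamma_d, N_f0)

def accumulate_damage(phi_bar, D_crit, C_d, gamma_d, N_f0):
    """Count full transformation cycles until d reaches D_crit (failure)."""
    d, cycles = 0.0, 0
    while d < D_crit:
        d += 2.0 * f_td(phi_bar, D_crit, C_d, gamma_d, N_f0)  # |dxi| = 2 per full cycle
        cycles += 1
    return cycles
```

By construction, accumulate_damage returns approximately the N_f predicted by (37) for the same driving energy, which is the intended consistency between (33) and (38).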
Forms of the evolution equations associated with plasticity
The transformation plasticity magnitude function f^{tp}(ξ) is inspired by past works [START_REF] Lagoudas | Modelling of transformation-induced plasticity and its effect on the behavior of porous shape memory alloys. Part I: Constitutive model for fully dense {SMA}s[END_REF][START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Several conclusions are drawn considering also the experimental observations by [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] from actuation fatigue tests where specimens were thermally cycled under various constant stress levels (see Fig. 3):
• For moderate stress levels (200 MPa; Fig. 3a), after a rapid increase in accumulated plastic strain, a stable regime is observed (after 1000 cycles). The plastic strain accumulates linearly from cycle to cycle during this stable regime up to the point of failure.
Similar response has been observed on Ni60Ti40 alloys [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF] and NiTi alloys [START_REF] Lagoudas | Shape memory alloys: modeling and engineering applications[END_REF].
• For higher actuation stress levels (400 MPa; Fig. 3b), a transient regime is first observed, as with the moderate stress levels. While an apparent stable regime is observed, one can observe a slight increase in plastic strain accumulation from cycle to cycle prior to failure.
• At the highest feasible stress levels (600 MPa; Fig. 3c), the same initial transient regime is observed; after which an apparent stabilized regime is also observed, followed by an important and continuous increase of the plastic strain rate up to failure.
From these experiments the effect of stress amplitude is clear. At high stress levels the rate of change of the irrecoverable strain increases from about half of the lifetime of the sample onward. This behavior is characteristic of a change in the material's response, and is generally explained through stress concentration due to the development of defects [START_REF] Van Humbeeck | Cycling effects. Fatigue and degradation of shape memory alloys[END_REF][START_REF] Hornbogen | Review Thermo-mechanical fatigue of shape memory alloys[END_REF]. The functional form of the irrecoverable strain evolution should therefore account for that effect by considering a coupling with damage above a critical stress threshold, since this coupling is only observed at high stress. The following evolution law for plastic strains is thus proposed:
$$
\begin{aligned}
f^{tp-F}(p) &= w^{tp}\, C_0^{tp}\left(\frac{\bar{\Phi}^{F}}{C^{tp}}\right)^{\gamma_{tp}}\left(C_1^{tp}\, p + e^{-p/C_2^{tp}}\right) + \left(\frac{\sigma - \sigma_Y^{tp}}{\sigma_Y^{tp}}\right)^{\alpha_{tp}}\lambda^{tp}, \\
f^{tp-R}(p) &= (1 - w^{tp})\, C_0^{tp}\left(\frac{\bar{\Phi}^{R}}{C^{tp}}\right)^{\gamma_{tp}}\left(C_1^{tp}\, p + e^{-p/C_2^{tp}}\right) + \left(\frac{\sigma - \sigma_Y^{tp}}{\sigma_Y^{tp}}\right)^{\alpha_{tp}}\lambda^{tp},
\end{aligned} \tag{39}
$$
with
$$
\lambda^{tp} = \lambda^{tp}\!\left(\frac{d}{D_{crit}},\; 1 - \frac{D_{coa}}{D_{crit}},\; p_0\right) = \tilde{\lambda}^{tp}(\tilde{d}, \tilde{D}, p_0), \tag{40}
$$
$$
\tilde{\lambda}^{tp}(\tilde{d}, \tilde{D}, p_0) =
\begin{cases}
p_0\,\tilde{d}\,(1-\tilde{d})^{-2}, & \tilde{d} \leq h, \\
p_0\left[h\,(1-h)^{-2} + \left(2h\,(1-h)^{-3} + (1-h)^{-2}\right)(\tilde{d}-h)\right], & \tilde{d} > h,
\end{cases} \tag{41}
$$
with $h = 1 - \tilde{D}$. The function $\lambda^{tp}$ is a typical level-set power-law function that depends on the current value of damage, the critical value for damage $D_{crit}$, and a constant $D_{coa}$ that indicates the change of regime of the evolution of plastic strains.
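The following sketch evaluates this damage-driven TRIP amplification as reconstructed in (40)-(41); the branch above the threshold is the tangent continuation of the power law. Since the source equation was garbled in extraction, the reconstruction itself is an assumption and the snippet should be read only as an illustration.

```python
def lambda_tp(d, D_crit, D_coa, p0):
    """Damage-driven amplification of TRIP, following the reconstruction of (40)-(41)."""
    d_t = d / D_crit              # normalized damage, d~
    h = D_coa / D_crit            # h = 1 - D~, regime-change threshold
    if d_t <= h:
        return p0 * d_t * (1.0 - d_t) ** -2
    # tangent (first-order) continuation of the power law beyond the coalescence threshold
    slope = (1.0 - h) ** -2 + 2.0 * h * (1.0 - h) ** -3
    return p0 * (h * (1.0 - h) ** -2 + slope * (d_t - h))
```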
Methodology of Model Parameter Identification
The entire three-dimensional constitutive model for shape memory alloys experiencing cycling fatigue requires four sets of parameters to be calibrated:
• The thermoelastic model parameters,
• The parameters associated with phase transformation criteria (e.g., the conventional phase diagram),
• The parameters characteristic of damage accumulation,
• The parameters characteristic of TRIP accumulation.
These are summarized in Table 1. The thermoelastic parameters of martensite and austenite (e.g., Young's moduli, coefficients of thermal expansion, Poisson's ratios) are usually calibrated from mechanical and thermal uniaxial loadings, where loads are applied at temperatures outside of the transformation regions. The parameters qualifying the phase diagram ($M_s$, $M_f$, $A_s$, $A_f$, $C_A$, $C_M$), along with those contained in the functional form of the maximum transformation strain $H^{cur}$, are calibrated based on several isobaric thermal cycles prior to the accumulation of substantial damage or TRIP.
The identification of the thermodynamical parameters of the model ($\rho\Delta\eta_0$, $\rho\Delta E_0$, $a_1$, $a_2$, $a_3$, $Y_0^t$, $D$) and of the material parameters for phase transformation is detailed in [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF]. Given the complexity of the functional forms, especially for the evolution of damage and TRIP, the parameters are generally identified utilizing an optimization algorithm that minimizes a cost function, defined as the square difference between the experimental measurement and the simulated response, following the methodology found in [START_REF] Meraghni | Parameters identification of fatigue damage model for short glass fiber reinforced polyamide (PA6-GF30) using digital image correlation[END_REF]. The algorithm utilized in this work is a combined gradient-based/genetic optimization scheme, which has been used successfully to determine the transformation characteristics of three-dimensional SMA structures [START_REF] Chemisky | Analysis of the deformation paths and thermomechanical parameter identification of a shape memory alloy using digital image correlation over heterogeneous tests[END_REF]. The suggested identification procedure, used to present the validation cases in the next section, consists of the following sequence:
1. Determination of the parameters of the transformation strain functional form $H^{cur}$ via the optimization algorithm (the objective function is defined in terms of the transformation strain magnitude with respect to a stress value).
2. Determination of the other phase transformation parameters, with reconstruction of the phase diagram from isobaric thermal cycles.
3. Determination of the fatigue damage parameters, using the optimization algorithm to predict the number of cycles to failure according to the actuation energy. Using uniaxial isobaric loading, this last quantity is obtained experimentally for various stress levels. The parameters $C_d$, $\gamma_d$ and $N_f^0$ can be evaluated with this procedure, ensuring that these parameters take positive values. The parameter $D_{crit}$ has been estimated at 0.1 from the experimental observation of the crack density in the fatigue samples just prior to failure.
4. Evaluation of the parameters for the evolution of TRIP, using the evolution of the uniaxial irrecoverable strain $\varepsilon^p$ with respect to the number of cycles. In the present approach, the parameter $D_{coa}$ has been set to 0.05 (50% of $D_{crit}$), since a change of regime clearly occurs around the mid-life of the NiTiHf actuators loaded at high stress (see Fig. 3c), attributed to the evolution of damage. Note that $0 \leq D_{coa} \leq D_{crit} \leq 1$.
The identification of the parameter $w^{tp}$ requires specific thermomechanical loading paths; $w^{tp}$ usually takes values greater than 0.5 and is constrained to stay within the range $0 \leq w^{tp} \leq 1$ [START_REF] Chemisky | A constitutive model for cyclic actuation of high-temperature shape memory alloys[END_REF]. Since the half-cycles were not available in the database utilized to identify the parameters of the Ni60Ti40 and NiTiHf alloys, this value has been arbitrarily set to 0.6.
The remaining parameters $C_0^{tp}$, $C_1^{tp}$, $C^{tp}$, $\gamma_{tp}$, $C_2^{tp}$, $\sigma_Y^{tp}$, $\alpha_{tp}$, $p_0$ have been identified utilizing the optimization algorithm, based on the experimental results for the evolution of irrecoverable plastic strain as a function of the number of cycles. All these parameters must take positive values.
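A minimal sketch of such a least-squares identification is given below. It assumes a user-supplied routine that integrates the TRIP evolution law over cycles for a candidate parameter set (named simulate_trip here purely for illustration), and uses SciPy's differential evolution as a stand-in for the hybrid genetic/gradient scheme actually employed in this work.

```python
import numpy as np
from scipy.optimize import differential_evolution

def trip_cost(params, experiments, simulate_trip):
    """Least-squares mismatch between measured and simulated TRIP strain histories."""
    cost = 0.0
    for stress, cycles, eps_p_exp in experiments:       # one record per stress level
        eps_p_sim = simulate_trip(params, stress, cycles)
        cost += np.sum((np.asarray(eps_p_sim) - np.asarray(eps_p_exp)) ** 2)
    return cost

# 'bounds' keeps every TRIP parameter positive, as required by the model; the global
# evolutionary search can then be refined with a local gradient-based minimizer.
# result = differential_evolution(trip_cost, bounds, args=(experiments, simulate_trip))
```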
Comparison of Experimental Results
This new model for the description of functional and structural damage has been specifically formulated to capture the combined effect of phase transformation, transformation-induced plasticity and fatigue damage of polycrystalline SMAs subjected to general three-dimensional thermomechanical loading, and has been implemented in the 'smartplus' library [START_REF] Chemisky | smartplus : Smart Material Algoritjms and Research Tools[END_REF]. While the capabilities of such a modeling approach to capture the effects of phase transformation have already been demonstrated by [START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF] and [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF], here we specifically consider the evolution of damage and TRIP strains.
The set of experiments utilized to validate the proposed model considers specimens loaded uniaxially to different constant stress levels (i.e., in the austenitic condition) and then subjected to thermally-induced transformation cycles up to failure. The parameters are taken from [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF], where NiTiHf actuators were tested at three relatively high constant stress levels (i.e., 200, 400 and 600 MPa, cf. Fig. 3) across a temperature range from approximately 300 K to 500 K. A fourth stress level of 300 MPa is used for validation. Since the full characterization of the elastic response was not addressed in the source work, standard values for NiTiHf alloys are applied. An average of the transformation strain magnitudes generated over full cycles at each stress level is used to define the average experimental value shown in Fig. 4 and Fig. 5.
Figure 4: Dependence of maximum transformation strain magnitude on applied stress for the considered NiTiHf alloy [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF]. Data is fit using the functional form H cur (26) based on the average recovered transformation strain over all cycles at each stress levels considered.
The parameters that define the evolution equation for the damage internal variable have been identified based on the number of cycles to failure of the SMA actuators thermally cycled at different stress levels, and are displayed in Table 2. The comparison between the fatigue database for various stress levels and the model simulation is presented in Figure 5, where the actuation energy density in this one-dimensional (uniaxial) setting is equivalent to $\bar{\Phi} = \bar{\Phi}^{F} = \bar{\Phi}^{R} = \sigma H^{cur}(\sigma)$. Note that the stress levels of 200, 400 and 600 MPa have been used for the calibration of the damage model, while data for the stress level of 300 MPa
(2 tests) are used to validate predictions.
The parameters related to the evolution of TRIP strains have been identified based on the evolution of residual strains as measured at high temperature (i.e., in the austenitic condition). The parameter identification algorithm used is a hybrid genetic–gradient-based method developed by [START_REF] Chemisky | Analysis of the deformation paths and thermomechanical parameter identification of a shape memory alloy using digital image correlation over heterogeneous tests[END_REF] and applied here to the least-square difference between the experimental and numerical irrecoverable strains for the three stress levels tested (i.e., 200, 400 and 600 MPa). It is noted, according to the comparison presented in Figure 7, that both functional fatigue (i.e., TRIP) and structural fatigue (i.e., total life) are accurately captured by the model, since both the number of cycles to failure and the level of irrecoverable strain are correctly described for the three stress levels tested. All three stages of plastic strain evolution with cycles are represented, and the rapid accumulation of TRIP strain magnitude towards the end of the lifetime of the actuator is clearly visible in the simulated results. The typical behavior of an actuator is represented here. Note in particular that the upward shift in transformation temperatures with increasing cycle count is captured (the transformation temperatures, not provided in [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF], are calibrated here using common values). The second experiment (Fig. 8) used to validate the model focuses only on functional fatigue (i.e., TRIP), whereby an SMA actuator is subjected to 80 thermal transformation cycles under constant load corresponding to a uniaxial stress level of 200 MPa [START_REF] Lagoudas | Shape memory alloys: modeling and engineering applications[END_REF]; the derived properties are given in Table 3. The actuator is at an early stage of its expected lifetime, so only the first and second stages of the evolution of plastic strain are captured (see Fig. 8a). Note that the evolution of the transformation strain with temperature is well captured, and that the shift of the transformation temperatures between the 1st and the 80th cycle is again accurately described by the proposed model (see Fig. 8b). The evolution of the non-zero total strain components with respect to temperature is shown in Fig. 9. While the evolution equation for irrecoverable strain is based on the Mises equivalent stress, it is seen that the irrecoverable strain, as well as the transformation strain, follows the direction of the imposed stress. The relative importance of the shear stress component versus the uniaxial stress component with regard to the amount of irrecoverable strain is also highlighted here.
Figure 1: Examples of damage (micro-crack) in nickel-rich NiTi material after thermally-induced actuation cycling (2840 cycles at 200 MPa). Note the micro-cracks initiating within precipitates, resulting in relatively small observable strains but eventually leading to specimen failure [START_REF] Agboola | Mechanics and Behavior of Active Materials; Integrated System Design and Implementation[END_REF]
Figure 3: Transformation and plastic strain evolution in NiTiHf actuators under various isobaric loads (Wheeler et al., 2015) (a-c) and the comparison of these three (d). The strains are measured at high and low temperature, with a zero reference based on the beginning of the first cooling cycle. No necking is observed in any sample.
Figure 5: Number of cycles to failure as a function of the actuation energy $\bar\Phi$ for the considered NiTiHf alloy. Results from the isobaric tests performed at 200, 400, and 600 MPa used for calibration; data from 300 MPa tests used for validation.
Figure 6: Spectrum of the evolution of damage with respect to the number of cycles. Numbers of cycles to failure from the isobaric tests performed at 200, 400, and 600 MPa utilized for calibration are shown as blue dots; the 300 MPa validation tests are shown in orange.
Figure 7: Comparison between the evolution of irrecoverable strains in NiTiHf actuators under various isobaric loads [START_REF] Wheeler | Modeling of thermo-mechanical fatigue and damage in shape memory alloy axial actuators[END_REF] with the model simulations: a), b), and c) show comparisons of the evolution of TRIP strains for the calibration stress levels of 200, 400, and 600 MPa, respectively; d) shows an example of a simulation of the evolution of the response of an actuator for the first, 100th, 200th, and the last (309th) cycle prior to failure. The blue and red dots correspond to the experimentally measured strains at high and low temperature for the considered cycles, respectively.
Figure 8: Comparison between the evolution of irrecoverable strains of a NiTi actuator under an isobaric load (uniaxial stress level of 200 MPa; Lagoudas (2008)) with the model simulations: a) comparison of the evolution of TRIP strains; b) comparison of the full strain-temperature response of the actuator for the first and 80th cycles.
Figure 9: Comparison of the response of a NiTi actuator for the first and 80th cycles, considering two multiaxial isobaric loads: 1) continuous grey line: σ11 = 50 MPa, σ12 = σ21 = 100 MPa, all other components of the stress tensor being 0; and 2) dashed grey line: σ11 = 100 MPa, σ12 = σ21 = 50 MPa, all other components of the stress tensor being 0. a) Comparison of the total uniaxial strain (ε11)-temperature response; b) comparison of the total shear strain (ε12)-temperature response.
To describe functional and structural fatigue in shape memory alloy actuators, a new phenomenological model has been proposed that considers the coupled accumulation of damage and transformation-induced plasticity and is inspired by recent three-dimensional models for phase transformation based on the thermodynamics of irreversible processes. Structural fatigue is described using an evolution equation for damage based on the rate of transformation energy relative to the martensite volume fraction $\bar\Phi$. Such a description succeeds in capturing the number of cycles to failure of the SMA actuators thermally cycled at different stress levels. The evolution of irrecoverable strains (i.e., functional fatigue) is described based on the same rate of transformation energy, especially to describe the first (transient) and second (steady-state) stages of transformation-induced plastic strain evolution. To represent the third stage (accelerated accumulation), a power law that depends on the level of accumulated damage is applied to represent the effect of structural fatigue on the development of irrecoverable strains. It is demonstrated that this formulation can accurately describe the accumulation of TRIP strains for the three considered actuators loaded at different stress levels. It is finally shown that the expression of the transformation limits represents the shift in transformation temperatures observed during cyclic loading of actuators. These various aspects combine to make this model the most complete description of shape memory alloy fatigue to date.
Table 1: Required material parameters and associated material properties

Parameter Type           | Set of Constants                                                           | Specific Response
Thermoelastic properties | Young's moduli, Poisson's ratios, coefficients of thermal expansion, etc.  |
Phase diagram            | M_s, M_f, A_s, A_f, C_A, C_M                                               |
Phase transformation     | H_min, H_sat, k, σ_crit                                                    | Transformation strain
                         | n_1, n_2, n_3, n_4                                                         | Smoothness of transformation
Damage                   | D_crit, C_d, γ_d, N_f^0                                                    | Evolution law for damage
TRIP                     | w^tp, C_0^tp, C_1^tp, C^tp, γ_tp, C_2^tp, σ_Y^tp, α^tp, p_0, D_coa         |
Table 2: Identified model parameters for the NiTiHf alloy

Model Parameters        | Identified value
E_A = E_M               | 70000 MPa
ν_A = ν_M               | 0.3
α_A = α_M               | 0 K^-1
M_s, M_f, A_s, A_f      | 293 K, 273 K, 313 K, 333 K
C_A = C_M               | 7 MPa.K^-1
n_1 = n_2 = n_3 = n_4   | 0.2
H_min                   | 0.005
H_sat                   | 0.0277
k                       | 0.0172 MPa^-1
σ_crit                  | 120 MPa
b                       | 0
n                       | 2
D_crit                  | 0.14
D_coa                   | 0.07
C_d                     | 85689.2 MPa
γ_d                     | 1.040
N_f^0                   | 7000 cycles
w^tp                    | 0.6
C_0^tp                  | 0.000245
C_1^tp                  | 0.000667
C^tp                    | 6.144682 MPa
γ_tp                    | 4.132985
C_2^tp                  | 0.006239
σ_Y^tp                  | 300 MPa
α^tp                    | 3.720168
p_0                     | 1.861436
Table 3: Identified model parameters for the NiTi alloy

Model Parameters        | Identified value
E_A, E_M                | 47000 MPa, 24000 MPa
ν_A = ν_M               | 0.3
α_A = α_M               | 0 K^-1
M_s, M_f, A_s, A_f      | 277.15 K, 260.15 K, 275.15 K, 291.15 K
C_A, C_M                | 8.3, 6.7 MPa.K^-1
n_1 = n_2 = n_3 = n_4   | 0.1
H_min                   | 0.05
If all four exponents equal 1, the original model of[START_REF] Boyd | A thermodynamical constitutive model for shape memory materials. Part I. The monolithic shape memory alloy[END_REF] is recovered, see Appendix A of[START_REF] Lagoudas | Constitutive model for the numerical analysis of phase transformation in polycrystalline shape memory alloys[END_REF].
It is important to note that none of the tests considered was associated with any observed localized necking behavior, and that a nearly constant applied stress can therefore be assumed.
The same result can be obtained by utilizing the methodology of[START_REF] Coleman | Thermodynamics with Internal State Variables[END_REF] for thermodynamics with internal state variables; however, the issues raised by[START_REF] Lubliner | On the thermodynamic foundations of non-linear solid mechanics[END_REF] that limit the case to the elastic response should also be considered.
Appendix A: Fundamentals of Thermodynamics of irreversible processes

Considering a small strain ε at a given material point, the strong form of the first law of thermodynamics can be expressed as
$$
\dot{E} = \sigma : \dot{\varepsilon} - \mathrm{div}\,q + \rho R, \tag{A.1}
$$
where q is the heat flux, R denotes the heat sources per unit mass, and σ is the Cauchy stress.
Similarly, the second law of thermodynamics is written in the strong form as [START_REF] Chatzigeorgiou | Periodic homogenization for fully coupled thermomechanical modeling of dissipative generalized standard materials[END_REF]:
where η = ρς is the entropy per unit volume. Combining equations (A.1) and (A.2) to eliminate extra heat sources yields
where γ is the internal entropy production per unit volume. We can also define r as the difference between the rates of the mechanical work and the internal energy, and Q as the thermal energy per unit volume provided by external sources, which gives
Further, the internal entropy production can be split into two contributions, where γ loc is the local entropy production (or intrinsic dissipation) and γ con is the entropy production due to heat conduction, giving
The two laws of thermodynamics can then be simply expressed as
Combining equations (A.3), (A.5), and (A.6), one can re-express the first principle of thermodynamics as:
When designing a constitutive law, especially with the aim of tracking fatigue damage and permanent deformation in a material, it is very useful to separate the various mechanisms into categories (e.g., elastic or inelastic, reversible or irreversible, dissipative or non-dissipative)
following the methodology proposed by [START_REF] Chatzigeorgiou | Thermomechanical behavior of dissipative composite materials[END_REF]. Some of these mechanisms are responsible for permanent changes in material microstructure. To describe all observable phenomena it is required to express E in terms of the proper variables capable of expressing the material state under every possible thermomechanical loading path. Following the approach of [START_REF] Germain | Continuum thermodynamics[END_REF], the internal energy E is taken to be a convex function with regards to its arguments: the strain tensor ε, the entropy η and a set of internal state variables ζ such that
The following definitions for the derivatives of the internal energy are postulated:
For the purposes of further development, it is convenient to introduce the Gibbs free energy potential G by employing the following partial Legendre transformation [START_REF] Maugin | The thermomechanics of plasticity and fracture[END_REF]):
Expressing Ġ in terms of its arguments and using (A.13), the last expression reduces to (A.15). This relation, in conjunction with (A.6), is used to identify proper evolution equations for the internal state variables. Usually, the mechanical and thermal dissipations are assumed to be decoupled and non-negative, i.e. γ loc ≥ 0 and γ con ≥ 0. Such a constitutive model is intended to be applied within the scope of Finite Element Analyses (FEA). In most FEA software, the variables are updated following a procedure that includes three loops. A loading step is typically partitioned into time increments, denoted by ∆x. The increment during the global FEA solver is denoted ðx. The increment during the Newton-Raphson scheme in the material constitutive law, which is described below, is denoted by the symbol δx. Such steps consist of finding the updated value of the stress tensor and of the internal variables of the model. In a backward Euler fully implicit numerical scheme, the value of a given quantity x is updated from the previous time step n to the current n + 1 per
Such an implicit relation is usually solved iteratively during the FEA calculations, and the current value is updated from iteration m to iteration m + 1 per
The return mapping algorithm is used in the constitutive law algorithm and consists of two parts: (i) initially, it is assumed that no evolution of the internal variables occurs, and thus that the material behaves linearly. This allows one to consider a thermoelastic prediction of all the fields. In such a prediction, the stress tensor and the other dependent quantities are estimated, while the internal variables are set to their initial value at the beginning of the time increment; (ii) the stress tensor and the internal variables are corrected such that the solution meets the requirements of the specified constitutive law (the forward transformation, reverse transformation, or both).
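A schematic sketch of this elastic-prediction/inelastic-correction loop at a single material point is given below; the callables passed as arguments stand for the operations described in the text and are not functions of the smartplus library.

```python
def return_mapping(state, d_eps, d_theta, predict, criteria, correct,
                   tol=1e-8, max_iter=50):
    """Schematic elastic-prediction / inelastic-correction loop at one material point.

    predict, criteria and correct are user-supplied callables implementing the
    thermoelastic prediction, the transformation criteria (Phi_F, Phi_R), and the
    correction of the internal variables, respectively."""
    # (i) thermoelastic prediction: internal variables frozen at their previous values
    trial = predict(state, d_eps, d_theta)
    # (ii) inelastic correction: iterate while a transformation criterion is violated
    for _ in range(max_iter):
        phi_F, phi_R = criteria(trial)
        if phi_F <= tol and phi_R <= tol:
            break                  # prediction admissible, no (further) correction needed
        trial = correct(trial, phi_F, phi_R)
    return trial
```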
During the return mapping algorithm, the total current strain and temperature are held constant such that:
where k denotes the increment number during the correction loop. The Kuhn-Tucker set of inequalities can be summarized as:
Note that the size of the system to solve might therefore depend on the activated mechanism(s). We utilized the Fischer-Burmeister complementarity function (Fischer, 1992) to transform the Kuhn-Tucker set of inequalities that typically results from dissipative mechanisms into a set of equations. Such a formulation results in a smooth complementarity problem [START_REF] Kiefer | Implementation of numerical integration schemes for the simulation of magnetic SMA constitutive response[END_REF], which does not require knowledge of the number of active sets.
This methodology has already been utilized by [START_REF] Schmidt-Baldassari | Numerical concepts for rate-independent single crystal plasticity[END_REF] in the context of rate-independent multi-surface plasticity, by [START_REF] Bartel | A micromechanical model for martensitic phase-transformations in shape-memory alloys based on energy-relaxation[END_REF]; [START_REF] Bartel | Thermodynamic and relaxation-based modeling of the interaction between martensitic phase transformations and plasticity[END_REF] for martensitic phase transformation modeling, and by [START_REF] Kiefer | Implementation of numerical integration schemes for the simulation of magnetic SMA constitutive response[END_REF] for the simulation of the constitutive response of magnetic SMAs.
At this point the methodology presented in [START_REF] Chatziathanasiou | Modeling of coupled phase transformation and reorientation in shape memory alloys under nonproportional thermomechanical loading[END_REF] is briefly summarized. The Fischer-Burmeister technique transforms a Kuhn-Tucker set of inequalities into an equivalent equation:
This equation has two sets of roots: either Φ_m ≤ 0 and ṡ_m = 0, which means that the mechanism m is not activated, or Φ_m = 0 and ṡ_m ≥ 0, which indicates that the mechanism m is activated and a solution for ṡ_m (and consequently for all internal variables V_m) is sought.
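Since the displayed Fischer-Burmeister equation itself was lost in extraction, the sketch below assumes the standard form of that function; the sign convention (pairing -Φ with ṡ) is therefore an assumption to be checked against the original paper.

```python
import numpy as np

def fischer_burmeister(phi, s_dot):
    """Fischer-Burmeister residual for the pair (phi <= 0, s_dot >= 0, phi*s_dot = 0).

    The residual vanishes exactly in the two cases discussed in the text: either the
    mechanism is inactive (phi <= 0, s_dot = 0) or active (phi = 0, s_dot >= 0)."""
    a, b = -phi, s_dot                 # both must be non-negative at a solution
    return np.sqrt(a * a + b * b) - a - b
```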
Next, the elastic prediction -inelastic correction method is utilized to solve the unconstrained system of equations using a Newton-Raphson scheme. The inelastic transformation strain is recalled here:
During a time increment n, at the m-th iteration of the solver and the k-th iteration of the constitutive law algorithm, the transformation strain is thus written as:
To avoid lengthy expressions in the sequel, the iteration numbers will be omitted. Any quantity x denotes the x (n+1)(m+1)(k) , the increment ∆x denotes the ∆x (n+1)(m+1)(k) , the increment δx denotes the δx (n+1)(m+1)(k) and the increment ðx denotes the ðx (n+1)(m+1) . The convex cutting plane (CCP) [START_REF] Simo | Computational Inelasticity[END_REF]; [START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF] is utilized to approximate the evolution of the inelastic strain as:
The comparison and the efficiency evaluation of the convex cutting plane and the closest point projection have been discussed in detail in [START_REF] Qidwai | On the Thermodynamics and Transformation Surfaces of Polycrystalline {N}i{T}i Shape Memory Alloy Material[END_REF], and it has been shown that the convex cutting plane algorithm is more efficient in most cases, even if it may require more steps to converge when strongly non-proportional loadings are considered. The total current strain and temperature are held constant in displacement-driven FEA. Assuming an additive decomposition of strains and the previously introduced constitutive relations between elastic strain and stress, and between thermal strain and temperature, provides:
Since the variations of the elastic compliance tensor and the thermal expansion tensor depend on the volume fraction of forward or reverse martensitic transformation, as expressed in (A.11), it is therefore possible to define a total stress-influence evolution tensor, such as:
Thus, with the help of (A.8), (A.10) and (A.11), (A.9) is now written as:
Recall that the transformations (forward and reverse) depend on the stress and the internal variables through the definition of the thermodynamical forces. Applying the chain rule to these criteria yields (A.14), with:
The numerical resolution consists of solving the following system of complementary Fischer-Burmeister functions (A.16) using a Newton-Raphson scheme, with:
(A.17)
In the above equations, B is a matrix containing the partial derivatives of Φ, such that $B_{lj} = -\frac{\partial \Phi_l}{\partial \sigma} : \kappa_j + K_{lj}$. The iterative loops stop when the convergence criteria on all the complementary functions have been fulfilled.
Appendix A.1. Determination of the thermomechanical quantities and tangent moduli

Since we require the computation of the solver iteration increments $\Delta\varepsilon^{(n+1)(m+1)} = \Delta\varepsilon^{(n+1)(m)} + \eth\varepsilon^{(n+1)(m)}$, the mechanical tangent modulus $D^{\varepsilon}$ is required:
To compute such quantities, the criterion is utilized together with (A.14), which gives (A.20). Considering now only the subset of activated mechanisms, i.e. the ones that satisfy $\eth\Phi^{l} = 0$ (A.21), the following holds true (the superscript l shall now refer to any activated mechanism, and only those):
The set of non-linear equations can be rearranged in a matrix-like format (A.23), where $\xi = \{\xi_F, \xi_R\}$. The components of the reduced sensitivity tensor B that correspond to the active load mechanism variables, with respect to the strain and temperature, are:
Note that second-order tensors and scalar quantities can be defined for the influence of strain and temperature, respectively, on a unique lead mechanism s j : ðξ j = P j ε ðε + P j θ ðθ.
Highlights
This work presents new developments in the thermomechanical constitutive modeling of structural and functional fatigue of shape memory alloys (SMAs).
It captures the evolution of irrecoverable strain that develops during cyclic actuation of SMAs.
It describes the evolution of structural fatigue through the evolution of an internal variable representative of damage. Final failure is predicted when this variable reaches a critical value.
The full numerical implementation of the model in an efficient scheme is described.
Experimental results associated with various thermomechanical paths are compared to the analysis predictions, including fatigue structural lifetime prediction and evolution of the response during cyclic actuation.
The analysis of three-dimensional loading paths is considered.
01762254 | en | [ "info.info-ai" ] | 2024/03/05 22:32:13 | 2012 | https://hal.science/hal-01762254/file/PEROTTO%20-%20EUMAS%202011%20%28pre-print%20version%29.pdf
Perotto
Recognizing internal states of other agents to anticipate and coordinate interactions
Keywords: Factored Partially Observable Markov Decision Process (FPOMDP), Constructivist Learning Mechanisms, Anticipatory Learning, Model-Based RL
In multi-agent systems, anticipating the behavior of other agents constitutes a difficult problem. In this paper we present the case where a cognitive agent is inserted into an unknown environment composed of different kinds of other objects and agents; our cognitive agent needs to incrementally learn a model of the environment dynamics, doing it only from its interaction experience; the learned model can then be used to define a policy of actions. It is relatively easy to do so when the agent interacts with static objects, with simple mobile objects, or with trivial reactive agents; however, when the agent deals with other complex agents that may change their behavior according to some non-directly-observable internal states (like emotional or intentional states), the construction of a model becomes significantly harder. The complete system can be described as a Factored and Partially Observable Markov Decision Process (FPOMDP); our agent implements the Constructivist Anticipatory Learning Mechanism (CALM) algorithm, and the experiment (called meph) shows that the induction of non-observable variables enables the agent to learn a deterministic model of most of the universe events, allowing it to anticipate other agents' actions and to adapt to them, even if some interactions appear as non-deterministic at first sight.
Introduction
Trying to escape from AI classic (and simple) maze problems toward more sophisticated (and therefore more complex and realistic) agent-based universes, we are led to consider some complicating conditions: (a) the situatedness of the agent, which is immersed into an unknown universe, interacting with it through limited sensors and effectors, without any holistic perspective of the complete environment state, and (b) without any a priori model of the world dynamics, which forces it to incrementally discover the effect of its actions on the system in an on-line experimental way; to make matters worse, the universe where the agent is immersed can be populated by different kinds of objects and entities, including (c) other complex agents, and in this case, the task of learning a predictive model becomes considerably harder.
We are especially concerned with the problem of discovering the existence of other agents' internal variables, which can be very useful to understand their behavior. Our cognitive agent needs to incrementally learn a model of its environment dynamics, and the interaction with other agents represents an important part of it. It is relatively easy to construct a model when the agent interacts with static objects, with simple mobile objects, or with trivial reactive agents; however, when dealing with other complex agents which may change their behavior according to some non-directly-observable internal properties (like emotional or intentional states), the construction of a model becomes significantly harder. The difficulty increases because the reaction of each agent can appear to our agent as non-deterministic behavior, given the information provided by the perceptive elements of the situation.
We can anticipate at least two points of interest addressed by this paper: the first one is about concept creation, the second one is about agent inter-subjectivity. In the philosophical and psychological research community concerned with cognitive issues, the challenge of understanding the capability to develop new abstract concepts has always been a central point in most theories about how the human being can deal with and adapt to an environment as complex and dynamic as the real world [START_REF] Murphy | The big book of concepts[END_REF], [START_REF] Piaget | [END_REF]. In contrast to the kind of approach usually adopted in AI, which easily slips into the strategy of treating exceptions and lack of information by using probabilistic methods, many cognitive scientists insist that the human mind strategy looks more like accommodating the disturbing events observed in reality by improving its model with new levels of abstraction, new representation elements, and new concepts. Moreover, the intricate problem of dealing with other complex agents has also been studied by cognitive science for some time, from psychology to neuroscience. A classical approach to it is the famous "ToM" assumption (Astington, et al. 1988), [START_REF] Bateson | Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology[END_REF], [START_REF] Dennett | The Intentional Stance[END_REF], which claims that the human being has developed the capability to attribute mental states to others, in order to represent their beliefs, desires and intentions, and so be able to understand their behavior.
In this paper, we use the Constructivist Anticipatory Learning Mechanism (CALM), defined in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], to solve the "meph problem", where a cognitive agent is inserted into an environment constituted of other objects and also of some other agents, which are non-cognitive in the sense that they do not learn anything, but which are similar to our agent in terms of structure and possible behaviors. CALM is able to build a descriptive model of the system where the agent is immersed, inducing, from experience, the structure of a factored and partially observable Markov decision process (FPOMDP). Some positive results have been achieved due to the use of 4 integrated strategies [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], (Perotto et al. 2007), [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF]Alvares, 2007): (a) the mechanism takes advantage of the situated condition presented by the agent, constructing a description of the system regularities relative to its own point of view, which allows it to set a good behavior policy without the necessity of "mapping" the entire environment; (b) the learning process is anchored on the construction of an anticipatory model of the world, which can be more efficient and more powerful than traditional "model free" reinforcement learning methods that directly learn a policy; (c) the mechanism uses some heuristics designed for well-structured universes, where conditional dependencies between variables exist on a limited scale, and where most of the phenomena can be described in a deterministic way, even if the system as a whole is not, representing what we call a partially deterministic environment; this characteristic seems to be widely common in real-world problems; (d) the mechanism is prepared to discover the existence of hidden or non-observable properties of the universe, which cannot be directly perceived by the agent sensors, but which can explain some observed phenomena. This last characteristic is fundamental to solve the problem presented in this article because it enables our agent to discover the existence of internal states in other agents, which is necessary to understand their behavior and then to anticipate it. Further discussion about situatedness can be found in [START_REF] Wilson | How to Situate Cognition: Letting Nature Take its Course[END_REF], [START_REF] Beer | A dynamical systems perspective on agent-environment interactions[END_REF], and [START_REF] Suchman | Plans and Situated Actions[END_REF].
Thus, the basic idea of this paper is to describe the CALM algorithm, proposed in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], presenting its features and placing it into the Markov Decision Process (MDP) framework panorama. The discussion is supported, on one side, by these introductory philosophical conjectures, and on the other side, by the meph experiment, which creates a multi-agent scenario where our agent needs to induce the existence of internal variables of the other agents. In this way, the paper presents some positive results in both theoretical and practical aspects. In what follows, section 2 overviews the MDP framework, section 3 describes the CALM learning mechanism, section 4 introduces the experiment and shows the acquired results, and section 5 concludes the paper, arguing that the discovery and induction of hidden properties of the system can be a promising strategy to model other agents' internal states.
Markov Decision Process Framework
Markov Decision Process (MDP) and its extensions constitute a quite popular framework, largely used for modeling decision-making and planning problems. An MDP is typically represented as a discrete stochastic state machine; at each time cycle the machine is in some state s; the agent interacts with the process by choosing some action a to carry out; then, the machine changes into a new state s', and gives the agent a corresponding reward r; a given transition function δ defines the way the machine changes according to s and a. Solving an MDP is finding the optimal (or near-optimal) policy of actions in order to maximize the rewards received by the agent over time. When the MDP parameters are completely known, including the reward and the transition functions, it can be mathematically solved by dynamic programming (DP) methods. When these functions are unknown, the MDP can be solved by reinforcement learning (RL) methods, designed to learn a policy of actions on-line, i.e. at the same time the agent interacts with the system, by incrementally estimating the utility of state-actions pairs and then by mapping situations to actions [START_REF] Sutton | Reinforcement Learning: an introduction[END_REF].
The Classic MDP
Markov Decision Process first appeared (in the form we know) in the late 1950s [START_REF] Bellman | A Markovian Decision Process[END_REF], [START_REF] Howard | Dynamic Programming and Markov Processes[END_REF], reaching a concrete popularity in the Artificial Intelligence (AI) research community from the 1990s [START_REF] Puterman | Markov Decision Processes: discrete stochastic dynamic programming[END_REF]. Currently the MDP framework is widely used in the domains of Automated Control, Decision-Theoretic Planning [START_REF] Blythe | Decision-Theoretic Planning[END_REF], and Reinforcement Learning [START_REF] Feinberg | Handbook of Markov Decision Processes: methods and applications[END_REF]. A "standard MDP" represents a system through the discretization and enumeration of its state space, similar to a state machine in which the transition function can be non-deterministic. The flow of an MDP (the transition between states) depends only on the system current state and on the action taken by the agent at the time. After acting, the agent receives a reward signal, which can be positive or negative if certain particular transitions occur.
However, for a wide range of complex (including real world) problems, the complete information about the exact state of the environment is not available. This kind of problem is often represented as a Partially Observable Markov Decision Process (POMDP) [START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF]. The idea of representing non-observable elements in a MDP is not new [START_REF] Astrom | Optimal Control of Markov Decision Processes with Incomplete State Estimation[END_REF], [START_REF] Smallwood | The optimal control of partially observable Markov decision processes over a finite horizon[END_REF][START_REF] Smallwood | The optimal control of partially observable Markov decision processes over a finite horizon[END_REF], but became popular with the revived interest on the framework, occurred in the 1990s (Christman, 1992), [START_REF] Kaelbling | Acting optimally in partially observable stochastic domains[END_REF][START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF]. The POMDP provides an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which the system states are observable only indirectly, via a set of imperfect, incomplete or noisy perceptions. In a POMDP, the set of observations is different from the set of states, but related to them by an observation function, i.e. the underlying system state s cannot be directly perceived by the agent, which has access only to an observation o. The POMDP is more powerful than the MDP in terms of modeling (i.e. a larger set of problems can be described by a POMDP than by an MDP), but the methods for solving them are computationally even more expensive, and thus applicable in practice only to very simple problems [START_REF] Hauskrecht | Value-function approximations for partially observable Markov decision processes[END_REF], [START_REF] Meuleau | Solving POMDPs by Searching the Space of Finite Policies[END_REF], [START_REF] Shani | Model-Based Online Learning of POMDPs[END_REF].
The main bottleneck in the use of MDPs or POMDPs is that representing complex problems makes the state space grow until it quickly becomes intractable. Real-world problems are generally complex, but fortunately most of them are quite well structured. Many large MDPs have significant internal structure and can be modeled compactly if that structure is exploited in the representation. The factorization of states is an approach that exploits this characteristic. In the factored representation, a state is implicitly described by an assignment to some set of state variables. Thus, the complete enumeration of the state space is avoided, and the system can be described by referring directly to its properties. The factorization of states makes it possible to represent the system in a very compact way, even if the corresponding MDP is exponentially large [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF], [START_REF] Shani | Efficient ADD Operations for Point-Based Algorithms[END_REF]. When the structure of the Factored Markov Decision Process (FMDP) [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF] is completely described, some known algorithms can be applied to find good policies quite efficiently [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. However, research concerning the discovery of the structure of an underlying system from incomplete observation is still incipient [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], [START_REF] Degris | Factored Markov Decision Processes[END_REF].
Factored and Partially Observable MDP
In order to increase the range of representable problems, the classic MDP model can be extended to include factorization of states and partial observation; it is then called a Factored Partially Observable Markov Decision Process (FPOMDP). To factor the model, the description of a given state s in the original model is decomposed and replaced by a set {x1, x2, ... xn} in the extended model; the action a becomes a set {c1, c2, ... cm}; the reward signal r becomes {r1, r2, ... rk}; and the transition function δ is replaced by a set of transformation functions {T1, T2, ... Tn}.
An FPOMDP (Degris; [START_REF] Degris | Factored Markov Decision Processes[END_REF]) can be formally defined as a 4-tuple {X, C, R, T}. The finite non-empty set of system properties or variables X = {X1, X2, ... Xn} is divided into two subsets, X = P ∪ H, where the subset P contains the observable properties (those that can be accessed through the agent's sensory perception), and the subset H contains the hidden or non-observable properties; each property Xi is associated with a specified domain, which defines the values the property can assume. C = {C1, C2, ... Cm} represents the controllable variables, composing the agent's actions; R = {R1, R2, ... Rk} is the set of (factored) reward functions, of the form Ri : Pi → IR; and T = {T1, T2, ... Tn} is the set of transformation functions, of the form Ti : X × C → Xi, defining the system dynamics. Each transformation function can be represented as a Dynamic Bayesian Network (DBN) [START_REF] Dean | A model for reasoning about persistence and causation[END_REF], which is an acyclic, oriented, two-layer graph. The nodes of the first layer represent the environment state at time t, and the nodes of the second layer represent the next state, at t+1 [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. A stationary policy π is a mapping X → C where π(x) defines the action to be taken in a given situation. The agent must learn a policy that optimizes the cumulative rewards received over a potentially infinite time horizon. Typically, the solution π* is the policy that maximizes the expected discounted reward sum, as indicated in the classical Bellman optimality equation (1957), here adapted to our FPOMDP notation.
V^π*(x) = R(x) + max_c [ γ · Σ_x' P(x' | x, c) · V^π*(x') ]
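As an illustration of how the factored transition model enters this Bellman backup, the sketch below performs value iteration on a tiny FMDP whose joint transition probability is the product of per-variable distributions (DBN-style). The toy model, the variable domains and all numbers are hypothetical and chosen only to make the code self-contained.

```python
import itertools

# Toy factored MDP: two binary state variables and two actions.
DOMAINS = [(0, 1), (0, 1)]
ACTIONS = ("c0", "c1")
STATES = list(itertools.product(*DOMAINS))

def p_var(i, value, x, c):
    """Hypothetical per-variable transition P_i(X_i' = value | x, c)."""
    if c == "c1":                      # action c1 tends to switch variable i
        p_one = 0.8 if x[i] == 0 else 0.2
    else:                              # action c0 tends to keep it
        p_one = 0.9 if x[i] == 1 else 0.1
    return p_one if value == 1 else 1.0 - p_one

def p_joint(x_next, x, c):
    # factored transition: product of the per-variable distributions
    prob = 1.0
    for i, v in enumerate(x_next):
        prob *= p_var(i, v, x, c)
    return prob

def reward(x):
    return 1.0 if x == (1, 1) else 0.0  # hypothetical factored reward

def value_iteration(gamma=0.9, sweeps=100):
    """Repeated Bellman backup V(x) = R(x) + max_c gamma * sum_x' P(x'|x,c) V(x')."""
    v = {x: 0.0 for x in STATES}
    for _ in range(sweeps):
        v = {x: reward(x) + max(
                 gamma * sum(p_joint(y, x, c) * v[y] for y in STATES)
                 for c in ACTIONS)
             for x in STATES}
    return v

print(value_iteration())
```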
In this paper, we consider the case where the agent does not have an a priori model of the universe in which it is situated (i.e. it has no idea about the transformation function), and this condition forces it to be endowed with some capacity for learning, in order to be able to adapt itself to the system. Although there is a large research community studying model-free methods (which directly learn a policy of actions), in this work we adopt a model-based method, through which the agent must learn a descriptive and predictive model of the world and then define a behavior strategy based on it. Learning a predictive model is often referred to as learning the structure of the problem, which is an important research objective within the MDP framework community [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], as well as in related approaches like Induction of Decision Trees or Decision Graphs [START_REF] Jensen | Bayesian Networks and Decision Graphs[END_REF], Bayesian Networks (BN) [START_REF] Pearl | Causality: models of reasoning and inference[END_REF], [START_REF] Friedman | Being Bayesian about Network Structure: a bayesian approach to structure discovery in bayesian networks[END_REF] (Koller, 2003) and Influence Diagrams [START_REF] Howard | Influence Diagrams[END_REF].
In this way, when the agent is immersed in a system represented as a FPOMDP, the complete task for its anticipatory learning mechanism is both to create a model of the transformation function, and to define an optimal (or sufficiently good) policy of actions. The transformation function can be described by a dynamic bayesian network, i.e. an acyclic, oriented, two-layers graph, where the first layer nodes represent the environment situation in time t, and the second layer nodes represent the next situation, in time t+1. A policy π : X → C defines the behavior to be taken in each given situation (the policy of actions). Several algorithms create stochastic policies, and in this case the action to take is defined by a probability. [START_REF] Degris | Factored Markov Decision Processes[END_REF] present a good overview of the use of this representation in artificial intelligence, referring several related algorithms designed to learn and solve factored FMDPs and FPOMDPs, including both the algorithms designed to calculate the policy given the model [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF], (Boutilier;[START_REF] Boutilier | Computing optimal policies for partially observable decision processes using compact representations[END_REF], [START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF][START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF], (Poupart;[START_REF] Poupart | VDCBPI: an approximate scalable algorithm for large scale POMDPs[END_REF], [START_REF] Hoey | SPUDD: Stochastic Planning Using Decision Diagrams[END_REF], [START_REF] St-Aubin | APRICODD: Approximate policy construction using decision diagrams[END_REF], [START_REF] Guestrin | Solving Factored POMDPs with Linear Value Functions[END_REF], [START_REF] Sim | Symbolic Heuristic Search Value Iteration for Factored POMDPs[END_REF], and [START_REF] Shani | Efficient ADD Operations for Point-Based Algorithms[END_REF] and the algorithms designed to discover the structure of the system [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF], (Degris;[START_REF] Degris | Factored Markov Decision Processes[END_REF], [START_REF] Strehl | Efficient Structure Learning in Factored-State MDPs[END_REF], and [START_REF] Jonsson | A Causal Approach to Hierarchical Decomposition of Factored MDPs[END_REF][START_REF] Jonsson | A Causal Approach to Hierarchical Decomposition of Factored MDPs[END_REF].
Constructivist Anticipatory Learning Mechanism
The constructivist anticipatory learning mechanism (CALM), described in detail in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], is a mechanism developed to enable an agent to learn the structure of an unknown environment in which it is situated, through observation and experimentation, creating an anticipatory model of the world, which is represented as an FPOMDP. CALM carries out the learning process in an active and incremental way: the agent needs to choose between alternative actions, and it learns the world model as well as the policy at the same time as it acts. There is no separate prior training period; the agent has a single uninterrupted interactive experience with the system, quite similar to real-life problems. In other words, it must perform and learn at the same time.
The problem can be divided into two tasks: first, to build a world model, i.e. to induce a structure that represents the dynamics of the system (composed of agent-environment interactions); second, to establish a behavioral policy, i.e. to define the action to take in each possible state of the system, in order to increase the estimated rewards received over time.
The task becomes harder because the environment is only partially observable from the point of view of the agent, constituting an FPOMDP. In this case, the agent has perceptive information from a subset of sensory variables, but the system dynamics also depends on another subset of hidden variables. To be able to create the world model, the agent needs not only to discover the regularities of the phenomena, but also to discover the existence of non-observable variables that are important for understanding the system's evolution. In other words, learning a model of the world is more than describing the environment dynamics (the rules that can explain and anticipate the observed transformations); it is also discovering the existence of hidden properties (since they influence the evolution of the observable ones), and finally finding a way to deduce the values of these hidden properties.
If the agent can successfully discover and describe the hidden properties of the FPOMDP it is dealing with, then the world becomes treatable as an FMDP, and there are known algorithms able to efficiently calculate the optimal (or near-optimal) policy. The algorithm used by CALM to calculate the policy of actions is similar to the one presented by [START_REF] Degris | Learning the Structure of Factored Markov Decision Processes in Reinforcement Learning Problems[END_REF]. The main challenge, on the other hand, is to discover the structure of the problem based on on-line observation, and CALM does so using representations and strategies inspired by (Drescher, 1993).
Knowledge Representation
CALM tries to reconstruct, from experience, each system transformation function Ti, representing it as an anticipation tree, which in turn is composed of schemas. Each schema represents some perceived regularity occurring in the environment, i.e. some regular event observed by the agent during its interaction with the world. A schema is composed of three vectors, in the form (context + action → expectation). Each element of the context vector is linked to a sensor. The action vector is linked to the effectors. The expectation represents the value expected for some specific sensor at the next time step. In a specific schema, the context vector represents the set of equivalent situations where the schema is applicable. The action vector represents a set of similar actions that the agent can carry out in the environment. The expectation vector represents the expected result after executing the given action in the given context. Each vector element can assume any value in a discrete interval defined by the respective sensor or effector. In addition, the context vector can incorporate some "synthetic elements", not linked to any sensor but representing abstract or non-sensory properties whose existence is induced by the mechanism. Some elements in these vectors can take an "undefined value". For example, an element linked to a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). In both the context and action vectors, '#' represents something ignored, not relevant to making the anticipations. A schema is compatible with a given situation when all the defined elements of its context vector are equal to those of the agent's perception.
In the expectation vector, '#' means that the element is not deterministically predictable. The undefined value generalizes the schema because it allows some properties to be ignored so that a set of situations can be represented. Other symbols can be used to represent special situations, in order to reduce the number of schemas; this is the case of the symbol '=', used to indicate that the value of the expected element does not change in the specified context.
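The schema triple and its wildcard semantics can be sketched as a small data structure. The class and method names below are ours, this is only an illustrative reading of the representation, assuming binary sensors encoded as '0'/'1' strings.

```python
from dataclasses import dataclass

UNDEFINED = "#"   # not relevant (context/action) or not predictable (expectation)
UNCHANGED = "="   # expectation: the element keeps its current value

@dataclass
class Schema:
    context: list      # one element per sensor (plus synthetic elements), e.g. ["1", "#"]
    action: list       # one element per effector, e.g. ["0"]
    expectation: list  # predicted sensor values at the next time step

    def is_compatible(self, perception, chosen_action):
        """A schema applies when every *defined* element of its context and
        action vectors matches the current perception / chosen action."""
        for want, got in zip(self.context + self.action,
                             list(perception) + list(chosen_action)):
            if want != UNDEFINED and want != got:
                return False
        return True

    def anticipation_holds(self, perception, result):
        """Checks the observed result against the expectation vector,
        honouring the '#' (unpredictable) and '=' (unchanged) symbols."""
        for want, before, after in zip(self.expectation, perception, result):
            if want == UNDEFINED:
                continue
            expected = before if want == UNCHANGED else want
            if expected != after:
                return False
        return True

# usage sketch
s = Schema(context=["1", "#"], action=["0"], expectation=["=", "1"])
print(s.is_compatible("10", "0"), s.anticipation_holds("10", "11"))
```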
The use of undefined values makes the construction of an anticipation tree possible. Each node in that tree is a schema, and relations of generalization and specialization guide its topology (quite similar to decision trees or discrimination trees). The root node represents the most generalized situation, with its context and action vectors completely undefined. Adding a level to the tree means specializing one generalized element, creating a branch where the undefined value is replaced by the different possible defined values. This specialization occurs either in the context vector or in the action vector. In this way, CALM divides the state space according to the different expectations of change, grouping contexts and actions with their respective transformations. The tree evolves during the agent's life, and it is used by the agent, even while still under construction, to make its decisions and, consequently, to define its behavior. The structure of the schemas and an example of their organization as an anticipation tree are presented in Figure 1.
Figure 1: the anticipation tree; each node is a schema composed of three vectors: context, action and expectation; the leaf nodes are decider schemas.
The context in which the agent finds itself at a given moment (perceived through its sensors) is applied to the tree, exciting all the schemas that have a compatible context vector. This process defines a set of excited schemas, each one suggesting a different action to perform in the given situation. CALM chooses one of them to activate, performing the defined action through the agent's effectors. The algorithm always chooses the compatible schema that has the most specific context, called the decider schema, which is the leaf of a differentiated branch. This decision is based on the calculated utility of each possible choice. There are two kinds of utility: the first estimates the discounted sum of future rewards obtained by following the policy; the second measures the exploration benefits. The utility value used to make the decision depends on the agent's current strategy (exploiting or exploring). The mechanism also has a kind of generalized episodic memory, which represents (in a compact form) the specific, real situations experienced in the past, preserving the information necessary to correctly construct the tree. Implementing a feasible generalized episodic memory is not trivial, since remembering episodes can be very expensive; however, with some strong but well-chosen restrictions (like limiting the representation of dependencies), it can be computationally viable.
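One possible reading of this tree-descent step is sketched below: starting from the root, the mechanism follows the branches whose context constraints match the current situation until it reaches the leaves (decider schemas), and then picks the one with the best utility. The node layout and function names are our own illustration.

```python
def find_deciders(node, perception):
    """Collects the leaf schemas (deciders) reachable from `node` whose
    context vectors are compatible with the current perception.
    `node` is a dict: {"context": [...], "action": [...], "children": [...]}."""
    defined = [(i, v) for i, v in enumerate(node["context"]) if v != "#"]
    if any(perception[i] != v for i, v in defined):
        return []                      # this branch does not apply
    if not node["children"]:
        return [node]                  # leaf: a decider schema
    found = []
    for child in node["children"]:
        found.extend(find_deciders(child, perception))
    return found

def choose_decider(root, perception, utility):
    """Picks, among the excited deciders, the one with the highest utility
    (a reward estimate when exploiting, an exploration bonus when exploring)."""
    deciders = find_deciders(root, perception)
    return max(deciders, key=utility) if deciders else None
```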
Anticipation Tree Construction Methods
The learning process happens through the refinement of the set of schemas; as a consequence, the agent becomes increasingly adapted to its environment. After each experienced situation, CALM checks whether the result (the context perceived at the instant following the action) conforms to the expectation of the activated schema. If the anticipation fails, the discrepancy between the result and the expectation serves as a parameter to correct the model. In the tree topology, the context and action vectors are taken together. This concatenated vector identifies the node in the tree, which grows using a top-down strategy. The context and action vectors are gradually specialized by differentiation, adding, each time, a new relevant feature to identify the category of the situation. In general, it is shorter to start with an empty vector and search for the (probably few) relevant features than to start with a full vector and waste effort eliminating many useless elements. Selecting a good set of relevant features to represent a given concept is a well-known problem in AI, and the solution is not easy, even with approximate approaches. To do so, CALM adopts a forward greedy selection [START_REF] Blum | Selection of relevant features and examples in machine learning[END_REF], using the data registered in the generalized episodic memory.
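One possible reading of this forward greedy selection is sketched below, scoring each still-undefined element by information gain over the episodes kept in the generalized episodic memory; the data layout and function names are our own assumptions.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def choose_differentiator(episodes, undefined_positions):
    """episodes: list of (situation_vector, outcome_label) pairs drawn from the
    episodic memory of the failing schema. Returns the position whose value
    best separates the conflicting outcomes (greedy, one feature at a time)."""
    outcomes = [o for _, o in episodes]
    base = entropy(outcomes)
    best_pos, best_gain = None, 0.0
    for pos in undefined_positions:
        groups = defaultdict(list)
        for situation, outcome in episodes:
            groups[situation[pos]].append(outcome)
        remainder = sum(len(g) / len(episodes) * entropy(g) for g in groups.values())
        gain = base - remainder
        if gain > best_gain:
            best_pos, best_gain = pos, gain
    return best_pos
```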
The expectation vector can be seen as a label on each decider schema, representing the anticipation predicted when the decider is activated. The evolution of expectations in the tree uses a bottom-up strategy. Initially, all different expectations are considered as different classes, and they are gradually generalized and integrated with others. When the expectation fails, the agent has two alternatives for making its knowledge compatible with the experience: the first is to try to divide the scope of the schema, creating new schemas with more specialized contexts; when this is not possible, the second is to reduce the schema's expectation.
Three basic methods compose the CALM learning function, namely: differentiation, adjustment, and integration.

Differentiation is a necessary mechanism because a schema covering too general a context can hardly make precise anticipations. If a general schema does not work well, the mechanism divides it into new schemas, differentiating them by some element of the context or action vector. In fact, the differentiation method takes an unstable decider schema and changes it into a two-level sub-tree. The parent schema in this sub-tree preserves the context of the original schema. The children, which are the new decider schemas, have context vectors slightly more specialized than their parent's: they attribute a value to some undefined element, dividing the scope of the original schema, and each of these new deciders covers a part of the domain. In this way, the previously correct knowledge remains preserved, distributed over the new schemas, and the discordant situation is isolated and treated only in its specific context. Differentiation is the method responsible for making the anticipation tree grow; each level of the tree represents the introduction of some new constraint. The algorithm needs to choose the differentiator element, which can come from either the context vector or the action vector. This differentiator needs to separate the situation responsible for the disequilibrium from the others, and the algorithm chooses it by calculating the information gain, considering a limited (parametrized) range of interdependencies between variables (Figure 2).

When a schema fails and it is not possible to differentiate it in any way, CALM executes the adjustment method. This method reduces the expectations of an unstable decider schema in order to make it reliable again. The algorithm simply compares the activated schema's expectation with the actual result perceived by the agent after the application of the schema, setting the incompatible expectation elements to the undefined value ('#'). The adjustment method changes the schema's expectation (and consequently the anticipation predicted by the schema). Successive adjustments can reveal unnecessary differentiations (Figure 3).

In this way, the schema expectation can change (and consequently the class of the situation represented by the schema), and the tree maintenance mechanism needs to be able to reorganize the tree when this change occurs. Successive adjustments in the expectations of various schemas can therefore reveal unnecessary differentiations. When CALM finds a group of schemas covering different contexts but with similar expectations, the integration method comes into action, trying to join these schemas by searching for some unnecessary common differentiator element and eliminating it. The method operates as shown in Figure 4.
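The adjustment and integration steps are simple vector operations; the sketch below shows one possible reading of them, operating on plain lists, assuming '#' as the undefined value and ignoring the '=' shorthand for brevity (all helper names are ours).

```python
UNDEFINED = "#"

def adjust(expectation, observed_result):
    """Adjustment: undefine every expectation element contradicted by the
    observed result, so the schema becomes reliable (but less informative)."""
    return [want if want == UNDEFINED or want == got else UNDEFINED
            for want, got in zip(expectation, observed_result)]

def try_integrate(children, differentiator_index):
    """Integration: if all sibling deciders ended up with the same expectation,
    the differentiation on `differentiator_index` was unnecessary and the
    siblings can be merged back into a single, more general schema."""
    expectations = {tuple(c["expectation"]) for c in children}
    if len(expectations) != 1:
        return None
    merged_context = list(children[0]["context"])
    merged_context[differentiator_index] = UNDEFINED
    return {"context": merged_context,
            "action": children[0]["action"],
            "expectation": children[0]["expectation"]}
```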
Dealing with the Unobservable
When CALM reduces the expectation of a given schema by adjustment, it assumes that there is no deterministic regularity following the represented situation with respect to these incoherent elements, and that the related transformation is unpredictable. However, a prediction error could sometimes be explained by the existence of some abstract or hidden property of the environment, which could be useful to differentiate an ambiguous situation but which is not directly perceived by the agent's sensors. So, before adjusting, CALM supposes the existence of a non-sensory property of the environment, which will be represented as a synthetic element. When a new synthetic element is created, it is included as a new term in the context and expectation vectors of the schemas. Synthetic elements posit the existence of something beyond sensory perception, which can be useful to explain non-equilibrated situations. Their function is to amplify the differentiation possibilities.
In this way, when dealing with partially observable environments, CALM faces two additional challenges: (a) inferring the existence of unobservable properties, which it represents by synthetic elements, and (b) including these new elements in its predictive model. A good strategy for this task is to look at historical information. In the case where the POMDP is completely deterministic, it is possible to find small pieces of history sufficient to distinguish and identify all the underlying states [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], and we suppose that the same holds when the POMDP is non-deterministic but well structured.
CALM introduces a method called abstract differentiation. When a schema fails in its prediction, and when it is not possible to differentiate it using the current set of considered properties, a new boolean synthetic element is created, enlarging the context and expectation vectors. This element is immediately used to differentiate the incoherent situation from the others. The method attributes arbitrary values to this element in each differentiated schema. These values represent the presence or absence of some non-observable condition necessary to determine the correct prediction in the given situation. The method is illustrated in Figure 5, where the new elements are represented by card suits. Once a synthetic element is created, it can be used in subsequent differentiations. A new synthetic element is created only if the existing ones are already saturated. To avoid creating infinitely many new synthetic elements, CALM can only do so up to a given limit, after which it considers the problematic anticipation to be not deterministically predictable and undefines the expectation in the related schemas by adjustment. Figure 6 illustrates the idea behind synthetic element creation. A synthetic element is not associated with any sensory perception; consequently, its value cannot be observed. This can place the agent in ambiguous situations, where it does not know whether some relevant but non-observable condition (represented by this element) is present or absent. Initially, the value of a synthetic element is verified a posteriori (i.e. after the execution of the action in an ambiguous situation). Once the action is executed and the following result is verified, the agent can look back and deduce which situation it had actually faced at the previous instant (disambiguation). Discovering the value of a synthetic element after the circumstance in which this information was needed may seem useless, but in fact this delayed deduction feeds another method, called abstract anticipation. If the non-observable property represented by this synthetic element has regular dynamics, then the mechanism can propagate the deduced value back to the schema activated at the immediately previous instant: the deduced synthetic element value is included as a new anticipation in the previously activated schema. Figure 7 shows how this new element can be included in the predictive model. For example, at time t1 CALM activates schema 1 = (#0 + c → #1), where the context and expectation are composed of two elements (the first synthetic and the second perceptive) and one action. Suppose that the next situation, '#1', is ambiguous, because it excites both schema 2 = (♣1 + c → #0) and schema 3 = (♦1 + c → #1). At this point, the mechanism cannot know the value of the synthetic element, which is crucial to determine the real situation. Suppose that the mechanism nevertheless decides to execute the action 'c' at time t2, and that this is followed by the sensory perception '0' at t3. Now, at t3, the agent can deduce that the situation really dealt with at t2 was '♣1', and it can include this information in the schema activated at t1, which becomes schema 1 = (#0 + c → ♣1).
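A schematic view of abstract differentiation is given below: every schema vector is widened with a new synthetic slot, and the failing decider is split on the two arbitrary values of that slot; the back-propagation of the deduced value (abstract anticipation) is summarized as a comment. All names are our own illustration, not the authors' implementation.

```python
UNDEFINED = "#"

def add_synthetic_element(schemas):
    """Abstract differentiation, step 1: widen every context and expectation
    vector with one new boolean synthetic slot, initially undefined."""
    for s in schemas:
        s["context"].insert(0, UNDEFINED)
        s["expectation"].insert(0, UNDEFINED)

def split_on_synthetic(failing_schema, expectation_a, expectation_b):
    """Abstract differentiation, step 2: create two children that differ only
    in the (arbitrary) value assigned to the new synthetic element."""
    children = []
    for value, expectation in (("0", expectation_a), ("1", expectation_b)):
        context = list(failing_schema["context"])
        context[0] = value
        children.append({"context": context,
                         "action": list(failing_schema["action"]),
                         "expectation": expectation})
    failing_schema["children"] = children
    return children

# Abstract anticipation (sketch): once the result observed at t+1 reveals which
# child was actually applicable at t, the deduced synthetic value ("0" or "1")
# is written into the expectation vector of the schema activated at t-1.
```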
Experiment
The CALM mechanism has already been used to successfully solve problems such as flip, which is also used by [START_REF] Singh | Learning Predictive State Representations[END_REF] and [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], and wepp, an interesting situated RL problem. CALM is able to solve both of them by creating new synthetic elements to represent underlying states of the problem [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], (Perotto; Álvares, 2007), (Perotto et al., 2007). In this paper, we
introduce an experiment that we call meph (an acronym for the actions the agent can perform: move, eat, play, hit).
In the meph experiment, the agent is placed in a two-dimensional grid environment, where it must learn how to interact with the other agents and objects it encounters during its life. Our agent needs to create a policy of actions so as to optimize its feelings (internal rewards). When it is hungry, it feels good when eating food (an object that can sometimes be found in the environment, among other non-eatable objects such as stones). Agents and objects can be differentiated by observable features, which means that our agent can sensorially distinguish the kind of "thing" it is interacting with. However, food and stones have the same appearance, so to discover whether an object is food or stone the agent needs to experiment, i.e. to explicitly probe the environment in order to obtain more information about the situation, for example by hitting the object to listen to the sound it makes. Figure 8 shows a random configuration of the environment. When our agent is excited, it finds pleasure in playing with another agent. However, the other agents also have internal emotional states; when another agent is angry, any attempt to play with it results in an aggression, which causes a disagreeable, painful sensation to our agent. Playing is enjoyable and safe only if both agents are excited, or at least if the other agent is not angry (and thus not aggressive), but these emotional states are internal to each agent and cannot be directly perceived. With this configuration, our agent needs to create new synthetic elements in order to distinguish food from stone, and aggressive agents from peaceable ones.
At each time step, our agent can execute one of four actions: move, eat, play or hit. Moving is a good strategy to escape from aggressive agents, but it is also the action that changes the context, allowing the agent to search for different contexts. The agent does not precisely control its movement; the move action causes a random rotation followed by a change of position to an adjacent cell. Eating is the right choice when the agent is hungry and in front of some food at the same time; this action ceases the bad sensation caused by hunger. Playing is the right action when the agent is excited and in frontal contact with another, non-aggressive agent. Hitting is the action that serves to interact with other objects without commitment; by doing so, the agent is able to resolve some ambiguous situations: for example, hitting a stone has no effect, while hitting food produces a funny sound. The same goes for other agents, which react with a noisy sound when hit, but only if they are already angry.
The agent has two limited external perceptions, both focused on the cell directly in front of it: the sense of vision allows it to see whether the place before it contains an object, an agent, or nothing; the sense of hearing allows it to listen to the sounds coming from there. The agent's body has five internal properties, corresponding to five equivalent internal perceptions: pain, anger, hunger, excitation, and pleasure. Pleasure always occurs when the agent plays with another agent, regardless of the other agent's internal state (which is quite selfish). However, as we know, our agent can get punched if the other agent is angry, and in this case pain takes place. When our agent feels pain and hunger at the same time, it becomes angry too. Initially, the agent knows nothing about the environment or about its own sensations; it does not distinguish situations, nor does it know what consequences its actions imply.
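To fix ideas, the reward-relevant interaction rules described above can be summarized in a few lines of code; this is our simplified reading of the meph rules (names, encodings and the exact effects are hypothetical), not the authors' implementation.

```python
def interaction_effects(agent, front, action):
    """Returns (sound, feelings) produced by one interaction step.
    `agent` and `front` are dicts describing our agent and whatever occupies
    the cell in front of it; hidden flags such as 'is_food' or 'angry' belong
    to the simulation, not to the agent's perception."""
    sound, feelings = False, []

    if action == "eat" and front.get("kind") == "object" and front.get("is_food"):
        if agent["hungry"]:
            agent["hungry"] = False          # eating food ceases hunger
    elif action == "play" and front.get("kind") == "agent":
        if front.get("angry"):
            feelings.append("pain")          # angry agents punch back
        elif agent["excited"]:
            feelings.append("pleasure")      # playing is pleasant when safe
    elif action == "hit":
        if front.get("kind") == "object" and front.get("is_food"):
            sound = True                     # food makes a sound when hit
        if front.get("kind") == "agent" and front.get("angry"):
            sound = True                     # angry agents react noisily

    # becoming angry: pain and hunger felt at the same time
    if "pain" in feelings and agent["hungry"]:
        agent["angry"] = True
    return sound, feelings
```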
The problem becomes interesting because playing can provoke both positive and negative rewards; the same goes for eating, which is a rewarding behavior only in certain situations. The outcome depends on the context in which the action is executed, and this context is not fully observable through the sensors. This is the model that CALM needs to learn by itself before establishing the behavior policy. Figure 9 shows the variables involved, which compose the schemas' identifying vectors. Figures 10 to 17 show the anticipation trees created by the mechanism after stabilization.
Figure 9: the vectors that compose the context of a schema; synthetic properties, perceptive properties, and controllable properties (actions).
Figure 10: anticipation tree for hearing; the only action that provokes sound is hitting, and only if the object is food or the agent is hungry; when the agent hits (H) an object that is food or a non-aggressive agent, differentiated from their confounding pairs by the synthetic element (♣), then the agent hears a sound at the next instant; if the action is other (*) than hitting, no sound is produced.
Figure 11: anticipation tree for vision; when the agent hits (H) another agent (A), it verifies the permanence of this other agent in its visual field, which means that hitting an agent makes it stay in the same place; however, no other action (*) executed in front of an agent (A) can prevent it from leaving, so the prediction is undefined (#); the same holds for moving (M), which causes a change of situation and makes the visual field unpredictable at the next instant; in all other cases the vision stays unchanged.
Figure 12: anticipation tree for pain; the agent knows that playing (P) with an aggressive (♣) agent (A) causes pain; otherwise, no pain is caused.
Figure 13: anticipation tree for pleasure; playing with a peaceable (♦) agent (A) is pleasant, and it is the only known way to reach this feeling.
Figure 14: anticipation tree for excitation; when the agent feels neither anger (0) nor hungry (0), it can (eventually) become excited; this happens in a non-deterministic way, and for this reason the prediction is undefined in this case (#), which can be understood as a possibility; otherwise (**) excitation will certainly be absent.
Figure 15: anticipation tree for hungry; eating (E) food (O♣) ceases hunger; otherwise, if the agent is already hungry it remains so, and if it is not yet hungry it can become hungry.
Figure 16: anticipation tree for anger; if neither pain nor hungry, then anger turns off; if both pain and hungry, then anger turns on; otherwise, anger does not change its state.
Figure 17: anticipation tree for the hidden element; this element allows identifying whether an object is food or stone, and whether an agent is angry or not; CALM's abstract anticipation method allows modeling the dynamics of this variable, even though it is not directly observable by sensory perception; the perception of the noise (1) is the result that enables discovering the value of this hidden property; the visual perception of an object (O), or the fact of hitting (H) another agent (A), also makes it possible to know that the hidden element does not change.
Figure 18 shows the evolution of the mean reward, comparing the CALM solution with a random agent and with two classic Q-Learning [START_REF] Watkins | Q-learning[END_REF] (Watkins; Dayan, 1992) implementations: the first one non-situated (the agent sees the entire environment as a flat state space), and the second one with (situated) inputs equivalent to CALM's. The non-situated implementation of the Q-Learning algorithm (Classic Q) takes much more time than the others to start drawing a convergence curve, and in fact the expected solution will never be reached. This is because Q-Learning tries to directly construct a mapping from states to actions (a policy), where the state space is taken as the combination of the state variables; in this implementation (because it is not situated), each cell of the environment constitutes a different variable, so the problem quickly becomes large; for the same reason, the agent is vulnerable to the growth of the board (the grid dimensions directly determine the complexity of the problem).
The CALM solution converges much earlier than Q-Learning, even in its situated version, and CALM also finds a better solution. This is because CALM quickly constructs a model to predict the environment dynamics, including the non-observable properties, and so it is able to define a good policy sooner. The "pause" in the convergence curve visible in the graph indicates two moments: first, the solution found before the hidden properties are correctly modeled as synthetic elements, and then the solution found after they are. On the other hand, Q-Learning remains attached to a probabilistic model; in this case, without information about the internal states of the other agents, trying to play with them becomes unsafe, and the Q-Learning solution prefers not to do it.
Conclusions
The CALM mechanism, presented in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], can provide autonomous adaptive capability to an agent, because it is able to incrementally construct knowledge representing the deterministic regularities observed during its interaction with the system, even in partially deterministic and partially observable environments. CALM can deal with incomplete observation through the induction and prediction of hidden properties, represented by synthetic elements; it is thus able to go beyond the limits of sensory perception, constructing more abstract terms to represent the system and to describe its dynamics at more complex levels. CALM can be very efficient at constructing a model in non-deterministic environments if they are well structured, in other words, if most transformations are in fact deterministic relative to the underlying partially observable properties, and if the interdependence between variables is limited to a small range. Many problems found in the real world present these characteristics.
The proposed experiment (meph) can be taken as a useful problem that, even if simple, challenges the agent to solve some intricate issues, such as the interaction with other complex agents. The next step in this direction is to insert several cognitive agents like CALM into the same scenario, that is, agents that change their own internal models and policies of action, and in this way present non-stationary behavior. The difficulty for one agent to model the other agents under such conditions is even greater.
Finally, we believe that the same strategy can be adapted to several kinds of classification tasks, where a database of previous samples is available. In this case, the algorithm learns to classify new instances based on a model created from a training set of instances that have been properly labeled with the correct classes. This task is similar to several real-world problems currently solved with the aid of computers, such as e-mail filtering, diagnostic systems, recommendation systems, decision support systems, and so on.
Figure 2. Differentiation method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) sub-tree generated by differentiation.
Figure 3. Adjust method; (a) experimented situation and action; (b) activated schema; (c) real observed result; (d) schema expectation reduction after adjust.
Figure 4. Integration method; (a) sub-tree after some adjust; (b) an integrated schema substitutes the sub-tree.
Figure 5. Synthetic element creation method; (d) incremented context and expectation vectors, and differentiation using synthetic element.
Figure 6: discovering the existence of non observable properties; in (a) a real experienced sequence; in (b) what CALM does not do (the attribution of a probability); in (c) the creation of a synthetic element in order to explain the observed difference.
Figure 7: predicting the dynamics of a non observable property; in (a) a real experienced sequence; in (b) the pieces of knowledge that can explain the logic behind the observed transformations, including the synthetic property changing.
Figure 8: the simulation environment to the meph experiment, where the triangles represent the agents (looking and hearing forward), and the round squares represent the objects (food or stones).
Figure 18: the evolution of mean reward in a typical execution of the meph problem, considering four different agent solutions: CALM, situated Q-Learning, Random, and Classic Q-Learning; the scenario is a 25x25 grid, where 10% of the cells are stones and 5% are food, in the presence of 10 other agents.
01762255 | en | [ "info.info-ai" ] | 2024/03/05 22:32:13 | 2012 | https://hal.science/hal-01762255/file/PEROTTO%20-%20ICAART%202012%20%28pre-print%20version%29.pdf Filipo Studzinski Perotto
email: filipo.perotto@gmail.com
TOWARD SOPHISTICATED AGENT-BASED UNIVERSES Statements to introduce some realistic features into classic AI/RL problems
Keywords: Agency Theory, Factored Partially Observable Markov Decision Process (FPOMDP), Constructivist Learning Mechanisms, Anticipatory Learning, Model-Based Reinforcement Learning
In this paper we analyze some common simplifications present in traditional AI/RL problems. We argue that only by facing particular conditions, often avoided in the classic statements, will it be possible to overcome the current limits of the field and achieve new advances with respect to realistic scenarios. This paper does not propose any paradigmatic revolution; rather, it presents a compilation of several different elements proposed more or less separately in recent AI research, unifying them through theoretical reflections, experiments and computational solutions. Broadly, we are talking about scenarios where AI needs to deal with truly situated agency, providing the agent with some kind of anticipatory learning mechanism in order to allow it to adapt itself to the environment.
INTRODUCTION
Every scientific discipline starts by addressing specific cases or simplified problems, and by introducing the basic models necessary to initiate the process of understanding in a new domain of knowledge; these basic models eventually evolve into a more complete theory, and, little by little, the research attains important scientific achievements and applied solutions. Artificial Intelligence (AI) is a rather recent discipline, and this fact can easily be noticed by looking at its history over the years. If in the 1950s and 1960s AI was the stage for optimistic discourses about the realization of intelligence in machines, the 1970s and 1980s revealed an evident reality: true AI is a feat very hard to accomplish. This realization led AI to plunge into a more pragmatic and less dreamy period, when visionary ideas were replaced by a (necessary) search for concrete outcomes. Not by chance, several interesting results have been achieved in recent years, and this is turning the skepticism into a (still timid) revival of the general AI field.
If, on the one hand, the mood of the AI discourse has oscillated like a sine wave, on the other hand the academic practice of AI shows a progressive increase in the complexity of the standard problems. When the solutions designed for some established problem become stable, known, and accepted, new problems and new models are proposed in order to push forward the frontier of the science, moving AI from toy problems to more realistic scenarios. Making a problem more realistic is not just a matter of increasing the number of variables involved (even if limiting the number of considered characteristics is one of the most recurrent simplifications). When trying to escape from the classic AI maze problems toward more sophisticated (and therefore more complex) agent-based universes, we are led to consider several complicating conditions, such as (a) the situatedness of the agent, which is immersed in an unknown universe, interacting with it through limited sensors and effectors, without any holistic perspective of the complete environment state, and (b) the absence of any a priori model of the world dynamics, which forces the agent to incrementally discover the effect of its actions on the system in an on-line, experimental way; to make matters worse, the universe where the agent is immersed can be populated by different kinds of objects and entities, including (c) other complex agents, which can have their own internal models, in which case the task of learning a predictive model becomes considerably harder.
In this paper, we use the Constructivist Anticipatory Learning Mechanism (CALM), defined in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], to support our assumption. In other words, we show that the strategies used by this method can represent a change of direction in relation to classic and still dominant approaches. CALM is able to build a descriptive model of the system in which the agent is immersed, inducing from experience the structure of a factored and partially observable Markov decision process (FPOMDP). Some positive results [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], (Perotto et al. 2007), (Perotto; Alvares, 2007), (Perotto, 2011), have been achieved due to the use of four integrated strategies: (a) the mechanism takes advantage of the agent's situated condition, constructing a description of the system regularities relative to its own point of view, which allows it to set a good behavior policy without needing to "map" the entire environment; (b) the learning process is anchored in the construction of an anticipatory model of the world, which can be more efficient and more powerful than traditional "model-free" reinforcement learning methods, which directly learn a policy; (c) the mechanism uses heuristics designed for well-structured universes, where conditional dependencies between variables exist on a limited scale, and where most phenomena can be described in a deterministic way even if the system as a whole cannot (a partially deterministic environment), which seems to be widely common in real-world problems; (d) the mechanism is prepared to discover the existence of hidden or non-observable properties of the universe, which enables it to explain a larger portion of the observed phenomena. In the remainder of the paper, section 2 overviews the MDP framework and the RL tradition, section 3 describes the CALM learning mechanism, section 4 presents some experiments and results, and section 5 concludes the paper.
MDP+RL FRAMEWORK
The typical RL problem is inspired by the classic rat maze experiment; in this behaviorist test, a rat is placed in a kind of labyrinth and needs to find a piece of cheese (the reward) placed somewhere far from it, sometimes avoiding electric traps along the way (the punishment). The rat is forced to run the maze several times, and the experimental results show that it gradually discovers how to solve it. The computational version of this experiment corresponds to an artificial agent placed in a bi-dimensional grid, moving over it, and eventually receiving positive or negative reward signals. Exactly as in the rat maze, the agent must learn to coordinate its actions by trial and error, in order to avoid the negative rewards and quickly reach the positive ones. This computational experiment is formally represented by a geographical MDP, where each position in the grid corresponds to a state of the process; the process starts in the initial state, equivalent to the agent's start position in the maze, and evolves until the agent reaches some final reward state; then the process is reset, and a new episode takes place; the episodes are repeated, and the algorithm is expected to learn a policy that maximizes the estimated discounted cumulative reward received by the agent in subsequent episodes.
These classic RL maze configurations present at least two positive points when compared to realistic scenarios: the agent needs to learn actively and on-line, that is, there is no separate prior learning period before the time of its life; the agent must perform and improve its behavior at the same time, without supervision, by "trial and error". However, this kind of experiment cannot be taken as a general scheme for learning: on the one hand, the simplifications adopted (in order to eliminate some uncomfortable elements) cannot be ignored when dealing with more complex or realistic problems; on the other hand, there are important features lacking in the classic RL maze, which makes it difficult to compare it to other natural learning situations. Some of these simplifications and shortcomings are listed below:
Non-Situativity: in the classic RL maze configuration, the agent is not really situated in the environment; in fact, the little object moving on the screen (which is generally called the agent) is dissociated from the "agent as the learner"; the information available to the algorithm comes from above, from an external point of view, in which this moving agent appears as a controllable object of the environment, among the others. In contrast, realistic scenarios impose the agent's sensory function as an imprecise, local, and incomplete window onto the underlying real situation.
Geographic Discrete Flat Representation: in classic mazes, the corresponding MDP is created by associating each grid cell with a process state; thus, the problem stays confined to the two dimensions of the grid space, and the system states represent nothing more than the agent's geographic positions. In contrast, realistic problems introduce several new and different dimensions to the problem. The basic MDP model itself is conceived to represent a system by exhaustive enumeration of states (a flat representation), and it is not appropriate for representing multi-dimensional structured problems; the size of the state space grows exponentially with the number of considered attributes (the curse of dimensionality), which makes the use of this formalism viable only for simple or small scenarios.
Disembodiment: in the classic configuration, the agent does not present any internal property; it is like a loose mind living directly in the environment; in consequence, it can only be extrinsically motivated, i.e. the agent acts in order to attain (or to avoid) certain determined positions in space, given from the exterior. In natural scenarios, the agent has a "body" playing the role of an intermediary between the mind and the external world; the body also represents an "internal environment", and the goals the agent needs to reach are given from this embodied perspective (in relation to the dynamics of some internal properties).
Complete Observation: the basic MDP depicts the agent as an omniscient entity; the learning algorithm observes the system in its totality: it knows all the possible states, it can precisely perceive what state the system is in at every moment, it knows the effect of its actions on the system, and in general it is the only source of perturbation in the world dynamics. These conditions are far from common in real-world problems.
Episodic Life and Behaviorist Solution: in the classic statement, the system presents initial and final states, and the agent lives by episodes; when it reaches a final state, the system restarts. This is generally not the case in real-life problems, where agents live a single continuous, uninterrupted experience. Also, solving an MDP is often synonymous with finding an optimal (or near-optimal) policy, and accordingly most of the algorithms proposed in the literature are model-free. However, in complex environments, the only way to define a good policy is to "understand" what is going on, creating an explicative or predictive model of the world, which can then be used to establish the policy.
The Basic MDP
Markov Decision Process (MDP) and its extensions constitute a quite popular framework, largely used for modeling decision-making and planning problems [START_REF] Feinberg | Handbook of Markov Decision Processes: methods and applications[END_REF]). An MDP is typically represented as a discrete stochastic state machine; at each time cycle the machine is in some state s; the agent interacts with the process by choosing some action a to carry out; then, the machine changes into a new state s', and gives the agent a corresponding reward r; a given transition function δ defines the way the machine changes according to s and a. The flow of an MDP (the transition between states) depends only on the system current state and on the action taken by the agent at the time. After acting, the agent receives a reward signal, which can be positive or negative if certain particular transitions occur.
Solving an MDP means finding the optimal (or near-optimal) policy of actions in order to maximize the rewards received by the agent over time. When the MDP parameters are completely known, including the reward and the transition functions, it can be mathematically solved by dynamic programming (DP) methods. When these functions are unknown, the MDP can be solved by reinforcement learning (RL) methods, designed to learn a policy of actions on-line, i.e. at the same time the agent interacts with the system, by incrementally estimating the utility of state-action pairs and then by mapping situations to actions [START_REF] Sutton | Reinforcement Learning: an introduction[END_REF].
However, for a wide range of complex (including real world) problems, the complete information about the exact state of the environment is not available. This kind of problem is often represented as a Partially Observable MDP (POMDP) [START_REF] Kaelbling | Planning and acting in partially observable stochastic domains[END_REF]. The POMDP provides an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which the system states are observable only indirectly, via a set of imperfect, incomplete or noisy perceptions. In a POMDP, the set of observations is different from the set of states, but related to them by an observation function, i.e. the underlying system state s cannot be directly perceived by the agent, which has access only to an observation o. We can represent a larger set of problems using POMDPs rather than MDPs, but the methods for solving them are computationally even more expensive [START_REF] Hauskrecht | Value-function approximations for partially observable Markov decision processes[END_REF].
The main bottleneck in the use of MDPs or POMDPs is that representing complex universes implies an exponential growth of the state space, and the problem quickly becomes intractable. Fortunately, most real-world problems are quite well structured; many large MDPs have significant internal structure and can be modeled compactly; the factorization of states is an approach that exploits this characteristic [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. In the factored representation, a state is implicitly described by an assignment to some set of state variables. Thus, the complete enumeration of the state space is avoided, and the system can be described by referring directly to its properties. The factorization of states makes it possible to represent the system in a very compact way, even if the corresponding MDP is exponentially large [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. When the structure of the Factored Markov Decision Process (FMDP) is completely described, some known algorithms can be applied to find good policies quite efficiently [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF]. However, research concerning the discovery of the structure of an underlying system from incomplete observation is still incipient [START_REF] Degris | Factored Markov Decision Processes[END_REF].
FPOMDP
The classic MDP model can be extended to include both factorization of states and partial observation, then composing a Factored Partially Observable Markov Decision Process (FPOMDP). In order to be factored, the atomic elements of the non-factored representation will be decomposed and replaced by a combined set of elements. A FPOMDP (Guestrin et al., 2001), [START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF][START_REF] Hansen | Dynamic programming for POMDPs using a factored state representation[END_REF], [START_REF] Poupart | VDCBPI: an approximate scalable algorithm for large scale POMDPs[END_REF][START_REF] Poupart | VDCBPI: an approximate scalable algorithm for large scale POMDPs[END_REF], [START_REF] Shani | Model-Based Online Learning of POMDPs[END_REF], [START_REF] Sim | Symbolic Heuristic Search Value Iteration for Factored POMDPs[END_REF], can be formally defined as a 4-tuple {X, C, R, T}. The state space is factored and represented by a finite non-empty set of system properties or variables X = {X 1 , X 2 , ... X n }, which is divided into two subsets, X = P H, where the subset P contains the observable properties (those that can be accessed through the agent sensory perception), and the subset H contains the hidden or non-observable properties; each property X i is associated to a specified domain, which defines the values the property can assume; C = {C 1 , C 2 , ... C m } represents the controllable variables, composing the agent actions; R = {R 1 , R 2 , ... R k } is a set of (factored) reward functions, in the form R i : P i IR, and T = {T 1 , T 2 , ... T n } is a set of transformation functions, as T i : X C X i , defining the system dynamics. Each transformation function can be represented by a Dynamic Bayesien Network (DBN), which is an acyclic, oriented, two-layers graph. The first layer nodes represent the environment state in time t, and the second layer nodes represent the next state, in t+1 [START_REF] Boutilier | Stochastic dynamic programming with factored representations[END_REF]. A stationary policy π is a mapping X → C where π(x) defines the action to be taken in a given situation. The agent must learn a policy that optimizes the cumulative rewards received over a potentially infinite time horizon. Typically, the solution π* is the policy that maximizes the expected discounted reward sum
In this paper, we consider the case where the agent does not have an a priori model of the universe in which it is situated (i.e. it has no idea about the transformation function), and this condition forces it to be endowed with some capacity for learning, in order to be able to adapt itself to the system. Although it is possible to directly learn a policy of actions, in this work we are interested in model-based methods, through which the agent must learn a descriptive and predictive model of the world, and then define a behavior strategy based on it. Learning a predictive model is often referred to as learning the structure of the problem.
In this way, when the agent is immersed in a system represented as a FPOMDP, the complete task for its anticipatory learning mechanism is both to create a predictive model of the world dynamics (i.e. to induce the underlying transformation function of the system) and to define an optimal (or sufficiently good) policy of actions, in order to establish a behavioral strategy. [START_REF] Degris | Factored Markov Decision Processes[END_REF] present a good overview of the use of this representation in artificial intelligence, surveying algorithms designed to learn and solve FMDPs and FPOMDPs.
ANTICIPATORY LEARNING
In the artificial intelligence domain, anticipatory learning mechanisms refer to methods, algorithms, processes, machines, or any particular system that enables an autonomous agent to create an anticipatory model of the world in which it is situated. An anticipatory model of the world (also called predictive environmental model, or forward model) is an organized body of knowledge that allows the agent to infer the events that are likely to happen. For cognitive sciences in general, the term anticipatory learning mechanism can be applied to humans or animals to describe the way these natural agents learn to anticipate the phenomena experienced in the real world, and to adapt their behavior to it [START_REF] Perotto | Anticipatory Learning Mechanisms[END_REF].
When immersed in a complex universe, an agent (natural or artificial) needs to be able to compose its actions with the other forces and movements of the environment. In most cases, the only way to do so is by understanding what is happening, and thus by anticipating what will (most likely) happen next. A predictive model can be very useful as a tool to guide the behavior; the agent has a perception of the current state of the world, and it decides what actions to perform according to the expectations it has about the way the situation will probably change. The necessity of being endowed with an anticipatory learning mechanism is more evident when the agent is fully situated and completely autonomous; that means, when the agent is by itself, interacting with an unknown, dynamic, and complex world, through limited sensors and effectors, which give it only a local point of view of the state of the universe and only partial control over it. Realistic scenarios can only be successfully faced by an agent capable of discovering the regularities that govern the universe, understanding the causes and the consequences of the phenomena, identifying the forces that influence the observed changes, and mastering the impact of its own actions over the ongoing events.
CALM Mechanism
The constructivist anticipatory learning mechanism (CALM), detailed in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], is a mechanism developed to enable an agent to learn the structure of an unknown environment in which it is situated, through observation and experimentation, creating an anticipatory model of the world. CALM operates the learning process in an active and incremental way, and learns the world model as well as the policy at the same time as it acts. The agent has a single uninterrupted interactive experience with the system, over a theoretically infinite time horizon. It needs to perform and learn at the same time.
The environment is only partially observable from the point of view of the agent. So, to be able to create a coherent world model, the agent needs not only to discover the regularities of the phenomena, but also to discover the existence of non-observable variables that are important for understanding the system evolution. In other words, learning a model of the world goes beyond describing the environment dynamics, i.e. the rules that can explain and anticipate the observed transformations; it also means discovering the existence of hidden properties (since they influence the evolution of the observable ones), and finding a way to deduce the dynamics of these hidden properties. In short, the system as a whole is in fact a FPOMDP, and CALM is designed to discover the existence of non-observable properties, integrating them into its anticipatory model. In this way CALM induces a structure that represents the dynamics of the system in the form of a FMDP (because the hidden variables become known), and there are algorithms able to efficiently calculate the optimal (or near-optimal) policy when the FMDP is given [START_REF] Guestrin | Efficient Solution Algorithms for Factored MDPs[END_REF].
CALM tries to reconstruct, from experience, each transformation function Ti, which is represented by an anticipation tree. Each anticipation tree is composed of pieces of knowledge called schemas, which represent some perceived regularity occurring in the environment by associating context (sensory and abstract), actions and expectations (anticipations). Some elements in these vectors can take an "undefined value". For example, an element linked with a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). The learning process happens through the refinement of the set of schemas. After each experienced situation, CALM updates a generalized episodic memory, and then it checks whether the result (the context perceived at the instant following the action) is in conformity with the expectation of the activated schema. If the anticipation fails, the error between the result and the expectation serves as a parameter to correct the model. The context and action vectors are gradually specialized by differentiation, each time adding a new relevant feature to identify the situation class more precisely. The expectation vector can be seen as a label in each "leaf" schema, and it represents the predicted anticipation when the schema is activated. Initially all different expectations are considered as different classes, and they are gradually generalized and integrated with others. The agent has two alternatives when the expectation fails. In order to make the knowledge compatible with the experience, the first alternative is to try to divide the scope of the schema, creating new schemas with more specialized contexts. When this is not possible, the only way is to reduce the schema expectation.
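As an illustration of the schema format (context + action → expectation) with '#' as the undefined value, the following sketch implements the two basic checks mentioned here: whether a schema is compatible with the current perception, and whether its anticipation was confirmed by the observed result. This is our own simplification, not the authors' implementation.

```python
UNDEFINED = "#"

def compatible(pattern, observed):
    """A pattern matches when every defined element equals the observation."""
    return all(p == UNDEFINED or p == o for p, o in zip(pattern, observed))

def anticipation_errors(expectation, result):
    """Positions where a defined expectation element disagrees with the result."""
    return [i for i, (e, r) in enumerate(zip(expectation, result))
            if e != UNDEFINED and e != r]

# schema = (context, action, expectation)
schema = ("10#", "0", "111")

print(compatible("10#", "101"))           # True: '#' ignores the last sensor
print(anticipation_errors("111", "110"))  # [2]: the third anticipated element failed
```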
CALM creates one anticipation tree for each property it judges important to predict. Each tree is supposed to represent the complete dynamics of the property it describes. From this set of anticipation trees, CALM can construct a deliberation tree, which defines the policy of actions. In order to incrementally construct all these trees, CALM implements five methods: (a) sensory differentiation, to make the tree grow (by creating new specialized schemas); (b) adjustment, to abandon the prediction of non-deterministic events (and reduce the schema expectations); (c) integration, to control the tree size, pruning and joining redundant schemas; (d) abstract differentiation, to induce the existence of non-observable properties; and (e) abstract anticipation, to discover and integrate these non-observable properties into the dynamics of the model.
Sometimes a disequilibrating event can be explained by considering the existence of some abstract or hidden property in the environment, which could be able to differentiate the situation, but which is not directly perceived by the agent's sensors. So, before adjusting, CALM supposes the existence of a non-sensory property in the environment, which it represents as an abstract element. Abstract elements suppose the existence of something beyond sensory perception, which can be useful to explain non-equilibrated situations. They have the function of amplifying the differentiation possibilities.
EXPERIMENTS
In (Perotto et al., 2007) the CALM mechanism is used to solve the flip problem, which creates a scenario where the discovery of underlying non-observable states is the key to solving the problem, and CALM is able to do it by creating a new abstract element to represent these states. In [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF] and (Perotto; Álvares, 2007) the CALM mechanism is used to solve the wepp problem, an interesting situated RL problem on a bi-dimensional grid, where the agent must learn how to behave considering the interference of several dimensions of the environment, and of its body. Initially the agent does not know anything about the world or about its own sensations, and it does not know what consequences its actions imply. Figure 1 shows the evolution of the mean reward, comparing the CALM solution with a classic Q-Learning implementation (where the agent has a view of the entire environment as a flat state space), and with a situated version of the Q-Learning agent. We see exactly two levels of performance improvement. First, the non-situated implementation (Classic Q) takes much more time to start an incomplete convergence, and it is vulnerable to the growth of the board. Second, the CALM solution converges much earlier than Q-Learning, even when the latter is taken in its situated version, due to the fact that CALM quickly constructs a model to predict the environment dynamics, and it is able to define a good policy sooner.
CONCLUSIONS
Over the last twenty years, several anticipatory learning mechanisms have been proposed in the artificial intelligence scientific literature. Even if some of them are impressive in theoretical terms, having achieved recognition from the academic community, for real world problems (like robotics) no general learning mechanism has prevailed. Until now, the intelligent artifacts developed in universities and research laboratories are far less wondrous than those imagined by science fiction. However, the continuous progress in the AI field, combined with the progress of informatics itself, is leading us to a renewed increase of interest in the search for more general intelligent mechanism, able to face the challenge of complex and realistic problems.
A change of direction with respect to the traditional ways of stating problems in AI is needed. The CALM mechanism, presented in [START_REF] Perotto | Un Mécanisme Constructiviste d'Apprentissage Automatique d'Anticipations pour des Agents Artificiels Situés[END_REF], has been used as an example of this, because it provides autonomous adaptive capability to an agent, enabling it to incrementally construct knowledge to represent the regularities observed during its interaction with the system, even in non-deterministic and partially observable environments. |
01762260 | en | [
"info.info-ai"
] | 2024/03/05 22:32:13 | 2007 | https://hal.science/hal-01762260/file/Perotto.pdf | Studzinski Filipo
Perotto
email: fsperotto@inf.ufrgs.br
Jean-Christophe Buisson
email: buisson@enseeiht.fr
Luis Otávio Alvares
email: alvares@inf.ufrgs.br
Constructivist Anticipatory Learning Mechanism (CALM) -dealing with partially deterministic and partially observable environments
This paper presents CALM (Constructivist Anticipatory Learning Mechanism), an agent learning mechanism based on a constructivist approach. It is designed to deal dynamically and interactively with environments which are at the same time partially deterministic and partially observable. We describe the mechanism in detail, explaining how it represents knowledge and how the learning methods operate. We analyze the kinds of environmental regularities that CALM can discover, trying to show that our proposal is a step towards the construction of more abstract or high-level representational concepts.
Introduction
The real world is a very complex environment, and the transition from sensorimotor intelligence to symbolic intelligence is an important aspect in explaining how human beings successfully deal with it [START_REF] Piaget | Construction of Reality in the Child[END_REF]. The problem is the same for a situated artificial agent (like a robot), which needs to incrementally learn the observed regularities by interacting with the world.
In complex environments (Goldstein 1999), special 'macroscopic' properties emerge from the functional interactions of 'microscopic' elements, and generally these emergent characteristics are not present in any of the sub-parts that generate them. The salient phenomena in this kind of environment tend to be related to high-level objects and processes [START_REF] Thornton | Indirect sensing through abstractive learning[END_REF], and in this case it is plainly inadequate to represent the world only in primitive sensorimotor terms [START_REF] Drescher | Made-Up Minds: A Constructivist Approach to Artificial Intelligence[END_REF].
An intelligent agent (human or artificial) who lives in these conditions needs to be able to go beyond the limits of direct sensorial perception, organizing the universe in terms of more abstract concepts. The agent needs to be able to detect high-level regularities in the environment dynamics, but this is not possible if it is locked into a rigid 'representational vocabulary'.
The purpose of this paper is to present an agent learning architecture, inspired by a constructivist conception of intelligence [START_REF] Piaget | Construction of Reality in the Child[END_REF], capable of creating a model to describe its universe, using abstract elements to represent unobservable properties.
The paper is organized as follows: Sections 2 and 3 describe both the agent and the environment conceptions. Sections 4 and 5 present the basic CALM mechanism, respectively detailing how knowledge is represented and how it is learned. Section 6 presents the way to deal with hidden properties, showing how these properties can be discovered and predicted through synthetic elements. Section 7 presents example problems and solutions following the proposed method. Section 8 compares related works, and Section 9 finalizes the paper, arguing that this is an important step towards a more abstract representation of the world, and pointing out some next steps.
Agent and Environment
The concepts of agent and environment are mutually dependent, and they need to be defined one in relation to the other. In this work, we adopt the notions of situated agent and properties based environment.
A situated agent is an entity embedded in and part of an environment, which is only partially observable through its sensorial perception, and only partially liable to be transformed by its actions [START_REF] Suchman | Plans and Situated Actions[END_REF]. Due to the fact that sensors will be limited in some manner, a situated agent can find itself unable to distinguish between differing states of the world. A situation could be perceived in different forms, and different situations could seem the same. This ambiguity in the perception of states, also referred to as perceptual aliasing, has serious effects on the ability of most learning algorithms to construct consistent knowledge and stable policies [START_REF] Crook | Learning in a State of Confusion: Perceptual Aliasing in Grid World Navigation[END_REF].
An agent is supposed to have motivations, which in some way represent its goals. Classically, the machine learning problem means enabling an agent to autonomously construct policies to maximize its goal-reaching performance. The model-based strategy separates the problem into two parts: (a) construct a world model and, based on it, (b) construct a policy of actions.
CALM (Constructivist Anticipatory Learning Mechanism) responds to the task of constructing a world model. It tries to organize the sensorial information in a way to represent the regularities in the interaction of the agent with the environment.
There are two common ways to describe an environment: either based on states, or based on properties. A states based environment can be expressed by a generalized states machine, frequently defined as POMDP [START_REF] Singh | Learning Predictive State Representations[END_REF] or as a FSA [START_REF] Rivest | Diversity-based inference of finite automata[END_REF]. We define it as Є = {Q, A, O, δ, γ} where Q is a finite not-empty set of underlying states, A is a set of agent actions, O is a set of agent observations, δ : Q × A → Q is a transition function, which describes how states change according to the agent actions, and γ : Q → O is an observation function, which gives some perceptive information related to the current underlying environment state.
A properties based environment can be expressed by ξ = {F, A, τ}, where F is a finite non-empty set of properties, composed of F(p), the subset of perceptible or observable properties, and F(h), the subset of hidden or unobservable properties; A is a set of agent actions; and τ(i) : F1 × F2 × ... × Fk × A → Fi is a set of transformation functions, one for each property Fi in F, describing the changes in property values according to the agent actions.
The environment description based on properties (ξ) has some advantages over the description based in states (Є) in several cases. Firstly because it promotes a clearer relation between the environment and the agent perception. In general we assume that there is one sensor to each observable property. Secondly, because the state identity can be distributed in the properties. In this way, it is possible to represent 'generalized states' and, consequently, compact descriptions of the transformation functions, generalizing some elements in the function domain, when they are not significant to describe the transformation. A discussion that corroborates our assumptions can be read in (Triviño-Rodriguez and Morales-Bueno 2000).
The most compact expression of the transition function represents the environment regularities, in the form λ*(i) : F1|ε × F2|ε × ... × Fk|ε × A|ε → Fi.
This notation is similar to that used in grammars, and it means that at each property j, we can consider it for the domain or not (F j |ε).
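The sketch below (illustrative Python, with names of our own choosing) encodes a transformation function in this compact λ* form: each regularity lists only the properties that matter for predicting Fi, and every omitted property plays the role of Fj|ε.

```python
# Each regularity for a target property lists only the relevant conditions;
# any property absent from `when` corresponds to Fj|e (ignored).
regularities = {
    "door_open": [
        {"when": {"action": "push", "locked": 0}, "then": 1},
        {"when": {"action": "pull"}, "then": 0},
    ],
}

def predict(prop, state, action):
    """Return the anticipated value of `prop`, or None if no regularity applies."""
    facts = dict(state, action=action)
    for rule in regularities[prop]:
        if all(facts.get(k) == v for k, v in rule["when"].items()):
            return rule["then"]
    return None

print(predict("door_open", {"locked": 0, "colour": "red"}, "push"))  # 1, colour is ignored
```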
Types of Environments
We adopt the properties based description (ξ), defining three axes to characterize different types of environments. The axis ∂ represents the environment determinism in the transformations, the axis ω indicates the perceptive accessibility that the agent has to the environment, and the axis ϕ represents the information gain related to the properties.
The determinism axis level (∂) is equivalent to the proportion of deterministic transformations in τ in relation to the total number of transformations. So, in the completely non-deterministic case (∂ = 0), the transformation function (τ) for any property i needs to be represented as F1 × F2 × ... × Fk × A → Π(Fi), where Π(Fi) is a probabilistic distribution. On the other hand, in the completely deterministic case (∂ = 1), every transformation can be represented directly by F1 × F2 × ... × Fk × A → Fi.
An environment is said partially deterministic if it is situated between these two axis extremities (0 < ∂ < 1). When ∂ = 0.5, for example, half of the transformations in τ are deterministic, and the other half is stochastic.
It is important to note that a single transition in the function δ of an environment represented by states (Є) is equivalent to k transformations in the function τ of the same environment represented by properties (ξ). So, if only one of the transformations that compose the transition is non-deterministic, the whole transition will be non-deterministic. Conversely, a non-deterministic transition can present some deterministic component transformations. This is another advantage of using the properties representation, when we combine it with a learning method based on the discovery of deterministic regularities.
The accessibility axis level (ω) represents the degree of perceptive access to the environment. It is equivalent to the proportion of observable properties in F in relation to the total number of properties.
If ω = 1 then the environment is said completely observable, which means that the agent has sensors to observe directly all the environment properties. In this case there is no perceptual confusion, and the agent always knows what is its current situation. If ω < 1, then the environment is said partially observable. The lesser ω, the higher the proportion of hidden properties. When ω is close to 0, the agent is no longer able to identify the current situation only in terms of its perception.
In other words, partially observable environments present some determinant properties to a good world model, which cannot be directly perceived by the agent. Such environments can appear to be arbitrarily complex and non-deterministic on the surface, but they can actually be deterministic and predictable with respect to unobservable underlying elements [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF].
There is a dependence relation between these two axes - accessibility and determinism. The more an agent has sensors to perceive complex elements and phenomena, the more the environment will appear deterministic to it.
Finally, the informational axis level (ϕ) is equivalent to the inverse of the average number of generalizable properties needed to represent the environment regularities (λ*), divided by the total number of properties in F. The greater ϕ (rising towards 1), the more compactly the transformation function (τ) can be expressed in terms of regularities (λ*).
In other words, higher levels of ϕ mean that the information about the environment dynamics is concentrated in the properties (i.e. there is just a small subset of highly relevant properties for each prediction), while lower levels of ϕ indicate that the information about the dynamics is fuzzily distributed over the whole set of properties, in which case the agent needs to describe the transformation as a function of almost all properties.
Learning methods based on the discovery of regularities can be very efficient in environments where the properties are highly informative.
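Under our reading of the definitions above, the first two axes can be computed directly from an environment specification, as in the toy sketch below (∂ as the fraction of deterministic transformations, ω as the fraction of observable properties); ϕ is omitted because it depends on the learned λ* description.

```python
# Toy environment specification: per-property transformations flagged as
# deterministic or not, and properties flagged as observable or hidden.
transformations = {
    ("pos", "move"): "deterministic",
    ("pos", "wait"): "deterministic",
    ("wind", "move"): "stochastic",
    ("wind", "wait"): "stochastic",
}
properties = {"pos": "observable", "wind": "hidden"}

# Determinism axis and accessibility axis as simple proportions.
determinism = sum(v == "deterministic" for v in transformations.values()) / len(transformations)
accessibility = sum(v == "observable" for v in properties.values()) / len(properties)

print("determinism axis =", determinism)      # 0.5
print("accessibility axis =", accessibility)  # 0.5
```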
The Basic Idea
In this section we present the basic CALM mechanism, which is developed to model its interaction with a completely observable but partially deterministic environment (COPDE), where ω=1, but ∂<1 and ϕ<1.
CALM tries to construct a set of schemas to represent the regularities perceived in the environment through its interactions. Each schema represents some regularity checked by the agent during its interaction with the world. It is composed of three vectors: Ξ = (context + action → expectation). The context and expectation vectors have the same length, and each of their elements is linked with one sensor. The action vector is linked with the effectors.
In a specific schema, the context vector represents the set of equivalent situations where the schema is applicable. The action vector represents a set of similar actions that the agent can carry out in the environment. The expectation vector represents the expected result after executing the given action in the given context. Each element vector can assume any value in a discrete interval defined by the respective sensor or effector.
Some elements in these vectors can take the undefined value. For example, an element linked with a binary sensor must have one of three values: true, false or undefined (represented, respectively, by '1', '0' and '#'). In both the context and action vectors, '#' represents something ignored, not relevant to making the anticipations. But for the expectation vector, '#' means that the element is not deterministically predictable.
The undefined value generalizes the schema because it makes it possible to ignore some properties in order to represent a set of situations. There is compatibility between a schema and a certain situation when all the defined elements of the schema's context vector are equal to those of the agent's perception. Note that compatibility does not compare the undefined elements. For example, a schema which has the context vector '100#' is able to assimilate the compatible situations '1000' or '1001'.
The use of undefined values makes possible the construction of a schematic tree. Each node in that tree is a schema, and relations of generalization and specialization guide its topology (quite similar to decision trees or discrimination trees). The root node represents the most generalized situation, and has the context and action vectors completely undefined. Adding one level to the tree means specializing one generalized element, creating a branch where the undefined value is replaced by different defined values. This specialization occurs either in the context vector or in the action vector. The structure of the schemas and their organization as a tree are presented in Figure 1. The context in which the agent finds itself at a given moment (perceived through its sensors) is applied to the tree, exciting all the schemas that have a compatible context vector. This process defines a set of excited schemas, each one suggesting a different action to perform in the given situation. CALM will choose one to activate, and then the action proposed by the activated schema will be performed through the agent's effectors. The algorithm always chooses the compatible schema that has the most specific context, called the decider schema, which is the leaf of a differentiated branch. Each decider has a kind of episodic memory, which represents (in a generalized form) the specific, real situations experienced in the past, during its activations.
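The selection rule described above can be sketched as follows (our own simplification): among the schemas whose context is compatible with the current perception, the mechanism activates the most specific one, i.e. the compatible schema with the fewest undefined elements.

```python
UNDEFINED = "#"

def compatible(context, perception):
    return all(c == UNDEFINED or c == p for c, p in zip(context, perception))

def specificity(context):
    """Number of defined (non-'#') elements in the context."""
    return sum(c != UNDEFINED for c in context)

# Flattened view of a schematic tree: (context, action, expectation) triples.
schemas = [
    ("###", "A", "0#0"),
    ("1##", "A", "010"),
    ("10#", "A", "011"),
]

def choose_decider(perception):
    excited = [s for s in schemas if compatible(s[0], perception)]
    return max(excited, key=lambda s: specificity(s[0]))

print(choose_decider("101"))  # ('10#', 'A', '011'): the most specific compatible schema
```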
Learning Methods
The learning process happens through the refinement of the set of schemas. The agent becomes more adapted to its environment as a consequence of that. After each experienced situation, CALM checks if the result (context perceived at the instant following the action) is in conformity with the expectation of the activated schema. If the anticipation fails, the error between the result and the expectation serves as a parameter to correct the tree or to adjust the schema. CALM combines top-down and bottom-up learning strategies. In the schematic tree topology, the context and action vectors are considered together. This concatenated vector identifies the node in the tree, which grows using the top-down strategy.
The agent has just one initial schema. This root schema has the context vector completely general (without any differentiation, ex.: '#####') and expectation vector totally specific (without any generalization, ex.: '01001'), created at the first experienced situation, as a mirror of the result directly observed after the action.
The context vector will be gradually specialized by differentiation. In more complex environments, the number of features the agent senses is huge, and, in general, only a few of them are relevant to identify the situation class (1 > ϕ >> 0). In this case, a top-down strategy seems to be better, because there is a shorter way beginning with an empty vector and searching for these few relevant features to complete it, than beginning with a full vector and having to eliminate a lot of useless elements.
Selecting the good set of relevant features to represent some given concept is a well known problem in AI, and the solution is not easy, even to approximated approaches. As it will be seen, we adopt a kind of forward greedy selection [START_REF] Blum | Selection of relevant features and examples in machine learning[END_REF].
The expectation vector can be seen as a label in each decider schema, and it represents the predicted anticipation when the decider is activated. The evolution of expectations uses a bottom-up strategy. Initially all different expectations are considered as different classes, and they are gradually generalized and integrated with others. The agent has two alternatives when the expectation fails. In a way to make the knowledge compatible with the experience, the first alternative is to try to divide the scope of the schema, creating new schemas, with more specialized contexts. Sometimes it is not possible and the only way is to reduce the schema expectation.
Three basic methods compose the CALM learning function, namely: differentiation, adjustment and integration. Differentiation is a necessary mechanism because a schema responsible for a context too general can hardly make precise anticipations. If a general schema does not work well, the mechanism divides it into new schemas, differentiating them by some element of the context or action vector.
In fact, the differentiation method takes an unstable decider schema and changes it into a two-level sub-tree. The parent schema in this sub-tree preserves the context of the original schema. The children, which are the new decider schemas, have their context vectors a little more specialized than their parent. They attribute a value to some undefined element, dividing the scope of the original schema. Each one of these new deciders engages itself in a part of the domain. In this way, the previously correct knowledge remains preserved, distributed in the new schemas, and the discordant situation is isolated and treated only in its specific context. Differentiation is the method responsible for making the schematic tree grow. Each level of the tree represents the introduction of some constraint into the context vector (Figure 2). The algorithm needs to choose which element will be the differentiator, and it can come from either the context vector or the action vector. This differentiator needs to separate the situation responsible for the disequilibrium from the others, and the algorithm chooses it by calculating the information gain.
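A sketch of the choice of the differentiator element via information gain, under our interpretation of the text: each remembered episode of the unstable schema is labeled by whether its prediction succeeded, and the still-undefined element whose values best separate successes from failures is selected. This is hypothetical code, not the authors' implementation.

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(episodes, position):
    """Gain obtained by splitting the episodes on the element at `position`."""
    labels = [ok for _, ok in episodes]
    base = entropy(labels)
    groups = {}
    for vector, ok in episodes:
        groups.setdefault(vector[position], []).append(ok)
    remainder = sum(len(g) / len(episodes) * entropy(g) for g in groups.values())
    return base - remainder

# Episodes of an unstable schema: (concatenated context+action vector, prediction ok?)
episodes = [("00A", True), ("01A", True), ("10A", False), ("11A", False)]

undefined_positions = [0, 1]          # elements still '#' in the schema being split
best = max(undefined_positions, key=lambda i: information_gain(episodes, i))
print("differentiate on element", best)  # element 0 separates successes from failures
```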
When some schema fails and it is not possible to differentiate it, then CALM executes the adjustment method. This method reduces the expectations of an unstable decider schema in order to make it reliable again. The algorithm simply compares the activated schema's expectation and the real result perceived by the agent after the application of the schema, setting the incompatible expectation elements to undefined value ('#').
As CALM always creates schemas with expectations totally determined (as a mirror of the result of their first application), the walk performed by the schema is a reduction of expectations, up to the point where only those elements remain that really represent the regular results of the action carried out in that context (Figure 3). The adjustment method changes the schema expectation (and consequently the anticipation predicted by the schema). Successive adjustments can reveal some unnecessary differentiations. After an adjustment, CALM needs to verify the possibility of regrouping some related schemas. It is the integration method that searches for two schemas with equivalent expectations approaching different contexts in a same sub-tree, and joins these schemas into a single one, eliminating the differentiation. The method is illustrated in Figure 4. To test this basic CALM method, we have made some experiments in simple scenarios showing that the agent converges to the expected behavior, constructing correct knowledge to represent the environment's deterministic regularities, as well as the regularities of its body sensations, and also the regular influence of its actions over both. We may consider that these results corroborate the mechanism's ability to discover regularities and use this knowledge to adapt the agent's behavior. The agent has learned about the consequences of its actions in different situations, avoiding emotionally negative situations, and pursuing the emotionally positive ones. A detailed description of these experiments can be found in (Perotto and Alvares 2006).
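The adjustment and integration methods described above can be sketched as follows (our simplification): adjustment undefines the expectation elements contradicted by an observed result, and integration collapses sibling deciders whose expectations have become equivalent.

```python
UNDEFINED = "#"

def adjust(expectation, result):
    """Set to '#' every expectation element contradicted by the observed result."""
    return "".join(e if e == UNDEFINED or e == r else UNDEFINED
                   for e, r in zip(expectation, result))

def integrate(parent_context, children):
    """If all sibling deciders now share the same expectation, collapse them."""
    expectations = {exp for _, exp in children}
    if len(expectations) == 1:
        return [(parent_context, expectations.pop())]
    return children

print(adjust("111", "110"))  # '11#'
siblings = [("0##", "0#0"), ("1##", "0#0")]
print(integrate("###", siblings))  # [('###', '0#0')]: the differentiation is undone
```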
Dealing with the Unobservable
In this section we will present the extended mechanism, developed to deal with partially observable and partially deterministic environments (CALM-POPDE), where ∂<1, ϕ<1, and also ω<1.
In the basic mechanism (CALM-COPDE), presented in previous sections, when some schema fails, the first alternative is to differentiate it based on direct sensorimotor (context and action) elements. If it is not possible to do that, then the mechanism reduces the schema expectation, generalizing the incoherent anticipated elements. When CALM reduces the expectation of a given schema, it supposes that there is no deterministic regularity following the represented situation in relation to these incoherent elements, and the related transformation is unpredictable.
However, sometimes the error could be explained by considering the existence of some abstract or hidden property in the environment, which could be able to differentiate the situation, but which is not directly perceived by the agent sensors. In the extended mechanism, we introduce a new method which enables CALM to suppose the existence of a non-sensorial property in the environment, which it will represent as a synthetic element.
When a new synthetic element is created, it is included as a new term in the context and expectation vectors of the schemas. Synthetic elements suppose the existence of something beyond the sensorial perception, which can be useful to explain non-equilibrated situations. They have the function of amplifying the differentiation possibilities.
In this way, when dealing with partially observable environments, CALM has two additional challenges: a) infer the existence of unobservable properties, which it will represent by synthetic elements, and b) include these new elements into its predictive model. A good strategy to do this task is looking at the historical information. [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF] have proved that it is always possible to find sufficient little pieces of history to distinguish and identify all the underlying states in D-POMDPs.
The first additional CALM-POPDE method is called abstract differentiation. When a schema fails in its prediction, and when it is not possible to differentiate it using the current set of considered properties, a new boolean synthetic element is created, enlarging the context and expectation vectors. Immediately, this element is used to differentiate the incoherent situation from the others. The method attributes arbitrary values to this element in each differentiated schema. These values represent the presence or absence of some unobservable condition, necessary to determine the correct prediction in the given situation. The method is illustrated in Figure 5, where the new elements are represented by card suits. Once a synthetic element is created, it can be used in subsequent differentiations. A new synthetic element will be created only if the existing ones are already saturated. To avoid the problem of creating infinitely many new synthetic elements, CALM can do this only up to a predetermined limit, after which it considers that the problematic anticipation is simply unpredictable, undefining the expectation in the related schemas by adjustment.
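A minimal sketch of abstract differentiation as we read it: when no observable element can split the failing schema, a new boolean synthetic element is prepended to every context and expectation vector, and the failing schema is split on two arbitrary values of that element (written 'P'/'A' below in place of the card suits of Figure 5). Illustrative code only.

```python
UNDEFINED = "#"

def enlarge(schema):
    """Prepend an undefined synthetic element to context and expectation."""
    context, action, expectation = schema
    return (UNDEFINED + context, action, UNDEFINED + expectation)

def abstract_differentiation(schema, observed_result):
    """Split a failing schema on two arbitrary values ('P'resent / 'A'bsent) of the
    new synthetic element; one branch keeps the old expectation, the other adopts
    the perceptive result that caused the failure."""
    context, action, expectation = schema
    keep_old = ("A" + context[1:], action, expectation)
    explain_failure = ("P" + context[1:], action, UNDEFINED + observed_result)
    return [keep_old, explain_failure]

schema = enlarge(("0", "x", "1"))              # ('#0', 'x', '#1')
print(abstract_differentiation(schema, "0"))   # two schemas split on 'A'/'P'
```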
The synthetic element is not associated to any sensorial perception. Consequently, its value cannot be observed. This fact can place the agent in ambiguous situations, where it does not know whether some relevant but not observable condition (represented by this element) is present or absent.
Initially, the value of a synthetic element is verified a posteriori (i.e. after the execution of the action in an ambiguous situation). Once the action is executed and the following result is verified, then the agent can rewind and deduce what was the situation really faced in the past instant (disambiguated). Discovering the value of a synthetic element after the circumstance where this information was needed can seem useless, but in fact, this delayed deduction gives information to the second CALM-POPDE additional method, called abstract anticipation.
If the unobservable property represented by this synthetic element has a regular behavior, then the mechanism can "backpropagate" the deduced value for the activated schema in the previous instant. The deduced synthetic element value will be included as a new anticipation in the previous activated schema.
For example, at time t1 CALM activates the schema Ξ1 = (#0 + x → #1), where the context and expectation are composed of two elements (the first one synthetic and the second one perceptive), and one action. Suppose that the next situation '#1' is ambiguous, because it excites both schemas Ξ2 = (♣1 + x → #0) and Ξ3 = (♦1 + x → #1). At this time, the mechanism cannot know the synthetic element value, which is crucial to determine what the real situation is. Suppose that, anyway, the mechanism decides to execute the action 'x' at time t2, and that it is followed by the sensorial perception '0' at t3. Now, at t3, the agent can deduce that the situation really dealt with at t2 was '♣1', and it can include this information in the schema activated at t1, in the form Ξ1 = (#0 + x → ♣1).
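The worked example above can be replayed in code: once the result observed at t3 identifies which of the ambiguous schemas was really applicable at t2, the deduced synthetic value is written back into the expectation of the schema activated at t1. The snippet mirrors Ξ1, Ξ2 and Ξ3 ('C' and 'D' stand for the club and diamond values) and is only illustrative.

```python
UNDEFINED = "#"

# Schemas as [context, action, expectation]; the first element is synthetic.
xi1 = ["#0", "x", "#1"]
xi2 = ["C1", "x", "#0"]   # 'C' plays the role of the club value
xi3 = ["D1", "x", "#1"]   # 'D' plays the role of the diamond value

# t2: situation '#1' is ambiguous (both xi2 and xi3 are excited).
# t3: action 'x' is followed by perception '0', which only xi2 predicts,
# so the situation really faced at t2 must have been xi2's context 'C1'.
deduced_context_t2 = xi2[0]

# Abstract anticipation: back-propagate the deduced synthetic value into the
# expectation of the schema activated at t1 (xi1).
xi1[2] = deduced_context_t2[0] + xi1[2][1:]
print(xi1)   # ['#0', 'x', 'C1'], i.e. xi1 = (#0 + x -> C1)
```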
Example Problem and Solution
To exemplify the functioning of the proposed method we will use the flip problem, which is also used by [START_REF] Singh | Learning Predictive State Representations[END_REF] and [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF]. They suppose an agent who lives in a two-state universe. It has 3 actions (l, r, u) and 2 perceptions (0, 1). The agent does not have any direct perception of what the underlying current state is. It has the perception 1 when the state changes, and the perception 0 otherwise. Action u keeps the state the same, action l causes the deterministic transition to the left state, and action r causes the deterministic transition to the right state. The flip problem is shown as a Mealy machine in Figure 6. CALM-POPDE is able to solve this problem. Firstly it will try to predict the next observation as a function of its action and current observation. However, CALM quickly discovers that the perceptive observation is not useful to the model, and that there is not sufficient information to make correct anticipations. So, it creates a new synthetic element which will be able to represent the underlying left (♣) and right (♦) states.
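For reference, the flip environment itself is tiny; a simulator sketch under the description above (two hidden states, observation 1 exactly when the state changes) could look like this.

```python
import random

class FlipEnvironment:
    """Two hidden states ('left', 'right'); the agent only observes 0 or 1."""

    def __init__(self):
        self.state = random.choice(["left", "right"])

    def step(self, action):
        # 'l' forces left, 'r' forces right, 'u' keeps the current state.
        new_state = {"l": "left", "r": "right", "u": self.state}[action]
        observation = 1 if new_state != self.state else 0
        self.state = new_state
        return observation

env = FlipEnvironment()
print([env.step(a) for a in ["r", "u", "l", "r"]])  # e.g. [1, 0, 1, 1] when starting in 'left'
```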
Figure 7 shows the first steps in the schematic tree construction for the flip problem. We suppose that the first movements do not betray the existence of a hidden property. These movements are: "r1, u0, l1, r1, l1, u0, r1". Figure 8 shows the first abstract differentiation, after the sequence "r0", and also the abstract anticipation, that refers to the immediately previous sequence ("r1"). Figure 9 shows the abstract anticipation coming from the repetition of "r0". Figure 10 shows a new abstract differentiation and its anticipations by following "l1, l0, l0". Finally, Figure 11 shows the final solution, with the last differentiation resulting from the execution of "u0, l0, r1, u0, r0".
Figure 8. First abstract differentiation.
Figure 9. Abstract anticipation.
Figure 10. New abstract differentiations and anticipations.
In a second problem, we consider a robot that has the mission of buying some cans from a drink machine. It has 3 alternative actions: "insert a coin" (i), "press the button" (p), or "go to another machine" (g); it can see the state of an indicator light on the machine: "off" ( ) or "on" ( ); and it perceives whether a can is returned (☺) or not ( ). There are 2 hidden properties: "no coin inserted" ( ) or "coin inserted" ( ); and "machine ok" ( ) or "machine out of service" ( ). The light turns on ( ) for just one time cycle, and only if the agent presses the button without having inserted a coin before; otherwise the light indicator is always off. The goal in this problem is to obtain a given number of drinks without losing coins in bad machines.
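Under our reading of this description, the drink machine can be simulated as below (hypothetical code: the lost symbols of the original figure are replaced by named booleans, we assume the coin is consumed whenever the button is pressed with a coin inserted, and that going to another machine resets the coin and draws a random operational state).

```python
import random

class DrinkMachine:
    """Hidden properties: coin_inserted, machine_ok. Observations: (light, can)."""

    def __init__(self):
        self.coin_inserted = False
        self.machine_ok = random.choice([True, False])

    def step(self, action):
        light, can = False, False
        if action == "i":                      # insert a coin
            self.coin_inserted = True
        elif action == "p":                    # press the button
            if not self.coin_inserted:
                light = True                   # the light flashes for one cycle only
            else:
                self.coin_inserted = False     # the coin is consumed either way...
                can = self.machine_ok          # ...but a bad machine returns nothing
        elif action == "g":                    # go to another machine
            self.coin_inserted = False
            self.machine_ok = random.choice([True, False])
        return light, can

m = DrinkMachine()
print(m.step("p"))   # (True, False): pressing without a coin only lights the indicator
print(m.step("i"))   # (False, False): no perceptible change when the coin goes in
print(m.step("p"))   # (False, True) if this machine works, (False, False) otherwise
```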
This example poses two challenges to the agent: First, the machine does not present any direct perceptible change when the coin is inserted. Since the agent does not have any explicit memory, apparently it faces the same situation both before and after having inserted the coin. However, this action changes the value of an internal property in the drink machine.
Precisely, the disequilibrium occurs when the agent presses the button. From an instantaneous point of view, sometimes the drink arrives, and the goal is attained ( + p → ☺), but sometimes only the LED light turns on ( + p → ). To reach its goal, the agent needs to coordinate a chain of actions (insert the coin and press the button), and it can do that by using a synthetic element which represents this internal machine condition ( or ).
Second, the agent does not have direct perceptive indications of whether the machine is working or out of service. The agent needs to interact with the machine to discover its operational condition ( or ). This second problem is a little different from the first, but it can be solved in the same way. The agent creates a test action, which enables it to discover this hidden property before inserting the coin. It can do that by pressing the button.
Table 1 presents the set of decider schemas that CALM learns for the drink machine problem. We remark that Ξ4 presents the unpredictable transformation that follows the action g (go to another machine), due to the uncertainty about the operational state of the next machine. The test is represented by Ξ1 and Ξ2, which can be excited simultaneously because of the ambiguity of the result that follows the activation of Ξ4, but whose anticipations reveal the operational state of the machine.
Ξ1 = ( # # # + p → )
Ξ2 = ( # # + p → )
Ξ3 = ( # # + p → ☺)
Ξ4 = ( # # # # + g → # )
Ξ5 = ( # # # + i → )
Ξ6 = ( # # # + i → )
Related Works
CALM-POPDE is an original mechanism that enables an agent to incrementally create a world model during the course of its interaction. This work is a continuation of our previous work [START_REF] Alvares | Incremental Inductive Learning in a Constructivist Agent[END_REF], extended to deal with partially observable environments.
The pioneering work on Constructivist AI was presented by [START_REF] Drescher | Made-Up Minds: A Constructivist Approach to Artificial Intelligence[END_REF]. He proposed the first constructivist agent architecture (called the schema mechanism), which learns a world model by an exhaustive statistical analysis of the correlation between all the context elements observed before each action, combined with all resulting transformations. Drescher also suggested the necessity of discovering hidden properties by creating 'synthetic items'.
The schema mechanism represents a strongly coherent model; however, there are no theoretical guarantees of convergence. Another restriction is the computational cost of the kind of operations used in the algorithm. The need for space and time resources increases exponentially with the problem size. Nevertheless, many other researchers have presented alternative models inspired by Drescher, such as [START_REF] Yavuz | PAL: A Model of Cognitive Activity[END_REF], [START_REF] Birk | Schemas and Genetic Programming[END_REF], (Morrison et al. 2001), [START_REF] Chaput | The Constructivist Learning Architecture[END_REF] and [START_REF] Holmes | Schema Learning: Experience-based Construction of Predictive Action Models[END_REF].
Our mechanism (CALM) differs from these previous works because we limit the problem to the discovery of deterministic regularities (even in partially deterministic environments), and in this way, we can implement direct induction methods in the agent learning mechanism. This approach presents a low computational cost, and it allows the agent to learn incrementally and find high-level regularities.
We are also inspired by [START_REF] Rivest | Diversity-based inference of finite automata[END_REF] and (Ron 1995), who suggested the notion of a state signature as a historical identifier for DFA states, an idea strongly reinforced recently by [START_REF] Holmes | Looping Suffix Tree-Based Inference of Partially Observable Hidden State[END_REF], who developed the idea of learning anticipations through the analysis of relevant pieces of history.
Discussion and Next Steps
The CALM mechanism can provide autonomous adaptive capability to an agent, because it is able to incrementally construct knowledge to represent the deterministic regularities observed during its interaction with the environment, even in partially deterministic universes.
We have also presented an extension to the basic CALM mechanism that enables it to deal with partially observable environments, detecting high-level regularities. The strategy is the induction and prediction of unobservable properties, represented by synthetic elements.
Synthetic elements enable the agent to go beyond the limit of instantaneous and sensorimotor regularities. In the agent's mind, synthetic elements can represent three kinds of "unobservable things": (a) hidden properties in partially observed worlds, or sub-environment identifiers in discrete non-stationary worlds; (b) markers for necessary steps in a sequence of actions, or for different possible agent points of view; and (c) abstract properties, which do not exist as such, but which are powerful and useful tools for the agent, enabling it to organize the universe at higher levels.
With these new capabilities, CALM becomes able to go beyond sensorial perception, constructing more abstract terms to represent the universe, and to understand its own reality at more complex levels.
CALM can be very efficient at constructing models of environments that are partially but highly deterministic (1 > ∂ >> 0), partially but highly observable (1 > ω >> 0), and whose properties are partially but highly informative (1 > ϕ >> 0). Several problems found in the real world present these characteristics.
Currently, we are improving CALM to enable it to form action sequences by chaining schemas. This will allow the creation of composite actions and plans. We are also including methods to search for good policies of actions using the world model constructed by the learning functions.
The next research steps comprise: formally demonstrating the mechanism's efficiency and correctness; making comparisons between CALM and related solutions proposed by other researchers; and analyzing the mechanism's performance on more complex problems.
Future work includes the extension of CALM to deal with non-deterministic regularities, noisy environments, and continuous domains.
Figure 1. Schematic Tree. Each node is a schema composed of three vectors: context, action and expectation. The leaf nodes are decider schemas.
Figure 2. Differentiation method; (a) experienced situation and action; (b) activated schema; (c) real observed result; (d) sub-tree generated by differentiation.
Figure 3. Adjustment method; (a) experienced situation and action; (b) activated schema; (c) real observed result; (d) schema expectation reduction after adjustment.
Figure 4. Integration method; (a) sub-tree after some adjustment; (b) an integrated schema replaces the sub-tree.
Figure 5. Synthetic element creation method; (d) enlarged context and expectation vectors, and differentiation using the synthetic element.
Figure 6. The flip problem.
Figure 7. Initial schematic tree for the flip problem. The vectors represent synthetic elements (Fh), perceptible elements (Fp) and actions (A). The decider schemas show the expectations.
Figure 11. Final schematic tree to solve the flip problem.
Table 1. Schemas for the drink machine problem. |
00176230 | en | [
"sdv.bbm.gtp"
] | 2024/03/05 22:32:13 | 2007 | https://hal.science/hal-00176230/file/carbocyaninecompl2.pdf | Sylvie Luche
Cécile Lelong
Hélène Diemer
Alain Van Dorsselaer
Thierry Rabilloud
email: thierry.rabilloud@cea.fr
Ultrafast coelectrophoretic fluorescent staining of
come
Introduction
The analysis of proteins separated on electrophoresis gels, mono-or bidimensional, relies on the ability to detect them in a suitable way. The ideal protein detection protocol should be sensitive, homogeneous from one protein to another and linear throughout a wide dynamic range. In addition, suitable features include compatibility with digestion and mass spectrometry, speed, convenience and low cost. Up to now, no protein detection method matches perfectly these prerequisites. Colloidal Coomassie Blue [START_REF] Neuhoff | Improved staining of proteins in polyacrylamide gels including isoelectric focusing gels with clear background at nanogram sensitivity using Coomassie Brilliant Blue G-250 and R-250[END_REF] is simple, cheap and rather linear, but its lacks sensitivity and requires long staining times for optimal sensitivity. Conversely, silver staining is sensitive, but labor-intensive, and its linearity is limited. Moreover, despite recent advances [START_REF] Richert | About the mechanism of interference of silver staining with peptide mass spectrometry[END_REF], [START_REF] Chevallet | Improved mass spectrometry compatibility is afforded by ammoniacal silver staining[END_REF], its compatibility is limited. Fluorescent detection methods, on their side, show a convenient linearity and an adequate sensitivity, although the latter varies from the one of colloidal Coomassie [START_REF] Steinberg | SYPRO orange and SYPRO red protein gel stains: one-step fluorescent staining of denaturing gels for detection of nanogram levels of protein[END_REF] to the one of silver [START_REF] Berggren | Background-free, high sensitivity staining of proteins in one-and two-dimensional sodium dodecyl sulfate-polyacrylamide gels using a luminescent ruthenium complex[END_REF], [START_REF] Mackintosh | A fluorescent natural product for ultra sensitive detection of proteins in one-dimensional and two-dimensional gel electrophoresis[END_REF] . Although superior to the one of silver staining, their compatibility with mass spectrometry does not always equal the one of Coomassie Blue, and this has been unfortunately shown to be the case for Sypro Ruby [START_REF] Rabilloud | A comparison between Sypro Ruby and ruthenium II tris (bathophenanthroline disulfonate) as fluorescent stains for protein detection in gels[END_REF] and Deep Purple [START_REF] Chevalier | Different impact of staining procedures using visible stains and fluorescent dyes for large-scale investigation of proteomes by MALDI-TOF mass spectrometry[END_REF], i.e. the most sensitive variants. In addition to this drawback, optimal sensitivity requires rather long staining times, and the cost of the commercial reagents can become a concern when large series of gels are to be produced.
Last but not least, all these staining methods take place along with gel fixation, i.e. protein insolubilization in the gel. This means in turn that recovery of proteins after staining, e.g. for electroelution or blotting purposes, is rather problematic. There are however a few exceptions to the latter rule. Protein staining with zinc and imidazole [START_REF] Castellanos-Serra | Detection of biomolecules in electrophoresis gels with salts of imidazole and zinc II: a decade of research[END_REF] is rather sensitive, but absolutely not linear. In addition, protein excision for mass spectrometry is difficult because of poor contrast. Fluorescent staining with Nile Red [START_REF] Daban | Use of the hydrophobic probe Nile red for the fluorescent staining of protein bands in sodium dodecyl sulfate-polyacrylamide gels[END_REF] is rapid and does not require fixation. However, the staining is quite photolabile, which complicates imaging and excision, and the sensitivity is limited. Finally, staining with Sypro Tangerine [START_REF] Steinberg | Fluorescence detection of proteins in sodium dodecyl sulfate-polyacrylamide gels using environmentally benign, nonfixative, saline solution[END_REF] just requires a single bath in saline solution containing the fluorophore. However, its cost and limited sensitivity have limited its use. So the situation is worse in this category of non-fixing detection than in the case of classical detection after fixation. We report here protein detection with carbocyanines in SDS gels. This method is most efficient by co-electrophoresis of proteins with the fluorophore. This provides in turn excellent staining speed (30 minutes between end of electrophoresis and imaging). In addition, the method is cost-effective and compatible with several downstream processes, e.g. mass spectrometry or blotting. Finally, the sensitivity is intermediate between the one of Coomassie blue and the one of ruthenium complexes.
Material and methods
Samples
Molecular weight markers (broad range, Bio-Rad) were diluted down to 10 ng/µl for each band in SDS buffer (Tris-HCl 125mM pH 7.5, containing 2% (w/v) SDS, 5% (v/v) thioglycerol, 20% (v/v) glycerol and 0.005% (w/v) bromophenol blue). The diluted solution was heated in boiling water for 5 minutes. A tenfold dilution in SDS buffer was performed to get a 1 ng/µl per protein dilution. JM 109 E. coli cells were grown up to an OD of 0.5 in LB medium. Bacteria were collected by centrifugation (2000g, 5 minutes) and washed twice in PBS, and once in isotonic wash buffer (Tris-HCl 10mM pH 7.5, 0.25M sucrose, 1mM EDTA). The final pellet was suspended in its own volume of isotonic wash buffer, transferred to an ultracentrifuge tube, and 4 volumes of concentrated lysis solution (8.75M urea, 2.5M thiourea, 5% CHAPS, 50mM DTT and 25mM spermine base) were added. After lysis at room temperature for 30 minutes, the viscous lysate was centrifuged at 200,000g for 1 hour at room temperature. The supernatant was collected, the protein concentration was estimated and the solution was made 0.4% (w/v) in carrier ampholytes (Pharmalyte 3-10). The solution was stored frozen at -20°C until use. J774 cells and HeLa cells were grown in spinner flasks in DMEM + 5% fetal calf serum up to a density of 1 million cells/ml. The cells were collected by centrifugation (1000g, 5 minutes), washed and lysed as described for bacteria.
Electrophoresis
SDS electrophoresis
10%T gels (160x200x1.5 mm) were used for protein separation. The Tris-taurine buffer system was used [START_REF] Tastet | A versatile electrophoresis system for the analysis of high-and low-molecular-weight proteins[END_REF], operated at an ionic strength of 0.1 and a pH of 7.9. The final gel composition is thus Tris 180mM, HCl 100 mM, acrylamide 10% (w/v), bisacrylamide 0.27%. The upper electrode buffer is Tris 50mM, taurine 200mM, SDS 0.1%. The lower electrode buffer is Tris 50mM, glycine 200mM, SDS 0.1%. For 1D SDS gels, a 4% stacking gel in Tris 125mM, HCl 100mM was used. No stacking gel was used for 2D electrophoresis.
The gels were run at 25V for 1 hour, then at 12.5W per gel until the dye front had reached the bottom of the gel.
Alternatively, the standard Tris-glycine system, operating at pH 8.8 and an ionic strength of 0.0625, was used.
IEF
Home-made 160mm long 4-8 or 3-10.5 linear pH gradient gels were cast according to published procedures [START_REF] Rabilloud | Sample application by in-gel rehydration improves the resolution of two-dimensional electrophoresis with immobilized pH gradients in the first dimension[END_REF]. Four mm-wide strips were cut, and rehydrated overnight with the sample, diluted in a final volume of 0.6ml of rehydration solution (7M urea, 2M thiourea, 4% CHAPS and 100mM dithiodiethanol [START_REF] Rabilloud | Improvement of the solubilization of proteins in two-dimensional electrophoresis with immobilized pH gradients[END_REF], [START_REF] Luche | About thiol derivatization and resolution of basic proteins in two-dimensional electrophoresis[END_REF]). The strips were then placed in a multiphor plate, and IEF was carried out with the following electrical parameters: 100V for 1 hour, then 300V for 3 hours, then 1000V for 1 hour, then 3400 V up to 60-70 kVh.
After IEF, the gels were equilibrated for 20 minutes in Tris 125mM, HCl 100mM, SDS 2.5%, glycerol 30% and urea 6M. They were then transferred on top of the SDS gels and sealed in place with 1% agarose dissolved in Tris 125mM, HCl 100mM, SDS 0.4% and 0.005% (w/v) bromophenol blue. Electrophoresis was carried out as described above.
Detection on gels
Colloidal coomassie blue staining was performed using a commercial product (G-Biosciences) purchased from Agro-Bio (La Ferté Saint Aubin, France).
Silver staining was performed according to the fast silver staining method of Rabilloud [START_REF] Rabilloud | A comparison between low background silver diammine and silver nitrate protein stains[END_REF].
Ruthenium fluorescent staining was performed according to Lamanda et al [START_REF] Lamanda | Improved Ruthenium II tris (bathophenantroline disulfonate) staining and destaining protocol for a better signal-tobackground ratio and improved baseline resolution[END_REF], and Sypro Tangerine staining was performed in PBS according to Steinberg et al [START_REF] Steinberg | Fluorescence detection of proteins in sodium dodecyl sulfate-polyacrylamide gels using environmentally benign, nonfixative, saline solution[END_REF]. The gels were imaged with a Fluorimager laser scanner, operating at 488nm excitation wavelength.
For co-electrophoretic carbocyanine staining, the desired carbocyanine (all were purchased from Fluka, Switzerland) was first dissolved at 30mM in DMSO. This stock solution is stable for several weeks at room temperature, provided it is stored in the dark. Various carbocyanines were tested, varying in their aromatic nucleus (oxacarbocyanine, thiacarbocyanine and tetramethylindocarbocyanine) and in the size of the side chains (ethyl to octadecyl). All carbocyanines can be excited at 302nm on a UV table. Oxacarbocyanines are also excited at 488nm, while thia- and indo-carbocyanines can be excited at 532nm. The carbocyanine of interest was dissolved at 3µM (final concentration) in the upper electrode buffer. The fluorescent electrode buffer was used in place of standard buffer. After electrophoresis, the colored gel was rinsed with 5-10 volumes of water. One 15 minutes rinse is sufficient for Tris-glycine gels operating at an ionic strength of 0.0625. Two 15 minutes rinses are required for gels operating at an ionic strength of 0.1. The gels were then visualized on a UV table or using a laser scanner. Alternate washing solutions (e.g. 5% acetic acid, or 5% acetic acid + 20% ethanol, or 0.05% SDS) were also tested. They often required longer washing times to provide contrast, without leading to increased sensitivity. For staining decay experiments, the gels were stained as described above, using water as a contrasting agent. After the initial rinses, the gels were scanned, and then returned to their water bath without shaking. The gels were then scanned at hourly intervals, and returned to their bath between each scan. For post-electrophoretic carbocyanine staining, the general scheme described by Malone [START_REF] Malone | Practical aspects of fluorescent staining for proteomic applications[END_REF] was used and adapted as follows. After electrophoresis, the gels were fixed overnight in 2% phosphoric acid, 30% ethanol and 0.004% SDS. They were then washed 3 x 20 minutes in 2% phosphoric acid, 0.004% SDS, and then equilibrated for 4 hours in 2% phosphoric acid, 0.004% SDS containing 3 µM of the desired carbocyanine. Visualization could be carried out directly from the carbocyanine bath on a UV table or using a laser scanner.
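As a practical note, the amounts involved in preparing the fluorescent electrode buffer follow directly from the concentrations given above. The sketch below also checks the probe-to-SDS molar ratio discussed in the Results section, assuming the usual molecular weight of about 288 g/mol for SDS; it is a convenience calculation only.

# Minimal sketch: preparing the fluorescent upper electrode buffer.
stock_mM, final_uM = 30.0, 3.0
dilution_factor = stock_mM * 1000.0 / final_uM
print(dilution_factor)                      # 10000-fold dilution
print(1000.0 * 1000.0 / dilution_factor)    # 100 µl of DMSO stock per litre of electrode buffer

# Molar ratio of SDS (0.1% w/v, assumed MW ~288.4 g/mol) to carbocyanine (3 µM).
sds_mM = 1.0 / 288.4 * 1000.0               # ~3.47 mM
print(sds_mM / (final_uM / 1000.0))         # ~1156, i.e. roughly one probe molecule per 1000 SDS molecules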
Image analysis
Images acquired on a Fluorimager laser scanner (488nm excitation wavelength, no emission filter, 200 micron pixel size) were analyzed directly with the ImageQuant software provided with the instrument for the SDS-PAGE gels of markers. For two-dimensional gels, the images were converted to the TIFF format, and then analyzed with the Delta2D software (Decodon, Germany). The default detection parameters calculated by the software were used and no manual edition of the spots was performed.
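For readers without access to the vendor packages, the band quantitation underlying the dilution series can be approximated with a few lines of array processing. The sketch below is a generic illustration (local background subtraction followed by integration) and not the actual ImageQuant or Delta2D algorithm; it assumes the band is located away from the image edges.

import numpy as np

# Minimal sketch: integrated band intensity after subtracting a local background
# estimated from a frame of pixels surrounding the band region.
def band_volume(image, rows, cols, border=5):
    band = image[rows[0]:rows[1], cols[0]:cols[1]].astype(float)
    frame = image[rows[0] - border:rows[1] + border, cols[0] - border:cols[1] + border].astype(float)
    frame = frame.copy()
    frame[border:-border, border:-border] = np.nan      # mask the band itself
    background = np.nanmedian(frame)                     # robust local background estimate
    return float(np.sum(band - background))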
Blotting
Proteins were transferred on PVDF membranes using the semi-dry blotting system of Kyhse-Andersen [START_REF] Kyhse-Andersen | Electroblotting of multiple gels: a simple apparatus without buffer tank for rapid transfer of proteins from polyacrylamide to nitrocellulose[END_REF]. Gels can be transferred directly after electrophoresis. Otherwise, gels stained with carbocyanines were first re-equilibrated for 30 minutes in carbocyanine-free electrode buffer prior to transfer. Mock-colored gels, i.e. gels run with a colorless-electrode buffer but rinsed 2x15 minutes in water, were also reequilibrated prior to transfer. After transfer, the proteins were detected on the PVDF sheets with india ink [START_REF] Hancock | India ink staining of proteins on nitrocellulose paper[END_REF].
Mass spectrometry
Spot excision
For fluorescent stains, spot excision was performed on a UV table operating at 302nm. The spots were collected in microtiter plates. The spots were generally not destained prior to acetonitrile washing. However, the carbocyanine-stained spots were sometimes fixed in 50% ethanol for 30 minutes. The solvent was then removed and the spots were stored at -20°C until use.
In gel digestion
In gel digestion was performed with an automated protein digestion system, MassPrep Station (Waters, Manchester, UK). The gel plugs were washed twice with 50 µL of 25 mM ammonium hydrogen carbonate (NH4HCO3) and 50 µL of acetonitrile. The cysteine residues were reduced by 50 µL of 10 mM dithiothreitol at 57°C and alkylated by 50 µL of 55 mM iodoacetamide. After dehydration with acetonitrile, the proteins were cleaved in gel with 10 µL of 12.5 ng/µL of modified porcine trypsin (Promega, Madison, WI, USA) in 25 mM NH4HCO3. The digestion was performed overnight at room temperature. The generated peptides were extracted with 60% acetonitrile in 5% formic acid.
MALDI-TOF-MS analysis
MALDI-TOF mass measurements were carried out on an Ultraflex TM TOF/TOF (Bruker Daltonik GmbH, Bremen, Germany). This instrument was used at a maximum accelerating potential of 25kV in positive mode and was operated in reflectron mode. The samples were prepared by standard dried droplet preparation on stainless steel MALDI targets using α-cyano-4-hydroxycinnamic acid as matrix. The external calibration of MALDI mass spectra was carried out using singly charged monoisotopic peaks of a mixture of bradykinin 1-7 (m/z=757.400), human angiotensin II (m/z=1046.542), human angiotensin I (m/z=1296.685), substance P (m/z=1347.735), bombesin (m/z=1619.822), renin (m/z=1758.933), ACTH 1-17 (m/z=2093.087) and ACTH 18-39 (m/z=2465.199). To achieve mass accuracy, internal calibration was performed with tryptic peptides coming from autolysis of trypsin, with monoisotopic masses at m/z = 842.510, m/z = 1045.564 and m/z = 2211.105, respectively. Monoisotopic peptide masses were automatically annotated using Flexanalysis 2.0 software. Peaks were automatically collected with a signal to noise ratio above 4 and a peak quality index greater than 30.
MS data analysis
Monoisotopic peptide masses were assigned and used for database searches using the search engine MASCOT (Matrix Science, London, UK) [START_REF] Perkins | Probability-based protein identification by searching sequence databases using mass spectrometry data[END_REF]. All proteins present in Swiss-Prot were used without any pI and Mr restrictions. The peptide mass error was limited to 50 ppm, one possible missed cleavage was accepted.
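The 50 ppm tolerance used in the MASCOT searches is a simple relative mass error. The sketch below illustrates this filter, using the trypsin autolysis masses quoted above as a sanity check for the internal calibration; the observed value in the example is invented for illustration, and the actual peak picking and matching are of course performed by Flexanalysis and MASCOT.

# Minimal sketch: relative mass error in ppm and the corresponding tolerance filter.
TRYPSIN_AUTOLYSIS_MZ = (842.510, 1045.564, 2211.105)   # internal calibration peaks

def ppm_error(observed, theoretical):
    return (observed - theoretical) / theoretical * 1.0e6

def within_tolerance(observed, theoretical, tol_ppm=50.0):
    return abs(ppm_error(observed, theoretical)) <= tol_ppm

print(ppm_error(842.515, TRYPSIN_AUTOLYSIS_MZ[0]))         # ~5.9 ppm, illustrative observed value
print(within_tolerance(842.515, TRYPSIN_AUTOLYSIS_MZ[0]))  # True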
Results and discussion
Three different principles can be used for fluorescent detection on gels. The most obvious one, covalent labelling, has been used with limited sensitivity in the past with UV-excitable probes such as fluorescamine [START_REF] Jackowski | Fluorescamine staining of nonhistone chromatin proteins as revealed by two-dimensional polyacrylamide gel electrophoresis[END_REF] or MDPF [START_REF] Urwin | A multiple high-resolution mini two-dimensional polyacrylamide gel electrophoresis system: imaging two-dimensional gels using a cooled charge-coupled device after staining with silver or labeling with fluorophore[END_REF]. An important improvement of this approach has been made recently with the introduction of multiplex labelling [START_REF] Unlu | Difference gel electrophoresis: a single gel method for detecting changes in protein extracts[END_REF]. However, the low number of labelling sites on proteins limits the absolute sensitivity of this approach. While this can be partly compensated by the increase of software performance for pure detection purposes, the approach suffers from a weak signal, which renders spot excision for characterization purposes difficult. The second obvious approach is to use a probe that binds non-covalently to proteins, in which case the contrast between the fluorescent protein zones and the background is governed by the differential concentration of the probes in the two sites. This approach is used in the very popular staining methods with ruthenium complexes [START_REF] Berggren | Background-free, high sensitivity staining of proteins in one-and two-dimensional sodium dodecyl sulfate-polyacrylamide gels using a luminescent ruthenium complex[END_REF] [START_REF] Lamanda | Improved Ruthenium II tris (bathophenantroline disulfonate) staining and destaining protocol for a better signal-tobackground ratio and improved baseline resolution[END_REF] or with epinococconone [START_REF] Mackintosh | A fluorescent natural product for ultra sensitive detection of proteins in one-dimensional and two-dimensional gel electrophoresis[END_REF]. The last approach uses molecules whose fluorescence depends on the chemical environment in which they are placed. Classical examples are Nile red [START_REF] Daban | Use of the hydrophobic probe Nile red for the fluorescent staining of protein bands in sodium dodecyl sulfate-polyacrylamide gels[END_REF], Sypro Orange and Sypro Red [START_REF] Steinberg | SYPRO orange and SYPRO red protein gel stains: one-step fluorescent staining of denaturing gels for detection of nanogram levels of protein[END_REF] and Sypro Tangerine [START_REF] Steinberg | Fluorescence detection of proteins in sodium dodecyl sulfate-polyacrylamide gels using environmentally benign, nonfixative, saline solution[END_REF]. This latter approach suffers generally from a lack of sensitivity. Despite commercial claims, their sensitivity is close to that of colloidal Coomassie Blue, at a substantially higher cost. This is not true for Nile red, but the limited solubility of this dye and its photolability limit the practicability of this approach. As a matter of fact, the ideal fluorescent probe in this family should combine a high absorptivity and an important differential quantum yield between hydrophilic and lipophilic environments, while remaining water-soluble. Last but not least, the molecule should have a high photostability to provide flexibility to the staining, e.g. for spot excision. Carbocyanines are a family of molecules that show many of these positive features.
They are indeed at the basis of the high sensitivity multiplex labelling methods [START_REF] Unlu | Difference gel electrophoresis: a single gel method for detecting changes in protein extracts[END_REF]. However, initial trials of incubating electrophoresis gels in dilute carbocyanine solutions, mimicking the Sypro Orange protocol, did not give any interesting result. We therefore tried to infuse the dye in the gel by co-electrophoresis along with the proteins, as has been described with Coomassie Blue [START_REF] Schagger | Coomassie blue-sodium dodecyl sulfatepolyacrylamide gel electrophoresis for direct visualization of polypeptides during electrophoresis[END_REF]. Interesting results were obtained using water or water-alcohol-acid mixtures as the contrast-developing agent after electrophoresis, as shown in figure 1. However, acidic fixatives showed a tendency to develop an intense background in the low molecular weight region of the gels when carrier ampholytes are present, as in the case of 2D electrophoresis. Furthermore, the suggestion to lower the SDS concentration [START_REF] Schagger | Coomassie blue-sodium dodecyl sulfatepolyacrylamide gel electrophoresis for direct visualization of polypeptides during electrophoresis[END_REF] induces some tailing in the 2D spots, as shown in figure 2.
These results prompted us to carry out a more thorough investigation of the structure-efficiency relationships in the carbocyanine family. Carbocyanines can vary in the side chains and/or in the aromatic nucleus, as shown on figure 3. As the staining is supposed to be driven by the interaction between the probe and the protein-SDS complexes, a long side chain is supposed to increase the affinity for lipophilic environments and thus give a stronger fluorescence. However, a long side chain also increases the likelihood of self-aggregation of the probe (inducing background) and also decreases the solubility of the probe. The results of such tests are shown in figure 4. This figure clearly shows an optimum at moderately long side chains. Too short side chains (ethyl) show poor sensitivity, while very long side chains (octadecyl) are not very soluble and cannot be used at optimal concentrations, thereby limiting sensitivity. Initial tests were carried out at 3µM carbocyanine concentration, i.e. 1 molecule of probe per 1000 molecules of SDS. While lowering this concentration decreased the sensitivity of staining, increasing it did not enhance staining. We therefore kept the 3µM concentration, which also allowed a very economical staining. We also tried to replace the oxacarbocyanines with indocarbocyanines or with thiacarbocyanines. Tests on a UV table, the only "universal" source able to excite all these molecules, suggested that oxacarbocyanines were the optimal family to work with (data not shown). At this point, we tried to determine the response factor of the staining for different proteins. To this purpose, we used 1D electrophoresis of serial dilutions of marker proteins, as shown in figure 1, and then quantified the fluorescence of the bands with the ImageQuant software. The results are shown on figure 5. The stain appears fairly linear for each protein, but different response factors are clearly apparent from one protein to another. Serum albumin showed the highest response factor, which might be linked to the propensity of this protein to bind various low molecular weight organic compounds. We then compared this stain with the most closely related stain operating under non-fixing conditions, i.e. Sypro Tangerine. The results are shown on figure 6. Sypro Tangerine is clearly inferior in sensitivity, especially in the low molecular weight region of the gel. This discrepancy between carbocyanine and Sypro Tangerine could however be due to the diffusion of low molecular weight proteins in the gel during non-fixing staining, suggesting that the carbocyanine stain could be indeed a fixing one due to a special property of these molecules. To rule out this possibility, we performed blotting experiments, which are shown in figure 7. While a deleterious effect of the staining process can be noted, the protein can still be transferred efficiently to the membrane, while the stain went through the membrane. Finally, we tested the sensitivity and the compatibility of this fluorescent stain with mass spectrometry, in comparison to standard methods such as colloidal Coomassie Blue [START_REF] Neuhoff | Improved staining of proteins in polyacrylamide gels including isoelectric focusing gels with clear background at nanogram sensitivity using Coomassie Brilliant Blue G-250 and R-250[END_REF] or ruthenium complexes [START_REF] Lamanda | Improved Ruthenium II tris (bathophenantroline disulfonate) staining and destaining protocol for a better signal-tobackground ratio and improved baseline resolution[END_REF].
The results are shown on figure 8 and on table 1. On the sensitivity level, the carbocyanine stain lies between Coomassie Blue and ruthenium complexes, while being much quicker to complete. A comparison of the homogeneity of staining was made between ruthenium, Coomassie Blue and carbocyanine. To this purpose, triplicate gels were run, stained and the resulting images were analyzed with the Delta2D software. The distributions of the standard deviations of the staining intensities are shown for the three methods in figure 9.
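The reproducibility figures of figure 9 amount to a per-spot relative standard deviation over the triplicate gels followed by a median. A minimal, generic sketch of that computation (not the Delta2D implementation) is given below, where the spot volume matrix is assumed to contain one row per matched spot and one column per replicate gel.

import numpy as np

# Minimal sketch: median relative standard deviation (in %) of matched spot volumes.
def median_rsd_percent(spot_volumes):
    volumes = np.asarray(spot_volumes, dtype=float)      # shape (n_spots, n_replicates)
    mean = volumes.mean(axis=1)
    std = volumes.std(axis=1, ddof=1)                    # sample standard deviation over replicates
    return float(np.median(100.0 * std / mean))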
On the mass spectrometry compatibility level, the carbocyanine stain was comparable to colloidal Coomassie Blue and to ruthenium complexes, as shown in Table 1 for spots detected with the three staining methods, probing a range of staining intensities and spread over various positions of the gels. Due to the non-fixing nature of the stain, we could use the spots directly as excised from the gels, i.e. without fixation, but with the buffer components still present. In this case, the usual spot washes prior to digestion were the only cleaning step. Alternatively, we could also fix the excised spots with classical acid-alcohol mixtures, thereby insolubilizing the proteins more efficiently in the gel but also providing more extensive cleaning by removal of electrophoresis buffer components. Finally, it can be noted that carbocyanines offer excellent resistance to photoinduced fading, allowing ample time for spot excision on a UV table.
Quite differently from the method described in patents [START_REF] Gee | Methods for detecting anionic and non-anionic compositions using carbocyanine dyes. US patent application 20050244976[END_REF], this cyanine staining method is based on the interaction of oxacarbocyanine molecules with SDS-protein complexes, and is thus nonfixing. As the cyanine fluorescence is maximal in lipophilic environments, the rationale of this method is to disrupt SDS micelles causing the background fluorescence, while keeping enough SDS at the protein containing sites to promote fluorescence. This is achieved by water rinses, which decrease the ionic strength of the gel, thereby increasing the critical micellar concentration of SDS [START_REF] Helenius | Properties of detergents[END_REF], and thus promoting SDS micelles dissociation. This means in turn that the protocol has to be adapted to the electrophoretic system used, as shown in figure 10. While a single rinse is optimal for systems operating at low ionic strength, such as the popular Laemmli system, increased rinses are required for systems operating at higher ionic strength, such as the Tris taurine system. While a non-fixing process provides interesting features (speed of staining, further mobilization of the proteins, e.g. for protein blotting), it also means in turn that the stain decreases and the proteins diffuse with time, as shown in figure 11. Evaluation of the total stain by the Delta 2D software showed a 10% decrease in total signal intensity over 6 hours, but it can easily be seen that the fuzziness of the stain increases. Another staining scheme can also be devised, starting from gels that are fixed in the presence of SDS, as described by Malone et al. [START_REF] Malone | Practical aspects of fluorescent staining for proteomic applications[END_REF]. Typical results are shown on figure 12. In this case, the short chain oxacarbocyanines proved more efficient than carbocyanines with longer chains.
Concluding remarks
Carbocyanine staining is based on the increase of fluorescence of these molecules in the SDS-protein complexes obtained after SDS PAGE. Two setups can be used. The long setup, starting from fixed gels, gives a stable staining pattern. However, it is not as sensitive as ruthenium complexes and takes almost as long, so that its only advantage is economical. The short setup, which uses co-electrophoresis of the fluorescent probe along with the proteins in the SDS gel, does not give a pattern that is stable over time, so that its applicability to very large gel series is limited. However, the staining pattern is stable for more than one hour, which allows ample time for image acquisition. In addition, this short setup provides a very fast (less than one hour), economical, sensitive (intermediate between Coomassie Blue and ruthenium complexes) and versatile stain because of its non-fixative nature.
Acknowledgments. The support from the Région Rhône-Alpes by a grant (analytical chemistry subcall, priority research areas call 2003-2006) is gratefully acknowledged.
Figure 1: Staining of molecular weight markers. Serial dilutions of broad range molecular weight markers were loaded onto a SDS gel (10% acrylamide, Tris taurine system). The standard composition is the following: MYO: myosin; GAL: beta galactosidase; PHO: glycogen phosphorylase; BSA: bovine serum albumin; OVA: ovalbumin; CAR: carbonic anhydrase; STI: soybean trypsin inhibitor; LYS: lysozyme. Loadings from left to right, per band of protein: 400ng, 200ng, 100ng, 50ng, 20ng, 10ng, 5ng, 2ng. The upper electrode buffer contained 3µM diheptyloxacarbocyanine and 0.1% SDS. After electrophoresis, the gels were rinsed 3x20 minutes prior to imaging. A: water rinses. B: rinses in 20% ethanol. C: rinses in 20% ethanol + 5% acetic acid.
Figure 2: Effect of SDS concentration. 200 micrograms of E. coli proteins were separated by two dimensional electrophoresis (4-8 pH gradients, Tris taurine system). The electrode buffer contained 3µM diheptyloxacarbocyanine and 0.1% SDS (panel A) or 0.05% SDS (panel B). After electrophoresis, the gels were soaked 3x20 minutes in 5% acetic acid, 20% ethanol and 75% water (by volume) prior to imaging. Note the stained crescent of ampholytes (panel A, arrow) and the deformation of the lower part of the gels and the vertical tailing of some spots induced by the reduced SDS concentration (panel B, arrows). To check that this tailing is not just a staining artefact, a second pair of gels was run with normal (panel C) and reduced (panel D) SDS concentrations, without carbocyanine staining, and silver stained. Vertical tailing was also observed in this case.
Figure 3: General structural formula of carbocyanines. Carbocyanines can vary in the side chains (R1 and R2), in the bridge length (n) (n=1 in carbocyanines and n=2 in dicarbocyanines) and in the aromatic nucleus, where the X and Y positions can be occupied by an oxygen atom (oxacarbocyanines), a sulfur atom (thiacarbocyanines) or a dimethyl substituted carbon atom (indocarbocyanines).
Figure 4: Evaluation of optimal chain length in carbocyanine staining. 200 micrograms of proteins (J774 cells) were separated by two dimensional electrophoresis (4-8 pH gradients, Tris taurine system). The gels were stained by co-electrophoretic carbocyanine staining using diethyl oxacarbocyanine (panel A), dipentyl oxacarbocyanine (panel B), diheptyl oxacarbocyanine (panel C), or dioctadecyl oxacarbocyanine (panel D). A sensitivity optimum appears at 5-7 carbons in the side chain.
Figure 5: Dynamic response factor for various proteins. The molecular weight marker proteins used in figure 1 were separated by SDS PAGE, as detailed in figure 1. The gels were stained by co-electrophoretic staining with diheptyloxacarbocyanine, using water as the contrasting agent. The fluorescence intensity was then measured by the ImageQuant software, and plotted against protein load for different proteins. Squares: bovine serum albumin; diamonds: hen ovalbumin; circles: rabbit phosphorylase; crosses: E. coli beta-galactosidase.
Figure 6: Comparison of carbocyanine staining with other commercial stains.
Figure 7: Blotting efficiency after carbocyanine staining.
Figure 8: Comparison of staining by different methods. 300 micrograms (panel A) or 150 micrograms (panels B and C) of proteins from a complete cell extract (J774 cells) were separated by two dimensional gel electrophoresis (pH 3-10), using the Tris-Taurine system, either under control conditions (panels A and B) or with co-electrophoretic carbocyanine staining (panel C). The gel shown in panel A was then stained with Coomassie Blue with a commercial product and without prior fixation (note the intense blue zone corresponding to carrier ampholytes). The gel shown in panel B was stained with a fluorescent ruthenium complex. The spots excised for further characterization with mass spectrometry are shown with arrows.
Figure 9: Relative standard deviation of spot quantitation with different detection methods. Triplicate gels were run according to the conditions and loadings detailed in figure 8. The spots were then quantitated and matched with the Delta2D software, and the relative standard deviation (rsd) for quantitation (i.e. standard deviation/mean, expressed in percent) was calculated for each staining method and displayed as a histogram. A: histogram for carbocyanine staining (2571 spots detected). B: histogram for ruthenium complex staining (3492 spots detected). C: histogram for colloidal Coomassie staining (2231 spots detected). The median rsd values (i.e. equal numbers of spots with a lower and higher rsd) are the following: Carbocyanine: 19.6%; Ruthenium: 20.1%; Coomassie Blue: 23.9%.
Figure 10: Effect of rinses in different buffer systems.
Figure 11: Stability of staining over time. 150 micrograms of proteins from a complete cell extract (J774 cells) were separated by two dimensional gel electrophoresis (4-8 pH gradients) and stained co-electrophoretically with carbocyanine. After the initial stain, the gels were left in water and scanned periodically. Gel A: starting gel (three water rinses, 15 minutes each). Gel B: two hours stay in water after the first scan. Gel C: three hours stay in water after the gel scan. Gel D: starting gel (three water rinses, 15 minutes each). Gel E: four hours stay in water after the first scan. Gel F: six hours stay in water after the gel scan. Two sets of gels were required because of gel breakage upon repetitive manipulation.
Figure 12: Fixing staining with carbocyanines. 300 micrograms of E. coli proteins were separated by two dimensional electrophoresis (3-10 pH gradients, Tris taurine system). The gels were stained co-electrophoretically with diheptyl oxacarbocyanine (panel A), or migrated without carbocyanine. The gels were then fixed and post-stained with dipentyl oxacarbocyanine (panel B) or diethyl oxacarbocyanine (panel C).
Table 1: MS analysis of proteins stained by different methods. Homologous spots excised from two-dimensional gels (4-8 linear pH gradients, 10% acrylamide) loaded with equal amounts of J774 proteins (200µg) and stained by various methods were digested, and the digests were analysed by MALDI mass spectrometry. First column: detection method: CBB = colloidal Coomassie Blue; OF = diheptyloxacarbocyanine, fixation of the spots post excision in aqueous alcohol; ONF = diheptyloxacarbocyanine, no fixation; Ru = ruthenium complex. Second and third columns: protein name and SwissProt accession number, respectively. Fourth column: theoretical Mw and pI. Fifth column: sequence coverage of the MS analysis. Sixth column: number of observed peptides matching the protein sequence in the database.
STAIN PROTEIN NAME ACC. MW / pI %C nb pep.
CBB HEAT SHOCK PROTEIN 84B Q71LX8 83229 / 4,57 56% 43
OF HEAT SHOCK PROTEIN 84B Q71LX8 83229 / 4,57 57% 49
ONF HEAT SHOCK PROTEIN 84B Q71LX8 83229 / 4,57 55% 47
Ru HEAT SHOCK PROTEIN 84B Q71LX8 83229 / 4,57 57% 45
CBB ELONGATION FACTOR 2 P58252 95122 / 6,42 25% 21
OF ELONGATION FACTOR 2 P58252 95122 / 6,42 34% 29
ONF ELONGATION FACTOR 2 P58252 95122 / 6,42 31% 24
Ru ELONGATION FACTOR 2 P58252 95122 / 6,42 54% 41
CBB HEAT SHOCK COGNATE 71 KDA PROTEIN P63017 70827 / 5,37 56% 36
OF HEAT SHOCK COGNATE 71 KDA PROTEIN P63017 70827 / 5,37 53% 37
ONF HEAT SHOCK COGNATE 71 KDA PROTEIN P63017 70827 / 5,37 63% 44
Ru HEAT SHOCK COGNATE 71 KDA PROTEIN P63017 70827 / 5,37 56% 29
CBB TRANSKELOTASE P40142 67588 / 7,23 50% 32
OF TRANSKELOTASE P40142 67588 / 7,23 56% 23
ONF TRANSKELOTASE P40142 67588 / 7,23 48% 25
Ru TRANSKELOTASE P40142 67588 / 7,23 64% 39
CBB UBIQUINOL-CYTOCHROME-C REDUCTASE COMPLEX CORE PROTEIN I, … Q9CZ13 52735 / 5,75 53% 30
OF UBIQUINOL-CYTOCHROME-C REDUCTASE COMPLEX CORE PROTEIN I, … Q9CZ13 52735 / 5,75 52% 22
ONF UBIQUINOL-CYTOCHROME-C REDUCTASE COMPLEX CORE PROTEIN I, … Q9CZ13 52735 / 5,75 38% 16 |
01762458 | en | [
"spi.mat",
"spi.meca",
"spi.meca.msmeca",
"spi.meca.mema",
"spi.meca.solid"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01762458/file/LEM3_COST_2018_MERAGHNI.pdf | E Tikarrouchine
G Chatzigeorgiou
F Praud
B Piotrowski
Y Chemisky
F Meraghni
email: fodil.meraghni@ensam.eu
Three-dimensional FE 2 method for the simulation of non-linear, rate-dependent response of composite structures
Keywords: Multi-scale finite element computation, FE 2 method, periodic homogenization, composite materials, elastoviscoplastic behavior, ductile damage
In this paper, a two scale Finite Element method (FE 2 ), is presented to predict the non-linear macroscopic response of 3D composite structures with periodic microstructure that exhibit a timedependent response. The sensitivity to the strain rate requires an homogenization scheme to bridge the scales between the macroscopic boundary conditions applied and the local evaluation of the strain rate. In the present work, the effective response of composite materials where the matrix has a local elasto-viscoplastic behavior with ductile damage are analyzed using periodic homogenization, solving simultaneously finite element problems at the microscopic scale (unit cell) and at the macroscopic scale. This approach can integrate any kind of periodic microstructure with any type of non-linear behavior for the constituents (without the consideration of non-linear geometric effects), allowing to treat complex mechanisms that can occur in every phase and at their interface. The numerical implementation of this simulation strategy has been performed with a parallel computational technique in ABAQUS/Standard,with the implementation of a set of dedicated scripts. The homogenization process is performed using a user-defined constitutive law that solve a set full-field non-linear simulations of a Unit Cell and perform the necessary homogenization of the mechanical quantities. The effectiveness of the method is demonstrated with three examples of 3D composite structures with plastic or viscoplastic and ductile damage matrix. In the first example, the numerical results obtained by this full field approach are compared with a semi-analytical solution on elastoplastic multilayer composite structure. The second example investigates the macroscopic response of a complex viscoplastic composite structure with ductile damage and is compared with the mean field Mori-Tanaka method. Finally, 3D corner structure consisting of periodically aligned short fibres composite is analysed under complex loading path. These numerical simulations illustrate the capabilities of the FE 2 strategy under non-linear regime.
when time dependent constitutive models describe the response of the constituents
Introduction
Polymer based composite materials are considered to be a good technological solution for automotive and aeronautic industries, thanks to their structural durability and their lightness. A major preoccupation of these industries is to predict the response of such structures with in-service loadings. This requires the development of predictive models that are able to capture the microstructure impact on the mechanical response, and the proper identification of the mechanical properties of the constituents. In this purpose, advanced modelling and simulation methods that integrate the effect of the microstructure is an active area of research. According to the bibliography, several numerical approaches have been proposed for the numerical simulation of the non-linear response of polymer based composite structures including: i) Phenomenological models, which predict the overall response of the composite materials without taking into account the effect of the different constituents observed at the microscopic scale. Several authors have proposed constitutive models that integrate various rheologies and deformation mechanisms, i.e. viscoelasticity [START_REF] Moreau | Analysis of thermoelastic effects accompanying the deformation of pmma and pc polymers[END_REF][START_REF] Akhtar | Thermo-mechanical large deformation response and constitutive modeling of viscoelastic polymers over a wide range of strain rates and temperatures[END_REF], viscoplasticity [START_REF] Duan | A uniform phenomenological constitutive model for glassy and semi crystalline polymers[END_REF][START_REF] Achour | Implicit implementation and consistent tangent modulus of a viscoplastic model for polymers[END_REF][START_REF] Drozdov | Cyclic viscoplasticity of solid polymers: The effects of strain rate and amplitude of deformation[END_REF], coupled viscoelasticity and viscoplasticity [START_REF] Miled | Coupled viscoelastic-viscoplastic modeling of homogeneous and isotropic polymers: Numerical algorithm and analytical solutions[END_REF][START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF], or even both coupled viscoelasticity, viscoplasticity and damage [START_REF] Launay | Cyclic behaviour of short glass fibre reinforced polyamide: Experimental study and constitutive equations[END_REF][START_REF] Launay | Multiaxial fatigue models for short glass fiber reinforced polyamide -part i: Nonlinear anisotropic constitutive behavior for cyclic response[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF];
ii) Multi-scale methods, that can be classified into two main categories: mean-field and full field approaches. The mean-field approaches are used to describe the behavior of composites for certain categories of microstructures through the Mori-Tanaka Method [START_REF] Mori | Average stress in matrix and average elastic energy of materials with misfitting inclusions[END_REF][START_REF] Doghri | Homogenization of two-phase elasto-plastic composite materials and structures: Study of tangent operators, cyclic plasticity and numerical algorithms[END_REF] or the self-consistent scheme [START_REF] Hill | A self-consistent mechanics of composite materials[END_REF]15,[START_REF] Walpole | On bounds for the overall elastic moduli of inhomogeneous systems-i[END_REF][START_REF] Milton | Variational bounds on the effective moduli of anisotropic composites[END_REF]. These methodologies have been developed in order to estimate the overall behavior of the composite using average stress and strain quantities for each material phase [START_REF] Castañeda | The effective mechanical properties of nonlinear isotropic composites[END_REF][START_REF] Meraghni | Micromechanical modelling of matrix degradation in randomly oriented discontinuous-fibre composites[END_REF][START_REF] Meraghni | Implementation of a constitutive micromechanical model for damage analysis in glass mat reinforced composite structures[END_REF]. These methods have been proved to be accurate for the linear cases. However, for non-linear constitutive laws, especially when the matrix phase exhibits a non-linear behavior, the response of these approaches is inaccurate. It is commonly observed in the literature that the response of the composite obtained by mean-field methods appears to be stiffer than the reality especially when the matrix is ductile and the reinforcements are stiffer [START_REF] Gavazzi | On the numerical evaluation of eshelby's tensor and its application to elastoplastic fibrous composites[END_REF][START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF][START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF]. The numerical simulation of these composite systems has necessitated the development of full-field approaches. To determine the response of a composite structure, accounting for the description of the microstructure, the so-called FE 2 method, appear to be an adequate solution. The major benefit of the FE 2 method is the ability to analyse complex mechanical problems with heterogeneous phases that present a variety of behavior at different scales. 
This idea was originally introduced by Feyel [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF], then this method was used and developed by several authors, for example [START_REF] Feyel | Fe2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre sic/ti composite materials[END_REF][START_REF] Nezamabadi | A multilevel computational strategy for handling microscopic and macroscopic instabilities[END_REF][START_REF] Nezamabadi | A multiscale finite element approach for buckling analysis of elastoplastic long fiber composites[END_REF][START_REF] Asada | Fully implicit formulation of elastoplastic homogenization problem for two-scale analysis[END_REF][START_REF] Tchalla | An abaqus toolbox for multiscale finite element compu-tation[END_REF][START_REF] Schröder | Algorithmic two-scale transition for magneto-electro-mechanically coupled problems: Fe2-scheme: Localization and homogenization[END_REF][START_REF] Papadopoulos | The impact of interfacial properties on the macroscopic performance of carbon nanotube composites. a fe2-based multiscale study[END_REF].
The majority of these works consider two-dimensional structures, which, while providing good study cases for the analysis of the capabilities of the method, are of limited interest for the practical prediction of the overall response of heterogeneous materials and composites, since the spatial arrangement of the phases is mostly three-dimensional.
In this paper, a two-level FE 2 method, based on the concept of periodic homogenization under the small strain assumption is implemented in a commercial FE code (ABAQUS/Standard). The method predicts the 3D non-linear macroscopic behavior of a composite with periodic microstructure by considering that each macroscopic integration point is a material point where the characteristics at the macroscopic scale are represented by its own unit cell, which includes the material and geometrical characteristics of the constituents (fibre, matrix) in the microstructure. Therefore, a multilevel finite element analysis has been developed using an implicit resolution scheme, with the use of a Newton-Raphson algorithm to solve simultaneously the non-linear system of equations on the two scales (macroscopic and microscopic).
The main advantage of this methodology is that it can account for any type of non-linear behavior of the constituents (plasticity, viscoelasticity, viscoplasticity and damage), as well as any type of periodic microstructure. The proposed FE 2 approach is implemented through a parallelization technique, leading to a significant reduction of the computational time.
The layout of this paper is as follows: in section 2, the theoretical formulation of the homogenization theory is described as well as the principle of scale transition between the local and the global fields. The section also presents the rate dependent constitutive law considered for the matrix phase. In section 3, details of the numerical implementation of the FE 2 method is given for a 3D non-linear problem in ABAQUS/Standard with the parallel implementation. In section 4, the approach is validated by comparing the FE 2 results with semi-analytical method on 3D multilayer composite structure. Afterwards, an example of 3D composite structure exhibiting non-uniform strain fields, in which the microstructure consists of an elastoviscoplastic polymer matrix with ductile damage, reinforced by short glass fibres is presented. The numerical results of the simulation are compared with the Mori-Tanaka method. Finally, the capabilities of this method are shown by simulating the mechanical response of a more complex structure under complex loading path with different strain rate.
Theoretical background and Scale transition
In this section, the periodic homogenization principle, as well as the transition between the two scales (microscopic and macroscopic) are presented. The principal objective is to determine the macroscopic quantities (stress and tangent modulus) that are obtained through periodic homogenization by accounting for the different mechanisms that exist in the microscopic level, as non-linear plastic/viscoplastic behavior with ductile damage of the matrix. After that, the local constitutive law of each constituent is presented, where a linear elastic law is chosen for the reinforcement and a constitutive model that incorporate elastoviscoplasticity coupled with ductile damage for the matrix.
Theoretical background for periodic homogenization
The objective of the periodic homogenization theory is to define a fictitious homogenized medium having a response equivalent to that of the heterogeneous medium which is representative of the microstructure. A periodic medium is characterized by a pattern repeated in the three spatial directions, which forms a unit cell. The theory of periodic homogenization is valid as long as the separation between the scales exists, i.e. the sizes of the unit cell are much smaller than the macroscopic sizes of the medium (x̄ >> x) (Fig. 1). In this paper, the overbar notation ( •̄ ) will be used to denote macroscopic quantities. The motion of any macroscopic and microscopic material points M(x̄) and M(x̄, x), respectively, is governed by the macroscopic and the microscopic equations (Tab. 1).
In Tab. 1, σ and ε represent the microscopic stress and strain tensors, σ̄ and ε̄ the macroscopic ones, b_v denotes the body forces, and V and V̄ are the volumes of the micro and the macro structure.
Tab. 1: Macroscopic and microscopic scale equations (∀ x̄ ∈ V̄, ∀ x ∈ V).
Equilibrium: div_x̄(σ̄) + b_v = 0 (macro-scale); div_x(σ) = 0 (micro-scale)
Kinematics: ε̄ = (1/2) (Grad_x̄(ū) + Grad_x̄^T(ū)) (macro-scale); ε = (1/2) (Grad_x(u) + Grad_x^T(u)) (micro-scale)
Constitutive law: σ̄ = F̄(x̄, ε̄) (macro-scale); σ = F(x̄, x, ε) (micro-scale)
Strain energy rate: Ẇε = σ̄ : ε̄̇ (macro-scale); Ẇε = σ : ε̇ (micro-scale)
Moreover, x, x̄, u and ū are the microscopic and the macroscopic position and displacement vectors, while F and F̄ are operators that define the micro and macro relationships between the stress and strain. Both F and F̄ are considered non-linear operators in this work.
The homogenization theory attempts to define the F̄ operator, which characterizes the macroscopic behavior, from the local behaviors defined by the F operator. In order to make this possible, it is necessary to introduce the concept of scale transition between the macro and the micro scales.
According to the average stress and strain theorems, it can be demonstrated that the stress and strain averages within the unit cell are equal to the stress and strains corresponding to uniform tractions and linear displacements respectively that are applied at its boundaries. These averages represent the macroscopic stress and strain tensors respectively. The relationships between the two scales are given by the following equations:
σ̄ = ⟨σ⟩ = (1/V) ∫V σ dV = (1/V) ∫∂V (σ · n) ⊗ x dS   (1)
ε̄ = ⟨ε⟩ = (1/V) ∫V ε dV = (1/(2V)) ∫∂V (u ⊗ n + n ⊗ u) dS   (2)
where n is the outgoing normal of the unit cell boundary ∂V, ⟨•⟩ is the mean operator and ⊗ the dyadic product.
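In a discretized unit cell, the volume averages of Eqs. (1) and (2) reduce to sums over the integration points weighted by their volumes. The sketch below illustrates this for the stress; the array names are purely illustrative.

import numpy as np

# Minimal sketch: macroscopic stress as the volume average of the microscopic stresses.
# sigma_ip: array of shape (n_integration_points, 6), microscopic stresses in Voigt notation.
# vol_ip:   array of shape (n_integration_points,), integration point volumes.
def macroscopic_stress(sigma_ip, vol_ip):
    sigma_ip = np.asarray(sigma_ip, dtype=float)
    vol_ip = np.asarray(vol_ip, dtype=float)
    return (sigma_ip * vol_ip[:, None]).sum(axis=0) / vol_ip.sum()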
Non-linear scale transition: incremental approach
Since the homogenization is based on the separation between the different scales, the connection between these scales (microscopic and macroscopic problems) should be defined in order to be able to predict the overall behavior of the structure.
Microscopic problem
The periodicity condition implies that the displacement field u of any material point located at x can be described by an affine part to which a periodic fluctuation u′ is added, as presented in Fig. 2. The periodic fluctuating quantity u′ takes the same value on each pair of opposite parallel sides of the unit cell, and the strain average produced by u′ is null [Eq. 5]. Therefore, the full strain average is indeed equal to the macroscopic strain [Eq. 6].
u(x̄, x, t) = ε̄(x̄, t) · x + u′(x̄, x, t)   (3)
ε(u) = ε̄ + ε(u′)   (4)
⟨ε(u′)⟩ = (1/V) ∫V ε(u′) dV = 0   (5)
⟨ε(u)⟩ = ε̄ + ⟨ε(u′)⟩ = ε̄   (6)
The traction vector σ · n is anti-periodic and satisfies the conditions of equilibrium within the unit cell. The micro problem is formulated as follows:
σ(x) = F(x, ε(u(x)))   ∀ x ∈ V,
div_x(σ(x)) = 0   ∀ x ∈ V,
u_i - u_j = ε̄ · (x_i - x_j)   ∀ x ∈ ∂V   (7)
where u_i, u_j, x_i and x_j are the displacements and the positions of each pair of opposite parallel material points of the unit cell boundary respectively, while ε̄ is the macroscopic strain. The relationship between the microscopic stress and the microscopic strain in incremental approaches is provided by the linearised expression [Eq. 8]:
∆σ(x) = C_t(x) : ∆ε(x)   ∀ x ∈ V,   (8)
where C t is the local tangent operator tensor defined as the numerical differentiation of the stress with respect to the total strain.
Macroscopic problem
The relationship between macroscopic stress and strain cannot be explicitly provided by a stiffness tensor. Nevertheless, for a given macroscopic strain, the macroscopic stress response can be computed using an implicit resolution scheme, where the local behavior is linearized and corrected at each strain increment [Eq. 8]. Then, using the same incremental methodology, the macroscopic behavior can also be linearized in order to predict the next increment. This linearization requires writing the macroscopic constitutive law in the form of Eq. 9. The equilibrium at the macroscopic level in the absence of body forces is given by Eq. 10.
∆σ̄(x̄) = C̄_t(x̄) : ∆ε̄(x̄)   ∀ x̄ ∈ V̄,   (9)
div_x̄(∆σ̄(x̄)) = 0   ∀ x̄ ∈ V̄,   (10)
where ∆σ̄(x̄) is the macroscopic stress tensor associated with the point x̄ of the macrostructure at each macroscopic strain increment.
The relationship between the macroscopic stress and strain is given in Voigt notation in Eq. 11.
The macroscopic tangent operator C t is recovered by computing the macroscopic stress resulting from the six elementary strain states written in Eq. 12 (also in Voigt notation) at each macroscopic strain increment:
(∆σ̄_1  ∆σ̄_2  ∆σ̄_3  ∆σ̄_4  ∆σ̄_5  ∆σ̄_6)^T = C̄_t · (∆ε̄_1  ∆ε̄_2  ∆ε̄_3  2∆ε̄_4  2∆ε̄_5  2∆ε̄_6)^T   (11)
∆ε̄^(1) = (K 0 0 0 0 0)^T,  ∆ε̄^(2) = (0 K 0 0 0 0)^T,  ∆ε̄^(3) = (0 0 K 0 0 0)^T,
∆ε̄^(4) = (0 0 0 K 0 0)^T,  ∆ε̄^(5) = (0 0 0 0 K 0)^T,  ∆ε̄^(6) = (0 0 0 0 0 K)^T   (12)
Then, the ij component of the tangent operator is given by the i th component of the stress vector calculated with the j th elementary strain state, divided by the j th component of the strain vector of the elementary strain state:
C̄_t,ij = ∆σ̄_i^(j) / K,   i, j = 1, 2, 3, 4, 5, 6.   (13)
Usually, K is chosen to be equal to 1.
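Equations (11)-(13) translate into a small post-processing loop: each elementary strain state yields one column of the macroscopic tangent operator. The sketch below assumes a user-supplied function solve_unit_cell_increment that returns the volume-averaged stress increment (Voigt notation) for a prescribed macroscopic strain increment; it illustrates the bookkeeping only, not the finite element solution itself.

import numpy as np

# Minimal sketch: assemble the 6x6 macroscopic tangent from the six elementary
# strain states of Eq. (12), following Eq. (13) with K = 1 by default.
def macroscopic_tangent(solve_unit_cell_increment, K=1.0):
    C_bar = np.zeros((6, 6))
    for j in range(6):
        d_eps = np.zeros(6)
        d_eps[j] = K                               # elementary strain state number j
        d_sig = solve_unit_cell_increment(d_eps)   # averaged stress increment (Voigt)
        C_bar[:, j] = np.asarray(d_sig) / K        # Eq. (13)
    return C_bar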
Local elasto-viscoplastic behavior with ductile damage for the matrix
The constitutive law of the matrix material is defined through a thermodynamically based phenomenological model for viscoplasticity and ductile damage in semi-crystalline polymers [START_REF] Lemaitre | Mechanics of solid materials[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF]. These materials exhibit a dissipative behavior that combines solid and fluid properties with some apparent stiffness reduction. The model is described by the rheological scheme given in the Fig. 3. It is composed of: one single linear spring, subjected to an elastic strain ε e , and a viscoplastic branch, subjected to a viscoplastic strain ε p which consists of a frictional element, a non-linear spring and a non-linear dash-pot. The linear spring and the viscoplastic branch are positioned in series. The model is formulated within the thermodynamics framework [START_REF] Lemaitre | Mechanics of solid materials[END_REF][START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF]. The state laws are obtained by differentiation of the Helmholtz potential with respect to the state variables. This potential is formulated as the sum of the stored energies of the spring and the viscoplastic branch.
ρψ(ε, r, ε_p, D) = (1/2) (ε - ε_p) : (1 - D) C_e : (ε - ε_p) + ∫_0^r R(ξ) dξ   (14)
The internal state variables ε p , r and D represent the viscoplastic strain, effective equivalent viscoplastic strain variable and the damage variable respectively. C e is the initial fourth order stiffness tensor of the single spring, classically defined for bulk isotropic materials. R is the hardening function, chosen under the form of the power law function, that must be increasing, positive and vanishes at r = 0:
R(r) = K r^n,   (15)
where K and n are the viscoplastic material parameters. According to the second principle of thermodynamics, dissipation is always positive or null (Clausius Duhem inequality). Assuming that the mechanical and thermal dissipations are uncoupled, the rate of the mechanical dissipated energy Φ is positive or zero and is given by the difference between the strain energy rate Ẇε and the stored energy rate ρ ψ (Eq. 16).
Φ = Ẇε - ρψ̇ = σ : ε̇ - ρ (∂ψ/∂ε : ε̇ + ∂ψ/∂ε_p : ε̇_p + ∂ψ/∂r ṙ + ∂ψ/∂D Ḋ) = σ : ε̇_p - R ṙ + Y Ḋ ≥ 0.   (16)
The viscoplasticity and damage are considered to be coupled phenomena [START_REF] Lemaitre | Coupled elasto-plasticity and damage constitutive equations[END_REF][START_REF] Krairi | A thermodynamically-based constitutive model for thermoplastic polymers coupling viscoelasticity, viscoplasticity and ductile damage[END_REF]. Consequently, the evolution of ε p , r and D are described by the normality of a convex indicative function that satisfies the above inequality:
F(σ, R, Y; D) = f(σ, R; D) + f_D(Y; D),  with  f(σ, R; D) = eq(σ)/(1 - D) - R - R_0  and  f_D(Y; D) = S/((β + 1)(1 - D)) · (Y/S)^(β+1)   (17)
In the last expression, the term f (σ, R; D) denotes the yield criterion function which activates the mechanism (ṙ > 0 if f > 0, else ṙ = 0). The function f is expressed in the effective stress space. f D is an additive term that takes into account the evolution of the damage at the same time as the viscoplasticity. eq(σ) denotes the von Mises equivalent stress, R 0 denotes the yield threshold, while S and β are damage related material parameters. The viscous effect is introduced by considering a relation between the positive part of f and ṙ through a function Q. This function is chosen under the form of the power law:
⟨f⟩_+ = Q(ṙ),   Q(ṙ) = H ṙ^m   (18)
where H and m are the material parameters. The function Q(ṙ) must be increasing, positive and null at ṙ = 0.
This type of model makes it possible to capture some well-known effects of thermoplastic polymers, namely the rate effect through the creep and relaxation phenomena, as well as the stiffness reduction due to the ductile damage. Tab. 2 summarizes the thermodynamic variables, the evolution laws and the von Mises type viscoplastic criterion of the model. In the table, Dev(σ) denotes the deviatoric part of the stress.
Tab. 2: State laws, evolution laws and criterion of the model.
ε (observable variable): σ = ρ ∂ψ/∂ε = (1 - D) C_e : (ε - ε_p)
r (state variable): R = ρ ∂ψ/∂r = R(r), with evolution law ṙ = -(∂F/∂R) λ̇ = λ̇
ε_p (state variable): -σ = ρ ∂ψ/∂ε_p, with evolution law ε̇_p = (∂F/∂σ) λ̇ = (3/2) (Dev(σ)/eq(σ)) ṙ/(1 - D)
D (state variable): Y = ρ ∂ψ/∂D, with evolution law Ḋ = (∂F/∂Y) λ̇ = (Y/S)^β ṙ/(1 - D)
Multiplier: λ̇ = ṙ. Criterion: f(σ, R; D) = eq(σ)/(1 - D) - R - R_0, active (λ̇ > 0 if f > 0), with ⟨f⟩_+ = Q(ṙ).
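For a given stress state and set of internal variables, the evolution laws of Tab. 2 together with the viscous law of Eq. (18) can be evaluated explicitly: the positive part of the criterion gives ṙ by inverting Q, and ṙ then drives the viscoplastic strain rate and the damage rate. The sketch below is a plain rate evaluation for illustration, with the damage driving force Y taken as an input computed from the state laws; it is not the implicit return-mapping algorithm used in an actual UMAT.

import numpy as np

# Minimal sketch: rates of the internal variables of the elasto-viscoplastic/damage model.
# sigma: 3x3 stress tensor; Y: damage driving force from the state laws; r, D: internal variables;
# K, n (hardening), H, m (viscosity), S, beta (damage), R0 (yield threshold): material parameters.
def internal_variable_rates(sigma, Y, r, D, K, n, H, m, S, beta, R0):
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    eq = np.sqrt(1.5 * np.sum(dev * dev))                    # von Mises equivalent stress
    f = eq / (1.0 - D) - K * r ** n - R0                     # criterion f(sigma, R; D), with R(r) = K r^n
    r_dot = (max(f, 0.0) / H) ** (1.0 / m)                   # inverted viscous law <f>+ = H rdot^m, Eq. (18)
    eps_p_dot = (1.5 * dev / eq) * r_dot / (1.0 - D) if eq > 0.0 else np.zeros((3, 3))
    D_dot = (Y / S) ** beta * r_dot / (1.0 - D)
    return r_dot, eps_p_dot, D_dot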
Multi scale FE computation and numerical implementation
To predict the macroscopic behavior of a composite structure, taking into account the effect of the microstructure, homogenization scheme within the framework of a FE 2 is an accurate solution. According to Feyel [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF], this approach considers that the macroscopic problem and the microscopic heterogeneous unit cell are solved simultaneously. On the macroscopic scale, the material is assumed as a homogenized medium with non-linear behavior. The macroscopic response is calculated by solving an appropriate periodic boundary value problem at the microscopic level within a homogenization scheme. The important macroscopic information (strain) passes to the unit cell through the constraint drivers. The concept of constraint drivers is explained in the next subsection.
It is pointed out that the response at the macroscopic scale is obtained by the homogenization process and is frequently called "homogenized". The macroscopic fields and tangent moduli depend on the microscopic response at each unit cell. Since the macroscopic strains are heterogeneous in the structure, the homogenized response varies at every macroscopic point, providing a type of spatial heterogeneity.
Unit cell computations for periodic homogenization using the concept of constraint drivers
The method of constraint drivers is a numerical technique which makes it possible to apply any state of macroscopic stress, strain or even mixed stress/strain on a periodic finite element unit cell. A more detailed exposition of this concept is given in [START_REF] Li | On the unit cell for micromechanical analysis of fibre-reinforced composites[END_REF][START_REF] Li | General unit cells for micromechanical analyses of unidirectional composites[END_REF][START_REF] Shuguang | Unit cells for micromechanical analyses of particle-reinforced composites[END_REF].
In the finite element framework, a unit cell for periodic media should be associated with a periodic mesh. This means that for each border node, there must be another node at the same relative position on the opposite side of the unit cell. The aim of the constraint drivers in a periodic homogenization approach is to apply a macroscopic strain ε̄ on the unit cell, taking into account the periodic boundary conditions. In practice, a displacement gradient is applied between each pair of opposite parallel border nodes (denoted by the indices i and j). This gradient is directly related to the macroscopic strain tensor ε̄_ij by the following general kinematic relationship:
u′_i = u′_j  ⟺  u_i - u_j = ε̄ · (x_i - x_j)   ∀ x ∈ ∂V   (19)
The proposed method introduces the six components of the macroscopic strain tensor as additional degrees of freedom (constraint drivers) that are linked to the mesh of the unit cell using the kinematic equation 19. The displacements of these additional degrees of freedom, noted u^cd_11, u^cd_22, u^cd_33, u^cd_12, u^cd_13 and u^cd_23, take the values of the components of the macroscopic strain tensor ε_11, ε_22, ε_33, 2ε_12, 2ε_13 and 2ε_23, respectively. Their dual forces, noted F^cd_11, F^cd_22, F^cd_33, F^cd_12, F^cd_13 and F^cd_23, permit to recover directly the corresponding components of the macroscopic stress tensor (Fig. 4) at the end of the unit cell calculations. Dividing the dual force by the unit cell volume leads to the corresponding macroscopic stresses.
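To fix ideas, the following lines give a minimal Python sketch of this kinematic link (illustrative only, not the authors' implementation; the Voigt ordering and all function names are assumptions made here).

import numpy as np

def strain_tensor_from_drivers(u_cd):
    """Rebuild the symmetric macroscopic strain tensor from the 6 driver values
    ordered as [eps11, eps22, eps33, 2*eps12, 2*eps13, 2*eps23]."""
    e11, e22, e33, g12, g13, g23 = u_cd
    return np.array([[e11,     g12 / 2, g13 / 2],
                     [g12 / 2, e22,     g23 / 2],
                     [g13 / 2, g23 / 2, e33    ]])

def displacement_jump(u_cd, x_i, x_j):
    """Jump u_i - u_j imposed between a pair of opposite periodic boundary nodes (Eq. 19)."""
    return strain_tensor_from_drivers(u_cd) @ (np.asarray(x_i) - np.asarray(x_j))

def macroscopic_stress(f_cd, cell_volume):
    """Dual (reaction) forces on the drivers divided by the cell volume give the macro stress."""
    return np.asarray(f_cd) / cell_volume

# Example: pure shear 2*eps12 = 0.02 on a unit cube, node pair on opposite x-faces.
u_cd = np.array([0.0, 0.0, 0.0, 0.02, 0.0, 0.0])
print(displacement_jump(u_cd, x_i=[1.0, 0.3, 0.3], x_j=[0.0, 0.3, 0.3]))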
Concept and numerical algorithm of FE 2 method
After defining the concept of constraint drivers, the implementation of a two-scale finite element approach is the next step in the computational homogenization framework. The proposed method lies within the general category of multi-scale models. In this method the macroscopic constitutive behavior is calculated directly from the unit cell, providing the geometry and the phenomenological constitutive equations of each constituent. The FE 2 method consists of three main steps according to [START_REF] Frédéric | Multiscale fe2 elastoviscoplastic analysis of composite structures[END_REF]:
(1) A geometrical description and a FE model of the unit cell.
(2) The local constitutive laws expressing the response of each component of the composite within the unit cell.
(3) Scale transition relationships that define the connection between the microscopic and the macroscopic fields (stress and strain).
The scale transition is provided by the concept of homogenization theory, using volume averaging of microscopic quantities of the unit cell, which is solved thanks to periodic boundary conditions. The macroscopic fields (stress and strain) are introduced in a unit cell by using the six additional degrees of freedom (constraint drivers), that are linked with the boundaries through the kinematic equations [Eq. 19]. The macroscopic behavior of a 3D composite structure is computed by considering that the material response of each macroscopic integration point is established from the homogenization of a unit cell that is connected to each macroscopic integration point. Each unit cell contains the local constitutive laws of different phases and the geometrical characteristics of the microstructure.
The FE 2 approach presented here has been developed using an implicit resolution scheme, with the use of a Newton-Raphson algorithm, that solves the non-linear problems at the two scales. At each macroscopic integration point, the macroscopic stress and the macroscopic tangent operator are computed for the calculated macroscopic strain at each time increment, by solving iteratively a FE problem at the microscopic scale.
Concept of transition between scales in FE 2 computations
In the framework of FE 2 modelling the global resolution step is performed at each time increment by solving a local equilibrium problem at each macroscopic integration point. At each step, the microscopic problem is solved by applying the macroscopic strain increment to the unit cell through the periodic boundary conditions. The system of equations in the linearized incremental form is given as follows:
∆σ(x) = C_t(x) : ∆ε(x)   ∀x ∈ V,
div_x(∆σ(x)) = 0   ∀x ∈ V,
∆u_i - ∆u_j = ∆ε · (x_i - x_j)   ∀x ∈ V   (20)
By using the developed user subroutine at the microscopic scale, which contains the non-linear local behavior of the constituents, the microscopic stress, tangent operator and internal state variables V_k are computed at every microscopic point. The macroscopic stress σ is then computed through volume averaging of the microscopic stresses, and the local tangent operators of all microscopic points are utilized to obtain the macroscopic tangent operator C_t by solving six elastic-type loading cases with the elementary strain states described in subsection 2.1.2. The internal state variables and the local stress are saved as initial conditions for the next time increment. Once the macroscopic quantities σ and C_t are computed, the analysis at the macroscopic level is performed and the macroscopic strain increment ∆ε is provided by the Finite Element Analysis Package ABAQUS at every macroscopic point through the global equilibrium resolution. This information is passed to the macroscopic scale by using a user defined constitutive model (denoted here as Meta-UMAT) that represents the behavior of a macroscopic material point and contains the unit cell equations, and hence the process returns to the local problem. The iterative procedure inside the Meta-UMAT is depicted in Fig. 5. The loop is repeated until numerical convergence is achieved for both the micro and macro-scale numerical problems. After convergence, the analysis proceeds to the next time step. Both the Meta-UMAT and the structural analysis at the macroscopic level define the FE 2 approach.
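The following Python lines give a minimal sketch of this macro-point update (not the actual Meta-UMAT, which is a user subroutine driving ABAQUS): the unit-cell analysis is replaced by a stub, and the macroscopic tangent is estimated here by finite-difference probing of the six elementary strain states; all names and material numbers are assumptions of this sketch.

import numpy as np

def solve_unit_cell(macro_strain):
    """Stub standing in for the periodic unit-cell FE analysis; returns the volume-averaged
    stress for a macroscopic strain [e11, e22, e33, 2e12, 2e13, 2e23] (fictitious isotropic law)."""
    E, nu = 3000.0, 0.3
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[np.arange(3), np.arange(3)] += 2 * mu
    C[np.arange(3, 6), np.arange(3, 6)] = mu      # shear rows act on the 2*eps components
    return C @ macro_strain

def macro_point_update(macro_strain, h=1e-8):
    """Return (macroscopic stress, macroscopic tangent) at one macroscopic integration point."""
    sigma = solve_unit_cell(macro_strain)
    C_t = np.zeros((6, 6))
    for k in range(6):                            # six elementary strain perturbations
        e = np.zeros(6); e[k] = h
        C_t[:, k] = (solve_unit_cell(macro_strain + e) - sigma) / h
    return sigma, C_t

sigma, C_t = macro_point_update(np.array([1e-3, 0, 0, 0, 0, 0]))
print(sigma)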
Algorithm of FE 2 and parallel calculation
The algorithm of the FE 2 computational strategy for the non-linear case in ABAQUS/Standard is presented in Fig. 6.
As shown in Figures 5 and 6, the macroscopic problem is solved at each increment in a linearized manner, considering the homogenized tangent modulus C_t. The elastic prediction - inelastic correction is performed at the scale of the constituent laws (Micro-UMAT) using the well-known "return mapping algorithm - convex cutting plane" scheme [START_REF] Simo | Computational Inelasticity[END_REF].
The aim of the FE 2 approach is to perform structural numerical simulations, thus the reduction of the computational time is of utmost importance. Since the FE 2 homogenization requires very costly computations, parallel calculation procedures for running the analysis on multiple CPUs are unavoidable.
Parallel implementation of the FE 2 code in ABAQUS/Standard
It is known that the FE 2 computation is expensive in terms of CPU time, due to the transition between the two scales and to the number of degrees of freedom of the microscopic and macroscopic models. To reduce this computational time, a parallel implementation of the FE 2 procedure is set up in ABAQUS/Standard. All the Finite Element Analyses of the unit cells within a single macroscopic element (one per integration point) are sent to a single computation node (a set of processors) and are solved iteratively. The computations that correspond to different macroscopic elements are solved on different computation nodes. Thus, theoretically, the parallelization can be performed simultaneously on every element. In practice, the parallel computation is limited by the number of available calculation nodes. Note that every microscopic Finite Element Analysis can also be computed in parallel within its computation node if it possesses several processors (which is often the case); this parallelization process is governed by the Finite Element Analysis Package ABAQUS. In practice, the Meta-UMAT calls an appropriate python script that solves the local problem (including the computation of the macroscopic tangent modulus) at each macroscopic integration point, with the use of the microscopic UMAT, which contains the local non-linear behavior of the constituents (Fig. 7). Afterwards, the global solver of ABAQUS checks that all calculations at the different processors are completed before proceeding to the resolution of the macroscopic problem, before passing to the next time increment or the next macroscopic iteration.
[Flow chart of Fig. 6 - initialisation: apply the PBCs on the unit cell and compute the initial macroscopic tangent modulus C_t; macro-level: solve the macro problem and get the macroscopic strain increment ∆ε_{n+1}; micro-level: python script for the micro problem, compute the local fields σ, ε, C_t, compute the macroscopic stress and the macroscopic tangent modulus; global convergence check (if not converged, new macroscopic iteration; if converged, next increment n = n+1 and update of all fields V_k, σ, ε, C_t).]
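A minimal Python sketch of this dispatching idea is given below (illustrative only; the real implementation launches ABAQUS unit-cell jobs through python scripts, whereas a dummy solver and made-up numbers are used here).

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_unit_cell(task):
    """Dummy unit-cell solve for one macroscopic integration point."""
    point_id, macro_strain = task
    sigma = 2000.0 * np.asarray(macro_strain)     # placeholder constitutive response
    return point_id, sigma

def solve_all_points(macro_strains, n_workers=4):
    """One unit-cell solve per macroscopic integration point, spread over worker processes."""
    tasks = list(enumerate(macro_strains))
    results = {}
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for point_id, sigma in pool.map(solve_unit_cell, tasks):
            results[point_id] = sigma
    return results      # the global solver resumes once every point has returned

if __name__ == "__main__":
    strains = [np.array([1e-3 * (k + 1), 0, 0, 0, 0, 0]) for k in range(8)]
    print(solve_all_points(strains)[0])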
Implementation of the microscopic problem in ABAQUS
With regard to the microscopic problem, as mentioned previously, the Meta-UMAT executes a properly designed python script in each macroscopic integration point of the composite structure with the first macroscopic strain increment given by ABAQUS.
The periodic boundary conditions (PBCs) and the macroscopic strain are applied on the unit cell by means of the python script at each time increment, since this last information is given at each integration point from the prediction of the strain increment that should satisfy the global equilibrium. The script also calls the solver ABAQUS to solve the microscopic Finite Element Analyses, which utilize the microscopic user subroutines that contains the local constitutive laws of the constituents, in order to obtain the microscopic response through a return mapping iterative process.
Once the local equilibrium is achieved, the local response (σ and C_t) is computed. Then, the macroscopic stress is recovered as the reaction force on the constraint drivers divided by the unit cell volume (section 3.1). The macroscopic tangent modulus is calculated by mapping the local tangent moduli on the unit cell through the six elementary strain states. Through the python script, the macroscopic quantities (σ and C_t) are calculated and transferred to the Meta-UMAT. At this point, the global equilibrium is checked; if convergence is reached, we proceed to the next time increment n+1.
Applications and Capabilities of the FE 2 framework
In order to validate the two-scale computational approach within the framework of 3D non-linear composite structures, two test cases have been addressed. The first one is a periodic multilayer composite structure with non-linear, elastoplastic phases. It has been demonstrated that, with the use of an incremental linearized temporal integration approach, there exists a semi-analytical solution for this problem [START_REF] Chatzigeorgiou | Computational micro to macro transitions for shape memory alloy composites using periodic homogenization[END_REF]. This test case is utilized as a validation of the implementation of the FE 2 framework. The second one is the simulation of a three-dimensional composite structure with a two-phase microstructure: a matrix phase that exhibits a coupled elastoviscoplastic response with ductile damage, reinforced by short glass fibres. The results of such a multi-scale simulation are compared with a legacy modelling approach, i.e. the use of an incremental Mori-Tanaka scheme [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF].
Comparison with semi-analytical homogenization method for elastoplastic multilayer composites
The multi-scale structure simulated is presented in Fig. 8 and is composed at the microscopic scale of a periodic stack of two different layers, one with an elastic response (superscript e) and the second one with an elastic-plastic response (superscript p). The volume fraction of the two phases is equal, i.e. c e = c p = 0.5. The macroscopic shape of the structure is a cuboid. For the elastic-plastic phase, the plastic yield criterion is given by:
f_p(σ, p) = eq(σ) - R(p) - R_0 ≤ 0.   (21)
where eq(σ) is the equivalent Von Mises stress and R_0 is the yield threshold. The hardening function R(p) is chosen in the form of a power law [START_REF] Lemaitre | Mechanics of solid materials[END_REF]:
R(p) = K p^n   (22)
where K and n are material parameters. p is the accumulated plastic strain. The material parameters of the two phases are given in Tab. 3. As discussed in the Section 3.1, periodic boundary conditions are applied to the unit cell of the multilayer material. The macroscopic boundary conditions imposed correspond to a pure shear loading and are such that the relationship between the displacement at the boundary is u 0 = ε 0 n, n being the outward normal of the surface and all the components of the tensor ε 0 are zero except ε 12 = ε 21 , see Fig. 9-b. Note that under such conditions, the numerical results of the two Finite Element Analyses should be mesh-independent, since homogeneous fields are considered in all the phases. The results of the two approaches, the FE 2 and the semi-analytical are identical (Fig. 10), which demonstrates the capability of the computational method to predict the response of 3D non-linear multi-scale composite structures.
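A minimal one-dimensional Python sketch of the elastic prediction / plastic correction with this power-law hardening is given below (illustrative only, not the paper's 3D model); it uses the elastoplastic-phase parameters of Tab. 3 and a bisection to solve the consistency condition.

E, R0, K, n = 2000.0, 10.0, 60.0, 0.15

def return_mapping(eps, eps_p, p):
    """One strain-driven update in 1D: returns (stress, plastic strain, accumulated plastic strain)."""
    sig_tr = E * (eps - eps_p)                        # elastic prediction
    f_tr = abs(sig_tr) - (R0 + K * p**n)              # yield function, Eq. (21) in 1D
    if f_tr <= 0.0:
        return sig_tr, eps_p, p                       # elastic step
    lo, hi = 0.0, abs(sig_tr) / E                     # bisection on the plastic increment
    for _ in range(80):
        dp = 0.5 * (lo + hi)
        g = abs(sig_tr) - E * dp - R0 - K * (p + dp)**n
        lo, hi = (dp, hi) if g > 0.0 else (lo, dp)
    sign = 1.0 if sig_tr >= 0.0 else -1.0
    return sig_tr - E * dp * sign, eps_p + dp * sign, p + dp

eps_p, p = 0.0, 0.0
for eps in [0.002 * k for k in range(1, 26)]:         # monotonic loading ramp
    sig, eps_p, p = return_mapping(eps, eps_p, p)
print(round(sig, 3), round(p, 4))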
3D structure (Meuwissen) with short fibre reinforced composite
To demonstrate the capabilities of the FE 2 approach to identify the overall behavior of 3D composite structures close to parts that are commonly manufactured, the second test case is performed on a structure where heterogeneous strain and stress fields are observed under a tensile load. The composite material is considered as an elastoviscoplastic polymer matrix with ductile damage, reinforced by aligned glass short fibres arranged in a periodic hexagonal array (Fig. 11). The volume fractions of the matrix and the fibres are V_m = 0.925 and V_f = 0.075 respectively, while the aspect ratio for the elliptic fibre is (4, 1, 1). The fibre elastic properties are the following: a Young's modulus E_f = 72000 MPa and a Poisson's ratio ν_f = 0.26. The material properties of the matrix phase are listed in Tab. 4. It should be mentioned that these material parameters are motivated by the work of [START_REF] Praud | Phenomenological multi-mechanisms constitutive modelling for thermoplastic polymers, implicit implementation and experimental validation[END_REF], but they do not consider the viscoelastic response, which is taken into account in that article. Thus, the material properties are related to viscoplastic behavior coupled to damage in polymeric media. The structure presented in Fig. 12-a is clamped at the left side and subjected to the loading path of Fig. 12-b at the right side. The displacement-controlled path consists in three loading steps with different velocities (u^(1)_x = 1 mm.s^-1, u^(2)_x = 0.2 mm.s^-1, u^(3)_x = 0.8 mm.s^-1) followed by an unloading stage at a displacement rate of u^(4)_x = 2 mm.s^-1. The results of the full-field FE 2 method are compared with those obtained by using the incremental mean-field Mori-Tanaka method. This method has been widely utilized for the simulation of composites [START_REF] Doghri | Homogenization of two-phase elasto-plastic composite materials and structures: Study of tangent operators, cyclic plasticity and numerical algorithms[END_REF] as well as smart structures [START_REF] Piotrowski | Modeling of niobium precipitates effect on the ni47ti44nb9 shape memory alloy behavior[END_REF]. Such a homogenization scheme is, however, considered valid only in specific cases [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF], and some specific corrections might be required [START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF][START_REF] Brassart | Homogenization of elasto-plastic composites coupled with a nonlinear finite element analysis of the equivalent inclusion problem[END_REF]. Since the proposed corrections are not unique and depend on the type of composite, the regular incremental method is employed, where the linearized problem is written in terms of the anisotropic algorithmic tangent modulus of the non-linear phases [START_REF] Lagoudas | Elastoplastic behavior of metal matrix composites based on incremental plasticity and the mori-tanaka averaging scheme[END_REF]. The advantage of the Mori-Tanaka scheme lies in its computational efficiency, since it is a semi-analytical method and accounts for the material non-linearities only in an average sense and not at every local microscopic point in the unit cell. The overall load-displacement response computed using the FE 2 approach is shown in Fig. 13 and compared to the global response predicted by the mean-field Mori-Tanaka approach. As expected, both approaches predict comparable responses, notably in the elastic regime. However, in the viscoplastic regime, the mean-field based simulation does not capture well the strain rate changes of the applied loading path and provides a stiffer response than the full-field FE 2 . This aspect is known and occurs when one phase exhibits a non-linear behavior. Similar observations have been reported in the literature, especially when the matrix phase behaves as a viscoelastic-viscoplastic medium [START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF][START_REF] Chaboche | On the capabilities of mean-field approaches for the description of plasticity in metal matrix composites[END_REF].
The authors proposed specific numerical formulations to address this limit of mean-field based methods. Fig. 14 shows the stress-strain curves at the macroscopic point A (Fig. 15-a). Due to the semi-analytical form of the Mori-Tanaka method, the computations are faster than with the FE 2 method, but a smaller time increment is required. The results indicate that both approaches describe the change of loading rate caused by the viscous behavior of the polymer matrix, but the Mori-Tanaka response clearly misrepresents this phenomenon: it is more rigid, with a considerable loss of plasticity as expected, compared to the FE 2 .
The results illustrate that the response of the composite is highly influenced by the presence of the matrix, exhibiting both viscoplastic response through relaxation phenomena, as well as stiffness reduction during unloading due to the ductile damage.
It is worth noticing that the inelastic characteristics of the different phases are mainly taken into account in the microscale and, accordingly, the unit cell is adequately meshed (6857 elements).
The authors have performed several analyses at different meshes of the macroscopic structure and have confirmed that the chosen meshing of 100 elements was sufficient for the purposes of the manuscript.
At a characteristic critical point of the structure (centre of one notch), the deformed macro-scale structure and the microscopic stress response (component 11) of the unit cell that represents the macroscopic integration point A are shown in Fig. 15. It is clear that at such a critical material point, the adopted incremental Mori-Tanaka scheme does not predict the local response with sufficient accuracy to be able to utilize such results for the computation of damage evolution or fatigue life predictions, which are unavoidable in most load-bearing applications of composite structures. Even if mesh convergence is difficult to reach to obtain exact results, the FE 2 framework can provide a standard of predictability that is much higher than that of mean-field methods when the simulated composite has a matrix with a strongly non-linear response. It deserves to be mentioned that the periodic homogenization gives excellent results for 3D structures, and the numerical accuracy depends on that of the FE calculations. However, when addressing plate or shell structures, the periodic homogenization requires proper modifications, as described in [START_REF] Kalamkarov | Analysis, design and optimization of composite structures[END_REF], due to the loss of periodicity in the thickness direction (out of plane). This results in a less accurate prediction of the out-of-plane Poisson ratio. Nevertheless, the out-of-plane periodicity can be reasonably assumed when the microstructure contains a high number of fibres or layers in the thickness direction.
Complex 3D structure with corner shape
In this section, a second 3D composite structure is simulated in order to illustrate the capability and the flexibility of the approach, when more complex boundary conditions are applied to the macroscopic structure. The modelled structure consists of a 3D part having a corner shape (Fig. 16-a). It is made of a thermoplastic aligned short fibre reinforced composite in which the matrix and reinforcement phases exhibit the same behavior as in Section 4.2. The structure is clamped at the bottom side and subjected to a normal uniform displacement path at left side (Fig. 16-b).
The displacement-controlled path consists in two loading steps with different displacement rates (u^(1)_x = 2.1875 mm.s^-1, u^(2)_x = 0.15625 mm.s^-1) and an unloading step at a displacement rate of u^(3)_x = 0.9375 mm.s^-1. In Figs. 17 and 18, the whole response of the composite in terms of macroscopic stress vs strain is depicted at two distinct points A and B (Fig. 19-c). The approach is able to reproduce the effect of the microstructure on the overall response of the composite, as at the most stressed point shown in Fig. 19. Indeed, at the clamped part (point A), it is clear that the structure is subjected to a tensile load in the direction 22 and a shear load in the direction 12.
These results are attributable primarily to the macroscopic boundary conditions. Furthermore, for point B, a high stress value in the direction of loading is noticed. Fig. 19-c shows the stress response (component 11) of the macroscopic structure and the resulting microscopic stress in the two unit cells situated at two different macroscopic integration points A and B (Figs. 19-a and 19-b respectively). The response of the composite is highly affected by the matrix behavior through the relaxation phenomena caused by the change of the loading rate. The apparent stiffness reduction during the unloading, caused by the ductile damage in the matrix, is clearly observed. Regarding the parallelization procedure, with the same number of increments (42 increments), the computation becomes 18 times faster than the non-parallel solution. The actual computational time of the analysis performed on 18 processors was approximately 72 h for a macroscopic structure containing 90 elements of type C3D8 with 6857 microscopic elements of type C3D4.
Conclusions and further work
This work presents a non-linear three-dimensional two-scale finite element (FE 2 ) framework fully integrated in the Finite Element Analysis Package ABAQUS/Standard, using parallel computation. The main advantage of the method is that it does not require an analytical form for the constitutive law at the macro-scale, while accounting for the microstructural effects and the local behaviors. It can integrate any kind of periodic microstructure with any type of non-linear behavior of the reinforcement (fibres and/or particles) and the matrix (plastic, viscoelastic, viscoplastic and damage).
The multi-scale strategy has been tested on three independent numerical examples. In the first example, a 3D multilayer composite structure with elastoplastic phases is simulated and compared with a semi-analytical solution, to validate the numerical implementation. In the second example, a short glass fibre reinforced composite with an elastoviscoplastic-damageable matrix under complex loading is examined through the FE 2 strategy and the results are compared to those obtained by the Mori-Tanaka method. The obtained responses were in agreement with those presented in the literature in similar cases, and highlight the importance of utilizing a full-field method for a generic modelling strategy with high predictability capabilities. In the third example, a complex 3D composite structure with a corner shape is simulated, in which the microstructure is made of an elastoviscoplastic matrix with ductile damage reinforced by short glass fibres. The capability of the approach to reproduce the effect of such a microstructure on the macrostructure response at each macroscopic integration point has been demonstrated. The response of the structure clearly highlights creep and relaxation phenomena, which are characteristic of rate dependent responses. This viscous behavior and the stiffness reduction observed during unloading are induced by the viscoplastic nature of the polymer matrix. It is worth noticing that for composites where the matrix is a viscoplastic material, the Mori-Tanaka method under proper modifications can provide quite accurate results [START_REF] Mercier | Homogenization of elastic-viscoplastic heterogeneous materials: Self-consistent and mori-tanaka schemes[END_REF][START_REF] Miled | Micromechanical modeling of coupled viscoelastic-viscoplastic composites based on an incrementally affine formulation[END_REF] compared to the full-field based approach.
A last advantage of this approach is that it can be extended to predict the overall fully coupled thermomechanical response of 3D composite structures [START_REF] Berthelsen | Computational homogenisation for thermoviscoplasticity: application to thermally sprayed coatings[END_REF][START_REF] Chatzigeorgiou | Computational micro to macro transitions for shape memory alloy composites using periodic homogenization[END_REF], with more complex mechanisms between fibres and matrix, such as interfacial damage. Such fully-coupled analyses on multiscale structures should be of high interest for industrial applications that are usually computed with commercial finite element analysis packages.
Figure 1: Schematic representation of the computational homogenization.
Figure 2: Definition of the displacement field as the sum of an affine part and a periodic fluctuation.
Figure 3: Rheological scheme of the viscoplastic behavior and ductile damage [11].
Figure 4: Connection of the constraint drivers with the unit cell.
Figure 5: Meta-UMAT for the overall response computation of the composite using the FE 2 approach at time increment n+1.
Figure 6: Flow chart of the two-scale FE 2 algorithm in ABAQUS/Standard for the non-linear case.
Figure 7: Parallelization steps of the FE 2 code.
Figure 8: Multilayer composite structure with the microstructure associated with each macroscopic integration point.
Figure 9: Multilayer composite structure under shear loading path.
Figure 10: Comparison of the numerical result of the FE 2 approach with the semi-analytical solution on a multilayer with elastoplastic phases, in terms of macroscopic stress-strain response.
Figure 11: Composite microstructure. (a) Mesh of the entire unit cell. (b) Short fibre reinforcement.
Figure 12: Tensile and compression test on the 3D Meuwissen test tube. (a) Mesh of the entire 3D composite structure. (b) Applied loading path.
Figure 13: The overall load-displacement response of the structure in the direction 11. Comparison between the FE 2 and the Mori-Tanaka solution.
Figure 14:
Figure 15: FE 2 solution with ABAQUS/Standard (component 11). (a) Macroscopic stress field of the composite structure. (b) Microscopic stress field of the microstructure at point A.
Figure 16: Tensile and compression test on the 3D composite structure with corner. (a) Macroscopic mesh of the 3D composite structure. (b) Applied loading path.
Figure 17: Macroscopic response of the composite at point A in terms of stress-strain in the directions 11, 22, 33 and shear 12.
Figure 18: Macroscopic response of the composite at point B in terms of stress-strain in the directions 11, 22, 33 and shear 12.
Figure 19: FE 2 solution with ABAQUS/Standard (component 11). (a) Microscopic stress field of the microstructure at point A. (b) Microscopic stress field of the microstructure at point B. (c) Macroscopic stress field of the 3D composite structure (component 11).
Table 1: Macroscopic and microscopic scale transition [START_REF] Praud | Modélisation multi-échelle des composites tissés à matrice thermoplastique sous chargements cycliques non proportionnels[END_REF].

Table 2: State and evolution laws.
Observable state variable | Associated variable

Table 3: Material parameters for the two phases.
Elastic-plastic phase:
Parameter   value    unit
E_p         2000     MPa
ν_p         0.3      -
R_0         10       MPa
K           60.0     MPa
n           0.15     -
Elastic phase:
E_e         6000     MPa
ν_e         0.2      -

Table 4: Material parameters for the polymer matrix.
Parameter   value    unit
E_m         1680     MPa
ν_m         0.3      -
R_0         10       MPa
K           365.0    MPa
n           0.39     -
H           180.0    MPa.s
m           0.3      -
S           6.0      MPa
β           -1.70    -
Paulin Jacquot (paulin.jacquot@polytechnique.edu), Cheng Wan (cheng.wan.2005@polytechnique.org)
Routing Game on Parallel Networks: the Convergence of Atomic to Nonatomic
We consider an instance of a nonatomic routing game. We assume that the network is parallel, that is, constituted of only two nodes, an origin and a destination. We consider infinitesimal players that have a symmetric network cost, but are heterogeneous through their set of feasible strategies and their individual utilities. We show that if an atomic routing game instance is correctly defined to approximate the nonatomic instance, then an atomic Nash Equilibrium will approximate the nonatomic Wardrop Equilibrium. We give explicit bounds on the distance between the equilibria according to the parameters of the atomic instance. This approximation gives a method to compute the Wardrop equilibrium at an arbitrary precision.
Introduction
Motivation. Network routing games were first considered by Rosenthal [START_REF] Rosenthal | The network equilibrium problem in integers[END_REF] in their "atomic unsplittable" version, where a finite set of players share a network subject to congestion. Routing games found later on many practical applications not only in transport [START_REF] Marcotte | Equilibria with infinitely many differentiated classes of customers[END_REF][START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF], but also in communications [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF], distributed computing [START_REF] Altman | Nash equilibria in load balancing in distributed computer systems[END_REF] or energy [START_REF] Atzeni | Demand-side management via distributed energy generation and storage optimization[END_REF]. The different models studied are of three main categories: nonatomic games (where there is a continuum of infinitesimal players), atomic unsplittable games (with a finite number of players, each one choosing a path to her destination), and atomic splittable games (where there is a finite number of players, each one choosing how to split her weight on the set of available paths).
The concept of equilibrium is central in game theory, for it corresponds to a "stable" situation, where no player has interest to deviate. With a finite number of players-an atomic unsplittable game-it is captured by the concept of Nash Equilibrium [START_REF] Nash | Equilibrium points in n-person games[END_REF]. With an infinite number of infinitesimal players-the nonatomic case-the problem is different: deviations from a finite number of players have no impact, which led Wardrop to its definition of equilibria for nonatomic games [START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF]. A typical illustration of the fundamental difference between the nonatomic and atomic splittable routing games is the existence of an exact potential function in the former case, as opposed to the latter [START_REF] Nisan | Algorithmic game theory[END_REF]. However, when one considers the limit game of an atomic splittable game where players become infinitely many, one obtains a nonatomic instance with infinitesimal players, and expects a relationship between the atomic splittable Nash equilibria and the Wardrop equilibrium of the limit nonatomic game. This is the question we address in this paper.
Main results. We propose a quantitative analysis of the link between a nonatomic routing game and a family of related atomic splittable routing games, in which the number of players grows. A novelty from the existing literature is that, for nonatomic instances, we consider a very general setting where players in the continuum [0, 1] have specific convex strategy-sets, the profile of which being given as a mapping from [0, 1] to R T . In addition to the conventional network (congestion) cost, we consider individual utility function which is also heterogeneous among the continuum of players. For a nonatomic game of this form, we formulate the notion of an atomic splittable approximating sequence, composed of instances of atomic splittable games closer and closer to the nonatomic instance. Our main results state the convergence of Nash equilibria (NE) associated to an approximating sequence to the Wardrop equilibrium of the nonatomic instance. In particular, Thm. 11 gives the convergence of aggregate NE flows to the aggregate WE flow in R T in the case of convex and strictly increasing price (or congestion cost) functions without individual utility; Thm. 14 states the convergence of NE to the Wardrop equilibrium in ((R T ) [0,1] , . 2 ) in the case of player-specific strongly concave utility functions. For each result we provide an upper bound on the convergence rate, given from the atomic splittable instances parameters. An implication of these new results concerns the computation of an equilibrium of a nonatomic instance. Although computing an NE is a hard problem in general [START_REF] Koutsoupias | Worst-case equilibria[END_REF], there exists several algorithms to compute an NE through its formulation with finite-dimensional variational inequalities [START_REF] Facchinei | Finite-dimensional variational inequalities and complementarity problems[END_REF]. For a Wardrop Equilibrium, a similar formulation with infinite-dimensional variational inequalities can be written, but finding a solution is much harder.
Related work. Some results have already been given to quantify the relation between Nash and Wardrop equilibria. Haurie and Marcotte [START_REF] Haurie | On the relationship between nashcournot and wardrop equilibria[END_REF] show that in a sequence of atomic splittable games where atomic splittable players replace themselves by smaller and smaller equal-size players with constant total weight, the Nash equilibria converge to the Wardrop equilibrium of a nonatomic game. Their proof is based on the convergence of variational inequalities corresponding to the sequence of Nash equilibria, a technique similar to the one used in this paper. Wan [START_REF] Wan | Coalitions in nonatomic network congestion games[END_REF] generalizes this result to composite games where nonatomic players and atomic splittable players coexist, by allowing the atomic players to replace themselves by players with heterogeneous sizes.
In [START_REF] Gentile | Nash and wardrop equilibria in aggregative games with coupling constraints[END_REF], the authors consider an aggregative game with linear coupling constraints (generalized Nash Equilibria) and show that the Nash Variational equilibrium can be approximated with the Wardrop Variational equilibrium. However, they consider a Wardrop-type equilibrium for a finite number of players: an atomic player considers that her action has no impact on the aggregated profile. They do not study the relation between atomic and nonatomic equilibria, as done in this paper. Finally, Milchtaich [START_REF] Milchtaich | Generic uniqueness of equilibrium in large crowding games[END_REF] studies atomic unsplittable and nonatomic crowding games, where players are of equal weight and each player's payoff depends on her own action and on the number of players choosing the same action. He shows that, if each atomic unsplittable player in an n-person finite game is replaced by m identical replicas with constant total weight, the equilibria generically converge to the unique equilibrium of the corresponding nonatomic game as m goes to infinity. Last, Marcotte and Zhu [START_REF] Marcotte | Equilibria with infinitely many differentiated classes of customers[END_REF] consider nonatomic players with continuous types (leading to a characterization of the Wardrop equilibrium as a infinite-dimensional variational inequality) and studied the equilibrium in an aggregative game with an infinity of nonatomic players, differentiated through a linear parameter in their cost function and their feasibility sets assumed to be convex polyhedra.
Structure. The remaining of the paper is organized as follows: in Sec. 2, we give the definitions of atomic splittable and nonatomic routing games. We recall the associated concepts of Nash and Wardrop equilibria, their characterization via variational inequalities, and sufficient conditions of existence. Then, in Sec. 3, we give the definition of an approximating sequence of a nonatomic game, and we give our two main theorems on the convergence of the sequence of Nash equilibria to a Wardrop equilibrium of the nonatomic game. Last, in Sec. 4 we provide a numerical example of an approximation of a particular nonatomic routing game.
Notation. We use a bold font to denote vectors (e.g. x) as opposed to scalars (e.g. x).
2 Splittable Routing: Atomic and Nonatomic
Atomic Splittable Routing Game
An atomic splittable routing game on parallel arcs is defined with a network constituted of a finite number of parallel links (cf Fig. 1) on which players can load some weight. Each "link" can be thought as a road, a communication channel or a time slot on which each user can put a load or a task. Associated to each link is a cost or "latency" function that depends only of the total load put on this link.
[Figure 1: parallel network with origin O, destination D and arcs t = 1 (cost c_1), t = 2 (cost c_2), ..., t = T (cost c_T).]
Definition 1. Atomic Splittable Routing Game
An instance G of an atomic splittable routing game is defined by:
• a finite set of players I = {1, . . . , I},
• a finite set of arcs T = {1, . . . , T },
• for each i ∈ I, a feasibility set X_i ⊂ R^T_+,
• for each i ∈ I, a utility function u_i : X_i → R,
• for each t ∈ T , a cost or latency function c t (.) : R → R .
Each atomic player i ∈ I chooses a profile (x i,t ) t∈T in her feasible set X i and minimizes her cost function:
f_i(x_i, x_-i) := Σ_{t∈T} x_{i,t} c_t( Σ_{j∈I} x_{j,t} ) - u_i(x_i),   (1)
composed of the network cost and her utility, where x_-i := (x_j)_{j≠i}. The instance G can be written as the tuple:
G = (I, T , X , c, (u i ) i ) , (2)
where
X := X 1 × • • • × X I and c = (c t ) t∈T .
In the remaining of this paper, the notation G will be used for an instance of an atomic game (Def. 1).
Owing to the network cost structure (1), the aggregated load plays a central role. We denote it by X_t := Σ_{i∈I} x_{i,t} on each arc t, and denote the associated feasibility set by:
X := { X ∈ R^T : ∃x ∈ X s.t. Σ_{i∈I} x_i = X }.   (3)
As seen in ( 1), atomic splittable routing games are particular cases of aggregative games: each player's cost function depends on the actions of the others only through the aggregated profile X.
For technical simplification, we make the following assumptions:
Assumption 1. Convex costs Each cost function (c t ) is differentiable, convex and increasing.
Assumption 2. Compact strategy sets For each i ∈ I, the set X i is assumed to be nonempty, convex and compact.
Assumption 3. Concave utilities Each utility function u i is differentiable and concave.
Note that under Asms. 1 and 3, each function f i is convex in x i .
An example that has drawn a particular attention is the class of atomic splittable routing games considered in [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF]. We add player-specific constraints on individual loads on each link, so that the model becomes the following. Example 1. Each player i has a weight E i to split over T . In this case, X i is given as the simplex:
X_i = { x_i ∈ R^T_+ : Σ_t x_{i,t} = E_i and x̲_{i,t} ≤ x_{i,t} ≤ x̄_{i,t} }. E_i can be the mass of data to be sent over different channels, or an energy to be consumed over a set of time periods [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF]. In energy applications, more complex models include for instance "ramping" constraints r̲_{i,t} ≤ x_{i,t+1} - x_{i,t} ≤ r̄_{i,t}.
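As a small illustrative helper (not taken from the paper), the Euclidean projection onto such a box-constrained simplex, which also corresponds to the projections P_i and Π_θ used repeatedly in Section 3, can be computed by a bisection on the dual variable of the equality constraint; the numbers below are made up.

import numpy as np

def project_box_simplex(y, E, lb, ub, iters=100):
    """Projection of y onto {x : sum(x) = E, lb <= x <= ub} (assumed non-empty)."""
    y, lb, ub = map(np.asarray, (y, lb, ub))
    lo, hi = np.min(y - ub) - 1.0, np.max(y - lb) + 1.0   # bracket the dual variable
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        x = np.clip(y - lam, lb, ub)
        if x.sum() > E:
            lo = lam
        else:
            hi = lam
    return np.clip(y - 0.5 * (lo + hi), lb, ub)

# Example: T = 3 arcs, demand E_i = 2, per-arc bounds [0, 1].
print(project_box_simplex([1.4, 0.2, 0.9], E=2.0, lb=[0, 0, 0], ub=[1, 1, 1]))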
Example 2. An important example of utility function is the distance to a preferred profile y i = (y i,t ) t∈T , that is:
u_i(x_i) = -ω_i ||x_i - y_i||_2^2 = -ω_i Σ_t (x_{i,t} - y_{i,t})^2,   (4)
where ω i > 0 is the value of player i's preference. Another type of utility function which has found many applications is :
u_i(x_i) = -ω_i log( 1 + Σ_t x_{i,t} ),   (5)
which increases with the weight player i can load on T .
Below we recall the central notion of Nash Equilibrium in atomic non-cooperative games.
Definition 2. Nash Equilibrium (NE)
An NE of the atomic game G = (I, X, (f_i)_i) is a profile x̂ ∈ X such that for each player i ∈ I: f_i(x̂_i, x̂_-i) ≤ f_i(x_i, x̂_-i), ∀x_i ∈ X_i.
Proposition 1. Variational Formulation of an NE
Under Asms. 1 to 3, x̂ ∈ X is an NE of G if and only if:
∀x ∈ X, ∀i ∈ I,  ⟨∇_i f_i(x̂_i, x̂_-i), x_i - x̂_i⟩ ≥ 0,   (6)
where ∇_i f_i(x̂_i, x̂_-i) = ∇f_i(·, x̂_-i)|_{·=x̂_i} = ( c_t(X̂_t) + x̂_{i,t} c'_t(X̂_t) )_{t∈T} - ∇u_i(x̂_i). An equivalent condition is: ∀x ∈ X, Σ_{i∈I} ⟨∇_i f_i(x̂_i, x̂_-i), x_i - x̂_i⟩ ≥ 0.
Proof. Since x_i ↦ f_i(x_i, x̂_-i) is convex, (6) is the necessary and sufficient first order condition for x̂_i to be a minimum of f_i(·, x̂_-i).
Def. 1 defines a convex minimization game so that the existence of an NE is a corollary of Rosen's results [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF]:
Theorem 2 (Cor. of [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF]. Existence of an NE If G is an atomic routing congestion game (Def. 1) satisfying Asms. 1 to 3, then there exists an NE of G.
Rosen [START_REF] Rosen | Existence and uniqueness of equilibrium points for concave n-person games[END_REF] gave a uniqueness theorem applying to any convex compact strategy sets, relying on a strong monotonicity condition of the operator (∇ xi f i ) i . For atomic splittable routing games [START_REF] Orda | Competitive routing in multiuser communication networks[END_REF], an NE is not unique in general [START_REF] Bhaskar | Equilibria of atomic flow games are not unique[END_REF]. To our knowledge, for atomic parallel routing games (Def. 1) under Asms. 1 to 3, neither the uniqueness of NE nor a counter example of its uniqueness has been found. However, there are some particular cases where uniqueness has been shown, e.g. [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF] for the case of Ex. 1.
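To make the NE concrete, the following Python sketch computes an NE numerically for the setting of Ex. 1 with two players, two arcs, affine costs c_t(X) = a_t X + b_t and no utilities; the numbers are made up, and the best-response iteration is only a simple solver choice for this illustration (not a method advocated in the paper).

import numpy as np

a = np.array([1.0, 2.0])      # slopes of c_1, c_2
b = np.array([0.0, 1.0])      # intercepts
E = np.array([1.0, 2.0])      # demands of players 1 and 2

def best_response(E_i, other):
    """Player i's optimal load s on arc 1 (the rest, E_i - s, goes on arc 2),
    obtained by minimizing the quadratic cost of Eq. (1) over s in [0, E_i]."""
    s = (2 * a[1] * E_i + a[1] * other[1] + b[1] - a[0] * other[0] - b[0]) / (2 * (a[0] + a[1]))
    return np.clip(s, 0.0, E_i)

x = np.array([[0.5, 0.5], [1.0, 1.0]])          # initial loads x[i, t]
for _ in range(200):                             # sequential best-response iterations
    for i in range(2):
        s = best_response(E[i], x[1 - i])
        x[i] = [s, E[i] - s]
print(np.round(x, 4), np.round(x.sum(axis=0), 4))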
However, as we will see in the convergence theorems of Sec. 3, uniqueness of NE is not necessary to ensure the convergence of NE of a sequence of atomic unsplittable games, as any sequence of NE will converge to the unique Wardrop Equilibrium of the nonatomic game considered.
Infinity of Players: the Nonatomic Framework
If there is an infinity of players, the structure of the game changes: the action of a single player has a negligible impact on the aggregated load on each link. To measure the impact of infinitesimal players, we equip real coordinate spaces R k with the usual Lebesgue measure µ.
The set of players is now represented by a continuum Θ = [0, 1]. Each player is of Lebesgue measure 0.
Definition 3. Nonatomic Routing Game
An instance G of a nonatomic routing game is defined by:
• a continuum of players Θ = [0, 1],
• a finite set of arcs T = {1, . . . , T },
• a point-to-set mapping of feasibility sets X_· : Θ ⇒ R^T_+,
• for each θ ∈ Θ, a utility function u_θ(·) : X_θ → R,
• for each t ∈ T , a cost or latency function c t (.) : R → R.
Each nonatomic player θ chooses a profile x θ = (x θ,t ) t∈T in her feasible set X θ and minimizes her cost function:
F_θ(x_θ, X) := Σ_{t∈T} x_{θ,t} c_t(X_t) - u_θ(x_θ),   (7)
where X_t := ∫_Θ x_{θ,t} dθ denotes the aggregated load. The nonatomic instance G can be written as the tuple:
G = (Θ, T , (X θ ) θ∈Θ , c, (u θ ) θ∈Θ ) . (8)
For the nonatomic case, we need assumptions stronger than Asms. 2 and 3 for the mappings X_· and u_·, given below:
Assumption 4. Nonatomic strategy sets
There exists M > 0 such that, for any θ ∈ Θ, X_θ is convex, compact and X_θ ⊂ B_0(M), where B_0(M) is the ball of radius M centered at the origin. Moreover, the mapping θ → X_θ has a measurable graph Γ_X := {(θ, x) : θ ∈ Θ, x ∈ X_θ} ⊂ R^{T+1}.
Assumption 5. Nonatomic utilities
There exists Γ > 0 s.t. for each θ, u_θ is differentiable, concave and ||∇u_θ||_∞ < Γ. The function Γ_X → R, (θ, x_θ) ↦ u_θ(x_θ), is measurable.
Def. 3 and Asms. 4 and 5 give a very general framework. In many models of nonatomic games that have been considered, players are considered homogeneous or with a finite number of classes [START_REF] Nisan | Algorithmic game theory[END_REF]Chapter 18]. Here, players can be heterogeneous through X θ and u θ . Games with heterogeneous players can find many applications, an example being the nonatomic equivalent of Ex. 1: Example 3. Let θ → E θ be a density function which designates the total demand E θ for each player θ ∈ Θ. Consider the nonatomic splittable routing game with feasibility sets
X_θ := { x_θ ∈ R^T_+ : Σ_t x_{θ,t} = E_θ }.
As in Ex. 1, one can consider some upper bound x θ,t and lower bound x θ,t for each θ ∈ Θ and each t ∈ T , and add the bounding constraints ∀t ∈ T , x θ,t ≤ x θ,t ≤ x θ,t in the definition of X θ .
Heterogeneity of utility functions can also appear in many practical cases: if we consider the case of preferred profiles given in Ex. 2, members of a population can attribute different values to their cost and their preferences.
Since each player is infinitesimal, her action has a negligible impact on the other players' costs. Wardrop [START_REF] Wardrop | Some theoretical aspects of road traffic research[END_REF] extended the notion of equilibrium to the nonatomic case.
Definition 4. Wardrop Equilibrium (WE)
x * ∈ (X θ ) θ is a Wardrop equilibrium of the game G if it is a measurable function from θ to X and for almost all θ ∈ Θ,
F θ (x * θ , X * ) ≤ F θ (x θ , X * ), ∀x θ ∈ X θ ,
where
X* = ∫_{θ∈Θ} x*_θ dθ ∈ R^T.
Proposition 3. Variational formulation of a WE
Under Asms. 1, 4 and 5, x* ∈ X is a WE of G iff for almost all θ ∈ Θ:
⟨c(X*) - ∇u_θ(x*_θ), x_θ - x*_θ⟩ ≥ 0, ∀x_θ ∈ X_θ.   (9)
Proof. Given X * , ( 9) is the necessary and sufficient first order condition for x * θ to be a minimum point of the convex function F θ (., X * ).
According to [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF], the monotonicity of c is sufficient to have the VI characterization of the equilibrium in the nonatomic case, as opposed to the atomic case in [START_REF] Facchinei | Finite-dimensional variational inequalities and complementarity problems[END_REF] where monotonicity and convexity of c are needed.
Theorem 4 (Cor. of Rath, 1992 [16]). Existence of a WE If G is a nonatomic routing congestion game (Def. 3) satisfying Asms. 1, 4 and 5, then G admits a WE.
Proof. The conditions required in [START_REF] Rath | A direct proof of the existence of pure strategy equilibria in games with a continuum of players[END_REF] are satisfied. Note that we only need (c t ) t and (u θ ) θ∈Θ to be continuous functions.
The variational formulation of a WE given in Thm. 3 can be written in the closed form: Theorem 5. Under Asms. 1, 4 and 5, x * ∈ X is a WE of G iff:
∫_{θ∈Θ} ⟨c(X*) - ∇u_θ(x*_θ), x_θ - x*_θ⟩ dθ ≥ 0, ∀x ∈ X.   (10)
Proof. This follows from Thm. 3. If x* ∈ X is a Wardrop equilibrium so that (9) holds for almost all θ ∈ Θ, then (10) follows straightforwardly. Conversely, suppose that x* ∈ X satisfies condition (10) but is not a WE of G. Then there must be a subset S of Θ with strictly positive measure such that for each θ ∈ S, (9) does not hold: for each θ ∈ S, there exists y_θ ∈ X_θ such that ⟨c(X*) - ∇u_θ(x*_θ), y_θ - x*_θ⟩ < 0. For each θ ∈ Θ \ S, let y_θ := x*_θ. Then y = (y_θ)_{θ∈Θ} ∈ X, and
∫_{θ∈Θ} ⟨c(X*) - ∇u_θ(x*_θ), y_θ - x*_θ⟩ dθ = ∫_{θ∈S} ⟨c(X*) - ∇u_θ(x*_θ), y_θ - x*_θ⟩ dθ < 0,
contradicting (10).
Corrolary 6. In the case where u θ ≡ 0 for all θ ∈ Θ, under Asms. 1 and 4,
x* ∈ X is a WE of G iff:
⟨c(X*), X - X*⟩ ≥ 0, ∀X ∈ X.   (11)
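For a concrete illustration (not taken from the paper), consider the setting of Ex. 3 without utilities, where the aggregate feasible set is the simplex {X ≥ 0 : Σ_t X_t = E_tot}; for increasing costs, (11) then amounts to equalizing c_t(X_t) on the arcs that carry load, which the Python sketch below solves by a bisection on the common cost level (affine costs and all numbers are made up).

import numpy as np

a = np.array([1.0, 2.0])                 # affine costs c_t(X) = a_t * X + b_t
b = np.array([0.0, 1.0])
E_tot = 3.0                              # total demand of the continuum of players

def wardrop_aggregate(iters=100):
    lo, hi = b.min(), (a * E_tot + b).max()
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        X = np.maximum((lam - b) / a, 0.0)   # loads putting every used arc at cost lam
        if X.sum() < E_tot:
            lo = lam
        else:
            hi = lam
    return np.maximum((0.5 * (lo + hi) - b) / a, 0.0)

X_star = wardrop_aggregate()
print(np.round(X_star, 4), np.round(a * X_star + b, 4))   # equal costs on both arcs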
From the characterization of the WE in Thm. 5 and Thm. 6, we derive Thms. 7 and 8 that state simple conditions ensuring the uniqueness of WE in G.
Theorem 7. Under Asms. 1, 4 and 5, if u θ is strictly concave for each θ ∈ Θ, then G admits a unique WE.
Proof. Suppose that x ∈ X and y ∈ X are both WE of the game. Let X = ∫_Θ x_θ dθ and Y = ∫_Θ y_θ dθ. Then, according to Theorem 5,
∫_Θ ⟨c(X) - ∇u_θ(x_θ), y_θ - x_θ⟩ dθ ≥ 0   (12)
∫_Θ ⟨c(Y) - ∇u_θ(y_θ), x_θ - y_θ⟩ dθ ≥ 0   (13)
By adding (12) and (13), one has
∫_Θ ⟨c(X) - c(Y) - ∇u_θ(x_θ) + ∇u_θ(y_θ), y_θ - x_θ⟩ dθ ≥ 0
⟹ ⟨c(X) - c(Y), ∫_Θ (y_θ - x_θ) dθ⟩ + ∫_Θ ⟨-∇u_θ(x_θ) + ∇u_θ(y_θ), y_θ - x_θ⟩ dθ ≥ 0
⟹ ⟨c(X) - c(Y), X - Y⟩ + ∫_Θ ⟨-∇u_θ(x_θ) + ∇u_θ(y_θ), x_θ - y_θ⟩ dθ ≤ 0.
Since for each θ, u_θ is strictly concave, ∇u_θ is thus strictly monotone. Therefore, for each θ ∈ Θ, ⟨-∇u_θ(x_θ) + ∇u_θ(y_θ), x_θ - y_θ⟩ ≥ 0 and equality holds if and only if x_θ = y_θ. Besides, c is monotone, hence ⟨c(X) - c(Y), X - Y⟩ ≥ 0. Consequently, ⟨c(X) - c(Y), X - Y⟩ + ∫_Θ ⟨-∇u_θ(x_θ) + ∇u_θ(y_θ), x_θ - y_θ⟩ dθ ≥ 0, and equality holds if and only if for almost all θ ∈ Θ, x_θ = y_θ. (In this case, X = Y.)
Theorem 8. In the case where u θ ≡ 0 for all θ ∈ Θ, under Asms. 1 and 4, if c = (c t ) T t=1 : [0, M ] T → R T is a strictly monotone operator, then all the WE of G have the same aggregate profile X * ∈ X .
Proof. Suppose that x ∈ X and y ∈ X are both WE of the game. Let X = ∫_Θ x_θ dθ and Y = ∫_Θ y_θ dθ. Then, according to Corollary 6,
⟨c(X), Y - X⟩ ≥ 0   (14)
⟨c(Y), X - Y⟩ ≥ 0   (15)
By adding (14) and (15), one has
⟨c(X) - c(Y), Y - X⟩ ≥ 0.
Since c is strictly monotone, ⟨c(X) - c(Y), X - Y⟩ ≥ 0 and equality holds if and only if X = Y. Consequently, X = Y.
Remark 1. If for each t ∈ T , c t (.) is (strictly) increasing, then c is a (strictly) monotone operator from [0, M ] T → R T .
One expects that, when the number of players grows very large in an atomic splittable game, the game gets close to a nonatomic game in some sense. We confirm this intuition by showing that, considering a sequence of equilibria of approximating atomic games of a nonatomic instance, the sequence will converge to an equilibrium of the nonatomic instance.
Approximating Nonatomic Games
To approximate the nonatomic game G, the idea consists in finding a sequence of atomic games (G (ν) ) with an increasing number of players, each player representing a "class" of nonatomic players, similar in their parameters.
As the players θ ∈ Θ are differentiated through X θ and u θ , we need to formulate the convergence of feasibility sets and utilities of atomic instances to the nonatomic parameters.
Approximating the nonatomic instance
Definition 5. Atomic Approximating Sequence (AAS)
A sequence of atomic games G^(ν) = (I^(ν), T, X^(ν), c, (u_i^(ν))_i) is an approximating sequence (AAS) for the nonatomic instance G = (Θ, T, (X_θ)_θ, c, (u_θ)_θ) if for each ν ∈ N, there exists a partition (Θ_i^(ν))_{i∈I^(ν)} of the set Θ, of cardinal I^(ν), such that:
• I^(ν) → +∞,
• µ^(ν) := max_{i∈I^(ν)} µ_i^(ν) → 0, where µ_i^(ν) := µ(Θ_i^(ν)) is the Lebesgue measure of the subset Θ_i^(ν),
• δ^(ν) := max_{i∈I^(ν)} δ_i^(ν) → 0, where δ_i^(ν) is the Hausdorff distance (denoted by d_H) between the nonatomic feasibility sets and the scaled atomic feasibility set:
δ_i^(ν) := max_{θ∈Θ_i^(ν)} d_H( X_θ , (1/µ_i^(ν)) X_i^(ν) ),   (16)
• d^(ν) := max_{i∈I^(ν)} d_i^(ν) → 0, where d_i^(ν) is the L^∞-distance (in B_0(M) → R) between the gradient of the nonatomic utility functions and the scaled atomic utility functions:
d_i^(ν) := max_{θ∈Θ_i^(ν)} max_{x∈B_0(M)} || ∇u_i^(ν)( µ_i^(ν) x ) - ∇u_θ(x) ||_2 .   (17)
From Def. 5 it is not trivial to build an AAS of a given nonatomic game G, one can even be unsure that such a sequence exists. However, we will give practical examples in Secs. 3.4.1 and 3.4.2.
A direct result from the assumptions in Def. 5 is that the players become infinitesimal, as stated in Thm. 9.
Lemma 9. If (G (ν) ) ν is an AAS of a nonatomic instance G, then considering the maximal diameter M of X θ , we have:
∀i ∈ I^(ν), ∀x_i ∈ X_i^(ν),  ||x_i||_2 ≤ µ_i^(ν) (M + δ_i^(ν)).   (18)
Proof. Let x i ∈ X (ν) i . Let θ ∈ Θ (ν) i
and denote by P X θ the projection on X θ . By definition of δ (ν) i , we get:
|| x_i/µ_i^(ν) - P_{X_θ}( x_i/µ_i^(ν) ) ||_2 ≤ δ_i^(ν)   (19)
⟺ ||x_i||_2 ≤ µ_i^(ν) ( δ_i^(ν) + || P_{X_θ}( x_i/µ_i^(ν) ) ||_2 ) ≤ µ_i^(ν) ( δ_i^(ν) + M ).   (20)
Lemma 10. If (G^(ν))_ν is an AAS of a nonatomic instance G, then the Hausdorff distance between the aggregated sets X = ∫_Θ X_θ dθ and X^(ν) = Σ_{i∈I^(ν)} X_i^(ν) is bounded by:
d_H( X^(ν), X ) ≤ δ^(ν).   (21)
Proof. Let (x_θ)_θ ∈ X be a nonatomic profile. Let P_i denote the Euclidean projection on X_i^(ν) for i ∈ I^(ν) and consider y_i := P_i( ∫_{Θ_i^(ν)} x_θ dθ ) ∈ X_i^(ν). From (16) we have:
|| ∫_Θ x_θ dθ - Σ_{i∈I^(ν)} y_i ||_2 = || Σ_{i∈I^(ν)} ( ∫_{Θ_i^(ν)} x_θ dθ - y_i ) ||_2   (22)
= || Σ_{i∈I^(ν)} ∫_{Θ_i^(ν)} ( x_θ - y_i/µ_i^(ν) ) dθ ||_2   (23)
≤ Σ_{i∈I^(ν)} ∫_{Θ_i^(ν)} || x_θ - y_i/µ_i^(ν) ||_2 dθ   (24)
≤ Σ_{i∈I^(ν)} ∫_{Θ_i^(ν)} δ_i^(ν) dθ = Σ_{i∈I^(ν)} µ_i^(ν) δ_i^(ν) ≤ δ^(ν),   (25)
which shows that d(X, X^(ν)) ≤ δ^(ν) for all X ∈ X. On the other hand, if Σ_{i∈I^(ν)} x_i ∈ X^(ν), then let us denote by Π_θ the Euclidean projection on X_θ for θ ∈ Θ, and y_θ = Π_θ( x_i/µ_i^(ν) ) ∈ X_θ for θ ∈ Θ_i^(ν). Then we have, for all θ ∈ Θ_i^(ν), || x_i/µ_i^(ν) - y_θ ||_2 ≤ δ_i^(ν), and we get:
|| Σ_{i∈I^(ν)} x_i - ∫_Θ y_θ dθ ||_2 ≤ || Σ_{i∈I^(ν)} ∫_{Θ_i^(ν)} ( x_i/µ_i^(ν) - y_θ ) dθ ||_2   (26)
≤ Σ_{i∈I^(ν)} ∫_{Θ_i^(ν)} || x_i/µ_i^(ν) - y_θ ||_2 dθ   (27)
≤ Σ_{i∈I^(ν)} µ_i^(ν) δ_i^(ν) ≤ δ^(ν),   (28)
which shows that d(X, X) ≤ δ^(ν) for all X ∈ X^(ν) and concludes the proof.
To ensure the convergence of an AAS, we make the following additional assumptions on the cost functions (c_t)_t:
Assumption 6. Lipschitz continuous costs
For each t ∈ T, c_t is a Lipschitz continuous function on [0, M]. There exists C > 0 such that for each t ∈ T, |c'_t(·)| ≤ C.
Assumption 7.
Strong monotonicity There exists c 0 > 0 such that, for each t ∈ {1, . . . , T },
c'_t(·) ≥ c_0 on [0, M].
In the following sections, we differentiate the cases with and without utilities, because we found different convergence results in the two cases.
Players without Utility Functions: Convergence of the Aggregated Equilibrium Profiles
In this section, we assume that u θ ≡ 0 for each θ ∈ Θ.
We give a first result on the approximation of WE by a sequence of NE in Thm. 11.
Theorem 11. Let (G^(ν))_ν be an AAS of a nonatomic instance G, satisfying Asms. 1, 2, 4, 6 and 7. Let (x̂^(ν)) be a sequence of NE associated to (G^(ν)), and (x*_θ)_θ a WE of G. Then:
|| X̂^(ν) - X* ||_2^2 ≤ (2/c_0) ( B_c δ^(ν) + C(M + 1)^2 µ^(ν) ),
where B_c := max_{x∈B_0(M)} ||c(x)||_2.
Proof. Let P_i denote the Euclidean projection onto X_i^(ν) and Π the projection onto X. We omit the index ν for simplicity. From (11), we get:
⟨c(X*), Π(X̂) - X*⟩ ≥ 0.   (29)
On the other hand, with x*_i := ∫_{Θ_i} x*_θ dθ, we get from Proposition 1:
0 ≤ Σ_{i∈I} ⟨ ( c_t(X̂_t) + x̂_{i,t} c'_t(X̂_t) )_{t∈T} , P_i(x*_i) - x̂_i ⟩   (30)
= ⟨ c(X̂), Σ_i P_i(x*_i) - X̂ ⟩ + R(x̂, x*)   (31)
with R(x̂, x*) = Σ_i ⟨ ( x̂_{i,t} c'_t(X̂_t) )_t , P_i(x*_i) - x̂_i ⟩.
From the Cauchy-Schwarz inequality and Lemma 9, we get:
|R(x̂, x*)| ≤ Σ_{i∈I^(ν)} || ( x̂_{i,t} c'_t(X̂_t) )_t ||_2 × || P_i(x*_i) - x̂_i ||_2   (32)
≤ Σ_{i∈I^(ν)} µ_i^(ν)(M + δ_i^(ν)) C × 2 µ_i^(ν)(M + δ_i^(ν))   (33)
≤ 2C(M + 1)^2 max_i µ_i^(ν).   (34)
Besides, with the strong monotonicity of c and from (29) and (30):
c_0 || X̂ - X* ||^2 ≤ ⟨ c(X̂) - c(X*), X̂ - X* ⟩
= ⟨ c(X̂), X̂ - X* ⟩ + ⟨ c(X*), X* - X̂ ⟩
≤ ⟨ c(X̂), X̂ - Σ_i P_i(x*_i) ⟩ + ⟨ c(X*), X* - Π(X̂) ⟩ + ⟨ c(X̂), Σ_i P_i(x*_i) - X* ⟩ + ⟨ c(X*), Π(X̂) - X̂ ⟩
≤ |R(x̂, x*)| + 0 + 2 B_c × max_i δ_i,
which concludes the proof.
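For intuition, the Python sketch below illustrates this convergence numerically (it is not part of the paper): a nonatomic instance of Ex. 3 with demand density E_θ = 1 + θ, two arcs and affine costs is approximated by I equal-size classes carrying the aggregated class demand, and the distance between the aggregate NE and WE loads shrinks as I grows; all numbers are made up, and sequential best responses (a projected Gauss-Seidel scheme) are only a convenient solver choice for this small example.

import numpy as np

a, b = np.array([1.0, 2.0]), np.array([0.0, 1.0])

def demand(theta):
    return 1.0 + theta                           # nonatomic demand density E_theta

def wardrop_aggregate(E_tot, iters=100):
    lo, hi = b.min(), (a * E_tot + b).max()      # bisection on the common cost level
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if np.maximum((lam - b) / a, 0).sum() < E_tot else (lo, lam)
    return np.maximum((0.5 * (lo + hi) - b) / a, 0)

def nash_aggregate(E, iters=500):
    """Sequential best responses for I atomic players splitting E[i] over 2 arcs."""
    s = E / 2                                    # s[i] = load of player i on arc 1
    for _ in range(iters):
        for i in range(len(E)):
            o1 = s.sum() - s[i]
            o2 = (E.sum() - E[i]) - o1
            s[i] = np.clip((2 * a[1] * E[i] + a[1] * o2 + b[1] - a[0] * o1 - b[0])
                           / (2 * (a[0] + a[1])), 0.0, E[i])
    return np.array([s.sum(), E.sum() - s.sum()])

X_star = wardrop_aggregate(E_tot=1.5)            # total demand: integral of 1 + theta
for I in (2, 8, 32):
    E = np.array([demand((i + 0.5) / I) / I for i in range(I)])   # class demands
    print(I, np.linalg.norm(nash_aggregate(E) - X_star))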
Players with Utility Functions: Convergence of the Individual Equilibrium Profiles
In order to establish a convergence theorem in the presence of utility functions, we make an additional assumption of strong monotonicity on the utility functions stated in Asm. 8. Note that this assumption holds for the utility functions given in Ex. 2.
Assumption 8. Strongly concave utilities
For all θ ∈ Θ, u_θ is strongly concave on B_0(M), uniformly in θ: there exists α > 0 such that for all (x, y) ∈ B_0(M)^2 and any τ ∈ ]0, 1[:
u_θ( (1-τ)x + τy ) ≥ (1-τ) u_θ(x) + τ u_θ(y) + (α/2) τ(1-τ) ||x - y||^2.
Remark 2. If u θ (x θ ) is α θ -strongly concave, then the negative of its gradient is a strongly monotone operator:
⟨-∇u_θ(x_θ) + ∇u_θ(y_θ), x_θ - y_θ⟩ ≥ α_θ ||x_θ - y_θ||^2.   (35)
We start by showing that, under the additional Asm. 8 on the utility functions, the WE profiles of two nonatomic users within the same subset $\Theta_i^{(\nu)}$ are roughly the same.
Proposition 12. Let $(\mathcal{G}^{(\nu)})_\nu$ be an AAS of a nonatomic instance $\mathcal{G}$ and $(x^*_\theta)_\theta$ the WE of $\mathcal{G}$, satisfying Asms. 1, 4, 5 and 8. Then, if $\theta, \xi \in \Theta_i^{(\nu)}$, we have:
$$\|x^*_\theta - x^*_\xi\|_2^2 \le \frac{2}{\alpha}\Big( M d_i^{(\nu)} + (B_c + \Gamma)\, \delta_i^{(\nu)} \Big)$$
$v_\xi(x^*_\xi) - c(X)$, which gives the desired result when combined with (40).
This result reveals the role of the strong concavity of the utility functions: when $\alpha$ goes to 0, the right-hand side of the inequality diverges. This is coherent with the fact that, without utilities, only the aggregated profile matters, so that we cannot have a result such as Prop. 12.
According to Prop. 12, we can obtain a continuity property of the Wardrop equilibrium if we introduce the notion of continuity for the nonatomic game $\mathcal{G}$, relatively to its parameters:
Definition 6. Continuity of a nonatomic game The nonatomic instance $\mathcal{G} = \big(\Theta, \mathcal{T}, (\mathcal{X}_\theta)_\theta, c, (u_\theta)_\theta\big)$ is said to be continuous at $\theta \in \Theta$ if, for all $\varepsilon > 0$, there exists $\eta > 0$ such that:
$$\forall \theta' \in \Theta,\quad |\theta' - \theta| \le \eta \ \Longrightarrow\ \begin{cases} d_H(\mathcal{X}_\theta, \mathcal{X}_{\theta'}) \le \varepsilon, \\ \max_{x \in \mathcal{X}_\theta \cup \mathcal{X}_{\theta'}} \|\nabla u_\theta(x) - \nabla u_{\theta'}(x)\|_2 \le \varepsilon. \end{cases} \quad (41)$$
Then the proof of Prop. 12 shows the following intuitive property:
Proposition 13. Let $\mathcal{G} = \big(\Theta, \mathcal{T}, (\mathcal{X}_\theta)_\theta, c, (u_\theta)_\theta\big)$ be a nonatomic instance. If $\mathcal{G}$ is continuous at $\theta_0 \in \Theta$ and $(x^*_\theta)_\theta$ is a WE of $\mathcal{G}$, then $\theta \mapsto x^*_\theta$ is continuous at $\theta_0$.
The next theorem is one of the main results of this paper. It shows that a WE can be approximated by the NE of an atomic approximating sequence.
Theorem 14. Let $(\mathcal{G}^{(\nu)})_\nu$ be an AAS of a nonatomic instance $\mathcal{G}$, let $(\hat{x}^{(\nu)})_\nu$ be a sequence of NE associated to $(\mathcal{G}^{(\nu)})_\nu$, and $(x^*_\theta)_\theta$ the WE of $\mathcal{G}$. Under Asms. 1 to 6 and 8, the approximating solution defined by $\hat{x}^{(\nu)}_\theta := \frac{1}{\mu_i^{(\nu)}}\, \hat{x}^{(\nu)}_i$ for $\theta \in \Theta_i^{(\nu)}$ satisfies:
$$\int_{\theta\in\Theta} \|\hat{x}^{(\nu)}_\theta - x^*_\theta\|_2^2\, \mathrm{d}\theta \le \frac{2}{\alpha}\Big( (B_c + \Gamma)\, \delta^{(\nu)} + C(M+1)^2\, \mu^{(\nu)} + M d^{(\nu)} \Big).$$
Proof. Let $(\hat{x}_i)_i$ be an NE of $\mathcal{G}^{(\nu)}$, and $x^* \in \mathcal{X}$ the WE of $\mathcal{G}$. For the remainder of the proof we omit the index $(\nu)$ for simplicity.
Let us consider the nonatomic profile defined by $\hat{x}_\theta := \frac{1}{\mu_i}\hat{x}_i$ for $\theta \in \Theta_i$, and its projection on the feasibility set $\hat{y}_\theta := P_{\mathcal{X}_\theta}(\hat{x}_\theta)$. Similarly, let us consider the atomic profile given by $x^*_i := \int_{\Theta_i} x^*_\theta\, \mathrm{d}\theta$ for $i \in \mathcal{I}^{(\nu)}$, and its projection $y^*_i := P_{\mathcal{X}_i}(x^*_i)$. For notational simplicity, we denote $\nabla u_\theta$ by $v_\theta$. From the strong concavity of $u_\theta$ and the strong monotonicity of $c$, we have:
$$\alpha \int_{\theta\in\Theta} \|\hat{x}_\theta - x^*_\theta\|_2^2\, \mathrm{d}\theta + c_0\, \|\hat{X} - X^*\|_2^2 \quad (42)$$
$$\le \int_\Theta \big\langle c(\hat{X}) - v_\theta(\hat{x}_\theta) - \big(c(X^*) - v_\theta(x^*_\theta)\big),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta \quad (43)$$
$$= \int_\Theta \big\langle c(\hat{X}) - v_\theta(\hat{x}_\theta),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta + \int_\Theta \big\langle c(X^*) - v_\theta(x^*_\theta),\ x^*_\theta - \hat{x}_\theta \big\rangle\, \mathrm{d}\theta. \quad (44)$$
To bound the second term, we use the characterization of a WE given in Thm. 3, with $\hat{y}_\theta \in \mathcal{X}_\theta$:
$$\int_\Theta \big\langle c(X^*) - v_\theta(x^*_\theta),\ x^*_\theta - \hat{x}_\theta \big\rangle\, \mathrm{d}\theta \quad (45)$$
$$= \int_\Theta \big\langle c(X^*) - v_\theta(x^*_\theta),\ x^*_\theta - \hat{y}_\theta \big\rangle\, \mathrm{d}\theta + \int_\Theta \big\langle c(X^*) - v_\theta(x^*_\theta),\ \hat{y}_\theta - \hat{x}_\theta \big\rangle\, \mathrm{d}\theta \quad (46)$$
$$\le 0 + \sum_{i\in\mathcal{I}^{(\nu)}} \int_{\Theta_i} \big\| c(X^*) - v_\theta(x^*_\theta) \big\|_2 \times \big\| \hat{y}_\theta - \hat{x}_\theta \big\|_2\, \mathrm{d}\theta \quad (47)$$
$$\le \sum_{i\in\mathcal{I}^{(\nu)}} \int_{\Theta_i} (B_c + \Gamma)\, \delta_i\, \mathrm{d}\theta \le (B_c + \Gamma)\, \delta. \quad (48)$$
To bound the first term of (44), we divide it into two integral terms:
$$\int_\Theta \big\langle c(\hat{X}) - v_\theta(\hat{x}_\theta),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta \quad (49)$$
$$= \sum_{i\in\mathcal{I}^{(\nu)}} \left( \int_{\Theta_i} \big\langle c(\hat{X}) - v_i(\hat{x}_i),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta + \int_{\Theta_i} \big\langle v_i(\hat{x}_i) - v_\theta(\hat{x}_\theta),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta \right). \quad (50)$$
The first integral term is bounded using the characterization of an NE given in Thm. 1:
$$\sum_{i\in\mathcal{I}^{(\nu)}} \int_{\Theta_i} \big\langle c(\hat{X}) - v_i(\hat{x}_i),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta \quad (51)$$
$$= \sum_{i\in\mathcal{I}^{(\nu)}} \big\langle c(\hat{X}) - v_i(\hat{x}_i),\ \hat{x}_i - x^*_i \big\rangle \quad (52)$$
$$\le \sum_{i\in\mathcal{I}^{(\nu)}} \big\langle c(\hat{X}) - v_i(\hat{x}_i),\ \hat{x}_i - y^*_i \big\rangle + \sum_{i\in\mathcal{I}^{(\nu)}} \big\langle c(\hat{X}) - v_i(\hat{x}_i),\ y^*_i - x^*_i \big\rangle \quad (53)$$
$$\le -R(\hat{x}, x^*) + \sum_{i\in\mathcal{I}^{(\nu)}} \big\| c(\hat{X}) - v_i(\hat{x}_i) \big\|_2 \times \big\| y^*_i - x^*_i \big\|_2 \quad (54)$$
$$\le 2C(M+1)^2\, \mu + (B_c + \Gamma) \times 2M\delta \sum_{i\in\mathcal{I}^{(\nu)}} \mu_i \quad (55)$$
$$= 2C(M+1)^2\, \mu + (B_c + \Gamma) \times 2M\delta. \quad (56)$$
For the second integral term, we use the distance between utilities (17):
$$\sum_{i\in\mathcal{I}^{(\nu)}} \int_{\Theta_i} \big\langle v_i(\hat{x}_i) - v_\theta(\hat{x}_\theta),\ \hat{x}_\theta - x^*_\theta \big\rangle\, \mathrm{d}\theta \quad (57)$$
$$\le \sum_{i\in\mathcal{I}^{(\nu)}} \mu_i\, \big\| v_i(\hat{x}_i) - v_\theta(\hat{x}_\theta) \big\|_2 \times \big\| \hat{x}_\theta - x^*_\theta \big\|_2 \quad (58)$$
$$\le \sum_{i\in\mathcal{I}^{(\nu)}} \mu_i\, d_i \times 2M \le 2M d. \quad (59)$$
We conclude the proof by combining (48), (56) and (59).
As in Prop. 12, the uniform strong concavity of the utility functions plays a key role in the convergence of the disaggregated profiles $(\hat{x}^{(\nu)}_\theta)_\nu$ to the nonatomic WE profile $x^*$.
Construction of an Approximating Sequence
In this section, we give examples of the construction of an AAS for a nonatomic game G, under two particular cases: the case of piecewise continuous functions and, next, the case of finitedimensional parameters.
Piecewise continuous parameters, uniform splitting
In this case, we assume that the parameters of the nonatomic game are piecewise continuous functions of $\theta \in \Theta$: there exists a finite set of $K$ discontinuity points $0 \le \sigma_1 < \sigma_2 < \dots < \sigma_K \le 1$, and the game is uniformly continuous (Def. 6) on each $(\sigma_k, \sigma_{k+1})$, $k \in \{0, \dots, K\}$, with the convention $\sigma_0 = 0$ and $\sigma_{K+1} = 1$.
For $\nu \in \mathbb{N}^*$, consider the ordered set of cutting points $(\upsilon_i^{(\nu)})_{i=0}^{I_\nu} := \big\{\tfrac{k}{\nu}\big\}_{0\le k\le \nu} \cup \{\sigma_k\}_{1\le k\le K}$ and define the partition $(\Theta_i^{(\nu)})_{i\in\mathcal{I}^{(\nu)}}$ of $\Theta$ by:
$$\forall i \in \{1, \dots, I_\nu\}, \quad \Theta_i^{(\nu)} = [\upsilon_{i-1}^{(\nu)}, \upsilon_i^{(\nu)}). \quad (60)$$
Proposition 15. For $\nu \in \mathbb{N}^*$, consider the atomic game $\mathcal{G}^{(\nu)}$ defined with $\mathcal{I}^{(\nu)} := \{1, \dots, I_\nu\}$ and, for each $i \in \mathcal{I}^{(\nu)}$:
$$\mathcal{X}_i^{(\nu)} := \mu_i^{(\nu)}\, \mathcal{X}_{\bar{\upsilon}_i^{(\nu)}} \quad \text{and} \quad u_i^{(\nu)} := x \mapsto \mu_i^{(\nu)}\, u_{\bar{\upsilon}_i^{(\nu)}}\Big(\tfrac{1}{\mu_i^{(\nu)}}\, x\Big), \quad \text{with } \bar{\upsilon}_i^{(\nu)} = \frac{\upsilon_{i-1}^{(\nu)} + \upsilon_i^{(\nu)}}{2}.$$
Then $\big(\mathcal{G}^{(\nu)}\big)_\nu = \big(\mathcal{I}^{(\nu)}, \mathcal{T}, \mathcal{X}^{(\nu)}, c, u^{(\nu)}\big)_\nu$ is an AAS of the nonatomic game $\mathcal{G} = \big(\Theta, \mathcal{T}, \mathcal{X}_{\cdot}, c, (u_\theta)_\theta\big)$.
Proof. We have $I_\nu > \nu \to \infty$ and, for each $i \in \mathcal{I}^{(\nu)}$, $\mu(\Theta_i^{(\nu)}) \le \tfrac{1}{\nu} \to 0$. The conditions on the feasibility sets and the utility functions are obtained from the piecewise uniform continuity conditions. If we consider a common modulus of uniform continuity $\eta$ associated to an arbitrary $\varepsilon > 0$, then, for $\nu$ large enough, we have $\mu_i^{(\nu)} < \eta$ for each $i \in \mathcal{I}^{(\nu)}$. Thus, for all $\theta \in \Theta_i^{(\nu)}$, $|\bar{\upsilon}_i^{(\nu)} - \theta| < \eta$, so that from the continuity conditions we have:
$$d_H\Big(\mathcal{X}_\theta,\ \tfrac{1}{\mu_i^{(\nu)}}\, \mathcal{X}_i^{(\nu)}\Big) = d_H\big(\mathcal{X}_\theta,\ \mathcal{X}_{\bar{\upsilon}_i^{(\nu)}}\big) < \varepsilon \quad (61)$$
and
$$\max_{x\in B_0(M)} \Big\| \nabla u_i^{(\nu)}\big(\mu_i^{(\nu)} x\big) - \nabla u_\theta(x) \Big\|_2 = \max_{x\in B_0(M)} \Big\| \tfrac{\mu_i^{(\nu)}}{\mu_i^{(\nu)}}\, \nabla u_{\bar{\upsilon}_i^{(\nu)}}\Big(\tfrac{1}{\mu_i^{(\nu)}}\, \mu_i^{(\nu)} x\Big) - \nabla u_\theta(x) \Big\|_2 < \varepsilon, \quad (62)$$
which concludes the proof.
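As a complement, here is a small computational sketch (ours, not from the paper) of the cutting points, weights and representatives used in Prop. 15; the discontinuity points passed below are placeholders.
```python
import numpy as np

def uniform_splitting(nu, sigma):
    """Cutting points, weights and representatives of the uniform splitting (60).

    nu    : number of uniform cells (index of the approximating game)
    sigma : discontinuity points of the parameters inside (0, 1)
    """
    upsilon = np.union1d(np.arange(nu + 1) / nu, np.asarray(sigma, dtype=float))
    mu = np.diff(upsilon)                      # cell weights mu_i (Lebesgue measure)
    keep = mu > 0                              # discard zero-measure cells
    mid = (upsilon[:-1] + upsilon[1:]) / 2     # representatives bar_upsilon_i
    return upsilon, mu[keep], mid[keep]

# Example: nu = 10 uniform cells plus one discontinuity point at theta = 0.73.
ups, mu, mid = uniform_splitting(10, [0.73])
print(len(mu), mu.sum())   # number of atomic players I_nu and total mass (= 1.0)
```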
Proof. The proof follows [START_REF] Batson | Combinatorial behavior of extreme points of perturbed polyhedra[END_REF] in several parts, but we extend the result to the compact set $B$ and drop the irredundancy assumption made in [START_REF] Batson | Combinatorial behavior of extreme points of perturbed polyhedra[END_REF].
For each $b$, we denote by $V(b)$ the set of vertices of the polyhedron $\Lambda_b$. Under Assumption 4, $V(b)$ is nonempty for any $b \in B$.
First, as $\Lambda_b$ is a polytope, we have $\Lambda_b = \mathrm{conv}(V(b))$, where $\mathrm{conv}(X)$ is the convex hull of a set $X$. As the function $x \mapsto d(x, \Lambda_{b'})$ defined over $\Lambda_b$ is continuous and convex, by the maximum principle its maximum over the polytope $\Lambda_b$ is achieved on $V(b)$. Thus, we have:
$$d_H(\Lambda_b, \Lambda_{b'}) = \max\Big[ \max_{x\in\Lambda_b} d(x, \Lambda_{b'}),\ \max_{x\in\Lambda_{b'}} d(\Lambda_b, x) \Big] \quad (64)$$
$$= \max\Big[ \max_{x\in V(b)} d(x, \Lambda_{b'}),\ \max_{x\in V(b')} d(\Lambda_b, x) \Big] \quad (65)$$
$$\le \max\Big[ \max_{x\in V(b)} d(x, V(b')),\ \max_{x\in V(b')} d(V(b), x) \Big] \quad (66)$$
$$= d_H\big(V(b), V(b')\big). \quad (67)$$
Let us denote by $H_i(b)$ the hyperplane $\{x : A_i x = b_i\}$, and by $H_i^-(b) = \{x : A_i x \le b_i\}$ and $H_i^+(b) = \{x : A_i x \ge b_i\}$ the associated half-spaces. Then $\Lambda_b = \bigcap_{i\in\{1,\dots,m\}} H_i^-(b)$. Now fix $b_0 \in B$ and consider $v \in V(b_0)$. By definition, $v$ is the intersection of hyperplanes $\bigcap_{i\in K} H_i(b_0)$, where $K \subset \{1, \dots, m\}$ is maximal (note that $k := \mathrm{card}(K) \ge n$, otherwise $v$ cannot be a vertex).
For $J \subset \{1, \dots, m\}$, let $A_J$ denote the submatrix of $A$ obtained by considering the rows $A_j$ for $j \in J$. Let us introduce the sets of derived points (points of the arrangement) of the set $K$, for each $b \in B$:
$$V_K(b) := \{x \in \mathbb{R}^n ;\ \exists J \subset K,\ A_J \text{ is invertible and } x = A_J^{-1}\, b_J\}.$$
By definition, $V_K(b_0) = \{v\}$ and, for each $b \in B$, $V_K(b)$ is a set of at most $\binom{k}{n}$ elements. First, note that for each $b \in B$ and $v' := A_J^{-1} b_J \in V_K(b)$, one has:
$$\|v - v'\| = \|A_J^{-1} b_{0,J} - A_J^{-1} b_J\| \le \|A_J^{-1}\| \times \|b_0 - b\| \le \alpha\, \|b_0 - b\|, \quad (68)$$
where $\alpha := \max_J \|A_J^{-1}\|$, the (finite) maximum being taken over the subsets $J \subset \{1, \dots, m\}$ such that $A_J$ is invertible.
In the numerical application, $y_\theta = (0, E_\theta)$ is the preference of user $\theta$ for period $P$ and $\omega_\theta := \theta$ its preference weight. We consider approximating atomic games obtained by splitting $\Theta$ uniformly (Sec. 3.4.1) into 5, 20, 40 and 100 segments (players). We compute the NE of each atomic game using best-response dynamics (each best response is computed as a QP using the algorithm of [START_REF] Brucker | An o(n) algorithm for quadratic knapsack problems[END_REF]; see [START_REF] Jacquot | Analysis and implementation of an hourly billing mechanism for demand response management[END_REF] for convergence properties), iterating until the KKT optimality conditions of each player are satisfied up to an absolute error of $10^{-3}$. Fig. 2 shows, for each NE associated to the atomic games with 5, 20, 40 and 100 players, the linear interpolation of the load on the peak period $x_{\theta,P}$ (red filled area), while the load on the off-peak period can be observed as $x_{\theta,O} = E_\theta - x_{\theta,P}$. We observe the convergence to the limit WE of the nonatomic game, as stated in Thm. 14. We also observe that the only discontinuity point of $\theta \mapsto x^*_{\theta,P}$ comes from the discontinuity of $\theta \mapsto E_\theta$ at $\theta = 0.7$, as stated in Prop. 13.
Figure 2: Convergence of the Nash Equilibrium profiles to a Wardrop Equilibrium profile.
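To make the procedure concrete, here is a minimal sketch (ours, not the authors' implementation) of cyclic best-response dynamics on the two-period instance with $c_O(X) = X$ and $c_P(X) = 1 + 2X$. The demand profile, preference weights and targets below are placeholders, and each best response is solved by bounded scalar minimization rather than the dedicated QP algorithm cited above; the quadratic preference penalty is a stand-in for the paper's utilities.
```python
import numpy as np
from scipy.optimize import minimize_scalar

c_O = lambda X: X                 # off-peak price
c_P = lambda X: 1.0 + 2.0 * X     # peak price

def best_response(i, xP, E, w, yP):
    """Best response of atomic player i: choose its peak load z in [0, E_i]
    (off-peak load is E_i - z), given the other players' current loads."""
    oP = xP.sum() - xP[i]                         # others' peak load
    oO = (E - xP).sum() - (E[i] - xP[i])          # others' off-peak load
    def cost(z):
        return ((E[i] - z) * c_O(oO + E[i] - z)   # payment on the off-peak period
                + z * c_P(oP + z)                 # payment on the peak period
                + w[i] * (z - yP[i]) ** 2)        # stand-in preference penalty
    return minimize_scalar(cost, bounds=(0.0, E[i]), method="bounded").x

def best_response_dynamics(E, w, yP, n_iter=500, tol=1e-8):
    xP = E / 2.0                                  # start from an even split
    for _ in range(n_iter):
        old = xP.copy()
        for i in range(len(E)):                   # cyclic best responses
            xP[i] = best_response(i, xP, E, w, yP)
        if np.max(np.abs(xP - old)) < tol:
            break
    return xP

# 20 players approximating Theta = [0, 1]; demand jumps at theta = 0.7 (placeholder values).
n = 20
theta = (np.arange(n) + 0.5) / n
E = np.where(theta < 0.7, 0.6, 1.0) / n
xP = best_response_dynamics(E, w=theta / n, yP=np.zeros(n))
print(xP)                                          # peak loads; off-peak loads are E - xP
```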
Conclusion
This paper gives quantitative results on the convergence of Nash equilibria, associated to atomic games approximating a nonatomic routing game, to the Wardrop equilibrium of this nonatomic game. These results are obtained under different differentiability and monotonicity assumptions. Several directions can be explored to continue this work: first, we could analyze how the given theorems could be modified to apply in case of nonmonotone and nondifferentiable functions. Another natural extension would be to consider routing games on nonparallel networks or even general aggregative games: in that case, the separable costs structure is lost and the extension is therefore not trivial.
Figure 1: A parallel network with T links.
Let $\eta := \min_{j\in\{1,\dots,m\}\setminus K} d\big(v, H_j^+(b_0)\big)$. By the maximality of $K$, $\eta > 0$. As $x \mapsto d(x, H_j^+(b))$ is continuous for each $j$, and from (68), there exists $\delta > 0$ such that:
$$\|b_0 - b\| \le \delta \ \Longrightarrow\ \forall v' \in V_K(b),\quad \min_{j\in\{1,\dots,m\}\setminus K} d\big(v', H_j^+(b)\big) > 0.$$
Next, we show that, for $b$ such that $\|b_0 - b\| \le \delta$, there exists $v' \in V_K(b) \cap V(b)$. We proceed by induction on $k - n$. If $k = n$, then $v = A_K^{-1} b_{0,K}$ and, for any $b$ in the ball $S_\delta(b_0)$, $V_K(b) = \{A_K^{-1} b_K\}$. Thus $v' := A_K^{-1} b_K$ verifies $A_K v' = b_K$ and $A_j v' < b_j$ for all $j \notin K$, so $v'$ belongs to $V(b)$. If $k = n + t$ with $t \ge 1$, there exists $j_0 \in K$ such that, with $K' = K \setminus \{j_0\}$, $V_{K'}(b_0) = \{v\}$. Consider the polyhedron $P = \bigcap_{i\in K'} H_i^-(b_0)$. By induction, there exists $J \subset K'$ such that $A_J^{-1} b_J$ is a vertex of $P$. If it also satisfies $A_{j_0} x \le b_{j_0}$, then it is an element of $V(b)$. Else, consider a vertex $v'$ of the polyhedron $P \cap H_{j_0}^-(b)$ on the facet associated with $H_{j_0}(b)$. Then $v' \in V_K(b)$ and, as $b \in S_\delta(b_0)$, it verifies $A_j v' < b_j$ for all $j \notin K$, thus $v' \in V(b)$. Thus, in any case and for $b \in S_\delta(b_0)$, $d(v, V(b)) \le \|v - v'\| \le \alpha\, \|b_0 - b\|$ and finally $d\big(V(b_0), V(b)\big) \le \alpha\, \|b_0 - b\|$. The collection $\big(S_{\delta_b}(b)\big)_{b\in B}$ is an open covering of the compact set $B$, thus there exists a finite subcollection of cardinality $r$ that also covers $B$, from which we deduce that there exists $D \le \max(r\alpha)$ such that:
$$\forall b, b' \in B,\quad d_H\big(V(b'), V(b)\big) \le D\, \|b' - b\|.$$
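The vertex-perturbation argument can also be checked numerically; below is a small sketch (ours, not from the paper) that enumerates $V(b)$ through the same derived-points idea — solving $A_J x = b_J$ for invertible $n$-row submatrices and keeping the feasible solutions — and compares $d_H(V(b), V(b'))$ with $\|b - b'\|$ on a randomly perturbed box.
```python
import itertools
import numpy as np

def vertices(A, b, tol=1e-9):
    """Vertices of {x : A x <= b}: feasible derived points A_J^{-1} b_J."""
    m, n = A.shape
    V = []
    for J in itertools.combinations(range(m), n):
        AJ, bJ = A[list(J), :], b[list(J)]
        if abs(np.linalg.det(AJ)) < tol:
            continue
        x = np.linalg.solve(AJ, bJ)
        if np.all(A @ x <= b + tol):          # keep only feasible derived points
            V.append(x)
    return np.unique(np.round(np.array(V), 9), axis=0)

def hausdorff(P, Q):
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Unit square {0 <= x <= 1}^2 written as A x <= b, plus a small perturbation of b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b0 = np.array([1.0, 0.0, 1.0, 0.0])
rng = np.random.default_rng(0)
b1 = b0 + 0.05 * rng.standard_normal(4)
ratio = hausdorff(vertices(A, b0), vertices(A, b1)) / np.linalg.norm(b1 - b0)
print(ratio)   # stays bounded, as the Lipschitz-type estimate predicts
```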
Acknowledgments
We thank Stéphane Gaubert, Marco Mazzola, Olivier Beaude and Nadia Oudjane for their insightful comments.
This work was supported in part by the PGMO foundation.
Finite dimension, meshgrid approximation
Consider a nonatomic routing game $\mathcal{G} = (\Theta, \mathcal{X}, F)$ (Def. 3) satisfying the following two hypotheses:
• The feasibility sets are K-dimensional polytopes: there exist A ∈ M K,T (R) and b : Θ → R K bounded, such that for any θ, X θ := {x ∈ R T ; Ax ≤ b θ }, with X θ nonempty and bounded (as a polytope, X θ is closed and convex).
• There exist a bounded function $s : \Theta \to \mathbb{R}^q$ and a function $u : \mathbb{R}^q \times B_0(M) \to \mathbb{R}$ such that, for any $\theta \in \Theta$, $u_\theta = u(s_\theta, \cdot)$. Furthermore, $u$ is Lipschitz-continuous in $s$.
For $\nu \in \mathbb{N}^*$, we consider the uniform meshgrid of $\nu^{K+q}$ classes of the bounded parameter ranges $[\underline{b}_k, \overline{b}_k]$ and $[\underline{s}_k, \overline{s}_k]$, which gives a set of $I^{(\nu)} = \nu^{K+q}$ subsets. More explicitly, we define $\Gamma^{(\nu)}$, the set of indices of the meshgrid, the associated cutting points, and the subsets $\Theta_n^{(\nu)}$ of $\Theta$ accordingly. Since some of the subsets $\Theta_n^{(\nu)}$ can be of Lebesgue measure 0, we define the set of players $\mathcal{I}^{(\nu)}$ as the elements $n$ of $\Gamma^{(\nu)}$ for which $\mu(\Theta_n^{(\nu)}) > 0$.
Remark 3. If there is a set of players of positive measure that have equal parameters $b$ and $s$, then the condition $\max_{i\in\mathcal{I}^{(\nu)}} \mu_i \to 0$ will not be satisfied. In that case, adding another dimension in the meshgrid by cutting $\Theta = [0, 1]$ into $\nu$ uniform segments solves the problem. Proposition 16. For $\nu \in \mathbb{N}^*$, consider the atomic game $\mathcal{G}^{(\nu)}$ defined by:
Before giving the proof of Prop. 16, we show the following nontrivial Theorem 17, from which the convergence of the feasibility sets is easily derived.
Proof of Prop. 16. First, to show the divergence of the number of players and their infinitesimal weights, we follow Remark 3 and consider an additional splitting of the segment $\Theta = [0, 1]$. In that case, we have $I^{(\nu)} \ge \nu$, which goes to positive infinity, and for each $n \in \mathcal{I}^{(\nu)}$, $\mu(\Theta_n^{(\nu)}) \le \tfrac{1}{\nu}$, which goes to 0. Then, the convergence of the strategy sets follows from the fact that, for each $n \in \mathcal{I}^{(\nu)}$:
and from Thm. 17, which implies that, for each $\theta \in \Theta_n^{(\nu)}$:
Finally, the convergence of the utility functions comes from the Lipschitz continuity in $s$. For each $n \in \mathcal{I}^{(\nu)}$ and each $\theta \in \Theta_n^{(\nu)}$, we have:
which concludes the proof. Of course, the number of players considered in Prop. 16 is exponential in the dimension of the parameters, $K + q$, which can be large in practice. As a result, the number of players in the approximating atomic games can be very large, which makes the NE computation long. Taking advantage of the continuity of the parameterizing functions and following the approach of Prop. 15 gives a smaller (in terms of number of players) approximating atomic instance.
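A minimal sketch (ours) of the meshgrid bucketing described above, for scalar parameters $b_\theta$ and $s_\theta$ and a sampled population standing in for the continuum $\Theta$: players are grouped by the cell of a uniform $\nu \times \nu$ grid on the ranges of $b$ and $s$, and empty cells are discarded.
```python
import numpy as np

def meshgrid_classes(b, s, nu):
    """Group a sampled population by a uniform nu x nu grid on (b, s) (case K = q = 1).

    Returns, for each nonempty cell, the fraction of players it contains (mu_n)
    and the cell center used as the common parameter of the atomic player.
    """
    def bin_index(v):
        edges = np.linspace(v.min(), v.max(), nu + 1)
        return np.clip(np.searchsorted(edges, v, side="right") - 1, 0, nu - 1), edges
    ib, eb = bin_index(b)
    js, es = bin_index(s)
    cells = {}
    for idx, key in enumerate(zip(ib, js)):
        cells.setdefault(key, []).append(idx)
    players = []
    for (kb, ks), members in cells.items():          # only nonempty cells become players
        mu_n = len(members) / len(b)
        center = ((eb[kb] + eb[kb + 1]) / 2, (es[ks] + es[ks + 1]) / 2)
        players.append((mu_n, center))
    return players

# Example: 1000 sampled users, nu = 4, hence at most nu**2 = 16 atomic players.
rng = np.random.default_rng(1)
print(len(meshgrid_classes(rng.uniform(0, 1, 1000), rng.uniform(0, 2, 1000), 4)))
```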
Numerical Application
We consider a population of consumers $\Theta = [0, 1]$ with an energy demand distribution $\theta \mapsto E_\theta$. Each consumer $\theta$ splits her demand over $\mathcal{T} := \{O, P\}$, so that her feasibility set is $\mathcal{X}_\theta := \{(x_{\theta,O}, x_{\theta,P}) \in \mathbb{R}_+^2 : x_{\theta,O} + x_{\theta,P} = E_\theta\}$. The index $O$ stands for off-peak hours, with a lower price $c_O(X) = X$, and $P$ for peak hours, with a higher price $c_P(X) = 1 + 2X$. The energy demand and the utility function in the nonatomic game are chosen as piecewise continuous functions:
01762555 | en | [ "spi.auto" ] | 2024/03/05 22:32:13 | 2017 | https://theses.hal.science/tel-01762555/file/PHAM_2017_diffusion.pdf | Dirac structure KDS Kirchhoff Dirac structure PCH Port-Controlled Hamiltonian PH Port-Hamiltonian MPC Model Predictive Control KiBam Kinetic Battery Model MPPT Maximum Power Point Tracking PMSM Permanent Magnet Synchronous Machine MTPA Maximum
Thanks
This thesis is the result of three years work within the LCIS laboratory of Grenoble INP, Valence, France, with funding from the European project Arrowhead. During this important period of my life, I received a lot of help and support from many people to make this work fruitful. I take this opportunity to express my gratitude to all of them.
First of all, I wish to thank my supervisors, Mr. Laurent LEFEVRE, Ms. Ionela PRODAN and Mr. Denis GENON-CATALOT. Four years ago, Laurent gave me the chance to do a research internship at LCIS, which was a huge challenge for me at that time. This continued with a PhD thesis. With his supervision and support, I did not feel any pressure, just the passion which motivated me to advance my work. To Ionela, my nearly direct supervisor: I will never forget the thousands of hours she spent discussing my work, correcting my papers and showing me how to work efficiently. I also thank Denis for his great support on the industrial collaboration and his constant communication with a friendly smile. To all of them, I truly appreciate their kindness and availability in answering my questions and providing help whenever I needed it. Moreover, by sharing my social life with them through numerous friendly dinners and outdoor activities, I believe that they are also my friends with whom I feel free to express my opinions and emotions. Working with them has been my most valuable professional experience and formed the first steps of my research career.
I wish to thank to the jury members who spent their time and effort to evaluate my thesis, Mr. Bernard MASCHKE, Mr. Damien FAILLE, Ms. Françoise COUENNE and Ms. Manuela SECHILARIU. I am grateful to the reviewers, Ms. Françoise COUENNE and Ms. Manuela SECHILARIU, for accepting to review my thesis, and for their insightful remarks which helped in improving the quality of the manuscript.
During these last years, I also had the chance to discuss different research topics with many people. I thank Ms. Trang VU, former PhD student at LCIS, for the fruitful discussions on the modelling method. I am thankful to Mr. Cédric CHOMEL, head of the electric R&D project at SODIMAS, France, for his scientific collaboration. I also thank Mr. Thang PHAM, former researcher at LCIS, for his supervision during my first months in the laboratory. I thank Mr. Florin STOICAN, associate professor at UPB, Romania, and visiting researcher at LCIS, for his kind discussions on various theoretical problems. I also thank Ms. Chloé DESDOUITS, PhD student at Schneider Electric, for helping me with some practical information.
Furthermore, I am grateful to the LCIS assistant team, who helped me easily integrate into the research life of the laboratory. Firstly, Ms. Jennyfer DUBERVILLE and Ms. Carole SEYVET, the LCIS secretaries, were always available to handle the administrative issues, which gave me more time to focus on the research work. I also take this opportunity to thank everyone in the computing service, Mr. Cedric CARLOTTI and Mr. Karim Oumahma, for their important technical support. I am thankful to Mr. André LAGREZE, associate professor at LCIS, for organizing sports activities which got me out of the office and allowed me to refresh.
A huge thanks to my international friends, Silviu, Lai, Huy, Thinh, Mehdi, Antoine, Ayoub, Khadija, Thanos, Igyso, Youness, Guillaume, with whom I kept up with the news and shared my research life. I also acknowledge the presence and help of the Vietnamese families of Mr. Hieu, Mrs. Le, Mrs. Trang, Mrs. Tien, as well as the Vietnamese students Tin, Phuoc, Thong, Anh, Hung, Lap, Hai, Loc, Linh, Vuong. I will always remember our holiday celebrations, the birthday parties and the welcomes for new members.
Finally, I want to dedicate my biggest thanks to my family, my father, my mother and my brother, who always stood by my side and made me stronger whenever I felt weak. I thank them for always believing in me and in all my decisions.
Notations
DC microgrids
The capacity of a process to provide useful power (defined as the variation of the energy characterizing that process) has been one of the transforming elements of human society. Energy exists in a variety of forms, such as electrical, mechanical, chemical, thermal, or nuclear, and can be transformed from one form to another [EIA, 2017]. Energy sources are divided into two groups: renewable energy (e.g., solar energy, geothermal energy, wind energy, biomass) and nonrenewable energy (e.g., petroleum products, hydrocarbon gas liquids, natural gas, coal, nuclear energy). They are called the primary energy sources. However, to transport energy from one place to another we need energy carriers, also called secondary energy sources, e.g., electricity and hydrogen. In this thesis, we only discuss the electricity carrier. The network of transmission lines, substations and transformers which delivers electricity from the energy sources to the energy consumers is called the electrical grid [Smartgrid.gov, 2017]. However, the conventional electrical grid is facing many challenges, which can be outlined as follows.
• The increase in power demand causes network congestion when the available power from the energy sources is limited. This frequently leads to a "blackout" which spreads rapidly due to the lack of communication between the grid and the control centers.
• Without information about the available energy, customers cannot make optimal decisions to reduce their electricity consumption during the expensive peak period.
• The conventional grid is not flexible enough to support the power fluctuations caused by renewable energies.
• There are many regions where energy consumers cannot easily reach the global electrical grid, e.g., in forests or deserts.
The last challenge motivates the use of a local electrical grid, called a microgrid, which can work without a connection to the global electrical grid. What is a microgrid? The U.S. Department of Energy calls it "a group of interconnected customer loads and Distributed Energy Resources (DER) within clearly defined electrical boundaries that acts as a single controllable entity that can connect and disconnect from the grid (known as "islanding")" [Shireman, 2013] (see also Fig. 1.1.1). DERs are small power sources that can be aggregated to provide the power necessary to meet a regular demand [Haas, 2017]. Thus, the DER notion covers both the distributed energy storage system and the distributed energy generation system, i.e., the renewable energy source. The distributed energy generation systems are integrated into the local system to reduce the environmental impact of fossil fuel resources. However, the electricity price of the external grid varies during the day; it may be expensive when the energy demand is high. Moreover, the power supplied by the distributed energy generation system is unstable. Consequently, the distributed energy storage system is used to store energy when it is available and cheap, and to release it otherwise. In microgrids, all the components are connected to a common bus (transmission lines) through converters. There are two types of microgrids: AC (Alternating Current) and DC (Direct Current). A microgrid is called AC or DC if its components are connected to AC or DC transmission lines, also called the AC or DC bus. The microgrid topology, which characterizes the power interconnections within the microgrid, is defined by the topology of the transmission lines. There are three types of microgrid topologies: radial, ring-main and meshed [START_REF] Bucher | Multiterminal HVDC networkswhat is the preferred topology[END_REF], each with its advantages and disadvantages. Note that the network design is usually chosen according to planning criteria, e.g., cost, reliability, contingency and the like.
Figure 1.1.1: A typical microgrid system [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF].
Microgrids are an important innovation for society, for the following reasons.
• Through microgrids, people in isolated regions can access electricity.
• The microgrid motivates the integration of renewable sources into the energy system. This reduces the harmful impact of fossil fuels on our environment.
• Due to the declining cost of renewable sources and the rising cost of fossil fuels, the incorporation of renewable energy into the local energy system (microgrid) will create attractive value for small and medium enterprises.
• Microgrid controllers regulate and optimize operations of various DERs. This makes the DERs more manageable and thus, simplifies the global energy management.
• Through the islanding mode, the microgrid improves the energy reliability of the electrical grid for essential emergency response facilities (e.g., police stations, hospitals, military operations).
Since the microgrid is a complex system, there are many problems to be studied such as:
1. What is the suitable control architecture to deal with the computation and communication limits?
2. How are the microgrid components modelled such that necessary properties are taken into account, e.g., time scale, power transfer?
3. How is the microgrid controlled such that the energy dissipation or the energy cost is minimized?
4. How can we integrate various components with different physical characteristics to the microgrid?
5. What are the suitable topologies?
6. How can we integrate control algorithms to the physical microgrid?
7. What is the control algorithm which guarantees the power demand satisfaction when faults occur?
To answer some of these questions, a literature review on the existing modelling approaches, control methods and architectures is presented in the following.
DC microgrids from a control theoretic perspective
The control algorithm plays an important role in the implementation of the microgrid. Embedding controllers in microgrids makes it possible to manage loads and DERs in order to avoid blackouts, optimize the operation of microgrid components and deal with the power fluctuations of the renewable source. The implementation of microgrids in modern life is an active subject in the industrial community, through various successful projects. A student team at Eindhoven University of Technology invented the first solar-powered family car [START_REF] Tu/E | Student team unveils world's first solar-powered family car[END_REF]. The Tesla enterprise started to build "smart" roofs for houses which have PV panels and reduce electricity spending to zero [Fehrenbacher, 2017]. Moreover, the Schindler enterprise introduced the solar elevator in 2013, which can serve passengers during a power outage [START_REF] Zemanta | Solar powered elevators on their way to the U.S[END_REF]. The energy efficiency of grid-connected elevator systems is also considered in the Arrowhead project [Arrowhead, 2017]. Currently, a solar-powered plane which aims at flying around the world without fuel is studied within the Solar Impulse project [Solar.Impulse, 2017].
The study of complex dynamical systems is a fundamental control issue. Microgrids are complex energy systems since they include many subsystems of different natures (e.g., mechanical, electrical, electronic, magnetic, thermodynamic, chemical), with different characteristics and time scales. The control decisions are expected to be derived and implemented in real-time. Feedback started to be used extensively to cope with uncertainties of the system and its environment.
The issues faced when dealing with such complex systems include:
• Modelling methods: they should explicitly describe the useful properties of the microgrid, e.g., suitable time scale, energy conservation [START_REF] Pham | Port-Hamiltonian model and load balancing for DC-microgrid lift systems[END_REF][START_REF] Schiffer | A survey on modeling of microgrids: from fundamental physics to phasors and voltage sources[END_REF].
• Reference profile generation: gives indications for the DERs to track while taking into account future predictions [Pham et al., 2015a, Pham et al., 2017].
• Efficient energy management: it optimizes some economy or technology criteria, e.g., electricity cost, computation time. Note that there are mathematically two types of criteria (objective functions): finite-dimensional cost [START_REF] Larsen | Distributed control of the power supply-demand balance[END_REF],Stegink et al., 2017] and continuous-time cost [START_REF] Parisio | Stochastic model predictive control for economic/environmental operation management of microgrids: an experimental case study[END_REF], Pham et al., 2017].
• Control architecture: it handles the structure of the control law and the communication topology for the microgrid.
• Constraint handling: it aims at formulating the constraints in the control design.
• Stability: this implies the reference tracking problems within the microgrid components and the power balance among them [START_REF] Alamir | Constrained control framework for a stand-alone hybrid[END_REF], Zonetti et al., 2015, Schiffer et al., 2016a, de Persis et al., 2016, de Persis and Monshizadeh, 2017]. The power balance in the DC microgrid corresponds to the DC bus voltage control in the fast time scale where the DC bus dynamics are considered.
• Robustness: this guarantees the planned microgrid operation despite disturbances.
• Fault tolerant control: aims at guaranteeing the load power satisfaction under unexpected events, e.g., when some of the generators fails to provide power to the other microgrid components [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF].
The above enumerated objectives require different modelling approaches (or models), control design methods and control architectures.
Modelling
As previously mentioned the microgrids are complex energy systems containing heterogeneous components distributed in space and time. This makes the modelling of the microgrid system a complicated task. There are different modelling methods employed in the literature, of course depending on the control objectives.
Fuzzy modelling is a system representation based on fuzzy sets [START_REF] Takagi | Fuzzy identification of systems and its applications to modeling and control[END_REF]. The rule consequents are linear models which describe the system around different operating points. This method is used to forecast the renewable and load powers in microgrids while taking into account the power uncertainty [START_REF] Sáez | Fuzzy prediction interval models for forecasting renewable resources and loads in microgrids[END_REF]. The obtained model does not explicitly exhibit the underlying power-preserving structure of the microgrid.
Agent-based modelling can be defined as a three-tuple comprising of a set of agents (homogeneous or heterogeneous), an environment and the ability to negotiate and interact in a cooperative manner [Wooldridge, 2002]. An agent can be a physical entity, e.g., the distributed energy storage unit [START_REF] Lagorse | A multi-agent system for energy management of distributed power sources[END_REF], or a virtual entity, e.g., a piece of software which provides the electricity price or stores data [START_REF] Dimeas | Operation of a multiagent system for microgrid control[END_REF]. It has a partial representation of the environment, e.g., in the power system, an agent may only know the voltage of its own bus. This characteristic allows the agent-based control of complex system with a little data exchange and computation demands. An agent communicates with other agents and autonomously makes decisions. Agent-based modelling for the microgrid are studied in numerous works such as [START_REF] Dimeas | Operation of a multiagent system for microgrid control[END_REF], Krause et al., 2006, Weidlich and Veit, 2008, Jimeno et al., 2010, Lagorse et al., 2010]. However, the microgrid model obtained by this approach does not explicitly take into account the dynamics of the individual components. Thus, the system properties are not fully considered.
Differential equations-based modelling describes the system through a set of differential equations [Khalil, 2002]. It allows an explicit representation of the system dynamics which are derived from the physical constitutive equations and balance equations such as Newton's law, Ohm's law, Kirchhoff's law, Lenz's law etc. The obtained model captures the system natural property [Paire, 2010, Alamir et al., 2014, Lefort et al., 2013, Prodan et al., 2015, Parisio et al., 2016, dos Santos et al., 2016]. However, this system description does not explicitly exhibit the underlying power-preserving structure and the energy conservation.
Port-Hamiltonian modelling describes a system as a combination of power-preserving interconnections, energy storages, resistive elements and the external environment [Duindam et al., 2009, van der Schaft andJeltsema, 2014]. The interconnection is expressed by algebraic relations of conjugate variable pairs (whose product is a power), e.g., the Kirchhoff's laws for the currents and the voltages in an electrical circuit. The energy storages and the resistive elements represent the subsystems where the energy is stored and dissipated, respectively. The external environment represents control actions, other systems or energy sources. The system model may be graphically described by a Bond Graph. From this graph, the system dynamics may be automatically derived as a set of algebraic and differential equations. Usually, they reduce to the differential equations written in a specific form. In [START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF], Benedito et al., 2017], this approach is used to model the transmission lines and/or the system of converters in DC microgrids. Similar approach have been applied for the AC microgrid in [START_REF] Stegink | A unifying energy-based approach to stability of power grids with market dynamics[END_REF][START_REF] Schiffer | A survey on modeling of microgrids: from fundamental physics to phasors and voltage sources[END_REF]. However, none of previous works consider the slow time scale models for the distributed energy resources (DERs). This will be one of the contributions of the present thesis.
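For reference, a sketch of the input-state-output form this framework typically leads to (our summary of the standard formulation, not a model taken from the cited works): the state $x$ stores energy through a Hamiltonian $H$, a skew-symmetric matrix $J$ encodes the power-preserving interconnection, a positive semi-definite $R$ the dissipation, and the port variables $(u, y)$ the exchange with the environment,
$$\dot{x} = \big[J(x) - R(x)\big]\, \frac{\partial H}{\partial x}(x) + g(x)\, u, \qquad y = g(x)^{\top} \frac{\partial H}{\partial x}(x),$$
which yields the energy balance $\dot{H} = -\frac{\partial H}{\partial x}^{\top} R(x)\, \frac{\partial H}{\partial x} + y^{\top} u \le y^{\top} u$.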
Control approach
The main objective in the load balancing for microgrid systems is to generate the real-time power references which need to be tracked by the local component controls in the faster time scale. Note that microgrid components are strongly nonlinear and must satisfy various constraints, e.g., the battery charge limits, the maximal power supplied by the external grid. To deal with the presented objective and the nonlinearity, different control approaches used in the literature are presented in the following.
Passivity-based methods exploit a physical system property which is the energy balance [START_REF] Ortega | Putting energy back in control[END_REF]. By choosing the suitable desired stored and dissipated energies of the closed-loop system we derive the control law through the matching equation. The passivity-based control is efficient to deal with the passive nonlinear system since it makes use of the stored energy in the system as the Lyapunov function. This method is used for the stabilization of the transmission lines and of the system of converters in AC microgrid [Schiffer et al., 2016a, de Persis et al., 2016, de Persis and Monshizadeh, 2017] and in High Voltage Direct Current grid [START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF]. However, this method is not suitable for the non-passive and/or constrained systems.
Gradient-based methods formulate the constrained optimization problem of the controller as a virtual dynamical system such that the steady state of the virtual dynamics corresponds to the solution of the optimal control problem [START_REF] Feijer | Stability of primaldual gradient dynamics and applications to network optimization[END_REF]]. This virtual system is derived using the Karush-Kuhn-Tucker conditions (see also Section 5 Chapter 5 in [Boyd and Vandenberghe, 2004]). The presented method allows to take into account constraints and optimization cost in the control design. It is used to optimize the power distribution within the microgrid [START_REF] Stegink | A unifying energy-based approach to stability of power grids with market dynamics[END_REF], Li et al., 2016, Benedito et al., 2017]. However, the previous works do not take into account the electrical storage system dynamics, the renewable and load power prediction which are essential for the microgrid energy management.
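As a generic illustration of this idea (ours, not the exact dynamics used in the cited works), for a convex problem $\min_x f(x)$ subject to $g(x) \le 0$ with Lagrangian $L(x, \lambda) = f(x) + \lambda^{\top} g(x)$, the primal-dual (saddle-point) gradient dynamics read
$$\dot{x} = -\nabla_x L(x, \lambda) = -\nabla f(x) - \nabla g(x)^{\top} \lambda, \qquad \dot{\lambda} = \big[\,g(x)\,\big]^{+}_{\lambda},$$
where $[\cdot]^{+}_{\lambda}$ denotes a projection keeping the multipliers nonnegative; under suitable convexity assumptions, the equilibria of this virtual system are exactly the KKT points of the original problem.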
Constrained optimization-based control formulates the control law as the solution of an optimization problem. A popular method of this approach is the dynamic programming based on the Bellman's principle (principle of optimality), i.e., an optimal policy has the property that what ever the initial state and initial decisions are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision [Bellman, 1957, Liberzon, 2011]. This control method is used to find the DERs power profiles in the microgrid [START_REF] Costa | A stochastic dynamic programming model for optimal use of local energy resources in a market environment[END_REF], Handschin et al., 2006, Bilbao et al., 2014]. However, since the control law at a given instant depends on laws at the previous instants, previous control laws must be kept during the computation process. This makes the computation complex. The reference power profiles for the DER can be also found off-line (before the system operation) [START_REF] Lifshitz | Optimal control of a capacitor-type energy storage system[END_REF], Pham et al., 2015a, Touretzky and Baldea, 2016]. However, this methods is not robust in real-time control.
Another type of the optimization-based control is the Model Predictive Control (MPC) [START_REF] Rawlings | Model Predictive Control: Theory and Design[END_REF]. It finds the optimal open-loop control sequence at each time instant and applies the first control action as the system input. For the reference tracking objective in the tracking MPC, optimization costs usually penalize the discrepancies between the actual and reference signals. If it is not the case, i.e. the cost function penalizes an economic cost such as the dissipated energy or the electricity cost, we call it the economic MPC [START_REF] Ellis | Economic Model Predictive Control[END_REF]. This MPC type is used to generate the reference profiles for controllers in faster time scales. In [START_REF] Prodan | A model predictive control framework for reliable microgrid energy management[END_REF], Desdouits et al., 2015, Parisio et al., 2016, dos Santos et al., 2016], the economic MPC is used to generate on-line the power reference profiles for the DERs. This will be the main focus of the present thesis.
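A minimal receding-horizon loop illustrating the MPC principle (a generic sketch of ours, with a hypothetical scalar linear plant and quadratic tracking cost, not one of the microgrid controllers discussed here):
```python
import numpy as np
from scipy.optimize import minimize

a, b, N = 0.95, 0.1, 10                     # hypothetical scalar plant x+ = a x + b u

def predict(x0, u_seq):
    xs = [x0]
    for u in u_seq:
        xs.append(a * xs[-1] + b * u)       # open-loop prediction over the horizon
    return np.array(xs)

def cost(u_seq, x0, x_ref=1.0):
    xs = predict(x0, u_seq)
    return np.sum((xs[1:] - x_ref) ** 2) + 1e-2 * np.sum(u_seq ** 2)

def mpc_step(x0):
    res = minimize(lambda u: cost(u, x0), np.zeros(N),
                   bounds=[(-1.0, 1.0)] * N)  # input constraints over the horizon
    return res.x[0]                           # apply only the first control action

x = 0.0
for k in range(30):                           # closed loop: re-optimize at every step
    u = mpc_step(x)
    x = a * x + b * u
print(x)
```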
Robust optimization formulates an uncertainty-affected optimization problem as a deterministic program whose solutions are feasible for all allowable realization of the data [START_REF] Bertsimas | Robust discrete optimization and network flows[END_REF]. This method is used to generate the reference power profiles for the microgrid components with the model parameter uncertainties in [START_REF] Battistelli | Optimal energy management of small electric energy systems including v2g facilities and renewable energy sources[END_REF]. This method can be used with the MPC to improve it robustness. However, since we have not systematically considered uncertainties for microgrids in this thesis yet, using the robust optimization method may make control algorithms more complex.
To improve the efficiency of the microgrid control, different control strategies are studied in the literature which will be discussed in the next subsection.
Control architecture
From the presented modelling and control problems of the microgrid, we can distinguish several control strategies (centralized/distributed/decentralized/hierarchical/multi-layer) motivated by the two following reasons [Scattolini, 2009]:
- numerical complexity of the multi-objective optimization problem,
- communication limitations/geographical distribution among the heterogeneous components of the microgrid system.
In the centralized architecture, a single controller collects all system outputs and gives policies for all system control variables [START_REF] Becherif | Modeling and passivity-based control of hybrid sources: Fuel cell and supercapacitors[END_REF], Alamir et al., 2014]. However, it is usually difficult to control a complex system, such as the microgrid, using a centralized controller due to limited computation capacity and/or the restricted communication bandwidth. Thus, the decentralized architecture is employed where the control and controlled variables may be partitioned into disjoint sets from which local regulators are designed independently [START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF],Schiffer et al., 2016a,de Persis et al., 2016]. In the distributed architecture, some information is exchanged among the local regulators [START_REF] Larsen | Distributed control of the power supply-demand balance[END_REF],Qi et al., 2013,Zhao and Dörfler, 2015, Li et al., 2016].
Since the microgrid components are distributed in space (geographical distribution) and in time (multi-time scale dynamics) and since different control objectives are considered, for the different time scales we consider a class of the distributed architecture, the hierarchical (or multi-layer) architecture [Scattolini, 2009, Christofides et al., 2013] (see also Fig. 1.2.1). At the higher level, the regulator output variables are used as the references for the lower level control or directly applied to the system (i.e., the system input). At the lower level, the regulators aim at tracking the given references from the higher level regulators while implementing their own objectives. Furthermore, there are different ways to determine the system models in the higher control level, e.g., the steady state of the global dynamics [START_REF] Backx | Integration of model predictive control and optimization of processes: Enabling technology for market driven process operation[END_REF], the slower dynamics with the steady state of the fast dynamics [START_REF] Picasso | An MPC approach to the design of two-layer hierarchical control systems[END_REF], Chen et al., 2012] or the entire multi-time scale dynamics [START_REF] Ellis | Integrating dynamic economic optimization and model predictive control for optimal operation of nonlinear process systems[END_REF].
The multi-layer control is applied to microgrid systems in [START_REF] Paire | A real-time sharing reference voltage for hybrid generation power system[END_REF], Lefort et al., 2013, Sechilariu et al., 2014, Touretzky and Baldea, 2016, Cominesi et al., 2017]. In [START_REF] Paire | A real-time sharing reference voltage for hybrid generation power system[END_REF], the high regulator generates the reference currents for the distributed energy storage system and for the external grid by using priority rules. In [Sechilariu et al., 2014, Touretzky andBaldea, 2016], the high level regulators generate the off-line reference power for the lower level regulators as the solution of an optimization problem which takes into account the long time scale model of the microgrid components. In [START_REF] Lefort | Hierarchical control method applied to energy management of a residential house[END_REF], Cominesi et al., 2017], optimization problems in the high level regulators are solved on-line and generate the on-line power reference for the lower regulators. In the presented works, the low level regulators aim at tracking the given references from the high level controls and balance the power in the microgrid. Usually, the microgrid models used in the high and low level controls are in different time scales, i.e., slow and fast. In many cases, a good reference tracking does not respect the power balance due to the differences of the predicted power profiles in these time scales, e.g., the predicted power profiles in the slow time scale are the average approximations of the power profiles in the fast time scale. Thus, the low level regulators are separated into two different control levels [START_REF] Paire | A real-time sharing reference voltage for hybrid generation power system[END_REF], Sechilariu et al., 2014], or the tracking objectives are relaxed [START_REF] Lefort | Hierarchical control method applied to energy management of a residential house[END_REF]. However, in the presented works, the power-preserving interconnection between the slow and fast dynamics is not considered.
Thesis orientation
The literature review presented above gives evidence that the microgrid control domain is vast and disparate. In this thesis we limit ourselves to several modelling and control objectives, related and originating from a particular architecture (a DC microgrid system). We will concentrate on a particular system made of an elevator, and its auxiliary components (storage nodes, power bus, capacitive and resistive elements, etc) in the context of Arrowhead project [Arrowhead, 2017].
The main objective of the present work is to formulate a multi-layer optimization-based control for optimizing the energy distribution within DC microgrids based on their PH models. The novelty resides in the PH model, whose properties will be considered in different aspects of the control design, e.g., time discretization and model simplification.
Two PH formulations are considered for microgrids: hybrid input-output representation [START_REF] Pham | Port-Hamiltonian model and load balancing for DC-microgrid lift systems[END_REF] and PH system on graphs [START_REF] Pham | Power balancing in a DC microgrid elevator system through constrained optimization[END_REF]. The first formulation is a compact form for multiphysic microgrids while the second formulation is not compact but captures the topology of the electrical network.
Then, two discretization methods of PH systems are investigated. The first method uses a high order B-spline-based parameterization of the system flat output [Pham et al., 2015a]. Thus, the approximated trajectory respects the continuous-time system dynamics, but is not easy to be found. The second method uses the parameterization of all the system variables based on the first-order B-splines [Pham et al., 2015a, Pham et al., 2017]. The obtained discrete trajectory does not respect the continuous-time model but still preserves the energy balance and is easy to be found.
Using the obtained models, two multi-layer control schemes are investigated. In both of them, the higher level regulators generate the optimal reference profiles for the lower level regulators to track. Their differences reside in the model discretization methods and in the relations between the models used in two layers. In the first control design, the models used in two layers are the same [Pham et al., 2015a]. In the second control design, the models in the higher and lower layers correspond to the slow and fast parts of the system.
Contribution of the thesis
This thesis extends the microgrid model in [Paire, 2010] and considers a PH representation following the ideas in [van der Schaft and Jeltsema, 2014]. We concentrate first on the elevator system and dissipative energy minimization objective. Novel combination between differential flatness with B-splines parameterization and MPC are used to design a control scheme which includes the off-line reference profile generation and the on-line tracking control. Next, the global load balancing problem is solved using the economic MPC with a PH model of the microgrid in slow time scale.
More precisely, the main contributions of the thesis are summarized in the following:
• The model of the multi-source elevator dynamical system in [Paire, 2010] is developed and rewritten in the PH formulation. Especially, the well-known Permanent Magnet Synchronous Machine model in direct-quadrature coordinate is derived using the PH framework (coordinate transformation, model reduction based on PH formulation). The microgrid PH formulation is given in an appropriate form which can be easily generalized for general DC microgrids including converters and corresponding energy devices (e.g., energy storage device, distributed energy resource system, loads and the like).
• An energy-preserving time discretization method for PH system generalized from [START_REF] Stramigioli | Sampled data systems passivity and discrete Port-Hamiltonian systems[END_REF], Aoues, 2014] is proposed. The discrete system is described in the implicit form as a combination of discrete-time interconnection and discrete-time element models. We prove that the time-invariant coordinate transformation preserves the energy conservation property of the discrete-time PH system. This method is applied for the electro-mechanical elevator system and for the global DC microgrid under different discretization schemes. The schemes are validated over numerical simulations and compared with classical Euler discretization schemes. The results show that the accuracies of the first-order methods are improved by the energy-preserving method which eliminates numerical energy dissipations or sources.
• For minimizing the dissipated energy in the electro-mechanical elevator system during an elevator travel, an optimization-based control is studied. This represents the combination of an off-line reference profile generation and on-line tracking control. The reference profiles are formulated as the solution of a continuous-time optimization problem. By using the differential flatness and B-spline-based parameterization, this problem is approximated by a finite-dimensional optimization problem of the control points corresponding to the B-splines. The novelty resides in the appropriate constraints of the control points which guarantee the satisfaction of the continuous-time constraints. Extensive simulation results prove the efficiency of the studied method.
• Load balancing for the DC microgrid is investigated by using an economic MPC approach [START_REF] Ellis | Economic Model Predictive Control[END_REF] for a simplified microgrid model. The simplified model is derived by assuming that the fast dynamics of the supercapacitor, the converter and transmission lines are quickly stabilized. Then, this model is discretized by the studied energy-preserving time discretization model. By taking into account the discretized microgrid dynamics, the electro-mechanical elevator power profile, the renewable power profile and the electricity price profile, an economic MPC is formulated. The control method is validated through simulations with the numerical data given by the industrial partner SODIMAS, in France.
Organization of the manuscript
This thesis includes 6 chapters, including this introduction (see also Fig. 1.5.1).
• Chapter 2 first recalls some notions and definitions for Port-Hamiltonian (PH) systems. Next, we develop using PH formulations the dynamical models of the multi-source elevator system components and of the global system. Furthermore, the reference profiles, the system constraints and different control objectives are introduced.
• Chapter 3 presents the energy-preserving time discretization method for the PH system and its properties. Then, the presented method is used for discretizing the dynamics of the electro-mechanical elevator system and of the global microgrid system. The proposed discretization methods are validated through some numerical simulations.
• Chapter 4 presents an optimization-based control approach for minimizing the dissipated energy within the electro-mechanical elevator. Firstly, we describe some necessary tools like differential flatness, Bspline-based parameterization and MPC. Using their properties, we formulate the optimization problems for the off-line reference profile generation and the on-line reference tracking. The control method for EME system is validated through some numerical simulations with nominal/perturbation-affected cases and with the open-loop/closed-loop systems.
• Chapter 5 studies the load balancing for the DC microgrid system using economic MPC. Firstly, the simplified microgrid model is represented using the Port-Hamiltonian formulation on graphs. Then, the economic MPC is formulated using the presented microgrid model. Some simulations are implemented to validate the proposed control method.
• Chapter 6 completes the thesis with conclusions and discussions on future directions.
The presented organization of the thesis is graphically illustrated in Fig. 1.5.1. Each rectangle box represents a chapter including various processes denoted by ellipses. Each arrow describes the relation between two processes (or chapters), i.e., the process (or chapter) at the arrow end uses results (e.g., variables, models, methods, ideas) of the process (or chapter) at the arrow origin. Chapter 2 DC microgrid modelling
Introduction
Because of the diversity of microgrid dynamics, there are many control objectives that require different models. In this work, we concentrate on two main objectives: energy cost optimization and transmission lines stability. The first one aims to generate reference trajectories or set points. The second one aims to stabilize the dynamics to the given trajectories or set points. Consequently, the previously mentioned objectives correspond to different time scales 1 of the microgrid dynamics. The energy management problem (i.e., deals with the energy cost optimization while satisfying the load requirements) relates to the slow time scale dynamics. This includes the battery dynamics, the renewable power profile, electricity price profile and statistic rules of load during a day. The transmission lines stability problem relates to the fast time scale dynamics of the transmission lines and converter.
There are various works for microgrids which consider the above objectives within different time scales. For example, in [START_REF] Zhao | Distributed control and optimization in DC microgrids[END_REF], the authors propose a distributed control approach for regulating the voltages of the DC transmission lines. The studied system is a resistor network which connects voltage sources and passive current loads. Besides, the controller guarantees the optimal power sharing within the network. The authors of [START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF] study the combination of DC transmission lines, AC/DC converters and three-phase electrical generators. The energy sources and loads are modelled as voltage sources. Their work presents to a decentralized controller for stabilizing the transmission line voltages. Similar work for AC microgrids to control the network frequency is investigated in [START_REF] Schiffer | Conditions for stability of droop-controlled inverter-based microgrids[END_REF]. Since the previous works consider the fast time scale dynamics of microgrids, the studied models for energy sources, loads and energy storage devices are simple. For longer time scale dynamics studies, [Lagorse et al., 2010,Xu and[START_REF] Xu | [END_REF] additionally consider the limited capacities of energy storage devices and maximal available power of renewable source. However, these informations can only be taken into account instantaneously since the authors do not consider the electricity storage models for the prediction. In [START_REF] Alamir | Constrained control framework for a stand-alone hybrid[END_REF], a fast electrical storage dynamics (i.e., a supercapacitor) is considered within the DC bus controller. However, without the slow time scale dynamics of energy storage and renewable power generator, the presented model can not be used for the energy cost optimization.
Furthermore, many works study the energy cost optimization [START_REF] Lefort | Hierarchical control method applied to energy management of a residential house[END_REF], Prodan et al., 2015, Desdouits et al., 2015, Lifshitz and Weiss, 2015, Parisio et al., 2016, dos Santos et al., 2016, Touretzky and Baldea, 2016]. The authors in [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF],Desdouits et al., 2015,Lifshitz and Weiss, 2015] use simple models for the battery and/or transmission lines which do not entirely capture the real dynamics. Some works use a first-order model for the electrical storage unit [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF],Desdouits et al., 2015,Parisio et al., 2016]. In fact, the electrical storage unit (e.g., a battery) may include many sub storage parts which are connected by resistive elements. Only some of these parts can directly supply the energy. For the slow time scale, the internal charge distribution between these parts can not be ignored. Thus, a first-order model for the electrical storage unit may give incorrect informations about the real available charge. Also, in these works, the transmission lines network dynamics are simply described by a power balance relation. This is not realistic for DC microgrids where each component is placed far from the others. Hence, the resistance of the transmission lines can not be neglected.
In general, the microgrid dynamics has at least two energetic properties which may be useful for studying the energy cost optimization: the energy balance and the underlying power-preserving structure. [Lefort 1 There are three time scales corresponding to the hour, the minute and the second. et al., 2013,Touretzky andBaldea, 2016] do not take explicitly into account these properties when developing the model of the microgrid system. Thus, they may be lost while studying the energy cost optimization through the model discretization and reduction:
• Generally, the energy cost optimization is a continuous-time optimization problem whose solution is the time profile of the control variables (see Appendix C.2). Usually, it is infeasible to find its exact solution. Therefore, we may discretize the optimization problem to obtain a finite-dimensional optimization problem which is easier to solve (details on finite-dimensional optimization problems can be found in Appendix C.1). This discretization requires a discrete-time model of the microgrid dynamics.
• Generally, the microgrid dynamics contain different time scales (in this work, three time scales corresponding to the hour, the minute and the second). To reduce the computational complexity, the energy cost optimization usually uses the slow dynamics obtained by reducing the fast dynamics of the global model with a singular perturbation approach [START_REF] Kokotović | Singular perturbations and order reduction in control theory-an overview[END_REF].
Since the DC microgrid considered in this work is a multi-source elevator system (the scheme of this microgrid is given in Fig. 2.1.1), it is worth mentioning that there exists an additional control objective which requires a middle time scale dynamics. This time scale, which corresponds to the elevator cabin travel, is shorter than the time scale of the battery operation and longer than the time scale of the transmission lines operation.
The above mentioned issues motivate a multi-time scale model for the DC microgrid which explicitly describes the power exchange and the system energetic structure. A well-known candidate method for this objective is the Port-Hamiltonian formulation [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]. Therefore, in this chapter, we first introduce some basic definitions and notions of Port-Hamiltonian formulations and then concentrate on the modeling of the DC microgrid multi-source elevator system through a Port-Hamiltonian approach.
This chapter contains two main contributions as follows:
• The Port-Hamiltonian models are formulated for the components of the DC microgrid, for the electro-mechanical elevator system and for the global system. The microgrid components include converters (AC/DC and DC/DC), electricity storage devices (battery and supercapacitor), an electrical machine and a mechanical elevator. After deriving the dynamics, we consider their steady states, which relate to the model order reduction in the slow time scale dynamics of the global system. The electro-mechanical elevator system includes the AC/DC converter, the Permanent Magnet Synchronous Machine (PMSM) and the mechanical elevator. Firstly, its Port-Hamiltonian model is expressed taking into account the three magnetic fluxes of the stator coils. Next, we use the Park transformation [START_REF] Nicklasson | Passivity-Based Control of a class of Blondel-Park transformable electric machines[END_REF] and a constraint elimination process to derive the reduced-order dynamics of the electro-mechanical elevator. In this model, the three original stator fluxes are replaced by two fluxes, called the direct and quadrature (d-q) fluxes, respectively. Furthermore, we prove that the coordinate transformation and the constraint elimination for the electro-mechanical elevator preserve the Port-Hamiltonian form. This leads to the reduced Port-Hamiltonian model of the PMSM which is widely used in the literature [START_REF] Petrović | Interconnection and damping assignment approach to control of pm synchronous motors[END_REF]. By considering the Park transformation and the constraint elimination for the electro-mechanical elevator in the framework of the Port-Hamiltonian formulation, we explicitly describe their underlying energetic meaning. Finally, the global DC microgrid dynamics are derived by connecting the components in parallel to the DC transmission lines. The obtained dynamics are expressed as an input-state-output Port-Hamiltonian dynamical system together with a power constraint on the external port variables. Moreover, they include dynamics at different time scales which will be used for different control objectives.
• Based on the introduced model, constraints and cost functions are formulated for further solving the load balancing problem. The studied constraints include the kinematic limitations of the elevator, the ranges of the charge levels of the electricity storage units, the limited power of the battery and the limit values of the control signals. They are determined by the passenger requests and by the manufacturer. These constraints are reformulated using the energetic state variables. Then, three control objectives corresponding to the three time scales are presented: regulate the voltage of the transmission lines, regulate the elevator cabin position and minimize the electricity cost over a day. The three corresponding controllers are arranged hierarchically from the lowest to the highest level. The stability of a lower-level closed-loop dynamics is described by an additional algebraic constraint in the higher-level control problem. Consequently, from the presented objectives, we can derive different control problems for the microgrid in a hierarchical way, requiring coherent combinations of profile generation and profile tracking.
The organization of this chapter is as follows. Section 2.2 recalls some notions on Bond Graph representations and basic definitions of Port-Hamiltonian systems. Section 2.3 presents the Port-Hamiltonian formulation of the energy sources, transmission lines and electricity storage units. Section 2.4 formulates the dynamics of the electro-mechanical elevator in a Port-Hamiltonian formalism. In Section 2.5, the global model of the DC microgrid is derived using the fast and slow time scale separation. Based on the presented model, Section 2.6 introduces the constraints and the cost functions for the microgrid control and energy management. Finally, the conclusions of this chapter are presented in Section 2.7.

From Bond Graphs to Port-Hamiltonian formulations
Bond Graphs
The Bond Graph is a graphical, energy-based representation of physical dynamical systems [START_REF] Sueur | Bond-graph approach for structural analysis of mimo linear systems[END_REF], Karnopp and Rosenberg, 1975, Duindam et al., 2009]. Some advantages of the Bond Graph approach are:
• it focuses on energy as the fundamental concept to appropriately model physical systems;
• It is a multi-domain representation. The same concepts and mathematical representations are used for various physical elements as for example, mechanical, electrical, hydraulic, pneumatic, thermodynamical [START_REF] Couenne | Structured modeling for processes: A thermodynamical network theory[END_REF], chemical [Couenne et al., 2008a] and the like;
• It is a multi-scale representation. The physical elements can be decomposed hierarchically in smaller interconnected components.
Each edge is called a bond and represents a bi-directional flow of power. Bonds are characterized by a pair of conjugated variables named the effort, e, and the flow, f, whose product is the power. The bond orientation is represented by a stroke that forms a half-arrow with the line, indicating the positive power direction. Besides, the Bond Graph representation uses the notion of causality, indicating which side of a bond determines the instantaneous effort and which determines the instantaneous flow. It is a symmetric relationship: when one side causes the effort, the other side causes the flow. When formulating the dynamical equations which describe the system, causality defines, for each modelling element, which variable is dependent and which is independent. A causal stroke is added to one end of the power bond to indicate that the opposite end defines the effort.
The labelled nodes are elements that can be distinguished on the basis of their properties with respect to energy. They connect to the bonds through their ports. There are nine basic nodes, categorized into five groups of energetic behaviours:
• Storage nodes represent one-port elements describing the stored energy and they are denoted by C or I, e.g., the capacitance, the inductance.
• Supply nodes represent effort and flow sources having one port and they are denoted by Se and Sf respectively, e.g., the voltage source, the velocity source.
• Reversible transformers represent two-port elements which modify the effort/flow ratio while preserving the power flow; they are denoted by TF or GY, for transformers or gyrators respectively, e.g., the ideal electric transformer, the ideal electric motor, and the like.
• The junction nodes represent multi-port elements that describe topological constraints such as parallel and series electrical circuits. Let (f_1, e_1), ..., (f_n, e_n) ∈ R^2 be the flow and effort pairs at the n ports of the 0-junction. The constitutive equations of such a 0-junction are:
$$e_1 = e_2 = \dots = e_n, \qquad f_1 + f_2 + \dots + f_n = 0,$$
while for the 1-junction, the constitutive equations are:
$$f_1 = f_2 = \dots = f_n, \qquad e_1 + e_2 + \dots + e_n = 0.$$
• The irreversible transformation nodes which represent energy-dissipating elements are denoted by R, e.g., ideal electric resistor, ideal friction, etc.
Note that some of these elements can be modified by an external signal without changing the node nature or affecting the power balance. This is called modulation, and only some of the mentioned elements can be modulated: the supply/demand nodes and the reversible and irreversible transformation nodes. Moreover, to describe a complex physical system, it is necessary to replace many nodes having similar characteristics by one node. Thus, a node in a Bond Graph can have multiple ports (multi-ports) which are connected by many bonds (multi-bonds), see Fig. 2.2.2. We present in Fig. 2.2.1 an illustrative example which describes the Bond Graph of a simple DC RC electrical circuit.
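To make the junction relations concrete, the following minimal Python sketch simulates a series RC circuit directly from the Bond Graph constitutive relations (an effort source Se, an R element and a C element around a 1-junction). The numerical values of E, R and C are illustrative assumptions, not taken from the figure.

```python
import numpy as np

# Illustrative series RC circuit seen as a Bond Graph: effort source Se,
# R element and C storage element connected by a 1-junction (common flow i,
# efforts summing to zero). Values below are assumptions for illustration.
E, R, C = 10.0, 1e3, 1e-6     # source [V], resistor [Ohm], capacitor [F]
dt, T = 1e-5, 5e-3            # time step and horizon [s]

q = 0.0                       # capacitor charge (state of the C element)
for _ in range(int(T / dt)):
    e_C = q / C               # effort of the C element (capacitor voltage)
    i = (E - e_C) / R         # 1-junction: E - e_R - e_C = 0, with e_R = R*i
    q += dt * i               # the common flow charges the C element
print(q / C)                  # capacitor voltage approaches E
```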
Port-Hamiltonian systems
This section introduces some basic definitions and notions related to PH systems [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF] which will be further used for modelling the DC microgrid elevator system.
The central elements of PH systems are Dirac structures (DS) which describe power-conserving interconnections. Considering a flow linear space F with its dual effort linear space F* = E, we define a symmetric bilinear form ⟨⟨·,·⟩⟩ on the space F × F* as:
$$\langle\langle (f_1, e_1), (f_2, e_2) \rangle\rangle = \langle e_1 | f_2 \rangle + \langle e_2 | f_1 \rangle, \tag{2.2.1}$$
with (f_1, e_1), (f_2, e_2) ∈ F × F*, where ⟨e|f⟩ denotes the duality product (power product). Next, the corresponding DS is defined as follows.
Definition 2.2.1 (Dirac structure [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]). A (constant) DS on F × F* is a subspace D ⊂ F × F* such that D = D^⊥, where ⊥ denotes the orthogonal complement with respect to the bilinear form ⟨⟨·,·⟩⟩.

In practice, many system dynamics include constraints (for instance, the three-phase synchronous machine which is considered within the DC microgrid elevator system). The elimination of these constraints results in a state-modulated DS. This motivates the following definition.
Definition 2.2.2 (Modulated DS [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]). Let X be a manifold for the energy storage, with its tangent space T_x X and co-tangent space T*_x X. Let f ∈ F and e ∈ F* be the port variables of the additional ports. A modulated DS, D(x), is point-wise specified by a constant DS:
$$D \subset T_x X \times T^*_x X \times F \times F^*, \quad x \in X.$$
Moreover, a physical system may be constructed from some physical subsystems. Thus, the combination of systems leads to the composition of DSs.
Consider a Dirac structure D A on a product space F 1 × F 2 of two linear spaces F 1 and F 2 and another Dirac structure D B on a product space F 2 × F 3 with the additional linear space F 3 . The space F 2 is the space of shared flow variables, and F * 2 the space of shared effort variables. We define the feedback interconnection by:
$$f_A = -f_B \in F_2, \qquad e_A = e_B \in F^*_2, \tag{2.2.2}$$
with (f_A, e_A) ∈ D_A and (f_B, e_B) ∈ D_B.
Definition 2.2.3 (Feedback composition of DS [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]). The feedback composition of the DS D_A and D_B, denoted by D_A || D_B, is defined as
$$D_A \| D_B = \left\{(f_1, e_1, f_3, e_3) \in F_1 \times F^*_1 \times F_3 \times F^*_3 \;\middle|\; \exists (f_2, e_2) \in F_2 \times F^*_2 \text{ s.t. } (f_1, e_1, f_2, e_2) \in D_A \text{ and } (-f_2, e_2, f_3, e_3) \in D_B\right\}. \tag{2.2.3}$$
The DS has some important properties such as:
• the power is conserved, i.e., for all (f, e) ∈ D, ⟨e|f⟩ = 0,
• the feedback composition of DSs is again a DS.
The DS admits several representations. One of them is the constrained input-output representation which is presented by the following proposition.
Proposition 2.2.4 (Constrained input-output representation of DS [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]). Every DS, D ⊂ F × F*, can be represented as
$$D = \left\{(f, e) \in F \times F^* \;\middle|\; f = De + G_D\lambda,\; G_D^T e = 0,\; \lambda \in V \right\}, \tag{2.2.4}$$
with a skew-symmetric mapping D : F → F*, a linear mapping G_D such that Im G_D = {f | (f, 0) ∈ D}, and a linear space V with the same dimension as F.
A PH system is constructed by connecting the DS with the energy storage, the energy dissipative element and the environment through corresponding ports. Therefore, the DS ports (f , e) from Definition (2.2.1) are partitioned into energy storage ports (f S , e S ), resistive ports (f R , e R ) and external ports (f E , e E ).
Definition 2.2.5 (PH system [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]). Consider a state-space X with its tangent space T_x X, co-tangent space T*_x X, and a Hamiltonian H : X → R defining the energy storage. A PH system on X is defined by a DS,
$$D \subset T_x X \times T^*_x X \times F_R \times F^*_R \times F_E \times F^*_E,$$
having an energy storage port (f_S(t), e_S(t)) ∈ T_x X × T*_x X with f_S(t) = -ẋ(t) and e_S(t) = ∇H(x), a resistive structure:
$$R_R = \left\{(f_R(t), e_R(t)) \in F_R \times F^*_R \;\middle|\; r(f_R(t), e_R(t)) = 0,\; \langle e_R(t) | f_R(t)\rangle \le 0\right\}, \tag{2.2.5}$$
and the external ports (f E (t), e E (t)) ∈ F E × F * E . Generally, the PH dynamics are described by:
(-ẋ(t), ∇H(x), f R (t), e R (t), f E (t), e E (t)) ∈ D. (2.2.6)
Physically, the DS describes the system interconnection which is usually constant. However, in practice, a system dynamics may be described by a PH formulation associated with port variables constraints. These constraints may be reduced. The obtained dynamics can be cast in the PH formulation with a state-modulated interconnection matrix, although the original DS is constant. This means that in the constrained input-output representation of a state-modulated DS (2.2.4), the structure matrices D, G D depend on the state variables.
We here present a popular class of explicit PH system which is called the input-state-output PH system with direct feed-through. This system admits the following assumptions:
• The resistive structure R R defined by (2.2.5) is given by a linear relation
r(f R , e R ) = R R f R (t) + e R (t) = 0, (2.2.7)
where R R is symmetric and positive.
• The structure matrices D, G_D in (2.2.4) have the following form:
$$G_D = 0, \qquad D(x) = \begin{bmatrix} -J(x) & -G_{SR}(x) & -G(x) \\ G_{SR}^T(x) & 0 & G_{RE}(x) \\ G^T(x) & -G_{RE}^T(x) & M(x) \end{bmatrix}, \tag{2.2.8}$$
where J(x), M(x) are skew-symmetric.
Then, the explicit formulation of a PH system is written as:
ẋ(t) = [J(x) -R(x)] ∇H(x) + [G(x) -P(x)] e E (t), f E (t) = [G(x) + P(x)] T ∇H(x) + [M(x) + S(x)] e E (t), (2.2.9)
where x(t), e E (t), f E (t) are the state, input and output vectors, respectively, J(x) describes the direct interconnection of the energy state variables, M(x) describes the direct interconnection of input variables. The resistive matrices R(x), P(x), S(x) are given by the following expressions:
R(x) = G SR (x)R R G T SR (x), P(x) = G SR (x)R R G RE (x), S(x) = G T RE (x)R R G RE (x).
(2.2.10)
Since R_R is symmetric and positive, R(x), P(x), S(x) satisfy:
$$\begin{bmatrix} R(x) & P(x) \\ P^T(x) & S(x) \end{bmatrix} \ge 0.$$
Furthermore, if the system interconnection is switched between several topologies (several different DSs), additional binary variables are considered and placed in the interconnection matrices (see Chapter 13 in [van der Schaft and Jeltsema, 2014]). In electronic circuits, these variables indicate the transistor states, i.e., 0 and 1 correspond to the closed/open states, respectively. Moreover, in the case of converters, since these states are switched repeatedly at high frequency, the binary variables are replaced by continuous averaged ones [START_REF] Escobar | A Hamiltonian viewpoint in the modeling of switching power converters[END_REF]. They are defined by the ratio between the time duration when the binary variable is 1 and the switching cycle duration. They are named duty cycles and denoted by d(t). Therefore, when this additional variable takes the decision role, the control signal is no longer only the external port variable as in the Port-Controlled Hamiltonian (PCH) system (2.2.9). Thus, we consider the following general class of PH systems:
ẋ(t) = [J(x, d) -R(x, d)] ∇H(x) + [G(x, d) -P(x, d)] e E (t), f E (t) = [G(x, d) + P(x, d)] T ∇H(x) + [M(x, d) + S(x, d)] e E (t), (2.2.11)
In many cases, J(x, d) is an affine function of the duty cycle d(t) [START_REF] Escobar | A Hamiltonian viewpoint in the modeling of switching power converters[END_REF], while R(x, d) is a nonlinear function of d(t), which usually appears with the ideal (static) model of the converter. Both formulations (2.2.9) and (2.2.11) will be used throughout the manuscript to describe the dynamics of the microgrid components.
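As a quick illustration of the input-state-output form (2.2.9), the following Python sketch integrates one Euler step of a small PH system (an R-L-C circuit chosen here as an assumed example) and checks the power balance dH/dt = -∇H^T R ∇H + y^T u that follows from the skew-symmetry of J. All numerical values are placeholders.

```python
import numpy as np

# Minimal sketch of the input-state-output PH form (2.2.9) without feed-through,
# for an illustrative series R-L-C circuit: H(x) = 0.5*x^T*Q*x, x = (flux, charge).
L, C, r = 1e-2, 1e-4, 2.0
Q = np.diag([1.0 / L, 1.0 / C])            # gradH(x) = Q x = (current, voltage)
J = np.array([[0.0, -1.0], [1.0, 0.0]])    # power-preserving interconnection (skew)
R = np.diag([r, 0.0])                      # resistive structure
G = np.array([[1.0], [0.0]])               # external port (source voltage input)

def f(x, u):
    return (J - R) @ (Q @ x) + G.ravel() * u

x, u, dt = np.array([0.02, 1e-4]), 5.0, 1e-7
gradH = Q @ x
y = float(G.T @ gradH)                      # port output
dHdt_balance = -gradH @ R @ gradH + y * u   # expected power balance
x_next = x + dt * f(x, u)
dHdt_num = (0.5 * x_next @ Q @ x_next - 0.5 * x @ Q @ x) / dt
print(dHdt_num, dHdt_balance)               # agree up to O(dt)
```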
Energy-supplying system
The energy-supplying system of the DC microgrid (see also Fig. 2.1.1) includes all the elements which supply the energy to the load system such as:
-the electricity storage devices (e.g., batteries and/or supercapacitor) with their corresponding power converters;
-the external energy sources (e.g., three phase electrical grid) and their associated converters;
-the renewable energy sources (e.g., solar panels) and their associated converters;
-the transmission lines (DC bus).
Converters
The converters are necessary to connect the electrical devices to the DC bus. In the multi-source elevator system we consider two types of converters: DC/DC and DC/AC. DC/DC converter: The DC/DC converter is modelled as an ideal Cuk circuit (see Fig. 2.3.1) which can provide an output voltage lower or higher than the input voltage [van Dijk et al., 1995,Escobar et al., 1999]. It includes two inductors L_b1, L_b2 (with the corresponding magnetic fluxes φ_Lb1(t), φ_Lb2(t)), two capacitors C_b1, C_b2 (with the corresponding charges q_Cb1(t), q_Cb2(t)) and a pair of switches characterized by their time-averaged models (with the duty cycle d_b(t) ∈ (0, 1)); the reader is referred to [van Dijk et al., 1995] for the time-averaged model. We first investigate the converter which connects the battery to the DC bus. From (2.2.11), the PH formulation of the converter dynamics is derived as in [START_REF] Escobar | A Hamiltonian viewpoint in the modeling of switching power converters[END_REF]:
$$\begin{bmatrix} -\dot x_{cb}(t) \\ v_{bb}(t) \\ i_b(t) \end{bmatrix} = \begin{bmatrix} -J_{cb}(d_b) & -G_{cb} & -G_{cbt} \\ G_{cb}^T & 0 & 0 \\ G_{cbt}^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \nabla H_{cb}(x_{cb}) \\ -i_{bb}(t) \\ v_b(t) \end{bmatrix}, \tag{2.3.1}$$
where the state vector includes the magnetic flux of the inductors, L b1 , L b2 , and the charges of the capacitors, C b1 , C b2 , such as
x cb (t) = [φ L b1 (t) q C b1 (t) φ L b2 (t) q C b2 (t)] T ∈ R 4 . (2.3.2)
The voltage and current at the connection point between the converter and the DC bus are denoted by v b (t), i b (t) ∈ R, respectively. The voltage and current at the connection point between the converter and the battery are denoted by v bb (t), i bb (t) ∈ R, respectively. The Hamiltonian is the stored energy in the inductors and capacitors such that:
H cb (x cb ) = 1 2 x cb (t) T Q cb x cb (t), (2.3.3) with Q -1 cb = diag {L b1 , C b1 , L b2 , C b2 }.
Furthermore, the structure matrices J_cb(d_b), G_cb, G_cbt are given by the following expressions:
$$J_{cb}(d_b) = \begin{bmatrix} 0 & -d_b(t) & 0 & 0 \\ d_b(t) & 0 & 1-d_b(t) & 0 \\ 0 & -1+d_b(t) & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad G_{cb} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad G_{cbt} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}. \tag{2.3.4}$$
From the presented model of the DC/DC converter, we can see that there is no dissipation matrix R(x, d), i.e., this (ideal) converter does not lose any energy. Therefore, at steady state, the power-conversion efficiency must be 1.
Next, from (2.3.1)-(2.3.4), we derive the converter ratio:
$$\frac{v_{bb}(t)}{v_b(t)} = -\frac{1-d_b(t)}{d_b(t)} \in (-\infty, 0), \qquad \frac{i_{bb}(t)}{i_b(t)} = \frac{d_b(t)}{1-d_b(t)} \in (0, +\infty). \tag{2.3.5}$$
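The steady-state relations can be checked numerically from the PH model (2.3.1)-(2.3.4). The sketch below, with assumed values for the duty cycle, the battery current and the bus voltage, solves for the steady-state co-energy variables and verifies the voltage ratio of (2.3.5) and the lossless power balance of the ideal converter.

```python
import numpy as np

# Illustrative values (assumptions, not from the thesis)
d_b = 0.4          # duty cycle
v_b = 400.0        # DC-bus side voltage [V]
i_bb = 10.0        # battery side current [A]

# Interconnection and input matrices of the ideal Cuk converter, eq. (2.3.4)
J_cb = np.array([[0.0, -d_b, 0.0, 0.0],
                 [d_b, 0.0, 1.0 - d_b, 0.0],
                 [0.0, -(1.0 - d_b), 0.0, -1.0],
                 [0.0, 0.0, 1.0, 0.0]])
G_cb = np.array([0.0, 0.0, 0.0, 1.0])   # battery-side port
G_cbt = np.array([1.0, 0.0, 0.0, 0.0])  # DC-bus-side port

# Steady state of (2.3.1): 0 = J_cb * gradH + G_cb*(-i_bb) + G_cbt*v_b
gradH = np.linalg.solve(J_cb, G_cb * i_bb - G_cbt * v_b)

v_bb = G_cb @ gradH    # battery-side voltage
i_b = G_cbt @ gradH    # DC-bus-side current

print(v_bb / v_b, -(1.0 - d_b) / d_b)   # voltage ratio of eq. (2.3.5)
print(v_bb * i_bb, v_b * i_b)           # lossless power balance at steady state
```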
Similarly, the dynamics of the DC/DC converter associated with the supercapacitor are described by (2.3.6) with the duty cycle d_s(t), the state variable x_cs(t), the Hamiltonian H_cs(x_cs), and the input and output variables i_ss(t), v_ss(t), i_s(t), v_s(t) ∈ R, respectively. The interconnection matrices are denoted by J_cs(d_s), G_cs, G_cst, with the values given in (2.3.4). Based on these ingredients, the dynamics of the supercapacitor unit are given as:
ẋ cs (t) = J cs (d s )∇H cs (x cs ) + G cs [-i ss (t)] + G cst v s (t), v ss (t) = G T cs ∇H cs (x cs ), i s (t) = G T cst ∇H cs (x cs ), (2.3.6)
DC/AC converter: The DC/AC converter associated with the PMSM is illustrated in Fig. 2.3.2. It transforms the direct current (DC) into the three-phase alternating current (AC) and vice versa. The converter circuit is modelled as three parallel pairs of ideal switches, characterized by three duty cycles ď_l(t) = [d_a(t) d_b(t) d_c(t)]^T ∈ [0, 1]^3 (see Fig. 2.3.2). Therefore, the relation between its port variables can be represented by:
$$\begin{bmatrix} i_l(t) \\ \check v_l(t) \end{bmatrix} = \begin{bmatrix} 0 & -\check d_l(t)^T \\ \check d_l(t) & 0 \end{bmatrix} \begin{bmatrix} v_l(t) \\ \check i_l(t) \end{bmatrix}, \tag{2.3.7}$$
where the phase voltages and currents are denoted by vl (t) ∈ R 3 and ǐl (t) ∈ R 3 , the corresponding DC bus voltage and current are denoted by i l (t) ∈ R, v l (t) ∈ R. From (2.3.7), we can see that the relation matrix of the converter input and output (2.3.7) is skew-symmetric which implies the power conservation property, i.e., i l (t)v l (t) + ǐl T (t) vl (t) = 0.
Energy sources
This section presents the models of the renewable source and of the external grid. Both of them are modelled as controllable sources.
Renewable energy:
In the present work we consider only solar panels as renewable sources. In practice, a renewable source includes an array of solar panels which supply electrical power depending on the solar irradiance, the panel temperature and the device voltage [START_REF] Kong | New approach on mathematical modeling of photovoltaic solar panel[END_REF]. To connect the panels to the microgrid, a DC/DC converter is used to control the delivered voltage. It is regulated to the value where the supplied power is maximal using a Maximum Power Point Tracking (MPPT) algorithm [START_REF] Femia | Distributed maximum power point tracking of photovoltaic arrays: Novel approach and system analysis[END_REF]. In this work, we assume that the regulator quickly stabilizes the supplied power to its maximal value. Furthermore, we neglect the temperature effect and consider an ideal converter (without energy dissipation). Therefore, the panel unit is simply modelled as a power source P_r(t) which only depends on time, as in Fig. 2.3.3. Hence, the voltage v_r(t) and current i_r(t) of the solar panel unit satisfy the following constraint:
$$i_r(t)v_r(t) = -P_r(t) < 0. \tag{2.3.8}$$

External grid: The conventional energy source unit is the three-phase electrical grid associated with an AC/DC converter. Through this unit, the three-phase alternating voltage is adapted to the voltage of the DC bus by modulating the three duty cycles of the AC/DC converter. Reference values of the current delivered to the DC bus are sent to the local controller of this converter. We assume that this controller steers the delivered current to its reference quickly. Therefore, the external grid is modelled as a current source i_e(t) ∈ R with the corresponding voltage v_e(t) ∈ R, as in Fig. 2.3.4.
Transmission lines
The previously described storage devices, energy sources and electro-mechanical system are connected through the transmission lines (DC bus). The simplest model for these lines consists of a capacitor connected in parallel with the power units [START_REF] Paire | A real-time sharing reference voltage for hybrid generation power system[END_REF]. However, it is not suitable for a large system where the connection lines should be taken into account. In the slow time scale, such a system can be modelled as a resistor network [START_REF] Zhao | Distributed control and optimization in DC microgrids[END_REF]. In the fast time scale, the line model includes capacitors and resistors (see Fig. 2.3.5). The input current, output voltage and state vectors are denoted by:
i t (t) = [i tb (t) i ts (t) i tl (t) i te (t) i tr (t)] T ∈ R 5 , v t (t) = [v tb (t) v ts (t) v tl (t) v te (t) v tr (t)] T ∈ R 5 ,
x t (t) = [q t,1 (t) q t,2 (t) q t,3 (t) q t,4 (t) q t,5 (t)] T ∈ R 5 .
(2.3.9) Also, the current and voltage vectors of the lines resistors are denoted by:
i tR (t) = [i t,1 (t) i t,2 (t) i t,3 (t) i t,4 (t) i t,5 (t) i t,6 (t)] T ∈ R 6 , v tR (t) = [v t,1 (t) v t,2 (t) v t,3 (t) v t,4 (t) v t,5 (t) v t,6 (t)] T ∈ R 6 .
(2.3.10)
Their relations are described by Ohm's law as:
$$v_{tR}(t) + R_{tR}\, i_{tR}(t) = 0, \tag{2.3.11}$$
where R_tR = diag{R_t,1, R_t,2, R_t,3, R_t,4, R_t,5, R_t,6} ∈ R^{6×6} represents the line resistances. Note that R_tR is symmetric and positive.
The Hamiltonian for the energy stored in the DC bus is chosen as:
$$H_t(x_t) = \frac{1}{2}x_t(t)^T Q_t x_t(t), \tag{2.3.12}$$
where the weighting matrix is Q_t = diag{C_b, C_s, C_l, C_e, C_r}^{-1} ∈ R^{5×5}. The transmission lines dynamics are then written as:
$$\begin{bmatrix} -\dot x_t(t) \\ v_{tR}(t) \\ v_t(t) \end{bmatrix} = \begin{bmatrix} 0 & -G_{tSR} & -G_t \\ G_{tSR}^T & 0 & 0 \\ G_t^T & 0 & 0 \end{bmatrix} \begin{bmatrix} \nabla H_t(x_t) \\ i_{tR}(t) \\ i_t(t) \end{bmatrix}, \tag{2.3.13}$$
where the interconnection matrices G_t, G_tSR are given by:
$$G_t = I_5, \qquad G_{tSR} = \begin{bmatrix} -1 & 0 & 0 & 0 & -1 & -1 \\ 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 & 0 & 0 \end{bmatrix}. \tag{2.3.14}$$
At the steady state of the bus capacitors (slow time scale), the relation between the currents injected by the components and the node voltages reduces to:
$$i_t(t) = R_t v_t(t), \qquad R_t = G_{tSR} R_{tR}^{-1} G_{tSR}^T. \tag{2.3.15}$$
The resistive matrix R_t is also called the weighted Laplacian matrix of the resistor network. The properties of this matrix for a general resistor network are studied in [van der Schaft, 2010].
Proposition 2.3.1. Since the resistor network does not have the star topology (i.e., each end of each resistor is connected to a bus capacitor), the resistive matrix R t is semi-positive [van der Schaft, 2010, Zonetti et al., 2015].
Proof. If the resistor network had a star topology, the dynamics of the transmission lines would include constraints on the resistor currents of the following form:
$$A\, i_{tR}(t) = 0, \tag{2.3.16}$$
where A is an appropriate matrix. This is Kirchhoff's current law for the resistor ends which are connected together but not to any capacitor. Thus, the dynamics (2.3.13) are not valid for a star topology of the resistor network.
In our case, by multiplying both sides of the first equation in (2.3.13) with the ones-vector, 1_5^T, we obtain:
1 T 5 ẋt (t) -1 T 5 G tSR i tR (t) -1 T 5 G t i t (t) = 0.
(2.3.17)
In the previous equation, the first term indicates the total charging current of the 5 bus capacitors. The third term, given G_t in (2.3.14), indicates the total current supplied by the components. Since the resistors do not store electricity, the first and third terms must be equal. Thus, the second term must be zero:
$$1_5^T G_{tSR}\, i_{tR}(t) = 0, \;\; \forall i_{tR}(t) \in \mathbb{R}^6 \;\Rightarrow\; 1_5^T G_{tSR} = 0 \;\Rightarrow\; 1_5^T G_{tSR} R_{tR}^{-1} G_{tSR}^T = 0 \;\Rightarrow\; 1_5^T R_t = 0. \tag{2.3.18}$$
Therefore, the resistive matrix R_t is semi-positive. This concludes the proof.
To simplify the notation in the global DC microgrid dynamics, we partition the input matrix G_tSR into five input matrices G_tb, G_ts, G_tl, G_te, G_tr ∈ R^{5×1} corresponding to the battery unit, supercapacitor unit, electro-mechanical elevator, external grid and renewable source, respectively, such that:
$$\begin{bmatrix} G_{tb} & G_{ts} & G_{tl} & G_{te} & G_{tr} \end{bmatrix} = G_{tSR}. \tag{2.3.19}$$
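The Laplacian property used in the proof of Proposition 2.3.1 is easy to check numerically. The sketch below builds the resistive matrix R_t = G_tSR R_tR^{-1} G_tSR^T for a small hypothetical three-node, two-resistor bus (not the five-node, six-resistor network of the thesis) and verifies that its rows sum to zero and that it is positive semi-definite.

```python
import numpy as np

# Minimal illustration of Proposition 2.3.1 on a small hypothetical DC bus:
# 3 node capacitors connected in a line by 2 resistors. G_SR is the
# node-edge incidence matrix of this assumed network.
G_SR = np.array([[-1.0, 0.0],
                 [1.0, -1.0],
                 [0.0, 1.0]])
R_R = np.diag([0.1, 0.2])                  # line resistances [Ohm]

R_t = G_SR @ np.linalg.inv(R_R) @ G_SR.T   # weighted Laplacian of the network
ones = np.ones(3)
print(np.allclose(ones @ R_t, 0.0))        # True: rows/columns sum to zero
print(np.all(np.linalg.eigvalsh(R_t) >= -1e-12))  # True: positive semi-definite
```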
Electrical storage unit
Generally, the dynamics of an electrical storage unit can be described by the formulation (2.2.9). In this work we consider two particular types of electrical storage units, namely a lead-acid battery and a supercapacitor.
Lead-acid battery: An ideal battery model considers that the voltage is constant during the charging or discharging periods. This model can be useful only in the case of low load and current (when compared to the battery's maximal capacity). For more general models we need to take into account some nonlinear effects which affect the available charge (see [START_REF] Jongerden | Which battery model is use? Software[END_REF]). There exist various electrical circuit models (see, for example, [START_REF] Durr | Dynamic model of a lead acid battery for use in a domestic fuel cell system[END_REF]) which describe the battery dynamics accurately, but they are too complex for an application to the real-time optimal power balancing problem. The authors in [START_REF] Esperilla | A model for simulating a lead-acid battery using bond graphs[END_REF] proposed a Bond Graph battery model whose parameters are difficult to identify. Thus, we need a simple enough model which can capture all the necessary properties of the system, such as: the increase/decrease of the voltage with the charging/discharging current and the state of charge, the increase/decrease in capacity with increasing charge or discharge rates, the recovery effect and the hysteresis, by using an internal variable. There are at least two possible analytical models, the diffusion model [START_REF] Rakhmatov | An analytical high-level battery model for use in energy management of portable electronic systems[END_REF] and the Kinetic Battery Model (KiBaM) [START_REF] Manwell | Lead acid battery storage model for hybrid energy systems[END_REF]. Although these models have been developed separately, the KiBaM model can be considered as a first-order approximation of the diffusion model [START_REF] Jongerden | Which battery model is use? Software[END_REF]. Hence, we consider the KiBaM model a good choice for our work [START_REF] Lifshitz | Optimal energy management for grid-connected storage systems[END_REF]. Next, a PH formulation for the battery model is developed (see also Fig. 2.3.6). As illustrated in Fig. 2.3.6, the battery model includes two electronic "wells" with the corresponding charges q_b1(t), q_b2(t), a bridge connecting them described by a coefficient k > 0, and a serial resistor R_b. For simplicity, we assume that the battery voltage limits, E_min, E_max, are the same for both charging and discharging modes. Thus, the battery dynamics are represented by the following relations:
$$\begin{aligned} \dot q_{b1}(t) &= -k\,\frac{q_{b1}(t)}{b} + k\,\frac{q_{b2}(t)}{1-b} + i_{bb}(t), \\ \dot q_{b2}(t) &= k\,\frac{q_{b1}(t)}{b} - k\,\frac{q_{b2}(t)}{1-b}, \\ i_{bb}(t) &= -\frac{E_{max}-E_{min}}{b\,q_{max}R_b}\,q_{b1}(t) - \frac{E_{min}}{R_b} + \frac{v_{bb}(t)}{R_b}, \end{aligned} \tag{2.3.20}$$
where b ∈ (0, 1) is a charge factor, q_max is the maximal charge, and i_bb(t), v_bb(t) ∈ R are the current and the voltage, respectively. By defining the state variable from the two charges of the battery, x_b(t) = [q_b1(t) q_b2(t)]^T ∈ R^2, we describe the Hamiltonian, which indicates the energy stored in the battery, as:
$$H_b(x_b) = Q_{b1} x_b(t) + \frac{1}{2} x_b^T(t) Q_{b2} x_b(t), \tag{2.3.21}$$
where the minimal battery voltage, Q_b1 ∈ R^{1×2}, and the inverses of the battery charge capacities, Q_b2 ∈ R^{2×2}, are given by:
$$Q_{b1} = \begin{bmatrix} E_{min} & E_{min} \end{bmatrix}, \qquad Q_{b2} = \mathrm{diag}\left\{\frac{E_{max}-E_{min}}{b\,q_{max}},\; \frac{E_{max}-E_{min}}{(1-b)\,q_{max}}\right\}. \tag{2.3.22}$$
The resistive current, i_bR(t) ∈ R^2, represents the currents through the serial resistor, R_b, and through the bridge between the two charges. The resistive voltage, v_bR(t) ∈ R^2, represents the voltages across the serial resistor, R_b, and across the bridge. Ohm's law for these resistive elements is given by:
$$v_{bR}(t) + R_{bR}\, i_{bR}(t) = 0, \tag{2.3.23}$$
with the resistive matrix of the resistive elements:
$$R_{bR} = \mathrm{diag}\{R_{bi}, R_b\} \in \mathbb{R}^{2\times 2}, \qquad \text{with } R_{bi} = \frac{E_{max}-E_{min}}{k\,q_{max}}. \tag{2.3.24}$$
Then, the battery dynamics (2.3.20) are written in the PH form:
$$\begin{bmatrix} -\dot x_b(t) \\ v_{bR}(t) \\ i_{bb}(t) \end{bmatrix} = \begin{bmatrix} 0 & -G_{bSR} & 0 \\ G_{bSR}^T & 0 & G_{bRE} \\ 0 & -G_{bRE}^T & 0 \end{bmatrix} \begin{bmatrix} \nabla H_b(x_b) \\ i_{bR}(t) \\ v_{bb}(t) \end{bmatrix}, \tag{2.3.25}$$
where the structure matrices are given by
$$G_{bSR} = \begin{bmatrix} -1 & 1 \\ 1 & 0 \end{bmatrix} \in \mathbb{R}^{2\times 2}, \qquad G_{bRE} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \in \mathbb{R}^{2\times 1}. \tag{2.3.26}$$
From the previous model and the numerical parameter values (determined using the data given by the industrial partner SODIMAS France), the battery has the following important characteristics:
• R_bi characterizes the battery internal current between the two charges (see also Fig. 2.3.6). Usually, R_bi ≫ R_b, i.e., the internal current is much smaller than the battery charging/discharging current through R_b. Thus, in the fast time scale, the internal charge q_b2(t) hardly changes.
• The battery charging mode corresponds to a positive sign of the output current, i_bb(t) > 0. When i_bb(t) = 0, i.e.,
$$v_{bb}(t) = -\left(G_{bRE}^T R_{bR}^{-1} G_{bRE}\right)^{-1}\left(G_{bRE}^T R_{bR}^{-1} G_{bSR}^T\right)\nabla H_b(x_b), \tag{2.3.27}$$
the internal current is still non-zero and redistributes the charges q_b1(t), q_b2(t). From (2.3.24), (2.3.26)-(2.3.27), we can easily prove that the redistribution stops (i.e., ẋ_b(t) = 0) when the internal potentials are equal (i.e., ∂_{q_b1}H_b = ∂_{q_b2}H_b).
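The following Python sketch integrates the KiBaM relations (2.3.20) with an explicit Euler scheme; the parameter values are illustrative assumptions (not the SODIMAS data) and the terminal voltage is held constant, which is enough to observe the self-limiting discharge current and the internal charge redistribution discussed above.

```python
import numpy as np

# Minimal sketch of the KiBaM battery dynamics (2.3.20) with assumed parameters.
b, k, q_max = 0.6, 1e-4, 2.0e5      # charge factor, rate constant, capacity [C]
E_min, E_max, R_b = 11.5, 13.5, 0.05

def step(q1, q2, v_bb, dt):
    i_bb = -(E_max - E_min) / (b * q_max * R_b) * q1 - E_min / R_b + v_bb / R_b
    dq1 = -k * q1 / b + k * q2 / (1.0 - b) + i_bb
    dq2 = k * q1 / b - k * q2 / (1.0 - b)
    return q1 + dt * dq1, q2 + dt * dq2, i_bb

q1, q2, dt = 0.5 * b * q_max, 0.5 * (1 - b) * q_max, 0.1
for _ in range(36000):              # one hour at a fixed terminal voltage
    q1, q2, i_bb = step(q1, q2, v_bb=11.8, dt=dt)
print(q1 / (b * q_max), q2 / ((1 - b) * q_max))  # available vs bound charge levels
```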
Supercapacitor: Supercapacitors are well suited to electrical power applications, as they achieve high performance in power delivery [START_REF] Lai | High energy density double-layer capacitors for energy storage applications[END_REF]. They usually contain two parallel electrodes and an electrolyte, without chemical reaction thanks to an added separator. A practical model for the supercapacitor can be found in [START_REF] Zubieta | Characterization of double-layer capacitors for power electronics applications[END_REF]. Here we consider a simpler model which consists of the serial connection of a capacitor, C_s, with a resistor, R_s. Thus the state variable is defined by the supercapacitor charge, x_s(t) = q_s(t) ∈ R. The Hamiltonian, indicating the energy stored in the supercapacitor, is expressed as:
H s (x s ) = 1 2 q 2 s (t) C s . (2.3.28)
Similarly to the battery dynamics (2.3.25), the supercapacitor dynamical model is represented by:
$$\begin{bmatrix} -\dot x_s(t) \\ v_{sR}(t) \\ i_{ss}(t) \end{bmatrix} = \begin{bmatrix} 0 & -G_{sSR} & 0 \\ G_{sSR}^T & 0 & G_{sRE} \\ 0 & -G_{sRE}^T & 0 \end{bmatrix} \begin{bmatrix} \nabla H_s(x_s) \\ i_{sR}(t) \\ v_{ss}(t) \end{bmatrix}, \tag{2.3.29}$$
where i_sR(t), v_sR(t) ∈ R are the current and voltage of the serial resistor, R_s, satisfying Ohm's law:
$$R_s i_{sR}(t) + v_{sR}(t) = 0. \tag{2.3.30}$$
The structure matrices G sSR , G sRE ∈ R in (2.3.29) are given as:
G sSR = 1, G sRE = 1. (2.3.31)
The electro-mechanical elevator
This section presents the PH formulations of the electro-mechanical elevator dynamics. The electro-mechanical elevator system is the combination of the AC/DC converter, the Permanent Magnet Synchronous Machine (PMSM) and the mechanical elevator. In the literature, the d-q model of the PMSM is usually derived using the Park transformation; its PH formulation is presented in [START_REF] Nicklasson | Passivity-Based Control of a class of Blondel-Park transformable electric machines[END_REF], Petrović et al., 2001].
In this work, we first consider the PH formulation of the electro-mechanical elevator in the coordinates which include the three magnetic fluxes of the machine stator coils; this is called the original model. Then, the Park transformation is applied. Since there is still a flow constraint in the transformed model, we eliminate it to obtain the d-q PH model. Thus, two main contributions are provided here:
• the original model of the electro-mechanical elevator is represented in the PH formulation.
• the Park transformation is considered explicitly in the PH formulation.
Next, we first present the PH formulation for the original model of the electro-mechanical elevator. Then, we derive the d-q PH model using the Park transformation.
Original model
This component is a combination of the mechanical elevator, the PMSM and the three phase DC/AC converter (see Fig. 2.4.1). Physically, the original model of the subsystem considers three phase fluxes of the stator as the state variables.
Mechanical elevator: The mechanical elevator includes the cabin (including the passengers) and the counterweight with the corresponding masses m c , m p . They are connected together by a cable and hung on a pulley with the radius ρ. The friction is assumed to be negligible. The mechanical energy is the sum of kinetic and potential energies:
$$H_m(p_l, \theta_m) = \frac{p_l(t)^2}{2 I_l} - (m_c - m_p) g \rho\, \theta_m(t), \tag{2.4.1}$$
where θ_m(t) is the pulley angle, g = 9.81 m/s² is the gravity acceleration, I_l = [m_c - m_p]ρ² is the mechanical inertia, and p_l(t) = I_l θ̇_m(t) is the mechanical momentum. Let x_m(t) = [p_l(t) θ_m(t)]^T ∈ R^2 denote the state vector of the mechanical elevator. From the kinematic relation and Newton's law, we obtain the following dynamics:
$$\begin{bmatrix} -\dot p_l(t) \\ -\dot\theta_m(t) \\ \omega_l(t) \end{bmatrix} = \begin{bmatrix} 0 & -1 & -1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} \partial_{p_l} H_m(\theta_m, p_l) \\ \partial_{\theta_m} H_m(\theta_m, p_l) \\ \tau_e(t) \end{bmatrix}, \tag{2.4.2}$$
where τ_e(t) ∈ R is the magnetic torque of the PMSM and ω_l(t) ∈ R is the rotor angular speed.
Permanent Magnet Synchronous Machine: The PMSM includes a permanent-magnet rotor and a three-phase stator. The rotor flux is characterized by the magnet flux φ_f. Its projections on the three stator coils are denoted by Φ_fabc(θ_e) ∈ R^3, with the rotor angle θ_e(t) ∈ R. The stator is modelled as a system of three symmetric coils with the inductance matrix L_abc(θ_e) ∈ R^{3×3}, which depends on the rotor angle since the air gap between the rotor and the stator coils varies with time. (Details on the stator coils, Φ_fabc(θ_e) and the inductance matrix L_abc(θ_e) are provided in Appendix A.)
Hence, the PMSM magnetic energy stored in the stator is:
$$H_e(\check\Phi_l, \theta_e) = \frac{1}{2}\left(\check\Phi_l(t) - \Phi_{fabc}(\theta_e)\right)^T L_{abc}^{-1}(\theta_e)\left(\check\Phi_l(t) - \Phi_{fabc}(\theta_e)\right). \tag{2.4.3}$$
From Kirchhoff's laws and Lenz's law, we obtain the interconnection structure of the PMSM as:
$$\begin{bmatrix} -\dot{\check\Phi}_l(t) \\ -\dot\theta_e(t) \\ i_{lR}(t) \\ -\check i_l(t) \\ 0 \\ -\tau_e(t) \end{bmatrix} = \begin{bmatrix} 0 & 0 & I_3 & -I_3 & \mathbf{1}_3 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ -I_3 & 0 & 0 & 0 & 0 & 0 \\ I_3 & 0 & 0 & 0 & 0 & 0 \\ -\mathbf{1}_3^T & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \partial_{\check\Phi_l} H_e(\check\Phi_l, \theta_e) \\ \partial_{\theta_e} H_e(\check\Phi_l, \theta_e) \\ v_{lR}(t) \\ \check v_l(t) \\ v_{ln}(t) \\ \omega_l(t) \end{bmatrix}, \tag{2.4.4}$$
where \check v_l(t), \check i_l(t) ∈ R^3 are the voltage and current vectors of the stator at the connection point with the DC/AC converter. The derivation of the PMSM dynamics (2.4.4) is explained in detail in Appendix A.
The resistive elements, which we assume to be linear, correspond to the stator resistors characterized by the resistance R_l for each phase. Ohm's law is written as:
v lR (t) = -R l i lR (t).
(2.4.5)
Electro-mechanical elevator: The dynamics of the electro-mechanical elevator are derived by connecting the PMSM dynamics (2.4.4) with the mechanical dynamics (2.4.2) through the mechanical port (τ_e(t), ω_l(t)) in (2.4.4), and with the DC/AC relation (2.3.7) through the electrical port (ǐ_l(t), v̌_l(t)) in (2.4.4). Since the DC/AC converter does not store energy, the Hamiltonian of the electro-mechanical elevator is the sum of the mechanical energy (2.4.1) and the magnetic energy (2.4.3):
H l ( Φl , x m ) = H e ( Φl , θ e ) + H m (x m ).
(2.4.6)
Let us define the global state variable vector xl (t) ∈ R 6 which includes the stator magnetic fluxes Φl (t), the rotor angle θ e (t), the mechanical momentum p l (t) and the pulley angle θ m (t):
xl (t) = [ Φl (t) T θ e (t) p l (t) θ m (t)] T ∈ R 6 .
(2.4.7)
By combining the DC/AC relation (2.3.7), the mechanical elevator dynamics (2.4.2) and the PMSM dynamics (2.4.4), we derive the implicit PH model of the electro-mechanical elevator as follows:
$$\begin{aligned} \dot{\check x}_l(t) &= \check J_l \nabla H_l(\check x_l) + \check G_{lSR} v_{lR}(t) + \check G_{lSE}(\check d_l) v_l(t) + \check G_{lSn} v_{ln}(t), &\text{(2.4.8a)} \\ i_{lR}(t) &= \check G_{lSR}^T \nabla H_l(\check x_l), &\text{(2.4.8b)} \\ i_l(t) &= \check G_{lSE}^T(\check d_l) \nabla H_l(\check x_l), &\text{(2.4.8c)} \\ 0 &= \check G_{lSn}^T \nabla H_l(\check x_l), &\text{(2.4.8d)} \\ v_{lR}(t) &= -R_l i_{lR}(t). &\text{(2.4.8e)} \end{aligned}$$
where the interconnection matrix \check J_l and the input matrices \check G_{lSR}, \check G_{lSE}(\check d_l), \check G_{lSn} are described by the following expressions:
$$\check J_l = \begin{bmatrix} 0_{3\times 3} & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \in \mathbb{R}^{6\times 6}, \quad \check G_{lSR} = \begin{bmatrix} -I_3 \\ 0 \\ 0 \\ 0 \end{bmatrix} \in \mathbb{R}^{6\times 3}, \tag{2.4.9a}$$
$$\check G_{lSE}(\check d_l) = \begin{bmatrix} \check d_l(t) \\ 0 \\ 0 \\ 0 \end{bmatrix} \in \mathbb{R}^{6\times 1}, \quad \check G_{lSn} = \begin{bmatrix} -\mathbf{1}_3 \\ 0 \\ 0 \\ 0 \end{bmatrix} \in \mathbb{R}^{6\times 1}. \tag{2.4.9b}$$
As seen in the dynamics (2.4.8), there is still a constraint (2.4.8d) on the external port output. It is eliminated in the next subsection, where a reduced PH model is obtained.
Remark 2.4.1. From the Hamiltonians described by (2.4.1), (2.4.3) and (2.4.6), we see that the Hamiltonian H_l(\check x_l) is convex but not positive definite. This means that it does not admit a minimum point; equivalently, the electro-mechanical elevator does not have an equilibrium point corresponding to the zero input v_l(t) = 0. The storage elements reside in the PMSM stator and in the mechanical elevator, while the resistive element only resides in the PMSM stator. There are two external ports: the zero-flow source Sf representing the flow constraint (2.4.8d) and the electrical port (i_l, v_l) connecting to the transmission lines. Note that the Dirac structure of the DC/AC converter is modulated by the duty cycle \check d_l(t). We can see that the Bond Graph describes different physical domains within the same theoretical formalism: magnetic, electrical and mechanical.
Reduced order model
In the following we present the d-q PH formulation of the electro-mechanical elevator (2.4.8) by using the Park transformation and a constraint elimination; both steps are carried out within the PH formulation. Since the Park transformation is time invariant, the transformed system is also a PH system. For the reduced model, a simple condition to preserve the PH form is fortunately satisfied. First, we define the Park matrix P(θ_m) ∈ R^{3×3} as:
$$P(\theta_m) = \sqrt{\tfrac{2}{3}}\begin{bmatrix} \cos(\alpha_a(\theta_m)) & \cos(\alpha_b(\theta_m)) & \cos(\alpha_c(\theta_m)) \\ -\sin(\alpha_a(\theta_m)) & -\sin(\alpha_b(\theta_m)) & -\sin(\alpha_c(\theta_m)) \\ \tfrac{1}{\sqrt 2} & \tfrac{1}{\sqrt 2} & \tfrac{1}{\sqrt 2} \end{bmatrix}, \tag{2.4.10}$$
where p ∈ N denotes the number of pole pairs in the machine stator, α_a(θ_m) = pθ_m(t), α_b(θ_m) = pθ_m(t) - 2π/3, α_c(θ_m) = pθ_m(t) + 2π/3. The Park matrix is used to transform the PMSM flux vector \check Φ_l(t) in (A.0.4)
to Φ_l(t) = P(θ_m)\check Φ_l(t). For the electro-mechanical elevator, the state vector includes not only the stator fluxes but also the mechanical momentum and the rotor angle, as in (2.4.7). Thus, we define the extended Park transformation as:
$$x_l(t) = \begin{bmatrix} P(\theta_m) & 0 \\ 0 & I_3 \end{bmatrix}\check x_l(t). \tag{2.4.11}$$
Note that only the stator flux is transformed:
$$x_l(t) = [\Phi_l(t)^T \;\; \theta_e(t) \;\; p_l(t) \;\; \theta_m(t)]^T \in \mathbb{R}^6. \tag{2.4.12}$$
The Jacobian matrix of the transformation (2.4.11) is given by:
$$W(x_l) \triangleq \frac{\partial x_l}{\partial \check x_l} = \begin{bmatrix} P(\theta_m) & \dfrac{dP}{d\theta_m}(\theta_m)\,\Phi_l(t) & 0 \\ 0 & 1 & 0 \\ 0 & 0 & I_2 \end{bmatrix}. \tag{2.4.13}$$
The transformed PH model is explained by the following proposition.
Proposition 2.4.2. Any time invariant transformation of the state-space of the electro-mechanical elevator system (2.4.8) preserves the PH form.
Proof. Let W(x l ) be the Jacobian matrix of a time invariant transformation of state-space x l (x l ). From the chain rules for the state vector and Hamiltonian, we obtain:
$$\dot x_l(t) = W(x_l)\,\dot{\check x}_l(t), \qquad \nabla_{\check x_l} H_l(\check x_l) = W^T(x_l)\,\nabla_{x_l} H_l(x_l).$$
(2.4.14)
By defining the structure matrices
J l (x l ) ∈ R 6×6 , R l (x l ) ∈ R 6×6 , G l,SE (x l , ďl ) ∈ R 6×1 , G l,Sn (x l ) ∈ R 6×1 as: J l (x l ) = W(x l ) Jl W T (x l ), G lSR (x l ) = W(x l ) ǦlSR , G lSE (x l , ďl ) = W(x l ) ǦlSE ( ďl ), G lSn (x l ) = W(x l ) ǦlSn , (2.4.15)
we get the transformed dynamics of the electro-mechanical elevator:
ẋl (t) = J l (x l )∇H l (x l ) + G lSR (x l )v lR (t) +G lSE (x l , ďl )v l (t) + G lSn (x l )v ln (t), i lR (t) = G lSR T (x l )∇H l (x l ), i l (t) = G lSE T (x l , ďl )∇H l (x l ), 0 = G lSn T (x l )∇H l (x l ), v lR (t) = -R l i lR (t).
(2.4.16)
Since \check J_l is skew-symmetric, J_l(x_l) has the same property. Consequently, the time invariant transformation of the state-space x_l(\check x_l) preserves the PH form of the electro-mechanical elevator dynamics.
Remark 2.4.3. Note that, from the transformed system dynamics (2.4.16) there is still a flow constraint (fourth equation in (2.4.16)). Also, the structure matrix [J l (x l ) G lSR (x l ) G lSE (x l , ďl ) G lSn (x l )] is not full rank. This motivates model order reduction which preserves the PH formulation. In the following proposition we indicate a state-space projection to reduce the transformed dynamics (2.4.16).
Proposition 2.4.4. Let x_l(t) ∈ R^4 be the reduced state variable obtained from x_l(t) ∈ R^6 by the state-space projection
$$x_l(t) = G^{\perp} x_l(t), \tag{2.4.17}$$
with
$$G^{\perp} = \begin{bmatrix} I_2 & 0 & 0 \\ 0 & 0 & I_2 \end{bmatrix} \in \mathbb{R}^{4\times 6}. \tag{2.4.18}$$
In the case of electro-mechanical elevator, the presented state-space projection has the following properties:
1. It satisfies the relations:
∇H l (x l ) = (G ⊥ ) T ∇H l (x l ), G ⊥ G lSn (x l ) = 0. (2.4.19)
2. It reduces electro-mechanical elevator PH dynamics (2.4.16) to PH dynamics.
Proof. We start by proving the first property. The first coordinate of the reduced state variable vector x l (t) defined by (2.4.17) is called the direct flux and denoted by φ ld (t). Similarly, the second coordinate is called the quadrature flux and denoted by φ lq (t). They describe the projections of the stator magnetic fluxes on two perpendicular axis associated with the rotor. Using these definitions and the reduction (2.4.17)-(2.4.18), the electrical, mechanical and electro-mechanical state vectors are given by:
x e (t) = φ ld (t) φ lq (t) ∈ R 2 , x m (t) = p l (t) θ m (t) ∈ R 2 , x l (t) = x e (t) x m (t) ∈ R 4 .
(2.4.20)
Moreover, the electro-mechanical elevator Hamiltonian defined by (2.4.1), (2.4.3) and (2.4.6) may be rewritten using the reduced state variables:
$$H_l(x_l) = \frac{1}{2L_d}\left(\phi_{ld}(t) - \sqrt{\tfrac{3}{2}}\,\phi_f\right)^2 + \frac{\phi_{lq}(t)^2}{2L_q} + \frac{p_l(t)^2}{2I_l} - (m_c - m_p)\,g\rho\,\theta_m(t), \tag{2.4.21}$$
with the direct and quadrature inductances defined by:
$$L_d = L_1 + 1.5L_2 - L_3, \qquad L_q = L_1 - 1.5L_2 - L_3, \tag{2.4.22}$$
where L_1, L_2, L_3 are the parameters of the original stator fluxes in (A.0.2). By direct calculation from the mechanical energy (2.4.1), the electromagnetic energy (2.4.3), the electro-mechanical elevator energy (2.4.6), the Park transformation (2.4.10), the Jacobian (2.4.13) and the state-space projection (2.4.17)-(2.4.18), we derive (2.4.19).
Next, we prove the second property stated in the Proposition. By multiplying the transformed dynamics (2.4.16) with G^⊥, we get:
$$\begin{aligned} G^{\perp}\dot x_l(t) &= G^{\perp}J_l(x_l)\nabla H_l(x_l) + G^{\perp}G_{lSR}(x_l)v_{lR}(t) + G^{\perp}G_{lSE}(x_l,\check d_l)v_l(t) + G^{\perp}G_{lSn}(x_l)v_{ln}(t), \\ i_{lR}(t) &= G_{lSR}^T(x_l)\nabla H_l(x_l), \\ i_l(t) &= G_{lSE}^T(x_l,\check d_l)\nabla H_l(x_l), \\ 0 &= G_{lSn}^T(x_l)\nabla H_l(x_l), \\ v_{lR}(t) &= -R_l i_{lR}(t). \end{aligned} \tag{2.4.23}$$
Thanks to the reduction (2.4.17) and the property (2.4.19), we obtain the reduced electro-mechanical elevator dynamics as:
$$\begin{aligned} \dot x_l(t) &= J_l(x_l)\nabla H_l(x_l) + G_{lR}(x_l)v_{lR}(t) + G_l(\check d_l, x_l)v_l(t), \\ i_{lR}(t) &= G_{lR}^T(x_l)\nabla H_l(x_l), \\ i_l(t) &= G_l^T(d_l, x_l)\nabla H_l(x_l), \\ v_{lR}(t) &= -R_l i_{lR}(t), \end{aligned} \tag{2.4.24}$$
where the reduced structure matrices are given by:
$$J_l(x_l) = G^{\perp}J_l(x_l)(G^{\perp})^T, \qquad G_{lR}(x_l) = G^{\perp}G_{lSR}(x_l), \qquad G_l(\check d_l, x_l) = G^{\perp}G_{lSE}(\check d_l, x_l). \tag{2.4.25}$$
From the previous calculation, we note that the input matrix G l ( ďl , x l ) depends on the pulley angle θ m (t). Therefore, we define the equivalent duty cycle vector denoted by d l (t) ∈ R 2 to simplify the input matrix G l ( ďl , x l ) in the form:
$$G_l(d_l) = \begin{bmatrix} d_l(t) \\ 0 \end{bmatrix} \in \mathbb{R}^{4\times 1}, \tag{2.4.26}$$
with
$$d_l(t) = \sqrt{\tfrac{2}{3}}\begin{bmatrix} \cos(p\theta_m) & \cos(p\theta_m - \tfrac{2\pi}{3}) & \cos(p\theta_m + \tfrac{2\pi}{3}) \\ -\sin(p\theta_m) & -\sin(p\theta_m - \tfrac{2\pi}{3}) & -\sin(p\theta_m + \tfrac{2\pi}{3}) \end{bmatrix}\check d_l(t). \tag{2.4.27}$$
From the presented reduced model in (2.4.24), we emphasize some characteristics of the electro-mechanical elevator dynamics related to the interconnection matrix and to the Hamiltonian:
1. The interconnection matrix J l (x l ) of the dynamics (2.4.24) depends on the state variable x l (t) and is not integrable.
2. The electro-mechanical elevator dynamics include the electrical and mechanical domains. From their details, we can partition the structure matrices as:
$$J_l(x_e) = \begin{bmatrix} 0 & J_{em}(x_e) \\ -J_{em}^T(x_e) & J_m \end{bmatrix}, \qquad G_{lR}(x_l) = \begin{bmatrix} G_{lRe}(x_m) \\ 0 \end{bmatrix}, \tag{2.4.28}$$
where
$$J_m = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \quad J_{em}(x_e) = \begin{bmatrix} \phi_{lq}(t) & 0 \\ -\phi_{ld}(t) & 0 \end{bmatrix}, \quad G_{lRe}(x_m) = \sqrt{\tfrac{2}{3}}\begin{bmatrix} \cos(p\theta_m) & \cos(p\theta_m - \tfrac{2\pi}{3}) & \cos(p\theta_m + \tfrac{2\pi}{3}) \\ -\sin(p\theta_m) & -\sin(p\theta_m - \tfrac{2\pi}{3}) & -\sin(p\theta_m + \tfrac{2\pi}{3}) \end{bmatrix}. \tag{2.4.29}$$
From (2.4.21), we note that the Hamiltonian is a quadratic function:
$$H_l(x_l) = Q_{l0} + x_l^T(t)Q_{l1} + \frac{1}{2}x_l^T(t)Q_{l2}x_l(t), \tag{2.4.30}$$
where the weight matrices are given by:
$$Q_{l0} = \frac{3\phi_f^2}{4L_d}, \qquad Q_{l1} = \begin{bmatrix} Q_{l1e} \\ Q_{l1m} \end{bmatrix}, \qquad Q_{l2} = \begin{bmatrix} Q_{l2e} & 0 \\ 0 & Q_{l2m} \end{bmatrix}, \tag{2.4.31}$$
with the weight matrices corresponding to the electrical and mechanical domains:
$$Q_{l1e} = \begin{bmatrix} -\sqrt{\tfrac{3}{2}}\,\dfrac{\phi_f}{L_d} \\ 0 \end{bmatrix}, \quad Q_{l1m} = \begin{bmatrix} 0 \\ -(m_c - m_p)g\rho \end{bmatrix}, \quad Q_{l2e} = \begin{bmatrix} \tfrac{1}{L_d} & 0 \\ 0 & \tfrac{1}{L_q} \end{bmatrix}, \quad Q_{l2m} = \begin{bmatrix} \tfrac{1}{I_l} & 0 \\ 0 & 0 \end{bmatrix}. \tag{2.4.32}$$
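A quick numerical sanity check of the Park matrix (2.4.10), assuming the power-invariant convention written there: P(θ_m) is orthogonal, so the d-q quantities carry the same instantaneous power as the three-phase ones. The values of p, θ_m and the three-phase samples below are arbitrary.

```python
import numpy as np

# Check of the (power-invariant) Park matrix P(theta_m) of (2.4.10):
# orthogonality and preservation of the instantaneous power v^T i.
p, theta_m = 20, 0.123                     # assumed values
a = p * theta_m
alphas = np.array([a, a - 2 * np.pi / 3, a + 2 * np.pi / 3])
P = np.sqrt(2.0 / 3.0) * np.vstack([np.cos(alphas),
                                    -np.sin(alphas),
                                    np.full(3, 1.0 / np.sqrt(2.0))])

print(np.allclose(P @ P.T, np.eye(3)))     # orthogonality
v_abc = np.array([310.0, -120.0, -190.0])  # arbitrary three-phase voltages
i_abc = np.array([3.0, -1.0, -2.0])        # arbitrary three-phase currents
print(np.isclose(v_abc @ i_abc, (P @ v_abc) @ (P @ i_abc)))  # power preserved
```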
Identification
Most parameter values of the derived model can be measured directly on the actual system. However, the PMSM parameters cannot be measured directly: the corresponding procedures are quite complex (intrusive and sensitive) and the obtained values may differ from the actual values in nominal operating conditions. Therefore, we use identification methods to determine the parameters x from measured data of some physical quantities, called the inputs and outputs of the system and denoted by û(t), ŷ(t), respectively.
Let ȳ(t) be the output value derived from the input data û(t) and the parameters x using the system dynamics:
$$\bar Y_d = g(x, \hat U_d), \tag{2.4.33}$$
where \bar Y_d and \hat U_d are the discrete (sampled) versions of the output ȳ(t) and of the input û(t), respectively. The outputs ŷ(t) and ȳ(t) are used to construct the cost function V(\hat Y_d, \bar Y_d), with \hat Y_d the discrete version of ŷ(t). In this work, we penalize the discrepancy between ŷ(t) and ȳ(t) such that:
$$V(\hat Y_d, \bar Y_d) = \sum_{i=1}^{N}\left(\bar y(i) - \hat y(i)\right)^T\left(\bar y(i) - \hat y(i)\right). \tag{2.4.34}$$
Then, by replacing (2.4.33) in (2.4.34), we obtain a cost which depends on the parameters, V(\hat Y_d, x, \hat U_d), or simply V(x). By minimizing this cost function with respect to the parameters x, we obtain their approximate values.
The electro-mechanical elevator dynamics described by (2.4.20)-(2.4.21), (2.4.24), (2.4.26) and (2.4.28) can be expressed in the following explicit form:
$$\begin{aligned} \dot\phi_{ld}(t) &= -\frac{R_l}{L_d}\left(\phi_{ld}(t) - \sqrt{\tfrac{3}{2}}\,\phi_f\right) + \frac{\phi_{lq}(t)\,p_l(t)}{I_l} + v_{ld}(t), \\ \dot\phi_{lq}(t) &= -\frac{R_l}{L_q}\,\phi_{lq}(t) - \frac{1}{I_l}\,\phi_{ld}(t)\,p_l(t) + v_{lq}(t), \\ \dot p_l(t) &= \sqrt{\tfrac{3}{2}}\,\frac{\phi_f}{L_d}\,\phi_{lq}(t) + \frac{L_d - L_q}{L_d L_q}\,\phi_{ld}(t)\phi_{lq}(t) + \Gamma_{res}, \\ \dot\theta_m(t) &= \frac{1}{I_l}\,p_l(t). \end{aligned} \tag{2.4.35}$$
In this model, there are seven parameters: I_l, Γ_res, p, R_l, L_d, L_q, φ_f. The mechanical parameters (I_l, Γ_res) and the number of pole pairs (p) are assumed to be known (they may be identified separately). In practice, the remaining four parameters are determined by identification methods using the measured currents and voltages of the three phases, ǐ_l(t), v̌_l(t), and the angular velocity of the rotor, ω_l(t). In the d-q frame, the corresponding voltages and currents v_l(t), i_l(t) are obtained using the Park transformation described by (2.3.7), (2.4.27) and (3.3.1). Finally, the identification algorithms make use of the following data:
v ld (t), v lq (t), i ld (t), i lq (t), ω l (t).
It is theoretically sufficient to use the first two (electrical) equations of (2.4.35), which relate to the magnetic fluxes, for the identification purpose. However, in the second equation of (2.4.35), the coefficient related to the rotor mechanical momentum is very small. This produces poor results for the estimation of the rotor flux in the case of experiments with short durations and zero initial flux. Therefore, the first three equations of (2.4.35) are used.
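For illustration, the explicit model (2.4.35) can be integrated to produce synthetic identification data of the kind listed above. The sketch below uses the parameter values quoted later for the simulation study and an assumed sinusoidal d-q voltage excitation; it only shows how such data can be generated and does not reproduce the Matlab/Simulink experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the explicit d-q model (2.4.35) to generate synthetic data.
R_l, L_d, L_q, phi_f = 0.53, 8.96e-3, 11.23e-3, 0.05
I_l, Gamma_res = 0.1, 5.0
c = np.sqrt(1.5) * phi_f                   # flux offset sqrt(3/2)*phi_f

def rhs(t, x):
    phi_d, phi_q, p_l, theta_m = x
    v_d, v_q = 10.0 * np.cos(50 * t), 10.0 * np.sin(50 * t)   # assumed excitation
    dphi_d = -R_l / L_d * (phi_d - c) + phi_q * p_l / I_l + v_d
    dphi_q = -R_l / L_q * phi_q - phi_d * p_l / I_l + v_q
    dp_l = c / L_d * phi_q + (L_d - L_q) / (L_d * L_q) * phi_d * phi_q + Gamma_res
    return [dphi_d, dphi_q, dp_l, p_l / I_l]

sol = solve_ivp(rhs, (0.0, 0.5), [c, 0.0, 0.0, 0.0], max_step=1e-4)
i_d, i_q = (sol.y[0] - c) / L_d, sol.y[1] / L_q   # d-q currents
omega = sol.y[2] / I_l                            # rotor speed
print(i_d[-1], i_q[-1], omega[-1])
```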
We investigate hereinafter two identification methods based on two identification models for the electromechanical elevator: direct dynamic identification model (DDIM) [Khatounian et al., 2006, Robert andGautier, 2013] and inverse dynamic identification model (IDIM) [START_REF] Khatounian | Parameters estimation of the actuator used in haptic interfaces: Comparison of two identification methods[END_REF],Zentai and Dabóczi, 2008, Robert and Gautier, 2013]. However, note that the mentioned choices of inputs and outputs have not been generalized for the PH system yet, i.e., they do not relate to the input and output at the external power port.
The Output Error method (OE) [START_REF] Robert | Global identification of mechanical and electrical parameters of synchronous motor driven joint with a fast CLOE method[END_REF]: In this method, let
$$\hat y(t) = \begin{bmatrix} i_l(t) \\ \dot p_l(t) \\ \omega_l(t) \end{bmatrix}, \quad \bar y(t) = \begin{bmatrix} \bar i_l(t) \\ \dot{\bar p}_l(t) \\ \bar\omega_l(t) \end{bmatrix}, \quad \hat u(t) = v_l(t), \quad x = \begin{bmatrix} R_l \\ L_d \\ L_q \\ \phi_f \end{bmatrix}$$
denote the output data, the estimated outputs, the input data and the vector of parameters, respectively. A numerical method is needed to find the argument x_min minimizing V(x) in (2.4.34), such as the gradient method or the Newton method [START_REF] Robert | Global identification of mechanical and electrical parameters of synchronous motor driven joint with a fast CLOE method[END_REF]. Following [START_REF] Khatounian | Parameters estimation of the actuator used in haptic interfaces: Comparison of two identification methods[END_REF], we use the Levenberg-Marquardt algorithm to ensure a robust convergence even in the case of a bad initialization of x. This identification scheme is also robust to measurement noise, but requires a larger computational effort than the method we present next.
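A minimal sketch of the OE idea in Python, assuming that sampled arrays t, v_d, v_q, omega, i_d_meas, i_q_meas are available (for instance from the simulation sketch above): the electrical equations of (2.4.35) are simulated for candidate parameters and the residual is minimized with the Levenberg-Marquardt option of scipy.optimize.least_squares. The helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_currents(x, t, v_d, v_q, omega):
    # Forward-Euler simulation of the two electrical equations of (2.4.35)
    # for candidate parameters x = (R_l, L_d, L_q, phi_f).
    R_l, L_d, L_q, phi_f = x
    c = np.sqrt(1.5) * phi_f
    phi_d, phi_q = c, 0.0
    i_d, i_q = np.zeros_like(t), np.zeros_like(t)
    for n in range(1, len(t)):
        dt = t[n] - t[n - 1]
        dphi_d = -R_l / L_d * (phi_d - c) + phi_q * omega[n - 1] + v_d[n - 1]
        dphi_q = -R_l / L_q * phi_q - phi_d * omega[n - 1] + v_q[n - 1]
        phi_d, phi_q = phi_d + dt * dphi_d, phi_q + dt * dphi_q
        i_d[n], i_q[n] = (phi_d - c) / L_d, phi_q / L_q
    return i_d, i_q

def residual(x, t, v_d, v_q, omega, i_d_meas, i_q_meas):
    i_d, i_q = simulate_currents(x, t, v_d, v_q, omega)
    return np.concatenate([i_d - i_d_meas, i_q - i_q_meas])

# x0 is a (possibly poor) initial guess; method='lm' selects Levenberg-Marquardt.
# result = least_squares(residual, x0=np.array([1.0, 1e-2, 1e-2, 0.1]),
#                        args=(t, v_d, v_q, omega, i_d_meas, i_q_meas), method='lm')
```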
The Least Squares and Inverse model method (LSI) [START_REF] Zentai | Offline parameter estimation of permanent magnet synchronous machine by means of LS optimization[END_REF]: In this method, let
$$\hat y(t) = \begin{bmatrix} v_l(t) \\ \dot p_l(t) - \Gamma_{res} \end{bmatrix}, \quad \bar y(t) = \begin{bmatrix} \bar v_l(t) \\ \dot{\bar p}_l(t) - \Gamma_{res} \end{bmatrix}, \quad \hat u(t) = \begin{bmatrix} i_l(t) \\ \omega_l(t) \end{bmatrix}, \quad x = \begin{bmatrix} R_l \\ L_d \\ L_q \\ \phi_f \end{bmatrix}$$
denote the output data, the estimated outputs, the input data and the vector of parameters, with Γ_res = -∂_{θ_m}H_l(x_l) = (m_c - m_p)gρ. This choice allows rewriting the system dynamics as constraints that are linear in the parameters:
$$\bar y(t) = \gamma(\hat u)\,x, \tag{2.4.36}$$
where γ(û) is a suitable regressor function derived from (2.4.35). Thus, the cost function V(x) in (2.4.34) is quadratic in x and its minimizer is known in closed form. This method gives good results in the noise-free case but is rather sensitive to noise disturbances in the current data. Therefore, to improve the results, it is necessary to use it together with a low-pass filter on these data.
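A corresponding sketch of the LSI idea: writing the two electrical equations of (2.4.35) in current coordinates makes the measured voltages linear in x = (R_l, L_d, L_q, φ_f), so a single linear least-squares solve returns the estimate. The regressor gamma below is one possible choice consistent with (2.4.35); the data arrays are assumed to be available and, preferably, low-pass filtered.

```python
import numpy as np

def gamma(i_d, i_q, di_d, di_q, omega):
    # Stack the d- and q-axis voltage equations for all samples:
    # v_d = R_l*i_d + L_d*di_d - omega*L_q*i_q
    # v_q = R_l*i_q + L_d*omega*i_d + L_q*di_q + sqrt(3/2)*omega*phi_f
    row_d = np.column_stack([i_d, di_d, -omega * i_q, np.zeros_like(i_d)])
    row_q = np.column_stack([i_q, omega * i_d, di_q, np.sqrt(1.5) * omega])
    return np.vstack([row_d, row_q])

def lsi_estimate(i_d, i_q, di_d, di_q, omega, v_d, v_q):
    A = gamma(i_d, i_q, di_d, di_q, omega)
    y = np.concatenate([v_d, v_q])
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # closed-form LS minimizer
    return x_hat                                    # estimates of R_l, L_d, L_q, phi_f
```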
Simulation results: The "measured" data are obtained from simulations run with the parameter values received from the PMSM manufacturer. The PMSM model proposed in (2.4.24) was implemented in Matlab/Simulink 2016a. During the simulations, the input voltages, output currents and rotor speed are recorded to create the identification data (see also Fig. 2.4.4, which shows the numerical data of the d-q currents i_l(t), the d-q voltages v_l(t), the rotor speed ω_l(t) and the rotor angle θ_m(t)). Using these data, both identification algorithms are tested in terms of convergence and noise sensitivity.
Three experimental scenarios are implemented, corresponding to the nominal case (without noise), noise-affected data, and the case with noise and a supplementary low-pass filter. We provide more details on the simulation data and configuration in [START_REF] Pham | An energy-based control model for autonomous lifts[END_REF]. The parameter values used in these experiments are J = 0.1 kg.m², Γ_res = 5 N.m, p = 20, R_s = 0.53 Ω, L_d = 8.96 mH, L_q = 11.23 mH, φ_f = 0.05 Wb. The sample time is h = 10⁻⁵ s and the simulation duration is t = 0.5 s. The first experiment (without noise) was also carried out with a shorter duration (t = 0.005 s) in order to emphasize its effect on the identification results. The noise amplitude is 1 A for the current data and 1 rad/s for the rotor speed data. From the results reported in Tables 2.4.1-2.4.3 for the presented scenarios, some conclusions are drawn:
- The estimated values obtained with the OE identification method are less sensitive to noise than those obtained with the LSI identification method;
- The estimated values of the direct inductance, L_d, and of the quadrature inductance, L_q, are more accurate than those of the stator resistance, R_l, and of the rotor magnetic flux, φ_f;
- The longer the running time of the LSI method, the better its convergence. On the contrary, the convergence of the OE method worsens when the simulation time increases, because the errors of its estimated outputs accumulate.
The simulations show that, in our case, the LSI method combined with a low-pass filter is better than the other identification methods since it ensures both a robust convergence and a small computational burden. These results are similar to those presented by [START_REF] Khatounian | Parameters estimation of the actuator used in haptic interfaces: Comparison of two identification methods[END_REF] in simulation (deterministic case without converter) and experimentally. The appropriate values of the sample time and duration can be adapted according to the chosen current waveforms.
The global DC microgrid model
This section presents the global PH model of the DC microgrid elevator taking into account the PH models of the presented microgrid components and the microgrid power-preserving interconnection.
Bond graph for the multi-source elevator system
Global model
As previously mentioned, all the electrical power components are connected to the corresponding port of the DC bus. This connection is described by a simple power-preserving relation:
i t (t) v com (t) = 0 -I 5 I 5 0 v t (t) i com (t) , (2.5.1)
where i_t(t), v_t(t) ∈ R^5 are the currents and voltages at the connection ports of the DC bus defined by (2.3.9), and
i_com(t) = [i_b(t) i_s(t) i_l(t) i_e(t) i_r(t)] ∈ R^5,  v_com(t) = [v_b(t) v_s(t) v_l(t) v_e(t) v_r(t)] ∈ R^5, (2.5.2)
are the currents and voltages of the battery unit, supercapacitor unit, electro-mechanical elevator, external grid and renewable power source, respectively. Consequently, the global state variable, x(t), and Hamiltonian, H(x), gather the states and energy of the DC microgrid components:
x(t) = [x^T_t(t) x^T_cb(t) x^T_cs(t) x^T_s(t) x^T_e(t) x^T_m(t) x^T_b(t)]^T ∈ R^20, (2.5.3a)
H(x) = H_t(x_t) + H_cb(x_cb) + H_cs(x_cs) + H_s(x_s) + H_e(x_e) + H_m(x_m) + H_b(x_b). (2.5.3b)
Note that in (2.5.3) the subscripts denote the corresponding variables for the transmission lines, battery/supercapacitor converters, machine stator, battery, supercapacitor and mechanical elevator, respectively. The global flow and effort variables of the resistive elements are denoted by:
e R (t) = i T tR (t) i T bR (t) i T sR (t) v T lR (t) T ∈ R 11 , (2.5.4a) f R (t) = v T tR (t) v T bR (t) v T sR (t) i T lR (t) T ∈ R 11 , (2.5.4b)
and their relation is described by Ohm's law:
f R (t) + R R e R (t) = 0, (2.5.5)
with R_R ∈ R^{11×11} denoting the resistive matrix given as:
R_R = diag{R_tR, R_bR, R_s, R_l^{-1} I_3}. (2.5.6)
The global microgrid dynamics are then written in the implicit form:
f(t) = D(d, x)e(t),  P_r(t) = −i_r(t)v_r(t),  f_R(t) = −R_R e_R(t), (2.5.7)
where the flow, f(t) ∈ R^33, and the effort, e(t) ∈ R^33, variables are given by:
f (t) = -ẋ(t) f R (t) v e (t) v r (t) , e(t) = ∇H(x) e R (t) i e (t) i r (t) , (2.5.8)
and the structure matrix, D(d, x) ∈ R^{33×33}, is described by:
D(d, x) = -J(d, x) -G SR (θ m ) -G e -G r G T SR (θ m ) 0 0 0 G T e 0 0 0 G T r 0 0 0 , (2.5.9)
which contains the structure matrices J(d, x) ∈ R^{20×20}, G_SR(θ_m) ∈ R^{20×11} and G_r, G_e ∈ R^{20×1}, described by:
J(d, x) =
[ 0, -G_tb G_cbt^T, -G_ts G_cst^T, 0, -G_tl d_l^T(t), 0, 0 ;
  G_cbt G_tb^T, J_cb(d_b), 0, 0, 0, 0, 0 ;
  G_cst G_ts^T, 0, J_cs(d_s), 0, 0, 0, 0 ;
  0, 0, 0, 0, 0, 0, 0 ;
  d_l(t) G_tl^T, 0, 0, 0, 0, J_em(x_e), 0 ;
  0, 0, 0, 0, -J_em^T(x_e), J_m, 0 ;
  0, 0, 0, 0, 0, 0, 0 ], (2.5.10a)
G_SR(θ_m) =
[ G_tSR, 0, 0, 0 ;
  0, G_cb G_bRE^T, 0, 0 ;
  0, 0, G_cs G_sRE^T, 0 ;
  0, 0, G_sSR, 0 ;
  0, 0, 0, G_lRe(x_m) ;
  0, 0, 0, 0 ;
  0, G_bSR, 0, 0 ], (2.5.10b)
G_r = [G_tr ; 0],  G_e = [G_te ; 0]. (2.5.10c)
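The only structural property exploited later is that D(d, x) is skew-symmetric, so the interconnection exchanges power without producing or dissipating any. The toy check below uses reduced dimensions and random placeholder blocks (not the actual microgrid matrices) to illustrate how this property can be verified numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3                                        # toy sizes, not the real 20 states / 11 resistive ports
J = rng.normal(size=(n, n)); J = J - J.T           # any skew-symmetric J(d, x)
G = rng.normal(size=(n, m))                        # placeholder resistive/input block
D = np.block([[J, -G], [G.T, np.zeros((m, m))]])   # same block pattern as (2.5.9)

assert np.allclose(D + D.T, 0.0)                   # skew-symmetry
e = rng.normal(size=n + m)                         # arbitrary effort vector
f = D @ e
print("internal power e^T f =", e @ f)             # ~0 up to round-off
```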
From the above model (2.5.7)-(2.5.10c), we emphasize some of the DC microgrid characteristics:
1. The Hamiltonian H(x) is a quadratic function of the form:
H(x) = Q_0 + Q_1^T x(t) + (1/2) x^T(t) Q_2 x(t), (2.5.11)
with Q_0 ∈ R, Q_1 ∈ R^20 and Q_2 ∈ R^{20×20} diagonal and non-negative. The minimum of the Hamiltonian does not exist because of the linear potential-energy term. Usually, once the operating point (state) is determined, we shift the coordinates to this point to obtain a new Hamiltonian, which is useful for stabilizing the system (a minimal numerical sketch of this quadratic form and of the coordinate shift is given after this list).
2. Some external ports of the DC network connect to the renewable energy units which are modelled as DC power sources (see Eq. (2.3.8)). Therefore, we can not derive the input-state-output PH system, and the DC microgrid model is represented in the implicit form (2.5.7). However, this implicit form describes in detail the component models and their interconnection which will be useful for the global control objective.
3. The interconnection matrix J(d, x) in (2.5.10a) is an affine function of the state variables (see relations (2.4.29) and (2.5.10a)).
4. The interconnection matrix J(d, x) in (2.5.10a) is also an affine function of the converter duty cycles. This means that these control variables do not change the global energy but only the internal power distribution between the components.
5. The studied microgrid illustrated in Fig. 2.1.1 includes three physical domains: chemical (battery, supercapacitor), mechanical (machine rotor, mechanical elevator) and electrical (converters, transmission lines, machine stator). Accordingly, there are three time-scale dynamics, corresponding to the electrical domain, to the mechanical elevator and supercapacitor, and to the battery, respectively. They will be considered with the corresponding objectives in the next subsection.
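As announced in point 1 above, the Hamiltonian is quadratic. The sketch below uses toy dimensions and placeholder matrices (not the real Q_0, Q_1, Q_2) to evaluate H(x), its gradient, and the coordinate shift to an assumed operating point x*.

```python
import numpy as np

n = 4                                       # toy state dimension
Q0, Q1 = 0.0, np.ones(n)
Q2 = np.diag([2.0, 1.0, 0.5, 0.0])          # diagonal, non-negative as in (2.5.11)

H     = lambda x: Q0 + Q1 @ x + 0.5 * x @ Q2 @ x
gradH = lambda x: Q1 + Q2 @ x

x_star = np.array([1.0, 0.0, 2.0, -1.0])    # assumed operating point
# Shifted Hamiltonian H_s(z) = H(x* + z) - H(x*) - gradH(x*)^T z: quadratic in z,
# with z = 0 a minimum on the directions where Q2 is positive.
H_shift = lambda z: H(x_star + z) - H(x_star) - gradH(x_star) @ z
print(H_shift(np.zeros(n)), H_shift(np.array([0.1, 0.0, 0.0, 0.0])))
```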
Control objectives
As mentioned in the introduction, there are many control objectives considered for the microgrid. In the present work we concentrate on the energy management objective under an optimization-based control framework. Based on the modelling formulation presented above, in this section we present the general reference profiles, constraints and costs which will be used in the next chapters for the optimization-based control design.
Reference profiles
All the elements of the electrical system are characterized by certain reference profiles.
Load power profile: In the dynamics (2.4.24), the cabin mass, m_c, is constant during the travel of the elevator. However, it changes when the elevator stops, according to the weight of the passengers inside. Furthermore, the start/stop instants and the cabin position are also modified many times in a day. For the elevator operation, the cabin mass, m_c, the start instant, t_in, the stop instant, t_fi, and the cabin position, θ, are the control signals which are decided by the passengers and/or the elevator programmer.
We denote by N t the number of elevator travels in a day. Then, the vector m c , t in , t f i , θ ∈ R Nt captures N t values of cabin mass, initial instant, final instant and final rotor angle of N t travels:
m c = [m c,1 . . . m c,Nt ] T , t in = [t in,1 . . . t in,Nt ] T , t f i = [t f i,1 . . . t f i,Nt ] T , θ = [θ 1 . . . θ Nt ] T .
(2.6.1)
These vectors with the duty cycles of the machine converter define the control variables and lead to a consumption power profile of the electro-mechanical elevator. In our work, m c , t in , t f i , θ are assumed to be decided by the passenger, i.e., they are fixed and unknown. Therefore, the long-term consumed electrical power is nearly fixed and admits a statistical rule. Taking into account the available statistical measurements of electricity consumption we consider the reference power of the consumer denoted by P l (t). However, the short-term load power is still modified by the minimization of the machine dissipated energy during each travel which will be presented in Chapter 4.
Renewable power profile: From the description of the renewable source in Subsection 2.3.2 we denote by P r (t) the power profile of the renewable source estimated from meteorological data.
Electricity price profile: Lastly, by using existing historical data of electricity market, we denote the predicted electricity price profile by price(t). Also, we assume that the selling and buying prices are the same. Fig. 2.6.1 depicts the evolution of the price on the electricity market in a day.
Constraints
Electro-mechanical elevator system: The operation constraints corresponding to the electro-mechanical elevator system are caused by physical limitations and passenger requirements. In fact, to avoid a high machine temperature, which can destroy the machine and change its physical characteristics, the current amplitude needs to satisfy the following constraint (see [Lemmens, PMSM drive current and voltage limiting as a constraint optimal control problem] for details):
||ǐ_l(t)||_2 ≤ I_l,max. (2.6.2)
In the d-q frame, (2.6.2) is rewritten as:
|| [ (φ_ld(t) − √(3/2) φ_f)/L_d ,  φ_lq(t)/L_q ] ||_2 ≤ I_l,max/√2. (2.6.3)
Because of the machine design and the passenger's comfort requirements, the rotor speed and the rotor angle need to be less than a priori chosen values. The elevator speed limit is defined by:
ω l,min ≤ ω l (t) ≤ ω l,max , (2.6.4)
with ω_l,min, ω_l,max ∈ R denoting the minimal and maximal values of the mechanical momentum, which is proportional to the machine speed. These limits come from the mechanical solidity constraints and are given by the manufacturer.
The duty cycle ďl (t) ∈ R 3 has to be in the interval [0, 1] 3 . However, by using the Park transformation, we derive the corresponding non-linear constraint in the d-q coordinate:
d l (t) 2 ≤ 1 √ 2 .
(2.6.5)
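The current and duty-cycle constraints above are simple norm bounds once the d-q variables are formed. The helper below evaluates (2.6.3) and (2.6.5) for a candidate operating point; the flux-to-current relation and the rating I_l,max are assumptions for illustration, while the inductance and flux values reuse Section 2.4.

```python
import numpy as np

def elevator_constraints_ok(phi_ld, phi_lq, d_l, L_d=8.96e-3, L_q=11.23e-3,
                            phi_f=0.05, I_l_max=20.0):
    """Check the dq current amplitude (2.6.3) and duty-cycle norm (2.6.5).
    I_l_max is an assumed rating; i_ld, i_lq follow an assumed flux/current relation."""
    i_ld = (phi_ld - np.sqrt(3.0 / 2.0) * phi_f) / L_d
    i_lq = phi_lq / L_q
    current_ok = np.hypot(i_ld, i_lq) <= I_l_max / np.sqrt(2.0)
    duty_ok = np.linalg.norm(d_l) <= 1.0 / np.sqrt(2.0)
    return bool(current_ok and duty_ok)

print(elevator_constraints_ok(0.06, 0.01, np.array([0.3, 0.2])))
```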
Usually, there is also a constraint on the rotor position:
θ_l,min ≤ θ_m(t) ≤ θ_l,max, (2.6.6)
where θ_l,min, θ_l,max ∈ R are, respectively, the minimal and maximal rotor angles of the pulley during a travel, i.e., the initial and final angles. Fortunately, the constraint (2.6.6) can be eliminated thanks to the constraints (4.3.6c), (4.3.8) and (2.6.7). Indeed, the time integral of the rotor speed is the rotor angle, i.e.,
θ_m(t) = θ_0 + ∫_{t_0}^{t} ω_l(τ)dτ. (2.6.7)
If the elevator goes down, θ_l,min = θ_0, θ_l,max = θ_f, and ω_l,min = 0. Thus, from (4.3.6c), (4.3.8) and (2.6.7), we obtain the following chain of inequalities:
0 = ∫_{t_0}^{t} ω_l,min dτ ≤ ∫_{t_0}^{t} ω_l(τ)dτ ≤ ∫_{t_0}^{t_f} ω_l(τ)dτ = θ_f − θ_0 = θ_l,max − θ_l,min,
hence 0 ≤ θ_m(t) − θ_l,min ≤ θ_l,max − θ_l,min, which is exactly (2.6.6).
Therefore, (4.3.6c), (4.3.8) and (2.6.7) form a sufficient condition for (2.6.6) when the elevator goes down. The same conclusion holds for the case where the elevator goes up.
Battery unit: In general, an electrical storage unit has some limitations on the quantity of charged energy. Furthermore, the battery stored charge must be greater than half its capacity (kept in case of unexpected events):
0.5x b,max ≤ x b (t) ≤ x b,max , (2.6.8)
with x b,max ∈ R 2 . Also, the battery charge/ discharge current respects some limitation range given by the manufacturer: i b,min ≤ i bb (t) ≤ i b,max , (2.6.9)
with i b,min , i b,max ∈ R. The duty cycle needs to respect the following limitations:
0 ≤ d b (t) ≤ 1.
(2.6.10)
Supercapacitor unit: The supercapacitor has the advantage of a high power level. However, its storage capacity is low, and its charge is constrained by: s q_s,max ≤ q_s(t) ≤ q_s,max, (2.6.11), with the maximal supercapacitor charge q_s,max ∈ R and s ∈ [0, 1]. Similarly to the battery unit, the duty cycle of the supercapacitor converter is also limited as follows:
0 ≤ d s (t) ≤ 1.
(2.6.12)
External grid: The constraints of the external grid come from the limited available power P_e(t), as in (2.6.13).
Since the DC bus voltage v_e(t) in Fig. 2.3.4 is nearly constant, the limitation becomes a constraint on the current, i_e(t), described as: i_e,min ≤ i_e(t) ≤ i_e,max, (2.6.13) with i_e,min, i_e,max ∈ R. A low current limit implies a low subscription cost. Thus, one way to minimize the electricity cost is to decrease the peak power purchased from the external grid. However, in this work we aim at decreasing this cost by purchasing electricity when it is cheap and reusing it when the electricity price is high.
Cost functions
In this section, we present the cost functions for the microgrid control and supervision. Due to the complexity of microgrid dynamics (multi-time scales, high dimension, nonlinearity), a centralized optimization problem is intractable. Thus, as also employed in the literature, the energy management for the microgrid is separated into two sub-problems with different time scales, objectives and control variables, that is, dissipated energy minimization and electricity cost minimization:
• The first objective is for the fast time scale (within a range of seconds-minutes), corresponding to the supercapacitor dynamics and mechanical elevator. It aims to minimize the dissipated energy within the microgrid during an elevator travel.
• The second objective is investigated for the slow time scale (within a range of minutes-hours), corresponding to the battery dynamics, the renewable power, the electricity price and the passenger travel statistics. It aims to minimize the electricity cost through the external grid current i_e(t), given the passenger requests m_c, t_in, t_fi, θ defined in Section 2.6.1. Therefore, the cost function will generally be non-quadratic and time-varying.
Dissipated energy minimization:
The control for the elevator position is the combination of DC bus voltage control at a fast time scale and the microgrid component regulation at a slower time scale. The DC bus control needs to guarantee a constant reference for the DC bus voltage. However, as illustrated in Fig. 2.3.5, we see that each component has its own DC bus voltage due to the resistance of the transmission lines. Therefore, the bus stability aims to regulate the load voltage to a reference value in order to guarantee its normal operation. Moreover, the constant DC bus voltage also indicates the satisfaction of required load electrical power, i.e., the power balancing objective will be ensured. Some methods to control the DC bus voltage can be found in [START_REF] Alamir | Constrained control framework for a stand-alone hybrid[END_REF], Zhao and Dörfler, 2015, Zonetti et al., 2015] which is not the main objective of our work. Consequently, the bus voltage is quickly stabilized and can be represented by an additional constraint to the original microgrid dynamics given in (2.5.7)-(2.5.10c):
A_1 x(t) = b_1, (2.6.14)
with A_1 = [0^T_2  1  0^T_17] ∈ R^{1×20}, b_1 = C_t,3 v*_l ∈ R, where v*_l is the reference DC bus voltage and C_t,3 is the corresponding bus capacitance.
Each component has its own objective; e.g., for a reliable system operation, the elevator must arrive at the requested building floor within a suitable time interval. Moreover, the supercapacitor charge must be at its nominal value at the end of the elevator travel, for the security of the next travel. These objectives are described by the following constraint:
A 2 x(t 1 ) = b 2 , (2.6.15)
where t 1 is the final instant of the elevator travel,
A_2 = [0_{3×2}  I_3] ∈ R^{3×5},  b_2 = [x*_s  (x*_m)^T]^T ∈ R^3,
with the reference values of the supercapacitor charge, x*_s ∈ R, and of the machine speed and angle, x*_m ∈ R^2. This constraint can be simplified by removing the supercapacitor unit and replacing the electro-mechanical elevator with a statistical average power profile P_l(t), which is derived from the data of the passenger mass m_c, the elevator start/stop instants t_in, t_fi and the building floor required by the passengers θ. The simplified dynamics is used for the higher control level, i.e., the electricity cost minimization.
The cost function penalizes the dissipated energy within the microgrid during the elevator travel:
V(d(t)) = −∫_{t_0}^{t_1} f^T_R(t) e_R(t) dt, (2.6.16)
where t_0, t_1 are the initial and final instants of the elevator travel; f_R(t), e_R(t) ∈ R^11 are the resistive flow and effort vectors defined in (2.5.4); d(t) ∈ R^4 are the converter duty cycles of the components. For simplicity, we decompose this cost into the two following costs:
V_s(d_b(t), d_s(t)) = −∫_{t_0}^{t_1} f^T_R(t) [I_9, 0 ; 0, 0] e_R(t) dt, (2.6.17a)
V_l(d_l(t)) = −∫_{t_0}^{t_1} f^T_R(t) [0, 0 ; 0, I_2] e_R(t) dt, (2.6.17b)
where d_s(t), d_b(t) ∈ R, d_l(t) ∈ R^2
are the converter duty cycles of the supercapacitor unit, of the battery unit and the electro-mechanical elevator, respectively. V l (d l (t)) will be considered in the controller of the electro-mechanical elevator in Chapter 4. The optimization problem corresponding to V s (d b (t), d s (t)) and its relation with the previous optimization will be considered in future works.
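Numerically, the cost (2.6.16) is just a time integral of the resistive power. The sketch below evaluates it by the trapezoidal rule from sampled flow/effort trajectories; the array shapes and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def dissipated_energy(t, f_R, e_R):
    """V = -int f_R(t)^T e_R(t) dt, trapezoidal approximation.
    t: (N,) time samples; f_R, e_R: (N, 11) resistive flow/effort trajectories."""
    p_R = np.einsum('ij,ij->i', f_R, e_R)      # instantaneous f_R^T e_R
    return -np.trapz(p_R, t)

# Example with synthetic data:
t = np.linspace(0.0, 10.0, 1001)
f_R = np.random.default_rng(1).normal(size=(t.size, 11)) * 0.1
e_R = -2.0 * f_R                               # e.g. an Ohmic-type relation f_R = -R e_R
print(dissipated_energy(t, f_R, e_R))          # non-negative dissipated energy
```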
Electricity cost minimization: The electricity cost minimization over a day is meaningful since the electricity price changes during the day. Therefore, energy can be stored in the battery when it is cheap and reused when the electricity price is high. The electricity cost is described by the integral of the product of the electricity price and the power purchased from the external grid:
V_e(i_e) = −∫_{0h}^{24h} price(t) i_e(t) v_e(t) dt, (2.6.18)
where price(t) is the electricity price at instant t; i_e(t), v_e(t) ∈ R are the external grid current and voltage, respectively.
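For a sampled price signal, the cost (2.6.18) becomes a simple discrete sum. The snippet below uses synthetic one-day profiles at an assumed hourly resolution; the tariff shape, current values and units are purely illustrative.

```python
import numpy as np

h = 3600.0                                             # one-hour sampling [s] (assumed)
hours = np.arange(24)
price = 0.10 + 0.05 * ((7 <= hours) & (hours <= 21))   # assumed day/night tariff (illustrative units)
i_e = np.where(price < 0.12, 5.0, -2.0)                # buy when cheap, sell when expensive [A]
v_e = 400.0                                            # nearly constant DC-side voltage [V]

# Discrete version of (2.6.18): V_e = -sum price_k * i_e_k * v_e * h
V_e = -np.sum(price * i_e * v_e * h)
print("electricity cost:", V_e)
```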
Conclusions
This chapter introduced a Port-Hamiltonian formulation for a DC microgrid elevator system. The model takes into account the nonlinear dynamics of the components controlled by the corresponding converters. The Park transformation for the Permanent Magnet Synchronous Machine is considered in a unified Port-Hamiltonian formalism. Then, the global dynamics of the DC microgrid is derived as an implicit Port-Hamiltonian dynamical system. Finally, the multilayer control problem is formulated within a constrained optimization-based control framework which includes the system physical limitations, the network operation and suitable cost functions. Further details on Port-Hamiltonian formulations applied to various energy systems can be found in [Zonetti, Modeling and control of HVDC transmission systems from theory to practice and back; Stegink et al., 2017; Dòria-Cerezo, Passivity-based control of multi-terminal HVDC systems under control saturation constraints; Schiffer, A survey on modeling of microgrids: from fundamental physics to phasors and voltage sources].
The next chapter will concentrate on the discretization method for the proposed Port Hamiltonian model formulation which will be later employed within the discrete-time load balancing problem of the microgrid system.
Chapter 3
Energy-preserving discretization
Introduction
For numerical simulation and practical control purposes, a discrete-time model derived from the continuous-time model is important. Since any discrete model is only an approximation of the continuous process from which it is derived, the discretization method only aims at preserving some specific properties of the continuous model. Therefore, the continuous dynamics needs to be formulated in an appropriate form, explicitly describing the system properties. Here we study the energy management of microgrids, hence the continuous dynamics should describe their stored energy and the power exchanges. These properties of the continuous systems are explicitly taken into account through the Port-Hamiltonian (PH) formulation [Duindam, Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach]. In what follows, we present some existing methods based on this formulation to preserve the energy and/or the power exchange in the discretization process.
In the PH systems, the stored energy is represented by the Hamiltonian. The power exchange structure within the system is represented by a skew-symmetric matrix called the interconnection matrix (for more details, see Appendix B.2). This matrix describes a Dirac structure (DS) and depends on the state coordinates. Moreover, when the interconnection matrix satisfies the integrability condition, there exist state coordinates in which the interconnection matrix of the corresponding Hamiltonian system is constant (see Section B.2). Thanks to the skew-symmetric form of the interconnection matrix and the chain rule for the time derivative of the Hamiltonian, we can prove the energy conservation property along the state trajectory. Another important property of the Hamiltonian system is the symplecticity property defined in Appendix B (see Section B.1, Definition B.1.3), which considers the volume conservation along the state trajectories. More specifically, symplecticity means that the volume defined by a set of initial states is equal to the volume defined by the corresponding set of states at the studied instant. This symplecticity is proved for Hamiltonian systems with canonical interconnection matrices, as in [Marsden and Ratiu, 1999, Hairer et al., 2006].
Some discretization methods for Hamiltonian systems are studied in [START_REF] Hairer | Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations[END_REF]. The authors aim at preserving the invariants (e.g., Hamiltonian, Casimirs) or the symplecticity property. The invariant conservation means that it is the same at each time step. The symplectic method only conserves the energy periodically, i.e., the energies at some fixed instants are the same. By preserving energy or by symplecticity, the energy levels at different time instants are kept near the continuous energy levels. Thus, the discrete state variables approximate well the continuous state variables even with long discrete-time step and long range simulation.
The open, lossless PH systems are obtained by adding external ports to the Hamiltonian system [van der Schaft and Jeltsema, 2014, [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]. If the external port variables are the control signals, the PH system is called a lossless Port-Controlled Hamiltonian (PCH) system. In this case, the DS is described by the interconnection and input matrices, which are called the structure matrices. Due to the energy supplied from the exterior, the Hamiltonian does not remain constant. Instead, we have an energy balance for the stored and the supplied energy at the external ports. There is no standard definition of symplecticity for PCH system. It can be defined by the conservation of DS, which is similar to the symplecticity in the Hamiltonian system, using a Poisson structure. This property is easy to test when the structure matrices are constant.
There are some works which tackle the energy balance and/or the symplecticity property preservation for the discrete-time model of lossless PCH systems [START_REF] Talasila | Discrete port-Hamiltonian systems[END_REF], Laila and Astolfi, 2006, Aoues, 2014]. In [START_REF] Talasila | Discrete port-Hamiltonian systems[END_REF], the authors propose a theoretical definition for the discrete-time PCH system which preserves the symplecticity. However, the proposed discrete-time model does not preserve the energy balance since the chain rule for the time derivation of the discrete Hamiltonian is not taken into account.
In [Laila andAstolfi, 2006, Aoues, 2014], the authors propose discretization schemes which aim at preserving the energy balance. The proposed algorithms also preserve the energy in the case of the closed Hamiltonian system. Generally, it is difficult to find a discretization method which preserves both the symplecticity and the energy balance. They can be nevertheless achieved in the case of linear lossless PCH system with a canonical interconnection matrix [START_REF] Aoues | Canonical interconnection of discrete linear port-Hamiltonian systems[END_REF].
By connecting the resistive elements to the lossless PCH system, we obtain the lossy PCH system [van der Schaft andJeltsema, 2014, Duindam et al., 2009]. In this case, the definition of symplecticity is similar to the one for the lossless PCH system. The energy balance is now defined by the zero sum of the energy flowing through the storage, resistive and external ports. The authors in [START_REF] Stramigioli | Sampled data systems passivity and discrete Port-Hamiltonian systems[END_REF], Aoues, 2014, Falaize and Hélie, 2017] propose some discretization methods which preserve the energy balance by taking into account the skew-symmetric form of the interconnection matrix. For their implementation the discrete-time flow/effort variables and the discrete-time interconnection matrix are considered. Also, they satisfy the discrete-time chain rule of the time derivation of the Hamiltonian and the hybrid input-output representation of DS of the continuous PCH system.
In our work, another class of open lossy PH system is studied where the control signals modulate the interconnection matrices (and consequently the DS) [START_REF] Escobar | A Hamiltonian viewpoint in the modeling of switching power converters[END_REF]. In here, some external port variables have to satisfy time-varying power constraints which comes from the integration of renewable sources. Since we consider the open-loop PH system, the control signals can vary arbitrarily and the DS is generally not constant. Consequently, the symplecticity for the discrete-time model is not possible. Thus, the proposed definition for the discrete-time PH system in [START_REF] Talasila | Discrete port-Hamiltonian systems[END_REF] can not be used directly. However, the energy balance conservation can still be obtained for the discrete-time model using the method presented in [START_REF] Stramigioli | Sampled data systems passivity and discrete Port-Hamiltonian systems[END_REF], Aoues, 2014, Falaize and Hélie, 2017]. In this particular work, we consider it for two non-linear subsystems of the DC microgrid, that is, the electro-mechanical elevator and the global multi-source elevator system.
This chapter presents two main contributions as follows.
• The time discretization method in [Stramigioli, Sampled data systems passivity and discrete Port-Hamiltonian systems; Aoues, 2014] is generalized for nonlinear lossy open PH systems with control-modulated structure matrices. The studied system includes the energy storage, the resistive element, the effort/flow source, the time-varying power source and the DS. The DS is described by a skew-symmetric matrix which explicitly depends on the state and control variables. Therefore, the non-linearities lie in the state and control modulation of the structure matrices and in the time-varying power constraint of the external port. We define the discrete-time state, flow and effort variables which satisfy the discrete relations of the system elements as follows. The hybrid input-output representation of the DS is preserved by keeping the skew-symmetric interconnection matrix.
The chain rule for the Hamiltonian time derivative is preserved by an appropriate choice of the discrete storage effort/flow variables. The discrete forms of the resistive elements and of the effort/flow sources are similar to the continuous ones. Also, the discretization of the time-varying power is represented by its average on a time step. Thanks to the presented discretization formulation, the discrete-time model preserves the energy balance. Moreover, we show that a time invariant coordinate transformation transforms an energy-preserving time discretization PH system to another one. This can be used to improve the accuracy of the discretization method.
• The presented method is validated through various simulations for the electro-mechanical elevator which is a lossy PCH system. Fortunately, since the Hamiltonian is quadratic, we can easily find a discretization scheme for the storage port variables which guarantees the chain rule for the Hamiltonian time derivation. As mentioned in the previous chapter, the electro-mechanical elevator dynamics is a combination of two time scale dynamics: electrical and mechanical parts. The electrical dynamics corresponds to the PMSM stator and is faster than the mechanical elevator dynamics. Therefore, to study the behavior of the electro-mechanical elevator dynamical model, we use small discretization time steps. This makes the numerical errors of the mechanical variables (i.e., elevator speed and position) insignificant. Also, for the studied discretization methods we implement and compare different schemes such as the explicit, implicit and midpoint rules for the interconnection matrix.
The presented method is applied to the multi-source elevator system, which is an open lossy PH system with a control-modulated structure. Similarly to the case of the electro-mechanical elevator, only short time duration simulations are considered. They actually correspond to the fast dynamics of the converters and DC bus. In this case, the midpoint rule within the energy-preserving method is compared with the first-order Euler methods. We find that its order is lower than the one obtained in the simulations for the electro-mechanical elevator. Besides, it does not create the energy sum error and results in highly accurate discrete state variables.
This chapter is organized as follows. Section 3.2 formally defines the energy-preserving discretization method. Section 3.3 presents the implementation of the proposed method for the electro-mechanical elevator which serves for the machine parameter identification. Section 3.4 considers the implementation of the same discretization method for the global multi-source elevator system in the fast time scale. Some conclusions and discussions are presented in Section 3.5.
Energy-preserving discretization
Time discretization concept
A time approximation method has three elements: an approximation basis, a weight vector and an approximate function. Consider the time interval [t_0, t_f] and the set G of all functions g : [t_0, t_f] → R. The approximation basis is defined by choosing N independent functions λ_i ∈ G with i ∈ {1, 2, . . . , N}. We gather them in a vector Λ(t) = [λ_1(t) . . . λ_N(t)]^T ∈ R^N. Next, we represent the function g(t) in this basis by N real numbers g_i with i ∈ {1, 2, . . . , N}, which we call the weights or coordinates of g(t). Similarly, we gather them in a vector g_d = [g_1 . . . g_N] ∈ R^{1×N}. The approximation of the function g(t) is denoted by g_a(t) ∈ R and is defined as:
g a (t) = N i=1 g i λ i (t) = g d Λ(t). (3.2.1)
The approximation (3.2.1) is convergent if
lim N →∞ g a (t) = g(t), ∀t ∈ [t 0 , t f ]. (3.2.2)
Note that the approximation basis is chosen according to different criteria, e.g., the required accuracy, the available computation. However, to simplify the numerical computation we consider in (3.2.1) the first-order B-splines basis functions defined as:
λ i (t) = 1, if t 0 + (i -1)h ≤ t < t 0 + ih, 0, else , ∀i ∈ {1, 2, . . . , N }, (3.2.3)
where h = (t_f − t_0)/N is the time step. Note that this choice is suitable for practical control. In fact, the real control signals are implemented at the beginning of each time step and kept constant until the next time step while waiting for the next control signals. The same situation happens for the feedback signals, which are usually sampled with the same frequency. Therefore, in our work, the time discretization is defined as the time approximation with the first-order B-splines basis as in (3.2.1)-(3.2.3). The weights g_d in (3.2.1) represent the discrete function of g(t).
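The sketch below builds the piecewise-constant (zero-order-hold) approximation (3.2.1)-(3.2.3) of a scalar signal and shows how the approximation error shrinks as N grows; the test signal and the choice of weights g_i are arbitrary assumptions for illustration.

```python
import numpy as np

def zoh_approx(g, t0, tf, N, t_eval):
    """Piecewise-constant (first-order B-spline) approximation of g on [t0, tf]."""
    h = (tf - t0) / N
    weights = np.array([g(t0 + i * h) for i in range(N)])     # one simple choice of g_i
    idx = np.clip(((t_eval - t0) // h).astype(int), 0, N - 1)
    return weights[idx]

g = np.sin
t = np.linspace(0.0, 3.0, 2000)
for N in (10, 100, 1000):
    err = np.max(np.abs(zoh_approx(g, 0.0, 3.0, N, t) - g(t)))
    print(N, err)          # the error decreases roughly like 1/N
```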
Using the above prerequisites the next subsection investigates the discrete functions of the state, flow and effort variables given in the PH Definition 2.2.5.
Energy-preserving discretization method
For the PH systems, the dynamical equations are described in terms of relations among sets of effort, flow and state variables. Thus, the time discretization of PH dynamics is defined as a set of algebraic equations which include discrete functions of the state, control, flow and effort variables. The main idea of the investigated method is to preserve the energy balance while maintaining unchanged the structure matrices describing the continuous time model.
First we define the discretized constitutive equations for the interconnection and for the DC microgrid elements (i.e., energy storage, static elements, time varying power source). The skew-symmetric form of the discrete-time interconnection matrix will guarantee power conservation within the interconnection. The chain rule for the discrete Hamiltonian time derivative and the average power for the time-varying power source will guarantee the energy conservation.
Discrete interconnection matrix: Let us recall the hybrid input-output representation of a PH system (see also Section 2.2.2 and [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]):
f (t) = D(x, u)e(t, u), (3.2.4)
where x(t) ∈ X ⊂ R n , u(t) ∈ U ⊂ R m are the state and control variables. The port variables are gathered in the flow and effort vectors as:
f (t) = f S (t) f R (t) f E (t) T , e(t) = e S (t) e R (t) e E (t, u) T . (3.2.5)
where f R (t), e R (t) are the flow and effort variables of the resistive element defined in (2.2.5), and f E (t), e E (t) are the flow and effort variables of the environment where some ports satisfy some time-varying constraints.
The flow and effort of the energy storage are described correspondingly by the time derivative of state variable f S (t) =ẋ(t), and the gradient vector of the Hamiltonian e S (t) = ∇H(x). Furthermore, in (3.2.4) the system interconnection is described by the skew-symmetric matrix D(x, u) given as:
D(x, u) = -J(x, u) -G SR (x, u) -G(x, u) G T SR (x, u) 0 G RE (x, u) G T (x, u) -G T RE (x, u) M(x, u) = -D T (x, u). (3.2.6)
Note that the control variables u(t) modulate the external effort. If the interconnection matrix D does not depend on the control signal u, the PH system defined by (3.2.4)-(3.2.6) is PCH system. Moreover, the skew-symmetric property of D(x, u) entails the internal power conservation of the system (3.2.4), i.e., e T (t)f (t) = 0.
The chain rule for the Hamiltonian, that is
Ḣ(t) = ẋ^T(t)∇H(x), (3.2.7)
combined with the power conservation within the Dirac structure and the resistive relation, leads to the energy balance Ḣ(t) = e_R^T(t)f_R(t) + e_E^T(t)f_E(t) ≤ e_E^T(t)f_E(t). In fact, this represents the passivity property with storage function H(x) and input/output pair e_E(t), f_E(t).
Besides the state modulation, in the converter system the interconnection is switched between different topologies. For describing the topologies some binary variables are added in the interconnection matrices. However, since they are switched repeatedly with a high frequency, we replace the binary variables by their averages. More specifically, this average is the ratio of the time duration when the binary variable is 1 and the switching cycle duration (i.e., the sum of the duration when the binary variable is 1 and the duration when the binary variable is 0). Furthermore, they are also the control variables.
Hereinafter, the discretization of the interconnection matrix D(x, u) in (3.2.6) is derived which varies with the discrete time. Definition 3.2.1 (Discrete interconnection). Let D(x, u) be the interconnection matrix defined in (3.2.4) and (3.2.6). Its discretization D d over the time interval [t 0 , t f ] is an arranged set of N matrices D i with i ∈ {1, . . . , N } denoting the time step such that:
D i = D(g D (i, x d ), u i ), (3.2.10)
where g_D is a chosen discretization map g_D : {1, . . . , N} × X^N → X and x_d ∈ X^N is the discrete function of x(t). Thus, the discretization of the Dirac structure (3.2.4) is described by:
f_i = D(g_D(i, x_d), u_i)e_i. (3.2.11)
The previous definition states that the discrete form of the interconnection matrix D(x, u) is obtained by replacing x and u with g_D(i, x_d) and u_i, respectively. There are many possible choices for g_D, e.g.,
g_D(i, x_d) = x_i,  g_D(i, x_d) = x_{i+1},  g_D(i, x_d) = (x_i + x_{i+1})/2. (3.2.12)
The above discretization schemes are not computationally demanding, as will be shown through simulations for the electro-mechanical elevator and the microgrid system. We denote the discrete functions of the effort and flow variables given in (3.2.5) by e_d and f_d. In particular, the discrete storage flow and effort vectors f_S,i(x_d), e_S,i(x_d) are called admissible (Definition 3.2.2) if they satisfy:
−h f^T_S,i(x_d) e_S,i(x_d) = H(x_i) − H(x_{i−1}), ∀i ∈ {1, . . . , N},
lim_{N→+∞} f_S,i(x_d) = ẋ(t_i), ∀i ∈ {1, . . . , N},
lim_{N→+∞} e_S,i(x_d) = ∇H(x_i), ∀i ∈ {1, . . . , N}. (3.2.14)
Generally, it is difficult to find all admissible choices for the discrete flow and effort vectors. However, we can characterize a class of these solutions. A popular discretization scheme for the storage flow vector is the left finite difference formula:
f_S,i(i, x_d) = −(x_i − x_{i−1})/h. (3.2.15)
Then, the corresponding discretization for the storage effort vector is given by the discrete derivative of the Hamiltonian:
e_S,i(i, x_d) = DH(x_i, x_{i−1}). (3.2.16)
where the discrete derivative, Dg, of a function, g, is defined in the following definition.
Definition 3.2.3. (Discrete derivative [Gonzalez, 1996]) Let X be an open subset of R m . A discrete derivative for a smooth function g : X → R is a mapping Dg : X × X → R m with the following properties:
1. Directionality: Dg(x_1, x_2)^T (x_1 − x_2) = g(x_1) − g(x_2) for any x_1, x_2 ∈ X,
2. Consistency: Dg(x_1, x_2) = ∇g((x_1 + x_2)/2) + O(||x_1 − x_2||) for all x_1, x_2 ∈ X with ||x_1 − x_2|| sufficiently small.
Proposition 3.2.4. The discretization scheme defined by (3.2.15) and (3.2.16) is admissible.
In the following, we present two examples for finding the discrete derivative of the Hamiltonian.
Example 3.2.5. (Midpoint rule for the discrete derivative of a quadratic Hamiltonian [START_REF] Hairer | Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations[END_REF]).
Let H(x) be a quadratic Hamiltonian:
H(x) = Q 0 + Q T 1 x + 1 2 x T Q 2 x, (3.2.17) where x ∈ X ⊂ R n , Q 0 ∈ R, Q 1 ∈ R n×1 , Q 2 ∈ R n×n such that Q 2 = Q T 2 ≥ 0.
The discretization for the storage effort vector defined by the midpoint rule is a discrete derivative of the Hamiltonian H(x) in (3.2.17):
e_S,i(i, x_d) = Q_1 + Q_2 (x_i + x_{i−1})/2, ∀i ∈ {1, . . . , N}. (3.2.18)
Consequently, the discretization scheme defined by (3.2.15) and (3.2.18) is admissible thanks to Proposition 3.2.4. This method will be employed later in Section 3.3 and 3.4 for the simulations of the electro-mechanical elevator and of the microgrid system.
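A quick numerical check of the admissibility condition (3.2.14) for the scheme (3.2.15) with (3.2.18): for a quadratic Hamiltonian with arbitrary (here random, toy) weight matrices and two arbitrary states, the quantity −h f_S^T e_S reproduces H(x_i) − H(x_{i−1}) exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 5, 1e-3
Q0, Q1 = 0.3, rng.normal(size=n)
A = rng.normal(size=(n, n)); Q2 = A @ A.T             # symmetric positive semi-definite (toy)

H = lambda x: Q0 + Q1 @ x + 0.5 * x @ Q2 @ x
x_prev, x_next = rng.normal(size=n), rng.normal(size=n)

f_S = -(x_next - x_prev) / h                          # (3.2.15), left finite difference
e_S = Q1 + Q2 @ (x_next + x_prev) / 2.0               # (3.2.18), midpoint rule

print(-h * f_S @ e_S, H(x_next) - H(x_prev))          # identical up to round-off
```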
Example 3.2.6. (Discrete gradient [Aoues, 2014, Falaize andHélie, 2017]). Another method for finding the discrete derivative of the Hamiltonian is by using the discrete gradient. Consider that the discrete-time states x i-1 , x i are two opposite vertices of a hypercube in the state space X ∈ R n , where x i-1 is the origin and x i is the final state. We consider a path from x i-1 to x i including n sub-paths which are the edges of the hypercube. Thus, there are n! choices for this path. The path and the corresponding Hamiltonian evolution are described by the following graph:
x i-1 → . . . → . . . x k,i-1 . . . x j,i-1 . . . → . . . x k,i . . . x j,i-1 . . . → . . . x k,i . . . x j,i . . . → . . . → x i , x i-1 → . . . → . . . → x k,i-1 → x j,i-1 → . . . → . . . , H(x i-1 ) → . . . → . . . → H k (x k,i-1 ) → H j (x j,i-1 ) → . . . → H(x i ), (3.2.19)
where x k,i and x j,i are the k th and j th coordinates of the state vector at the instant i, x j,i-1 ∈ R n is the discrete state vector where only the j th coordinate is different from the j th coordinate of the discrete state vector on the left in the series (3.2.19), i.e., x k,i-1 . The discrete gradient of the Hamiltonian is denoted by ∇H(x i-1 , x i ) ∈ R n where the j th coordinate is defined by:
∇_j H(x_{i−1}, x_i) = (H_j(x_{j,i−1}) − H_k(x_{k,i−1}))/(x_{j,i} − x_{j,i−1}). (3.2.20)
From Proposition 6 in [Aoues, 2014], the mapping ∇H : R n × R n → R n defined by (3.2.20) is a discrete derivative. Therefore, using Proposition 3.2.4, the discretization scheme for the storage effort vector, e S (t), and (3.2.15) for the storage flow vector, f S (t), are admissible:
e S,i (i, x d ) = ∇H(x i-1 , x i ). (3.2.21)
Furthermore, if the Hamiltonian is quadratic as in (3.2.17) and the weight matrix Q 2 is diagonal, the discrete gradient of the Hamiltonian is rewritten as:
∇H(x_{i−1}, x_i) = [ . . . (H_j(x_{j,i−1}) − H_k(x_{k,i−1}))/(x_{j,i} − x_{j,i−1}) . . . ]
= [ . . . (Q_{1,j} x_{j,i} + (1/2)Q_{2,j,j} x_{j,i}^2 − Q_{1,j} x_{j,i−1} − (1/2)Q_{2,j,j} x_{j,i−1}^2)/(x_{j,i} − x_{j,i−1}) . . . ]
= [ . . . Q_{1,j} + Q_{2,j,j} (x_{j,i} + x_{j,i−1})/2 . . . ] = Q_1 + Q_2 (x_i + x_{i−1})/2,
where Q 1,j is the element at the j th row of matrix Q 1 , Q 2,j,j is the element at the j th row and j th column of matrix Q 2 . In this case, we see that the discrete gradient of the Hamiltonian satisfies the midpoint rule for the Hamiltonian gradient vector.
Besides the linear discretization scheme for the flow vector (3.2.15), an admissible choice may be nonlinear. For example, if all the coordinates of the state vector are different from zero, a discretization for the storage flow variable is given by:
f_S,j,i(i, x_d) = −(x_{j,i}^2 − x_{j,i−1}^2)/(2h x_{j,i}), j = 1, . . . , n, i = 1, . . . , N. (3.2.22)
Next, the discretization for the storage effort vector is obtained by rewriting (3.2.16) as:
e_S,j,i(i, x_d) = (2x_{j,i}/(x_{j,i} + x_{j,i−1})) DH_j(x_{i−1}, x_i), j = 1, . . . , n, i = 1, . . . , N. (3.2.23)
where n ∈ N is the dimension of the state space X, f S,j,i , e S,j,i , x j,i , DH j are, respectively, the j th coordinates of the flow, effort, state and discrete derivative vectors at the instant t i . By multiplying the discrete time flow in (3.2.22) by the discrete time effort in (3.2.23), we can verify the energy conservation condition (3.2.14) as follows:
−h f_S,i(x_d)^T e_S,i(x_d) = −h Σ_{j=1}^{n} f_S,j,i(i, x_d) e_S,j,i(i, x_d)
= −h Σ_{j=1}^{n} ( −(x_{j,i}^2 − x_{j,i−1}^2)/(2h x_{j,i}) ) ( 2x_{j,i}/(x_{j,i} + x_{j,i−1}) ) DH_j(x_{i−1}, x_i)
= Σ_{j=1}^{n} (x_{j,i} − x_{j,i−1}) DH_j(x_{i−1}, x_i) = H(x_i) − H(x_{i−1}).
Generally, the nonlinear discretization schemes are not defined over the whole state space, e.g., the scheme (3.2.22) is not defined on {x ∈ X | ∃j ∈ {1, . . . , n}, x_j = 0}. Furthermore, they require a large computational effort due to the complex algebraic equations derived for the discretization scheme. Moreover, these equations may have no solution. However, in some cases, to improve the accuracy of the discretization we must use nonlinear schemes. An example of a nonlinear flow discretization will be considered in Section 3.2.3. Discrete static elements: An element is called static if its port variables satisfy a static relation. From this definition, the static elements of the microgrid system are the static resistive element, the constant power source, the effort/flow sources, and the fast time-scale dynamics at its steady state. These static relations are defined for the discrete-time model as in the following definition. Definition 3.2.7 (Discrete static elements). Let f(t), e(t) be the conjugate variables of the static ports with the static relation:
s(f , e) = 0, ∀t ∈ [t 0 , t f ]. (3.2.24)
Their discrete functions f d , e d are chosen such that:
s(f_i, e_i) = 0, ∀i ∈ {1, . . . , N}. (3.2.25)
Discrete time-varying power source: There are some ports where the product of the conjugate variables is not static, i.e., a time-varying power source P(t) with:
f(t)e(t) = P(t), ∀t ∈ [t_0, t_f]. (3.2.26)
Definition 3.2.8 (Discrete time-varying power source). The discrete functions f_d, e_d of such a port are chosen such that:
f_i e_i = P_i, with P_i = (1/h) ∫_{ih}^{(i+1)h} P(t)dt, ∀i ∈ {1, . . . , N}. (3.2.27)
The previous definition of the discretization of time varying power source implies that the supplied energy of the discrete power during the interval [t i , t i-1 ] is equal to the supplied energy of the continuous power. Obviously, it requires the exact data for the supplied power which is not always available.
Discussions: The discrete-time dynamics of the PH system (3.2.4)-(3.2.6) are specified by the discretization schemes in Definitions 3.2.1-3.2.8. We underline that the presented discretization method preserves the energy balance, i.e.,
H(x_i) − H(x_{i−1}) = h(f^T_R,i e_R,i + f^T_E,i e_E,i), ∀i ∈ {1, . . . , N}. (3.2.28)
Also, if the Hamiltonian is bounded from below, the discrete-time PH system is passive. However, the discretized energy may not be equal to the continuous one.
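In practice each discrete step is an implicit algebraic equation in x_i. The generic sketch below solves one step with scipy's root finder for a small lossy PH system with constant structure matrices (all matrices are toy placeholders, not the microgrid model) and checks the discrete balance (3.2.28).

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(3)
n, m, h = 4, 2, 1e-3
J = rng.normal(size=(n, n)); J = J - J.T               # toy skew-symmetric interconnection
G = rng.normal(size=(n, m))                            # toy resistive port matrix
R = np.diag([0.5, 1.0])                                # resistive matrix
Q1, Q2 = np.zeros(n), np.diag([1.0, 2.0, 0.5, 1.5])
H = lambda x: Q1 @ x + 0.5 * x @ Q2 @ x

def one_step(x_prev):
    def residual(x_next):
        e_S = Q1 + Q2 @ (x_next + x_prev) / 2.0        # midpoint storage effort
        e_R = -np.linalg.solve(R, G.T @ e_S)           # from f_R = G^T e_S and f_R = -R e_R
        return (x_next - x_prev) / h - (J @ e_S + G @ e_R)
    return fsolve(residual, x_prev)

x0 = rng.normal(size=n)
x1 = one_step(x0)
e_S = Q1 + Q2 @ (x1 + x0) / 2.0
e_R = -np.linalg.solve(R, G.T @ e_S)
f_R = -R @ e_R
print(H(x1) - H(x0), h * f_R @ e_R)                    # both sides of the discrete balance agree
```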
Remark 3.2.9. Note that the chosen discretization scheme needs to satisfy the convergence condition (Definition 4.1 in [START_REF] Hairer | Solving Ordinary Differential Equations I: Nonstiff Problems[END_REF]). If it is a linear multistep method, the necessary and sufficient conditions for the convergence are the stability and consistency (Theorem 4.2 in [START_REF] Hairer | Solving Ordinary Differential Equations I: Nonstiff Problems[END_REF]). However, since this is not the main goal of our work, we assume that these conditions are always respected.
Discrete-time Port-Hamiltonian system through coordinate transformation
In this subsection we consider the discrete-time PH system through the coordinate transformation. According to Proposition 2.4.2 every invertible time invariant transformation of the state space z = ϕ(x) preserves the PH formulation.
Let the Jacobian matrix of the state transformation be defined by:
W(x) ≜ ∂_x ϕ(x). (3.2.29)
For the simplicity, we denote by W(z) the representation of the Jacobian matrix by the transformed state W(ϕ -1 (z)) where ϕ -1 (z) implies the inverse state transformation. The transformed PH system is described by:
f z (t) = D z (z, u)e z (t, z, u), (3.2.30)
where z(t) ∈ ϕ(X), u(t) ∈ U are the state and control variables. The port variables are represented by the global flow and effort vectors as:
f_z(t) = [f_S,z(t)  f_R,z(t)  f_E,z(t)]^T,  e_z(t) = [e_S,z(t)  e_R,z(t)  e_E,z(t, u)]^T, (3.2.31)
where the storage flow and effort vectors are given by f_S,z(t) = ż(t), e_S,z(t) = ∇H(z). The transformed interconnection matrix is given by:
D z (z, u) = -W(z)J(z, u)W T (z) -W(z)G SR (z, u) -W(z)G(z, u) G T SR (z, u)W T (z) 0 G RE (z, u) G T (z, u)W T (z) -G T RE (z, u) M(z, u) . (3.2.32)
Note that the relations of the port variables in the two coordinates are described as follows:
f_S,z(t) = −ż(t) = −W(x)ẋ(t) = W(x)f_S(t),  e_S(t) = ∇H(x) = W^T(x)∇H(z) = W^T(x)e_S,z(t),
f_R,z(t) = f_R(t),  e_R,z(t) = e_R(t),  f_E,z(t) = f_E(t),  e_E,z(t) = e_E(t). (3.2.33)
Accordingly, the discrete-time port variables of the transformed system are mapped back to the original coordinates by:
x_i = ϕ^{−1}(z_i),  f_S,i = W^{−1}(g_D(i, z_d))f_S,z,i,  e_S,i = W^T(g_D(i, z_d))e_S,z,i,
f_R,i = f_R,z,i,  e_R,i = e_R,z,i,  f_E,i = f_E,z,i,  e_E,i = e_E,z,i. (3.2.34)
Thanks to the explicit description of the energy through the Hamiltonian in the PH formulation, the energy-preserving discretization method was developed. It is important to mention that there exists a special property called the Casimir for which a coordinate transformation of the PH formulation is necessary. Note that a Casimir C(x) is a function which is constant along the state trajectory with any Hamiltonian (see Appendix B, Section B.2.2). The following example illustrates an energy-preserving discretization scheme which preserves a Casimir by using a suitable coordinate transformation.
Example 3.2.11. (Discretization scheme for preserving the Hamiltonian and the Casimir)
PH system and Casimir property: Consider a PH system given by: ẋ(t) = J(x)∇H(x), (3.2.35) where
x(t) = [x 1 (t) x 2 (t) x 3 (t)] T ∈ R 3
is the state vector, the interconnection matrix and the Hamiltonian are given by:
J(x) = [ 0, 0, x_2 ;  0, 0, −x_1 ;  −x_2, x_1, 0 ],  H(x) = (1/2)(x_1 − 1/2)^2 + (1/2)x_2^2 + (1/2)x_3^2 = Q_0 + Q_1^T x + (1/2) x^T Q_2 x, (3.2.36)
with Q_0 = 1/8, Q_1 = [−1/2, 0, 0]^T, Q_2 = I_3.
It is easy to verify that the Hamiltonian H(x) and the Casimir
C(x) = (x_1^2 + x_2^2)/2
are the invariants of the considered PH system. In what follows, we present 5 time discretization schemes for the system (3.2.35)-(3.2.36), including:
• 2 classical schemes: the explicit and implicit Euler schemes,
• 2 energy-preserving schemes called the mix scheme 1 and the mix scheme 2,
• an energy-preserving scheme which also preserves the Casimir.
We partition the time duration [0, T ] to N time subintervals with the time step h = T N . Let i ∈ {1, . . . , N } be the time index.
The explicit/implicit Euler schemes: They are respectively given by the following expressions:
1 h (x i -x i-1 ) =J(x i-1 )(Q 1 + Q 2 x i-1 ), (3.2.37a) 1 h (x i -x i-1 ) =J(x i )(Q 1 + Q 2 x i ). (3.2.37b)
The mix scheme 1 and the mix scheme 2: Since the Hamiltonian is quadratic with the diagonal weight matrix Q 2 , we can choose the midpoint rule for the discretization of the effort vector ∇H(x). Thus, they are given by the following expressions:
1 h (x i -x i-1 ) =J(x i-1 ) Q 1 + Q 2 x i-1 + x i 2 , (3.2.38a) 1 h (x i -x i-1 ) =J(x i ) Q 1 + Q 2 x i-1 + x i 2 . (3.2.38b)
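As a numerical illustration, the sketch below simulates the toy system (3.2.35)-(3.2.36) with the explicit Euler scheme (3.2.37a) and with the mix scheme 1 (3.2.38a); since J is frozen at x_{i-1}, each mix-scheme step reduces to a small linear solve. The time step is an illustrative choice. The printed Hamiltonian drift of the mix scheme stays at round-off level, while Euler drifts.

```python
import numpy as np

Q1, Q2 = np.array([-0.5, 0.0, 0.0]), np.eye(3)
H = lambda x: 0.5 * (x[0] - 0.5) ** 2 + 0.5 * x[1] ** 2 + 0.5 * x[2] ** 2
J = lambda x: np.array([[0.0, 0.0, x[1]],
                        [0.0, 0.0, -x[0]],
                        [-x[1], x[0], 0.0]])

def simulate(h=0.01, T=3.0, scheme="mix1"):
    x = np.array([1.0, 1.0, -1.0])
    for _ in range(int(T / h)):
        Jk = J(x)
        if scheme == "euler":                               # (3.2.37a)
            x = x + h * Jk @ (Q1 + Q2 @ x)
        else:                                               # mix scheme 1, (3.2.38a)
            A = np.eye(3) - 0.5 * h * Jk @ Q2
            b = x + h * Jk @ (Q1 + 0.5 * Q2 @ x)
            x = np.linalg.solve(A, b)
    return x

x0 = np.array([1.0, 1.0, -1.0])
for s in ("euler", "mix1"):
    print(s, "H drift:", H(simulate(scheme=s)) - H(x0))
```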
The Casimir conservation scheme: Let z(t) = [z_1(t) z_2(t) z_3(t)]^T ∈ R^3 be the transformed state vector with
z_1 = x_1,  z_2 = x_3/x_2,  z_3 = C(x) = (x_1^2 + x_2^2)/2.
Thus, the inverse state transformation is:
x_1 = z_1,  x_2 = √(2z_3 − z_1^2),  x_3 = z_2 √(2z_3 − z_1^2).
Note that we consider only the half-space defined by x_2 > 0, where the transformation is feasible. The Jacobian matrix of this transformation is:
W(x) = [ 1, 0, 0 ;  0, −x_3/x_2^2, 1/x_2 ;  x_1, x_2, 0 ] = [ 1, 0, 0 ;  0, −z_2/(2z_3 − z_1^2), 1/√(2z_3 − z_1^2) ;  z_1, √(2z_3 − z_1^2), 0 ] = W(z).
The transformed PH system is described by:
ż(t) = J_z ∇H(z), (3.2.39)
where the interconnection matrix and the Hamiltonian are, respectively:
J_z = [ 0, 1, 0 ;  −1, 0, 0 ;  0, 0, 0 ],  H(z) = (1/2)(−z_1^2 z_2^2 + 2z_2^2 z_3 − z_1 + 2z_3 + 1/4). (3.2.40)
An energy-preserving discretization for the transformed system (3.2.39)-(3.2.40) is described by:
(1/h)(z_i − z_{i−1}) = J_z [ −(1/2)(z_{1,i} + z_{1,i−1}) z_{2,i}^2 − 1/2 ;  (z_{2,i} + z_{2,i−1})(z_{3,i} − (1/2) z_{1,i−1}^2) ;  z_{2,i−1}^2 + 1 ].
We note that the elements on the third line of the interconnection matrix J z are zeros. Therefore, C i -C i-1 = z 3,iz 3,i-1 = 0, ∀i. Thus, the C(z) is preserved along the discrete-time state trajectory. Using the inverse state transformation, we determine the Casimir preserving discretization scheme of the original system (3.2.35)-(3.2.36):
f_i = J(x_{i−1})e_i, (3.2.41)
with the discrete-time flow vector:
f_i = −(1/h) [ 1, 0, 0 ;  0, −x_{3,i−1}/x_{2,i−1}^2, 1/x_{2,i−1} ;  x_{1,i−1}, x_{2,i−1}, 0 ]^{−1} [ x_{1,i} − x_{1,i−1} ;  x_{3,i}/x_{2,i} − x_{3,i−1}/x_{2,i−1} ;  (x_{1,i}^2 + x_{2,i}^2)/2 − (x_{1,i−1}^2 + x_{2,i−1}^2)/2 ], (3.2.42)
and the discrete-time effort vector:
e_i = [ 1, 0, 0 ;  0, −x_{3,i−1}/x_{2,i−1}^2, 1/x_{2,i−1} ;  x_{1,i−1}, x_{2,i−1}, 0 ]^T [ −(1/2)(x_{1,i} + x_{1,i−1}) x_{3,i}^2/x_{2,i}^2 − 1/2 ;  (x_{3,i}/x_{2,i} + x_{3,i−1}/x_{2,i−1})((x_{1,i}^2 + x_{2,i}^2)/2 − (1/2)x_{1,i−1}^2) ;  x_{3,i−1}^2/x_{2,i−1}^2 + 1 ]. (3.2.43)
We can see that the admissible discrete time flow vector, f i (i, x d ), is not in a linear form.
Simulations: The simulations for this example are implemented with the simulation duration T = 3s and the initial state x 0 = [1 1 -1] T . The state variable errors of these schemes are computed as
error = max i ||x i -x(t i )|| 2 ,
where x(t) is the continuous-time state at instant t and ||·||_2 is the Euclidean norm. After running simulations with different time steps h = 0.1 s, h = 0.01 s, h = 0.001 s, we obtain Fig. 3.2.1, which indicates that the mentioned schemes are first-order (the slopes of the straight lines).
3.3 Numerical results for the electro-mechanical elevator
Discrete-time model
In this section we use the discretization method presented in Section 3.2 for discretizing the electro-mechanical elevator dynamics given by (2.4.24). In the dynamics (2.4.24), the input matrix G_l is modulated by the control variable d_l. We simplify the input matrix by defining the voltages and currents on the direct and quadrature phases of the machine stator, denoted by v_l(t), i_l(t) ∈ R^2, such that:
v_l(t) ≜ d_l(t)v_l(t),  i_l(t) ≜ [I_2 0]∇H_l(x_l).
Then, the dynamics (2.4.24) is rewritten as:
f l (t) = D l (x l )e l (t), v lR (t) = -R l i lR (t), (3.3.1)
where the flow/effort variables and the interconnection matrix are:
f l (t) = -ẋl (t) i lR (t) i l (t) , e l (t) = ∇H(x) v lR (t) v l (t) , D l (x l ) = -J l (x l ) -G lR (x l ) -G ll G T lR (x l ) 0 0 G T ll 0 0 , (3.3.2)
with the input matrix:
G ll = I 2 0 ∈ R 4×1 . (3.3.3)
The Bond Graph description of this system can be found in Fig. 2. Applying the discretization method of Section 3.2 to (3.3.1), the discrete-time model is given by:
f_l,i = D_l(g_D(i, x_l,d))e_l,i, (3.3.4a)
v_lR,i = −R_l i_lR,i, (3.3.4b)
∀i ∈ {1, . . . , N }, where x l,d is the discretized function of the state variable x l , g D (i, x l,d ) is described in the Definition 3.2.1. Since the Hamiltonian is quadratic, the two corresponding discretization schemes can be chosen as the forward finite difference formula and the midpoint rule forẋl and g H (i, x l,d ), respectively, as in (3.2.15), (3.2.18):
f l,i = - x l,i -x l,i-1 h i lR,i i l,i , e l,i = Q l1 + Q l2 x l,i + x l,i-1 2 v lR,i v l,i , (3.3.5)
where g_H(i, x_l,d) is defined in Definition 3.2.2 of the discrete storage and Q_l1, Q_l2 are the weight matrices of the Hamiltonian (2.4.30)-(2.4.31). Combining (3.3.4) and (3.3.5), we obtain the discrete model of the electro-mechanical elevator system as:
[ −(x_l,i − x_l,i−1)/h ; i_lR,i ; i_l,i ] = D_l(g_D(i, x_l,d)) [ Q_l1 + Q_l2 (x_l,i + x_l,i−1)/2 ; v_lR,i ; v_l,i ],  v_lR,i = −R_l i_lR,i, (3.3.6)
∀i ∈ {1, . . . , N}. Note that the map g_D(i, x_l,d) is freely chosen. In our work, the first three schemes in (3.2.12) are considered. The benefits of the presented methods are shown through simulations in the following section.
Simulation results
This section presents some simulation results for the discrete-time model of the electro-mechanical elevator system. The forthcoming simulations use the parameters presented in (2.4.21) and (2.4.28) with the numerical data given by the industrial partner Sodimas (an elevator company from France); they are listed in Table 3.3.1. The simulation setting is presented in Table 3.3.2: the simulation duration, the time steps, the control variables and the initial state variables. They satisfy the constraints on the rotor speed, on the position and on the duty cycle given in Section 2.6.2. The electro-mechanical elevator dynamics admit the multi-time scale property, i.e., the electromagnetic dynamics is much faster than the mechanical dynamics. In order to capture the transient period of all state variables (where the discretization effect is visible), we study the fastest dynamics for a duration T = 0.15 s (determined through simulations) and various time steps from h = 10^-5 s to h = 5·10^-2 s, as in Table 3.3.2. Different energy-preserving discretization schemes: Firstly, we implement the energy-preserving discretization method with different schemes for g_D(i, x_l,d) (i.e., different discrete interconnection matrices) defined in (3.3.6). Three simple schemes are considered:
1. Scheme 1: g_D(i, x_l,d) = x_l,i, (3.3.7)
2. Scheme 2: g_D(i, x_l,d) = x_l,i+1, (3.3.8)
3. Scheme 3: g_D(i, x_l,d) = (x_l,i + x_l,i+1)/2. (3.3.9)
Fig. 3.3.1 illustrates the logarithm of the maximal absolute discrepancies between the discrete and continuous states for different time steps:
error j = max i∈{1,...,N } |x j,i -x j (t 0 + ih)|,
where h is the time step and x_j ∈ R is the j-th coordinate of the state vector. Furthermore, from Fig. 3.3.1 we find that the orders of Scheme 1, Scheme 2 and Scheme 3 (the midpoint rule) are 1, 1 and 2, respectively, which equal the slopes of the straight lines. We can see that the magnetic fluxes in the fast time scale converge to the steady values determined by the mechanical variables in the slower time scale. From the electro-mechanical dynamics, we can observe that only the rotor speed affects the two magnetic fluxes. Thus, even when the rotor position increases, the steady states of the two magnetic fluxes do not change. Since Scheme 3 is second-order, its discrete state errors are smaller than those of the two other schemes.
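The order reported from Fig. 3.3.1 is simply the slope of log(error) versus log(h). The helper below estimates it by a least-squares line fit; the sample error values are placeholders, not the actual simulation results.

```python
import numpy as np

def empirical_order(h_values, errors):
    """Slope of log(error) against log(h), i.e. the observed convergence order."""
    slope, _ = np.polyfit(np.log(h_values), np.log(errors), 1)
    return slope

h = np.array([1e-4, 1e-3, 1e-2])
err_scheme1 = np.array([2e-6, 2e-5, 2e-4])      # placeholder first-order behaviour
err_scheme3 = np.array([3e-9, 3e-7, 3e-5])      # placeholder second-order behaviour
print(empirical_order(h, err_scheme1), empirical_order(h, err_scheme3))
```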
Energy-preserving discretization method and other methods:
The next simulations present some comparisons of the energy-preserving method under Scheme 1 (see (3.3.6) and (3.3.7)) and two classical discretization schemes (the explicit Euler and the implicit Euler). Based on (3.3.6), the discretizations by the explicit and implicit Euler methods are respectively given by:
- x l,i -x l,i-1 h i l,R,i i l,i =D l (x l,i-1 ) Q l,1 + Q l,2 x l,i-1 v l,R,i v l,i , (3.3.10a) - x l,i -x l,i-1 h i l,R,i i l,i =D l (x l,i ) Q l,1 + Q l,2 x l,i v l,R,i v l,i , (3.3.10b)
where the discretization for the resistive element is given by (3.3.4b). Fig. 3.3.4 illustrates the logarithm of the maximal absolute errors of the discrete time states with different time steps:
error j = max i∈{1,...,N } |x j,i -x j (t 0 + ih)|.
where h is the time step and x_j ∈ R is the j-th coordinate of the state vector. From the results, we can see that the order of the studied discretization methods is 1. Besides the state error, we also study the created energy sum error, defined as the discrepancy between the energy increase and the energy supplied by the resistive and external elements:
E h = H(x l,i ) -H(x l,i-1 ) -(P R + P E )h. (3.3.11)
The power sum error is defined by:
P_h = E_h/h. (3.3.12)
From the previous definitions, we can easily prove that the explicit Euler method creates a positive energy sum error:
E_ex = [Q_l,0 + Q^T_l,1 x_l,i + (1/2) x^T_l,i Q_l,2 x_l,i] − [Q_l,0 + Q^T_l,1 x_l,i−1 + (1/2) x^T_l,i−1 Q_l,2 x_l,i−1] − (i^T_l,R,i v_l,R,i + i^T_l,i v_l,i)h, thanks to (2.4.30),
= (1/2)(x_l,i − x_l,i−1)^T Q_l,2 (x_l,i − x_l,i−1) > 0, thanks to (3.3.10a) and D = −D^T.
Similarly, the implicit Euler method creates the negative energy sum error
E_im = −(1/2)(x_l,i − x_l,i−1)^T Q_l,2 (x_l,i − x_l,i−1),
and the energy-preserving method does not create any energy sum error, i.e., E_en = 0. The evolution of the Hamiltonian of the continuous and discrete systems is given in Fig. 3.3.6. We observe that, though Scheme 1 preserves the energy, the discrete energy error may be greater than the ones obtained by the classical first-order methods (see also the state errors in Fig. 3.3.4).
Discussions: From the above simulations for the electro-mechanical elevator, some remarks are in order. An energy-preserving method does not create the energy/power sum error. Scheme 3 of the energy-preserving method for the electro-mechanical elevator is actually the midpoint method.
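In practice, the energy and power sum errors (3.3.11)-(3.3.12) can be evaluated directly from a stored discrete trajectory and the sampled resistive/external powers, as sketched below; the array names, shapes and numerical values are assumptions for illustration.

```python
import numpy as np

def energy_power_sum_errors(H_values, P_R, P_E, h):
    """E_h[i] = H(x_i) - H(x_{i-1}) - (P_R[i] + P_E[i]) * h  and  P_h = E_h / h.
    H_values: (N+1,) Hamiltonian along the discrete trajectory;
    P_R, P_E: (N,) resistive and external powers over each step."""
    E_h = np.diff(H_values) - (P_R + P_E) * h
    return E_h, E_h / h

H_values = np.array([1.00, 0.95, 0.91, 0.88])    # placeholder trajectory
P_R = np.array([-50.0, -40.0, -30.0])            # dissipated power (negative)
P_E = np.array([0.0, 0.0, 0.0])
print(energy_power_sum_errors(H_values, P_R, P_E, 1e-3))
```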
Numerical results for the global multi-source elevator dynamics
3.4.1 Discrete-time model
This section studies an energy-preserving discrete-time model for the DC microgrid presented in Section 2.5.2. Since the microgrid dynamics (2.5.7) have the multi-time scale property, the transient period of the fast dynamics takes place in a short time interval which requires short time steps. Other than investigating the efficiency of the discretization method, we will illustrate the consistency of the multi-time scale property and the stability of the fast dynamics. Therefore, we consider here a short time duration where the slow variables are assumed to be constant.
In the discretization method proposed in Section 3.2, we aim at finding the maps g_D(i, x_d), f(i, x_d) and e(i, x_d). For simplicity, the discretization of the flow vector is chosen as the left finite difference formula (3.2.15). Furthermore, since the Hamiltonian is quadratic as in (2.5.11), a suitable choice for the effort discretization is given by the midpoint rule (3.2.18). Therefore, the discretization of the flow and effort variables in (2.5.8) is described as:
f_i = \begin{bmatrix} -\frac{x_{l,i}-x_{l,i-1}}{h} \\ f_{R,i} \\ v_{e,i} \\ v_{r,i} \end{bmatrix}, \qquad e_i = \begin{bmatrix} Q_1 + Q_2 \frac{x_{l,i}+x_{l,i-1}}{2} \\ e_{R,i} \\ i_{e,i} \\ i_{r,i} \end{bmatrix}, \qquad \forall i \in \{1,\dots,N\}.   (3.4.1)
Based on the results obtained for the three scheme comparisons for g D (i, x d ) in Section 3.3, we choose here the midpoint discretization:
g_D(i, x_d) = \frac{x_{l,i} + x_{l,i-1}}{2}, \qquad \forall i \in \{1,\dots,N\},
which leads to the discrete interconnection:
f_i = D\Big(\frac{x_{l,i}+x_{l,i-1}}{2},\, d_i\Big)\, e_i, \qquad \forall i \in \{1,\dots,N\}.   (3.4.2)
From Definition 3.2.7, the discrete model for the static resistive element is expressed as:
f_{R,i} = -R_R\, e_{R,i}, \qquad \forall i \in \{1,\dots,N\}.   (3.4.3)
From Definition 3.2.8, we get the discrete model for the renewable power source:
f_{r,i}\, e_{r,i} = P_{r,i}, \quad \text{with } P_{r,i} = \frac{1}{h}\int_{ih}^{(i+1)h} P_r(t)\, dt, \qquad \forall i \in \{1,\dots,N\}.   (3.4.4)
Consequently, by combining (3.4.1)-(3.4.4), the discrete-time model for the microgrid is given as:
\begin{bmatrix} -\frac{x_{l,i}-x_{l,i-1}}{h} \\ f_{R,i} \\ v_{e,i} \\ v_{r,i} \end{bmatrix} = D\Big(\frac{x_{l,i}+x_{l,i-1}}{2},\, d_i\Big) \begin{bmatrix} Q_1 + Q_2 \frac{x_{l,i}+x_{l,i-1}}{2} \\ e_{R,i} \\ i_{e,i} \\ i_{r,i} \end{bmatrix}, \quad P_{r,i} = -i_{r,i}\, v_{r,i}, \quad f_{R,i} = -R_R\, e_{R,i}, \qquad \forall i \in \{1,\dots,N\}.   (3.4.5)
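Since (3.4.5) is implicit in x_{l,i}, each step requires solving a nonlinear algebraic system. The following Python sketch shows one such step for the storage part only, with a placeholder interconnection map D_fun and illustrative numerical values (not the microgrid data of Table 3.4.1).

import numpy as np
from scipy.optimize import fsolve

# One step of the implicit midpoint model: (x_i - x_{i-1})/h = D(x_mid, d)(Q1 + Q2 x_mid),
# with x_mid = (x_i + x_{i-1})/2.  D_fun, Q1, Q2 and d are placeholders.
def midpoint_step(x_prev, h, d, D_fun, Q1, Q2):
    def residual(x_next):
        x_mid = 0.5 * (x_prev + x_next)
        effort = Q1 + Q2 @ x_mid
        return (x_next - x_prev) / h - D_fun(x_mid, d) @ effort
    return fsolve(residual, x_prev)   # the previous state is a natural initial guess

# toy illustration with a constant skew-symmetric interconnection (hypothetical numbers)
D_fun = lambda x, d: np.array([[0.0, d], [-d, 0.0]])
Q1, Q2 = np.zeros(2), np.eye(2)
x1 = midpoint_step(np.array([1.0, 0.0]), 1e-3, 0.5, D_fun, Q1, Q2)
print(x1)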
Simulations for the global multi-source elevator dynamics
This section presents simulation results for the discrete-time model of the DC microgrid elevator illustrated in Fig. 2.1.1. The parameters are presented in Sections 2.3 and 2.4, with some numerical data given by the industrial partner Sodimas (an elevator company from France); they are given in Table 3.3.1 and in Table 3.4.1 below.
R_t,1 [Ω]  0.13
R_t,2 [Ω]  0.17
R_t,3 [Ω]  0.19
R_t,4 [Ω]  0.23
R_t,5 [Ω]  0.29
R_t,6 [Ω]  0.31
Reference DC bus voltage v*_l [V]  400
DC converter inductances L_b,1, L_b,2, L_s,1, L_s,2 [mH]  0.25
DC converter capacitances C_b,1, C_b,2, C_s,1, C_s,2 [F]  0.008
Battery maximal charge q_max [Ah]  183
Battery charge factor b  0.4
Battery internal coefficient k [s^-1]  0.000105
Maximal voltage E_max [V]  13.8
Minimal voltage E_min [V]  13
Battery resistor R_b [Ω]  0.015
Supercapacitor charge C_s [C]  58
Supercapacitor resistor R_s [Ω]  0.026
Renewable power P_r [W]  400
External current i_e [A]  -1
Different scenarios with several time steps and discretization methods are considered. Besides the energy-preserving method, the explicit/implicit Euler methods are also considered for comparison. The duty cycles d(t) ∈ R^4 are chosen constant such that the constraints (2.6.5) and (2.6.12) are satisfied. The dynamics of the energy-supplying system admit the multi-time scale property, e.g., the dynamics of the converters and of the DC bus are faster than the others (mechanical and chemical ones). We consider the transient period where the discretization effect is visible: we study the fastest dynamics over a time duration T = 0.01 s (determined by the simulation), with various time steps from h = 10^-6 s to h = 10^-4 s. Furthermore, during this short time interval, the external current i_e and the renewable power P_r are assumed to be constant. The simulation configurations are presented in Table 3.4.2, with initial state x_cs(0) = [0  3.2  0  0.24]^T and initial electro-mechanical elevator state x_l(0) = [1.2  1  0  0]^T.
Fig. 3.4.1 describes the discrete state errors of the microgrid state variables (the battery, the supercapacitor, the converters and the DC bus) for different time steps. From the simulation results, we can observe that the orders of the explicit/implicit Euler and energy-preserving methods are 1. In particular, the order of the electro-mechanical elevator state errors under Scheme 3 is not 2, contrary to the simulation results obtained under this scheme in Section 3.3.2. This is caused by the short simulation time, over which these variables are nearly constant. Since the microgrid interconnection is modulated only by these variables, the interconnection matrix is then constant and the microgrid dynamics becomes linear (due to the quadratic Hamiltonian); therefore, the midpoint rule becomes a first-order method. Fig. 3.4.2 illustrates the Hamiltonian evolution in the continuous and discrete cases. At the beginning, it decreases because of the energy dissipated in the resistors. Moreover, the transient period in the Hamiltonian dynamics (from 0 ms to 1 ms) corresponds to the fast dynamics of the converters and transmission lines. We note again that the transient period of these dynamics is much shorter than the transient period of the machine stator dynamics, which is approximately 0.15 s as in Fig. 3.3.2. Fig. 3.4.3 describes the energy and power sum errors of the discrete methods. Similarly to the conclusion in Section 3.3.2, we find that the explicit/implicit Euler methods create positive/negative power sum errors for the considered microgrid system, while the midpoint rule does not.
Conclusions
In this chapter, we formulated an energy-preserving time discretization method for nonlinear Port-Hamiltonian systems. We proposed separate discretizations for each of the essential elements of the PH formulation, such as the power-preserving interconnection, the energy storage, the static element and the time-varying power source. The energy conservation property is guaranteed by preserving the skew-symmetric form of the interconnection matrix and the chain rule for the time derivative of the Hamiltonian. Moreover, we showed that a discrete-time system obtained through a time-invariant coordinate transformation of an energy-preserving discrete-time system is also energy-preserving. An illustrative example was presented where this combination is used to improve the accuracy of the discretization method. The formulated energy-preserving time discretization method is interesting since it is suitable for nonlinear PH systems whose interconnection matrix is modulated by the control variables. Furthermore, for passive PH systems, the discrete-time model preserves the passivity, which will be useful for control purposes.
We apply the presented method and two classical methods (the explicit/implicit Euler methods) to the electro-mechanical system and to the multi-source elevator system within the fast time scale. The energy-preserving time discretization method leads to a more accurate discrete-time model than classical discretization methods of the same order. Moreover, the accuracy order of a time discretization scheme depends on the considered time scale. Other works proposing discretization methods can be found in [Stramigioli et al., 2005, Talasila et al., 2006, Hairer et al., 2006, Aoues, 2014, Falaize and Hélie, 2017].

Optimization-based control for the electro-mechanical elevator
Introduction
As presented in Section 2.6, the dissipated energy minimization for the electro-mechanical elevator respecting the system dynamics, state and input constraints is formulated as a constrained continuous-time optimization problem. This chapter presents in detail the problem formulation and its solution through the combined use of Port-Hamiltonian system representation, differential flatness with B-splines parameterization and MPC (Model Predictive Control). Generally, it is difficult to solve a constrained continuous-time optimization problem. In the literature, some solutions are given for the unconstrained and linear case, for example, the Linear Quadratic Regulator (LQR) which is similar with a proportional controller determined from the solution of the Algebraic Riccati Equation [Liberzon, 2011]. In LQR it is assumed that the whole state is available for control at all times. One possible generalization is the Linear Quadratic Gaussian Regulator (LQG) where the design of a Kalman filter is employed. The LQG is also studied for the case of linear Port-Controlled Hamiltonian systems in [Wu, 2015]. Note that LQR and LQG are tracking controllers, i.e., they stabilize the system to the references by penalizing in the cost function the discrepancies between the actual signals and the references. In [START_REF] Lifshitz | Optimal control of a capacitor-type energy storage system[END_REF], the authors find the solution of a constrained continuous-time optimization problem for a capacitor-type energy storage system. The optimal control problem includes an economic cost function, a first-order dynamics and a linear constraint. Methods for finding the solution of a constrained non-linear optimal control problem with more general cost functions are still under study. A possible solution is to approximate the continuous optimal problem by a discrete-time optimization, [Liberzon, 2011, Boyd andVandenberghe, 2004], which is easier to study and to implement.
There are various methods for approximating a continuous-time optimization by a discrete-time optimization. A popular approach is by using the zero-order B-splines to parametrize the variables (see Section 3.2.1). This is easy to implement [START_REF] Rawlings | [END_REF]Mayne, 2009, Ellis et al., 2017]. A drawback of this approach is represented by the fact that the approximated variables do not respect the system dynamics. Thus, higher dimensions are necessary for good approximations, this requiring significant computations. To reduce the computational complexity, the optimization problem can be decomposed into an off-line reference trajectory generation and an on-line tracking control problem. This is the approach we follow in this work for the optimal control of the electro-mechanical elevator of the DC microgrid system illustrated in Fig. 2
.4.1
The electro-mechanical elevator includes the Permanent Magnet Synchronous Machine (PMSM), a mechanical elevator and an AC/DC converter. Usually, the reference profiles of the elevator speed and the motor currents are separately generated. The elevator speed (also the rotor speed) is chosen as a symmetrical trapezoidal curve [START_REF] Vittek | Energy near-optimal control strategies for industrial and traction drives with a.c. motors[END_REF]. In the feasible domain determined by the current and voltage bounds, the motor current references are optimized by minimizing the MTPA (Maximum Torque Per Ampere) criterion [START_REF] Lemmens | PMSM drive current and voltage limiting as a constraint optimal control problem[END_REF]. However, this result is useful only for machine speed control, i.e., the effect of the speed profile is not considered for the energy optimization. In [START_REF] Chen | Adaptive minimum-energy tracking control for the mechatronic elevator system[END_REF], the profiles of both stator current and rotor speed are generated in the transient period based on differential flatness [START_REF] Fliess | Flatness and defect of non-linear systems: introductory theory and examples[END_REF] with a polynomial parameterization. Note that, no constraints are taken into account.
Furthermore, for the PMSM tracking control, various methods are proposed in the literature. A conventional method is the Proportional Integral (PI) control combined with anti-windup techniques for dealing with the physical limits [START_REF] Mardaneh | Nonlinear PI controller for interior permanent magnet synchronous motor drive[END_REF]. Another approach is the backstepping method proposed in [START_REF] Bernat | Adaptive control of permanent magnet synchronous motor with constrained reference current exploiting backstepping methodology[END_REF] where the current constraints are tackled by switching the reference speed. In [START_REF] Petrović | Interconnection and damping assignment approach to control of pm synchronous motors[END_REF] the Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) [START_REF] Ortega | Interconnection and damping assignment passivity-based control of Port-Controlled Hamiltonian systems[END_REF], an energy-based control method used mainly for nonlinear PH systems, was applied for the control of PMSM. However, this approach does not explicitly take into account the constraints. In [START_REF] Bolognani | Design and implementation of Model Predictive Control for electrical motor drives[END_REF], the authors increase the state vector dimension to obtain a linear system which is used for formulating the tracking MPC. Note that the previously mentioned works do not consider the tracking control for the rotor angle. This is considered in [START_REF] Vittek | Energy near-optimal control strategies for industrial and traction drives with a.c. motors[END_REF] and [START_REF] Chen | Adaptive minimum-energy tracking control for the mechatronic elevator system[END_REF] through the forced dynamics control and adaptive control, respectively, but without taking into account the state and input constraints.
In here we use the differential flatness properties [Lévine, 2009] to express the state and input variables of the electro-mechanical elevator system in function of some flat outputs and their finite time derivatives. It allows us to take into account the system dynamics and to reduce the number of variables. Next, the flat outputs are parametrized using B-splines with appropriate order [Suryawan, 2011].
While the flat output offers some important theoretical guarantees (continuous time constraint validation, trajectory feasibility and the like) it still remains difficult to implement it in practice. The difficulties come from the nonlinear nature of the mappings between flat output and states and inputs. In particular, the input mappings are complex and thus lead to nonlinear constraints (for example those involving input magnitude or rate) and non-convex costs (for example when considering the system's energy).
The solution followed here is to consider the flat representations for the electro-mechanical elevator (the slow part of the plant dynamics) and provide these as references to the fast part of the dynamics. Note that giving these references implies that we need to check an equality constraint with nonlinear terms. This is not easy to implement and leads to a numerically cumbersome formulation. A relaxation of the equality (soft constraints where the slack term is penalized in the cost) is therefore proposed.
The original contributions of this chapter are summarized in the following:
• We formulate a quadratic cost function for the dissipated energy minimization through an appropriate choice of the system flat outputs (i.e., two stator current and rotor speed of the electro-mechanical elevator system). This choice leads to a continuous-time equality constraint related to the gravity torque which is further rewritten in the optimization problem as a soft constraint.
• We provide sufficient conditions for the control points describing the B-splines which guarantee the satisfaction of the stator currents and voltages constraints at all times. This condition is applied for the constraints of the stator currents and of the stator voltages.
• We formulate the tracking MPC problem for stabilizing the state variables to the reference profiles. We consider here both the nonlinear model and the linearized model of the discretized electro-mechanical elevator system. Through the simulations, we illustrate the usefulness of the linearized model for reducing the computation time of the MPC laws. Also, the open-loop simulations illustrate that the dynamics of the two currents and of the rotor speed are asymptotically stable while the dynamics of the rotor angle is not. This motivates us to concentrate on considering the tracking control for the rotor angle. The efficiency of the formulated tracking MPC are validated through simulation results of the closed-loop system.
This chapter is organized as follows. Section 4.2 presents the differential flatness notion, the B-splines curves and their properties and tracking MPC. In Section 4.3 we apply the previous tools for the constrained optimal control of the electro-mechanical elevator. Conclusions and discussions on the obtained results are given in Section 4.4.
Basic tools for the constrained optimal control
Constrained optimal control theory deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved under some given input and state constraints [Liberzon, 2011] (more details about different types of optimization problems are presented in Appendix C-D). This section recalls first the standard formulation of a constrained optimal control problem. Next, the notions of differential flatness, B-spline parameterization and tracking MPC are presented along the lines in [Lévine, 2009, Suryawan, 2011, Rawlings and Mayne, 2009].
The formulation of a constrained optimization problem includes the cost function, the control system and the constraints. Let x(t) ∈ R n , u(t) ∈ R m , y(t) ∈ R l denote the state, control and output variables, respectively. For a given initial time, t 0 , and a given initial state vector, x 0 , the control system is described by the following dynamics:
ẋ(t) = g(x(t), u(t)), x(t 0 ) = x 0 . (4.2.1)
The cost is denoted by V (x(t), u(t)) and given as:
V(x(t), u(t)) = V_f(x_f) + \int_{t_0}^{t_f} V_r(t, x(t), u(t))\, dt.   (4.2.2)
where V f and V r are given functions (terminal cost and running cost, respectively), t f is the final (or terminal) time, and x f = x(t f ) is the final (or terminal) state vector. This cost is determined according to the desired objective, e.g., dissipated energy, electricity cost. The state and control variables are limited by the equality and inequality constraints as:
g_i(x(t), u(t)) = 0, \quad i = 1,\dots,N_g, \quad \forall t \in [t_0, t_f],
h_i(x(t), u(t)) \le 0, \quad i = 1,\dots,N_h, \quad \forall t \in [t_0, t_f],   (4.2.3)
where N g , N h are the number of equality and inequality constraints, respectively. These constraints come from physical limitations and/or the operation requirements. Another constraint is defined by the target set S f ⊂ [t 0 , ∞) × R n of the final time t f and of the final state x f . For example, depending on the optimization formulation, we have the following target sets:
• S_f = [t_0, ∞) × R^n corresponds to a free-time free-endpoint problem,
• S_f = [t_0, ∞) × {x_1} corresponds to a free-time fixed-endpoint problem,
• S_f = {t_1} × R^n corresponds to a fixed-time free-endpoint problem,
• S_f = {t_1} × {x_1} corresponds to a fixed-time fixed-endpoint problem.
The final time and the final state must satisfy the following constraint:
(t f , x(t f )) ∈ S f . (4.2.4)
Consequently, from the previous ingredients, the constrained optimal control problem finds a control law u(t) which
minimizes V(x(t), u(t))   (4.2.5a)
subject to
\dot{x}(t) = g(x(t), u(t)), \quad x(t_0) = x_0, \quad \forall t \in [t_0, t_f],   (4.2.5b)
g_i(x(t), u(t)) = 0, \quad i = 1,\dots,N_g, \quad \forall t \in [t_0, t_f],   (4.2.5c)
h_i(x(t), u(t)) \le 0, \quad i = 1,\dots,N_h, \quad \forall t \in [t_0, t_f],   (4.2.5d)
(t_f, x(t_f)) \in S_f.   (4.2.5e)
Generally, a constrained optimal control problem is difficult to solve. Note that the arguments of the cost V(x(t), u(t)) in (4.2.2) are functions, which means that (4.2.5) is a continuous-time optimization problem, numerically intractable in general. As mentioned in the introduction of this chapter, only the unconstrained optimal control problem with linear dynamics and a quadratic cost function, the so-called LQR, is solved analytically [Liberzon, 2011]. In the presence of constraints, nonlinear dynamics and a non-quadratic cost, the continuous-time optimization problem (4.2.5) is usually approximated by a discrete-time optimization problem. Usually, this is obtained by projecting the time-dependent variables over some basis functions, and then replacing the cost and the constraints in (4.2.5) with a cost and constraints on the coefficients of the projections. Also, it is important that the resulting finite-dimensional optimization problem remains convex, such that well-established theory in the literature can be applied [Boyd and Vandenberghe, 2004].
In the following, the combination of differential flatness, B-splines and MPC notions will provide us the necessary properties for accomplishing the above objectives.
Differential flatness
This subsection recalls the differential flatness notion, a structural property of the system dynamics which allows expressing the state and control variables through a finite number of derivatives of a defined flat output [Fliess et al., 1995, Lévine, 2009] (see also Fig. 4.2.1). By using differential flatness, we eliminate the differential equations and reduce the number of variables in the optimization problem.
Definition 4.2.1 (Flat system [Lévine, 2009]). The dynamical system (4.2.1) is called differentially flat if there exist variables z(t) ∈ R^m such that the state and control variables can be algebraically expressed in terms of z(t) and a finite number of its higher-order derivatives:
x(t) = \Phi_x(z(t), \dot{z}(t), \dots, z^{(q)}(t)), \qquad u(t) = \Phi_u(z(t), \dot{z}(t), \dots, z^{(q+1)}(t)),   (4.2.6)
where z(t) = \gamma(x(t), u(t), \dot{u}(t), \dots, u^{(q)}(t)) is called the flat output and q+1 is its maximal derivative order.
Remark 4.2.2. Generally, it is difficult to prove that the system is flat and find the flat output. We recall here some important remarks which are obtained by theoretical approaches.
1. For any linear and nonlinear flat system the number of flat outputs equals the number of inputs [Lévine, 2009].
2. For linear systems, the flat differentiability (existence and constructive forms) is implied by the controllability property [Lévine, 2009].
Figure 4.2.1: Differentially flat systems [Prodan, 2012].
By substituting the state variables, x(t), and the input variables, u(t), obtained from (4.2.6) in the optimization problem (4.2.5), we obtain the optimization problem rewritten in function of the flat output:
\min_{z(t)} \; V(z(t), \dot{z}(t), \dots, z^{(q+1)}(t))   (4.2.7a)
subject to
\Phi_x(z(t_0), \dot{z}(t_0), \dots, z^{(q)}(t_0)) = x_0,   (4.2.7b)
g_i(z(t), \dot{z}(t), \dots, z^{(q)}(t)) = 0, \quad i = 1,\dots,N_g, \quad \forall t \in [t_0, t_f],   (4.2.7c)
h_i(z(t), \dot{z}(t), \dots, z^{(q)}(t)) \le 0, \quad i = 1,\dots,N_h, \quad \forall t \in [t_0, t_f],   (4.2.7d)
(t_f, \Phi_x(z(t_f), \dot{z}(t_f), \dots, z^{(q)}(t_f))) \in S_f.   (4.2.7e)
As we can see, the system dynamics are eliminated from the optimization problem (4.2.7). Note that the number of eliminated constraints equals the number of eliminated variables (the state variables). However, (4.2.7) is still a continuous-time optimization problem. Its discretization will be considered in the next subsection using a B-splines-based parameterization. Note also that the output vector dimension of a Port-Controlled Hamiltonian (PCH) system always equals the dimension of the input vector, since they are conjugate variables (see Definition 2.2.5). From this property of PCH systems and the first point in Remark 4.2.2, we give an example of a specific PCH system whose output is also a flat output.
Example 4.2.3. Consider an input-state-output PCH system given as:
\dot{x}(t) = [J(x) - R(x)]\, \nabla H(x) + G u(t), \qquad y(t) = G^T \nabla H(x),   (4.2.8)
where x(t) ∈ R^n, y(t), u(t) ∈ R^m are the state, output and input vectors, respectively, J(x), R(x) ∈ R^{n×n} are the interconnection and resistive matrices, G ∈ R^{n×m} is the input matrix, and H(x) ∈ R is the Hamiltonian. Assume that:
• the input matrix, G, is square, i.e., m = n,
• the input matrix, G, is invertible, i.e., det(G) = 0,
• the Hamiltonian is quadratic and positive, i.e., there exist matrices Q_0 ∈ R, Q_1 ∈ R^{n×1}, Q_2 ∈ R^{n×n} such that Q_2 = Q_2^T > 0 and H(x) = Q_0 + Q_1^T x(t) + \frac{1}{2} x^T(t) Q_2\, x(t).
Then, the output, y(t), and the state, x(t), are the flat outputs of system (4.2.8). In fact, since H(x) is quadratic and positive definite, its gradient vector has the following affine form:
∇H(x) = Q 1 + Q 2 x(t), (4.2.9)
where Q_2 is invertible and positive. From (4.2.8) and (4.2.9), we derive the state variable x(t) from y(t) as:
x(y) = \left(G^T Q_2\right)^{-1}\left(y(t) - G^T Q_1\right).   (4.2.10)
Then, by replacing x(y) from (4.2.10) into (4.2.8), we obtain the input:
u(y) = G^{-1}\left\{\dot{x}(y) - [J(x) - R(x)]\,[Q_1 + Q_2\, x(y)]\right\}.   (4.2.11)
This choice of the flat output has the following interests. If the structure matrices J(x) and R(x) are affine, the state x(y) and the input u(y) are affine and quadratic functions of the flat output, respectively. Additionally, if the cost function is the dissipated energy and if R is constant, the cost is a quadratic function of the flat output. This example will be used hereafter for the electro-mechanical elevator system.
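A small numerical illustration of Example 4.2.3 is given below: for a toy PCH system with quadratic Hamiltonian (the matrices are arbitrary illustrative values), the state and input trajectories are recovered from a chosen flat-output trajectory y(t) through (4.2.10)-(4.2.11), the time derivative being approximated numerically.

import numpy as np

# toy PCH system satisfying the assumptions of Example 4.2.3
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.2, 0.0])
G = np.eye(2)                      # square and invertible, as assumed
Q1 = np.array([0.0, 0.0])
Q2 = np.diag([2.0, 1.0])

def x_of_y(y):
    return np.linalg.solve(G.T @ Q2, y - G.T @ Q1)            # (4.2.10)

def u_of_y(y, ydot):
    x = x_of_y(y)
    xdot = np.linalg.solve(G.T @ Q2, ydot)                    # time derivative of (4.2.10)
    return np.linalg.solve(G, xdot - (J - R) @ (Q1 + Q2 @ x)) # (4.2.11)

t = np.linspace(0.0, 1.0, 201)
y = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)   # chosen flat output
ydot = np.gradient(y, t, axis=0)
u = np.array([u_of_y(yi, ydi) for yi, ydi in zip(y, ydot)])
print(u.shape)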
B-splines-based parameterization
In this subsection, the flat output introduced above is projected over a finite set of time basis functions in order to discretize the optimization problem (4.2.7). We consider N basis functions λ_i(t) ∈ R, with i = 1, \dots, N and t ∈ [t_0, t_f]. Let z_i ∈ R^m, with i = 1, \dots, N, be the coefficients of the projections, or control points. Then, the flat output is approximated by:
z(t) = \sum_{i=1}^{N} z_i\, \lambda_i(t) = Z\, \Lambda(t),   (4.2.12)
where Z = [z_1 \dots z_N] ∈ R^{m×N} and \Lambda(t) = [\lambda_1(t) \dots \lambda_N(t)]^T ∈ R^N.
The basis function must have (q + 1) th derivative, i.e., λ i (t) ∈ C (q+1) , to ensure that the approximated variable z(t) has (q + 1) th derivative. By replacing (4.2.6) and ( 4.2.12) in the optimization problem (4.2.7) we obtain:
\min_{Z} \; V(Z)   (4.2.13a)
subject to
\Phi_x(t_0, Z) = x_0,   (4.2.13b)
g_i(t, Z) = 0, \quad i = 1,\dots,N_g, \quad \forall t \in [t_0, t_f],   (4.2.13c)
h_i(t, Z) \le 0, \quad i = 1,\dots,N_h, \quad \forall t \in [t_0, t_f],   (4.2.13d)
(t_f, \Phi_x(t_f, Z)) \in S_f.   (4.2.13e)
Note that the optimization problem (4.2.13) is a finite-dimensional optimization problem whose arguments are the N vectors z_1, \dots, z_N ∈ R^m. However, the constraints explicitly depend on time, which requires a continuous-time validation. Furthermore, the explicit form of the discrete cost V(Z) is not easily found. Thus, the numerical solution of the optimization problem (4.2.13) is difficult to find. Therefore, we present hereinafter two methods to approximate the solution.
Discrete-time approximation of the cost and constraints: A simple method to approximate the solution is to verify these constraints only at some chosen instants t_j ∈ [t_0, t_f] with t_j < t_{j+1}. Let the time interval [t_0, t_f] be divided into N_{dv} sub-intervals such that t_j - t_{j-1} = h_{dv} with j = 1, \dots, N_{dv}. Thus, the cost V(Z) can be approximated by:
V(Z) = V_f(Z\Lambda(t_f)) + h_{dv} \sum_{j=0}^{N_{dv}-1} V_r(t_j, Z\Lambda(t_j)),   (4.2.14)
where h_{dv} = \frac{t_f - t_0}{N_{dv}} is the time step. Then, (4.2.13) is rewritten as:
\min_{Z} \; V(Z)   (4.2.15a)
subject to
\Phi_x(t_0, Z) = x_0,   (4.2.15b)
g_i(t_j, Z) = 0, \quad i = 1,\dots,N_g, \quad j = 1,\dots,N_{dg},   (4.2.15c)
h_i(t_j, Z) \le 0, \quad i = 1,\dots,N_h, \quad j = 1,\dots,N_{dh},   (4.2.15d)
(t_f, \Phi_x(t_f, Z)) \in S_f,   (4.2.15e)
where N dg , N dh ∈ N are the numbers of the verification instants of the constraints (4.2.15c) and ( 4.2.15d), respectively. Remark 4.2.4. Note that this approach is simple to implement but is not complete since it provides no guarantees for the intra-sample behavior.
Continuous-time validation of the cost and constraints: By using suitable basis functions, Λ(t), the cost, V (Z), can be easily formulated as an explicit function of the control points, Z. Moreover, the time-varying constraints (4.2.13c)-( 4.2.13d) can be replaced by sufficient conditions which are time invariant constraints of the control points [Suryawan, 2011, Stoican et al., 2017]. In what follows, we present the B-splines-based parameterization and some of their properties. These basis functions are used because of their ease of enforcing continuity across way-points and ease of computing the derivatives. Also, the degree of the B-splines only depends on which derivative order is needed to ensure continuity.
The i-th B-spline function of order d is denoted by λ_{i,d}(t). It is defined using the following recursive formula [Suryawan, 2011]:
\lambda_{i,1}(t) = \begin{cases} 1, & \tau_i \le t < \tau_{i+1}, \\ 0, & \text{otherwise}, \end{cases} \qquad \lambda_{i,d}(t) = \frac{t - \tau_i}{\tau_{i+d-1} - \tau_i}\, \lambda_{i,d-1}(t) + \frac{\tau_{i+d} - t}{\tau_{i+d} - \tau_{i+1}}\, \lambda_{i+1,d-1}(t),   (4.2.16)
where
τ_i ∈ [t_0, t_f] is called a knot, with τ_{i+1} ≥ τ_i, i = 0, \dots, ν, and ν+1 ∈ N is the number of knots. The knot vector gathering all the knots is denoted by T such that:
T = \{\tau_0, \tau_1, \dots, \tau_{\nu-1}, \tau_\nu\} = \{\underbrace{t_0, \dots, t_0}_{d}, \tau_d, \dots, \tau_{\nu-d-1}, \underbrace{t_f, \dots, t_f}_{d}\},   (4.2.17)
where τ_0 = t_0 and τ_ν = t_f. The relation between the knot number ν, the B-spline order d and the B-spline number N is:
\nu = N + d.   (4.2.18)
State and input derivatives are combinations of B-spline derivatives. Due to their specific properties, B-spline derivatives can be expressed as combinations of B-splines of lower order. In turn, these can be expressed as combinations of higher-order B-splines, with the weights changing for each sub-interval of the knot vector. This assumes, of course, that the B-splines of various orders share the same knot vector (except for the start and end points).
Definition 4.2.5 (Internally similar knot vectors [Suryawan, 2011]). Two knot vectors T_1, T_2 as in (4.2.17) are said to be internally similar if they have the same elements except for the leftmost and the rightmost breakpoints, which differ in their multiplicities.
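The recursion (4.2.16) and a clamped knot vector as in (4.2.17) translate directly into code; the Python sketch below evaluates the basis functions and checks the partition-of-unity property on a small example whose order and knot values are chosen only for illustration.

import numpy as np

def bspline_basis(i, d, t, knots):
    """Value of the i-th B-spline of order d at time t (Cox-de Boor recursion (4.2.16))."""
    if d == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + d - 1] > knots[i]:
        left = (t - knots[i]) / (knots[i + d - 1] - knots[i]) * bspline_basis(i, d - 1, t, knots)
    right = 0.0
    if knots[i + d] > knots[i + 1]:
        right = (knots[i + d] - t) / (knots[i + d] - knots[i + 1]) * bspline_basis(i + 1, d - 1, t, knots)
    return left + right

# clamped knot vector on [0, 4] for order d = 3 and N = 6 basis functions (toy example)
d, N = 3, 6
knots = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)
ts = np.linspace(0.0, 3.999, 5)
print([sum(bspline_basis(i, d, t, knots) for i in range(N)) for t in ts])  # partition of unity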
Let us enumerate in the following several important properties of B-splines which will be used later [Stoican et al., 2015]:
P1. A spline curve of order d is C^{d-1}-continuous at its breakpoints and C^∞-continuous at any other point [Stoican et al., 2015].
P2. The B-splines basis function λ i,d (t) is zero outside the interval [τ i-1 , τ i+d-1 ) and the sum of all the B-splines equals 1 at all the time [Suryawan, 2011], i.e.
\lambda_{i,d}(t) > 0, \quad \forall t \in [\tau_{i-1}, \tau_{i+d-1}), \qquad \lambda_{i,d}(t) = 0 \text{ otherwise}, \qquad \sum_{i=1}^{N} \lambda_{i,d}(t) = 1, \quad \forall t \in [t_0, t_f].   (4.2.19)
P3. The curve z(t) defined by (4.2.12) and (4.2.16) is contained in the convex hulls of the sets including the control points [Suryawan, 2011] (a point of the form z = \sum_{j=1}^{N} \alpha_j z_j with z_j ∈ C, \alpha_j ≥ 0 and \sum_{j=1}^{N} \alpha_j = 1 is a convex combination of the z_j; the convex hull of C is the set of all such combinations [Boyd and Vandenberghe, 2004]):
z(t) \in \mathrm{conv}\{z_{k-d+1}, \dots, z_{k+1}\}, \quad \forall t \in [\tau_k, \tau_{k+1}], \; d-1 \le k \le N-1, \qquad z(t) \in \mathrm{conv}\{z_1, \dots, z_N\}, \quad \forall t \in [t_0, t_f].   (4.2.20)
This property is illustrated in Fig. 4.2.3 with a 2D spline curve generated using 7 B-splines of order 4 and 7 control points:
\begin{bmatrix}0\\0\end{bmatrix}, \begin{bmatrix}3\\1\end{bmatrix}, \begin{bmatrix}3\\5\end{bmatrix}, \begin{bmatrix}5\\5\end{bmatrix}, \begin{bmatrix}8\\3\end{bmatrix}, \begin{bmatrix}8\\0\end{bmatrix}, \begin{bmatrix}6\\-2\end{bmatrix},
and the knot vector T = \{0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7\}.
P4. The previous properties also hold for products of B-splines [Stoican et al., 2015].
Consider two natural numbers 0 ≤ r_1, r_2 ≤ d - 2. Based on the knot vector T in (4.2.17), we denote by T_1, T_2 the internally similar knot vectors for the B-splines of orders d - r_1 and d - r_2, respectively. The corresponding B-spline vectors are Λ_{d-r_1}(t) ∈ R^{N-r_1}, Λ_{d-r_2}(t) ∈ R^{N-r_2}. Then, the products of these two families of B-splines have the following properties:
\lambda_{i,d-r_1}(t)\, \lambda_{j,d-r_2}(t) > 0, \quad \forall t \in (\tau_k, \tau_{k+1}), \; k-d+r_1+1 \le i \le k, \; k-d+r_2+1 \le j \le k,
\lambda_{i,d-r_1}(t)\, \lambda_{j,d-r_2}(t) = 0 \text{ otherwise}, \qquad \sum_{i=0}^{N-r_1-1} \sum_{j=0}^{N-r_2-1} \lambda_{i,d-r_1}(t)\, \lambda_{j,d-r_2}(t) = 1, \quad \forall t \in [t_0, t_f].   (4.2.21)
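Properties P2-P3 can be checked numerically; the sketch below builds a planar spline from seven control points with scipy and verifies that every sample of the curve lies in the convex hull of the control points. The knot vector and control points form a self-contained small example, not necessarily the exact data of Fig. 4.2.3.

import numpy as np
from scipy.interpolate import BSpline
from scipy.spatial import Delaunay

# 7 two-dimensional control points, B-splines of order d = 4 (degree 3),
# clamped knot vector consistent with this choice
ctrl = np.array([[0, 0], [3, 1], [3, 5], [5, 5], [8, 3], [8, 0], [6, -2]], dtype=float)
degree = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
curve = BSpline(knots, ctrl, degree)

ts = np.linspace(0.01, 3.99, 400)
pts = curve(ts)                               # curve samples, shape (400, 2)
hull = Delaunay(ctrl)
print(np.all(hull.find_simplex(pts) >= 0))    # True: the curve stays in conv{z_1,...,z_N}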
As mentioned before, a general function of z may involve several operators such as addition, differentiation and multiplication. To describe them on the same basis functions, we need the following theorems.
Theorem 4.2.6 ([Suryawan, 2011]). The r-th derivative of the d-th order B-spline vector Λ_d(t) ∈ R^N can be expressed as a linear combination of the elements of Λ_{d-r}(t) ∈ R^{N-r}:
\Lambda_d^{(r)}(t) = M_{d,d-r}\, \Lambda_{d-r}(t),   (4.2.22)
where Λ_d(t) and Λ_{d-r}(t) are defined over internally similar knot vectors, and M_{d,d-r} ∈ R^{N×(N-r)}.
Theorem 4.2.7 ([Suryawan, 2011]). A set of B-spline basis functions Λ_{d-r}(t) of a certain degree defined on a knot vector can be represented as a linear combination of B-spline basis functions Λ_d(t) of a higher degree defined over an internally similar knot vector; this applies segment-wise, i.e.,
\Lambda_{d-r}(t) = S_{k,d-r,d}\, \Lambda_d(t), \quad \forall t \in [\tau_k, \tau_{k+1}],   (4.2.23)
where S_{k,d-r,d} ∈ R^{(N-r)×N}.
The two important elements in an optimization problem are the cost function and the constraints. When replacing the B-splines parameterization in the optimization problem formulation (4.2.13), the cost and the constraints are rewritten in function of the control points Z, which become the new variables of the optimization. In the particular case of a quadratic cost function involving just the system states, if the dependency between the flat output z(t) and the states is linear, then the cost remains quadratic. The following proposition describes this particular case.
Proposition 4.2.8 ([Suryawan, 2011]). Let z(t) be defined by (4.2.12)-(4.2.16) over the time interval [t_0, t_f]. Consider a real function V(z) ∈ R and matrices A_1, A_2, A_3 of suitable dimensions such that:
V(z) = \int_{t_0}^{t_f} \Big( A_1 z^{(r)}(t) + A_2 z(t) + z^T(t) A_3\, z(t) \Big)\, dt.   (4.2.24)
The previous cost can be described by a quadratic function of the control points Z as:
V(Z) = \sum_{i=1}^{N} A_{2,i}\, z_i + \sum_{i=1}^{N} \sum_{j=1}^{N} z_i^T A_{3,i,j}\, z_j,   (4.2.25)
where the weight matrices A_{2,i}, A_{3,i,j} with i, j = 1, \dots, N are given by:
A_{2,i} = \int_{t_0}^{t_f} \Big( A_1 \lambda_i^{(r)}(t) + A_2 \lambda_i(t) \Big)\, dt, \qquad A_{3,i,j} = \int_{t_0}^{t_f} \lambda_i(t)\, A_3\, \lambda_j(t)\, dt.   (4.2.26)
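In practice the weights (4.2.26) can be assembled by numerical quadrature; the following Python sketch computes the matrix of integrals of λ_i(t)λ_j(t) (i.e., A_{3,i,j} for a scalar flat output with A_3 = 1) on an illustrative knot vector.

import numpy as np
from scipy.interpolate import BSpline

degree = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)
N = len(knots) - degree - 1                                   # number of B-splines

# each basis element uses degree+2 consecutive knots
basis = [BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False) for i in range(N)]
ts = np.linspace(0.0, 4.0, 2001)
vals = np.array([np.nan_to_num(b(ts)) for b in basis])        # (N, len(ts)), zero outside support
dt = ts[1] - ts[0]
gram = vals @ vals.T * dt                                     # Riemann-sum quadrature of the integrals
print(gram.shape, np.allclose(gram, gram.T))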
The authors in [START_REF] Stoican | Constrained trajectory generation for uav systems using a b-spline parametrization[END_REF] propose some sufficient conditions on the control points Z which guarantee the satisfaction of constraint g(z) ∈ G at all the time, where g(z) is a quadratic function of z(t), and G is a convex set (see Lemma 1 and Proposition 1 in [START_REF] Stoican | Constrained trajectory generation for uav systems using a b-spline parametrization[END_REF]). In this work, the studied constraint function g(z) is obtained by the multiplication of the flat outputs and their derivatives. By extending this result for the case of the addition operator, we obtain the a result presented in two following proposition.
Proposition 4.2.9. Let z(t) be defined by (4.2.12)-(4.2.16), r be a natural number with 1 ≤ r ≤ d-2, \underline{g}, \overline{g} ∈ R be scalars, A_1, A_2, A_3 be matrices of suitable dimensions, and g(z) ∈ R be a function such that
g(z) = A_1 z^{(r)}(t) + A_2 z(t) + z^T(t) A_3\, z(t).   (4.2.27)
A sufficient condition for the constraint \underline{g} \le g(z) \le \overline{g}, ∀t ∈ [τ_k, τ_{k+1}], is given by
\underline{g} \le p_{k,i,j} \le \overline{g},   (4.2.28)
where k-d+2 \le i, j \le k+1; p_{k,i,j} is defined by
p_{k,i,j} = A_1 Z M_{d,d-r} S_{k,d-r,d,i} + A_2 z_i + z_i^T A_3\, z_j,   (4.2.29)
where S_{k,d-r,d,i} is the i-th column of the matrix S_{k,d-r,d} defined in Theorem 4.2.7.
Proof. Consider the time interval [τ_k, τ_{k+1}]. Let β_{i,j,d}(t) = λ_{i,d}(t) λ_{j,d}(t) with 1 ≤ i, j ≤ N.
From the parameterization (4.2.12) and the B-spline property (4.2.19), we obtain:
z(t) = \sum_{i=1}^{N} z_i\, \lambda_{i,d}(t) = \sum_{i=1}^{N} z_i\, \lambda_{i,d}(t) \sum_{j=1}^{N} \lambda_{j,d}(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} z_i\, \beta_{i,j,d}(t).   (4.2.30)
Using the parameterization (4.2.12) and the B-spline properties (4.2.22)-(4.2.23), we describe the time derivative of the flat output as:
z^{(r)}(t) = \sum_{i=1}^{N} Z M_{d,d-r} S_{k,d-r,d,i}\, \lambda_{i,d}(t),
which, multiplying by the partition of unity (4.2.19), becomes:
z^{(r)}(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} Z M_{d,d-r} S_{k,d-r,d,i}\, \beta_{i,j,d}(t).   (4.2.32)
Substituting the parameterization (4.2.12) to the third term of g(z) in (4.2.27), we derive:
z^T(t) A_3\, z(t) = \sum_{i=1}^{N} \sum_{j=1}^{N} z_i^T A_3\, z_j\, \beta_{i,j,d}(t).   (4.2.33)
Combining (4.2.30), (4.2.32) and (4.2.33) in (4.2.27) and using the definition (4.2.29) of p_{k,i,j}, we obtain:
g(z) = \sum_{i=1}^{N} \sum_{j=1}^{N} p_{k,i,j}\, \beta_{i,j,d}(t).   (4.2.34)
Since β_{i,j,d}(t) with k-d+1 ≤ i, j ≤ k satisfies the conditions (4.2.21), g(z) remains in the convex hull of {p_{k,i,j}} with k-d+1 ≤ i, j ≤ k. Thus, if {p_{k,i,j}} with k-d+1 ≤ i, j ≤ k satisfies (4.2.28), the continuous-time constraint \underline{g} \le g(z) \le \overline{g}, ∀t ∈ [τ_k, τ_{k+1}], is satisfied.
Proposition 4.2.9 is also valid for the equality constraint which is proved by the following corollary.
Corollary 4.2.10. Let z(t) be defined by (4.2.12)-( 4.2.16), r be a natural number with 1 ≤ r ≤ d-2, g, g ∈ R be scalars, A 1 , A 2 , A 3 be matrices of suitable dimensions, and g(z) ∈ R be the function defined by (4.2.27).
A sufficient condition for the constraint g(z) = 0, ∀t ∈ [τ_k, τ_{k+1}], is that
p_{k,i,j} = 0, \quad k-d+2 \le i, j \le k+1,
where p k,i,j is defined by (4.2.29).
Generally, we have many constraints which are gathered in g(z) ∈ G, where g(z) is a vector, and G is a convex set. Based on Proposition 4.2.9 and Corollary 4.2.10, we propose a sufficient condition for constraint g(z) ∈ G in what follows.
Proposition 4.2.11. Let z(t) be defined by (4.2.12)-( 4.2.16), r be a natural number with 1 ≤ r ≤ d, G ⊂ R Ng be a convex set, A 1 , A 2 , A 3,1 , . . . , A 3,N be matrices of suitable dimensions, and g(z) ∈ R Ng such that
g(z) = A_1 z^{(r)}(t) + A_2 z(t) + \sum_{l=1}^{m} z_l(t)\, A_{3,l}\, z(t) \in G, \quad \forall t \in [\tau_k, \tau_{k+1}],   (4.2.35)
where z_l(t) ∈ R is the l-th coordinate of the flat output z(t). A sufficient condition for the constraint (4.2.35) is that
p_{k,i,j} \in G,   (4.2.36)
where k-d+2 \le i, j \le k+1 and p_{k,i,j} is defined by:
p_{k,i,j} = A_1 Z M_{d,d-r} S_{k,d-r,d,i} + A_2 z_i + \sum_{l=1}^{m} z_{l,i}\, A_{3,l}\, z_j,   (4.2.37)
where Z ∈ R^{m×N} denotes the control point matrix, z_j ∈ R^m denotes the j-th control point, and z_{l,i} ∈ R denotes the l-th coordinate of the i-th control point.
Consequently, if the cost function V (z) in (4.2.7a) and the constraints (4.2.7c)-( 4.2.7d) admit the forms (4.2.24) and (4.2.35), then the optimization problem (4.2.13) can be rewritten:
\min_{Z} \; V(Z)   (4.2.38a)
subject to
\Phi_x(t_0, Z) = x_0,   (4.2.38b)
p_{k,i,j}(Z) \in G, \quad k-d+2 \le i, j \le k+1,   (4.2.38c)
(t_f, \Phi_x(t_f, Z)) \in S_f.   (4.2.38d)
where k denotes the B-spline time interval index, V (Z) is defined by (4.2.25), and p k,i,j (Z) is defined by (4.2.37).
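The structure of (4.2.38) is that of a finite-dimensional program in the control points only. The Python sketch below illustrates this structure with placeholder cost and constraint functions standing in for (4.2.25) and for the conditions p_{k,i,j}(Z) ∈ G; it is a schematic template rather than the actual elevator problem.

import numpy as np
from scipy.optimize import minimize

m, N = 3, 8
bound = 10.0                                # hypothetical bound (e.g. a current amplitude limit)

def cost(z_flat):                           # stands for V(Z) in (4.2.25)
    Z = z_flat.reshape(m, N)
    return float(np.sum((Z - 1.0) ** 2))    # hypothetical quadratic cost

def margin(z_flat):                         # >= 0 encodes p_{k,i,j}(Z) in G
    Z = z_flat.reshape(m, N)
    return bound ** 2 - (Z[0] ** 2 + Z[1] ** 2)   # one inequality per control point

res = minimize(cost, np.zeros(m * N), method='SLSQP',
               constraints=[{'type': 'ineq', 'fun': margin}])
print(res.success, res.fun)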
Convex optimization problems can be solved using a variety of algorithms [START_REF] Boyd | Convex Optimization[END_REF]. For example, elementary algorithms with simple computational steps are used for solving convex optimization problems arising in machine learning, data mining, and decision making [Negar, 2014]. Also, the interior point methods are a family of algorithms solving linear optimization programs which come along with an efficient performance guarantee [Wächter, 2002]. Other types of algorithms are Newton's method, gradient and subgradient methods, which combined with primal and dual decomposition techniques it becomes possible to develop a simple distributed algorithms for a problem [START_REF] Stegink | A unifying energy-based approach to stability of power grids with market dynamics[END_REF].
The theory of convex optimization has taken significant strides over the years. However, all approaches fail if the underlying cost function is not explicitly given; the situation is even worse if the cost function is non-convex [Ghadimi and Lan, 2016].
There are specialized solvers which can handle nonlinear optimization problems with relatively large prediction horizon, e.g., BARON [START_REF] Tawarmalani | A polyhedral branch-and-cut approach to global optimization[END_REF], FMINCON [START_REF] Coleman | On the convergence of interior-reflective newton methods for nonlinear minimization subject to bounds[END_REF], IPOPT [START_REF] Biegler | Large-scale nonlinear programming using ipopt: An integrating framework for enterprise-wide dynamic optimization[END_REF].
Model Predictive Control for tracking
After obtaining the control and state reference profiles, the tracking controller is designed to stabilize the system state around the generated reference. Since the generated reference profiles may stay close to the limit, the constraints should be taken into account in the tracking control.
Among the control methods dealing with constraints, MPC is considered a good candidate [Rawlings and Mayne, 2009, Ellis et al., 2017]. Depending on the cost function, we usually distinguish two types of MPC: tracking MPC [Maciejowski, 2002, Rawlings and Mayne, 2009] and economic MPC [Angeli et al., 2012, Grüne, 2013, Ellis et al., 2017]. Tracking MPC penalizes the discrepancies between the actual and reference profiles (state, output, control variables). Economic MPC penalizes a general "profit" cost, e.g., dissipated energy or electricity cost. Examples of cost functions for a tracking and an economic MPC are illustrated in Fig. 4.2.4. MPC is the on-line version of the constrained optimal control (see also Annex D). However, to make it practical for real-time control, one:
• discretizes the system dynamics with time (see Chapter 3 for details on the discretization of PH systems),
• replaces the cost and constraint functions by functions of the discrete-time variables,
• studies the finite horizon, i.e., the fixed-time free-endpoint target set S_f = {t_1} × R^n in (4.2.4).
For simplicity, we shift the coordinate origin to the generated reference trajectory by the following state and control transformations:
\tilde{x}(t) \triangleq x(t) - \bar{x}(t), \qquad \tilde{u}(t) \triangleq u(t) - \bar{u}(t),   (4.2.39)
where \bar{x}(t), \bar{u}(t) denote the generated reference trajectories.
Note that (4.2.39) is a time-varying transformation which leads to the time-varying constraints of the shifted state and control variables such that:
\tilde{x}(t) \in \tilde{X}(t), \qquad \tilde{u}(t) \in \tilde{U}(t).   (4.2.40)
The system dynamics in the new coordinate is called the discrepancy dynamics which will be used in the MPC formulation. Since the MPC is usually formulated in the discrete-time form, we must choose a suitable discrete-time discrepancy dynamics. We have at least two ways to obtain this dynamics corresponding to the two procedures: discretization after transformation, and transformation after discretization. Both of them will be investigated for the electro-mechanical elevator in the next section.
Remark 4.2.12. For the PCH system, the transformed system obtained by a time-varying coordinate transformation is not a PCH [START_REF] Stadlmayr | Tracking control for Port-Hamiltonian systems using feedforward and feedback control and a state observer[END_REF]. This remark is also valid for the discrete-time case.
After partitioning the prediction time interval [t, t_1] into N_p equal time intervals, we denote the predicted values of the discrete-time discrepancy variables by \tilde{x}(t+jh|t), \tilde{u}(t+jh|t), where j = 0, \dots, N_p and h = (t_1 - t)/N_p is the time step. We consider the recursive construction of optimal open-loop state and control sequences
\tilde{X}(t) \triangleq \{\tilde{x}(t|t), \dots, \tilde{x}(t+jh|t), \dots, \tilde{x}(t+(N_p-1)h|t), \tilde{x}(t+N_p h|t)\},
\tilde{U}(t) \triangleq \{\tilde{u}(t|t), \dots, \tilde{u}(t+jh|t), \dots, \tilde{u}(t+(N_p-1)h|t)\},
at instant t over a finite receding horizon N_p, which leads to a feedback control policy by the effective application of the first control action \beta(t, \tilde{x}) \triangleq \tilde{u}(t|t) as system input:
\tilde{U}^*(t) = \arg\min_{\tilde{U}(t)} \; \tilde{V}_f(\tilde{x}(N_p|t)) + \sum_{j=0}^{N_p-1} \tilde{V}_r(\tilde{x}(j|t), \tilde{u}(j|t))   (4.2.41a)
subject to
\tilde{x}(t+(j+1)h|t) = \hat{g}(\tilde{x}(t+jh|t), \tilde{u}(t+jh|t)), \quad \tilde{x}(t|t) = \tilde{x}(t), \quad j = 0, \dots, N_p-1,   (4.2.41b)
\tilde{x}(t+jh|t) \in \tilde{X}(t+jh), \quad j = 1, \dots, N_p,   (4.2.41c)
\tilde{u}(t+jh|t) \in \tilde{U}(t+jh), \quad j = 0, \dots, N_p-1,   (4.2.41d)
\tilde{x}(t+N_p h|t) \in \tilde{X}_f(t_1),   (4.2.41e)
where (4.2.41b) describes the discrete-time model of (4.2.5b), and ĝ(.) depends on the time discretization method (see also Chapter 3). In the MPC formulation (4.2.41), the tuning control parameters are the final cost, Ṽf (x(N p |t)), the stage cost, Ṽr (x(j|t), ũ(j|t)) and the final constraint, Xf (t 1 ). Let x(t + jh|t, Ũ) denote the state variable at the instant t + jh corresponding to the application of the control sequence Ũ to the system dynamics (4.2.41b). Let the sets of the state and control sequences, XS(t), ŨS(t), and the state sequences, X(t, Ũ), be defined by:
\tilde{X}S(t) \triangleq \tilde{X}(t) \times \dots \times \tilde{X}(t+jh) \times \dots \times \tilde{X}(t+N_p h - h) \times \tilde{X}_f(t_1),
\tilde{U}S(t) \triangleq \tilde{U}(t) \times \dots \times \tilde{U}(t+jh) \times \dots \times \tilde{U}(t+N_p h - h),
\tilde{X}(t, \tilde{U}) \triangleq \{\tilde{x}(t|t, \tilde{U}), \dots, \tilde{x}(t+jh|t, \tilde{U}), \dots, \tilde{x}(t+N_p h - h|t, \tilde{U}), \tilde{x}(t+N_p h|t, \tilde{U})\}.
Let XNp (t) ⊂ X(t) denote the set including all the state variables such that the state sequence X(t, Ũ) belongs to XS(t), i.e.
\tilde{X}_{N_p}(t) = \big\{\tilde{x} \in \tilde{X}(t) \;\big|\; \exists\, \tilde{U} \in \tilde{U}S(t),\; \tilde{X}(t, \tilde{U}) \in \tilde{X}S(t)\big\}.
Let V min (t, x) be the minimum of the cost function in the optimization problem (4.2.41). Based on the presented elements, the following theorem recalls the stability conditions for the closed-loop time-varying system using the tracking MPC formulated in (4.2.41).
Theorem 4.2.13 ([Rawlings and Mayne, 2009]). Suppose the following assumptions hold.
• The function ĝ(.), Ṽr (.), and Ṽf (.) are continuous; for all t ≥ t 0 , ĝ(0, 0) = 0, Ṽr (0, 0) = 0, Ṽf (0) = 0.
• For all t ≥ t 0 , X(t) and Xf are closed, Xf ⊂ X(t) and Ũ(t) are compact; each set contains the origin.
• For all t ≥ t 0 and ∀x(t) ∈ Xf , there exists ũ ∈ Ũ(t) such that ĝ(x(t), ũ) ∈ Xf .
• For all t ≥ t 0 ,
min ũ∈ Ũ(t) { Ṽf (ĝ(x(t), ũ)) + Ṽr (x(t), ũ)| ĝ(x(t), ũ) ∈ Xf } ≤ Ṽf (x(t)), ∀x(t) ∈ Xf .
• The terminal cost Ṽf (.) and terminal constraint set Xf are time invariant.
• The running cost Ṽr (.) and the terminal cost Ṽf (.) satisfy, for all t ≥ t 0 ,
Ṽr (x(t), ũ) ≥ α 1 (|x|), ∀x ∈ XNp (t), ∀ũ ∈ Ũ(t), Ṽf (x) ≤ α 2 (|x|), ∀x ∈ Xf ,
in which α 1 (.) and α 2 (.) are K ∞ functions3 .
• There exists a K ∞ function α(.) such that
V (t, x) ≤ α(|x|), ∀x ∈ XNp (t), t ≥ t 0 .
Then, for each initial time t ≥ t 0 , the origin is asymptotically stable with a region of attraction XNp (t) for the time-varying system x(j + 1|t) = ĝ(x(j|t), ũ(τ, x)), τ ≥ t.
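For illustration, the receding-horizon mechanism of (4.2.41) can be sketched as follows in Python, under strong simplifying assumptions (linear discrepancy dynamics, quadratic stage and terminal costs, a simple input bound, and illustrative numerical values); at each instant only the first optimal control action is applied, as in the feedback policy β(t, x~).

import numpy as np
from scipy.optimize import minimize

# placeholder discrepancy dynamics x~_{i+1} = A x~_i + B u~_i and weights
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Qw, Rw, Np = np.diag([10.0, 1.0]), np.array([[0.1]]), 15
u_max = 2.0

def predict(x0, useq):
    xs, x = [x0], x0
    for u in useq.reshape(Np, 1):
        x = A @ x + B @ u
        xs.append(x)
    return xs

def mpc_cost(useq, x0):
    xs = predict(x0, useq)
    stage = sum(x @ Qw @ x + u @ Rw @ u for x, u in zip(xs[:-1], useq.reshape(Np, 1)))
    return stage + xs[-1] @ Qw @ xs[-1]          # terminal cost reuses Qw here

x = np.array([1.0, -0.5])                        # initial tracking error
for step in range(30):
    res = minimize(mpc_cost, np.zeros(Np), args=(x,),
                   bounds=[(-u_max, u_max)] * Np, method='SLSQP')
    u0 = res.x[0]                                # apply only the first control action
    x = A @ x + B @ np.array([u0])
print(np.linalg.norm(x))                         # the tracking error is driven toward 0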
PH formalism for tracking MPC: As presented above, the PH formalism is useful for the system stability analysis and for the control design based on the interconnection, dissipation and stored energy of the system dynamics [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]]. An interesting property of PH system is the passivity where the energy (Hamiltonian) is considered as a Lyapunov function. There are many control methods developed for the PH system as presented in [van der Schaft andJeltsema, 2014, Wei and[START_REF] Wei | [END_REF]], e.g., Control by Interconnection, Interconnection and Damping Assignment Passivity-Based Control (IDA PBC). However, all these methods can not explicitly deal with the state and input constraints while MPC is chosen for this purpose. While the theory on linear MPC gained ground over the last decades [START_REF] Rawlings | Model Predictive Control: Theory and Design[END_REF], the non linear and economic MPC causes novel behavior. For example, stability demonstration for the closedloop nonlinear system is difficult since a Lyapunov function is not easy to find. From the previous arguments, while both PH formalism and MPC are established tools in the literature, to the best of our knowledge they have never been considered together by the control community.
Specifically, we propose to use the PH formalism such that, via an MPC control action, the closed-loop dynamics are describing a Port-Hamiltonian system. This is done in three steps: i) choosing the desired PH closed-loop system; ii) finding the explicit control laws and iii) finding the corresponding MPC.
Since any MPC-based closed-loop system is in fact a switched system [Bemporad et al., 2002], the desired PH system must also be a switched PH system. [Kloiber, 2014] proposes design methods for stable switched PH systems. Next, from the explicit form of the closed-loop system, we find the explicit control laws by solving the matching equation. Then, the process of finding MPC laws corresponding to given explicit laws is seen as an inverse parametric programming problem [Nguyen, 2015].
Constrained optimization-based control for the electro-mechanical elevator
Considering the theoretical tools presented above, this section concentrates on the control of the electro-mechanical elevator system for minimizing the dissipated energy while respecting some physical constraints.
Let us begin by presenting the dynamical electro-mechanical elevator model used in the optimization problem.
Electro-mechanical elevator system: As presented in Section 2.4, the electro-mechanical elevator is represented by the combination of the AC/DC converter, the Permanent Magnet Synchronous Machine (PMSM) and the mechanical elevator. Using (2.4.24) and (2.4.26), we write the system dynamics in compact form:
\dot{x}_l(t) = [J_l(x_l) - R_l]\, \nabla H_l(x_l) + G_{ll}\, v_l(t), \qquad i_l(t) = G_{ll}^T\, \nabla H_l(x_l),   (4.3.1)
where x_l(t) = [\phi_{ld}(t)\; \phi_{lq}(t)\; p_l(t)\; \theta_m(t)]^T ∈ R^4 is the state vector consisting of the direct stator flux, the quadrature stator flux, the mechanical momentum and the pulley angle. Furthermore, in (4.3.1) the input vector in R^2, obtained as the product d_l(t) v_l(t) of the converter duty cycle and the DC bus voltage, also represents the control variables describing the direct and quadrature voltages of the motor stator; i_l(t) ∈ R^2 is the output vector describing the direct and quadrature currents of the motor stator, d_l(t) ∈ R^2 is the AC/DC converter duty cycle defined by (2.4.27), and v_l(t) ∈ R is the DC bus voltage at the connection point with the corresponding converter. The structure matrices
J_l(x_l) ∈ R^{4×4}, R_l ∈ R^{4×4} and G_{ll} ∈ R^{4×2} are given by:
J_l = \begin{bmatrix} 0 & 0 & \phi_{lq}(t) & 0 \\ 0 & 0 & -\phi_{ld}(t) & 0 \\ -\phi_{lq}(t) & \phi_{ld}(t) & 0 & -1 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad R_l = \begin{bmatrix} R_l & 0 & 0 & 0 \\ 0 & R_l & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad G_{ll} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},   (4.3.2)
where R_l is the phase resistance of the PMSM stator. The Hamiltonian, which has a quadratic form, describes the magnetic energy in the PMSM stator, and the kinetic and potential energy in the mechanical elevator:
H_l(x_l) = \frac{1}{2L_d}\Big(\phi_{ld}(t) - \frac{3}{2}\phi_f\Big)^2 + \frac{1}{2}\frac{\phi_{lq}^2(t)}{L_q} + \frac{1}{2}\frac{p_l^2(t)}{I_l} - \Gamma_{res}\,\theta_m(t) = Q_{l0} + Q_{l1}^T x_l(t) + \frac{1}{2} x_l^T(t) Q_{l2}\, x_l(t),   (4.3.3)
where φ f is the magnetic flux of the PMSM rotor, L d , L q are the direct and quadrature inductances of the PMSM stator, I l is the mechanical inertia, Γ res is the mechanical torque caused by the gravity,
Q_{l0} ∈ R, Q_{l1} ∈ R^{4×1}, Q_{l2} ∈ R^{4×4} are the weight matrices given by:
Q_{l0} = \frac{3}{4}\frac{\phi_f^2}{L_d^2}, \quad Q_{l1} = \begin{bmatrix} -\frac{3}{2}\frac{\phi_f}{L_d} \\ 0 \\ 0 \\ -\Gamma_{res} \end{bmatrix}, \quad Q_{l2} = \begin{bmatrix} \frac{1}{L_d} & 0 & 0 & 0 \\ 0 & \frac{1}{L_q} & 0 & 0 \\ 0 & 0 & \frac{1}{I_l} & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.   (4.3.4)
Dissipated energy: The dissipated energy in the PMSM, which will be also added in the cost function, is expressed by:
V_l(x_l) = \int_{t_0}^{t_f} \nabla H_l(x_l)^T R_l\, \nabla H_l(x_l)\, dt,   (4.3.5)
where t 0 and t f are the initial and final instants of an elevator travel, H(x l ) is the Hamiltonian defined in (4.3.3), R l is the resistive matrix defined in (4.3.2). Constraints: In Section 2.6.2 we enumerated some physical constraints which are rewritten here for the direct and quadrature voltages and currents of the motor stator, and for the rotor speed:
\|v_l(t)\|_2 \le \frac{v_{ref}}{\sqrt{2}},   (4.3.6a)
\|i_l(t)\|_2 \le \frac{I_{l,max}}{\sqrt{2}},   (4.3.6b)
\omega_{l,min} \le \omega_l(t) \le \omega_{l,max},   (4.3.6c)
where v ref is the reference voltage of the DC bus, I l,max is the maximal PMSM current amplitude, ω l (t) = p l (t) I l ∈ R is the PMSM rotor speed, ω l,min , ω l,max ∈ R are, respectively, the minimal and maximal rotor speed of the mechanical elevator. Moreover, one of the momentum limits ω l,min , ω l,max is zero. The initial elevator speed and position fulfill the following constraints:
p l (t 0 ) = 0, θ m (t 0 ) = θ 0 , (4.3.7)
Similarly, the target set at the final time of the elevator travel is given by:
S_{l,f} = \{t_f\} \times \big\{x_l \in R^4 \;|\; p_l = 0,\; \theta_m = \theta_f\big\},   (4.3.8)
where θ f is the final pulley angle which usually equals the maximal angle if the cabin goes down and equals the minimal angle if the cabin goes up.
Constrained optimal control problem: Combining all the elements presented above, the constrained optimal control problem is formulated as:
\min \; V_l(x_l)   (4.3.9a)
subject to
the system dynamics (4.3.1) and the initial conditions (4.3.7),   (4.3.9b)
the voltage, current and speed constraints (4.3.6a)-(4.3.6c),   (4.3.9c)
(t_f, x_l(t_f)) \in S_{l,f},   (4.3.9d)
where V l (x l ) is the dissipated energy defined by (4.3.5), S l,f is the target set defined by (4.3.8). This set implies that (4.3.9) is a fixed-time and partial fixed-endpoint optimization problem.
In the optimization problem (4.3.9), the cost function is quadratic, the constraints are nonlinear due to the ellipsoidal constraints (4.3.6a)- (4.3.6b). A finite-dimensional optimization problem for approximating (4.3.9) will be formulated in the next subsection by using appropriate parameterization variables and their B-splines-based parameterization.
Speed profile generation
This subsection reformulates the optimization problem (4.3.9) using differential flatness and B-splines-based parameterization. In most cases, there exist different flat outputs for a system dynamics, but there is no general method to find them. For simplicity we concentrate on the flat outputs represented by the state, input and output variables. Therefore, a first choice of flat outputs for the dynamics (4.3.1) is given by the stator direct flux z 1 (t) = φ ld (t) and the pulley angle z 2 (t) = θ m (t):
z(t) = [z_1(t)\; z_2(t)]^T = [\phi_{ld}(t)\; \theta_m(t)]^T \in R^2.   (4.3.10)
Using Definition 4.2.1, the rest of the state and control variables are described in function of the flat output:
\phi_{ld}(t) = z_1(t),   (4.3.11a)
\phi_{lq}(t) = \frac{I_l \ddot{z}_2(t) + \Gamma_{res}}{\frac{L_d - L_q}{L_d L_q}\, z_1(t) + \frac{3}{2}\frac{\phi_f}{L_d}},   (4.3.11b)
p_l(t) = I_l\, \dot{z}_2(t),   (4.3.11c)
\theta_m(t) = z_2(t),   (4.3.11d)
u_{l,1}(t) = \dot{z}_1(t) + R_l\, \frac{z_1(t) - \frac{3}{2}\phi_f}{L_d} - \frac{I_l \ddot{z}_2(t) + \Gamma_{res}}{\frac{L_d - L_q}{L_d L_q}\, z_1(t) + \frac{3}{2}\frac{\phi_f}{L_d}}\, \dot{z}_2(t),   (4.3.11e)
u_{l,2}(t) = \frac{d}{dt}\left[\frac{I_l \ddot{z}_2(t) + \Gamma_{res}}{\frac{L_d - L_q}{L_d L_q}\, z_1(t) + \frac{3}{2}\frac{\phi_f}{L_d}}\right] + R_l\, \frac{I_l \ddot{z}_2(t) + \Gamma_{res}}{\frac{L_d - L_q}{L_d}\, z_1(t) + \frac{3}{2}\frac{L_q \phi_f}{L_d}} + z_1(t)\, \dot{z}_2(t),   (4.3.11f)
We can easily see that the above choice of the flat output makes the cost and the constraint functions in (4.3.5)-(4.3.8) more complex, i.e., with the B-splines-based parameterization we cannot rewrite the cost as a simple function of the control points. Furthermore, (4.3.11) requires complex numerical computations. In what follows, we choose the flat output of a subsystem of the electro-mechanical elevator and use it to parametrize all the variables. First of all, we observe that the dynamical system (4.3.1)-(4.3.4) can be decomposed into two subsystems, where one of them has the form presented in Example 4.2.3 and the other one has first-order dynamics:
\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} J_{11}(x_1) - R_{11} & J_{12} \\ -J_{12}^T & 0 \end{bmatrix} \begin{bmatrix} \nabla H_1(x_1) \\ \nabla H_2(x_2) \end{bmatrix} + \begin{bmatrix} G_1 \\ 0 \end{bmatrix} v_l(t), \qquad i_l(t) = \begin{bmatrix} G_1 \\ 0 \end{bmatrix}^T \begin{bmatrix} \nabla H_1(x_1) \\ \nabla H_2(x_2) \end{bmatrix},   (4.3.12)
where the state vectors x_1(t) ∈ R^3, x_2(t) ∈ R are given as:
x_1(t) = \begin{bmatrix} \phi_{ld}(t) & \phi_{lq}(t) & p_l(t) \end{bmatrix}^T, \qquad x_2(t) = \theta_m(t),   (4.3.13)
the structure matrices J_{11} ∈ R^{3×3}, J_{12} ∈ R^{3×1}, G_1 ∈ R^{3×2}, R_{11} ∈ R^{3×3} are given by:
J_{11}(x_1) = \begin{bmatrix} 0 & 0 & \phi_{lq} \\ 0 & 0 & -\phi_{ld} \\ -\phi_{lq} & \phi_{ld} & 0 \end{bmatrix}, \quad J_{12} = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}, \quad R_{11} = \begin{bmatrix} R_l & 0 & 0 \\ 0 & R_l & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad G_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix},   (4.3.14)
the Hamiltonian, described by H_1(x_1) and H_2(x_2), is:
H_1(x_1) = \frac{1}{2L_d}\Big(\phi_{ld}(t) - \frac{3}{2}\phi_f\Big)^2 + \frac{1}{2}\frac{\phi_{lq}^2(t)}{L_q} + \frac{1}{2}\frac{p_l^2(t)}{I_l} = Q_{10} + Q_{11}^T x_1(t) + \frac{1}{2} x_1^T(t) Q_{12}\, x_1(t), \qquad H_2(x_2) = -\Gamma_{res}\,\theta_m(t),   (4.3.15)
with Q_{10} ∈ R, Q_{11} ∈ R^{3×1} and Q_{12} ∈ R^{3×3}. In (4.3.12), the dynamics coupling is described by the interconnection matrix J_{12}. Let u_c(t) and y_c(t) denote the conjugate variables at the interconnection port between x_1(t) and x_2(t). Then, the dynamics corresponding to the state variable x_1(t) are given by:
\dot{x}_1(t) = [J_{11}(x_1) - R_{11}]\,[Q_{11} + Q_{12}\, x_1(t)] + G\, u_1(t), \qquad y_1(x_1) = G^T [Q_{11} + Q_{12}\, x_1(t)],   (4.3.16)
and the dynamics corresponding to x_2(t) are described by:
\dot{x}_2(t) = -y_c(x_1), \qquad u_c = -\Gamma_{res},   (4.3.17)
where the input vector u_1(t) ∈ R^3, the output vector y_1(t) ∈ R^3 and the input matrix G ∈ R^{3×3} of the first subsystem (4.3.16) are defined by:
u_1(t) = \begin{bmatrix} v_l(t) \\ u_c(t) \end{bmatrix}, \qquad y_1(t) = \begin{bmatrix} i_l(t) \\ y_c(t) \end{bmatrix}, \qquad G = \begin{bmatrix} G_1 & J_{12} \end{bmatrix} = I_3.   (4.3.18)
As also shown in Example 4.2.3, the flat output of (4.3.16) can be chosen as:
z(t) = y 1 (t). (4.3.19)
Using (4.3.16) and the flat output z(t) in (4.3.19), we can express the state and control variables x_1(t), u_1(t) as:
x_1(z) = \left(G^T Q_{12}\right)^{-1}\left(z(t) - G^T Q_{11}\right), \qquad u_1(z) = G^{-1}\left\{\left(G^T Q_{12}\right)^{-1}\dot{z}(t) - [J_{11}(z) - R_{11}]\, G^{-T} z(t)\right\}.   (4.3.20)
Note that z(t) is the flat output of the system (4.3.16), which cannot in general describe the state variable x_2(t) of the system (4.3.17) through its derivatives. However, from (4.3.17), (4.3.18) and (4.3.19), the state x_2(z) can be described by:
x 2 (z) = x 2 (t 0 ) - t t0 y c (t)dt = x 2 (t 0 ) -0 0 1 t t0 z(t)dt. (4.3.21)
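To make the flatness map concrete, the following Python sketch evaluates (4.3.20)-(4.3.21) numerically along a sampled flat-output trajectory. The matrices G, Q_11, Q_12, R_11 and the trajectory itself are illustrative placeholders, not the thesis' numerical data; only the algebraic structure of the map is reproduced.

```python
import numpy as np

# Placeholder system data (illustrative only, not the thesis' values)
G = np.eye(3)                      # G = [G_1  J_12], taken as identity as in (4.3.18)
Q11 = np.array([-1.0, 0.0, 0.0])   # affine term of the gradient of H_1 (placeholder)
Q12 = np.diag([10.0, 12.0, 0.5])   # Hessian of H_1 (placeholder 1/L_d, 1/L_q, 1/I_l)
R11 = np.diag([0.05, 0.05, 0.0])   # stator resistances, no mechanical damping

def J11(x1):
    """Interconnection matrix J_11(x_1) as in (4.3.14)."""
    phi_d, phi_q, _ = x1
    return np.array([[0.0, 0.0,  phi_q],
                     [0.0, 0.0, -phi_d],
                     [-phi_q, phi_d, 0.0]])

def states_from_flat(z, zdot):
    """x_1(z) and u_1(z) following (4.3.20)."""
    x1 = np.linalg.solve(G.T @ Q12, z - G.T @ Q11)
    inner = np.linalg.solve(G.T @ Q12, zdot) - (J11(x1) - R11) @ np.linalg.solve(G.T, z)
    return x1, np.linalg.solve(G, inner)

# Evaluate along a sampled (placeholder) flat-output trajectory z(t) = y_1(t)
t = np.linspace(0.0, 1.0, 201)
z = np.stack([0.1 * np.sin(2 * np.pi * t), 0.2 * np.ones_like(t), 3.0 * t]).T
zdot = np.gradient(z, t, axis=0)
x2 = -np.cumsum(z[:, 2]) * (t[1] - t[0])          # x_2(z) from (4.3.21) with x_2(t_0) = 0
x1_traj, u1_traj = zip(*(states_from_flat(zi, zdi) for zi, zdi in zip(z, zdot)))
```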
The cost function in (4.3.5) is rewritten as a function of the flat output as follows:
V_l(z) = \int_{t_0}^{t_f} z(t)^T R_z\, z(t)\,dt, \qquad (4.3.22)
where the weight matrix R_z \in \mathbb{R}^{3\times 3} is given by R_z = G^{-1} R_{11}^T G^{-T}, with G defined in (4.3.18) and R_{11} defined in (4.3.14). Thanks to the formulation (4.3.20)-(4.3.22), we can reformulate the optimization problem (4.3.9) as follows:
\min_{z(t)}\; V_l(z) \qquad (4.3.23a)
subject to
z(t) \in G_y, \qquad (4.3.23b)
u_1(z) \in G_u, \qquad (4.3.23c)
[0\;\;0\;\;1]\,z(t_0) = 0, \qquad (4.3.23d)
[0\;\;0\;\;1]\,z(t_f) = 0, \qquad (4.3.23e)
[0\;\;0\;\;1]\int_{t_0}^{t_f} z(t)\,dt = \theta_f - \theta_0, \qquad (4.3.23f)
where the convex sets G_y, G_u \subset \mathbb{R}^3 are defined by
G_y = \big\{(x_1, x_2, x_3) \in \mathbb{R}^3 \;\big|\; x_1^2 + x_2^2 \le I_{l,max}/\sqrt{2},\;\; \omega_{l,min} \le x_3 \le \omega_{l,max}\big\}, \qquad (4.3.24a)
G_u = \big\{(u_1, u_2, u_3) \in \mathbb{R}^3 \;\big|\; u_1^2 + u_2^2 \le v_{ref}/\sqrt{2},\;\; u_3 = -\Gamma_{res}\big\}. \qquad (4.3.24b)
Remark 4.3.1.
After comparing the optimization problem (4.3.23) with the original problem (4.3.9), we draw several remarks:
• by using the flat output we can eliminate the electro-mechanical elevator dynamics in the optimization problem (4.3.23);
• there is an additional equality constraint in (4.3.23c) and (4.3.24b), which needs to be fulfilled at all times.
Next, by parametrizing the flat output z(t) as in (4.2.12) using the B-splines basis functions, we obtain a finite number of constraints in the optimization problem (4.3.23). In addition, we define the time integrals of the basis functions by:
\lambda_{Ij}(t) = \int_{t_0}^{t} \lambda_j(\tau)\,d\tau, \quad j = 1, \dots, 3, \qquad (4.3.25a) \qquad\qquad \Lambda_I(t) = \int_{t_0}^{t} \Lambda(\tau)\,d\tau. \qquad (4.3.25b)
From (4.2.12) and (4.3.25), we can describe the time integration of z(t) by:
\int_{t_0}^{t} z(\tau)\,d\tau = Z\,\Lambda_I(t). \qquad (4.3.26)
Thanks to the properties of the B-splines \lambda_j(t) enumerated in Section 4.2.2, we can rewrite the cost and the constraint functions in (4.3.23) as explicit functions of the control points z_j. Based on Proposition 4.2.9, the quadratic cost function V_l(z) in (4.3.22) is rewritten as:
V_l(Z) = \sum_{i=1}^{N}\sum_{j=1}^{N} z_i^T \left(\int_{t_0}^{t_f} \lambda_i(t)\,R_z\,\lambda_j(t)\,dt\right) z_j, \qquad (4.3.27)
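As an illustration of how the quadratic cost (4.3.27) becomes a function of the control points, the sketch below builds the matrix of integrals of basis-function products for a clamped B-spline basis using scipy. The basis degree, knot vector and weight matrix R_z are placeholders; the thesis' own numerical values are not used.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.integrate import quad

d, N = 5, 8                              # basis degree and number of B-splines (placeholders)
t0, tf = 0.0, 20.0
interior = np.linspace(t0, tf, N - d + 1)[1:-1]
knots = np.concatenate([np.full(d + 1, t0), interior, np.full(d + 1, tf)])  # clamped knots
basis = [BSpline.basis_element(knots[j:j + d + 2], extrapolate=False) for j in range(N)]

Rz = np.diag([0.05, 0.05, 0.0])          # weight matrix R_z of (4.3.22), placeholder values

def lam(j, t):
    """Evaluate lambda_j(t), forcing zero outside its support."""
    v = basis[j](t)
    return 0.0 if np.isnan(v) else float(v)

# Gram matrix W[i, j] = integral of lambda_i(t) * lambda_j(t) over [t0, tf]
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        W[i, j] = quad(lambda t: lam(i, t) * lam(j, t), t0, tf, limit=200)[0]

def cost(Z):
    """V_l(Z) = sum_{i,j} z_i^T (W[i,j] * R_z) z_j, with Z of shape (N, 3)."""
    return float(np.einsum('ia,ij,ab,jb->', Z, W, Rz, Z))

Z = np.random.rand(N, 3)                 # control points (row j is z_j^T)
print(cost(Z))
```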
where N is the number of the B-splines. Based on Proposition 4.2.11, we formulate the sufficient condition for the satisfaction of the constraint (4.3.23b), y 1 (t) ∈ G y , at all times as:
z j ∈ G y , ∀j = 1, . . . , N. (4.3.28)
From the description of the interconnection matrix J_{11} in (4.3.14) and of the state variables in (4.3.20), we can express J_{11}(z) as:
J_{11}(z) = \sum_{s_1=1}^{3}\sum_{s_2=1}^{3} J_{11,s_2}\, a_{s_2}^T (G^T Q_{12})^{-1} a_{s_1}\, z_{s_1}(t), \qquad (4.3.29)
where z_{s_1}(t) \in \mathbb{R} is the s_1-th coordinate of the flat output vector z(t), and the matrices J_{11,1}, J_{11,2}, J_{11,3} \in \mathbb{R}^{3\times 3} are defined by:
J_{11,1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}, \quad J_{11,2} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix}, \quad J_{11,3} = 0, \qquad (4.3.30)
and the vectors a 1 , a 2 , a 3 ∈ R 3×1 are defined by:
a 1 = 1 0 0 T , a 2 = 0 1 0 T , a 3 = 0 0 1 T . (4.3.31)
Based on Proposition 4.2.11 and (4.3.20), we obtain the sufficient condition for the satisfaction at all times of the constraint (4.3.23c), u_1(t) \in G_u, as:
u_{1,i,j} = A_1\, Z\, M_{d,d-1}\, S_{k,d-1,d,i} + A_2\, z_i + \sum_{s_1=1}^{3} z_{s_1,i}\, A_{3,s_1}\, z_j \;\in\; G_u, \qquad (4.3.32)
for all k, i, j \in \mathbb{N} satisfying d-1 \le k \le n-1, \; k-d+2 \le i \le k+1, \; k-d+2 \le j \le k+1. The matrices A_1, A_2 and A_{3,s_1} are defined by:
A_1 = G^{-1}(G^T Q_{12})^{-1}, \qquad A_2 = G^{-1} R_{11} G^{-T}, \qquad A_{3,s_1} = G^{-1}\sum_{s_2=1}^{3} J_{11,s_2}\, a_{s_2}^T (G^T Q_{12})^{-1} a_{s_1}\, G^{-T}, \quad s_1 = 1, 2, 3. \qquad (4.3.33)
Note that in (4.3.24) and (4.3.32) there is an equality constraint on the input u_1(t), which is in general difficult to satisfy exactly. Therefore, we propose a soft-constrained reformulation [START_REF] Kerrigan | Soft constraints and exact penalty functions in model predictive control[END_REF] of the optimization problem (4.3.23). More precisely, the cost (4.3.27) and the constraint (4.3.23c) are rewritten as:
\check V_l(Z, \varepsilon) = V_l(Z) + Q_\varepsilon\, \varepsilon, \qquad (4.3.34a)
u_{1,i,j} \in \check G_u = \big\{(u_1, u_2, u_3) \;\big|\; u_1^2 + u_2^2 \le v_{ref}/\sqrt{2},\;\; -\Gamma_{res} - \varepsilon \le u_3 \le -\Gamma_{res} + \varepsilon\big\}, \qquad (4.3.34b)
\varepsilon \ge 0, \qquad (4.3.34c)
where Q_\varepsilon \in \mathbb{R} is a positive coefficient and \varepsilon \in \mathbb{R} is the relaxation factor. Using (4.2.19) and (4.3.26), the constraints (4.3.23d)-(4.3.23f) are rewritten as:
z_{3,1} = 0, \qquad z_{3,N} = 0, \qquad [0\;\;0\;\;1]\, Z\, \Lambda_I(t_f) = \theta_f - \theta_0, \qquad (4.3.35)
where z_{3,1}, z_{3,N} \in \mathbb{R} are the third coordinates of the first and N-th control points. By replacing the cost and the constraint functions of (4.3.23) with (4.3.27)-(4.3.35), we obtain a finite-dimensional nonlinear optimization problem in the control points Z and the relaxation factor \varepsilon, which can be handled by solvers such as IPOPT [START_REF] Biegler | Large-scale nonlinear programming using ipopt: An integrating framework for enterprise-wide dynamic optimization[END_REF] even for relatively large prediction horizons.
Once the optimal control points Z are obtained, we can generate the reference profiles for the states x_1(t) (the stator magnetic fluxes and the pulley momentum), for x_2(t) (the pulley angle) and for the control variable v_l(t) (the motor voltages). This is considered numerically in the following simulation results.
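A minimal sketch of how the relaxed profile-generation problem could be posed over the control points is given below, here with scipy.optimize instead of the Yalmip/IPOPT toolchain used in the thesis. The Gram matrix W, the weight R_z, the basis integrals, the bounds and the travel data are illustrative placeholders, and only a subset of the constraints (4.3.28), (4.3.34)-(4.3.35) is represented.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, LinearConstraint, Bounds

N = 8                                  # number of B-spline control points (placeholder)
W = np.eye(N)                          # Gram matrix of basis-function integrals (placeholder)
Rz = np.diag([0.05, 0.05, 0.0])
LamI_tf = np.full(N, 2.5)              # integrals of the basis functions at t_f (placeholder)
I_max, w_min, w_max = 10.0, 0.0, 15.0  # current / speed bounds (placeholders)
theta_travel = 30.0                    # required angle theta_f - theta_0 (placeholder)
Q_eps = 1e5                            # relaxation weight Q_epsilon

def unpack(p):
    return p[:-1].reshape(N, 3), p[-1]                 # control points Z (N x 3) and epsilon

def cost(p):
    Z, eps = unpack(p)
    return np.einsum('ia,ij,ab,jb->', Z, W, Rz, Z) + Q_eps * eps

def current_norm(p):                   # (4.3.28): first two coordinates of each control point
    Z, _ = unpack(p)
    return Z[:, 0]**2 + Z[:, 1]**2

# Speed bounds on the third coordinate of every control point (convex-hull argument)
A_speed = np.zeros((N, 3 * N + 1)); A_speed[np.arange(N), 2 + 3 * np.arange(N)] = 1.0
# Boundary and travel constraints (4.3.35)
A_bnd = np.zeros((3, 3 * N + 1))
A_bnd[0, 2] = 1.0; A_bnd[1, 2 + 3 * (N - 1)] = 1.0; A_bnd[2, 2::3] = LamI_tf
cons = [NonlinearConstraint(current_norm, -np.inf, I_max / np.sqrt(2)),
        LinearConstraint(A_speed, w_min, w_max),
        LinearConstraint(A_bnd, [0.0, 0.0, theta_travel], [0.0, 0.0, theta_travel])]

p0 = np.zeros(3 * N + 1); p0[-1] = 1.0
bounds = Bounds(np.r_[np.full(3 * N, -np.inf), 0.0], np.full(3 * N + 1, np.inf))  # eps >= 0
sol = minimize(cost, p0, constraints=cons, bounds=bounds, method='trust-constr')
Z_opt, eps_opt = unpack(sol.x)
```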
Simulation results
This section presents the simulation results for the electro-mechanical elevator reference profile generation. The forthcoming simulations use the parameters listed in Table 3.3.1 with the numerical data given by the industrial partner SODIMAS (an elevator company from France). Details on the simulation setting are presented in Table 4.3.1. The numerical optimization problem is solved using Yalmip [Löfberg, 2004] and IPOPT [START_REF] Biegler | Large-scale nonlinear programming using ipopt: An integrating framework for enterprise-wide dynamic optimization[END_REF] solvers in Matlab 2013a. A further table compares the torque error, i.e., the relaxation factor \varepsilon in (4.3.34), and the computation time for different B-splines parameterizations (various orders and numbers of B-splines). The numerical value of the relaxation weight Q_\varepsilon in (4.3.34) is Q_\varepsilon = 10^5 [(N m)^{-1}]. We see that with B-splines of order 5 the soft-constraint technique gives the smallest torque error. Thus, 8 B-splines of order 5 are used to generate the reference profiles in the following. Figures 4.3.1 and 4.3.2 illustrate the obtained reference profiles of the output currents i_l(t), of the rotor speed \omega_l(t), of the rotor angle \theta_m(t), of the input voltage v_l(t) and of the magnetic torque u_c, using two different methods:
• Method 1: trapezoidal rotor speed profiles with the MTPA method (see Appendix E) as in [START_REF] Vittek | Energy near-optimal control strategies for industrial and traction drives with a.c. motors[END_REF].
• Method 2: differential flatness with the B-splines-based parameterization as in Section 4.2.1-4.2.2.
Solving the constrained optimization problem (4.3.36), we show in Fig. 4.3.1 that the constraints on the currents, on the voltages and on the rotor speed are respected. Moreover, since the rotor speed varies slowly, the rotor acceleration is small; the motor therefore generates a nearly constant torque, which requires slowly varying currents, and the currents remain close to their limits. Note also that the mechanical torque generated by our method (denoted Method 2 above) is not exactly equal to the elevator gravity torque; it respects the soft constraints defined in (4.3.34). Figure 4.3.2 describes the reference rotor angle profile, obtained by integrating the speed profile over time; it satisfies the boundary constraints on the initial and final rotor angles in (4.3.23f).
Comparing the results of the two methods, we make several remarks. The profiles obtained with Method 1 do not satisfy the electro-mechanical elevator dynamics and lead to a higher dissipated energy than Method 2, namely 2709 J. The profiles obtained with Method 2 do not exactly respect the equality constraint in (4.3.23c), but provide a lower dissipated energy, namely 2646 J. These results can be further improved by modifying the order and the number of the B-splines used. The reference profiles from Method 2 will be used for the tracking control problem below.
Reference profile tracking
In this subsection, we consider a MPC law for tracking the reference profile of the electro-mechanical elevator system. This choice is due to the fact that the reference profiles stay close to the boundary, and the actual signal may not satisfy the constraints. Thus, some of the constraints in (4.3.28) and (4.3.34b) should be taken into account in the tracking control design.
To implement the MPC, we need discretized formulations of the system dynamics, of the cost function, of the constraints and of the reference profiles. First, the simulation time interval [t_0, t_f] is partitioned into N equal subintervals with the time step h = (t_f - t_0)/N. The discrete-time state and control reference profiles are described by:
\bar v_{l,k} = \bar v_l(t_0 + kh), \quad \forall k = 0, \dots, N-1, \qquad \bar x_{l,k} = \bar x_l(t_0 + kh), \quad \forall k = 0, \dots, N. \qquad (4.3.37)
For comparison, we provide in the following two discrete-time models which will be used for the tracking MPC.
Nonlinear discrete-time model: The first model is given by the nonlinear discrete-time dynamics (3.3.6) with Scheme 1 in (3.3.7):
\frac{x_{l,k+1} - x_{l,k}}{h} = [J_l(x_{l,k}) - R_l]\Big[Q_{l1} + Q_{l2}\,\frac{x_{l,k+1} + x_{l,k}}{2}\Big] + G_{ll}\,v_{l,k}, \qquad i_{l,k} = G_{ll}^T\Big[Q_{l1} + Q_{l2}\,\frac{x_{l,k+1} + x_{l,k}}{2}\Big], \qquad (4.3.38)
where J_l \in \mathbb{R}^{4\times 4}, R_l \in \mathbb{R}^{4\times 4}, Q_{l1} \in \mathbb{R}^{4\times 1}, Q_{l2} \in \mathbb{R}^{4\times 4} are the matrices defined in (4.3.2)-(4.3.4).
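The relation (4.3.38) is implicit in x_{l,k+1} and must be solved numerically at every step; a minimal Python sketch of one such midpoint update is given below, with small placeholder matrices standing in for J_l, R_l, G_ll, Q_l1, Q_l2 (the thesis solves the full 4-state model with fsolve in Matlab).

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder 2-state PH data (the elevator model is 4-dimensional in the thesis)
R_l = np.diag([0.1, 0.0])
G_ll = np.array([[1.0], [0.0]])
Q_l1 = np.array([0.0, 0.0])
Q_l2 = np.diag([2.0, 1.0])

def J_l(x):
    """State-modulated skew-symmetric interconnection matrix (placeholder)."""
    return np.array([[0.0, x[0]], [-x[0], 0.0]])

def midpoint_step(x_k, v_k, h):
    """Solve (x_{k+1}-x_k)/h = [J(x_k)-R][Q1 + Q2 (x_{k+1}+x_k)/2] + G v_k for x_{k+1}."""
    def residual(x_next):
        e_mid = Q_l1 + Q_l2 @ ((x_next + x_k) / 2.0)
        return (x_next - x_k) / h - (J_l(x_k) - R_l) @ e_mid - (G_ll @ v_k)
    return fsolve(residual, x_k)

# One simulated step and the corresponding discrete output i_{l,k}
x_k, v_k, h = np.array([1.0, 0.5]), np.array([0.2]), 1e-3
x_next = midpoint_step(x_k, v_k, h)
i_k = G_ll.T @ (Q_l1 + Q_l2 @ ((x_next + x_k) / 2.0))
```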
MPC with the nonlinear discrete-time model (4.3.38): In this work we investigate an MPC that tracks the generated reference profiles. The cost function penalizes the discrepancies between the actual signals and the references. We consider the recursive construction of optimal open-loop state and control sequences:
X_l(t) = \{x_l(t|t), \dots, x_l(t+N_p|t)\}, \qquad U_l(t) = \{v_l(t|t), \dots, v_l(t+N_p-1|t)\},
at instant t over a finite receding horizon N_p, which leads to a feedback control policy through the effective application of the first control action v_l(t|t) as system input:
U_l^*(t) = \arg\min_{U_l(t)} V(X_l, U_l) \qquad (4.3.39)
subject to:
the discrete-time dynamics (4.3.38), \quad \forall k = 0, \dots, N_p-1,
\|v_l(t+k|t)\|_2 \le v_{ref}/\sqrt{2}, \quad \forall k = 0, \dots, N_p-1,
[I_3\;\; 0]\,[Q_{l1} + Q_{l2}\,x_l(t+k|t)] \in G_y, \quad \forall k = 0, \dots, N_p, \qquad (4.3.40)
where G_y is defined in (4.3.24) and the cost function penalizes the discrepancies between the predicted state/input signals and the reference profiles:
V(X_l, U_l) = \|x_l(t+N_p|t) - \bar x_{l,t+N_p}\|_{Q_f} + \sum_{k=0}^{N_p-1}\Big(\|x_l(t+k|t) - \bar x_{l,t+k}\|_{Q_x} + \|v_l(t+k|t) - \bar v_{l,t+k}\|_{Q_u}\Big). \qquad (4.3.41)
In (4.3.41), Q_f, Q_x, Q_u are symmetric weight matrices of appropriate dimensions. Generally, these matrices must be positive definite to guarantee the stability of the tracking error and to adjust the convergence speed.
In the following simulation results for the tracking control, we concentrate on the stability objective. Through simulation we observe that the dynamics of the currents and of the rotor speed are asymptotically stable; therefore, only the rotor angle needs to be stabilized. Consequently, only the elements of the weight matrices Q_f, Q_x corresponding to the rotor angle are chosen positive. Since (4.3.38) is nonlinear, solving the optimization problem (4.3.40) is computationally expensive. Therefore, as a second solution, we consider the linearization of this model and the corresponding MPC formulation.
Linearized discrete-time model: The second proposed model of the electro-mechanical elevator system is obtained by applying the energy-preserving discretization method of Section 3.3 to the linearization of the continuous-time model (4.3.1)-(4.3.4). Let the discrepancies between the actual state, input and output vectors x_l(t) \in \mathbb{R}^4, v_l(t) \in \mathbb{R}^2, i_l(t) \in \mathbb{R}^2 and their references \bar x_l(t), \bar v_l(t), \bar i_l(t) be denoted by:
\tilde x_l(t) = x_l(t) - \bar x_l(t), \qquad \tilde v_l(t) = v_l(t) - \bar v_l(t), \qquad \tilde i_l(t) = i_l(t) - \bar i_l(t). \qquad (4.3.42)
Note that \bar x_l(t), \bar v_l(t) respect the electro-mechanical elevator dynamics (4.3.1)-(4.3.3). Therefore, from (4.3.1)-(4.3.3), (4.3.42) and the approximation x_l(t) \approx \bar x_l(t), we obtain the linearized discrepancy dynamics:
\dot{\tilde x}_l(t) = [J_l(\bar x_l) - R_l - S(\bar x_l)]\,Q_{l2}\,\tilde x_l(t) + G_{ll}\,\tilde v_l(t), \qquad \tilde i_l(t) = G_{ll}^T Q_{l2}\,\tilde x_l(t), \qquad (4.3.43)
where J_l(\bar x_l), R_l, Q_{l2} \in \mathbb{R}^{4\times 4} are defined in (4.3.1)-(4.3.4) and S(\bar x_l) \in \mathbb{R}^{4\times 4} is defined as:
S(\bar x_l) = [\,J_{l,1}(Q_{l1} + Q_{l2}\bar x_l(t)) \;\; \cdots \;\; J_{l,4}(Q_{l1} + Q_{l2}\bar x_l(t))\,], \qquad (4.3.44)
with J_{l,1}, \dots, J_{l,4} \in \mathbb{R}^{4\times 4} defined as:
J_{l,1} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad J_{l,2} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad J_{l,3} = 0, \quad J_{l,4} = 0. \qquad (4.3.45)
By using the energy-preserving discretization method of Chapter 3, we obtain the discrete-time model of the linearized dynamics (4.3.43) as:
\frac{\tilde x_{l,k+1} - \tilde x_{l,k}}{h} = [J_l(\bar x_{l,k}) - R_l - S(\bar x_{l,k})]\,Q_{l2}\,\frac{\tilde x_{l,k+1} + \tilde x_{l,k}}{2} + G_{ll}\,\tilde v_{l,k}. \qquad (4.3.46)
MPC with the linearized discrete-time model (4.3.46): Analogously to (4.3.39)-(4.3.40), the tracking controller solves at each instant t:
\tilde U_l^*(t) = \arg\min_{\tilde U_l(t)} V(\tilde X_l, \tilde U_l) \qquad (4.3.47)
subject to:
the elevator discrete-time dynamics (4.3.46), \quad \forall k = 0, \dots, N-1,
\|\bar v_{l,t+k} + \tilde v_l(t+k|t)\|_2 \le v_{ref}/\sqrt{2}, \quad \forall k = 0, \dots, N-1,
[I_3\;\; 0]\,[Q_{l1} + Q_{l2}(\bar x_{l,t+k} + \tilde x_l(t+k|t))] \in G_y, \quad \forall k = 0, \dots, N, \qquad (4.3.48)
where G_y is defined in (4.3.24) and the cost function penalizes the discrepancies between the predicted state/input signals and the reference profiles:
V(\tilde X_l, \tilde U_l) = \|\tilde x_l(t+N_p|t)\|_{Q_f} + \sum_{k=0}^{N_p-1}\Big(\|\tilde x_l(t+k|t)\|_{Q_x} + \|\tilde v_l(t+k|t)\|_{Q_u}\Big), \qquad (4.3.49)
with Q_f, Q_x, Q_u symmetric matrices of appropriate dimensions.
The MPC formulations in (4.3.39)-(4.3.40) and in (4.3.47)-(4.3.48) include four tuning parameters: the prediction horizon N_p, the state weight matrix Q_x, the final state weight matrix Q_f and the input weight matrix Q_u. They are chosen such that the conditions enumerated in Theorem 4.2.13 are satisfied. Increasing the prediction horizon decreases the value function, which corresponds to smaller state and input discrepancies with respect to the references, but makes the optimization problem more complex. Increasing the input weight matrix makes the controller give more importance to input reference tracking than to state reference tracking. The MPC formulation (4.3.47)-(4.3.49) will be used in the following simulations.
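To illustrate how the linearized tracking MPC (4.3.47)-(4.3.49) can be assembled as a quadratic program, the following sketch uses cvxpy with randomly generated placeholder matrices for the discrete-time discrepancy dynamics; the thesis' actual matrices, references and constraint sets are not reproduced, and the input/state constraints are reduced to a simple norm bound on the input discrepancy.

```python
import numpy as np
import cvxpy as cp

np.random.seed(0)
nx, nu, Np = 4, 2, 10                      # state/input sizes and prediction horizon
A = np.eye(nx) + 0.01 * np.random.randn(nx, nx)   # placeholder discrete discrepancy dynamics
B = 0.01 * np.random.randn(nx, nu)
Qx = np.diag([0.0, 0.0, 0.0, 10.0])        # penalize only the rotor-angle error (k_0 = 10)
Qf = Qx
Qu = np.eye(nu)
v_bound = 5.0                               # placeholder voltage-discrepancy bound

x_tilde0 = np.array([0.5, 1.0, 4.0, -1.0])  # perturbation on the discrepancy state

x = cp.Variable((nx, Np + 1))
u = cp.Variable((nu, Np))
cost = cp.quad_form(x[:, Np], Qf)
constraints = [x[:, 0] == x_tilde0]
for k in range(Np):
    cost += cp.quad_form(x[:, k], Qx) + cp.quad_form(u[:, k], Qu)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], 2) <= v_bound]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
v_apply = u.value[:, 0]     # first input of the optimal sequence (receding-horizon policy)
```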
Simulation results
This section presents the simulation results for the reference profile tracking problem of the electro-mechanical elevator illustrated in Fig. 2. Fig. 4.3.3 illustrates the time evolutions and the discrepancies of the state and input variables for the nominal dynamics with the feedforward control. Since the actual control signals are equal to the reference ones, v_l(t) = \bar v_l(t), the input discrepancies are zero, \tilde v_l(t) = 0. However, the discrepancies of the state variables differ from zero for two reasons. The first is that the time discretizations of the electro-mechanical dynamics and of the reference profiles differ. The second is related to the use of the soft constraint instead of the equality constraint (4.3.34). Fig. 4.3.4 describes the time evolution and the tracking errors of the state and input variables for the perturbation-affected dynamics with feedforward control. The perturbation is defined by assigning random state values near the state reference at a chosen time instant; in the following simulations, the random discrepancies are chosen as \delta x = [0.5\;\; 1\;\; 4\;\; -1]^T. Physically, this perturbation represents a very strong interaction between the system and the environment over a short duration (much shorter than the time step h). By modifying the state variables, it also modifies the system stored energy H_l(x_l). For simplicity, we do not consider here uncertainties on the control variables, on the feedback (output) signals or on the model parameters. From the figures, we note that the dynamics of the two currents and of the rotor speed are asymptotically stable around the corresponding reference profiles, as indicated by the convergence to zero of the corresponding discrepancies. However, the rotor angle error is constant; this causes an angle error at the end of the elevator travel, i.e., the cabin does not stop at the required position.
Remark 4.3.2. By considering the dynamics of the two currents and of the rotor speed in the linearized model (4.3.43), we determine the corresponding eigenvalues. Though their real parts, illustrated in Fig. 4.3.5, are negative, we cannot conclude that the nonlinear discrepancy dynamics are asymptotically stable [Khalil, 2002]. The constant angle error observed above motivates us to concentrate on rotor angle tracking in the MPC formulation. Note that when the rotor angle error converges to zero, the rotor speed increases and may exceed the speed limit. This problem is especially important when the perturbation affects the system at the instant t = 15 s, when the rotor speed and the voltages are closest to the boundary (illustrated in Fig. 4.3.4). Based on the simulations of the open-loop system, we consider the following simplifications for the tracking control problem:
• only the discrepancies of the rotor angle and of the stator voltages are penalized in the MPC cost, i.e., Q_x = Q_f = diag\{0, 0, 0, k_0\} \in \mathbb{R}^{4\times 4} and Q_u = I_2 \in \mathbb{R}^{2\times 2}, with k_0 > 0;
• only the perturbation on the rotor speed is considered;
• the perturbation-affected state variables always respect the constraints (4.3.6);
• one-step MPC is considered, i.e., N p = 1.
The controller parameters are enumerated in Table 4.3.4: the state weight matrix Q_x = diag\{0, 0, 0, k_0\}, the final state weight matrix Q_f = diag\{0, 0, 0, k_0\} and the input weight matrix Q_u = I_2. The numerical optimization problem is solved using Yalmip [Löfberg, 2004] and IPOPT [START_REF] Biegler | Large-scale nonlinear programming using ipopt: An integrating framework for enterprise-wide dynamic optimization[END_REF] in Matlab 2013a. Figures 4.3.6-4.3.7 describe the time evolution and the discrepancies of the state and input variables for the perturbation-affected dynamics with the MPC formulated in (4.3.39)-(4.3.40) and in (4.3.47)-(4.3.48). We observe that the proposed MPC tuning parameters guarantee the asymptotic stability of the electro-mechanical elevator system around the generated reference profiles. Moreover, increasing the weight parameter k_0 increases the oscillations of the state discrepancies but does not reduce the convergence time.
Conclusions
In this chapter we first recalled the method combining differential flatness, B-splines-based parameterization and tracking MPC for approximating the constrained optimal control problem. A special case of Port-Controlled Hamiltonian systems was presented in which the output variables are also flat outputs. Using the properties of B-splines, a sufficient condition was given for the satisfaction at all times of the continuous-time quadratic constraints. Moreover, the exploitation of the Port-Controlled Hamiltonian formalism for designing the tracking MPC was discussed together with the relevant references. The presented control method was then adapted to the case of the electro-mechanical elevator system; the modifications reside in the "flat output-like" parameterization variables and in the relaxation of the equality constraint through the soft-constraint technique. Simulation results for the reference profile generation illustrate the efficiency of the method; they are compared with the method using the trapezoidal reference speed profile and the Maximum Torque Per Ampere technique. From the simulations with feedforward control, we observe that the dynamics of the stator currents and of the rotor speed are asymptotically stable, but the rotor angle dynamics are not. A tracking MPC is therefore used to stabilize the rotor angle error, as validated by the numerical simulations.
More details on differential flatness used for electro-mechanical systems can be found in [Delaleau andM.Stankovi, 2004, Chen et al., 2013]. B-splines formulations can be found in [Suryawan, 2011]. Some applications of Tracking Model Predictive Control for regulating the Permanent Magnet Synchronous Machine are introduced in [START_REF] Bolognani | Design and implementation of Model Predictive Control for electrical motor drives[END_REF], Rodriguez et al., 2013].
In future work, we may take into account the quadratic equality constraint on the control points instead of the soft-constraint technique in the reference profile generation. Also, the convergence time of the closed-loop discrepancy dynamics may be reduced by including the discrepancies of the two currents and of the rotor speed in the cost.
Chapter 5
Power balancing through constrained optimization for the DC microgrid
Introduction
As presented in Sections 1.3 and 2.6.3, the objective of the high level control of the DC microgrid system is the electricity cost minimization. This is formulated as a constrained optimization problem which takes into account the slow time scale dynamics of the microgrid, constraints over the distributed energy storage systems, power predictions of the loads, the distributed energy generation system and the electricity price of the electrical utility grid [START_REF] Lifshitz | Optimal control of a capacitor-type energy storage system[END_REF],Prodan and Zio, 2014,Desdouits et al., 2015,Parisio et al., 2016, dos Santos et al., 2016, Touretzky and Baldea, 2016]. The slow dynamics correspond to the energy storage unit (e.g., battery, thermal system). In the case where the load demand cannot be modified, all the available renewable energy is used and the power balance is guaranteed, the microgrid energy management problem reduces to an electrical storage scheduling problem.
Usually, the excess power of a distributed system is valorized by selling it to the external grid, or it can be stored in an electrical storage system. Therefore, storage scheduling is an important issue, given that the storage capacity (i.e., power and energy) is limited. [Paire et al., 2010, Xu and[START_REF] Xu | [END_REF] proposed a reactive method (without prediction) that uses logic rules to switch the system between different operation modes. To reduce the required computation and increase the robustness, this method is formulated in [START_REF] Lagorse | A multi-agent system for energy management of distributed power sources[END_REF] through the use of the multi-agent systems paradigm. However, this approach requires a high storage capacity and is not efficient since, in some cases, the battery may charge from the external grid when the electricity price is high. An off-line optimization-based control approach which takes into account the system dynamics, constraints and power predictions is proposed in [START_REF] Lifshitz | Optimal control of a capacitor-type energy storage system[END_REF].
Furthermore, to improve the robustness, some works concentrate on its on-line version, i.e., MPC (Model Predictive Control) (see, e.g., [START_REF] Dos | [END_REF]). Note that there are two types of MPC: tracking MPC [Maciejowski, 2002,Rawlings and[START_REF] Rawlings | [END_REF] and economic MPC [Grüne, 2013,Ellis et al., 2017] (see also Section 4.2.3 of Chapter 4). Tracking MPC aims at stabilizing the system around given references by penalizing in the cost function the discrepancies between the controlled variables and their references; for effectiveness, the chosen cost functions are usually convex and minimal on the corresponding reference profiles. In economic MPC, the cost functions reflect profit criteria which are generally nonlinear and non-convex, and the controller is used to generate references for lower-level regulators. Thus, the MPC for minimizing the electricity cost of microgrid systems can be called economic MPC [START_REF] Touretzky | A hierarchical scheduling and control strategy for thermal energystorage systems[END_REF].
The authors in [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF],Desdouits et al., 2015,Lifshitz and Weiss, 2015,Parisio et al., 2016,dos Santos et al., 2016] use simple models for the battery and/or the transmission lines which do not entirely capture the real dynamics. Some works use a first-order model for the electrical storage unit [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF], Desdouits et al., 2015, Parisio et al., 2016]. In fact, the electrical storage unit (e.g., a battery) may include several sub-storage parts connected by resistive elements, and only some of these parts can directly supply energy. On the slow time scale, the internal charge distribution between these parts cannot be ignored; thus, a first-order model for the electrical storage unit may give incorrect information about the actually available charge. Also, in these works, the transmission line network dynamics are simply described by a power balance relation. This is not realistic for DC microgrids where the components are placed far from each other; hence, the resistance of the transmission lines cannot be neglected.
In general, the microgrid dynamics have at least two energetic properties which are useful for studying the energy cost optimization: the energy balance and the underlying power-preserving structure. [Lefort et al., 2013,Touretzky andBaldea, 2016] do not explicitly take these properties into account when developing the model of the microgrid system; as a consequence, they may be lost when the energy cost optimization is studied through model discretization and reduction. We therefore delineate the following remarks:
• The energy cost optimization is a continuous-time optimization problem whose solution gives the time profile of the control variables (see Appendix C.2). Usually, it is difficult to find its exact solution. Therefore, we can discretize the optimization problem to obtain a finite-dimensional optimization problem which is easier to solve (details on finite-dimensional optimization problems are given in Appendix C.1). Moreover, this discretization requires a discrete-time model of the microgrid dynamics.
• The microgrid dynamics has different time scales. To reduce the computation complexity, the energy cost optimization usually uses the slow dynamics obtained by reducing the fast dynamics of the global model using singular perturbation approach [START_REF] Kokotović | Singular perturbations and order reduction in control theory-an overview[END_REF].
The present chapter proposes a discrete-time economic MPC for power balancing in a continuous DC microgrid. More precisely, the contributions of this work are the following:
• A PH formulation which completely describes the power interconnection of the DC microgrid components is developed. Moreover, PH representation on graphs (see also [van der Schaft and Maschke, 2013]) allows us to explicitly preserve the physical relations of current and voltage in an electrical circuit.
• A discrete-time model satisfying the energy conservation property is derived.
• A centralized economic MPC design for battery scheduling is developed, taking into account the global discrete-time model of the system, the constraints and the electricity cost minimization.
• Extensive simulation results are provided for different scenarios which validate the proposed constrained optimization approach.
This chapter is organized as follows. Section 5.2 details some basic notions on PH systems on graphs and on the continuous-time optimization-based control formulation. Section 5.3 introduces the DC microgrid model and the constraints. Next, Section 5.4 formulates the on-line constrained optimization problem for reliable battery scheduling. Section 5.5 details the simulation results under different scenarios. Finally, Section 5.6 draws the conclusions and presents the future work.
Port-Hamiltonian systems on graphs
This section briefly introduces some basic definitions and notions related to PH systems on graphs, which will be further used for modeling the DC network (for more details the reader is referred to [van der Schaft and Maschke, 2013]). Note that the PH formalism for system dynamics allows to explicitly describe the power-preserving interconnection within the physical system (see also Section 2.2). However, its general representations (e.g., hybrid input-output, constrained input-output, kernel, image [van der Schaft and Jeltsema, 2014]) do not preserve the topology of the power network which is achieved using the representation of PH system on graphs [START_REF] Fiaz | A port-Hamiltonian approach to power network modeling and analysis[END_REF]. This formalism is obtained by describing the system power-preserving interconnection (i.e., the Dirac structure of the PH system defined in Section 2.2) using a directed graph. Definition 5.2.1. [Directed (closed) graph, [van der Schaft and Maschke, 2013]] A directed graph G = (V, E) consists of a finite set V of N v vertices, a finite set E of N e directed edges, together with a mapping from E to the set of ordered pairs of V, where no self-loops are allowed. The incidence matrix B ∈ R Nv×Ne describes the map from E to V such that:
B_{ij} = \begin{cases} 1, & \text{if node } i \text{ is a head vertex of edge } j, \\ -1, & \text{if node } i \text{ is an end vertex of edge } j, \\ 0, & \text{otherwise}. \end{cases} \qquad (5.2.1)
Note that the incidence matrix always satisfy the following property [van der Schaft and Maschke, 2013]:
1 T Nv B = 0 T Ne , (5.2.2)
where N v , N e ∈ N are the numbers of vertices and edges in the graph. This is illustrated in the following example.
Example 5.2.2. Fig. 5.2.1 illustrates a directed graph with 4 vertices and 6 edges, whose incidence matrix is:
B = \begin{bmatrix} 1 & 0 & 0 & 1 & 1 & 1 \\ -1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 & -1 & -1 \end{bmatrix}. \qquad (5.2.3)
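A small Python sketch, assuming the 4-node/6-edge example above, that builds the incidence matrix (5.2.3) and checks the properties used later: each column sums to zero (5.2.2), and a pair of edge current/voltage vectors generated as in (5.2.4) satisfies Kirchhoff's current law and is power-orthogonal.

```python
import numpy as np
from scipy.linalg import null_space

# Incidence matrix of the 4-vertex, 6-edge example graph, cf. (5.2.3)
B = np.array([[ 1,  0,  0,  1,  1,  1],
              [-1, -1,  0,  0,  0,  0],
              [ 0,  0,  1, -1,  0,  0],
              [ 0,  1, -1,  0, -1, -1]], dtype=float)

# Property (5.2.2): the columns of an incidence matrix always sum to zero
assert np.allclose(np.ones(4) @ B, 0.0)

def in_kds(i, v_p, tol=1e-9):
    """Check whether (i, v) with v = -B^T v_p belongs to the KDS (5.2.4)."""
    v = -B.T @ v_p
    kcl_ok = np.allclose(B @ i, 0.0, atol=tol)        # Kirchhoff current law
    power_ok = abs(i @ v) < tol                       # i^T v = 0 on the KDS (Tellegen)
    return kcl_ok and power_ok, v

# Example: pick edge currents in the kernel of B and arbitrary node potentials
i = null_space(B)[:, 0]
ok, v = in_kds(i, v_p=np.array([0.0, 1.5, -0.5, 2.0]))
print(ok, v)
```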
Next, the graph notion is used to define the Kirchhoff-Dirac structure (KDS) of a DC circuit with N_v nodes and N_e edges. The associated graph G is chosen such that each vertex corresponds to a node of the DC circuit, each edge corresponds to an electrical element, and the edge directions are the positive directions of the element currents. This graph G is characterized by an incidence matrix B as in Definition 5.2.1, which is used to describe the admissible node potentials, edge currents and edge voltages of the circuit, i.e., the KDS introduced in the following definition.
Definition 5.2.3. [KDS on graphs, [van der Schaft and Maschke, 2013]] The KDS on graphs is defined as:
D(G) = \big\{(i, v) \in \mathbb{R}^{N_e} \times \mathbb{R}^{N_e} \;\big|\; B i = 0, \; \exists v_p \in \mathbb{R}^{N_v} \text{ such that } v = -B^T v_p\big\}, \qquad (5.2.4)
where B is the incidence matrix of the electrical circuit graph G as defined in (5.2.1), v_p \in \mathbb{R}^{N_v} denotes the node potentials, v \in \mathbb{R}^{N_e} the edge voltages and i \in \mathbb{R}^{N_e} the edge currents.
Example 5.2.4. Fig. 5.2.2 illustrates the Bond Graph representation defined in Section 2.2.1 and the directed graph defined in Definition 5.2.3 for a DC circuit. The circuit includes 4 nodes and 6 elements. Thus, the corresponding directed graph, G, includes 4 vertices and 6 edges which is given in Fig. 5.2.1. Therefore, this graph is characterized by the incidence matrix in (5.2.3). Moreover, the node potential vector, v p (t) ∈ R 4 , the edge current vector, i(t) ∈ R 6 , and the edge voltage vector, v(t) ∈ R 6 , are described as:
v_p(t) = [\,v_1(t)\;\; v_2(t)\;\; v_3(t)\;\; v_4(t)\,]^T, \qquad v(t) = [\,v_{C1}(t)\;\; v_{R1}(t)\;\; v_I(t)\;\; v_E(t)\;\; v_{C2}(t)\;\; v_{R2}(t)\,]^T, \qquad i(t) = [\,i_{C1}(t)\;\; i_{R1}(t)\;\; i_I(t)\;\; i_E(t)\;\; i_{C2}(t)\;\; i_{R2}(t)\,]^T. \qquad (5.2.5)
Similar to Definition 2.2.5, a PH system is constructed by connecting the KDS with the energy storage, the energy-dissipative elements and the environment through corresponding ports. Thus, the N_e edge ports (i, v) of Definition 5.2.3 are partitioned into N_S energy-storage ports (i_S, v_S), N_R resistive ports (i_R, v_R) and N_E external ports (i_E, v_E).
Definition 5.2.5. [PH system on graphs, [van der Schaft and Maschke, 2013]] Consider a state space X with its tangent space T_xX, co-tangent space T_x^*X, and a Hamiltonian H : X \to \mathbb{R} defining the energy storage. A PH system with KDS D(G) on X is defined by a Dirac structure D(G) \subset T_xX \times T_x^*X \times \mathbb{R}^{N_R} \times \mathbb{R}^{N_R} \times \mathbb{R}^{N_E} \times \mathbb{R}^{N_E} having the energy-storing port (i_S, v_S) \in T_xX \times T_x^*X, a resistive structure:
R = \big\{(i_R, v_R) \in \mathbb{R}^{N_R} \times \mathbb{R}^{N_R} \;\big|\; r(i_R, v_R) = 0, \; i_R^T v_R \le 0\big\},
and the external ports (i_E, v_E) \in \mathbb{R}^{N_E} \times \mathbb{R}^{N_E}. Generally, the PH dynamics are described by:
(-\dot x(t), \nabla H(x), i_R(t), v_R(t), i_E(t), v_E(t)) \in D(G).
Slow time scale model of the DC microgrid
This section derives the slow time scale model of the DC-microgrid elevator system illustrated in Fig. 2.1.1. It corresponds to the battery dynamics, the renewable power and the electricity price, within a range of minutes to hours [START_REF] Parisio | Stochastic model predictive control for economic/environmental operation management of microgrids: an experimental case study[END_REF]. The fast time scale dynamics, corresponding to the transmission lines (x_t(t) \in \mathbb{R}^5), the converters (x_{cs}(t), x_{cb}(t) \in \mathbb{R}^4) and the supercapacitor (x_s(t) \in \mathbb{R}), can be eliminated thanks to the singular perturbation argument presented in Appendix F. According to this argument, the slow dynamics are obtained by considering the global dynamics with the fast dynamics at steady state, described by the following constraints:
\dot x_t(t) = 0, \quad (5.3.1a) \qquad \dot x_{cs}(t) = 0, \quad (5.3.1b) \qquad \dot x_{cb}(t) = 0, \quad (5.3.1c) \qquad \dot x_s(t) = 0, \quad (5.3.1d)
where the details of the state vectors x_{cs}(t), x_{cb}(t), x_t(t), x_s(t) are given in (2.3.1), (2.3.6), (2.3.9) and (2.3.29).
The state vector of the electro-mechanical elevator, x_l(t) \in \mathbb{R}^4, is also steady in the slow time scale. However, when the elevator stops, passengers enter or leave the cabin, which modifies the stored energy of the electro-mechanical elevator. In the slow time scale this subsystem would therefore be modeled as a combination of its dynamics and of a power source; since the electro-mechanical elevator dynamics are assumed to be at steady state in this time scale, the subsystem reduces to a power source.
The constraint (5.3.1) for the microgrid dynamics (2.5.7) and the mentioned simplification for the electromechanical elevator will be used to derive the component models in the slow time scale.
Components models and constraints
This subsection presents the slow time scale model of the microgrid components including the external grid, the battery unit, the supercapacitor unit, the renewable source, the electro-mechanical elevator and the transmission lines.
External grid: The external grid is modelled as a controllable current source i e (t) ∈ R (see also Fig. 2.3.4) satisfying the constraint (2.6.13): i e,min ≤ i e (t) ≤ i e,max , (5.3.2)
with i e,min , i e,max ∈ R.
Renewable source: As in Section 2.3.2 the solar panel unit is modelled as a power source (see also Fig. 2.3.3) characterized by the current, i r (t) ∈ R, and the voltage, v r (t) ∈ R, which satisfy the constraint (2.3.8):
i_r(t)\,v_r(t) = -P_r(t) < 0. \qquad (5.3.3)
Load unit: The load component of the DC microgrid represents a combination of the electro-mechanical elevator and an AC/DC converter. During an elevator travel (i.e., a few seconds), it is modelled by the dynamics of the mechanical momentum p_l(t) \in \mathbb{R}, of the rotor angle \theta_l(t) \in \mathbb{R} and of the motor magnetic flux \Phi_l(t) \in \mathbb{R}^2, described in (4.3.1). However, within a range of minutes to hours, the stored energy of the electro-mechanical elevator is modified when passengers enter or leave the cabin, as presented in Section 2.6.1. Moreover, the electro-mechanical elevator dynamics are assumed to be at steady state in the slow time scale. The required load power depends on the profiles of the passenger mass m_c, of the required arrival floor \theta, of the travel start instant t_{in} and of the stop instant t_{fi} (see also Eq. (2.6.1)). These vectors vary arbitrarily but obey statistical rules; they determine the load average power in the slow time scale, \bar P_l(t), which is nearly the same every day. Thus, this power is determined from past recorded data and is used as the predicted power demand.
Opposing to the electro-mechanical elevator model of Chapter 4, we use here a simpler model, namely a power source P_l(t) \in \mathbb{R} (see also Fig. 5.3.1) under the current, i_l(t) \in \mathbb{R}, and voltage, v_l(t) \in \mathbb{R}, constraint:
i_l(t)\,v_l(t) = P_l(t). \qquad (5.3.4)
Note that, in the nominal case and in the slow time scale, the load power P_l(t) equals the reference load power \bar P_l(t).
Battery: Since the battery dynamics correspond to the slow time scale [START_REF] Parisio | Stochastic model predictive control for economic/environmental operation management of microgrids: an experimental case study[END_REF], the battery model is described by the electrical circuit in Fig. 2, with dynamics:
\begin{bmatrix} -\dot x_b(t) \\ v_{bR}(t) \\ i_{bb}(t) \end{bmatrix} = \begin{bmatrix} 0 & -G_{bSR} & 0 \\ G_{bSR}^T & 0 & G_{bRE} \\ 0 & -G_{bRE}^T & 0 \end{bmatrix} \begin{bmatrix} \nabla H_b(x_b) \\ i_{bR}(t) \\ v_{bb}(t) \end{bmatrix}, \qquad (5.3.5a)
v_{bR}(t) = -R_{bR}\, i_{bR}(t). \qquad (5.3.5b)
Transmission lines: The transmission-line (DC bus) resistors satisfy:
i_{tR}(t) + R_{tR}^{-1}\, v_{tR}(t) = 0, \qquad (5.3.15)
where the resistor current vector i_{tR}(t) \in \mathbb{R}^4, the voltage vector v_{tR}(t) \in \mathbb{R}^4 and the resistive matrix R_{tR} \in \mathbb{R}^{4\times 4} are defined by:
i_{tR}(t) = [\,i_{t,bl}(t)\;\; i_{t,be}(t)\;\; i_{t,er}(t)\;\; i_{b,rl}(t)\,]^T, \qquad v_{tR}(t) = [\,v_{t,bl}(t)\;\; v_{t,be}(t)\;\; v_{t,er}(t)\;\; v_{b,rl}(t)\,]^T, \qquad R_{tR} = \mathrm{diag}\{R_{bl}, R_{be}, R_{er}, R_{lr}\}. \qquad (5.3.16)
Next, using the definition of the KDS in (5.2.4), we present the interconnections of the DC microgrid network through a closed graph.
DC microgrid network
The microgrid network includes all the elements enumerated above: the battery charges, the battery resistors, the load, the renewable source, the external grid, the DC/DC converter and the DC bus resistors. For a better grasp of the modelling approach adopted in this work, the multi-source elevator system is equivalently represented by the electrical DC circuit in Fig. 5.3.3, where the circuit node 1 is the common ground.
Using Definition 5.2.3, we represent the microgrid electrical circuit in Fig. 5.3.3 by the directed graph in Fig. 5.3.4. The edge current vector, i(t) \in \mathbb{R}^{13}, and the edge voltage vector, v(t) \in \mathbb{R}^{13}, are denoted by:
i(t) = [\,-\dot x_b^T(t)\;\; -i_E^T(t)\;\; i_c^T(t)\;\; i_{bR}^T(t)\;\; i_{tR}^T(t)\,]^T, \qquad v(t) = [\,\nabla H_b^T(t)\;\; v_E^T(t)\;\; v_c^T(t)\;\; v_{bR}^T(t)\;\; v_{tR}^T(t)\,]^T, \qquad (5.3.17)
where
i_E(t) = [\,i_l(t)\;\; i_e(t)\;\; i_r(t)\,]^T \in \mathbb{R}^3, \qquad v_E(t) = [\,v_l(t)\;\; v_e(t)\;\; v_r(t)\,]^T \in \mathbb{R}^3 \qquad (5.3.18)
gather the currents and voltages of the load, the external grid and the renewable source, respectively, and
i_c(t) = [\,i_{bb}(t)\;\; i_b(t)\,]^T \in \mathbb{R}^2, \qquad v_c(t) = [\,v_{bb}(t)\;\; v_b(t)\,]^T \in \mathbb{R}^2 \qquad (5.3.19)
gather the DC/DC converter current and voltage variables at its two sides. Let v_p(t) \in \mathbb{R}^8 gather the node potentials of the circuit. As illustrated in Fig. 5.3.3, we consider node 1 as the "ground" node of the DC microgrid; hence its potential is set to zero:
v_{p,1}(t) = 0. \qquad (5.3.20)
The KDS of the microgrid circuit then reads:
v(t) = -B^T v_p(t), \qquad 0 = B\,i(t), \qquad (5.3.21)
where B \in \mathbb{R}^{8\times 13} is the incidence matrix defined in (5.2.1):
B = \begin{bmatrix}
-1 & -1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 \\
 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & -1 \\
 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\
 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 \\
 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 1 & 0 & 0
\end{bmatrix}. \qquad (5.3.22)
Alternative description for the microgrid network: By exploiting the structure of the microgrid network, we can simplify some of the algebraic equations in (5.3.21)-(5.3.22) and eliminate the node potential vector v_p(t). Let the edge current and voltage vectors i(t) and v(t) defined in (5.3.17) be partitioned into the vectors i_1(t) \in \mathbb{R}^7, i_2(t) \in \mathbb{R}^6, v_1(t) \in \mathbb{R}^7 and v_2(t) \in \mathbb{R}^6 such that:
i_1(t) = [\,-\dot x_b^T(t)\;\; i_E^T(t)\;\; i_c^T(t)\,]^T, \qquad v_1(t) = [\,\nabla H_b^T(t)\;\; v_E^T(t)\;\; v_c^T(t)\,]^T,
i_2(t) = [\,i_{bR}^T(t)\;\; i_{tR}^T(t)\,]^T, \qquad v_2(t) = [\,v_{bR}^T(t)\;\; v_{tR}^T(t)\,]^T. \qquad (5.3.23)
Note that i_1(t), v_1(t) describe the currents and voltages of the battery capacitors, the energy sources and the converter, while i_2(t), v_2(t) describe the currents and voltages of the circuit resistors. From the details of the microgrid incidence matrix B in (5.3.22), we note that it can be rewritten as:
B = \begin{bmatrix} B_{11} & 0 \\ B_{21} & B_{22} \end{bmatrix}, \qquad (5.3.24)
where B_{11} \in \mathbb{R}^{1\times 7}, B_{22} \in \mathbb{R}^{7\times 6}, and B_{21} \in \mathbb{R}^{7\times 7} is invertible. From (5.3.20), (5.3.21), (5.3.23) and (5.3.24), the microgrid network described by (5.3.21)-(5.3.22) can be rewritten as:
\begin{bmatrix} i_1(t) \\ v_2(t) \end{bmatrix} = \begin{bmatrix} 0 & -B_{21}^{-1}B_{22} \\ \big(B_{21}^{-1}B_{22}\big)^T & 0 \end{bmatrix} \begin{bmatrix} v_1(t) \\ i_2(t) \end{bmatrix}, \qquad (5.3.25a)
B_{11}\, i_1(t) = 0, \qquad (5.3.25b)
where i_1(t), v_1(t), i_2(t), v_2(t) are defined in (5.3.23). Note that equation (5.3.25a) implies the power-preserving property of the microgrid network.
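The reduction (5.3.24)-(5.3.25) can be checked numerically: given any incidence matrix partitioned as in (5.3.24) with invertible B_21, the map below returns (i_1, v_2) from (v_1, i_2) and verifies that the interconnection is power preserving. The small matrices used here are random placeholders, not the actual microgrid blocks of (5.3.22).

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder partition sizes (the microgrid has B11: 1x7, B21: 7x7, B22: 7x6)
n1, n2 = 4, 3
B21 = rng.normal(size=(n1, n1)) + n1 * np.eye(n1)   # invertible by construction
B22 = rng.normal(size=(n1, n2))

def dirac_map(v1, i2):
    """(i_1, v_2) from (v_1, i_2) according to (5.3.25a)."""
    M = np.linalg.solve(B21, B22)        # B_21^{-1} B_22
    return -M @ i2, M.T @ v1

v1 = rng.normal(size=n1)
i2 = rng.normal(size=n2)
i1, v2 = dirac_map(v1, i2)

# Power preservation: i_1^T v_1 + i_2^T v_2 = 0 on the interconnection
assert abs(i1 @ v1 + i2 @ v2) < 1e-10
```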
Remark 5.3.1. The Bond Graph (see also Section 2.2.1) of the microgrid circuit in Fig. 5.3.3 is illustrated in Fig. 5.3.5. Although this graph explicitly describes the power flows within the microgrid, the derived Dirac structure representation does not fully capture the topology of the electrical circuit, which is encoded in the incidence matrix B [START_REF] Fiaz | A port-Hamiltonian approach to power network modeling and analysis[END_REF].
Next, we introduce the microgrid dynamics which characterizes the centralized system.
subject to the constraints:
0 = -B_{11} B_{21}^{-1} B_{22}\, A_2(d) \begin{bmatrix} \nabla H_b(x_b) \\ v_E(t) \end{bmatrix}, \qquad P_l(t) = v_l(t)\, i_l(t), \qquad P_r(t) = -v_r(t)\, i_r(t), \qquad (5.3.29)
where we define the matrices A_0(d) \in \mathbb{R}^{6\times 1}, A_1(d) \in \mathbb{R}^{6\times 6}, A_2(d) \in \mathbb{R}^{6\times 5}, R \in \mathbb{R}^{6\times 6} by:
A_0(d) = R^{-1}\,[\,B_3^T d(t)\;\; 1\,]^T, \qquad (5.3.30a)
A_1(d) = A_0(d)\big(A_0^T(d)\, R^{-1} A_0(d)\big)^{-1} A_0(d)^T, \qquad (5.3.30b)
A_2(d) = \big(R^{-1} - A_1(d)\big)\,[\,B_1^T\;\; B_2^T\,], \qquad (5.3.30c)
R = \begin{bmatrix} R_{bR} & 0 \\ 0 & R_{tR} \end{bmatrix}. \qquad (5.3.30d)
Here x_b(t) \in \mathbb{R}^2 is the state vector describing the battery charges, H_b(x_b) is the Hamiltonian describing the energy stored in the battery (see also (2.3.21)), and i_E(t), v_E(t) \in \mathbb{R}^3 describe the current and voltage variables of the load, the external grid and the renewable source (see also (5.3.18)). Furthermore, in (5.3.28) we denote by L(d) \in \mathbb{R}^{5\times 5} a symmetric full-rank matrix (similar to the weighted Laplacian matrix of a resistor network [van der Schaft, 2010]) depending on the duty cycle d as in (5.3.10):
L(d) = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} A_2(d). \qquad (5.3.31)
Using (5.3.18)-(5.3.19), (5.3.23), (5.3.27) and (5.3.30), the battery charge/discharge current constraint in (5.3.30) is rewritten as a constraint on the state x_b(t) \in \mathbb{R}^2, the current variables i_E(t) \in \mathbb{R}^3, the voltage variables v_E(t) \in \mathbb{R}^3 and the duty cycle d(t) \in \mathbb{R}:
i_{min} \le [\,0\;\;1\;\;0\;\;0\;\;0\;\;0\,]\, A_2(d) \begin{bmatrix} \nabla H_b(x_b) \\ v_E(t) \end{bmatrix} \le i_{max}. \qquad (5.3.32)
Reference profiles: The load, the external grid and the renewable source are characterized by reference profiles as presented in Section 2.6.1. Taking into account the available statistical measurements of electricity consumption, we consider the reference power of the consumer, denoted by \bar P_l(t). Next, we denote by \bar P_r(t) the power profile of the renewable source estimated from meteorological data. Lastly, using historical data from the electricity market, we denote the predicted electricity price profile by price(t); we also assume that the selling and buying prices are the same. These profiles, together with the centralized model and constraints of the microgrid system, are used in the forthcoming section to formulate the global optimal power balancing problem.
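A sketch of how the duty-cycle-dependent matrices (5.3.30)-(5.3.31) could be assembled numerically, using small random placeholder blocks for B_1, B_2, B_3 and the resistance matrix R (the real blocks come from the incidence matrix (5.3.22) and the resistances R_bR, R_tR). It also checks that L(d) is symmetric, as stated above.

```python
import numpy as np

rng = np.random.default_rng(2)
R = np.diag(rng.uniform(0.1, 1.0, size=6))     # resistive matrix, cf. (5.3.30d), placeholder
B1 = rng.normal(size=(2, 6))                   # placeholder blocks of the incidence matrix
B2 = rng.normal(size=(3, 6))
B3 = rng.normal(size=(5,))                     # placeholder block entering A_0(d)

def microgrid_matrices(d):
    """A_0(d), A_1(d), A_2(d) and L(d) following (5.3.30)-(5.3.31)."""
    Rinv = np.linalg.inv(R)
    A0 = (Rinv @ np.append(B3 * d, 1.0)).reshape(-1, 1)   # 6x1
    A1 = A0 @ np.linalg.inv(A0.T @ Rinv @ A0) @ A0.T       # 6x6
    A2 = (Rinv - A1) @ np.vstack([B1, B2]).T               # 6x5
    L = np.vstack([B1, B2]) @ A2                           # 5x5
    return A0, A1, A2, L

A0, A1, A2, L = microgrid_matrices(d=0.6)
assert np.allclose(L, L.T)   # L(d) is symmetric, like a weighted Laplacian
```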
Optimization-based control for the DC microgrid
The main goal of this work is to provide a control strategy for the DC microgrid system and, in particular, for the storage scheduling. The previously developed dynamics, constraints and profiles will be used in a discrete-time constrained optimization problem. Hence, we first introduce the global discrete-time model of the DC microgrid, which preserves the energy conservation properties of the continuous-time model formulated in (5.3.28)-(5.3.31).
Energy-preserving discrete-time model
In general, when discretizing a continuous-time system, the energy conservation property should be taken into account. For a nonlinear PH system as in (5.3.26), this property can be ensured by preserving the KDS (5.3.26a)-(5.3.26c) and the energy flowing through the storage ports (see also Chapter 3, Section 3.2). Let (.)(j) be the discrete value of variable (.)(t) at time instant t = t_0 + (j-1)h_s, with time step h_s and initial time instant t_0. Using Definition 3.2.1 of the discrete-time Dirac structure, we obtain the discrete-time interconnection of the microgrid given in (5.3.26a)-(5.3.26c):
\begin{bmatrix} i_1(j) \\ v_2(j) \end{bmatrix} = \begin{bmatrix} 0 & -B_{21}^{-1}B_{22} \\ \big(B_{21}^{-1}B_{22}\big)^T & 0 \end{bmatrix} \begin{bmatrix} v_1(j) \\ i_2(j) \end{bmatrix}, \qquad (5.4.1a)
\begin{bmatrix} v_{bb}(j) \\ i_b(j) \end{bmatrix} = \begin{bmatrix} 0 & -d(j) \\ d(j) & 0 \end{bmatrix} \begin{bmatrix} i_{bb}(j) \\ v_b(j) \end{bmatrix}, \qquad (5.4.1b)
B_{11}\, i_1(j) = 0. \qquad (5.4.1c)
Using Definition 3.2.8 of the discretization of the time-varying power source, we derive the discrete-time models of the load and of the renewable source:
i_l(j)\, v_l(j) = P_l(j), \qquad (5.4.2a)
i_r(j)\, v_r(j) = -P_r(j), \qquad (5.4.2b)
where the discrete-time power profiles P_l(j), P_r(j) are the average values of the reference continuous-time power profiles \bar P_l(t), \bar P_r(t), as in (3.2.27). Using Definition 3.2.7 of the discrete-time static element for the microgrid resistors described in (5.3.26f)-(5.3.26g), we obtain the discrete-time Ohm's law:
v_{bR}(j) = -R_{bR}\, i_{bR}(j), \qquad (5.4.3a)
v_{tR}(j) = -R_{tR}\, i_{tR}(j), \qquad (5.4.3b)
where the resistive matrices R_{bR} \in \mathbb{R}^{2\times 2}, R_{tR} \in \mathbb{R}^{4\times 4} are defined in (2.3.24) and (5.3.16).
Note that the discretizations of the current and voltage vectors i_1(t), i_2(t), v_1(t), v_2(t) defined in (5.3.23) imply that:
i_1(j) = [\,f_S^T(j)\;\; i_E^T(j)\;\; i_c^T(j)\,]^T, \qquad v_1(j) = [\,e_S^T(j)\;\; v_E^T(j)\;\; v_c^T(j)\,]^T,
i_2(j) = [\,i_{bR}^T(j)\;\; i_{tR}^T(j)\,]^T, \qquad v_2(j) = [\,v_{bR}^T(j)\;\; v_{tR}^T(j)\,]^T, \qquad (5.4.4)
where f_S(j), e_S(j) \in \mathbb{R}^2 are the discrete counterparts of the charge time derivative \dot x_b and of the Hamiltonian gradient \nabla H_b(x_b). The discrete-time current and voltage vectors i_E(j) \in \mathbb{R}^3, v_E(j) \in \mathbb{R}^3, i_c(j) \in \mathbb{R}^2, v_c(j) \in \mathbb{R}^2 are defined by:
i_E(j) = [\,i_l(j)\;\; i_e(j)\;\; i_r(j)\,]^T, \qquad v_E(j) = [\,v_l(j)\;\; v_e(j)\;\; v_r(j)\,]^T, \qquad i_c(j) = [\,i_{bb}(j)\;\; i_b(j)\,]^T, \qquad v_c(j) = [\,v_b(j)\;\; v_{bb}(j)\,]^T. \qquad (5.4.5)
Now we discuss the discretization of the energy storage, characterized by the flow and effort variables f_S(j), e_S(j). From (2.3.21) we note that the Hamiltonian H_b(x_b) is a quadratic function. Thus, according to Example 3.2.5, we obtain the discrete-time scheme for the energy storage flow and effort variables as:
f_S(j) = -\frac{x_b(j) - x_b(j-1)}{h_s}, \qquad e_S(j) = Q_{b1} + Q_{b2}\,\frac{x_b(j) + x_b(j-1)}{2}. \qquad (5.4.6)
Using the discrete-time models of the microgrid interconnection (5.4.1), of the load (5.4.2a), of the renewable power (5.4.2b), of the resistors (5.4.3) and the current and voltage variables (5.4.4)-(5.4.5), we obtain the discrete-time model of the microgrid system:
\begin{bmatrix} f_S(j) \\ i_E(j) \end{bmatrix} = L(d(j)) \begin{bmatrix} e_S(j) \\ v_E(j) \end{bmatrix}, \qquad (5.4.7)
subject to the set of constraints:
0 = B_{11} B_{21}^{-1} B_{22}\, A_2(d(j)) \begin{bmatrix} e_S(j) \\ v_E(j) \end{bmatrix}, \qquad P_l(j) = v_l(j)\, i_l(j), \qquad P_r(j) = -v_r(j)\, i_r(j),
f_S(j) = -\frac{x_b(j) - x_b(j-1)}{h_s}, \qquad e_S(j) = Q_{b1} + Q_{b2}\,\frac{x_b(j) + x_b(j-1)}{2}, \qquad (5.4.8)
where the matrices A_0(d) \in \mathbb{R}^{6\times 1}, A_1(d) \in \mathbb{R}^{6\times 6}, A_2(d) \in \mathbb{R}^{6\times 5}, L(d) \in \mathbb{R}^{5\times 5} are defined in (5.3.30)-(5.3.31) and f_S(j), e_S(j) \in \mathbb{R}^2 are defined in (5.4.6). Also, the discretizations of the static constraints (5.3.10), (5.3.6a) and (5.3.32) are described as:
0 \le d(j), \qquad 0.5\,x_{b,max} \le x_b(j) \le x_{b,max}, \qquad i_{e,min} \le i_e(j) \le i_{e,max}, \qquad i_{min} \le [\,0\;\;1\;\;0\;\;0\;\;0\;\;0\,]\, A_2(d(j)) \begin{bmatrix} e_S(j) \\ v_E(j) \end{bmatrix} \le i_{max}. \qquad (5.4.9)
The discrete-time model (5.4.7)-(5.4.9) preserves the energy conservation property of the continuous-time model, which can be verified as follows.
Proof. Thanks to the quadratic form of the Hamiltonian H_b(x_b) in (2.3.21) and the discrete-time storage model (5.4.6) with the symmetric matrix Q_{b2}, it is easy to verify that:
H_b(x_b(j)) - H_b(x_b(j-1)) = -f_S^T(j)\, e_S(j)\, h_s. \qquad (5.4.10)
From (5.4.1a)-(5.4.1b) and (5.4.5), we obtain the power conservation property:
i_1^T(j)\, v_1(j) + i_2^T(j)\, v_2(j) = 0, \qquad i_c^T(j)\, v_c(j) = 0. \qquad (5.4.11)
Substituting i_1(j), v_1(j) from (5.4.4) into (5.4.11), we obtain:
f_S^T(j)\, e_S(j) + i_E^T(j)\, v_E(j) + i_2^T(j)\, v_2(j) = 0. \qquad (5.4.12)
From (5.4.3)-(5.4.5) and (5.4.10)-(5.4.12), we obtain the system energy conservation relation:
H_b(x_b(j)) - H_b(x_b(j-1)) = i_e(j)\, v_e(j)\, h_s - v_2^T(j)\, R^{-1}\, v_2(j)\, h_s + \int_{(j-1)h_s}^{j h_s} \big(P_l(\tau) + P_r(\tau)\big)\, d\tau,
where R \in \mathbb{R}^{6\times 6} is the resistive matrix defined in (5.3.30) and h_s \in \mathbb{R} is the discretization time step. This result indicates that the change in the stored energy equals the supplied energy minus the energy dissipated in the resistive elements, i.e., energy is conserved.
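The energy-balance relation proved above can be checked numerically on a toy example discretized with the scheme (5.4.6): the sketch below simulates a single quadratic-Hamiltonian storage element discharging through a resistor and verifies at every step that the decrease of H equals the dissipated energy. All numerical values and the sign conventions of this toy circuit are placeholders, not the thesis' microgrid data.

```python
import numpy as np

# Toy example: one storage element x with H(x) = 0.5 * Q2 * x^2, discharging in a resistor Rr
C, Rr, h = 2.0, 5.0, 0.1
Q2 = 1.0 / C                      # Hessian of the quadratic Hamiltonian (affine term Q1 = 0)

def H(x):
    return 0.5 * Q2 * x**2

x = 1.0
for j in range(100):
    # Midpoint discretization as in (5.4.6): e_S = Q2*(x_new + x)/2, f_S = -(x_new - x)/h
    # Circuit law of the toy example: the storage discharges through the resistor, e_S = Rr * f_S
    # => (x_new - x)/h = -Q2*(x_new + x)/(2*Rr), solvable in closed form:
    a = h * Q2 / (2.0 * Rr)
    x_new = (1.0 - a) / (1.0 + a) * x
    e_mid = Q2 * (x_new + x) / 2.0
    f_S = -(x_new - x) / h
    # Discrete energy balance, cf. (5.4.10): H(x_new) - H(x) = -f_S * e_mid * h
    assert np.isclose(H(x_new) - H(x), -f_S * e_mid * h)
    # ... and this equals minus the energy dissipated in the resistor over the step
    assert np.isclose(H(x_new) - H(x), -(e_mid**2 / Rr) * h)
    x = x_new
```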
Remark 5.4.2. Note that, besides the discretization method presented here, there is another method based on differential flatness and high-order B-splines-based parameterization (see Section 4.2). However, in this case, the flat output is difficult to find. Thus, this method is not considered in the presented work.
Next, we formulate the optimization problem for the online scheduling of the battery operation, with the twin goals of minimizing the price of the acquired electricity while at the same time respecting the constraints introduced earlier.
The resulting optimization problem is nonlinear both in the cost and in the constraints (as seen in (5.4.14)-(5.4.16)). Still, there are specialized solvers (such as IPOPT, [START_REF] Biegler | Large-scale nonlinear programming using ipopt: An integrating framework for enterprise-wide dynamic optimization[END_REF]) which can handle relatively large prediction horizons.
Note that the increase of the prediction horizon length N p in (5.4.16) entails that the optimization problem minimizes the cost along the entire horizon. It may, however, be the case that the cost function is affected by uncertainties such that the cost values subsequent to the present values along the prediction horizon are less reliable. A solution is to vary the weight γ ∈ (0, 1) from (5.4.16) associated to each cost value over the prediction horizon (i.e., varying γ we may assign less importance to the cost values which are further in the future [START_REF] Hovd | Handling state and output constraints in MPC using timedependent weights[END_REF]).
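To make the scheduling idea concrete, here is a heavily simplified sketch of a receding-horizon battery scheduling problem in cvxpy. It uses a single-state linear battery model and a pure power-balance constraint instead of the nonlinear PH microgrid model (5.4.7)-(5.4.9), so it only illustrates the structure of the economic cost with the discounting weight γ; all profiles and parameters are placeholders.

```python
import numpy as np
import cvxpy as cp

Np, h = 48, 0.5                       # prediction horizon (steps) and step (hours), placeholders
gamma = 0.98                          # discount factor over the horizon
E_max, P_bat_max = 10.0, 3.0          # battery capacity [kWh] and power limit [kW]
P_grid_max = 6.0

t = np.arange(Np)
price = 0.15 + 0.10 * (np.sin(2 * np.pi * t / Np) > 0)      # placeholder price profile [eur/kWh]
P_load = 2.0 + 1.0 * np.sin(2 * np.pi * t / Np) ** 2        # placeholder load power [kW]
P_ren = np.clip(3.0 * np.sin(np.pi * t / Np), 0.0, None)    # placeholder renewable power [kW]

E = cp.Variable(Np + 1)               # stored energy
P_bat = cp.Variable(Np)               # battery power (positive = discharging)
P_grid = cp.Variable(Np)              # power bought from (>0) or sold to (<0) the grid

constraints = [E[0] == 0.5 * E_max, E[Np] >= 0.5 * E_max]
cost = 0
for k in range(Np):
    constraints += [E[k + 1] == E[k] - h * P_bat[k],
                    P_grid[k] + P_ren[k] + P_bat[k] == P_load[k],   # power balance
                    cp.abs(P_bat[k]) <= P_bat_max,
                    cp.abs(P_grid[k]) <= P_grid_max,
                    E[k + 1] >= 0.5 * E_max, E[k + 1] <= E_max]
    cost += gamma ** k * price[k] * P_grid[k] * h                   # discounted electricity cost

cp.Problem(cp.Minimize(cost), constraints).solve()
print(P_bat.value[:4])                # first moves of the schedule (applied in receding horizon)
```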
Simulation results
This section presents simulation results under different scenarios for the operation and control of the DC microgrid elevator system illustrated in Fig. 2.1.1 and equivalently represented by the electrical DC circuit in Fig. 5.3.3. The forthcoming simulations use the reference profiles described in Section 2.6.1 and the battery parameters presented in (5.4.6)-(5.4.9) with the numerical data given by the industrial partner Sodimas (an elevator company from France) and [START_REF] Desdouits | Multisource elevator energy optimization and control[END_REF]. They are illustrated in Fig. 5.5.1 and Table 5.5.1. Two simulation scenarios are considered here: nominal and perturbation-affected electrical power of load and renewable unit. The perturbation is assumed to be bounded in a symmetrical tube as in (5.5.2). We use different values for the SoC (States of Charge) of the battery state x b (t) ∈ R 2 given in (5.3.5):
SoC_1(t) = \frac{x_{b1}(t)}{x_{1,max}}, \qquad (5.5.1a)
SoC_2(t) = \frac{x_{b2}(t)}{x_{2,max}}, \qquad (5.5.1b)
SoC(t) = \frac{x_{b1}(t) + x_{b2}(t)}{q_{max}}, \qquad (5.5.1c)
where x_{b1}(t), x_{b2}(t) \in \mathbb{R} are the first and second coordinates of the state vector x_b(t) in (5.3.5a). The numerical optimization problem is solved using Yalmip [Löfberg, 2004] and IPOPT [Wächter, 2002] in Matlab 2015a. The constrained closed-loop dynamics are implemented using the fsolve function in Matlab 2015a with a fixed sampling time h = 36 seconds over a horizon of 24 hours. Note that this sampling time corresponds to the discretization of the continuous nonlinear dynamics, while the power profiles are updated every 30 minutes.
Nominal scenario: Fig. 5.5.2 illustrates the nominal scenario, i.e., the battery charges x_b(t) along the simulation horizon (24 h). From 7 to 9 o'clock, the first charge SoC_1(t) attains its maximal limit, but the second charge SoC_2(t) and the total charge SoC(t) do not. This means that the battery can still be charged, but with a smaller current. Moreover, because the battery charges respect their constraints, we conclude that the load power demand is always satisfied. Fig. 5.5.3 describes the actual electrical power charged/discharged by the DC components; positive signs indicate that power is supplied to the microgrid. It can be observed that when the electricity price is cheap the battery is charged; conversely, it is discharged during high load and expensive electricity price. Furthermore, to minimize the cost, the battery is discharged down to its lower limit of half the maximum capacity at the end of the day, in preparation for the next day. Increasing the battery capacity has a diminishing effect on the overall cost reduction; we tested this assumption in simulation, as illustrated in Fig. 5.5.4: above a capacity of 13 times the initial capacity value q_max described by (5.3.7), there is no discernible improvement. This is justified by the fact that there is then enough capacity to reduce the external grid demand to its minimum. This may change with the length of the prediction horizon and a varying electricity price (where it makes sense for the battery to arbitrate the fluctuations).
Perturbation-affected scenario: Similar simulations are implemented for a perturbation-affected scenario. More precisely, the electrical power of the load and of the renewable source lie within some uncertainty range:
P_l(t) \in \bar P_l(t)\,[1 - \epsilon_{l,min},\; 1 + \epsilon_{l,max}], \qquad (5.5.2a)
P_r(t) \in \bar P_r(t)\,[1 - \epsilon_{r,min},\; 1 + \epsilon_{r,max}], \qquad (5.5.2b)
where the \epsilon_{(.)} are positive numbers, taken here as \epsilon_{l,min} = \epsilon_{l,max}, \epsilon_{r,min} = \epsilon_{r,max} with the values set to 0.2. The battery state of charge and the components' electrical power are presented in Figs. 5.5.5 and 5.5.6. Fig. 5.5.5 illustrates the battery state of charge (for the situations considered in (5.5.2)) with bounded uncertainty affecting the load and renewable electrical power. We can observe that the battery charge respects the imposed constraints and the load power demand is always satisfied. Note that this result is not significantly different from the nominal case in Fig. 5.5.2, since the integral of the perturbation is zero, as specified by (5.5.2). Furthermore, Fig. 5.5.6 describes the components' actual provided electrical power under the perturbation-affected scenario. Since the current (and power) of the external grid is fixed, most of the fluctuation of the microgrid electrical power is absorbed by the battery.
Conclusions
This chapter introduced an efficient power scheduling for a DC microgrid under a constrained optimization-based control approach. Firstly, a detailed model of the DC microgrid system was presented using Port-Hamiltonian formulations on graphs, with the advantage of preserving the underlying asset of an electrical system, the power conservation property. Next, a centralized optimization problem was formulated for efficient battery scheduling taking into account operating constraints, profiles and costs. Simulation results validated the proposed approach. Briefly, the original contributions of this work stem from:
• the DC microgrid is modeled through Port-Hamiltonian formulations. The procedure is a general one and can be easily extended and applied to any microgrid structure, with the advantage of explicitly taking into account the power conservation of the system interconnections;
• the proposed constrained optimization problem, which finds the optimal balance between battery usage and the profit gained from electricity management;
• the simulation results for the energy management of a particular DC microgrid elevator system which validate the proposed approach.
As future work, we envision several directions of improvement for the constrained optimization-based control scheme: i) feasibility, by exploiting the properties and specific form of the Port-Hamiltonian formulations; ii) computational improvements, by reducing the prediction horizon; iii) robustness, by taking the disturbances explicitly into consideration, etc. Furthermore, we envision extending this approach by explicitly taking into account different time scales in the control design scheme. Recent works with additional information regarding hierarchical microgrid control are presented in [START_REF] Sechilariu | DC microgrid power flow optimization by multi-layer supervision control. Design and experimental validation[END_REF],Iovine et al., 2017]. Also, a comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid is studied in [START_REF] Velarde | Application of robust model predictive control to a renewable hydrogen-based microgrid[END_REF].
Chapter 6
Conclusions and future developments
Conclusions
The present manuscript studied different optimization-based control strategies for the optimal energy distribution within DC microgrids. For explicitly describing the power-preserving interconnection of microgrids, the system model described by [START_REF] Paire | A real-time sharing reference voltage for hybrid generation power system[END_REF] was extended using the PH (Port-Hamiltonian) formalism presented in [van der Schaft and Jeltsema, 2014]. Then, an optimization-based control design, which combines a differential flat output (parameterized with B-splines) with a tracking MPC, was considered for minimizing the dissipated energy within the electro-mechanical elevator. Furthermore, an economic MPC (Model Predictive Control) approach, coupled with a PH model on graphs of the microgrid, was investigated for optimal power balancing within the system.
We have to underline that the micro- and smart-grid field is vast and that an exhaustive investigation is beyond the scope of this work. Therefore, we have concentrated our contribution on a specific theme related to modelling and control approaches for DC microgrid systems. However, we believe that the concepts borrowed from Port-Hamiltonian formulations, differential flatness, Model Predictive Control and B-splines can be used, combined and redesigned to deal with the various challenging issues appearing in the control of complex energy systems.
We have begun the manuscript by presenting a dynamical model for DC microgrids using the Bond Graph and PH formalisms with multiple time scales. This modelling method was used to describe all the microgrid components, their interconnections and the global system. The obtained microgrid dynamics were described in an implicit form composed of differential and algebraic equations, with the interconnection matrices modulated by the control variables (i.e., the duty cycles). Based on the derived model, we presented typical microgrid constraints and cost functions, which were then used to formulate the optimal control problems.
To integrate the presented microgrid model within the optimization-based control formulation, we proposed a discretization scheme which preserves the power-preserving interconnection and the energy conservation properties of the continuous-time PH model. This scheme was obtained by combining the discrete-time models of the PH system elements, i.e., the Dirac structure, the energy storage, the resistive elements and the environment. The proposed discretization method was validated for the cases of the electro-mechanical elevator and the DC microgrid in the fast time scale (corresponding to the dynamics of the converters, the DC bus, the supercapacitor and the electro-mechanical elevator). These simulation results illustrated the efficiency of the proposed discretization method with respect to classical methods (the implicit and explicit Euler methods were used for the comparisons).
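As a minimal illustration of the energy-preservation argument (not the thesis's implementation, and with placeholder matrices rather than the microgrid model), the following Python sketch applies the implicit midpoint rule to a lossless PH system ẋ = JQx with quadratic Hamiltonian H(x) = ½xᵀQx; the midpoint rule keeps H exactly constant, while explicit Euler makes it drift.

```python
import numpy as np

# Sketch only: lossless PH system  x_dot = J @ Q @ x  with H(x) = 0.5 * x.T @ Q @ x.
# J is skew-symmetric, so the exact flow conserves H. The implicit midpoint rule
# preserves this quadratic invariant exactly; explicit Euler does not.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # placeholder interconnection matrix
Q = np.diag([2.0, 0.5])                      # placeholder energy weight matrix
A = J @ Q
h, N = 0.01, 2000                            # time step and number of steps
H = lambda x: 0.5 * x @ Q @ x

I2 = np.eye(2)
M_mid = np.linalg.solve(I2 - 0.5 * h * A, I2 + 0.5 * h * A)   # midpoint update matrix
M_euler = I2 + h * A                                          # explicit Euler update matrix

x0 = np.array([1.0, 0.0])
H0 = H(x0)
x_mid, x_euler = x0.copy(), x0.copy()
for _ in range(N):
    x_mid = M_mid @ x_mid
    x_euler = M_euler @ x_euler

print("energy drift, implicit midpoint:", abs(H(x_mid) - H0))
print("energy drift, explicit Euler   :", abs(H(x_euler) - H0))
```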
We have studied a constrained optimization-based control design method composed of an off-line reference profile generation and an on-line tracking control for minimizing the dissipated energy within the electro-mechanical elevator. Differential flatness with a B-spline parameterization was used to represent the system dynamics and the state and input constraints. The obtained reference profiles were compared to classical profiles, obtained using the "trapezoidal speed" and the "Maximum Torque Per Ampere" methods. Online, the reference profile tracking was realized through an MPC. The efficiency of the proposed method was highlighted through simulations of the perturbation-affected scenario.
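The sketch below illustrates only the B-spline side of this construction, using scipy; the order, knots and control points are illustrative placeholders, not the reference profiles of Chapter 4. It shows how a profile and its time derivative are both available in closed form once the flat output is parameterized by a B-spline, and why bounds on the control points carry over to the continuous-time curve (convex-hull property).

```python
import numpy as np
from scipy.interpolate import BSpline

# Sketch only: a B-spline-parameterized reference profile in the spirit of the
# flat-output parameterization of Chapter 4. Order, knots and control points
# below are illustrative placeholders, not the thesis values.
d = 4                                        # B-spline order (degree d - 1 = 3)
N = 10                                       # number of control points
t0, tf = 0.0, 30.0
interior = np.linspace(t0, tf, N - d + 2)    # clamped (open uniform) knot vector
knots = np.concatenate(([t0] * (d - 1), interior, [tf] * (d - 1)))
ctrl = np.linspace(0.0, 29.6, N)             # placeholder control points (e.g. rotor speed)

speed_ref = BSpline(knots, ctrl, d - 1)      # reference profile omega*(t)
accel_ref = speed_ref.derivative()           # its time derivative, again a B-spline

tt = np.linspace(t0, tf, 5)
print("omega*(t)   :", speed_ref(tt))
print("d omega*/dt :", accel_ref(tt))
# Since the curve stays in the convex hull of its control points, bounds imposed
# on the control points translate into bounds on the continuous-time profile.
```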
Finally, we have developed an economic MPC approach to investigate the power balancing for the DC microgrid. The slow microgrid dynamical model, corresponding to the battery dynamics time scale, was first obtained from the simplification of the fast dynamics, i.e., the converters, the DC bus, the supercapacitor and the electro-mechanical elevator system were considered at steady state. The model was represented using the PH formalism on graphs, which explicitly describes the microgrid circuit topology. For the MPC design, the presented dynamics were discretized using the energy-preserving discretization method proposed in Chapter 3. We formulated an economic MPC which takes into account the discrete-time model, the renewable and load power profiles and the electricity price. The control method was validated through simulations with different scenarios.
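A schematic version of such an economic scheduling problem is sketched below with cvxpy; the first-order storage model, bounds, tariff and profiles are illustrative assumptions and do not reproduce the PH-on-graphs model or the thesis data. The point is only the structure: a price-weighted linear cost, a power-balance constraint and box constraints over a receding horizon.

```python
import numpy as np
import cvxpy as cp

# Sketch only: an economic scheduling problem with a first-order storage model.
# Horizon, tariff, profiles and bounds are illustrative assumptions.
Np, h = 24, 1.0                                        # horizon [steps], step [h]
price = 0.10 + 0.08 * (np.arange(Np) >= 18)            # cheap/expensive tariff [EUR/kWh]
p_load = 0.8 * np.ones(Np)                             # forecast load power [kW]
p_ren = np.clip(np.sin(np.pi * np.arange(Np) / 24), 0, None)  # renewable forecast [kW]

p_grid = cp.Variable(Np)                               # power bought (+) / sold (-) [kW]
p_batt = cp.Variable(Np)                               # battery discharge (+) / charge (-) [kW]
soc = cp.Variable(Np + 1)                              # stored energy [kWh]

constraints = [soc[0] == 5.0,
               soc[1:] == soc[:-1] - h * p_batt,       # integrator storage model
               soc >= 2.0, soc <= 10.0,                # state-of-charge bounds
               cp.abs(p_batt) <= 2.0,                  # charge/discharge limits
               cp.abs(p_grid) <= 3.0,                  # external grid limit
               p_grid + p_batt + p_ren == p_load]      # power balance at every step

cost = h * cp.sum(cp.multiply(price, p_grid))          # electricity bill over the horizon
problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print("optimal cost over the horizon:", problem.value)
print("first grid setpoint to apply (receding horizon):", p_grid.value[0])
```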
Since the combination of the PH formalism and constrained optimization-based control is a new approach to the microgrid energy management topic, there are still many open questions; they are detailed in the next section.
Future developments
The short-term future work is connected with the numerical feasibility of the optimization-based control designs presented throughout the manuscript and with multi-layer control strategies for DC microgrids.
In Chapter 4, a remaining hard point is that there is a continuous-time equality constraint in the reference profile generation problem which is difficult to satisfy numerically. Since this constraint is parametrized using flat outputs and B-splines, there are two possible solutions. In the first one, we use the equality to simplify the dynamics such that there is no longer any equality constraint to deal with. In the second one, we modify the B-spline parameterization (e.g., change the B-spline order and the number of B-splines) such that the equality conditions involving the control points admit solutions.
The control approach presented in Chapter 4 may be extended to minimize the dissipated energy within the fast part of the microgrid dynamics. The fast part corresponds to the converters, the DC bus, the supercapacitor and the electro-mechanical elevator. The state variables are the converter capacitor charges, the converter fluxes, the DC bus capacitor charges, the supercapacitor charge, the magnetic fluxes of the motor, the mechanical momentum of the elevator and the motor angle. The control variables are the external current and the converter duty cycles of the battery unit, the supercapacitor unit and the electro-mechanical elevator. Furthermore, from the simulation results in Chapter 3, we notice that the dynamics of the converters, the DC bus and the motor stator are faster than the others (i.e., the dynamics of the mechanical elevator and the supercapacitor). Thus, based on the singular perturbation method [Khalil, 2002], we could use only the dynamics of the mechanical elevator and the supercapacitor in the reference profile generation procedure to reduce the computational complexity.
In Chapter 5, the designed economic MPC for the microgrid power balancing with short prediction horizons (e.g., 7 hours) often becomes infeasible during the simulation. This is due to the fact that the regulator cannot predict the lack of stored energy at the moment of the load power peak (i.e., the demand profile varies too much to be compensated by the short-horizon MPC). This drawback may be addressed by adding a term to the cost function which matches the control laws to those obtained in the long prediction horizon case (i.e., 24 h in the current design). Moreover, the energy efficiency of the proposed control design should be compared with other methods, such as the priority-rule approach [Paire, 2010] and the economic MPC with a first-order model of the energy storage units [START_REF] Dos | [END_REF].
The extension of the MPC for the optimal microgrid power balancing to a multi-layer control design is our approach to deal with the different control objectives and the different time scales of the DC microgrid. Fig. 6.2.1 presents an electrical circuit of the DC microgrid, and Fig. 6.2.2 shows the control architecture. The control decomposition is based on the separation of the control objectives, the microgrid dynamics and the constraints.
The high level regulator aims at minimizing the electricity cost while taking into account the slow part of the microgrid dynamics, the electricity price, the power balance and the constraints on the external grid current, the battery current and the battery charge (see also Fig. 6.2.2). The slow model corresponds to the battery dynamics, the renewable power profile and the slow load power profile. The state and control variables are the battery charges and the external grid current, respectively (see also Fig. 6.2.1, where the electrical circuit which takes into account these specifications of the microgrid is presented). The power balance is assumed to be always satisfied thanks to the low level regulators. The control laws are formulated using an economic MPC which penalizes the electricity cost over a finite horizon (see also Chapter 5). Then, this high level regulator sends the computed references of the external grid current (control signal), the supercapacitor voltage and the load voltage to the low level regulator (see also Fig. 6.2.2).

The low level regulator aims at tracking the references given by the high level regulator while considering the fast part of the microgrid dynamics and the constraints on the supercapacitor charge and the battery current. The fast microgrid model corresponds to the supercapacitor dynamics and the fast load power profile (obtained using the reference profiles designed in Chapter 4). The state and control variables are the supercapacitor charge and the duty cycles, respectively. The control laws are formulated using a tracking MPC which penalizes the discrepancies between the actual supercapacitor charge, the actual load voltage and their references given by the high level regulator (see also Fig. 6.2.2).
The long-term future work is concerned with a deeper integration of the PH formalism in constrained optimization-based control designs with a multi-layer architecture. This should be reflected through the consideration of PH system properties (e.g., power-preserving interconnection, energy conservation, Casimir functions) in different aspects of the control, such as model reduction and stability.
Model reduction is important when designing multi-layer controls for multiple time scale systems, such as microgrids, since it drastically simplifies the computational complexity of the regulators. According to the singular perturbation method [Khalil, 2002], the reduced model retains the slow part of the system dynamics after eliminating the fast part. This method mainly relies on the two following assumptions: i) in the fast time scale the slow state variables are constant; ii) in the slow time scale the fast dynamics are asymptotically stable.
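A minimal numerical sketch of this reduction step, for a generic linear two-time-scale system with placeholder matrices (not the microgrid dynamics), is given below: the fast states are replaced by their quasi-steady-state values, yielding the reduced slow model.

```python
import numpy as np

# Sketch only: singular-perturbation reduction of a linear two-time-scale system
#   x_s_dot = A11 x_s + A12 x_f,   eps * x_f_dot = A21 x_s + A22 x_f,
# with A22 asymptotically stable. Letting eps -> 0 gives the quasi-steady state
# x_f = -inv(A22) @ A21 @ x_s and the reduced slow model below. Matrices are placeholders.
A11 = np.array([[-0.1]])
A12 = np.array([[1.0, 0.0]])
A21 = np.array([[0.5], [0.0]])
A22 = np.array([[-50.0, 10.0], [0.0, -80.0]])    # fast, Hurwitz block

x_f_map = -np.linalg.solve(A22, A21)             # quasi-steady-state map x_f = M x_s
A_slow = A11 + A12 @ x_f_map                     # reduced slow dynamics matrix
print("quasi-steady-state map M:", x_f_map.ravel())
print("reduced slow model: x_s_dot =", A_slow.ravel(), "* x_s")
```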
Moreover, for preserving system properties described in the PH or graph formulations, reduced models have been derived using the PH [START_REF] Polyuga | Effort-and flow-constraint reduction methods for structure preserving model reduction of port-Hamiltonian systems[END_REF]van der Schaft, 2012,Wu, 2015] or PH on graphs [Monshizadeh and van der Schaft, 2014] formalisms. However, these works consider systems with constant interconnections which is not the case for the microgrid investigated in our work, since the microgrid interconnection is modulated by the control signals, i.e., the duty cycles. Thus, reduction processes for the microgrid model depend on low level control designs which give the relation between the control and state variables.
The model reduction using the B-spline approximation of the system's flat outputs is used to approximate the continuous-time optimization problems in the control designs (see Chapter 4). However, we have not yet established any insightful relation between the flat outputs and the PH formalism. Since differential flatness characterizes the feasible system trajectories, there may be a connection between the flat outputs and the Casimir functions of Port-Controlled Hamiltonian systems (a class of PH systems).
The stability of the constrained closed-loop system using tracking MPC may be studied through the PH formalism. As presented above, the PH formalism is useful for system stability analysis and for control designs based on the interconnection, the dissipation and the stored energy of the system dynamics [START_REF] Duindam | Modeling and Control of Complex Physical Systems: The port-Hamiltonian approach[END_REF]. An interesting property of PH systems is passivity, where the energy (Hamiltonian) can be considered as a Lyapunov function. There are many control methods developed using the PH formalism, as presented in [van der Schaft and Jeltsema, 2014, Wei and [START_REF] Wei | [END_REF]], e.g., Control by Interconnection, Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC). However, none of these methods can explicitly deal with the state and input constraints, while MPC handles these easily. While the theory on linear MPC has gained ground over the last decades [START_REF] Rawlings | Model Predictive Control: Theory and Design[END_REF], nonlinear and economic MPC are still open research topics. For example, the stability demonstration for the closed-loop nonlinear system is difficult since a Lyapunov function is not easy to find. From the previous arguments, while both the PH formalism and MPC are established tools in the literature, to the best of our knowledge (state of the art in 2017), they have never been considered together by the control community.
We propose to use the PH formalism such that, via an MPC control action, the closed-loop dynamics describe a PH system. This is done in three steps: i) choosing the desired PH closed-loop system, ii) finding the explicit control laws, iii) finding the corresponding MPC. Since any MPC-based closed-loop system is in fact a switched system [START_REF] Bemporad | The explicit linear quadratic regulator for constrained systems[END_REF], the desired PH system must also be a switched PH system. [Kloiber, 2014] proposes design methods for stable switched PH systems. Next, from the explicit form of the closed-loop system, we find the explicit control laws by solving the matching equation. Then, the process of finding MPC laws corresponding to given explicit laws is seen as an inverse parametric programming problem [Nguyen, 2015].
describes the transmission line capacitors. Note that ẋt (t) and ∇H t (x t ) are the charge current and voltage vectors of the DC bus capacitors.Employing the Kirchhoff's laws for the DC bus electrical circuit, we obtain the following dynamics of transmission lines:
Table 2.4.1: Real and estimated values in the case without noise.
                    R_s (Ω)   L_d (H)    L_q (H)    φ_f (Wb)
Real value          0.53      0.00896    0.01123    0.05
OE    (0.5 s)       0.6000    0.0798     0.0167     0.0395
OE    (0.005 s)     0.6010    0.0088     0.0111     0.0425
LSI   (0.5 s)       0.5290    0.0090     0.0112     0.0500
LSI   (0.005 s)     0.5281    0.0090     0.0113     0.0524
Table 2.4.2: Real and estimated values in the case with noise.
                R_s (Ω)   L_d (H)    L_q (H)    φ_f (Wb)
Real value      0.53      0.00896    0.01123    0.05
OE              0.6000    0.0799     0.0158     0.0396
LSI             0.347     0.0004     0.0012     0.0407
Table 2.4.3: Real and estimated values in the case with noise and with a low-pass filter.
                R_s (Ω)   L_d (H)    L_q (H)    φ_f (Wb)
Real value      0.53      0.00896    0.01123    0.05
OE              0.6001    0.0797     0.0193     0.0392
LSI             0.5294    0.0089     0.0112     0.0503
Gathering the dynamical equations (2.3.1), (2.3.8), (2.3.15), (2.3.25), (2.3.29), (2.4.24), the components connection (2.5.1)-(2.5.6), the transmission lines description (2.3.9), (2.3.19), and the electro-mechanical elevator structure matrices (2.4.26), (2.4.28), we obtain the global dynamics of the microgrid system as follows:
by f S,d , f R,d , f E,d , e S,d , e R,d , e E,d . Thanks to(3.2.10) and the skew-symmetry form of D i , the power conservation is also satisfied in the discrete case:f T S,i e S,i + f T R,i e R,i + f T E,i e E,i = 0, ∀i ∈ {1, . . . , N },(3.2.13) wheref S,i , f R,i , f E,i , e S,i , e R,i , e E,i are the i-th elements of the discrete functions f S,d , f R,d , f E,d , e S,d , e R,d, e E,d , respectively. Discrete energy storage: In the continuous case, the Hamiltonian satisfies the chain rule. This will be also guaranteed in the discrete time case by appropriate choices of the discrete functions for the storage port variables f S,d , e S,d which satisfy the discrete energy balance equation on each time interval [t i-1 , t i ].Definition 3.2.2 (Discrete energy storage). Let f S,d (x d ), e S,d (x d ) and x d ∈ X N be the discrete functions of f S (t), e S (t) and x(t), respectively. They are said admissible if:
E,z,d be the discretizations of the state variable and the flow and effort vectors of the PH system (3.2.30)-(3.2.32) by using the energy-preserving discretization method as in Definitions 3.2.1-3.2.2, 3.2.7-3.2.8. Then, the discretizations of the state, flow and effort vectors of the PH system (3.2.4)-(3.2.6) obtained by
.2.33) Proposition 3.2.10. Let z d , f S,z,d , f R,z,d , f E,z,d , e S,z,d , e R,z,d , e
The discrete-time state, flow and effort vectors in(3.2.34) satisfy conditions(3.2.11),(3.2.14), thus, concluding the proof.
.2.34) is an energy-preserving discretization of the PH system (3.2.4)-
(3.2.6)
.
Proof.
3.2.1: State variable errors of the discretization schemes: explicit Euler, implicit Euler, mix scheme 1, mix scheme 2 and Casimir-preserving scheme. Fig. 3.2.2 illustrates the evolution of the Hamiltonian and Casimir of the presented discretization schemes. We can see that the schemes (3.2.37a) and (3.2.37b) do not preserve these invariants. The schemes (3.2.38a) and (3.2.38b) preserve the Hamiltonian (i.e., energy) but not the Casimir. The scheme (3.2.41)-(3.2.43) preserves both the Hamiltonian and the Casimir.
Table 3.3.1: Numerical data for the electro-mechanical elevator.
Name Notation Unit Value
Number of pole pair p 10
Stator resistor R l [Ω] 0.53
Direct stator inductance L d [mH] 8.96
Quadrature stator inductance L q [H] 11.23
Rotor linkage flux φ f [V s] 0.94
Cabin mass M c [kg] 350.00
Counterweight mass M p [kg] 300.00
Pulley radius ρ [cm] 6.25
Gravity acceleration g [m/s 2 ] 9.81
Table 3.3.2: Configuration for the electro-mechanical elevator simulation.
Name              Notation    Unit    Value
Time interval     T           [s]     0.15
Time steps        h           [s]     {10^-5, 10^-4, 10^-3, 10^-2, 5·10^-2}
Input voltages    v_l         [V]     [230 230]^T
Initial states    x_l0                [7 7 150 0]^T
Table 3.4.1: Numerical data for the ESS.
Name Notation Unit Value
DC bus capacitors C b , C s , C l , C e , C r [C] 0.008
DC bus resistors R t,1
Table 3.4.2: Configuration for microgrid simulation.
Name Notation Unit Value
Time interval T [s] 0.01
Duty cycles d [0.97 0.93 0.5 0.5] T
External current i e [A] -1
Renewable power P r [W ] 400
Initial DC bus charges x t (0) [C] [3.432 2.808 3.432 3.120 3.120] T
Initial battery charges x b (0) [C] [210816 316224] T
Initial battery converter states x cb (0) [0 3.2 0 0.104] T
Initial supercapacitor charges x s (0) [C] 1740
Initial supercapacitor converter states
Table 4.2.1: Parameters for the B-spline example in Fig. 4.2.2: order d, B-spline number N, knot number ν, knot vector T.
in the optimization problem (4.3.23), we obtain:

min_{Z, ε} Ṽ_l(Z, ε)    (4.3.36a)
subject to the constraints (4.3.28), (4.3.34b)-(4.3.35).    (4.3.36b)

Notice that, in the optimization problem (4.3.36), the cost function is quadratic, the constraints in (4.3.28) are convex and the constraints in (4.3.35) are linear. However, since the constraints (4.3.34) are nonlinear, (4.3.36) is a nonlinear optimization problem. Still, there are specialized solvers (like IPOPT) available for such problems.
Table 4.3.1: Settings for the simulations of the electro-mechanical elevator speed profile generation.
Name Notation Unit Value
Time interval [t 0 , t f ] [s] [0, 30]
B-spline number N 10
B-spline order d 4
Number of pole pairs p 10
Direct inductance L d [mH] 8.96
Quadrature inductance L q [mH] 11.23
Stator resistance R s [Ω] 0.53
Rotor linkage flux φ f [V.s] 0.944
Mechanical inertia J [kg.m 2 ] 3.53
Gravity torque Γ res [N.m] 149
Maximal current amplitude i max [A] 41.2
DC-link voltage v ref [V ] 400
Maximal angular speed ω l,max [rad/s] 29.6
Minimal angular speed ω l,min [rad/s] 0
Initial rotor angle θ 0 [rad] 0
Final rotor angle θ f [rad] 592.6
Table 4.3.3: Computation time in seconds of the off-line reference profile generation.
N=8 N=9 N=10 N=11 N=12
d=4 6.0800 7.0780 7.9130 8.8400 9.9960
d=5 8.3550 10.0300 11.9450 13.6260 15.2460
d=6 11.0610 13.8170 16.8470 20.5200 23.8510
Tables 4.3.2-4.3.3 present values of the relaxation factor, in
Table 4.3.4: MPC parameters.
Variable Notation Unit Value
Sample time h [s] 0.001
Prediction horizon N p 1
State weight matrix Q
Fig. 5.2.1 illustrates a directed graph composed of 4 vertices and 6 directed edges. Using Definition 5.2.1 of directed graphs, we determine the incidence matrix B ∈ R^{4×6} of the graph in Fig. 5.2.1 as in (5.2.1):

Figure 5.2.1: Directed graph example.
Table 5.5.1: Numerical data for the microgrid components.
Name                     Notation           Value
Scheduling time step     h_s [h]            0.5
Prediction horizon       N_p                48
Weighting parameter      γ ∈ (0, 1)         0.5
Battery parameters       Q_b1 [V]           [13 13]^T
                         Q_b2 [V/C]         diag{0.3036, 0.2024}
Battery constraints      x_max [Ah]         [73.2 109.8]^T
                         i_b,min [A]        -20
                         i_b,max [A]        20
Grid constraints         i_e,min [A]        -8
                         i_e,max [A]        8
Load voltage reference   v_ref [V]          380
Resistors                R [Ω]              diag{0.012, 0.015, ...
Figure 5.5.6: Actual electrical power charged/discharged by the DC components under the perturbation scenario (time [h] versus electrical power [W]; curves: storage unit, load, external grid, renewable).

Figure 5.5.7: Actual electrical power charged/discharged by the DC components under the perturbation scenario.
The elevator operation is separated into many travels during a day. At the travel end, the cabin must arrive to the desired building floor while the supercapacitor electrical storage must be at the reference voltage to prepare for the next travel.
Port-Controlled Hamiltonian system is a special case of PH system where the control variables are also the input variables.
A more general model for the transmission lines includes capacitors, inductors and resistors[START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF]. For simplicity we choose here to use capacitors and resistors.
A similar conclusion for a system of AC/DC converters can be found in[START_REF] Zonetti | Modeling and control of HVDC transmission systems from theory to practice and back[END_REF]
Note that, usually there is not a linear dependency between the input variables and the flat output. Therefore, the cost and the constraints are more difficult to handle since they have a more complex form in function of the control points variables.
A continuous function α : [0, ∞) → [0, ∞) is said to belong to class K∞ if: * it is strictly increasing, * α(0) = 0, * lim r→∞ α(r) = ∞.
Usually, we use two pairs of conjugate variables to describe the inputs and outputs of the two subsystems. They satisfy a feedback interconnection defined by (2.2.2). However, for simplicity, we use one pair of the conjugate variables with the appropriate signs in each subsystems.
Note that, in Example 4.2.3 we intentionally chose the dynamical system as in (4.2.8) to show the corresponding flat output representation. It can be verified that(4.3.16) is the system presented inExample 4.2.3.
An electrical element can be a circuit of many basic elements (e.g., the resistor, the inductor, the capacitor, the voltage source, the current source).
by i bR (t), v bR (t) ∈ R 2 , the battery current and voltage are denoted by i bb (t), v bb (t) ∈ R, the structure matrices, G bSR ∈ R 2×2 and G bRE ∈ R 2×1 , are defined in (2. 3.26) and the resistive matrix, R bR ∈ R 2×2 , is defined in (2. 3.24). Furthermore, from (2.6.8)-(2.6.9) the constraints of the battery charges and of the battery current are given as: 0.5x b,max ≤ x b (t) ≤ x b,max , (5.3.6a) i b,min ≤ i bb (t) ≤ i b,max , (5.3.6b) where x b,max ∈ R 2 is the maximal charge vector, i b,min , i b,max ∈ R are the minimal and maximal charge currents. In the one-dimension model of the battery [START_REF] Prodan | Fault tolerant predictive control design for reliable microgrid energy management under uncertainties[END_REF], the maximal charge q max ∈ R is derived from x max by the relation: q max = 1 T 2 x max .
(5.3.7)
Supercapacitor: From the supercapacitor dynamics (2.3.28)-(2. 3.31) and the constraint (5.3.1d), we derive that:
i ss (t) = 0, (5.3.8) where i ss (t) ∈ R is the supercapacitor current. (5.3.8) implies that the supercapacitor is not charged or discharged. Thus, it can be eliminated from the microgrid model in the slow time scale.
The DC/DC converter: As illustrated in Section 2.3.1, the battery unit has an associated DC/DC converter which is described in (2. 3.1)-(2.3.4). In the slow time scale the converter is assumed to be at the steady state defined by the dynamics (2.3.1)-(2. 3.4) and the constraint (5.3.1c). These lead to the relations (5.3.11). For simplicity we define the representative duty cycle, d(t), such that: 5.3.9) where d b (t) ∈ R is the real duty cycle of the converter. From (5.3.9) and (2.6.12) we derive the constraint of d(t): 0 ≤ d(t). (5.3.10) Consequently, in the slow time scale the converter model is assumed to be an ideal transformer described by the following relations: 5.3.11) where i bb (t), i b (t) ∈ R denotes the DC/DC converter current variables, v b (t), v bb (t) ∈ R denote the voltage variables at the two sides as in Fig. 2.3.1.
Similarly, the converter associated to the supercapacitor is also modelled in the slow time scale as:
d sup (t) 0 i ss (t) v s (t) , (5.3.12) where d sup (t) is the representative converter duty cycle, v ss (t), v s (t) ∈ R are the converter voltage variables at the two sides, i ss (t), i s (t) ∈ R are the converter current variables at the two sides as in Fig. 5.3.3. From (5.3.8) and (5.3.12), we obtain: i s (t) = 0.
(5.3.13) (5.3.13) implies that there is not the charged/discharged current through the converter associated to the supercapacitor. Thus, we can eliminate this converter from the microgrid model in the slow time scale. Moreover, from the interconnection between the transmission line and the microgrid components described in (2.3.9), (2.5.1)-(2.5.2) and the elimination of the supercapacitor unit described in (5.3.13), we obtain:
i ts (t) = a T i t (t) = 0, (5.3.14) with a = [0 1 0 0 0] T ∈ R 5 .
Transmission lines: The transmission line model is illustrated in Fig. 2.3.5 and is described in the equations (2.3.9)-(2. 3.14). In the slow time scale the transmission line dynamics are considered at the steady state illustrated by the constraint (5.3.1a). The constraints (5.3.1a) and (5.3.14) imply the elimination of the transmission line capacitors and of the supercapacitor unit in the electrical circuit described in Fig. 2.3.5. Therefore, in the slow time scale the transmission lines (i.e., the DC bus) are modelled as a resistor network
Global DC microgrid model
Combining the above relations (5.3.3), (5.3.4), (5.3.5b), (5.3.11), (5.3.15), (5.3.23) and (5.3.25) we formulate the global microgrid model: (5.3.26c) i r (t)v r (t) = -P r (t), (5.3.26d) (5.3.26g) where i 1 (t), v 1 (t) ∈ R 7 gather the current and voltage variables of the microgrid components (see also (5.3.23)), i 2 (t), v 2 (t) ∈ R 6 gather the current and voltage variables of the resistors of the battery and of the transmission lines (see also (5.3.23)). Also, in (5.3.26)
are the current and voltage variables at the two sides of the DC/DC converter (see also Fig. 5.3.3), i r (t), v r (t) ∈ R are the current and voltage variables of the renewable source, i l (t), v l (t) ∈ R are the current and voltage variables of the load (i.e., the electro-mechanical elevator). Furthermore,
are the structure matrices defined in (5.3.24). Next, i bR (t), v bR (t) ∈ R 2 are the current and voltage variables of the battery resistors, i tR (t), v tR (t) ∈ R 4 are the current and voltage variables of the transmission line resistors, R bR ∈ R 2×2 , R tR ∈ R 2×2 are the resistive matrices of the battery and of the transmission lines.
Compact microgrid dynamics: For compactness we define the matrices
( 5.3.27) From (5.3.18)-(5.3.19), (5.3.23) and (5.3.27) we rewrite the microgrid dynamics (5.3.26) as: 5.3.28) Low level control
Scheduling formulation
In the microgrid model (5.3.28)-(5.3.30) there are two variables that can be used as control variables: the duty cycle d(t) and the external grid current i e (t). Fig. 5.4.1 illustrates the general control scheme of the DC microgrid system where two control levels can be considered. At a lower level (corresponding to fast dynamics) the aim is to keep the load voltage v l (t) constant and at a higher level (corresponding to slow dynamics), an optimal scheduling of the battery operation should be provided. In this work, we concentrate only on the latter problem, and the first objective is assumed to be achieved in the much faster time scale (e.g., [Zonetti et al., 2015, Zhao and[START_REF] Zhao | [END_REF]) by using the duty cycle d(t). Thus, we consider that the only control variable is the external grid current i e (t). Also, at the low level, we assume that the load voltage is forced to a desired value
( 5.4.13) Furthermore, in this work, we aim to minimize the electricity cost that is chosen as the cost in the optimization problem (5.4.14)-(5.4.16) of the scheduling control. Therefore, the considered controller is different from the conventional MPC which penalizes the discrepancy between the system state and the setpoint for the tracking objective. Due to its profit objective, the controller is called economic MPC ( [START_REF] Touretzky | A hierarchical scheduling and control strategy for thermal energystorage systems[END_REF]).
We consider the recursive construction of an optimal open-loop control sequence i e (t) = {i e (t|t), . . . , i e (t + jh s |t), . . . , i e (t + (N p -1)h s |t)} at instant t over a finite receding horizon N p , which leads to a feedback control policy by the effective application of the first control action as system input: 5.4.14) subject to: discrete-time dynamics (5.4.6)-(5.4.8), constraints (5.4.9), (5.4.13), (5.4.15) with j = 1, . . . , N p . In (5.4.14) we make use of the electricity price price(t) to penalize buying and encourage selling with the cost described by the following relation: .4.16) The profiles introduced in Section 2.6.1 appear as parameters here (e.g., the electricity price profile, price(t), the load electrical power, P l (t), and the renewable electrical power, P r (t)). Therefore, the cost (5.4.16) is variable due to the variation in the energy price, but otherwise is linear with respect to the input variable. We can see that the dynamics (5.4.6)-(5.4.8) and the constraints (5.4.9), (5.4.13) are overall nonlinear. Thus, the Appendix A
Permanent Magnet Synchronous Machine
This section details the PMSM dynamical model, which includes a permanent-magnet rotor and a three-phase stator [START_REF] Nicklasson | Passivity-Based Control of a class of Blondel-Park transformable electric machines[END_REF]. The PH formulation involves the determination of the energy storage, the resistive element, the external environment and the interconnection (Dirac structure) for the PMSM dynamical system. Energy storage: It can be seen that the machine energy is stored in the magnetic field of the three stator coils. Their currents and voltages are denoted by i l,L (t) ∈ R 3 and v l,L (t) ∈ R 3 , respectively. It is well known that the energy of the inductors is expressed by:
where the inductance matrix L abc (θ e ) ∈ R 3×3 depends on the rotor angle since the air gap between the rotor and stator varies with different rotor position such that:
where L 1 , L 2 , L 3 correspond to the self and mutual inductors of the stator coils, p is the number of the pole pairs, ϕ a (t) = 2pθ e (t), ϕ b (t) = 2pθ e (t) -2π 3 , ϕ c (t) = 2pθ e (t) + 2π 3 are three magnetic phases of the stator coils. The permanent magnet rotor is characterized by the magnetic flux φ f that causes three fluxes on three stator coils represented by Φ f abc (θ e ) ∈ R 3 :
Then, the magnet flux through the stator is the total of the fluxes of self inductance and of the rotor:
Next, i l,L (t) is derived from (A.0.4) and replaced in (A.0.1). We obtain the stored energy formulation H e ( Φl , θ e ) as a function of the stator magnetic fluxes, Φl (t), and of the rotor angle, θ e (t):
Consequently, from (A.0.4), (A.0.5) and the Lenz's law, the stator inductance current and voltage are rewritten as:
Resistive elements: Obviously, the resistive elements correspond to the stator resistors characterized by the resistance R l for each phase. Assuming that they are linear, the Ohm's law is written as:
External environment: The external environment supplies energy to the PMSM through three ports: electrical voltage source îl (t), vl (t) ∈ R 3 and mechanical source τ e (t), ω l (t) ∈ R. The stator central node is not connected to an external port that leads to a zero current constraint. This is modeled by adding a zero current source i ln (t), v ln (t) ∈ R such that:
i ln (t) = 0. (A.0.8)
Dirac structure: Finally, from the physical laws, we derive the Dirac structure of the PMSM, which describes the relations between the conjugate variables of the previous elements. Firstly, Kirchhoff's laws for the stator electrical circuit are given by:
(A.0.9)
Secondly, the magnetic torque of the machine is expressed as:
Next, the relation of the rotor angle and its speed is given by:
From (A.0.6)-(A.0.11) we derive the PMSM dynamics as:
Appendix B
Symplecticity of Hamiltonian system
This section presents firstly the symplectic vector space and related notions. Then, the symplecticity of the closed-Hamiltonian system in canonical form and on symmplectic submanifolds are defined.
B.1 Symplectic vector space and manifold
Definition B.1.1 (Symplectic bilinear form [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF]). A symplectic bilinear form is a mapping ϕ : S × S → R that is
• bilinear: linear in each argument separately,
• alternating: ϕ(s, s) = 0 hold for all s ∈ S, and
• nondegenerate: ϕ(s, r) = 0 for all s ∈ S implies that r is zero.
A simple example of symplectic form is given as:
where I n ∈ R n×n is the identity matrix.
Definition B.1.2 (Symplectic vector space [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF]). A symplectic vector space is a vector space S over a field F (for example the real numbers R) equipped with a symplectic bilinear form.
The symplectic bilinear form for the basis vectors (s 1 , ..., s n , r 1 , ..., r n ) is given by:
Similarly, the definition of the symplectic manifold M is formulated by replacing the linear vector space S in the previous definition with the manifold M.
Definition B.1.3 (Symplectic map [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF]). Suppose that S, W are symplectic vector spaces with the corresponding symplectic forms φ, ρ. A differentiable map η : S → W is called symplectic if it preserves the symplectic forms, i.e.,
for all s, s 1 , s 2 ∈ S, where ∂ s η(s) is the Jacobian of η(s).
Definition B.1.4 (Symplectic group and symplectic transformation [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF]). In the previous definition, if S = W, then a symplectic map is called a linear symplectic transformation of S. In particular, in this case one has that ϕ(γ(s), γ(r)) = ϕ(s, r) and thus, the linear transformation γ preserves the symplectic form. The set of all symplectic transformations forms a group called the symplectic group and denoted by Sp(S).
B.2 Hamiltonian system

B.2.1 Hamiltonian system in canonical form
The Hamiltonian system in canonical form has the form:
where x(t) ∈ X ⊂ R 2n is the state variable and H(x) is the Hamiltonian. Thus, the system (B.2.1) is a special case of the Port-Hamiltonian systems defined in Section B.2. The interconnection matrix J ϕ represents the Dirac structure. H(x) describes the energy in the storage element. Let ϕ t : X → X be the flow of (B.2.1), i.e., the mapping that advances the solution by time t: ϕ t (x 0 ) = x(t, x 0 ), where x(t, x 0 ) is the solution of the system (B.2.1) corresponding to the initial value x(0) = x 0 . The dynamics (B.2.1) have two important properties [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF].
1. Hamiltonian is the first integral, i.e., its time derivative is zero, Ḣ(t) = 0.
2. The state evolution satisfies symplecticity, which is described by the following theorem, given by Poincaré in 1899.

Theorem B.2.1. [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF] Let H(x) be a twice continuously differentiable function on X ⊂ R 2n . Then, for each fixed t, the flow ϕ t is a symplectic transformation wherever it is defined, i.e.,
where ∂ x0 ϕ t (x 0 ) is the Jacobian of ϕ t (x 0 ).
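A simple numerical illustration of the practical consequence of this property (an assumed harmonic-oscillator example, not taken from the thesis) is sketched below: a symplectic integrator keeps the energy of H(q, p) = ½(q² + p²) bounded, whereas explicit Euler makes it grow.

```python
import numpy as np

# Sketch only: harmonic oscillator H(q, p) = 0.5*(q**2 + p**2). Symplectic Euler,
# being a symplectic map, keeps the energy bounded; explicit Euler is not
# symplectic and the energy grows. Step size and horizon are arbitrary choices.
h, N = 0.1, 1000
H = lambda q, p: 0.5 * (q**2 + p**2)

q_s, p_s = 1.0, 0.0        # symplectic Euler state
q_e, p_e = 1.0, 0.0        # explicit Euler state
for _ in range(N):
    p_s = p_s - h * q_s    # symplectic Euler: update p with the current q ...
    q_s = q_s + h * p_s    # ... then q with the *updated* p
    q_e, p_e = q_e + h * p_e, p_e - h * q_e   # explicit Euler
print("energy after %d steps (exact value 0.5):" % N)
print("  symplectic Euler:", H(q_s, p_s))
print("  explicit Euler  :", H(q_e, p_e))
```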
B.2.2 Hamiltonian system with state-modulated Dirac structure
The interconnection matrix J ϕ depends on the state variables in some situations, for example when the Hamiltonian system (B.2.1) is represented in other coordinates, or when some constraints are added. In these cases, the Hamiltonian system is described as:

where x ∈ X is the state variable and J(x) is skew-symmetric. The interconnection matrix J(x) represents the Dirac structure. H(x) describes the energy in the storage element. In this case, the Dirac structure is said to be modulated by the state variable.
The Hamiltonian system (B.2.2) has the following properties [van der Schaft and Jeltsema, 2014].

• It is easy to see that the Hamiltonian and the Casimir functions are constant along the state trajectory, i.e., Ḣ(t) = 0, Ċ(t) = 0.
• (Integrability) Loosely speaking, a Dirac structure is integrable if it is possible to find local coordinates for the state-space manifold such that the Dirac structure expressed in these coordinates is a constant Dirac structure (i.e., it is not modulated anymore by the state variables). Integrability in this case means that the structure matrix J satisfies the conditions:
for i, j, k = 1, . . . , n. Using Darboux's theorem (see [START_REF] Marsden | Introduction to Mechanics and Symmetry: A basic exposition of classical mechanical systems[END_REF]) around any point x 0 where the rank of matrix J(x) is constant, there exist the local canonical coordinates z = [z 1 z 2 c] T in which the dynamics (B.2.2) are rewritten as:
In (B.2.5) c indicates the independent Casimirs. From Theorem B.2.1 and transformed system (B.2.5), the time evolution of z is a symplectic transformation.
Appendix C
Optimization C.1 Discrete optimization
Discrete optimization is studied in [START_REF] Boyd | Convex Optimization[END_REF]. A discrete optimization problem has the form

minimize g 0 (x) subject to g i (x) ≤ b i , i = 1, . . . , m. (C.1.1)
Here the vector x = [x 1 . . . x n ] T is the optimization variable of the problem, the function g 0 : R n → R is the objective function, the functions g i : R n → R, i = 1, . . . , m, are the constraint functions, and the constants b 1 , . . . , b m are the limits, or bounds, for the constraints. A vector x * is called optimal if it has the smallest objective value among all vectors that satisfy the constraints: for any z with g 1 (z) ≤ b 1 , . . . , g m (z) ≤ b m , we have g 0 (z) ≥ g 0 (x * ).
We generally consider families or classes of optimization problems, characterized by particular forms of the objective and constraint functions. As an important example, the optimization problem (C.1.1) is called a linear program if the objective and constraint functions are linear, i.e., satisfy

g i (αx + βy) = αg i (x) + βg i (y), i = 0, . . . , m, (C.1.2)

for all x, y ∈ R n and all α, β ∈ R. If the optimization problem is not linear, it is called a nonlinear program.
An important class of optimization problems is that of convex optimization problems [START_REF] Boyd | Convex Optimization[END_REF]. For these types of problems the objective and the constraint functions are convex, hence they satisfy the inequality

g i (αx + βy) ≤ αg i (x) + βg i (y), i = 0, . . . , m, (C.1.3)

for all x, y ∈ R n and all α, β ∈ R with α + β = 1, α ≥ 0, β ≥ 0. Comparing (C.1.2) and (C.1.3), we see that convexity is more general than linearity: an inequality replaces the more restrictive equality, and the inequality must hold only for certain values of α and β. Any linear program is therefore a convex optimization problem, so we can consider convex optimization to be a generalization of linear programming.
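As a small illustration (with made-up data), the linear program below, in the form (C.1.1) with linear g_i, can be solved directly with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch only, with made-up data: a linear program in the form (C.1.1),
#   minimize g0(x) = c^T x   subject to   g_i(x) = (A x)_i <= b_i,  x >= 0.
c = np.array([-1.0, -2.0])            # minimize -x1 - 2*x2 (i.e., maximize x1 + 2*x2)
A = np.array([[-1.0, 1.0],            # -x1 + x2 <= 1
              [ 1.0, 1.0]])           #  x1 + x2 <= 4
b = np.array([1.0, 4.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print("optimal x* :", res.x)          # expected: [1.5, 2.5]
print("optimal value g0(x*):", res.fun)
```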
C.2 Continuous-time optimization
For the details of continuous-time optimization, the reader is referred to [Liberzon, 2011]. In Section C.1 we considered the problem of minimizing a function g : R^n → R. Now, instead of R^n we want to allow a general vector space V, and in fact we are interested in the case when this vector space V is infinite-dimensional. Specifically, V will itself be a space of functions. Typical function spaces that we will consider are spaces of functions from some interval [a, b] to R^n (for some n ≥ 1).
Let us denote a generic function in V by y; the letter x is reserved for the argument of y (x will typically be a scalar, and has no relation to x ∈ R^n from the previous section). The function to be minimized is a real-valued function on V, which we now denote by W. Since W is a function on a space of functions, it is called a functional. To summarize, a continuous-time optimization problem has the form minimize W(y) (C.2.1)
with the functional W : V → R.
We also need to equip our function space V with a norm ‖·‖. This is a real-valued function on V which is positive definite, homogeneous, and satisfies the triangle inequality. The norm gives us the notion of a distance, or metric. This allows us to define local minima. We will see how the norm plays a crucial role in the subsequent developments.
We are now ready to formally define local minima of a functional. Let V be a vector space of functions equipped with a norm ‖·‖, let A be a subset of V, and let W be a real-valued functional defined on V (or just on A). A function y* ∈ A is a local minimum of W over A if there exists an ε > 0 such that for all y ∈ A satisfying ‖y − y*‖ < ε we have W(y*) ≤ W(y).
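As a simple illustration of these definitions (added here as an example, not taken from [Liberzon, 2011]), take V = C([0, 1], R) equipped with the norm ‖y‖ = max_{x∈[0,1]} |y(x)|, and the functional W(y) = ∫_0^1 y(x)² dx. Then y* ≡ 0 is a global, hence also local, minimum of W over V, since W(y) ≥ 0 = W(y*) for every y ∈ V.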
Appendix D
Optimal control

D.1 Optimal control formulation
In this section, we briefly present the optimal control problem with its ingredients: control system, cost functional and target set. All the details of optimal control can be found in [Liberzon, 2011].
The first basic ingredient of an optimal control problem is the control system. It generates possible behaviors and is described by ordinary differential equations (ODEs) of the form
ẋ = g(t, x, u), x(t_0) = x_0,   (D.1.1)
where x is the state taking values in R n , u is the control input taking values in some control set U ⊂ R m , t is time, t 0 is the initial time, and x 0 is the initial state.
The second basic ingredient is the cost functional. For given initial data (t_0, x_0), the cost functional assigns a cost value to each admissible control. It is denoted by W. In the finite-horizon case, it has the form:
W(u) = ∫_{t_0}^{t_f} L(t, x(t), u(t)) dt + V_f(t_f, x_f).   (D.1.2)
In the previous form, L and V_f are given functions (the running cost and the terminal cost, respectively), t_f is the final (or terminal) time, and x_f = x(t_f) is the final (or terminal) state. Note again that u itself is a function of time. This is why we say that W(u) is a functional (a real-valued function on a space of functions).
The last basic ingredient of an optimal control problem is the target set. It is defined as the desired set S_f ⊂ [t_0, ∞) × R^n for the final time t_f and the final state x_f. Depending on its formulation, we obtain different classes of problems: for instance, S_f = {t_f} × R^n corresponds to a fixed-time, free-endpoint problem, while S_f = [t_0, ∞) × {x_1} corresponds to a free-time, fixed-endpoint problem.
Then, the optimal control problem can be defined as follows: find a control u taking values in U that minimizes the cost W(u) among all admissible controls whose state trajectory satisfies (t_f, x_f) ∈ S_f.   (D.1.3)
D.2 Dynamic programming and Hamilton-Jacobi-Bellman equation
Finite-horizon optimal control problem This section studies the solution of the optimal control problem (D.1.3). According to Bellman, in place of determining the optimal sequence of decisions from the fixed state of the system, we wish to determine the optimal decision to be made at any state of the system. Only if we know the latter do we understand the intrinsic structure of the solution. The approach realizing this idea is called dynamic programming. It leads to the necessary and sufficient conditions for optimality expressed by the Hamilton-Jacobi-Bellman (HJB) equation.
For convenience in the later explanation, we define the value function as:
V(t, x) := inf_{u_[t,t_f]} { ∫_t^{t_f} L(s, x(s), u(s)) ds + V_f(t_f, x(t_f)) },   (D.2.1)
where the notation u [t0,t f ] indicates that the control u is restricted to the interval [t 0 , t f ]. In the infinite horizon case, t f is replaced by +∞. Loosely speaking, we can think of V (t, x) as the optimal cost at (t, x). It is important to note that the existence of an optimal control is not actually assumed, which is why we work with an infimum rather than a minimum in (D.2.1).
The necessary and sufficient condition, which characterizes the solution of the optimal control problem (D.1.3), is derived from the principle of optimality. It was found by Bellman and is expressed as follows.
Lemma D.2.1. For every (t, x) ∈ [t_0, t_1) × R^n and every ∆t ∈ (0, t_1 − t], the value function V defined in (D.2.1) satisfies the relation
V(t, x) = inf_{u_[t,t+∆t]} { ∫_t^{t+∆t} L(s, x(s), u(s)) ds + V(t + ∆t, x(t + ∆t)) },   (D.2.2)
where x(s) on the right-hand side is the state trajectory corresponding to the control u_[t,t+∆t] and satisfying x(t) = x.
This statement implies that to search for an optimal control, we can search over a small time interval for a control that minimizes the cost over this interval plus the subsequent optimal value cost. Thus, the minimization problem on the interval [t, t f ] is split into two, one on [t, t + ∆t] and the other on [t + ∆t, t f ].
Relying on first-order Taylor expansions, we can easily derive the following expressions:
∫_t^{t+∆t} L(s, x(s), u(s)) ds = L(t, x, u(t)) ∆t + o(∆t),
V(t + ∆t, x(t + ∆t)) = V(t, x) + ∂_t V(t, x) ∆t + ⟨∂_x V(t, x), g(t, x, u(t))⟩ ∆t + o(∆t).   (D.2.3)
Substituting the expressions given by (D.2.3) into the right-hand side of (D.2.2), we obtain
V(t, x) = inf_u { L(t, x, u(t)) ∆t + V(t, x) + ∂_t V(t, x) ∆t + ⟨∂_x V(t, x), g(t, x, u(t))⟩ ∆t + o(∆t) }.   (D.2.4)
The two V(t, x) terms cancel out (because the one inside the infimum does not depend on u and can be pulled outside), which leaves us with
0 = inf_u { L(t, x, u(t)) ∆t + ∂_t V(t, x) ∆t + ⟨∂_x V(t, x), g(t, x, u(t))⟩ ∆t + o(∆t) }.
Let us now divide by ∆t and take it to be small. In the limit as ∆t → 0 the higher-order term o(∆t)/∆t disappears, and the infimum is taken over the instantaneous value of u at time t (in fact, already in (D.2.4) the control values u(s) for s > t affect the expression inside the infimum only through the o(∆t) term). Pulling ∂_t V(t, x) outside the infimum as it does not depend on u, we conclude that the equation
−∂_t V(t, x) = inf_{u∈U} { L(t, x, u) + ⟨∂_x V(t, x), g(t, x, u)⟩ }   (D.2.5)
must hold for all t ∈ [t_0, t_f) and all x ∈ R^n. This equation for the value function is called the Hamilton-Jacobi-Bellman (HJB) equation. In the previous equation, the time derivative of the value function does not depend on the terminal cost V_f(t_f, x_f). However, the value function must satisfy the boundary condition given by the terminal cost:
V(t_f, x) = V_f(t_f, x) for all x ∈ R^n.   (D.2.6)
From the previous procedure, we see that the HJB equation (D.2.5) and the boundary condition (D.2.6) are necessary conditions for the solution of the optimal control problem (D.1.3). In fact, they are also sufficient conditions; the proof can be found in Section 5.1.4 of [Liberzon, 2011]. Furthermore, in the case of the infinite-horizon optimal control problem, the necessary and sufficient conditions do not include the boundary condition (D.2.6).
Infinite-horizon optimal control problem Generally, the infinite-horizon optimal control problem is more complicated than the finite-horizon one. Thus, we consider here one of its important simple cases, in which the control system and the running cost are time invariant and the terminal cost is zero. These assumptions are described as ẋ(t) = g(x, u), x(t_0) = x_0, (D.2.7) with cost W(u) = ∫_{t_0}^{∞} L(x(t), u(t)) dt.
It is clear that in this scenario the cost does not depend on the initial time, hence the value function depends on x only: V = V(x). Thus, the HJB equation (D.2.5) is rewritten as
0 = inf_{u∈U} { L(x, u) + ⟨dV/dx(x), g(x, u)⟩ }.
In this case, there is no boundary condition on the value function V(x).
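As a small worked example (not taken from the thesis), consider the scalar infinite-horizon problem ẋ = u with running cost L(x, u) = x² + u². The HJB equation above reads 0 = inf_u { x² + u² + V′(x) u }. The infimum is attained at u* = −V′(x)/2, which leaves x² − V′(x)²/4 = 0. Taking the nonnegative, continuously differentiable solution gives V(x) = x² and the optimal feedback u*(x) = −x; substitution confirms x² + (−x)² + 2x·(−x) = 0.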
D.3 Linear Quadratic Regulator
Finite-horizon Linear Quadratic Regulator Based on the results for general optimal control in the previous section, we now consider the Linear Quadratic Regulator (LQR). In this case, the control system in (D.1.1) is linear,
ẋ(t) = A(t) x(t) + B(t) u(t), x(t_0) = x_0,   (D.3.1)
where t_f is a fixed time. Besides, the cost functional in (D.1.2) is given (in standard notation) by:
W(u) = ∫_{t_0}^{t_f} [ x(t)^T Q(t) x(t) + u(t)^T R(t) u(t) ] dt + x(t_f)^T M x(t_f),   (D.3.2)
with Q(t) and M symmetric positive semidefinite and R(t) symmetric positive definite.
From Section 6.1 in [Liberzon, 2011], we recall the following solution of the optimal control problem (D.1.3) with the control system (D.3.1) and cost functional (D.3.2). The optimal control law is given as
u*(t) = −R(t)^{-1} B(t)^T P(t) x(t),
where the matrix P(t) is the solution of the Riccati Differential Equation (RDE):
−Ṗ(t) = A(t)^T P(t) + P(t) A(t) − P(t) B(t) R(t)^{-1} B(t)^T P(t) + Q(t),   (D.3.4)
with the boundary condition P(t_f) = M.
Infinite-horizon Linear Quadratic Regulator The infinite-horizon Linear Quadratic Regulator problem is a special case of the infinite-horizon optimal control problem. The control system and the running cost are time invariant and there is no terminal cost. Thus, the control system and cost functional are expressed as:
ẋ(t) = A x(t) + B u(t), x(t_0) = x_0,   W(u) = ∫_{t_0}^{∞} [ x(t)^T Q x(t) + u(t)^T R u(t) ] dt.
Consequently, the RDE (D.3.4) is rewritten as the Algebraic Riccati Equation (ARE):
A^T P + P A − P B R^{-1} B^T P + Q = 0.
Note that there is no boundary condition on the matrix P.
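A minimal numerical sketch of the infinite-horizon LQR solution (illustrative only; the double-integrator data below are arbitrary and not from the thesis) using SciPy's continuous-time algebraic Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x = [position, velocity], u = acceleration
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])          # state weight
R = np.array([[0.5]])            # input weight

P = solve_continuous_are(A, B, Q, R)   # solves A^T P + P A - P B R^-1 B^T P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x
print(P)
print(K)
```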
Appendix E
Trapezoidal elevator speed profile and PMSM current profile using MTPA method
In this section, we present a reference profile generation method which is commonly used for elevator systems. In this method, the reference speed profile is trapezoidal, determined by the speed limits (ω_{l,min}, ω_{l,max}), the travel initial/final instants (t_0, t_f) and the travel angular distance (θ_0, θ_f). The accelerations at the beginning and at the end of a travel should also be limited for the comfort of the passengers; however, in our work, this effect is small enough to be neglected. Then, the reference current profiles are derived using the Maximum Torque Per Ampere (MTPA) method. It aims at determining the minimal current corresponding to a given torque at each time instant while respecting some constraints on the current and voltage. This objective is actually the minimization of the instantaneous dissipated power in the motor. The electro-mechanical elevator dynamics (4.3.1) can be rewritten as:
where i_d(t), i_q(t) are respectively the direct and quadrature currents of the motor stator; v_d(t), v_q(t) are respectively the direct and quadrature voltages of the motor stator; L_d(t), L_q(t) are respectively the direct and quadrature inductances of the motor stator; p is the number of pole pairs; Γ_res is the elevator gravity torque; J is the mechanical elevator inertia.
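The dq-frame dynamics referred to as (E.0.1) do not appear in the extracted text above. One common textbook form, consistent with the variables listed here but involving in addition the stator resistance R_s and the permanent-magnet flux linkage φ_f (both assumed, with a 3/2 factor that depends on the chosen dq transformation convention), is
L_d di_d/dt = v_d − R_s i_d + p ω_l L_q i_q,
L_q di_q/dt = v_q − R_s i_q − p ω_l (L_d i_d + φ_f),
J dω_l/dt = (3/2) p [ φ_f i_q + (L_d − L_q) i_d i_q ] − Γ_res.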
Trapezoidal reference speed profile In this work, we consider the case where the elevator goes down. The trapezoidal speed profile is composed of three straight lines. It is characterized by the acceleration a and two instants t_1, t_2 such that t_0 < t_1 < t_2 < t_f. Their values will be determined by (ω_{l,min}, ω_{l,max}), (t_0, t_f) and (θ_0, θ_f). The time interval [t_0; t_1] is the acceleration phase; [t_1; t_2] is the constant speed phase; [t_2; t_f] is the deceleration phase. Since the profile is symmetric, we have:
t_f − t_2 = t_1 − t_0.
Besides, from the definition of acceleration, we obtain:
a = (ω_{l,max} − ω_{l,min}) / (t_1 − t_0).
The travel angular distance is given by θ_f − θ_0, which leads to:
θ_f − θ_0 = ω_{l,max}(t_2 − t_1) + (ω_{l,min} + ω_{l,max})(t_1 − t_0).   (E.0.5)
If t_0 = 0, θ_0 = 0, ω_{l,min} = 0, we have t_2 = θ_f / ω_{l,max}, t_1 = t_f − t_2 and a = ω_{l,max} / t_1.
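The following Python sketch (illustrative, not from the thesis) builds the symmetric trapezoidal reference from ω_{l,max}, t_f and θ_f with the relations above, and checks numerically that the area under the speed profile equals the requested distance:

```python
import numpy as np

def trapezoidal_profile(omega_max, t_f, theta_f, n=500):
    """Symmetric trapezoidal speed reference with t0 = 0, theta0 = 0, omega_min = 0."""
    t1 = t_f - theta_f / omega_max          # end of the acceleration phase
    assert 0.0 < t1 <= t_f / 2.0, "distance not reachable with this omega_max / t_f"
    t2 = t_f - t1                           # start of the deceleration phase (symmetry)
    a = omega_max / t1                      # constant acceleration magnitude
    t = np.linspace(0.0, t_f, n)
    omega = np.piecewise(
        t,
        [t < t1, (t >= t1) & (t <= t2), t > t2],
        [lambda s: a * s, omega_max, lambda s: a * (t_f - s)],
    )
    return t, omega

t, omega = trapezoidal_profile(omega_max=10.0, t_f=4.0, theta_f=30.0)
print(np.trapz(omega, t))   # approximately theta_f = 30
```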
MTPA method Since the dynamics of the mechanical elevator are much slower than those of the electrical part, we only consider the dynamics of the two stator currents i_d(t), i_q(t) in the following reference current generation. Furthermore, the current dynamics are considered at steady state, i.e. di_d/dt = 0, di_q/dt = 0. Thus, the dynamics (E.0.1) are replaced by their steady-state (algebraic) counterpart.
The currents and voltages are limited by:
i_d(t)² + i_q(t)² ≤ I_{l,max}²,   (E.0.6)
v_d(t)² + v_q(t)² ≤ v_ref²,   (E.0.7)
where I_{l,max} is the maximal current amplitude and v_ref is the DC bus voltage. The instantaneous dissipated power is given by
P_dis(i_d, i_q) = R_s ( i_d(t)² + i_q(t)² ),   (E.0.8)
where R_s denotes the stator resistance. Consequently, the instantaneous current and voltage reference profiles are determined by the solution of the following optimization problem at each time instant:
min_{i_d, i_q} P_dis(i_d, i_q)
subject to the constraints (E.0.6), (E.0.7).
(E.0.9)
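A minimal numerical sketch of the per-instant MTPA problem is given below. The motor data are arbitrary illustrative values; the torque expression Γ = (3/2) p [ φ_f i_q + (L_d − L_q) i_d i_q ] is an assumed standard PMSM formula (φ_f, the permanent-magnet flux, is not among the parameters listed above); the torque request enters as an explicit equality constraint and the voltage limit (E.0.7) is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed motor data (illustrative values only)
Rs, p, Ld, Lq, phi_f = 0.5, 4, 8e-3, 10e-3, 0.15
I_max = 20.0

def torque(i):
    i_d, i_q = i
    return 1.5 * p * (phi_f * i_q + (Ld - Lq) * i_d * i_q)  # assumed PMSM torque formula

def dissipated_power(i):
    return Rs * (i[0] ** 2 + i[1] ** 2)

def mtpa(gamma_ref):
    cons = [
        {"type": "eq", "fun": lambda i: torque(i) - gamma_ref},        # torque tracking
        {"type": "ineq", "fun": lambda i: I_max ** 2 - i[0] ** 2 - i[1] ** 2},  # (E.0.6)
    ]
    res = minimize(dissipated_power, x0=np.array([0.0, 1.0]), constraints=cons)
    return res.x

print(mtpa(5.0))   # (i_d*, i_q*) minimizing the losses for a 5 N.m torque request
```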
Appendix F
Singular Perturbation
This section presents a class of dynamical models in which the derivatives of some of the states are multiplied by a small positive parameter ε [Khalil, 2002]. It is described as:
ẋ = g(t, x, z, ε), x(t_0) = ξ(ε),   (F.0.1)
ε ż = l(t, x, z, ε), z(t_0) = η(ε),   (F.0.2)
with t_0 ∈ [0, t_1). We assume that the functions g, l are continuously differentiable in their arguments (t, x, z, ε). Setting ε = 0 in (F.0.2) reduces its differential equation to the algebraic equation
0 = l(t, x, z, 0).   (F.0.3)
We say that the model (F.0.1)-(F.0.2) is in standard form if (F.0.3) has k > 0 isolated real roots
z = h_i(t, x), i = 1, . . . , k.   (F.0.4)
To obtain the i-th reduced system, we substitute (F.0.4) into (F.0.1) at ε = 0 to obtain ẋ = g(t, x, h(t, x), 0), x(t_0) = ξ_0,   (F.0.5)
where we write h for h_i. Singular perturbations cause a multi-time-scale behavior of the system dynamics. Denote the solution of (F.0.5) by x̄(t). Then, the quasi-steady-state of z is h(t, x̄(t)).
We shift the quasi-steady-state of z to the origin through the change of variables ẑ = z − h(t, x).
Besides, the new time variable τ = (t − t_0)/ε indicates the fast time scale. Setting ε = 0 freezes the parameters (t, x) in their slowly varying region. Thus, the model (F.0.1)-(F.0.2) becomes
ẋ = g(t, x, ẑ + h(t, x), ε), x(t_0) = ξ(ε),   (F.0.6)
dẑ/dτ = l(t, x, ẑ + h(t, x), 0),   (F.0.7)
which has an equilibrium at ẑ = 0. Equation (F.0.7) is called the boundary-layer system.
Theorem F.0.1. [Khalil, 2002] Consider the singular perturbation problem of (F.0.1) and (F.0.2). Assume that the following conditions are satisfied
for some domains S x ⊂ R n , S z ⊂ R m , in which S x is convex and S z contains the origin.
• The functions g, l, their first partial derivatives with respect to (x, z, ε), and the first partial derivative of l with respect to t are continuous; the function h(t, x) and the Jacobian [∂l(t, x, z, 0)/∂z] have continuous first partial derivatives with respect to their arguments; the initial data ξ(ε) and η(ε) are smooth functions of ε.
• The reduced system (F.0.5) has a unique solution x̄(t) ∈ S for t ∈ [t_0, t_1], where S is a compact subset of S_x.
• The origin is an exponentially stable equilibrium point of the boundary-layer system (F.0.7), uniformly in (t, x); let R z ⊂ S z be the region of attraction of (F.0.7) and Ω z be a compact subset of R z .
Then, there exists a positive constant ε* such that whenever η(0) − h(t_0, ξ(0)) ∈ Ω_z and 0 < ε < ε*, the singular perturbation system (F.0.1)-(F.0.2) has a unique solution x(t, ε), z(t, ε) on [t_0, t_1], and
x(t, ε) − x̄(t) = O(ε),
z(t, ε) − h(t, x̄(t)) − ẑ((t − t_0)/ε) = O(ε)
hold uniformly for t ∈ [t_0, t_1], where ẑ(τ) is the solution of the boundary-layer system (F.0.7). Moreover, given any t_b > t_0, there is ε** ≤ ε* such that
z(t, ε) − h(t, x̄(t)) = O(ε)
holds uniformly for t ∈ [t_b, t_1] whenever ε < ε**.
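The two-time-scale behaviour described by this theorem can be checked numerically. The sketch below (an illustrative example, not from the thesis) integrates the standard-form system ẋ = −x + z, ε ż = −2z + x, whose quasi-steady-state is h(x) = x/2, whose boundary layer is exponentially stable, and whose reduced model is ẋ = −x/2; the gap on the slow state is O(ε):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

def full(t, y):
    x, z = y
    return [-x + z, (-2.0 * z + x) / eps]   # slow state x, fast state z

def reduced(t, x):
    return -0.5 * x                          # z replaced by h(x) = x / 2

t_eval = np.linspace(0.0, 5.0, 200)
sol_full = solve_ivp(full, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
sol_red = solve_ivp(reduced, (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-8)

print(np.max(np.abs(sol_full.y[0] - sol_red.y[0])))   # O(eps) discrepancy
```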
Abstract
The goal of this thesis is to provide modelling and control solutions for the optimal energy management of a DC (direct current) microgrid under constraints and some uncertainties. The studied microgrid system includes electrical storage units (e.g., batteries, supercapacitors), renewable sources (e.g., solar panels) and loads (e.g., an electro-mechanical elevator system). These interconnected components are linked to a three phase electrical grid through a DC bus and its associated converters. The optimal energy management is usually formulated as an optimal control problem which takes into account the system dynamics, cost, constraints and reference profiles. An optimal energy management for the microgrid is challenging with respect to classical control theories. Needless to say, a DC microgrid is a complex system due to its heterogeneity, distributed nature (both spatial and in sampling time), nonlinearity of dynamics, multi-physic characteristics, presence of constraints and uncertainties. Not in the least, the power-preserving structure and the energy conservation of a microgrid are essential for ensuring a reliable operation.
These challenges are tackled through the combined use of PH (Port-Hamiltonian) formulation, differential flatness and economic MPC (Model Predictive Control). The PH formalism allows to explicitly describe the power-preserving structure and the energy conservation of the microgrid and to connect heterogeneous components under the same framework. The strongly non-linear system is then translated into a flat representation. Taking into account differential flatness properties, reference profiles are generated such that the dissipated energy is minimized and the various physical constraints are respected. Lastly, the modelling approach is extended to PH formalism on graphs which is further used in an economic MPC formulation for minimizing the purchasing/selling electricity cost within the DC microgrid. The proposed control strategies are validated through extensive simulation results over the elevator DC microgrid system using real profiles data.
Résumé
Cette thèse aborde les problèmes de la la modélisation et de la commande d'un micro-réseau courant continu (CC) en vue de la gestion énergétique optimale, sous contraintes et incertitudes. Le micro-réseau étudié contient des dispositifs de stockage électrique (batteries ou super-capacités), des sources renouvelables (panneaux photovoltaïques) et des charges (un système d'ascenseur motorisé par une machine synchrone à aimant permanent réversible). Ces composants, ainsi que le réseau triphasé, sont reliés à un bus commun en courant continu, par des convertisseurs dédiés. Le problème de gestion énergétique est formulé comme un problème de commande optimale qui prend en compte la dynamique du système, des contraintes sur les variables, des prédictions sur les prix, la consommation ou la production et des profils de référence.
Le micro-réseau considéré est un système complexe, de par l'hétérogénéité de ses composants, sa nature distribuée, la non-linéarité de certaines dynamiques, son caractère multi-physiques (électro-mécanique, électro-chimique, électro-magnétique), ainsi que la présence de contraintes et d'incertitudes. La représentation consistante des puissances échangées et des énergies stockées, dissipées ou fournies au sein de ce système est nécessaire pour assurer son opération optimale et fiable.
Le problème posé est abordé via l'usage combiné de la formulation hamiltonienne à port, de la platitude et de la commande prédictive économique basé sur le modèle. Le formalisme hamiltonien à port permet de décrire les conservations de la puissance et de l'énergie au sein du micro-réseau explicitement et de relier les composants hétérogènes dans un même cadre théorique. Les non linéarités sont gérées par l'introduction de la notion de platitude différentielle et la sélection de sorties plates associées au modèle hamiltonien à ports. Les profils de référence sont générés à l'aide d'une paramétrisation des sorties plates de telle sorte que l'énergie dissipée soit minimisée et les contraintes physiques satisfaites. Les systèmes hamiltoniens sur graphes sont ensuite introduits pour permettre la formulation et la résolution du problème de commande prédictive économique à l'échelle de l'ensemble du micro-réseau CC. Les stratégies de commande proposées sont validées par des résultats de simulation pour un système d'ascenseur multi-sources utilisant des données réelles, identifiées sur base de mesures effectuées sur une machine synchrone. |
Keywords: Product Development Process, free-form surface deformation, linear and non-linear equations, locally over-constrained subparts, redundant and conflicting constraints, structural decomposition, numerical analysis, decision support, design intent
And finally, I would like to thank the China Scholarship Council for supporting this work.
First of all, I would like to thank all the members of the INSM (Ingénierie numérique de systèmes mécaniques) team of the LSIS (Laboratoire de sciences de l'information et des systèmes) who explicitly or implicitly made the completion of this work a possibility.
I would like to thank all the members of the jury for the attention they have paid to my work and for their constructive remarks and questions. More precisely, I thank:
-Dominique Michelucci who made me the honor of being the president of this jury, -Franca giannini and Dominique Michelucci who have accepted the difficult task of being my reporters, -Gilles Chabert and Géraldine Morin who have examined carefully this manuscript.
I would like to thank my director Professor Jean-Philippe Pernot, who shared his experience in the field of free form surfaces deformation despite a very busy timetable. I have benefited a lot from the discussion with him about NURBS geometry, techniques of writing papers, work presentation etc. and I really appreciate his help during my time in LSIS. Also, I would like to thank my co-director Dr Mathias Kleiner. I have been gifted with his assistance in computer science and really appreciate him for revising my research papers.
I cannot forget all the people with whom I spend many hours and which are much more than colleagues. Merci donc à Widad, Ahmed, Romain, Yosbel, Katia, Ali, et tous les autres. Merci d'avoir pris le temps de parler lentement avec moi et d'être plus que des collègues.
Résumé en Français

0.1 Introduction
De nos jours, les concepteurs s'appuient sur le logiciel de CAO 3D pour modéliser des formes complexes de forme libre basée sur les courbes et surfaces. en design industriel, cette étape de modélisation géométrique est souvent encapsulés dans un plus grand processus de développement de produit (DDP) qui peuvent comporter la conception préliminaire, l'ingénierie inverse, la simulation ainsi que les étapes de fabrication dans laquelle plusieurs acteurs interagissent [START_REF] Falcidieno | Processing free form objects within a Product Development Process framework[END_REF]. En fait, la forme finale d'un produit est souvent le résultat d'un long et fastidieux processus d'optimisation qui vise à satisfaire les exigences associées aux différentes étapes et acteurs de la DDP. Exigences peut être vu comme contraintes. Ils sont généralement exprimés soit avec les équations, une fonction d'être réduits au minimum, et/ou en utilisant des procédures [START_REF] Gouaty | Variational geometric modeling with black box constraints and DAGs[END_REF]. Ce dernier se réfère à la notion de boîte noire, les contraintes n'est pas question dans le présent document, qui se concentre seulement sur les contraintes géométriques qui peuvent être exprimés par des objets linéaires ou équations non linéaires.
Pour satisfaire les exigences, les concepteurs peuvent agir sur les variables associées aux différentes étapes de la DDP. Plus précisément, dans ce document, les variables sont censés être les paramètres de la surfaces NURBS impliqués dans le processus d'optimisation de forme. Pour façonner un objet de forme libre défini par de telles surfaces, les concepteurs ont ensuite de spécifier les contraintes géométriques l'objet a à satisfaire. Par exemple, un patch doit passer par un ensemble de points 3D et de satisfaire à des contraintes de position, la distance entre deux points situés sur un patch est fixe, deux patchs doivent répondre à des contraintes de tangence ou conditions de continuité d'ordre supérieur, etc. Ces contraintes géométriques donnent lieu à un ensemble de linéaire et équations non linéaires reliant les variables dont les valeurs doivent être trouvés. En raison À l'appui local pro-
ii priété de NURBS [START_REF] Piegl | The NURBS Book. Monographs in Visual Communication[END_REF], Les équations n'impliquent pas toutes les variables et certaines décompositions peuvent être prévus. De plus, les concepteurs peuvent exprimer involontairement plusieurs fois les mêmes exigences à l'aide de différentes contraintes, menant ainsi à des équations redondantes. Mais les concepteurs peuvent également générer des équations contradictoires involontairement et peut-être affronter avec contraintes et configurations insatisfaisant.
Parfois, des configurations avec contraintes peut être résolu par l'insertion, l'utilisation des degrés de liberté (DDL) avec le noeud de Boehm algorithme d'insertion. En conséquence, de nombreux points de contrôle sont ajoutés dans les régions où peu de ddl sont nécessaires [START_REF] Pernot | Fully free-form deformation features for aesthetic shape design[END_REF]. Cette augmentation incontrôlée de la DDLs a une incidence sur la qualité générale de la finale les surfaces qui deviennent plus difficiles à manipuler que les premiers. En outre, certaines contraintes structurelles plus-ne peut pas disparaître à la suite de cette stratégie d'aide à la décision dédiés et approches doivent être développées pour identifier et gérer les configurations avec contraintes. Contrairement à 2D avancée sketchers disponible dans la plupart des logiciels de CAO, commerciale et qui peuvent identifier de manière interactive la sur-contraintes pendant le processus de dessin, il n'est pas encore tout à fait possible d'effectuer une pré-analyser l'état de la 3D à base de systèmes d'équation NURBS avant de les soumettre à un solveur. Ainsi il y a une nécessité de développer une nouvelle approche pour la détection et la résolution des contraintes redondantes et contradictoires dans les systèmes d'équation NURBS. Cela correspond à l'identification et le traitement de sur-contraint, bien limitées et sous-Pièces contraintes. Dans cet article, le traitement correspond à la suppression des contraintes avant de résoudre. Une fois les contraintes supprimées, le système d'équation devient souvent sous-contraint et le concepteur doit également ajouter une exigence par la moyenne d'une fonction d'être réduits au minimum afin de résoudre et Trouver les valeurs des inconnues. Cet aspect ne fait pas partie de l'approche proposée mais il sera discuté lors de l'introduction des résultats dans lequel un particulier est fonctionnelle réduite au minimum.
La suppression des contraintes spécifiées par l'utilisateur est une étape précédente comme le résultat ne satisfait pleinement ce que les designers ont spécifié. Ainsi, non seulement il est important d'élaborer une approche sur-mesure de supprimer les contraintes, mais il est également souhaitable de développer des mécanismes d'aide à la décision qui peuvent aider les concepteurs de cerner et d'éliminer les bonnes contraintes, c'est-à-dire ceux qui préserver autant que possible le but de la conception initiale.
Cette contribution est d'aborder ces deux questions difficiles en pro-
Le papier est organisé comme suit : La section 0.2 présente le contexte et examine les travaux connexes. La section 0.3 présente le cadre de notre algorithme, énonce les principes et les caractéristiques de ses différentes étapes et propose des critères d'évaluation de ses résultats. L'approche proposée est ensuite validé sur les deux exemples académiques et industriels qui sont décrites à la section 0.4. Enfin, la section 0.5 conclut ce document par une discussion sur les principales contributions ainsi que les travaux futurs.
Contexte et travaux connexes
Cette section présente comment les concepteurs peuvent préciser leurs besoins au sein d'un problème d'optimisation. Il analyse également les méthodes utilisées pour détecter plus de structure ou numériques-contraintes.
Modélisation de plusieurs exigences dans un problème d'optimisation
Au cours des dernières décennies, de nombreuses techniques de déformation ont été proposés et il n'est pas le but de cet article pour détailler toutes. La plupart du temps, quand on parle de travailler sur des techniques de déformation des courbes et surfaces NURBS, l'objectif est de trouver la position X de certains points de contrôle de façon à satisfaire aux contraintes spécifiées par l'utilisateur qui peut être traduit en un ensemble de linéaire et/ou d'équations non linéaires F (X) = 0. car le problème est souvent à l'échelle mondiale sous-contraint, c.-à-d. il y a moins d'équations que de variables inconnues, l'un des objectifs de la fonction G(X) doit également être réduit au minimum. En conséquence, la déformation des formes de forme libre est Résumé en Français iv souvent le résultat de la résolution d'un problème d'optimisation :
F (X) = 0 min G(X) (1)
Pour certaines applications particulières, le problème d'optimisation peut aussi considérer que les degrés, le noeud des séquences ou les poids des NURBS sont inconnus. Cependant, dans ce document, seule la position des points de contrôle sont considérés comme inconnus. En fonction de l'approche, l'objectif différent fonction peut être adopté, mais ils ressemblent souvent à une fonction d'énergie qui peut s'appuyer sur des modèles physiques ou mécaniques. Les contraintes boîte à outils peut également contenir des contraintes plus ou moins sophistiqué avec plus ou moins intuitive des mécanismes permettant de les définir.
Penser à la DDP ainsi qu'aux besoins de génération des formes permettant de satisfaire aux diverses exigences, l'on peut remarquer que les concepteurs ont accès à trois principaux paramètres à préciser leurs besoins et objectifs associés au sein d'un problème d'optimisation. Ils peuvent effectivement agir sur les inconnues X de décider quels points de contrôle sont fixés et quels sont ceux qui peuvent se déplacer. De cette façon, ils indiquent les parties de la forme initiale qui ne devrait pas être affecté par la déformation. Bien sûr, les concepteurs peuvent faire usage de la boîte à outils pour spécifier les contraintes les équations F (X) = 0 pour être convaincu. Enfin, les concepteurs peuvent également spécifier certaines de leurs exigences par la fonction G(X) d'être réduit au minimum. Par exemple, ils peuvent décider de conserver ou non la forme d'origine tout en réduisant au minimum une fonction énergétique caractérisant la forme de déformation.
Cependant, la plupart des déformations de forme libre-forme techniques ne considérer que le problème résultant de l'ensemble des équations F (X) = 0 est sous-contraint (Elber, 2001;[START_REF] Bartoň | Topologically guaranteed univariate solutions of underconstrained polynomial systems via no-loop and single-component tests[END_REF] et peu d'attention a été accordée à l'analyse et le traitement les sur-contraints. Cet article propose une approche pour détecter les équations redondantes et contradictoires, et d'aider le concepteur à résoudre ces problèmes par la simple élimination d'un certain nombre de contraintes. Cependant, les sections 0.3.4 et 0.4 discuter de la possibilité de fixer plus ou moins de points de contrôle et modifier ainsi le vecteur inconnu X, ainsi que la possibilité de modifier le comportement en déformation globale grâce à la personnalisation de la fonction objectif G(X) d'être réduit au minimum. v Contexte et travaux connexes
Contraintes géométriques
Géométriques sur-contraintes sont classés structurelles et numérique surcontraintes [START_REF] Sridhar | Algorithms for the structural diagnosis and decomposition of sparse, underconstrained design systems[END_REF]. Structurelle sur-contraintes peut être détecté à partir d'une analyse de la DDLs, au niveau de la géométrie ou les équations. Numérique sur-contraintes sont habituellement déterminées à partir de l'analyse de la solvabilité du système d'équations. Puisque notre approche est basée sur des équations, les deux aspects sont à définir.
Contraintes structurelles
Jermann et al. donner une définition générale d'une structure avec contraintes, bien limitées et sous-systèmes d'équations limitée à un niveau macro plutôt et compte tenu de la dimension de l'espace [START_REF] Jermann | Decomposition of geometric constraint systems: a survey[END_REF]. Cette définition a été ici adapté au système d'équations où le système devrait être fixé à l'égard d'un système de coordonnées global.
Définition 1. Le degré de liberté DDL(v) d'une entité géométrique v est le nombre de paramètres indépendants qui doivent être définis pour déterminer sa position et l'orientation. par exemple, dans l'espace 2D, il est égal à 2 pour les points et lignes. Pour un système de contraintes géométriques G avec un ensemble V de géométries, le degré de liberté de toutes les géométries est DDLs = v∈V DDL(v). Définition 4. Un système de contraintes géométriques G est structurelle sur-contraint s'il existe un sous-système satisfaisant DDCs > DDLs. Structurelle sur-contraint sont les contraintes que transformer un structurelle surcontraint système dans un structurelle bien-contraint system lorsqu'ils sont supprimés.
Enfin, les définitions de comptage basées sur le DDL-comparer le nombre d'équations pour le nombre de variables d'un système. Cependant, il n'est pas couvrir les cas tels que les licenciements géométriques induites par les théorèmes géométriques. Afin de couvrir ces situations, les définitions Résumé en Français vi algébriques sont introduits.
Sur-contraintes numériques
Les définitions structurelles antérieures ne peuvent pas distinguer les contraintes redondantes et contradictoires. Cependant, du point de vue algébrique, c'est que cet examen pourrait être traitée correctement par Grobner ou méthodes Wu-Ritt [START_REF] Chou | Ritt-Wu's decomposition algorithm and geometry theorem proving[END_REF]. Ces méthodes sont couramment utilisées dans l'algèbre abstraite et requiert de solides bases mathématiques pour comprendre.
Définition 5. Soit G = (E, V, P ) un système de contraintes géométriques, où E est un ensemble d'équations, V est un ensemble de variables et P est un ensemble de paramètres. E r est une collection non-vide de sous-ensembles de E, appelée textbf equations de base (nous l'appelons base en bref, satisfaisant:
• non base contient correctement un autre base;
• si E r1 et E r2 sont base respectivement et si e est une équation de E r1 , alors il y a une équation f de E r2 tel que {(E r1 -e) ∪ f } est aussi un base.
Définition 6. Soit G = (E, V, P ) un système de contraintes géométriques. Soit E r un base. Pour une équation e, en l'ajoutant à E r formant un nouveau groupe: {E r ∪ e}. Si {E r ∪ e} est solvable, alors e est une équation redondante.
Définition 7. Soit G = (E, V, P ) un système de contraintes géométriques. Soit E r un base. Pour une équation e, en l'ajoutant à E r formant un nouveau groupe: {E r ∪ e}. Si {E r ∪ e} n'est pas solvable, alors e est une équation conflictuelle.
Définition 8. Soit G = (E, V, P ) un système de contraintes géométriques composé de deux sous-systèmes:
G b = (E b , V, P ) et G o = (E o , V, P ) avec {E = E b ∪ E o , E b ∩ E o = ∅}.
Si E b est un base, alors E o est un ensemble de sur-contraintes numériques.
Il convient de noter que l'ensemble des contraintes de base and numérique sur-contraintes d'un système donné n'est pas unique. Ainsi, décision-support des mécanismes et des critères doivent être définis pour aider les concepteurs à identifier le bon redondant et à supprimer les contraintes contradictoires. vii Contexte et travaux connexes
Modélisation du système de contraintes géométriques
Tel que discuté précédemment, un système de contraintes géométriques peuvent être décrites, que ce soit au niveau des équations ou au niveau de la géométrie. D'une part, d'un système d'équations, il existe des méthodes algébriques en mesure de s'attaquer directement aux problèmes de cohérence [START_REF] Cox | Groebner Bases[END_REF] ou d'analyser la structure indirectement à partir d'un graphe bipartite où deux classes de noeuds représentent des variables et équations indépendamment (Bunus and Fritzson, 2002a). D'autre part, pour la modélisation au niveau de la géométrie, deux types de graphique sont principalement utilisés : soit les graphes bipartis avec deux classes de noeuds représentant les entités géométriques et contraintes séparément (Hoffmann, Lomonosov, and Sitharam, 1998), ou les graphiques de contrainte avec les noeuds représentant les entités géométriques et les bords des contraintes représentant [START_REF] Gao | Solving geometric constraint systems. II. A symbolic approach and decision of Rc-constructibility[END_REF][START_REF] Hoffman | Decomposition plans for geometric constraint systems, Part I: Performance measures for CAD[END_REF].
Toutefois, compte tenu des systèmes NURBS de contrainte n'est pas simple car il existe plusieurs types de variables contribuant à la forme de déformation. d'une part, système d'équations d'activer un moyen viable de paramètres de modélisation où comme noeuds et poids peuvent être définies comme des variables. D'autre part, la modélisation graphique au niveau de la géométrie est actuellement limitée à des variables comme les coordonnées des points de contrôle et les poids qui leur sont associés. Représentant des variables telles que degrés, noeuds, les valeurs de u et v paramètres à l'aide de graphes de contraintes au niveau de la géométrie n'est pas démontré de façon convaincante dans la littérature. Dans l'oeuvre de Lesage [START_REF] Lesage | Un modèle dynamique de spécifications d'ingénierie basé sur une approche de géométrie variationnelle[END_REF], les points de contrôle entre les vecteurs sont utilisés pour représenter des objets Nurbs ainsi que les contraintes géométriques telles que l'incidence ou de tangence. Néanmoins, sa méthode est limitée aux cas où les points de contrôle sont inconnues seulement et n'est pas suffisamment général pour la comparaison de la modélisation à l'équation.
Selon les définitions précédentes, les méthodes de détection peuvent être classées en deux catégories [START_REF] Sridhar | Algorithms for the structural diagnosis and decomposition of sparse, underconstrained design systems[END_REF]: structurelle sur-contraintes et numérique sur-contraintes détection.
Structurelle sur-contraintes détection
De nombreuses structurelle sur-contraintes peuvent être identifiés en comptant DDLs au cours de la processus de décomposition du système. Parmi les méthodes de décomposition, ceux qui sont utiles pour trouver avec contraintes structurellement sous-parties ont été identifiés et classés en trois Résumé en Français viii catégories selon le type de sous-systèmes.
Modèles spécifiques
Cette catégorie est basée sur le fait que, lors de l'examen de leurs plans d'ingénierie, la plupart des systèmes peuvent être décomposés tout en reconnaissant une configuration spécifique, construit par règle et compas. division récursive est d'abord proposé par Owen pour gérer les systèmes de contraintes 2D où seule la distance et l'angle les contraintes sont impliqués [START_REF] Owen | Algebraic solution for geometry from dimensional constraints[END_REF]. Il a introduit plusieurs règles pour la détection de la redondance, y compris des règles pour détecter les sous-parties trop rigide et règles pour vérifier si les licenciements d'angle (Owen, 1996). Fudos et Hoffman a adopté la méthode de réduction graphique qui fonctionne bien avec les systèmes bien-contraint et sur-contraint en 2 dimensions [START_REF] Fudos | A graph-constructive approach to solving systems of geometric constraints[END_REF]. Ces méthodes sont en temps polynomial mais pas assez général en raison du peu de répertoire de modèles, qui ne peut pas couvrir tous les types de configurations géométriques.
Rigidité structurelle
Cette classe regroupe les méthodes qui se décomposer un système en sous-systèmes rigides structurelles. Les méthodes varient en fonction de la structure adoptée-rigidité définition ainsi que sur les algorithmes de recherche correspondant. En modifiant le débit maximal du réseau supplémentaires théorie, Hoffmann et al. développé la Dense algorithme pour identifier les 1-bien-contraint sous-graphe [START_REF] Hoffmann | Making constraint solvers more usable: overconstraint problem[END_REF]. Leur définition des sous-système rigide découle de Laman's théorème sur la caractérisation de la rigidité des cadres de bar (Combinatorial Rigidity). Toutefois, la définition n'est pas traiter correctement les contraintes telles que les incidences et parallélismes, qui sont largement utilisés dans les systèmes de CAO. Jermann et al. modifié leur définition en introduisant la notion de degree of rigidity (DoR) pour remplacer la dimension D [START_REF] Jermann | Algorithms for identifying rigid subsystems in geometric constraint systems[END_REF]. La différence entre les deux réside sur le fait que la valeur de DoR varie en sous-systèmes alors que d reste constante quelle que soit la sous-systèmes sont (par exemple en 2D, D =3 et en 3D, D =6). Basée sur la même théorie du flux réseau, son algorithme Over-rigid traite correctement les contraintes et/ou l'incidence parallèle spécifié mais est encore limitée à des conditions où l'incidence les dégénérescences générique en raison de théorèmes géométriques comme le co-linéarités, co-planarities sont interdits [START_REF] Jermann | Algorithms for identifying rigid subsystems in geometric constraint systems[END_REF]. ix
Correspondance maximum
Ces méthodes reconnaître les modèles avec contraintes structurelles en comparant directement les DDLs et DDCs d'un système (ou sous-système) sans prendre en compte la dimension constante dépendante D. Dulmage-Mendelsohn (D-M) de l'algorithme de décomposition permet de décomposer un système d'équations en sur-contraintes, bien-contraintes, et sous-contraintes sous-systèmes [START_REF] Dulmage | Coverings of bipartite graphs[END_REF]. Il a été utilisé pour le débogage dans l'équation de la modélisation des systèmes tels que Modelica (Bunus and Fritzson, 2002b). Par ailleurs, il calcule un graphique acyclique (DAG) qui offre une résolution de l'ordre parmi les composantes fortement connectées (SCC) du système. Serrano a été intéressé par l'utilisation de l'algorithme de la théorie des graphes avec contraintes pour empêcher les systèmes où toutes les contraintes et les entités géométriques sont d'un DDL [START_REF] Serrano | Constraint management in conceptual design[END_REF]. Latham et al. a étendu le travail de Serrano en proposant la comparaison pondérée b maximum pour identifier les contraintes à l'arbitraire avec DDLs [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF].
Une partie avec contraintes contient soit redondant ou contraintes contradictoires. Mais les autres parties peuvent également contenir des surcontraintes de numérique [START_REF] Podgorelec | Dealing with redundancy and inconsistency in constructive geometric constraint solving[END_REF]. C'est parce que l'analyse ne tient pas compte de la structure de l'information numérique d'un système. Par conséquent, pour mieux identifier les contraintes comme les plus subtils de la redondance géométrique, méthodes numériques doivent être adoptées.
Détection numérique de sur-contraintes
Toute contrainte géométrique peut être transformé en un ensemble d'équ ations algébriques (Hoffmann, Lomonosov, and Sitharam, 1998). Par conséqu ent, géométriques sur-contraints sont l'équivalent d'un ensemble de contradiction or d'équations redondantes. Ici, les méthodes de détection numérique ont été classées en deux catégories selon le type de contraintes.
Détection de sur-contraintes linéaires
L'élimination de Gauss, factorisation LU avec pivot partiel et Factorisation QR avec pivotant colonne ont été adoptées avec succès pour trouver des équations contradictoires/redondants ainsi que de groupe dans les systèmes d'équations linéaires [START_REF] Strang | Linear Algebra and Its Applications[END_REF]. Light and Gossard a appliqué l'élimination de Gauss pour calculer le rang ainsi que pour identifier davantage les équations invalides [START_REF] Light | Variational geometry: a new method for modifying part geometry for finite element analysis[END_REF]. Serrano a étendu
son travail pour vérifier l'existence de sur-contraintes dans les composants fortement liés d'un système d'équations [START_REF] Serrano | Automatic dimensioning in design for manufacturing[END_REF]. Ces méthodes ont permis des détections stables et rapides mais se limitent aux cas linéaires.
Détection non-linéaire des sur-contraintes
Méthodes symboliques sont théoriquement fiables mais la complexité du temps est exponentielle. Kondo a utilisé la méthode de base de Grobner pour tester la dépendance entre les contraintes de dimension 2D [START_REF] Kondo | Algebraic method for manipulation of dimensional relationships in geometric models[END_REF]. Gao et Chou ont présenté l'algorithme de décomposition de Wu-Ritt pour déterminer si un système est trop contraint [START_REF] Gao | Solving geometric constraint systems. II. A symbolic approach and decision of Rc-constructibility[END_REF]. Cependant, les deux méthodes ne trouvent pas directement les groupes d'extension de contraintes excessives.
Méthodes d'optimisation ont été utilisées pour résoudre les problèmes de satisfaction des contraintes, qui fonctionne bien pour les systèmes sous contraintes [START_REF] Ge | Geometric constraint satisfaction using optimization methods[END_REF].
Méthodes d'analyse matricielle jacobienne permettent une détection plus rapide en étudiant la structure jacobienne des équations. Cependant, ils ne sont pas en mesure de distinguer les contraintes redondantes et conflictuelles. La principale différence entre ces méthodes est la configuration où la matrice jacobienne devrait se développer. Si le système est résoluble, Haug a proposé de perturber la racine commune et de recalculer le rang une fois que la matrice jacobienne est déficiente [START_REF] Haug | Computer aided kinematics and dynamics of mechanical systems[END_REF]. Cependant, si le système n'est pas résoluble, Foufou et al. suggèrent une méthode probabiliste numérique (NPM), qui analyse la matrice jacobienne à des configurations aléatoires [START_REF] Foufou | Interrogating witnesses for geometric constraint solving[END_REF]. Cependant, il existe un risque que la matrice jacobienne soit classée en défaut aux points choisis, mais elle est à plus grande échelle partout ailleurs. Par conséquent, NPM est pratique dans le calcul mais peut conduire à des contraintes excessives détectées incorrectement. Au lieu de sélectionner au hasard les configurations, Michelucci et al. Ont suggéré d'étudier la structure jacobienne à la configuration des témoins où les contraintes d'incidence sont satisfaites (Michelucci et al., 2006). La witness configuration et la cible configuration partage la même structure jacobienne. En conséquence, tous les sur-contraints sont identifiées. Plus récemment, Moinet et al ont développé des outils pour identifier des contraintes conflictuelles en analysant le témoin d'un système linéarisé de les équations [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. Leur approche a été appliquée au cas de test de double banane bien connu sur lequel notre approche sera également testée dans la section 0.4. À partir de la discussion ci-dessus, il est clair que différentes méthodes xi Une approche générique couplant les décompositions structurelles et les analyses numériques sont capables de gérer certains systèmes de contraintes géométriques. Cependant, aucune méthode ne peut couvrir tous les cas parfaitement selon nos critères. Ainsi, dans cet article, une nouvelle approche qui couple structurelle comme ainsi que des méthodes numériques sont proposées.
Une approche générique couplant les décompositions structurelles et les analyses numériques
Cette section décrit notre approche pour détecter et traiter les contraintes géométriques redondantes et conflictuelles. L'idée principale est de décomposer le système d'équations en petits blocs qui peuvent être analysés itérativement en utilisant des méthodes numériques dédiées. Le cadre global et l'algorithme sont introduits avant de préciser les différentes étapes impliquées.
Figure 1: Cadre global composé de trois boucles imbriquées définissant la structure principale de l'algorithme de détection.
Cadre de détection global
Le cadre général a été modélisé en figure 1. Il est basé sur trois boucles imbriquées: la décomposition structurelle en composants connectés (CC); la décomposition structurelle d'un CC dans ses sous-parties (G1, G2, G3) et son DAG correspondant de composants fortement connectés (SCC); l'analyse numérique itérative de ces SCC. Le pseudo-code pour les procédures principales est fourni dans la section 0.3.2.
Boucle parmi les composants connectés
Le système des équations (SE) est initialement représenté par une structure de graphique G, où les noeuds correspondent aux variables et aux arêtes aux équations. La structure est d'abord décomposée en n components connectés {CC 1 , • • • , CC n } à l'aide de Breadth First Search (BFS) [START_REF] Leiserson | A work-efficient parallel breadth-first search algorithm (or how to cope with the nondeterminism of reducers)[END_REF]. Une telle décomposition est rendue possible grâce à la propriété de support local de NURBS ou tout simplement en utilisant des contraintes qui dissocient quoi se produit selon les instructions x, y et z du cadre de référence (p. ex. contraintes de position ou de coïncidence). Par conséquent, les contraintes géométriques peuvent être détectées séparément pour chaque CC i .
Boucle parmi les sous-parties obtenues par décomposition D-M
La décomposition de DM est utilisée pour décomposer structurellement CC i en un maximum de trois sous-parties: G i1 (sous-partie sur-contrainte), G i2 (sous-partie bien contrainte) et G i3 (sous-sous-contrainte). Chaque souspartie (si elle existe) sera analysée itérativement à l'aide de la troisième boucle imbriquée expliquée ci-dessous.
Cependant, un seul passage de la troisième boucle sur chaque G ij n'est pas suffisant. En effet, toute passe peut conduire à la suppression des contraintes, qui modifie la structure de CC i et nécessite donc d'appliquer la décomposition D-M à nouveau après la passe pour obtenir des sous-parties mises à jour. L'exposant d est utilisé pour noter que CC d i (resp. G d ij ) se réfère à CC i (resp. G ij ) après son d th DM. Bien que le nombre de passes requises soit inconnu à l'avance, il est garanti que le processus converge vers un état où seule une sous-partie G i3 est restée. En d'autres termes, les contraintes seront supprimées ou déplacées vers la troisième sous-partie le long du processus. ). En d'autres termes, les contraintes et les variables sont supprimées jusqu'à ce que nous obtenions un système sous-contraint avec plusieurs solutions, ce qui signifie qu'il n'y a plus de propagation possible. Dans cette dernière étape, comme indiqué dans la partie inférieure gauche de la figure, le système restant est analysé pour les conflits numériques et procède avec le prochain composant connecté.
Boucle parmi les composants fortement connectés
Pseudo-code
Cette section fournit le pseudo-code pour les deux procédures principales de l'approche, entouré de rectangles pointillés sur la figure 1.
Algorithm 1 Structural decomposition
1: SE ← System of Equations
2: G ← Graph(SE)
3: [CC_1, · · · , CC_n] ← BFS(G)
4: for i = 1 to n do
5:   [G^1_i1, G^1_i2, G^1_i3] ← DM(CC_i)
6:   CC^1_i ← CC_i
7:   for j = 1 to 3 do
8:     d ← 1
9:     continue ← True
10:    while continue & G^d_ij ≠ ∅ do
11:      continue, CC^{d+1}_i ← findRC(CC^d_i, G^d_ij)
12:      d ← d + 1
13:      [G^d_i1, G^d_i2, G^d_i3] ← DM(CC^d_i)
14:    end while
15:  end for
16: end for
17: return [CC^d_1, · · · , CC^d_n]

Algorithm 2 findRC: Numerical analysis of G^d_ij subpart of CC^d_i
Require: CC^d_i and G^d_ij
Ensure: Boolean continue and updated CC^d_i
1: [SCC^{d1}_{ij1}, · · · , SCC^{d1}_{ijN}] ← linkedSCC(G^d_ij)
2: m ← 1
3: G^{d1}_{ij} ← G^d_{ij}
4: while [SCC^{dm}_{ij1}, · · · , SCC^{dm}_{ijN}] ≠ ∅ do
5:   l ← 0
6:   for k = 1 to N do
7:     if onlyVariable(SCC^{dm}_{ijk}) then
8:       l ← l + 1
9:     else
10:      [R, C] ← numFindRC(SCC^{dm}_{ijk})
11:      if [R, C] == ∅ then
12:        solution ← solve(SCC^{dm}_{ijk})
13:        propagate(solution, CC^d_i)
14:        R ← checkRedundant(CC^d_i)
15:        C ← checkConflicting(CC^d_i)
16:      end if
17:      CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
18:    end if
19:    if l == N then   ▷ all red blocks contain only variables
20:      [R, C] ← numFindRC(CC^d_i)
21:      CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
22:      return False, CC^d_i
23:    end if
24:  end for
25:  G^{d(m+1)}_{ij} ← update(CC^d_i)
26:  m ← m + 1
27:  [SCC^{dm}_{ij1}, · · · , SCC^{dm}_{ijN}] ← linkedSCC(G^{dm}_{ij})
28: end while
29: return True, CC^d_i

Selon le type de contraintes, c'est-à-dire linéaire ou non linéaire, les méthodes diffèrent et sont présentées dans les sous-sections suivantes. La notation suivante A[i : j, l : k] est utilisée pour définir la matrice obtenue en coupant les lignes i à j et les colonnes l à k de A.
Système linéaire
Dans l'approche proposée, la factorisation QR avec le pivotement des colonnes est utilisée pour détecter les contraintes linéaires. La factorisation QR avec une permutation de colonne facultative P , déclenchée par la présence d'un troisième argument de sortie, est utile pour détecter la singularité ou la carence de rang la figure 2 montre le processus de détection global. Les lignes droites de couleur horizontale correspondent aux équations linéaires du système Ax = b à résoudre, où A a une dimension m × n. Ici, on suppose que le rang du système est r, ce qui signifie qu'il y a des équations indépendantes de r avec m > r. Le rang est calculé à l'aide de SVD, ce qui Résumé en Français xvi est relativement stable par rapport à d'autres méthodes.
En ce qui concerne la factorisation QR, les colonnes sont échangées au début de k ième étape pour s'assurer que:
‖A^(k)_k(k : m)‖_2 = max_{j ≥ k} ‖A^(k)_j(k : m)‖_2,   (2)
où A^(k)_j(k : m) = A[k : m, j].
À chaque étape de la factorisation, la colonne de la matrice non factorisée restante avec la plus grande norme est utilisée comme base pour cette étape et est déplacé vers la position principale [START_REF] Golub | Matrix Computations[END_REF]. Cela garantit que les éléments diagonaux de R se produisent en ordre décroissant et que toute dépendance linéaire parmi les colonnes est certainement révélée en examinant ces éléments. La matrice de permutation P réarrange les colonnes de A t afin que les colonnes apparaissent dans l'ordre décroissant de leur norme.
Les premières r colonnes de A t P sont les contraintes de base de A t et les premières r colonnes de Q form une base orthogonale (figure 2.b). Puisque les colonnes mr restantes dépendent linéairement des premières r columns (Dongarra and Supercomputing, 1990), elles sont les contraintes excessives. Le rang r correspond également au nombre de valeurs non nulles d'éléments diagonaux de R.
Pour trouver des dépendances linéaires entre les colonnes, la déduction suivante est nécessaire. D'abord, la matrice Q(:, 1 : r) est inversée en utilisant l'équation suivante:
A t (:, 1 : r) = Q(:, 1 : r).R(1 : r, 1 : r) et est ensuite utilisé dans l'équation suivante:
A t (:, r + 1 : n) = Q(:, 1 : r).R(1 : r, r + 1 : n)
fournissant ainsi la relation suivante entre les deux matrices tranchées A t (: , r + 1 : n) and A t (:, 1 : r):
A t (:, r + 1 : n) = A t (:, 1 : r).R(1 : r, 1 : r) -1 R(1 : r, r + 1 : n)
Enfin, pour identifier les équations redondantes et contradictoires, le nouveau b vector après factorisation est redéfini comme suit:
b new = b(r + 1 : n) -b(1 : r).R(1 : r, 1 : r) -1 R(1 : r, r + 1 : n)
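A minimal numerical sketch of this linear detection step, using SciPy's QR factorization with column pivoting applied to A^T (illustrative code; the tolerance is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import qr

def split_linear_equations(A, b, tol=1e-10):
    """Split the rows of A x = b into basis, redundant and conflicting equations."""
    At = A.T                                   # equations become columns of A^T
    r = np.linalg.matrix_rank(At, tol=tol)     # rank computed via SVD
    Q, R, perm = qr(At, pivoting=True)         # A.T[:, perm] = Q @ R
    basis, extra = perm[:r], perm[r:]
    # Each extra equation is a linear combination of the basis equations:
    coeffs = np.linalg.solve(R[:r, :r], R[:r, r:])
    b_new = b[extra] - coeffs.T @ b[basis]
    redundant = extra[np.abs(b_new) <= tol]    # consistent right-hand side
    conflicting = extra[np.abs(b_new) > tol]   # inconsistent right-hand side
    return basis, redundant, conflicting

A = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
print(split_linear_equations(A, b))  # one basis row, one redundant row, one conflicting row
```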
Système non-linéaire
En considérant un système d'équations non linéaires, on utilise un processus d'identification en deux phases. Tout d'abord, la Witness Configuration Method (Michelucci et al., 2006) est utilisé pour trouver toutes les contraintes excessives (phase I), et Grobner Basis ou Incremental Solving est ensuite appliqué pour distinguer davantage les contraintes redondantes et contradictoires (phase II).
Phase I. Prenant avantage de la méthode proposée par Moinet et al. [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF], une configuration de témoin générique est générée à partir de la forme initiale de l'objet à déformer (étape 1). Effectivement, dans notre cas, les variables x sont les positions des points de contrôle qui ont un emplacement initial x (0) avant déformation. Ensuite, la factorisation QR avec le pivotement de colonne est utilisée pour analyser cette configuration de témoin (étape 2)., la séquence des équations est réorganisée. Le premier r (le rang des équations Jacobian matrix est indépendant tandis que les autres sont les sur-contraintes. Dans la figure 3, les lignes courbes courbes représentent des équations non linéaires.
Résumé en Français xviii
Résoudre
Si solvable, f j est redondant, sinon c'est contradictoire [START_REF] Lamure | Qualitative study of geometric constraints[END_REF]. Sinon, la résolution incrémentale est choisie. Pour expliquer les deux méthodes, supposons que le système de contraintes suivant est disponible après la phase I (figure 4):
$$\begin{cases} f_1(x_1, x_2, \dots, x_n) = 0\\ \;\;\vdots\\ f_r(x_1, x_2, \dots, x_n) = 0\\ f_{r+1}(x_1, x_2, \dots, x_n) = 0\\ \;\;\vdots\\ f_m(x_1, x_2, \dots, x_n) = 0 \end{cases} \qquad (3)$$

where equations 1 to r are the basic constraints and equations (r + 1) to m are the over-constraints.
The incremental solving method inserts each over-constraint f_j = 0, j ∈ {r+1, ..., m}, into the set of basic constraints, thus forming a new group of equations {f_1 = 0, ..., f_r = 0, f_j = 0}. If the new group is solvable, then the equation f_j = 0 is redundant, otherwise it is conflicting. Of course, the basic constraints are always solvable.
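A minimal sketch of this incremental test, assuming a least-squares solver as the "is it solvable?" oracle (the tolerance, the solver choice and the toy constraints are assumptions, not the thesis implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def classify_incrementally(basic, extras, x0, tol=1e-8):
    """basic, extras: lists of callables R^n -> R; x0: initial values of the variables."""
    status = {}
    for j, fj in enumerate(extras):
        funcs = basic + [fj]                                    # {f_1=0, ..., f_r=0, f_j=0}
        res = least_squares(lambda x: np.array([f(x) for f in funcs]), x0)
        solvable = np.linalg.norm(res.fun) < tol                # residual ~ 0 means solvable
        status[j] = 'redundant' if solvable else 'conflicting'
    return status

basic = [lambda x: x[0]**2 + x[1]**2 - 1.0]                     # basic constraint: unit circle
extras = [lambda x: 2*x[0]**2 + 2*x[1]**2 - 2.0,                # redundant with the circle
          lambda x: x[0]**2 + x[1]**2 - 4.0]                    # conflicting with the circle
print(classify_incrementally(basic, extras, x0=np.array([0.7, 0.7])))
```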
When Grobner bases [START_REF] Cox | Groebner Bases[END_REF] are used, the method first computes the reduced Grobner basis rgb_r of the ideal f_1, ..., f_r. Since the set of basic equations is solvable, rgb_r ≠ {1}. Then, looping over all the over-constraints f_j = 0, j ∈ {r+1, ..., m}, the reduced Grobner basis rgb_{r+j} of the ideal f_1, ..., f_r, f_j is computed. If rgb_{r+j} ≡ rgb_r, then f_j = 0 is a redundant equation. If rgb_r ⊂ rgb_{r+j}, then f_j = 0 is a redundant equation. Finally, if rgb_{r+j} = {1}, then f_j = 0 is a conflicting equation.
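A hedged sketch of this Grobner-basis test using SymPy (SymPy is an assumption here, not a tool prescribed by the thesis, and the toy polynomials are illustrative):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
basic = [x**2 + y**2 - 1]                               # solvable basic set
rgb_r = groebner(basic, x, y, order='lex')

for fj in (2*x**2 + 2*y**2 - 2,                         # candidate over-constraints
           x**2 + y**2 - 4):
    rgb_rj = groebner(basic + [fj], x, y, order='lex')
    if list(rgb_rj.exprs) == [1]:                       # ideal is the whole ring: no common root
        print(fj, '-> conflicting')
    elif list(rgb_rj.exprs) == list(rgb_r.exprs):       # same reduced basis: f_j adds nothing
        print(fj, '-> redundant')
    else:
        print(fj, '-> further restricts the solution set')
```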
Validation and evaluation of the solutions
Section 0.2.1 introduced the multiple ways of modeling the requirements within an optimization problem by specifying an unknown vector X, the constraints to satisfy F(X) = 0 and the function G(X) to minimize.
The approach described in this section identifies redundant and conflicting equations. Correctness is ensured since it consists of a fixed-point algorithm that stops only when the system is solvable. Moreover, every removed equation is guaranteed to be either conflicting or redundant with respect to the remaining set. It has thus been shown that the set of equations F(X) = 0 can be decomposed into two subsets: F_b(X) = 0 containing the basic equations, and F_o(X) = 0 the over-constraints.
To stay close to the requirements the designer has in mind, the proposed approach then moves from the level of equations to the level of constraints. The geometric constraints associated with the equations F_o(X) = 0 are analyzed, and all the equations related to these constraints are gathered in a new, enlarged set of equations F_o(X) = 0. Finally, the equations related to the constraints that are neither conflicting nor redundant form the other set F_b(X) = 0. This transformation makes it possible to work at the level of constraints rather than at the level of equations, which is much more convenient for the end user, who is interested in working at the level of geometric requirements.
Since this decomposition is not unique, it gives rise to various potential final solutions (interactive decomposition is beyond the scope of this work). Several criteria are now introduced to evaluate these solutions with respect to the initial design intent. To characterize the quality of the obtained solutions, the set of user-specified parameters P is introduced. This set gathers all the parameters the designer may introduce to define the constraints that his/her shape must satisfy. For example, the distance d imposed between two points of a NURBS surface is a parameter characterizing part of the design intent. The idea is then to evaluate to what extent the solutions deviate from the initial design intent, notably in terms of the parameters P.
To do so, the optimization problem containing the basic constraints is solved:

$$\begin{cases} F_b(X) = 0 \\ \min G(X) \end{cases} \qquad (4)$$
and the solution X* is then used to evaluate the unsatisfied over-constraints F_o(X*) as well as the actual values P* of the user-specified parameters P. For example, if the user-specified distance d between the two patches cannot be reached, the actual distance d* is measured on the obtained solution. From this solution, three criteria can be evaluated:
• Deviation in terms of parameters/constraints: this criterion measures how far the actual parameter values P* are from the user-specified parameters P. It helps understand whether the design intent is preserved in terms of parameters and, consequently, in terms of constraints.

$$df = \frac{\sum_i |P_i^* - P_i|}{\sum_i |P_i|} \qquad (5)$$
• Deviation in terms of the function to minimize: this criterion directly evaluates how far the objective function G has been minimized. Here, the function is simply evaluated at the solution X* of the optimization problem. To preserve the design intent, this value should be as small as possible, and it can thus be used to compare solutions with each other.

$$dg = G(X^*) \qquad (6)$$
• Degree of quasi-dependency: the rank deficiency of the Jacobian matrix at the witness clearly reveals the dependencies between constraints. However, for NURBS-based equation systems, constraints may be independent yet close to being dependent. In this case, the Jacobian matrix of F_b(X) at the solution point X* is ill-conditioned and the corresponding solution may be of poor quality. The third criterion therefore evaluates the condition number (cond) of the Jacobian matrix as a measure of quasi-dependency [START_REF] Kincaid | Numerical analysis: mathematics of scientific computing[END_REF]:

$$cond = \operatorname{cond}\!\big(J_{F_b}(X^*)^{T}\, J_{F_b}(X^*)\big) \qquad (7)$$
Finally, even though these criteria characterize the quality of the solution X* with respect to the design intent, they have not been combined into a single indicator. The results of the next section are therefore evaluated by analyzing and comparing the three criteria for each solution.
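For illustration, the three criteria are cheap to evaluate once X* is available. The sketch below is only a possible implementation (function names and numbers are illustrative; the df example reuses the 44.47 vs 45 distance of the Double-Banana case discussed later) and assumes the Jacobian of F_b at X* is available as a NumPy array:

```python
import numpy as np

def deviation_df(P_user, P_real):
    P_user, P_real = np.asarray(P_user, float), np.asarray(P_real, float)
    return np.sum(np.abs(P_real - P_user)) / np.sum(np.abs(P_user))     # criterion (5)

def quasi_dependency(J_Fb_at_Xstar):
    return np.linalg.cond(J_Fb_at_Xstar.T @ J_Fb_at_Xstar)              # criterion (7)

# dg, criterion (6), is simply the objective G evaluated at X*.
print(deviation_df([45.0], [44.47]))                          # 0.53/45 ~ 0.0118
print(quasi_dependency(np.array([[1.0, 0.0], [1.0, 1e-6]])))  # huge value: quasi-dependent rows
```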
Results and discussion
This section presents two configurations on which the proposed over-constraint detection and resolution technique has been tested. The first one is the academic double-banana case widely studied in the literature; it has been used to compare our solution with those generated by others. The second example is more industrial and concerns the shaping of a glass composed of several NURBS patches.
Double-Banana test case
The variables X, the constraints F(X) = 0 and the parameters P of the Double-Banana test case are exactly the same as those tested by Moinet et al. [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. The only difference is that they use a coordinate-free formulation while ours is Cartesian-based. Here, the objective is to find the position of the 8 nodes of a 3D structure so that the lengths of the 18 edges satisfy the user-specified dimensions. Figure 5 shows the Double-Banana in its initial configuration.
The Double-Banana configuration contains only one connected component, as revealed by BFS. The structural analysis using the DM decomposition shows that it is under-constrained, and our algorithm then follows the lower part of figure 1. The analysis is used in our numFindRC function and one over-constraint is detected; more specifically, equation e9 is detected here. Using our incremental solving approach, this equation is further characterized as conflicting. Equation e9 is therefore removed and the system is solved using the initial positions of the nodes as initial values of the variables. Using the results, equation e9 is then re-evaluated and the associated parameter is compared with the user-specified value. In the present case, e9 is not satisfied since it equals 44.47 instead of the user-specified value of 45. The deviation from the design intent is thus df = 0.53/45.

Figure 5: Initial geometry of the double banana as described in [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF].
Our algorithm thus yields a solution much closer to the initial design intent than the algorithm of Moinet et al. Concerning the glass test case, as revealed by BFS, the system can be decomposed into 2 + 24 × N_rows connected components CC_i, where N_rows is the number of rows of control points that are free to move. Among them, only two components CC_1 and CC_2 contain both variables and equations, while the others contain only variables (Table 2). The analysis of these two components leads to the identification of 2 conflicting equations, which correspond either to position or to distance constraints. Since the result of the detection process is not unique, 9 configurations are obtained; they are gathered in Table 3. It should be recalled that even though the detection process identifies conflicting equations, our algorithm removes the constraints associated with these equations. For example, configuration 1 considers that the two distance constraints (one between patches P1 and P4 and the other between P2 and P3) must be removed (0 in the table) while the 4 position constraints are kept (1 in the table).
Config.   DIS(P1,P4)   DIS(P2,P3)   POS(P1)   POS(P2)   POS(P3)   POS(P4)
1         0            0            1         1         1         1
2         1            0            0         1         1         1
3         1            0            1         1         1         0
4         0            1            1         0         1         1
5         0            1            1         1         0         1
6         1            1            0         0         1         1
7         1            1            1         1         0         0
8         1            1            1         0         1         0
9         1            1            0         1         0         1
Table 3: Status of the distance and position constraints (0: to be removed, 1: to be kept) used to solve the 9 over-constrained configurations.
All the configurations are then solved by acting both on the number of rows of control points left free to move (N_rows = 4 or 5) and on the objective function to minimize (either G_1(X) or G_2(X)). The results are gathered in Tables 4 and 5. Each configuration is evaluated using the three previously introduced criteria dg, df and cond. Some solutions are shown in figure 7.
It can first be noticed that the deviation df on the constraints varies with the configuration. For example, with N_rows = 4 and when minimizing G_1(X), configuration 7 generates a solution closer to the design intent than configuration 6 (0.10684 < 0.12607 in Table 4). For configuration 3, the deviation from the design intent in terms of constraints is clearly larger when minimizing the area of the final surface than when minimizing the shape variation (0.2288 > 0.10179 in Table 4). This is clearly visible in figures 7.c1 and 7.c2.
Considering the minimization of the shape variation, configuration 3 is less interesting than configuration 1 in the sense that it minimizes the shape variation less (15459.52 > 13801.04 in Table 4).
Finally, for a given configuration, it can be noticed that when the number of free rows increases, i.e. when there is more freedom, the objective function decreases and the solution is therefore closer to the design intent. This is visible when comparing the values of Tables 4 and 5. The selection of the variables X is thus also important when setting up the optimization problem.
Introduction
Nowadays, designers rely on 3D CAD software to model sophisticated shapes based on free-form curves and surfaces. In industrial design, this geometric modeling step is often encapsulated in a larger Product Development Process (PDP) which may incorporate preliminary design, reverse engineering, simulation as well as manufacturing steps wherein several actors interact. Actually, the final shape of a product often results from a long optimization process which tries to satisfy the requirements associated to the different steps of the PDP. Requirements can be seen as constraints. They are generally expressed either with equations, a function to be minimized, and/or using procedures.
To satisfy the requirements, designers can act on variables associated with the different steps of the PDP. More specifically, the variables are here supposed to be the parameters of the NURBS surfaces involved in the shape optimization process. To shape a free-form object defined by such surfaces, designers then have to specify the geometric constraints the object has to satisfy. For example, a patch has to go through a set of 3D points and satisfy position constraints, the distance between two points located on a patch is fixed, two patches have to meet tangency constraints or higher-order continuity conditions, etc. These geometric constraints give rise to a set of linear and non-linear equations linking the variables whose values have to be found. Due to the local support property of NURBS, the equations do not involve all the variables and some decompositions can be foreseen. Additionally, designers may involuntarily express the same requirements several times using different constraints, thus leading to redundant equations. They may also involuntarily generate conflicting equations and thus have to face over-constrained and unsatisfiable configurations.
Sometimes, over-constrained configurations can be solved by inserting extra degrees of freedom (DoFs) with Boehm's knot insertion algorithm. As a consequence, many control points are added in areas where not so many DoFs are necessary. This uncontrolled increase of the DoFs impacts the overall quality of the final surfaces, which become more difficult to manipulate than the initial ones. Furthermore, some structural over-constraints cannot disappear following this strategy, and dedicated decision-support approaches have to be developed to identify and manage over-constrained configurations.
Unlike the advanced 2D sketchers available in most commercial CAD software, which can interactively identify the over-constraints during the sketching process, it is not yet completely possible to pre-analyze the status of 3D NURBS-based equation systems before submitting them to a solver. Thus, there is a need for developing a new approach for the detection and resolution of redundant and conflicting constraints in NURBS-based equation systems. This corresponds to the identification and treatment of over-constrained, well-constrained and under-constrained parts. In this thesis, the treatment corresponds to the removal of constraints before solving. Once the constraints are removed, the equation system often becomes under-constrained and the designer also has to add a requirement by means of a function to be minimized so as to solve and find the values of the unknowns. This aspect is not part of the proposed approach but it will be discussed when introducing the results, in which a particular functional is minimized.
Removing user-specified constraints is a sensitive step, as the result then no longer fully satisfies what the designers have specified. Thus, not only is it important to develop an approach able to remove over-constraints, but it is also desirable to develop decision-support mechanisms which can help the designers identify and remove the right constraints, i.e. the ones which preserve as much as possible the initial design intent.
In this thesis, the aim is to address these two difficult issues by proposing an original decision-support approach to manage over-constrained geometric configurations when deforming free-form surfaces. Our approach handles linear as well as non-linear equations and exploits the local support property of NURBS. Based on a series of structural decompositions coupled with numerical analyses, the method detects and treats redundant as well as conflicting constraints. Since the result of this detection process is not unique, several criteria are introduced to drive the designer in identifying which constraints should be removed to minimize the impact on his/her original design intent. Thus, even if the kernel of the algorithm works on equations and variables, the decision is taken by considering the geometric constraints specified by the user at a high level.
The manuscript is composed of an introduction, 4 chapters and a final conclusion as follows:
• Chapter 1 shows the whole picture of the Product Development Process (PDP) and points out the position of our research. We introduce the structure of the PDP, its relationship with free-form shape modeling, the modeling of user-specified requirements, and the transformation from requirements to constraints and from constraints to equations.
• Chapter 2 provides an overview of techniques for geometric overconstraints detection. In the context of free-form surfaces modeling, we evaluate different techniques under given criteria and select the ones that might be helpful to solve our problems by testing them on different examples.
• Chapter 3 illustrates our approach: an algorithm combining system decomposition, numerical methods, symbolic methods, and optimization techniques to detect redundant/conflicting constraints as well as the corresponding spanning groups. In addition, our approach can provide different sets of results; criteria are thus proposed to compare them and to assist the user in choosing the over-constraints he/she wants to remove.
• Chapter 4 illustrates the detection and resolution processes on both academic and industrial examples. Moreover, it shows the efficiency of the decomposition method used in our approach and the impact of tolerances on the detection result.
• Finally, we discuss limits and perspectives of our research in the Conclusions and perspectives section.
Chapter 1
Positioning of the research
This chapter discusses briefly the Product Development Process (PDP) and
shows that its output can be seen as the result of an optimization problem where multiple requirements should be satisfied (section 1.1). Requirements can be realized by manipulating geometric models in CAD modelers using different approaches (section 1.2). Modeling multiple requirements on freeform objects can be seen as defining a shape deformation problem (section 1.3). Requirements can be transformed into constraints (section 1.4) and constraints can be further transformed into equations (section 1.5). Since the users' design intent are sometimes uncertain and requirements may contain redundancies/conflicts, there is a need to debug them so that users can better understand what they really want (section 1.6).
Product Development Process
Product design is a cyclic and iterative process, a kind of systematic problem solving, which manages the creation of the product itself under different conditions. The development process includes the idea generation, concept phase, product styling and design and detail engineering, all of which are conducted in the context of adapting and satisfying requirements of the different stages. Usually, designers use CAD tools to deal with the associated requirements during the industrial Product Development Process (PDP). Clearly, the PDP is not unique and varies deeply from company to company, depending on the complexity of the product to be designed, the equipment used, the team of specialists involved, the stages of a product to be designed and so on. Regardless of this variability, Falcidieno et al. [START_REF] Falcidieno | Processing free form objects within a Product Development Process framework[END_REF] proposed a generic structure of a PDP as a reference scheme that can be used by specific companies, on different scenarios and classes of products (figure 1.1). Single-headed arrows represent information and/or digital models that are communicated from one activity to another as soon as the first activity has been carried out. Double-headed arrows are not prescribing systematic communications between two or more activities. They can be reduced to single-way communications for some specific scenarios [START_REF] Falcidieno | Processing free form objects within a Product Development Process framework[END_REF]. As shown in figure 1.1, the PDP can be divided into four main stages: Preliminary design, Embodiment design, Detailed design and Process planning. In the Preliminary design stage, the desired product characteristics are described and the product requirements are defined. The description of product characteristics provides the basis for the definition of the requirement specifications and target specifications of a new model. The requirement specifications include a complete description of the new product characteristics. For example, in the context of automotive development projects, the description of product characteristics is supported by far-reaching market studies, research into constantly changing customer demands and the evaluation of future legislation-based conditions in target markets. Requirement specifications take into account the detailed information about the requirements of product design and the desired behavior of a product in terms of its operation. The Embodiment design stage includes the functional and physical concept of the new product. On one hand, new techniques are used and evaluated with respect to their functional configurations and interactions. For example, in the case of car design, new technologies in mechanical parts, such as new safety equipment or environmentally friendly propulsion technologies, are implemented and verified in terms of their general functionalities within the full-vehicle system. On the other hand, the physical concept covers the definition of the product composition. New functions are developed in mechanisms. In automotive development processes, the physical concept defines the vehicle body structure layout in consideration of crash and stiffness requirements, and it addresses basic requirements of the new car concept, such as driving performances, fuel consumption, vehicle mass, and estimated values of driving dynamics.
The Detailed design stage directly depends on the product concept phase. Based on the knowledge from the concept phase, the geometric information of all components is modeled in detail and optimized while considering the assembly of the product and the interactions of components. Also, in this stage, the materials of the components are defined and the boundaries for the production planning are derived.
Finally, the last stage consists of production-related planning. This stage is mainly concerned with determining the sequence of individual manufacturing operations needed to produce a given part or product. But it also refers to the planning of use of blanks, spare parts, packaging material, user instructions, etc. The resulting operation sequence is documented on a form typically referred to as a route sheet containing a listing of the production operations and associated machine tools for a work part or assembly. This phase goes hand in hand with the design process because manufacturing boundaries often influence the design of components. Therefore, the production, assembly and inspection-oriented development and the manufacturing-related optimization interact with geometry creation and calculation processes.
With the goal of saving development time and costs, the PDP involves virtual product-model-based processes to generate new products using dedicated tools. Depending on the categories of development applied, there are different types of tools: computer-aided design (CAD), computer-aided styling (CAS), computer-aided engineering (CAE), computer-aided manufacturing (CAM), computer-aided quality assurance (CAQ) and computer-aided testing (CAT). These tools enable the creation of product geometry and the implementation of product characteristics. All this information is stored within the Digital Mock-Up, which is managed by PDM (Product Data Management) and PLM (Product Lifecycle Management) systems. Modern CAD systems allow for integrating multi-representation and multi-resolution geometric models to shape complex components and model products possibly incorporating free-form surfaces [START_REF] Pernot | Incorporating free-form features in aesthetic and engineering product design: State-of-the-art report[END_REF]. The CAD modeling approaches will be discussed in the next section.
In any case, the PDP allows the creation of products which can be seen as the result of an optimization process where various requirements (e.g. functional, aesthetic, economical, feasibility) have to be satisfied so as to obtain desirable solutions. However, not all the requirements can be fulfilled and an approximation has to be found. Moreover, requirements are even conflicting in some cases, and methods for detecting and treating conflicts have to be proposed. They are presented and compared in chapter 2. This PhD thesis addresses such a difficult problem: the detection and treatment of so-called over-constraints when manipulating geometric models which are part of the DMU of complex systems.
CAD modeling approaches

1.2.1 Manipulating geometric models
As suggested by [START_REF] Maculet | Conception, modélisation géométrique et contraintes en CAO: une synthèse[END_REF], the manipulation of a geometric model can be performed at three levels:
• Level 0: manipulation of variables, or parameters (e.g.: coordinates of a control point, coordinates of a point in parametric space...).
• Level 1: manipulation of elementary geometric entities (points, line segments, curves, surfaces); it corresponds to the parametric and variational modelers solving elementary geometric constraints (e.g.: distance between two points, angle between two tangent lines, etc.).
• Level 2: manipulation of more complex geometric entities, composed of simple elements of level 1 (e.g. a groove in an area of an object); it corresponds to the feature-based approaches, which solve more complex constraints (e.g. the length of the groove) and are generally associated with a semantic meaning or with geometric properties.
Industrial CAD software relies on an incremental B-Rep (Boundary Representation) modeling paradigm where volume modeling is performed iteratively through high-level operators (Hoffmann, Lomonosov, and Sitharam, 1998). These operators allow for acting directly on the geometric entities of level 1 to directly shape the CAD models by manipulating structural and detail features. However, even if CAD software allows working on the geometric entities of level 2 based on operators such as pad, pocket or shaft, to get rid of the direct use and manipulation of canonical surfaces and NURBS [START_REF] Piegl | The NURBS Book. Monographs in Visual Communication[END_REF], a lot of intermediate operations are required to get the desirable shape of an object. The work is procedural and designers have to break down the object body into basic shapes so as to link them to the different operators of the software. This is even truer in the freeform domain, where CAD software generates complex free-form shapes incrementally and interactively through a sequence of simple shape modeling operations.
The chronology of these operations is at the basis of a history tree describing the construction process of an object. Consequently, without a real construction tree, free-form shape modifications are generally tedious and frequently result in update failures. Clearly, an approach closer to the designers' way of thinking is missing and there is still a gap between the shapes designers have in mind and the tools and operators provided to model them. Various approaches have been introduced to bridge this gap and are briefly discussed in the next section: parametric modeling, feature-based modeling and variational modeling approaches.
Modeling approaches
The parametric modeling approach allows for modifying an object by changing instantiations of its constitutive geometric model sequentially. Usually, a parametric system can be divided into subsystems that can be solved one after another in a given order [START_REF] Farin | Handbook of computer aided geometric design[END_REF]. During the parametric modeling process, it is hard for designers to make sure that the added constraints stay consistent with the previous ones and that the inserted variables are sufficient to correctly describe the variations of a product. Therefore, final systems are generally under-constrained or sometimes over-constrained. Designers have to carefully check the constraints with respect to the product variations they need so that the final system stays well-constrained.
However, if a system can be divided into subsystems which are simultaneously solvable, then the system is variational. The designation of variational configurations was first proposed by Lin, Gossard and Light [START_REF] Lin | Variational geometry in computer-aided design[END_REF]. When defining a 3D shape, the constraints are often geometric constraints, which relate to different geometric primitives or features [START_REF] Bettig | Geometric constraint solving in parametric computer-aided design[END_REF]. For example, they can be distances or angles between (special points or axes of) geometric primitives or features, or incidence or tangency relations between parts of two geometric primitives or features. Compared to parametric modeling, variational modeling gives a better answer to the designer's needs. If the final configuration is well-constrained, the solver is able to find a correct set of solutions. For configurations that are under- or over-constrained, decomposition methods have been introduced to supplement the solving methods. Research in variational modeling quickly concentrated on the important problem of constraint modeling: how to represent and organize constraints with DAGs, and finally how to solve them [START_REF] Hoffmann | Erep: An editable, highlevel representation for geometric design and analysis[END_REF].
The feature-based modeling approach considers geometric entities made up of simple elements which are called features. For example, the feature 'hole' is composed of a set of cylinders and planes attached to an initial plane (figure 1.2). In the freeform domain, a four-level classification has been proposed by Fontana [START_REF] Fontana | Free form features for aesthetic design[END_REF] and extended by Pernot [START_REF] Pernot | Fully free form deformation features for aesthetic and engineering designs[END_REF]. Designers directly manipulate shape primitives that can be parametrized and pre-defined, rather than acting on lower-level entities, and use features to build their CAD models. As shown in figure 1.3, constraints can be indirectly specified on the control points of the control polygon or directly on the surface, potentially made of multiple trimmed patches connected together with continuity conditions. To satisfy their requirements, designers often insert numerous DOFs through the use of Boehm's knot insertion algorithm [START_REF] Boehm | The insertion algorithm[END_REF], resulting in configurations with more variables than equations.
Figure 1.2: Geometric model composed of free form features [START_REF] Pernot | Fully free-form deformation features for aesthetic shape design[END_REF].
The approach proposed in this PhD can serve the three above-mentioned modeling strategies. As discussed in section 1.3, the proposed approach is intended to analyze a set of constraints, and the associated equations, independently of the adopted modeling strategy.
Modeling multiple requirements in an optimization problem
Product requirements refer to the specifications that lead to criteria to evaluate design variants and select the one that performs best when using the product. As shown in Figure 1.1, requirements can be specified during various stages of a PDP, from preliminary design to process planning. Usually, the requirements are of two categories: qualitative and quantitative ones. Quantitative requirements like a power or a velocity can be subjected to tolerances, which gives some flexibility to find a compromise. However, qualitative requirements like aesthetics are not related to tolerances, and compromises are more subjective, which in some sense can reduce the uncertainties. To find a compromise, a set of requirements can be adjusted, for example by adding/removing shape details, modifying dimensions, or applying geometric constraints on the digital shape models. Often, the product shape results from an optimization problem where the various requirements are specified by the actors of the PDP. Actually, the final shape of a product often results from a long and tedious optimization process which tries to satisfy the requirements associated with the different steps and actors of the PDP. Those requirements can be of different types and their computation may require external tools or libraries. For example, the shape of a turbine blade is the result of a complex optimization process which aims at finding the best compromise between, notably, its aerodynamic and mechanical performances. In general, requirements can be seen as constraints. They are generally expressed either with equations, a function to be minimized, and/or using procedures [START_REF] Gouaty | Variational geometric modeling with black box constraints and DAGs[END_REF]. The latter refers to the notion of black box constraints, which are not addressed in this manuscript (section 1.5.2). Here, we focus only on geometric constraints that can be expressed by linear or non-linear equations.

Figure 1.3: deformation results of an initial glass; solution c) preserves the initial shape [START_REF] Pernot | Fully free-form deformation features for aesthetic shape design[END_REF].
Thinking of the PDP as well as of the need for generating shapes which satisfy multiple requirements, one can notice that designers can specify their requirements and associated design intent within a shape deformation problem by acting on three main parameters. On one hand, when speaking of deformation techniques working on NURBS curves and surfaces, the goal is to find the position X of some control points so as to satisfy user-specified constraints which can be translated into a set of linear and/or non-linear equations F(X) = 0. For example, a patch has to go through a set of 3D points and satisfy position constraints (some of these constraints have been used to drive the glass deformation of figure 1.3), the distance between two points located on a patch is fixed, two patches have to meet tangency constraints or higher-order continuity conditions, etc. Since the problem is often globally under-constrained, i.e. there are fewer equations than unknown variables, an objective function G(X) also has to be minimized. As a consequence, the deformation of free-form shapes often results from the resolution of an optimization problem:
$$\begin{cases} F(X) = 0 \\ \min G(X) \end{cases} \qquad (1.1)$$
For some particular applications, the optimization problem can also consider that the degrees, the knot sequences or the weights of the NURBS are unknown. Depending on the approach, different objective functions G(X) can be adopted, but they often look like an energy function which may rely on mechanical or physical models. Figure 1.3 b and c show two results when using two different minimizations with the same set of constraints. The constraints toolbox can also contain more or less sophisticated constraints with more or less intuitive mechanisms to specify them. Note that, in this thesis, only the positions of the control points are considered as unknowns. On the other hand, designers can effectively act on the unknowns X to decide which control points are fixed and which ones can move. In the example of figure 1.3, the bottom row of control points is fixed. In this way, they specify the parts of the initial shape which should not be affected by the deformation. Of course, designers can make use of the constraints toolbox to specify the equations F(X) = 0 to be satisfied. Finally, designers can also specify some of their requirements through the function G(X) to be minimized. For example, they can decide to preserve or not the original shape while minimizing an energy function characterizing the shape deformation.
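A hedged sketch of such a deformation problem of the form (1.1), using SciPy's SLSQP as a stand-in solver; the variables, constraints and energy below are illustrative assumptions, not the thesis formulation:

```python
import numpy as np
from scipy.optimize import minimize

X0 = np.zeros(6)                          # initial coordinates of 3 movable 2D control points

def F(X):                                 # strict constraints: point 0 must reach (1, 2)
    return np.array([X[0] - 1.0, X[1] - 2.0])

def G(X):                                 # soft requirement: stay close to the initial shape
    return np.sum((X - X0) ** 2)

res = minimize(G, X0, method='SLSQP', constraints=[{'type': 'eq', 'fun': F}])
print(res.x)   # only the constrained coordinates move; the others keep their initial values
```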
From requirements to constraints
As discussed above, multiple requirements in a PDP can be modeled within an optimization problem. Here, we summarize the constraints satisfying different design requirements as well as a structure of these constraints which must be incorporated to develop a fully constraint-based modeler for curves and surfaces.
Taxonomy of constraints
Section 1.3 has discussed the design intent where both the system of equations F (X) = 0 and the objective function G(X) should be taken into consideration. Therefore, the corresponding ways of specifying constraints can be [START_REF] Cheutet | Constraint modeling for curves and surfaces in CAGD: A survey[END_REF]):
• Strict constraints, which are classically just called constraints in the literature and can be transformed into a system of equations F(X) = 0. They must be strictly respected during the shape creation and manipulation processes. For example, the current sketchers in CAD modelers only use this type of constraints.
• Soft constraints, which corresponds to objective function G(X) to be minimized. These constraints are used in the declarative modeling approach to allow the description of the object properties, but also to deform free-form surfaces in some other approaches. They can express the final aspect of a component shape or at least, the expectation to obtain a solution close to it.
The above two categories classify the specification of constraints from a mathematical point of view. However, thinking of the PDP, the notion of constraint during a design phase can be very broad. Since it is commonly used at all of its successive steps, different users give different meanings to design constraints. According to [START_REF] Cheutet | Constraint modeling for curves and surfaces in CAGD: A survey[END_REF], constraints can be classified into four semantic levels in the context of shape generation and modification, depending on the type of the constrained entity:
• Level 1: constraints attached to a geometric element of a configuration: such as local constraints used to manipulate its shape like position constraints.
• Level 2: constraints between two or more geometric elements of a configuration: for instance, to preserve the integrity of the configuration during the shape modification, such as maintaining G0/G1/G2 continuity between trimmed patches.
• Level 3: constraints attached to the whole configuration like a volume constraint, for example.
• Level 4: constraints related to the product itself rather than to the geometry. For example, the product should resist during its usage. This specification makes use of mechanical properties such as the acceptable maximum stress. In this case, constraints link the geometry with parameters of the material as well as boundary conditions of the product.
The above levels describe how to express the constraints attached to a product. In the next subsections, the constraints classically used for curve and surface modeling are described in more details, according to the categories previously defined in this section. Most of those constraints can be handled by the approach developed in this PhD thesis.
Strict constraints
This section describes the strict constraints commonly used in shape modeling, which belong to the first two categories listed previously (constraints attached to one geometric element, or between two or more geometric elements of a component). Because they are directly related to geometric parameters, they are referred to as geometric constraints in the literature.
Constraints to control the shape of a local entity
Local geometric constraints are used to locally control a shape. The control is achieved by enforcing the curve/surface to pass through a new user-specified location using the local support property of the underlying geometric model (figure 1.4, b). The constraining entity usually is a geometric point while the constrained geometry can be curves, patches and meshes (Pernot, 2004). The local support property can be explained as follows. A B-Spline patch is defined by the following equation:

$$\forall (u, v) \in N_u \times N_v, \quad P(u, v) = \sum_{i=0}^{m} \sum_{j=0}^{n} N_{ip}(u)\, N_{jq}(v)\, s_{ij}, \quad \text{with } N_u = [u_0, u_{m+p+1}] \text{ and } N_v = [v_0, v_{n+q+1}], \qquad (1.2)$$
where p and q are the degrees in u and v respectively, m + 1 and n + 1 are the numbers of control points in the u and v directions respectively, and N_u and N_v are the knot sequences in u and v.
The displacement δ_hk of a control point s_hk induces a surface displacement governed by the following equation:

$$\forall (u, v) \in N_u \times N_v, \quad P'(u, v) = P(u, v) + N_{hp}(u)\, N_{kq}(v)\, \delta_{hk} \qquad (1.3)$$
Thus, the extent of the modified area directly depends on the influence area of the bi-variate basis function N_hp(u).N_kq(v), whereas the amplitude of the modification is directly related both to the shape of this bi-variate function and to the displacement vector δ_hk. The example of figure 1.4 a displays the bivariate basis function associated with a control point which is displaced to produce a local modification of a patch (figure 1.4, b). The influence area of this displacement is therefore delimited by the rectangular domain $I_{hk} = [u_h, u_{h+p+1}] \times [v_k, v_{k+q+1}]$. This is an interesting property for decomposing a problem into subproblems.
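The local support property can be checked numerically. The sketch below (using SciPy's BSpline as an example toolkit, an assumption rather than the thesis implementation) evaluates one univariate basis function and shows that it vanishes outside $[u_h, u_{h+p+1}]$; the bivariate case is simply the product of two such functions.

```python
import numpy as np
from scipy.interpolate import BSpline

p = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4], dtype=float)    # clamped cubic knot vector
h = 2                                                               # index of the basis function
N_h = BSpline.basis_element(knots[h:h + p + 2], extrapolate=False)  # built from knots u_h..u_{h+p+1}

u = np.linspace(0, 4, 9)
# non-zero only on [u_h, u_{h+p+1}] = [0, 3]; NaN outside the support is mapped to 0 here
print(np.nan_to_num(N_h(u)))
```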
Curve constraints on a surface
In car aesthetic design, stylists manipulate a product by forcing the surface to match a given curve at some stage of the product specification. Thus, the surface model of the product is directed by the curves, which strongly affect the shape of the product. The constraining entity is a curve and the constrained entity is a surface. In the case of continuous surfaces like NURBS surfaces, the curve constraint can be decomposed into a set of point constraints, with additional parameters related to the application domain (figure 1.5). The discretization process has a strong influence on the resulting shape. For example, if the discretization is too coarse, then the shape variations of the initial curve would be lost; if the degrees of freedom are not properly distributed after discretization, then over-constrained configurations will be generated. There are cases for which the decomposition is not necessary, but they are so particular that they are not often encountered in industry [START_REF] Michalik | Constraint-based design of B-spline surfaces from curves[END_REF]. A deformed shape may also fail to be "smooth" when the deformation produces a trimmed self-intersection of the surface; in such a case, the integrity of the original configuration is not well preserved. Concerning continuity conditions, the shape of a model can be defined by a set of patches, and the continuity conditions in position, tangency and/or curvature between them have to be taken into account to preserve the model integrity during the shape transformations (second semantic level, section 1.4.1). More specifically, for a surface composed of a set of connected trimmed patches, the continuity of the surface depends on the continuity of each patch and on the continuity along their connections. Concerning the connections, a C0 continuity indicates a continuity of position along the common trimming lines of two patches, a C1 continuity indicates a continuity of the first-order derivatives along the common trimming lines, and a C2 continuity indicates a continuity of the second-order partial derivatives along the common trimming lines. Unfortunately, most of the time the characteristics of the two connected patches (degrees of the basis functions, number of control points and so on) prohibit the satisfaction of those equalities. Therefore, the continuity must be approximated (Gi instead of Ci) and satisfied at specific points of the trimming lines. In this case, the G0 continuity corresponds to position continuity at specific points, G1 to tangency continuity and G2 to curvature continuity.
Constraints to control the shape of a global configuration
This section describes constraints that act on the whole curve/surface (constraints from semantic level 3). They cannot be decomposed into a set of point constraints like previous examples since they refer to integral properties of the associated curve/surface. In 2D space, curves can be constrained to preserve either a prescribed area or a constant length or to preserve some symmetry with a predefined axis during the deformation process [START_REF] Sauvage | Length preserving multiresolution editing of curves[END_REF][START_REF] Hahmann | Area preserving deformation of multiresolution curves[END_REF]Elber, 2001). In 3D space, the volume preservation is important for achieving realistic deformations of solid objects in computer graphics [START_REF] Lasseter | Principles of traditional animation applied to 3D computer animation[END_REF]. Other constraints such as moments have been studied in [START_REF] Elber | Linearizing the area and volume constraints[END_REF][START_REF] Gonzalez-Ochoa | Computing moments of objects enclosed by piecewise polynomial surfaces[END_REF]. An example of area constraint is provided in figure 1.6.
Constraints to satisfy engineering requirements
This section deals with constraints of the fourth semantic level. As discussed in section 1.4.1, these constraints are usually needed at a given stage of the PDP and their expressions incorporate geometric as well as technological parameters, such as changing the shape of a component in some areas while maintaining the maximum stress value in a given area. However, usually these constraints cannot be decomposed into constraints of the other semantic levels since quantities other than the geometric ones are involved. Moreover, evaluating the results satisfying these constraints would require the use of a specific algorithm, like a Finite Element Analysis. Most of the time, these constraints are seen by the designer as black boxes and the results obtained are then incorporated into a geometric constraint solving process (section 1.5.2). After that, the user can get an idea of which parameters need to be modified by analyzing the solutions. This level is not addressed in this PhD manuscript, and black box constraints cannot be handled yet.
Soft constraints
Constraints in terms of objective function to minimize

Pernot et al. [START_REF] Pernot | Multi-minimisations for shape control of fully freeform deformation features (/spl delta/-F/sup 4/)[END_REF] have shown that soft constraints can also be used to monitor the shape deformation (figure 1.3). Since many configurations are based on a set of trimmed patches, the corresponding deformation problem is globally under-constrained, and the number of control points is generally far greater than the number of constraints. Usually, the objective function indicates the user's intended tendency for the curve/surface behavior after deformation. For example, shape fairness is often used to obtain the smoothest and most graceful shapes, and the criterion corresponds to the minimization of an energy having a physical meaning and leading to natural surfaces. Therefore, soft constraints can be used as a criterion to choose one solution among all those satisfying the strict constraints. Hence, soft constraints can be used to help the designer adjust the shape in accordance with complementary parameters that cannot be incorporated into geometric constraints. This type of constraint, which acts on the function G(X) to be minimized, cannot be directly handled by the proposed approach, which focuses on the identification of redundancies and conflicts in a set of strict constraints. Those aspects are discussed in the results section.
1.5 From constraints to equations

1.5.1 Expressing constraints with equations

CAD modelers provide solvers of geometric constraints and usually the solver has its own constraints editor. Basically, the constraints concern vertices of interest, straight lines, planes, circles, spheres, cylinders or freeform curves and surfaces whose parameters are the unknown variables. Constraints ranging from level 1 to level 3 (Section 1.4.1) can be represented with equations. Those equations can be linear or non-linear. Classical solvers use these constraints to sketch and constrain the shape of desired models. For example, the 2D distance constraint d between two points (x, y) and (x_0, y_0) is translated into the equation (x − x_0)^2 + (y − y_0)^2 − d^2 = 0. Continuity constraints between two patches can also be represented with equations. Moreover, those mathematical equations can also be represented using a computational graph, which is based on Directed Acyclic Graphs (DAGs). In such a representation, a DAG is a tree with shared vertices. The leaves of the tree are either variables (i.e. parameters or unknowns) or numerical coefficients. The internal nodes of the tree are either elementary arithmetic operations or functions such as exp, sin, cos, tan. The DAG is also called a white box DAG, since it allows for computing the derivatives and hessians automatically. If the mathematical equations associated with geometric constraints are available, it is possible to compute the expressions of the derivatives with formal calculus, which can also resort to the Grobner basis or Wu-Ritt method if all the constraints are algebraic and can be triangulated into the form f_1(U; x_1) = f_2(U; x_1, x_2) = ... = 0 (U is the parameters vector and the x_i are the unknown variables).
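As a small illustration (SymPy is used here only as an example of formal calculus, not as the thesis toolchain), the 2D distance constraint above can be turned into an equation whose derivatives are obtained symbolically, playing the role of a white box DAG:

```python
import sympy as sp

x, y, x0, y0, d = sp.symbols('x y x0 y0 d')
distance_eq = (x - x0)**2 + (y - y0)**2 - d**2        # 2D distance constraint as an equation
jacobian = [sp.diff(distance_eq, v) for v in (x, y)]  # derivatives w.r.t. the unknowns

print(distance_eq)
print(jacobian)                                       # [2*x - 2*x0, 2*y - 2*y0]
```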
Black box constraints
On the contrary, a DAG is called a black box DAG, and a constraint is called a black box constraint, when the corresponding constraints cannot be represented with equations or are not computable in practice (Gouaty et al., 2016). This corresponds to constraints of level 4 discussed in section 1.4.1. Examples such as "the maximum of the Von Mises stress should be smaller than 100 MPa", "the final product should cost less than 100", "the mass of the object should be smaller than 100 kilograms" or "there should not be collisions between the parts" are requirements which cannot be transformed into a set of equations. In the work of (Gouaty et al., 2016), black box DAGs are used for Variational Geometric Modeling of free-form surfaces and subdivision surfaces, and a prototype, DECO, shows the feasibility and promises of this approach. Black box constraints can appear when free-form surfaces are generated tediously from fairly sophisticated modeling functions (e.g. sweep, loft, blend). Of course, these black box constraints cannot be manipulated in the same way as if some equations were available, and solvers have to take into account these constraints expressed by functions, i.e. constraints requiring the call to a function. In the context of this PhD thesis, we only consider configurations involving constraints defined by linear and/or non-linear equations. Configurations involving black box constraints have not been addressed.
Needs for better understanding the design intent
1.6.1 Management of the uncertainties

Today's modelers and solvers cannot fully handle the uncertainties designers face when defining their requirements. When designing free-form objects, it is impossible to precisely specify a shape at the beginning, because the ideas designers have in mind are usually not fully refined at the conceptual phase but evolve with engineering conditions and simulation results. As a consequence, the PDP requires many back-and-forth attempts before getting desirable results. Moreover, the results can sometimes be acceptable if they are close to the target ones. This can happen when deforming free-form objects, since the problem inherits difficulties from the non-linear optimization domain, such as the convergence of results to local minima rather than to the global minimum. The distance between local minima and the global minimum is uncertain but may remain under the tolerance that designers can accept. Finally, uncertainties can also appear directly on constraints, that is, constraints with inequalities and not only strict constraints. The values of these constraints remain uncertain before solving but are acceptable if their actual values after solving are within the range of the inequalities (Pernot et al., 2005).
Pernot et al [START_REF] Pernot | Constraints Automatic Relaxation to Design Products with Fully Free Form Features[END_REF] showed an example where the deformation of a surface is constrained by constraint lines. As it is shown in figure 1.7(b), a target line and a limiting line are specified on an initial patch. However, during the drawing of the target line, designer does not have accurate criterion to sketch the end points of the target line. After the deformation, the Gaussian map of the curvature shows the presence of hollows around the end points of the target line. These extremities of the Needs for better understanding the design intent target line are considered as uncertainty areas which must not be taken into account during the deformation process. Such results are not acceptable for designers. They proposed to relax user-specified constraints with two types of scenarios and explains the effects of relaxation on an amplified configuration in figure 1.7(a). The approach developed in this PhD allows for the detection of redundant and conflicting constraints. Thus, once identified, those configurations can either be deleted or the associated constraints can be relaxed.
Detection and treatment of over-constraints
User-specified requirements may not always be consistent and the overall set can be over-constrained. It is up to the solver to detect those inconsistencies and to give feedback on how to remove them. In most of today's modelers, and as shown in figure 1.8, a geometric configuration can be of three types:
• Under-constrained: number of unknowns is greater than the number of equations. This case happens quite often since designers often insert extra DOFs to satisfy requirements.
• Well-constrained: number of unknowns is equal to the number of equations.
• Over-constrained: the number of unknowns is less than the number of equations. The extra equations can then be of two types:
redundant: these equations are consistent with the other ones. That is, they do not affect the solution of original system.
conflicting: fully inconsistent with the others when constraints express contradictory requirements and lead to no satisfactory solution. More specifically, in terms of free-form geometry, the equations do not involve all the variables due to its local support property. Designers may express involuntarily several times the same requirements using different constraints thus leading to redundant equations. But the designers may also involuntarily generates conflicting equations and may have to face overconstrained and unsatisfiable configurations. The configuration of a set of constraints can however be even more complex: a problem can be globally under-constrained and locally over-constrained (figure 1.9). Tools to detect globally over-constrained configurations exist but are limited to a set of geometric constraints applied on Euler geometries like points, lines, planes, etc [START_REF] Guillet | Modification et construction de formes gauches soumises à des contraintes de conception[END_REF]. As it is shown in figure 1.8, advanced 2D sketchers available in most commercial CAD software can interactively identify the over-constraints during the sketching process. However, they do not allow for the detection of redundant and/or conflicting constraints. Moreover, the detection of locally over-constrained configurations is much difficult to handle, especially in the case of hybrid geometries composed of polylines, curves, meshes and surfaces. It is not yet possible to analyze the status of 3D NURBS-based equation systems before submitting them to a solver. For example, Spline3 in figure 1.9 is locally over-constrained but globally underconstrained. Constraint analysis of the 2D sketcher shows the whole geometry is globally over-constrained, which is not correct.
Sometimes, over-constrained configurations can be solved by inserting extra degrees of freedom (DoFs) with Boehm's knot insertion algorithm. As a consequence, many control points are added in areas where not so many DoFs are necessary [START_REF] Pernot | Fully free-form deformation features for aesthetic shape design[END_REF]. This uncontrolled increase of the DoFs impacts the overall quality of the final surfaces, which become more difficult to manipulate than the initial ones. Furthermore, some structural over-constraints cannot disappear following this strategy, and dedicated decision-support approaches have to be developed to identify and fully manage over-constrained configurations.
As a consequence, this PhD thesis has tried to overcome those limitations while enabling a proper identification of locally redundant or conflicting configurations.
Conclusion
In this chapter, we first introduced the structure of the PDP and its relationship with the modeling of free-form shapes. Then, we explained how multiple requirements are modeled in terms of controlling free-form shapes, by transforming requirements into constraints and constraints into equations. Since user-specified requirements can sometimes be redundant or conflicting, we showed what current modelers can and cannot do with respect to systems of constraints and discussed the necessity of debugging the constraint system of free-form configurations. In the next chapter, concepts of geometric over-constraints, methods for debugging constraint systems, and the selection of methods for free-form configurations will be discussed.
Chapter 2
Geometric over-constraints detection
In this chapter, the background with respect to the modeling of geometric constraint systems is first introduced (section 2.1). Then, definitions of geometric over-constraints are summarized and compared in section 2.2. Detection methods are then introduced and compared in section 2.4, based on evaluation criteria defined and extracted from the geometric constraint solving domain (section 2.3). Some of the methods are tested on different use cases in order to find ones that might be interesting for detecting over-constraints of free-form configurations (section 2.5).
Representation of geometric constraints systems
A geometric model can be manipulated either through variables or through geometric entities. This requires the model to be represented at either the equation level or the geometry level. In this section, we show how models are represented at these two levels.
Graph fundamentals
Graphs are mathematical concepts that have found many uses in computer science. A graph describes a structure in which pairs of objects are linked by some kind of relationship. In constraint systems, objects are geometric entities and relationships are geometric constraints. Therefore, graphs are commonly used in the geometric constraint solving domain. For example, a bipartite graph is used in the work of (Ait-Aoudia, Jegou, and Michelucci, 2014) whereas [START_REF] Fudos | A graph-constructive approach to solving systems of geometric constraints[END_REF] use a directed graph. These two types of graphs are the most used in the literature; undirected graphs are also included in this section since they are often used to initially represent a constraint system and will be used in the later chapters of this manuscript as well.
Bipartite graph
Definition In graph theory, a bipartite graph (G = (U ∪ V, E)) is a graph whose vertices can be divided into two disjoint sets U and V (that is, U and V are independent sets) such that every edge e (e ∈ E) connects u (u ∈ U ) to v (v ∈ V ). Vertex sets U and V are usually called the parts of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles [START_REF] Diestel | Graph Theory[END_REF]. For example, figure 2.1-a is a bipartite graph, where U is a set of applicants and V is a set of jobs. In terms of geometric constraints system, U and V can either be equations and variables, or constraints and entities depending on the level of modeling.
Matching problems often concern bipartite graphs. A matching in a bipartite graph is a set of edges chosen in such a way that no two edges share an endpoint. For a geometric constraint system, a matching pairs one variable with one equation, or one constraint with one entity. The unmatched vertices in such a bipartite graph indicate whether constraints/equations or entities/variables are left over, which reveals the constrained status of a system. To find them, we take advantage of maximum bipartite matching at the level of equations or maximum weighted bipartite matching at the level of geometries.
Maximum bipartite matching A maximum matching is a matching of maximum size (maximum number of edges): if any edge is added to it, it is no longer a matching. There can be more than one maximum matching for a given bipartite graph. For example, figure 2.1-b shows a maximum matching found for the bipartite graph of figure 2.1-a, meaning that at most five people can get jobs. In this example, an applicant can get only one job and, conversely, a job can only be assigned to one person. The maximum matching can be computed by converting the bipartite graph into a flow network (figure 2.2-a) and then using the Ford-Fulkerson algorithm [START_REF] Jr | Maximal flow through a network[END_REF] to find the maximum flow in this network.

This approach is used to analyze the structure of equation systems. The maximum matching approach was first used by Serrano [START_REF] Serrano | Automatic dimensioning in design for manufacturing[END_REF] for systems of non-linear equations appearing in conceptual design problems. It has also been adopted by the Dulmage-Mendelsohn decomposition algorithm to debug a geometric constraint system [START_REF] Dulmage | Coverings of bipartite graphs[END_REF]. However, if a system is represented not at the level of equations but at the level of geometries, the method is not applicable. In this case, maximum weighted bipartite matching is used, which is discussed in the next paragraph.
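As a concrete illustration (not taken from the thesis), the sketch below computes a maximum matching of a small, hypothetical applicants/jobs bipartite graph in the spirit of figure 2.1; it uses networkx's Hopcroft-Karp implementation rather than the Ford-Fulkerson flow formulation, and the qualification edges are invented for the example.

```python
import networkx as nx

# Hypothetical applicants/jobs graph in the spirit of figure 2.1.
G = nx.Graph()
applicants = ["u1", "u2", "u3", "u4", "u5"]
jobs = ["v1", "v2", "v3", "v4", "v5"]
G.add_nodes_from(applicants, bipartite=0)
G.add_nodes_from(jobs, bipartite=1)
G.add_edges_from([("u1", "v1"), ("u1", "v2"), ("u2", "v1"),
                  ("u3", "v3"), ("u4", "v3"), ("u4", "v4"), ("u5", "v5")])

# Hopcroft-Karp returns a dict mapping each saturated vertex to its partner
# (both directions are included, hence the division by two).
matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=applicants)
print("matching size:", len(matching) // 2)
print("unmatched applicants:", [u for u in applicants if u not in matching])
```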
Maximum weighted bipartite matching Maximum weighted bipartite matching is a generalization of maximum bipartite matching: for a bipartite graph G = (U ∪ V, E) with edge weights w_{i,j}, find a matching of maximum total weight. These weights form the objective to be maximized. The optimization can be solved by algorithms such as negative cycles, the Hungarian method, or the primal-dual method (Bang-Jensen and Gutin, 2008). Finding a maximum weighted bipartite matching has been used by Latham et al. to analyze the connectivity of a constraint system [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF], which is modeled directly at the level of geometries.
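A minimal sketch of such a weighted matching is given below (the weight matrix is purely illustrative and not taken from the thesis); it relies on the Hungarian method as implemented in scipy.optimize.linear_sum_assignment, with rows standing for constraints and columns for entities.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# w[i, j] = weight of the edge between constraint i and entity j
# (0 where no edge exists); values are invented for the example.
w = np.array([[2, 1, 0],
              [0, 3, 1],
              [1, 0, 2]])

rows, cols = linear_sum_assignment(w, maximize=True)
print("matched pairs:", list(zip(rows, cols)))
print("total weight :", int(w[rows, cols].sum()))
```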
Directed graph
A graph can be directed. For example, if the vertices represent people at a party, and there is a directed edge from a person A to a person B whenever A knows B (while B does not necessarily know A), then the graph is directed: its edges are ordered pairs with an orientation.
[Figure 2.2: the applicants/jobs bipartite graph converted into a flow network with source s and sink t, panels (a) and (b).]
Strongly connected component A strongly connected component is a maximal subgraph of a directed graph G such that for every pair of vertices (u, v) in the subgraph, there is a directed path from u to v and a directed path from v to u. A strongly connected component of G is an induced subgraph which is strongly connected and to which no additional edges or vertices of G can be added without breaking the property of being strongly connected (figure 2.4-b). Strongly connected components are also used to compute the Dulmage-Mendelsohn decomposition, a classification of the edges of a bipartite graph according to whether or not they can be part of a perfect matching in the graph [START_REF] Dulmage | Coverings of bipartite graphs[END_REF]. A perfect matching of a graph is a matching (i.e., an independent edge set) in which every vertex of the graph is incident to exactly one edge of the matching. It therefore contains n/2 edges (the largest possible), which means perfect matchings are only possible on graphs with an even number of vertices. Strongly connected components can be found by depth-first search (DFS) in linear time [START_REF] Tarjan | Depth-first search and linear graph algorithms[END_REF].

Weakly connected component A weakly connected component is a maximal subgraph of a directed graph G such that for every pair of vertices (u, v) in the subgraph, there is a path from u to v when the orientation of the edges is ignored. A weakly connected component of G is an induced subgraph which is weakly connected and to which no additional edges or vertices of G can be added without breaking the property of being weakly connected. The subgraph marked blue in figure 2.4-a is a weakly connected component. As far as we know, weakly connected components are not used by any existing method of geometric constraint solving, but in this manuscript we propose an algorithm that uses them to find the set of constraints to which an over-constraint is redundant or conflicting (section 3.4).
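The two notions can be contrasted on a small, invented directed graph; the sketch below uses networkx to list both kinds of components.

```python
import networkx as nx

# A 3-cycle (a, b, c) plus a one-way tail towards d and e.
D = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"),
                ("c", "d"), ("d", "e")])

# e.g. [{'a', 'b', 'c'}, {'d'}, {'e'}] (component order may vary)
print("strongly connected:", list(nx.strongly_connected_components(D)))
# a single component containing all five vertices
print("weakly connected  :", list(nx.weakly_connected_components(D)))
```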
Undirected graph
A graph can also be undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this graph is undirected because any person A can shake hands with a person B only if B also shakes hands with A. An undirected graph is a graph in which edges have no orientation. The edge from v to u is identical to the edge from u to v, i.e., they are not ordered pairs (figure 2.5). In geometric constraints solving, it is often used as constraint graph to initially represent a constraints system (section 2.1.3).
Matrix
Apart from graph representation, matrix is another format for showing the relationships between related objects in graph theory. Depending on the type of objects, matrix can either be incidence matrix or adjacency matrix.
Incidence matrix The incidence matrix shows the relationship between the edges and the vertices of a graph. For a directed graph G = (V, E) with V = {v_1, ..., v_n} and E = {e_1, ..., e_m}, the incidence matrix is an n × m matrix B such that B_{i,j} = 1 if the edge e_j leaves vertex v_i, -1 if it enters vertex v_i, and 0 otherwise (figure 2.6). For an undirected graph G = (V, E) with V = {v_1, ..., v_n} and E = {e_1, ..., e_m}, the incidence matrix is an n × m matrix B such that B_{i,j} = 1 if the edge e_j connects vertex v_i and 0 otherwise (figure 2.7).
The incidence matrix is mainly used for the practical computation of an algorithm, since it stores the information between the vertices and edges of a structure. Sometimes, it is also interesting to know the relationships between the vertices of a structure before designing a sophisticated algorithm. For example, if most of the vertices of a structure are unconnected, then the structure can be decomposed into small parts, and each part can be further analyzed. In such cases, the adjacency matrix is used to represent the system.

Adjacency matrix The adjacency matrix represents the relationship between pairs of vertices of a graph. As a result, it is a square matrix. For a directed graph without self-loops, the adjacency matrix is a square |V| × |V| matrix A such that its element A_{ij} is 1 when there is an edge from vertex V_i to vertex V_j, -1 when the edge is directed from vertex V_j to vertex V_i, and 0 when there is no edge (figure 2.8). Here, |V| denotes the number of elements in V. For an undirected graph without self-loops, the adjacency matrix is a square |V| × |V| matrix A such that its element A_{ij} is 1 when there is an edge between vertex V_i and vertex V_j, and 0 when there is no edge (figure 2.9).
[Figures 2.6 and 2.7: incidence matrices of the example directed and undirected graphs.]
[Figures 2.8 and 2.9: adjacency matrices of the example directed and undirected graphs.]
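As a small illustration (the edge list is invented and is not the one of figures 2.6-2.9), the incidence and adjacency matrices of a directed graph can be built as follows, using the sign conventions given above.

```python
import numpy as np

vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "a")]   # (tail, head) pairs
idx = {v: i for i, v in enumerate(vertices)}

B = np.zeros((len(vertices), len(edges)), dtype=int)       # incidence, n x m
A = np.zeros((len(vertices), len(vertices)), dtype=int)    # adjacency, n x n
for j, (u, v) in enumerate(edges):
    B[idx[u], j] = 1       # edge j leaves u
    B[idx[v], j] = -1      # edge j enters v
    A[idx[u], idx[v]] = 1  # edge from u to v
    A[idx[v], idx[u]] = -1 # thesis convention for the reverse direction

print(B)
print(A)
```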
Graphs at the level of equations
In this section, we give an example of a geometric constraint system modeled at the level of equations. As discussed in section 1.5.1, this approach uses variables to describe the shape, position, etc. of the geometries, and algebraic equations to express their relationships. Consequently, a constraint system is represented by a set of algebraic equations.
[Figure 2.10: four points p1, ..., p4 and four lines l1, ..., l4 forming a quadrilateral, with point-on-line constraints c_i: PtOnLine(p_i, l_i), distance constraints d3, d4 and angle constraints θ_i.]

With p_i = (x_i, y_i), i = 1, ..., 4, and l_i: y = a_i · x + b_i, i = 1, ..., 4, the algebraic equations are:

$$
\begin{aligned}
& e_1: a_1 x_1 + b_1 = y_1; \qquad e_2: a_2 x_2 + b_2 = y_2\\
& e_3: a_3 x_3 + b_3 = y_3; \qquad e_4: a_4 x_4 + b_4 = y_4\\
& e_5: (x_3 - x_2)^2 + (y_3 - y_2)^2 = d_3^2\\
& e_6: (x_4 - x_3)^2 + (y_4 - y_3)^2 = d_4^2\\
& e_7: \cos(\theta_1)\,\|p_2 - p_1\|\,\|p_4 - p_1\| = (p_2 - p_1)\cdot(p_4 - p_1)\\
& e_8: \cos(\theta_2)\,\|p_1 - p_2\|\,\|p_3 - p_2\| = (p_1 - p_2)\cdot(p_3 - p_2)\\
& e_9: \cos(\theta_3)\,\|p_2 - p_3\|\,\|p_4 - p_3\| = (p_2 - p_3)\cdot(p_4 - p_3)\\
& e_{10}: \cos(\theta_4)\,\|p_1 - p_4\|\,\|p_3 - p_4\| = (p_1 - p_4)\cdot(p_3 - p_4)\\
& e_{11}: a_1 x_4 + b_1 = y_4; \qquad e_{12}: a_2 x_1 + b_2 = y_1\\
& e_{13}: a_3 x_2 + b_3 = y_2; \qquad e_{14}: a_4 x_3 + b_4 = y_3
\end{aligned}
\tag{2.1}
$$
The equations can then be analyzed by using numerical methods. However, they can further be transformed into an equation graph, if structural analysis methods are to be used.
An equation graph is a bipartite graph where two classes of nodes represent equations and variables respectively. In this case, the equation graph of equations 2.1 is shown in figure 2.11.
[Figure 2.11: bipartite graph of equations 2.1, with one class of nodes for the equations e_1, ..., e_14 and one class for the variables x_i, y_i, a_i, b_i.]
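Such an equation graph can be derived automatically by linking each equation to the variables it involves. The sketch below (an illustration, not the thesis implementation) does this with sympy and networkx for a small subset of equations 2.1.

```python
import sympy as sp
import networkx as nx

x1, y1, x2, y2, x4, y4 = sp.symbols("x1 y1 x2 y2 x4 y4")
a1, b1, a2, b2 = sp.symbols("a1 b1 a2 b2")

# Subset of system 2.1: e1, e2, e11 and e12.
equations = {
    "e1":  sp.Eq(a1 * x1 + b1, y1),
    "e2":  sp.Eq(a2 * x2 + b2, y2),
    "e11": sp.Eq(a1 * x4 + b1, y4),
    "e12": sp.Eq(a2 * x1 + b2, y1),
}

G = nx.Graph()
for name, eq in equations.items():
    for var in eq.free_symbols:          # variables occurring in the equation
        G.add_edge(name, str(var))

print(sorted(G.edges()))
```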
Graphs at the level of geometries
The above system can also be represented at the level of geometries: extracting the geometric entities and constraints into the vertices of a graph provides another way of modeling the system.
Bipartite graph Similar to figure 2.11, a bipartite graph can model a constraint system by using two classes of vertices to represent geometric entities and constraints respectively, and edges to show their relationships (figure 2.12). From the above examples, we can see that modeling geometric systems at the level of geometries is more intuitive than at the level of equations, but it ignores the numerical information of the system, whereas the latter produces concrete equations. The choice between the two depends on the users' practical needs.
Basic definitions
This section introduces several key concepts of the manuscript. The first one is geometric over-constraints, which are defined both at the level of geometries and at the level of equations. It is also necessary to introduce singular configurations, since they often confuse the judgment of geometric over-constrained configurations. After that, the definition of over-constraints in terms of free-form geometry is formalized.
Geometric over-constraints at the level of geometries
At this level, definitions are classified into two groups: the constraint graph group and the bipartite graph group. For the former, the constraint graph is transformed into a weighted constraint graph, where the weight of a vertex represents the DoFs of an entity and the weight of an edge represents the DoFs removed by a constraint. For the bipartite graph group, only vertex weights are added: the weight of an entity equals its DoFs and the weight of a constraint equals the DoFs it can remove.
Definitions with respect to constraint graph
Here, we use G = (V, E) to represent a constraint system with | V | number of entities and | E | number of constraints.
In Rigidity Theory (Combinatorial Rigidity), Laman's theorem [START_REF] Laman | On graphs and rigidity of plane skeletal structures[END_REF] characterizes the rigidity of bar frameworks, where a geometric system is composed of points constrained by distances.
Theorem 1 A constraint system in the 2D plane composed of N points linked by M distances is rigid iff 2 • N -M = 3 and for any subsystem composed of n points and m distances, 2 • n -m ≥ 3.
The constraints and entities are limited to distances and points respectively. Podgorelec [START_REF] Podgorelec | Dealing with redundancy and inconsistency in constructive geometric constraint solving[END_REF] extended the theorem by assuming that each geometric element has 2 DoFs and each constraint eliminates 1 DoF. Therefore, the weights of the vertices and edges of the constraint graph are 2 and 1 respectively.
Definition 1 For constraint graph G = (V, E), a geometric constraint system is:
• Structurally over-constrained if there is a subgraph G' = (V', E') with 1 · |E'| > 2 · |V'| - 3,
• Structurally under-constrained if G is not structurally over-constrained and 1 · |E| < 2 · |V| - 3, or
• Structurally well-constrained if G is not structurally over-constrained and 1 · |E| = 2 · |V| - 3.

Definition 2 A constraint e is a structural over-constraint if a structurally over-constrained subsystem G' = (V', E') of G with e ∈ E' can be derived such that G'' = (V', E' - e) is structurally well-constrained.
An example illustrates Definition 1. The system is composed of 3 points (each with 2 DoFs) with different constraints in 2D space. Figure 2.14-a is over-constrained because it contains 3 distance constraints and 3 vertical position constraints: since each consumes 1 DoF, the system consumes 6 DoFs, satisfying 6 > 2 × 3 - 3. Figure 2.14-b is structurally well-constrained since the constraints are reduced to 3 distance constraints, satisfying 3 = 2 × 3 - 3. Figure 2.14-c is structurally under-constrained since only 2 distance constraints are left, satisfying 2 < 2 × 3 - 3.
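The counting behind Definition 1 can be written in a few lines of Python; the sketch below (assuming, as in the definition, 2 DoFs per point and 1 DoF removed per constraint) classifies the three configurations of figure 2.14, although a complete check would also have to examine every subgraph.

```python
def structural_status(n_points, n_constraints):
    """Simple 2*|V| - 3 counting test for 2D point/constraint systems."""
    budget = 2 * n_points - 3
    if n_constraints > budget:
        return "structurally over-constrained"
    if n_constraints == budget:
        return "structurally well-constrained"
    return "structurally under-constrained"

for m in (6, 3, 2):   # figures 2.14-a, 2.14-b and 2.14-c
    print(m, "constraints on 3 points ->", structural_status(3, m))
```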
Definition 1 is correct only if all geometric entities are points and all constraints are distances in 2D. It cannot be used to characterize geometric constraint systems where other types of constraints are involved. For example, consider angle constraints in 2D: 3 line segments with 3 incidence constraints form a triangle with 3 · 4 - 3 · 2 = 6 DoFs. If 3 angle constraints are added (each removing 1 DoF), the system is structurally well-constrained according to Definition 1. But in fact, 2 angle constraints are enough, since the third one is a linear combination of the other two. Another counter-example is the double-banana geometry, where segments represent point-point distances in 3D, which will be discussed in section 2.5.2.

Definition 3 The degree of freedom (DoF) of a geometric entity (DoF(v), where v is the entity) is the number of independent parameters that must be set to determine its position and orientation. For a system G = (V, E), its DoFs are defined as DoF(G) = Σ_{v∈V} DoF(v).

Definition 4 The degree of freedom of a geometric constraint (DoC(e), where e is the constraint) is the number of independent equations needed to represent it. For a system G = (V, E), the DoFs that all constraints can remove are DoC(E) = Σ_{e∈E} DoC(e).
Definition 5 For constraint graph G = (V, E), a geometric constraint system is:
• Structurally over-constrained if there is a subgraph G' = (V', E') satisfying DoC(E') > DoF(V') - D,
• Structurally well-constrained if DoC(E) = DoF(V) - D and all subgraphs G' = (V', E') satisfy DoC(E') ≤ DoF(V') - D, or
• Structurally under-constrained if DoC(E) < DoF(V) - D and it contains no structurally over-constrained subgraphs.
A typical example that Definition 5 cannot treat properly is 2 points bound by a distance constraint in 3D: it allows only 5 of the 6 possible independent displacements, since the system cannot rotate around the axis passing through the 2 points. More counter-examples in [START_REF] Jermann | Algorithms for identifying rigid subsystems in geometric constraint systems[END_REF] suggest that the value of D depends on the system itself rather than on the dimension. Therefore, Jermann et al introduced the Degree of Rigidity (DoR) to replace the number of DoFs a system is expected to have if it is rigid. Their definitions are as follows.
Definition 6 For constraint graph G = (V, E), a geometric constraint system is:
• Structurally over-constrained if there is a subgraph G' = (V', E') satisfying DoC(E') > DoF(V') - DoR(V'),
• Structurally well-constrained if DoC(E) = DoF(V) - DoR(V) and all subgraphs G' = (V', E') satisfy DoC(E') ≤ DoF(V') - DoR(V'), or
• Structurally under-constrained if DoC(E) < DoF(V) - DoR(V) and it contains no structurally over-constrained subgraphs.
The rule for computing DoR is described in [START_REF] Jermann | Algorithms for identifying rigid subsystems in geometric constraint systems[END_REF]. Within this rule, the DoR of two secant planes in 3D is 5 while it is 4 for two parallel planes. Similarly, the DoR of 3 collinear points is 2, while the DoR of 3 non-collinear points is 3.
A pure graph-based method has no means to know whether 3 points are collinear or not, or whether two planes are parallel or not. It either assumes that the configuration is generic, or it can check whether the parallelism/collinearity is an explicit constraint of the system; but it may happen that the parallelism/collinearity is a remote consequence of a set of constraints, thanks to the Desargues, Pappus, Pascal or Miquel theorems: the incidence in the conclusion is a non-trivial consequence of the hypothesis. This will be further discussed in section 2.2.1.
Definitions with respect to bipartite graph
Latham et al [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF] introduced similar definitions based on a connectivity graph. It is a graph where vertices represent geometric entities and constraints, which can be treated as a bipartite graph. Note that, we use G = (U, V, E) to denote a bipartite graph whose partition has the vertices U (entities) and V (constraints), with E denoting the edges of the graph.
Definition 7 For bipartite graph G = (U, V, E), a geometric constraint system is:
• Structurally over-constrained if it contains an unsaturated constraint,
• Structurally under-constrained if it contains an unsaturated entity.
A vertex u or v is said to be unsaturated if DoF(u) or DoC(v) is not equal to the sum of the weights of its incident edges in a maximal weighting. An unsaturated constraint is treated as a structural over-constraint. The weights of the edges are computed by a maximum weighted matching of the bipartite graph. In section 2.4.1, an example is given to illustrate this definition.
Geometric redundancy
Geometric redundancy refers to additional constraints that try to constrain internally established relations. These relations are consequences of domain-dependent mathematical theorems hidden in a geometric configuration. Users are typically not aware of these implicit constraints and may try to constrain the internally established relations by additional constraints. Geometric redundancy does not use parameters; therefore, this type of geometric over-constraint cannot be detected by methods based on DoF counting. In 2D, a typical example is the 3-angle constraint scheme specified on a triangle: the three angles sum to 180°, so it is not necessary to specify all three angles as constraints, because the value of the third one can be derived once the other two are defined. Specifying the 3 angle constraints therefore generates a geometric redundancy that is either redundant or conflicting. In 3D, every incidence theorem (Desargues, Pappus, Pascal, etc.) provides implicit dependent constraints (Michelucci and Foufou, 2006a). For example, Pappus's hexagon theorem [START_REF] Coxeter | Geometry revisited[END_REF] states that given one set of collinear points A, B, C and another set of collinear points D, E, F, the intersection points X, Y, Z of the line pairs AE and DB, AF and DC, BF and EC are collinear, lying on the Pappus line. In this case, if the line pairs XY and YZ are specified to be collinear, then this constraint is the geometric redundancy (figure 2.15).
Geometric over-constraints at the level of equations
In this section, we summarize the definitions used when a qualitative study of geometric systems is performed at the level of equations. Modeling at the geometric level is geometric-entity oriented, which preserves the geometric information of the system. Modeling at the level of equations, however, discards the geometric properties of a system but enables a fine detection of geometric over-constraints.
Structural definition The system of equations is transformed into a bipartite graph whose two classes of vertices represent equations and variables respectively (figure 2.11). The characterization is based on the results of a maximum matching (Ait-Aoudia, Jegou, and Michelucci, 2014). Here, we assume that G = (U, V, E) is a bipartite graph with U and V (U ∩ V = ∅) representing variables and equations respectively, and E representing the edges.
Definition 8 For a bipartite graph G = (U, V, E) and its subgraph G' = (U', V', E'), G' is:
• Structurally over-constrained if the number of elements in U' is smaller (in cardinality) than the number of elements in V', i.e. |U'| < |V'|,
• Structurally well-constrained iff G' has a perfect matching,
• Structurally under-constrained if the number of elements in U' is larger (in cardinality) than the number of elements in V', i.e. |U'| > |V'|.

Definition 9 Let M be a maximum matching of G = (U, V, E). If M is not a perfect matching and V' is the subset of V which is not saturated by M, then the equations of V' are the structural over-constraints.
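Definition 9 translates directly into code: build the equation-variable bipartite graph from the incidence structure, compute a maximum matching and report the unsaturated equations. The toy structure below is invented for illustration.

```python
import networkx as nx

# Which variables occur in which equation (structure only, no coefficients).
structure = {"e1": {"x", "y"}, "e2": {"y", "z"}, "e3": {"x", "z"}, "e4": {"x", "y", "z"}}

G = nx.Graph()
for eq, variables in structure.items():
    for var in variables:
        G.add_edge(eq, var)

matching = nx.bipartite.maximum_matching(G, top_nodes=list(structure))
unsaturated = [eq for eq in structure if eq not in matching]
# 4 equations for 3 variables: exactly one equation remains unsaturated.
print("structural over-constraints:", unsaturated)
```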
Numerical definition
Informally, an over-constrained problem has no solution, a well-constrained problem has a finite number of solutions, and an under-constrained problem has infinitely many solutions. Based on matroid theory [START_REF] Oxley | Matroid Theory. Oxford graduate texts in mathematics[END_REF], we give the definitions of basis equations, redundant equations and conflicting equations as follows.
Definition 10 Let G = (E, V, P ) be a geometric constraints system, where E is a set of equations, V is a set of variables and P is a set of parameters. Let E r be a non-empty collection of subsets of E, called basis equations (we call it basis in short), satisfying:
• no basis properly contains another basis;
• if E_r1 and E_r2 are bases and e is any equation of E_r1, then there is an equation f of E_r2 such that {(E_r1 - e) ∪ f} is also a basis.
Definition 11 Let G = (E, V, P) be a geometric constraints system and let E_r be a basis. For an equation e, adding it to E_r forms a new group {E_r ∪ e}. If {E_r ∪ e} is solvable, then e is a redundant equation.

Definition 12 Let G = (E, V, P) be a geometric constraints system and let E_r be a basis. For an equation e, adding it to E_r forms a new group {E_r ∪ e}. If {E_r ∪ e} is not solvable, then e is a conflicting equation.
Definition 13 Let G = (E, V, P ) be a geometric constraints system which is composed of two subsystems:
G b = (E b , V, P ) and G o = (E o , V, P ) with {E = E b ∪ E o , E b ∩ E o = ∅}. If E b is a basis, then E o is a set of numerical over-constraints.
Property For an over-constraint E_oi ∈ E_o, the spanning group E_sg of E_oi is a group of independent constraints with respect to which E_oi is redundant or conflicting. For linear systems, the spanning group E_sg = {e_sg1, e_sg2, ..., e_sgn} ⊂ E_b of E_oi satisfies:

$$E_{oi} = \sum_{j=1}^{n} c_j \, e_{sgj} + b \tag{2.2}$$

where the c_j ≠ 0 are the corresponding scalar coefficients, {e_sg1, e_sg2, ..., e_sgn} are linearly independent and b is the bias. Thus, E_oi is a linear combination of {e_sg1, e_sg2, ..., e_sgn, b}. Moreover, E_oi is redundant if b = 0, otherwise it is conflicting.
However, E sg is not unique for a given E oi . For example, assuming a linear system of constraints represented at the level of equations:
$$
\begin{aligned}
& e_1: x_1 + x_2 + x_3 + x_4 = 1\\
& e_2: x_1 + 2x_2 + 3x_3 + x_4 = 4\\
& e_3: x_1 - 2x_2 + x_3 + x_4 = 5\\
& e_4: 6x_1 + x_3 + 2x_4 = 7\\
& e_5: 8x_1 + 5x_3 + 4x_4 = 17\\
& e_6: 11x_1 + x_2 + 10x_3 + 7x_4 = 27
\end{aligned}
\tag{2.3}
$$
Clearly, the system is over-constrained since there are more equations than variables. Through a linear analysis of the system, we find that e5 is a linear combination of {e2, e3, e4, 1} and is spanned by {e2, e3, e4}; e6 is a linear combination of {e1, e2, e3, e4, 1} and is spanned by {e1, e2, e3, e4} (figure 2.16). Since the bias of the two groups is 1, both e5 and e6 are conflicting. In this case, {e1, e2, e3, e4} can be treated as a set of basis constraints since all these equations are independent and their number equals the number of variables.
$$
\begin{aligned}
& e_1: x_1 + x_2 + x_3 + x_4 = 1\\
& e_2: x_1 + 2x_2 + 3x_3 + x_4 = 4\\
& e_3: x_1 - 2x_2 + x_3 + x_4 = 5\\
& e_4: 6x_1 + x_3 + 2x_4 = 7
\end{aligned}
\tag{2.4}
$$

Figure 2.16: Spanning groups of e5 and e6 with respect to the basis set 2.4, namely e5 = 1·e2 + 1·e3 + 1·e4 + 1 and e6 = 1·e1 + 2·e2 + 2·e3 + 1·e4 + 1; the numbers marked green are the coefficients while the ones marked red are the biases.

However, if we replace e4 with e5, the new set {e1, e2, e3, e5} is also a basis constraint set satisfying Definition 13. The linear analysis shows that e4 is a linear combination of {e2, e3, e5, -1} and is spanned by {e2, e3, e5}; e6 is a linear combination of {e1, e2, e3, e5} and is spanned by {e1, e2, e3, e5} (figure 2.17). Also, e4 is conflicting and e6 is redundant according to the corresponding bias values.
$$
\begin{aligned}
& e_1: x_1 + x_2 + x_3 + x_4 = 1\\
& e_2: x_1 + 2x_2 + 3x_3 + x_4 = 4\\
& e_3: x_1 - 2x_2 + x_3 + x_4 = 5\\
& e_5: 8x_1 + 5x_3 + 4x_4 = 17
\end{aligned}
\tag{2.5}
$$

Figure 2.17: Spanning groups of e4 and e6 with respect to the basis set 2.5, namely e4 = -1·e2 - 1·e3 + 1·e5 - 1 and e6 = 1·e1 + 1·e2 + 1·e3 + 1·e5; the numbers marked green are the coefficients while the one marked red is the bias.

From figures 2.16 and 2.17, we can see that the spanning group of e6 is not unique: it depends on the chosen set of basis constraints. Also, the type of an over-constraint can change: e6 is conflicting for the basis constraint set 2.4 while it is redundant for the basis constraint set 2.5.
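The spanning groups and biases of this small example can be recovered numerically. The sketch below (a plain numpy illustration, assuming the basis {e1, ..., e4} of system 2.3) solves for the coefficients c_j of equation 2.2 and classifies e5 and e6 from whether the bias vanishes.

```python
import numpy as np

A = np.array([[1,  1,  1, 1],    # e1
              [1,  2,  3, 1],    # e2
              [1, -2,  1, 1],    # e3
              [6,  0,  1, 2],    # e4
              [8,  0,  5, 4],    # e5
              [11, 1, 10, 7]],   # e6
             dtype=float)
b = np.array([1, 4, 5, 7, 17, 27], dtype=float)

basis = slice(0, 4)                         # e1..e4: independent and square
for k, name in ((4, "e5"), (5, "e6")):
    c = np.linalg.solve(A[basis].T, A[k])   # coefficients such that A[k] = c @ A[basis]
    bias = b[k] - c @ b[basis]
    status = "redundant" if np.isclose(bias, 0.0) else "conflicting"
    print(name, "coefficients:", np.round(c, 6), "bias:", round(float(bias), 6), "->", status)
```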
Evaluation of the definitions
A set of criteria is defined to evaluate these definitions (table 2.1). These criteria correspond to the columns of the table: D is a dimension (system)-dependent constant discussed in section 2.2.1; geometries refers to the geometric types a definition is able to cover; counter example lists geometries that a definition cannot deal with; fixation is used to highlight definitions that include determining the location and orientation of a geometric configuration; geometric redundancy distinguishes definitions that cannot be misled by geometric redundancy. From the table, we can see that the definitions can be divided into two groups, Def 1, 5, 6 and Def 7, 8, 10, either based on the criterion D or on fixation. Because the former group manipulates geometric elements directly at the level of geometries, and geometric constraints are usually supposed to be independent of any coordinate system, these definitions cannot be used to determine the location and orientation of a geometric configuration (no fixation) and the value of D has to be carefully defined. Also, the types of geometries and constraints they cover are limited. For example, Def 1 and Def 5 are defined for point geometries and distance constraints only, and collinear (and cocyclic, coconic, cocubic, etc.) points are forbidden. Def 6 extends the types of geometries to points, lines and planes and allows incidence constraints to be defined. Def 7 abstracts geometric entities and constraints into DoFs and DoCs, and defines over-constraints by simply comparing the number of DoFs of the geometric entities with the DoCs of the geometric constraints. In this way, the definition is not limited to any specific class of geometric entities and constraints. Def 8 and 10, however, are numerical definitions manipulating geometries and constraints at the level of equations. These definitions can cover any geometric entities and constraints as long as they can be transformed into equations. Their counter examples are black-box constraints (section 1.5.2), which cannot be represented with equations. Moreover, these definitions expect systems to be fixed with respect to a global coordinate system and thus D = 0. Finally, since geometric redundancy does not use the parameters of any geometry of a constraint system, it cannot be covered by any of these definitions.
Criteria for evaluating the approaches
To carry out appropriate analyses and comparisons between the over-constraint detection approaches, various evaluation criteria and a ranking system are proposed here. They depend on the needs and characteristics of the application domain. Considering the detection process as well as users' needs for debugging, the criteria have been classified into five main categories: criteria related to the process of detecting over-constraints; criteria related to the system decomposition; criteria related to the system modeling; criteria related to the way of generating results; and criteria related to the evaluation of results. Such a ranking system permits a qualitative classification of the various approaches according to the specified criteria. Here, a boolean scale is sufficient to characterize the capabilities of the approaches. Firstly, the symbols ⊖/⊕ are used to tag the methods not adapted/well adapted, incomplete/complete with respect to the considered criterion (table 2.2). They state a negative/incomplete (⊖) or positive/complete (⊕) tendency of the approaches with respect to the given criterion, and are defined in such a way that the optimal method would never be assigned the symbol (⊖). Secondly, in case the information contained in the articles does not enable the assessment of a criterion, the symbol (?) is used. Finally, an empty cell means that the criterion has no meaning for the method and is simply not applicable. For example, distinguishing redundant and conflicting constraints is a criterion for evaluating numerical detection methods, but there is no meaning in applying it to structural detection methods since the latter only generate structural over-constraints. Of course, the synthesis results come from our understanding of the publications.
Criteria attached to the level of detecting over-constraints
The first criterion is relative to the type of geometric over-constraints (Figure 2.18,a), which are either numerical (a⊕) or structural (a ). Second criterion concentrates on distinguishing redundant and conflicting constraints detected by numerical methods (Figure 2.18,b). Finally, in engineering design, designers could better debug and modify a geometric over-constraint if its spanning group is informed (Figure 2.18,c). This criterion evaluates numerical methods only.
Criteria related to the system decomposition
Decomposition is an important phase in the geometric constraint solving domain: a large system is decomposed into small solvable subsystems, which speeds up the solving process. A desirable method should return the decomposition result to the user for debugging purposes by generating over-constrained components, which helps him/her locate the geometric over-constraints (d⊕). Also, the ability to generate rigid subsystems should be considered. Here, rigid has two meanings: numerical methods detect rigid subsystems that are solvable (finite number of solutions, e⊕) while structural methods detect rigid subsystems that are structurally well-constrained (Definition 5, e ).
[Figure 2.18: classification of the constraints of a system into basis constraints and over-constraints, the latter being split into redundant and conflicting constraints together with their redundant/conflicting spanning groups.]
Decomposition methods should take into account singularities. Indeed, many methods work under a genericity hypothesis and decompose systems into generically solvable components. A generic configuration remains rigid (resp. non-rigid) before and after an infinitesimal perturbation (Combinatorial Rigidity). A singular configuration, however, switches from rigid (resp. non-rigid) to non-rigid (resp. rigid) after an infinitesimal perturbation. This happens when geometric elements are drawn with unspecified properties (collinearity, coplanarity, etc.). It may be the case that a solution of a decomposed system lies in a singular variety, e.g., includes some unspecified collinearity or coplanarity. In this case, the generically solvable components may no longer be solvable. For instance, the double-banana system (figure 2.36) is generically over-constrained but becomes under-constrained if the height of both bananas is the same, since the two "bananas" can then fold continuously along the line passing through their extremities. Moreover, the Jacobian matrix at a singular configuration is rank deficient, which introduces dependencies between constraints. For example, the Jacobian matrix of the subsystem {p3, l2, c, c7, c8, c13} of figure 2.19 is of size 7 × 7, but its rank is 5: the configuration is singular. Obviously, there are no redundant constraints and the singularity comes from the tangency constraints between c and l1, l2 [START_REF] Xiaobo | Singularity analysis of geometric constraint systems[END_REF].
Figure 2.19: Singular configuration as described in [START_REF] Xiaobo | Singularity analysis of geometric constraint systems[END_REF]
Criteria related to system modeling
This set of criteria characterizes the detection approaches with respect to system modeling: the type of geometries (g) and constraints (h), modeling at the level of equations or geometries (i), and 3D or 2D space (j). The first criterion characterizes the type of geometries: geometric entities are either Euler geometries (g ) such as line segments, cylinders, spheres, etc., or NURBS geometries (g⊕). The second criterion deals with linear (h ) and non-linear (h⊕) constraints. The third criterion describes a system either at the level of equations (i⊕) or geometries (i ). Finally, a modeling system can be either in 2D (j ) or 3D (j⊕) space.
Criteria related to the way of generating results
In practice, a designer may require that a modeler outputs geometric over-constraints iteratively when modeling a geometric system interactively. Iteratively means that the method generates results through steps/loops (k ) while single-pass methods generate the results all at once (k⊕). Also, a user-friendly method should enable the treatment of the results for debugging purposes (l⊕), that is, locate the results at the level of geometries so that users can modify or remove them.
Results generation: gradation of criteria
  Level   Criteria            ⊕             ⊖
  k       way of detection    single-pass   iteratively
  l       debugging           yes           no

Table 2.6: Criteria related to the way of generating results (set 4)
Detection approaches
Now that the criteria used to evaluate the different approaches have been introduced, this state-of-the-art gathers together existing techniques that are capable of detecting geometric over-constraints. The techniques are classified with respect to Definitions in section 2.2.1. The table 2.7 gathers together the results of this analysis.
Methods working at the level of geometry
This group of methods detects geometric over-constraints based on DoF analysis. Since these methods operate directly on geometric entities, the geometric information of the over-constraints is retained and thus easy to interpret.
Methods corresponding to the Definition 1
Fudos and Hoffman [START_REF] Fudos | A graph-constructive approach to solving systems of geometric constraints[END_REF]) introduced a constructive approach to solve a constraint graph, where geometric entities are lines and points and geometric constraints are distances and angles. In their reduction algorithm, triangles are found and merged recursively until the initial graph is rewritten into a final graph. The structurally over-constrained system/subsystem are detected in two ways. Firstly, before finding triangles, the approach checks if the subgraph is structurally over-constrained. Secondly, if a 4-cycle graph is met during the reduction process, then the system is structurally over-constrained. A 4-cycle graph corresponds to two clusters sharing two geometric elements, which is structurally over-constrained.
Results of evaluating the method are as following:
• Criteria set 1 Although the method allows for checking the constrained status of a system, it does not specify how to find the structural over-constraints as well as finding the spanning groups (a,c?). Since the method is structural, it is meaningless addressing how to distinguish redundant and conflicting constraints (b ).
• Criteria set 2 The method enables to identify a 4-cycle graph which is structurally over-constrained component (d⊕). Also, the triangles found during the recursive process are the rigid subsystems (e ). Regarding to dealing with singular configurations, it is not mention in the original paper (f?).
• Criteria set 3 Normally, the constraint systems are composed of Euler geometries (g ) with non-linear constraints (distances, angles h⊕) and modeled at the geometric level(i ) in 2D space (j ).
• Criteria set 4 Since detecting geometric over-constraints are not addressed, there is no meaning discussing how the over-constraints are generated (k ) as well as debugging them(l ).
Methods corresponding to the Definition 5
Hoffman et al adapted their Dense algorithm (Hoffmann, Lomonosov, and Sitharam, 1998) to locate 1-overconstrained subgraph (satisfying DOCs > DOF s -D + 1) of 1-overconstrained graph [START_REF] Hoffmann | Making constraint solvers more usable: overconstraint problem[END_REF]. The algorithm is composed of four main steps.
1. overloads the capacity from one arc from the source to a constraint by D + 2.
2. distributes a maximum flow in the overloaded network, DoC_i + D + 2 from each constraint to its end points (entities), to find a subgraph of density -D + 1. Such a dense graph is found when there exists an edge whose flow cannot be distributed even with redistribution [START_REF] Hoffmann | Finding solvable subsets of constraint graphs[END_REF]. The algorithm continues to locate a minimal 1-overconstrained subgraph but, in our opinion, to check whether a system is over-constrained or not, it is sufficient that the algorithm terminates at step 3. The authors suggest to further extend the algorithm to incrementally detect k-overconstrained graphs. The algorithm allows for updating constraints efficiently and maintaining them dynamically. Once the constraints have been identified, they are removed. The algorithm excludes large geometric structures that have rotational symmetry, however.
Results of evaluating the method are:
• Criteria set 1 The method does not specify neither detecting geometric over-constraints nor the spanning groups (a,c?). Since the method is structural, talking about distinguishing redundant and conflicting constraints is meaningless (b ).
• Criteria set 2 The algorithm locates the 1-overconstrained subgraph (d⊕) rather than rigid subsystems (e ). Regarding singular analysis of a system, it is not addressed by the method (f?).
• Criteria set 3 The evaluation of this set of criteria on the method is the same with the previous's one except that the modeling dimension can be both 2D and 3D (j⊕ ).
• Criteria set 4 Since detecting geometric over-constraints are not addressed, there is no meaning discussing how the over-constraints are generated (k ) as well as debugging them (l ).
Methods corresponding to the Definition 6
Hoffmann's algorithm cannot deal with constraints such as alignments, incidences and parallelisms either generic or non-generic. Based on their work, Jermann et al [START_REF] Jermann | Algorithms for identifying rigid subsystems in geometric constraint systems[END_REF] proposed the Over-rigid algorithm with the following modifications:
1. The overload is applied on a virtual node R (figure 2.21, figure 2.22), whereas in the Dense algorithm it is applied on a constraint node.
2. The overload is Dor + 2 instead of D + 2.
3. The R node is attached to Dor-minimal subsets of objects in order to find over-rigid subsystems.
The Dor varies with different subsystems; readers can refer to the original paper for the computation details. The Over-rigid algorithm is initially designed to check whether a system is structurally well-constrained or not. However, the authors do not show specifically how to detect structurally over-constrained systems, as Hoffmann et al did for the Dense algorithm. Since the algorithm is a modification of the Dense algorithm, it can be adapted to detect over-constrained systems by setting the overload to Dor + 2 and following steps 1-3 of the Dense algorithm (section 2.4.1). The evaluation of the adapted version of the Over-rigid algorithm is the same as that of the modified version of the Dense algorithm. Latham et al [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF] detected over-constrained subgraphs based on DoF analysis by finding a maximum weighted matching (MWM) of a bipartite graph. The method decomposes the graph into minimal connected components which they call balanced sets. If a balanced set belongs to a predefined set of patterns, then the subproblem is solved by a geometric construction, otherwise a numeric solution is attempted. The method addresses symbolic constraints and enables the identification of under- and over-constrained configurations.
As shown in figure 2.23, the constraint system is initially represented by a constraint graph with two classes of nodes representing the DoFs of geometric entities and constraints respectively. Then, it is transformed into a directed graph by specifying directions from the constraint nodes to the entity nodes. After that, a maximum matching between constraints and entities is computed and the unsaturated constraint nodes are the geometric over-constraints. Moreover, they addressed the over-constrained problems by prioritizing the given constraints, so that over-constraints can automatically be corrected using constraint priorities.

Figure 2.23: Over-constraints detection process of an example taken from [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF]. The unsaturated node is the geometric over-constraint.
Results of evaluating the method are:
• Criteria set 1 The unsaturated constraints are geometric over-constraints (a ). Since the detected over-constraints are structural, there is no meaning discussing redundant and conflicting constraints as well as the spanning groups (b,c ).
• Criteria set 2 The subgraph containing an unsaturated constraint node is the over-constrained component (d⊕). It can be found by tracing the descendant nodes of the unsaturated node. Moreover, the algorithm enables a decomposition of the system into balanced subsets, which are the rigid subsystems (e ). Analyzing singular configurations is not discussed by the authors (f?).

• Criteria set 3 The results of the evaluation of this set of criteria are the same as those of the Over-rigid algorithm, except that the whole system is modeled in 3D space (j⊕).
• Criteria set 4 The over-constraints are detected in single-pass way (k⊕).
And they proposed to correct the constraints according to constraints priorities (l⊕).
Methods working at the level of equation
In general, almost all the geometric constraints can be translated mechanically into a set of algebraic equations (Hoffmann, Lomonosov, and Sitharam, 1998). Therefore, detecting geometric over-constraints is equivalent with identifying a set of conflicting/redundant equations. However, even if detection works at the level of equations, the treatment is to be done at the level of geometries and constraints.
Methods corresponding to the Definition 8
A variation of Latham's method directly deals with algebraic constraints, where a maximum cardinality bipartite matching is used. The D-M algorithm decomposes an equation system into smaller subsystems by transforming it into a bipartite graph and canonically decomposing this graph through maximum matchings and minimum vertex covers. It decomposes a system into over-constrained, well-constrained and under-constrained subsystems [START_REF] Dulmage | Coverings of bipartite graphs[END_REF]. It has been used for debugging in equation-based modeling systems such as Modelica (Bunus and Fritzson, 2002a). Serrano has been interested in using graph-theoretic algorithms to prevent over-constrained systems where all constraints and geometric entities are of DoF one [START_REF] Serrano | Constraint management in conceptual design[END_REF]. The D-M decomposition D-M(A) = A(p, q) does not require A to be square or of full structural rank. A(p, q) is split into a 4-by-4 set of coarse blocks, where A12, A23, and A34 are square with zero-free diagonals, the columns of A11 are the unmatched columns, and the rows of A44 are the unmatched rows. Any of these blocks can be empty. The whole decomposition is composed of a coarse and a fine decomposition:
A(p, q) = [ A11  A12  A13  A14
             0    0   A23  A24
             0    0    0   A34
             0    0    0   A44 ]
Coarse decomposition
• [A11 A12] is the under-determined part of the system; it is always rectangular with more columns than rows, or does not exist.
• A23 is the well-determined part of the system; it is always square.
• [A34; A44] is the over-determined part of the system; it is always rectangular with more rows than columns, or does not exist.
Fine decomposition
The above sub-matrices are further subdivided into block upper triangular form via the fine decomposition. Consequently, strongly connected components are generated and linked in a solving order (Ait-Aoudia, Jegou, and Michelucci, 2014). By analyzing each component following the solving order, the system is updated dynamically and over-constraints are generated iteratively.
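A lightweight way to obtain the coarse split in Python is sketched below (the 5-equation / 4-variable incidence structure is invented): rows left unmatched by a maximum bipartite matching of the incidence structure belong to the over-determined part [A34; A44]. The return convention of scipy's maximum_bipartite_matching (assumed here to give, for each row, the matched column or -1 when perm_type='column') should be checked against the installed version.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# Incidence structure: rows = equations, columns = variables (1 = occurrence).
S = csr_matrix(np.array([[1, 1, 0, 0],
                         [0, 1, 1, 0],
                         [0, 0, 1, 1],
                         [1, 0, 0, 1],
                         [1, 1, 1, 1]]))

match = maximum_bipartite_matching(S, perm_type='column')
unmatched_rows = np.flatnonzero(match == -1)
print("rows of the over-determined part:", unmatched_rows)   # exactly one here
```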
Results of evaluating the method are:
• Criteria set 1 Equations of [A34; A44] are structural over-constraints (a ). Evaluation of criteria b and c is meaningless since the method is structural (b,c ).
• Criteria set 2 [A34; A44] obtained after the coarse decomposition is the structurally over-constrained part (d⊕). The strongly connected components obtained after the fine decomposition are the structural rigid subsystems (e ). The method does not discuss the analysis of singular configurations (f ).
• Criteria set 3 Since the modeling is based on system of equations, any geometric constraints that are able to be transformed into system of equations can be analyzed by the method. Therefore, the results for evaluating this set of criteria are (g⊕ , h⊕ , i⊕, j⊕ ).
• Criteria set 4 Structural over-constraints are contained in O_G and output in a single-pass way along with the generation of O_G (k⊕). The method does not discuss the debugging of the over-constraints (l?).
Methods corresponding to the Definition 10
Linear methods In this section, we gather together the existing techniques from linear algebra that are capable of analyzing linear constraint systems. We consider linear constraint systems in the matrix form Ax = b, where A has dimension m × n and n ≥ m ≥ r, with r being the rank. The notation A[i : j, l : k] denotes the matrix obtained by slicing the ith to jth rows and the lth to kth columns of A. According to [START_REF] Strang | Linear Algebra and Its Applications[END_REF], methods such as Gauss-Jordan elimination, LU and QR factorization are well suited to locating inconsistent/redundant equations.
Gauss-Jordan Elimination with partial pivoting
The elimination process is terminated once a reduced row echelon form is obtained (an example is shown in figure 2.25). Rows are exchanged at the start of the kth stage to ensure that:

$$|A^{(k)}_{kk}| = \max_{i \geq k} |A^{(k)}_{ik}| \tag{2.6}$$

where A^{(k)}_{ik} = A[i, k] at the kth stage.

[Figure 2.25: example of a matrix reduced to row echelon form by Gauss-Jordan elimination.]

Results of evaluating the method are as follows:
• Criteria set 1 The method allows for detecting redundant and conflicting constraints (a,b⊕). However, the method does not tell how to find spanning groups of an over-constraint (c?).
• Criteria set 2 The method does not decompose a system, so there is no meaning in evaluating it with respect to the system decomposition criteria (d,e,f ).
• Criteria set 3 The method analyzes linear equations. Therefore, any geometry (g⊕ ) with linear constraints (h ) in 3D or 2D space (j⊕ ) modeling at equation level (i⊕) can be handled by the method.
• Criteria set 4 The over-constraints are output all at once (k⊕) after detection without debugging them (l ).
Light and Gossard used this method to detect invalid dimensioning schemes [START_REF] Light | Variational geometry: a new method for modifying part geometry for finite element analysis[END_REF]. Note that, in the following sections, G-J is short for Gauss-Jordan elimination with partial pivoting.
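A compact way to experiment with this idea is sketched below; it reuses the linear system 2.3 and sympy's exact rref routine rather than the floating-point partial-pivoting variant described above. Zero rows of the reduced augmented matrix correspond to dependent equations, and a pivot appearing in the b column signals a conflict.

```python
import sympy as sp

A = sp.Matrix([[1, 1, 1, 1], [1, 2, 3, 1], [1, -2, 1, 1],
               [6, 0, 1, 2], [8, 0, 5, 4], [11, 1, 10, 7]])
b = sp.Matrix([1, 4, 5, 7, 17, 27])

rref, pivots = A.row_join(b).rref()
print(rref)                            # one zero row: a dependent equation
print("pivot columns:", pivots)        # a pivot in column 4 (the b column) -> conflict
print("rank A:", A.rank(), " rank [A|b]:", A.row_join(b).rank())
```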
LU factorization with partial pivoting
The method is a high-level algebraic description of G-J [START_REF] Okunev | Necessary and sufficient conditions for existence of the LU factorization of an arbitrary matrix[END_REF]. The process is shown in figure 2.26, where P is the permutation matrix reordering the rows. The number of non-zero diagonal elements of U is the rank r. The last m - r rows of the reordered matrix P·A correspond to the numerical over-constraints. However, the factorization itself does not operate directly on b, which means that distinguishing redundant from conflicting constraints is not directly available. To know them, we need the further extension:
$$Ax = b, \quad PA = LU \;\Rightarrow\; Ux = L^{-1}Pb \tag{2.7}$$
The distinguishing step is now similar to that of G-J: by comparing the last m - r elements of L^{-1}Pb with 0, redundant and conflicting constraints can be told apart. However, one has to note that this deduction holds only under the condition that L is invertible.
To the best of our knowledge, using this method to detect geometric over-constraints has not been convincingly demonstrated in the literature. The evaluation of the method with respect to the five sets of criteria is the same as that of G-J. Note that, in the following sections, LU is short for LU factorization with partial pivoting.
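The sketch below runs this factorization on system 2.3 with scipy (whose convention A = P L U differs slightly from the P A = L U notation above); the rows that are not selected as pivots are the candidate over-constraints, and rank detection from the diagonal of U is only a numerical heuristic.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[1, 1, 1, 1], [1, 2, 3, 1], [1, -2, 1, 1],
              [6, 0, 1, 2], [8, 0, 5, 4], [11, 1, 10, 7]], dtype=float)

P, L, U = lu(A)                                  # A = P @ L @ U, hence P.T @ A = L @ U
rank = int(np.sum(np.abs(np.diag(U)) > 1e-10))
order = np.argmax(P, axis=0)                     # order[i] = original row at position i
print("numerical rank            :", rank)
print("rows kept as pivots       :", order[:rank])
print("candidate over-constraints:", order[rank:])
```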
QR Factorization with column pivoting Before applying QR, the coefficient matrix A should first be transposed (A := A^t), since QR operates on the columns of a matrix. QR exchanges columns at the start of the kth stage to ensure that:

$$\|A^{(k)}_{k}(k{:}m)\|_2 = \max_{j \geq k} \|A^{(k)}_{j}(k{:}m)\|_2 \tag{2.8}$$
Similar to LU, further deduction is needed to distinguish redundant and conflicting constraints. First, the matrix Q(:, 1 : r) is inverted using the following equation:
A t (:, 1 : r) = Q(:, 1 : r).R(1 : r, 1 : r) (2.9)
and is then used in the following equation:
A t (:, r + 1 : n) = Q(:, 1 : r).R(1 : r, r + 1 : n) (2.10)
thus providing the following relationship between the two sliced matrices A t (: , r + 1 : n) and A t (:, 1 : r):
A t (:, r + 1 : n) = A t (:, 1 : r).R(1 : r, 1 : r) -1 R(1 : r, r + 1 : n) (2.11)
The relationship between over-constraints and independent constraints are revealed in the matrix R(1 : r, 1 : r) -1 R(1 : r, r+1 : n) in equation 2.11. From the matrix, the spanning group of an over-constraint could also be known.
Finally, to identify the redundant and conflicting equations, the new b vector after factorization is redefined as follows:
b new = b(r + 1 : n) -b(1 : r).R(1 : r, 1 : r) -1 R(1 : r, r + 1 : n) (2.12)
Redundant and conflicting constraints can be further distinguished by comparing the value of the last m -r elements of b new with 0.
The method was adopted by Hu et al [START_REF] Hu | Over-constraints detection and resolution in geometric equation systems[END_REF] to detect over-constraints of B-splines. The evaluation of the method with respect to the five sets of criteria is the same as that of G-J. We use QR as shorthand for this method in the following sections.
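The whole pipeline fits in a few lines of scipy, again on system 2.3; the columns rejected after the first r pivots of the factorization of A^t point to the over-constraints, and b_new of equation 2.12 tells redundant (zero entry) from conflicting (non-zero entry) ones.

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[1, 1, 1, 1], [1, 2, 3, 1], [1, -2, 1, 1],
              [6, 0, 1, 2], [8, 0, 5, 4], [11, 1, 10, 7]], dtype=float)
b = np.array([1, 4, 5, 7, 17, 27], dtype=float)

Q, R, piv = qr(A.T, pivoting=True)               # A.T[:, piv] = Q @ R
r = int(np.sum(np.abs(np.diag(R)) > 1e-10))
independent, extra = piv[:r], piv[r:]
print("independent equations:", independent)
print("over-constraints     :", extra)

coeffs = np.linalg.solve(R[:r, :r], R[:r, r:])   # R(1:r,1:r)^-1 R(1:r,r+1:n)
b_new = b[extra] - b[independent] @ coeffs
print("b_new:", np.round(b_new, 6))              # zero -> redundant, non-zero -> conflicting
```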
Non-linear methods Detecting over-constraints in non-linear geometric constraint systems is more complicated than in linear ones. Since non-linear detection methods include symbolic methods from abstract algebra, we first introduce the mathematical fundamentals needed to understand them. The following two theorems are taken from [START_REF] Cox | Ideals, varieties, and algorithms[END_REF]; readers can find more details about concepts like ideals, affine varieties, etc. in that book.
Theorem 1 For a system of polynomial equations
f 0 = f 1 = • • • = f s = 0, where f 0 , f 1 , • • • , f s ∈ C[x 1 , • • • , x n ]; If affine variety W (f 1 , • • • , f s ) = ∅ while W (f 0 , f 1 , • • • , f s ) = ∅, then f 0 = 0 is a conflicting equation; If W (f 0 , f 1 , • • • , f s ) = W (f 1 , • • • , f s ) = ∅, then f 0 = 0 is a redundant equa- tion; If W (f 0 , f 1 , • • • , f s ) = ∅, W (f 1 , • • • , f s ) = ∅ and W (f 0 , f 1 , • • • , f s ) = W (f 1 , • • • , f s ), then f 0 = 0 is an independent equation. Theorem 2 (Hilbert s weak Nullstellensatz) Let k be an alge- braically closed field. If f, f 1 , ..., f s ∈ k[x 1 , ..., x n ] are such that f ∈ I(W (f 1 , • • • , f s )), then there exists an integer m ≥ 1 such that f m ∈ f 1 , • • • , f s (and conversely).
Based on Theorem 2, Michelucci et al. (Michelucci and Foufou, 2006a) deduced Corollary 1.
Corollary 1 Let k be an algebraically closed field and W(f_1, ⋯, f_s) ≠ ∅. If f, f_1, ⋯, f_s have a common root w, then rank([∇f(w), ∇f_1(w), ⋯, ∇f_s(w)]^T) < s + 1.
Informally, Corollary 1 states that if a system of polynomial equations contains redundant equations, then the Jacobian matrix of the equations evaluated in the affine space (solution space) must be row-rank deficient. The converse, however, does not hold: if there exists a point of the solution space where the Jacobian matrix is rank deficient, the system of polynomial equations does not necessarily contain redundant equations. A typical example is the singular configuration of section 2.3.2: the system is row-rank deficient at the solution space although it does not contain geometric over-constraints; it is the singular configuration that causes the rank deficiency. Therefore, to detect over-constraints by analyzing the Jacobian matrix, the matrix should be computed on configurations belonging to the solution space while avoiding the singular ones. We propose a schema for determining the existence of over-constraints through Jacobian analysis in figure 2.28: compute the Jacobian matrix at a configuration taken from the affine space; if the rank is full, there is no over-constraint; otherwise, check whether the configuration is singular; if it is not, over-constraints exist, otherwise test another configuration of the affine space. The loop means that one has to keep generating Jacobian matrices at different points until the existence or non-existence of over-constraints can be decided; it is a recursive process of finding points that allow this decision. In practice, however, the affine space is sometimes hard to find or may not even exist, and the singularity of a configuration is sometimes difficult to test. A lot of research work has been done to address these two issues.
The first group of methods comprises symbolic algebraic methods, which compute a Grobner basis for the given system of equations. Algorithms to compute these bases include those by Buchberger [START_REF] Bose | Gröbner bases: An algorithmic method in polynomial ideal theory[END_REF] and by Wu-Ritt (Chou, 1988b; [START_REF] Wu | Mechanical theorem proving in geometries: Basic principles[END_REF]). These methods essentially transform the system of polynomial equations into a triangular system whose solutions are those of the given system.
Grobner Basis (GB)
Assume a set of polynomials f_0, f_1, ⋯, f_s ∈ C[x_1, ⋯, x_n]. The reduced Grobner basis rgb_old of the ideal ⟨f_1, ⋯, f_s⟩ satisfies rgb_old ≠ {1} and rgb_old ≠ {0} with respect to any ordering. Let rgb_new be the reduced Grobner basis of the ideal ⟨f_0, f_1, ⋯, f_s⟩. If rgb_new = {1}, f_0 = 0 is a conflicting equation; if rgb_new ≡ rgb_old, f_0 = 0 is a redundant equation; if rgb_old ⊂ rgb_new, f_0 = 0 is an independent equation [START_REF] Cox | Groebner Bases[END_REF].
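As an illustration, this classification rule can be reproduced with a computer algebra system. The following sympy sketch is ours (function name and toy equations are illustrative); sympy's groebner returns a reduced basis by default.

from sympy import groebner, symbols, S

def classify_with_groebner(basis_eqs, f_new, gens):
    """Classify f_new = 0 against basis_eqs (all = 0) using reduced Groebner bases."""
    gb_old = groebner(basis_eqs, *gens, order='lex')
    gb_new = groebner(list(basis_eqs) + [f_new], *gens, order='lex')
    if gb_new.exprs == [S.One]:
        return 'conflicting'
    if set(gb_new.exprs) == set(gb_old.exprs):
        return 'redundant'
    return 'independent'

# Example: a circle of radius 1 as the base ideal
x, y = symbols('x y')
basis = [x**2 + y**2 - 1]
print(classify_with_groebner(basis, 2*(x**2 + y**2) - 2, (x, y)))  # redundant
print(classify_with_groebner(basis, x**2 + y**2 - 4, (x, y)))      # conflicting
print(classify_with_groebner(basis, x - y, (x, y)))                # independent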
Results of evaluating the method are as follows:
• Criteria set 1 Obviously, the above method can tell whether a constraint is redundant or conflicting (a,b⊕). However, the method does not support finding the spanning group of an over-constraint (c ).
• Criteria set 2 The method was initially designed for solving polynomial equations. It is therefore meaningless to evaluate it against the criteria on system decomposition (d,e ). The method does not analyze the singularity of a configuration (f ).
• Criteria set 3 The method analyzes non-linear equations. Therefore, any geometry (g⊕ ) with non-linear constraints (h⊕) in 3D or 2D space (j⊕ ), modeled at the equation level (i⊕), can be handled by the method.
• Criteria set 4 To detect a set of over-constraints, the process of computing the reduced Grobner basis requires inputting one equation at a time. Therefore, the over-constraints are not generated all at once but iteratively (k ). Debugging these over-constraints is not supported (l ).
The construction of a Grobner basis is a potentially time-consuming process. Hoffmann used this technique for geometric reasoning between geometric configurations [START_REF] Hoffmann | Geometric and Solid Modeling: An Introduction[END_REF]. In terms of detecting geometric over-constraints, Kondo [START_REF] Kondo | Algebraic method for manipulation of dimensional relationships in geometric models[END_REF] first used the Grobner basis method to test dependencies among 2D dimension constraints.
Wu-Ritt characteristic sets
Let a system of polynomial equations P = {f_0 = f_1 = ⋯ = f_s = 0}, where f_0, f_1, ⋯, f_s ∈ Q[x_1, ⋯, x_n], represent a system of constraints, and let Zero(P) denote the set of all common zeros of {f_0, f_1, ⋯, f_s}. The system contains redundant equations iff there exists a polynomial p such that:
Zero(P − {p}) ≡ Zero(P)    (2.13)
For the polynomial set P , its zero set can be decomposed into a union of zero sets of polynomial sets in triangular form through Wu-Ritt's zero decomposition algorithm:
Zero(P) = ⋃_{1 ≤ i ≤ k} Zero(TS{i} / I{i})    (2.14)
where each TS{i} is a polynomial set in triangular form, I{i} is the product of the initials of the polynomials in TS{i}, and k is the number of zero sets. The system contains inconsistent equations iff k ≡ 0 (Chou, 1988a). In the work of Gao and Chou [START_REF] Gao | Solving geometric constraint systems. II. A symbolic approach and decision of Rc-constructibility[END_REF], complete methods are presented for identifying conflicting and redundant constraints based on Wu-Ritt's decomposition algorithm. The algorithm is also used to solve Pappus-type problems, i.e. to decide whether a configuration can be drawn with ruler and compass.
Results of evaluating Gao's method are as follows:
• Criteria set 1 As discussed above, the method enables the detection of conflicting and redundant constraints (a,b⊕) but cannot find the spanning groups (c ).
• Criteria set 2 The method decomposes a set of polynomials into a union of zero sets in triangular form. No over-constrained subparts or rigid subsystems are generated, and singular configurations are not analyzed (d,e,f ).
• Criteria set 3 The method analyzes non-linear equations. Any geometry (g⊕ ) with non-linear constraints (h⊕) in 3D or 2D space (j⊕ ), modeled at the equation level (i⊕), is applicable.
• Criteria set 4 The results are the same as those of the Grobner basis evaluation.
Symbolic detection methods are sound in theory but suffer from a high computational cost. As discussed previously, the worst case can be doubly exponential. Moreover, the reduced Grobner basis has to be recomputed every time an equation is to be analyzed. Therefore, these methods are not able to deal with large systems of equations.
The second group of methods analyzes the Jacobian matrix of the equation systems. Contrary to symbolic methods, these numerical methods are more practical in terms of computation but are theoretically deficient in some cases. On the one hand, if the affine space of a system does not exist, an equivalent system sharing a similar Jacobian structure needs to be found. On the other hand, even if the Jacobian matrix of a configuration is row-rank deficient in the affine space, one has to make sure that the configuration is not singular.
Perturbation method
Haug proposed a perturbation method to deal with singular configurations and detect redundant constraints in mechanical systems [START_REF] Haug | Computer aided kinematics and dynamics of mechanical systems[END_REF]. Initially, assume a system of equations Φ(q) = 0 whose Jacobian matrix Φ_q is rank deficient at q. As discussed before, this is not enough to determine the existence of over-constraints, since a singular configuration also makes the Jacobian matrix rank deficient. He therefore suggested analyzing the Jacobian matrix at one more configuration, obtained as follows:
• Add a small perturbation δq to q and obtain Φ q δq = 0. This process is based on the Implicit Function Theorem [START_REF] Krantz | The Implicit Function Theorem: History, Theory, and Applications[END_REF].
• Applying G-J elimination to Φ_q, the system Φ_q δq = 0 is transformed into
[Φ^I_u  Φ^I_v ; 0  Φ^R_v] · [δu ; δv] = 0
where Φ^I_u is the upper triangular matrix with 1s as diagonal elements and Φ^R_v can be treated as a zero matrix under the given tolerance. The equations of Φ(q) = 0 corresponding to the Φ^I_u part, denoted Φ^I(q) = 0, are independent.
• Now Φ_q δq = 0 can be simplified into Φ^I_u δu + Φ^I_v δv = 0, and thus
δv = −(Φ^I_v)^{-1} Φ^I_u δu,   δq = [δu ; δv] = [δu ; −(Φ^I_v)^{-1} Φ^I_u δu]
• Assume q is perturbed to a new point q* satisfying q* = q + δq. To ensure that it lies in the affine space, it should satisfy Φ(q*) = 0. This is equivalent to Φ^I(q*) = 0, since the latter is composed of all the independent equations of the former.
• Solving Φ^I(q*) = 0 with q* = q + δq and δq = [δu ; −(Φ^I_v)^{-1} Φ^I_u δu], the value of q* is obtained.
• Compute the rank of the Jacobian matrix Φ_{q*} at q* and check whether it is rank deficient.
We can see from the above that obtaining an appropriate value of the perturbation δq, so that q* lies in the affine space, is the main part of the work.
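A rough numerical sketch of this idea is given below; for simplicity it replaces the explicit G-J construction of δq with a least-squares projection of the perturbed point back towards the constraint set, so it is only a crude approximation of Haug's procedure (step size, tolerance and names are ours).

import numpy as np
from scipy.optimize import least_squares

def perturb_and_recheck(phi, jac, q, eps=1e-3, tol=1e-9, seed=None):
    """Check the Jacobian rank at q and at a second point q* near the constraint set.

    phi(q) returns the residual vector of Phi(q), jac(q) its Jacobian.
    Rank deficiency at both points suggests over-constraints rather than
    a mere singular configuration.
    """
    rng = np.random.default_rng(seed)
    m = np.asarray(phi(q)).size
    r_q = np.linalg.matrix_rank(jac(q), tol=tol)
    q_perturbed = q + eps * rng.normal(size=np.asarray(q).shape)
    sol = least_squares(phi, q_perturbed)          # drive Phi(q*) towards 0
    r_qstar = np.linalg.matrix_rank(jac(sol.x), tol=tol)
    return (r_q < m) and (r_qstar < m)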
Results of evaluating the method are as follows:
• Criteria set 1 The method enables the detection of geometric over-constraints (a⊕) but does not distinguish redundant from conflicting constraints (b ). Finding the spanning groups is also not supported (c ).
• Criteria set 2 The method mainly detects the over-constraints by analyzing the Jacobian matrix of the whole system. It is therefore meaningless to evaluate it with respect to the system decomposition criteria (d,e,f ).
• Criteria set 3 The method analyzes both linear and non-linear equation systems. Therefore, any geometry (g⊕ ) with non-linear and linear constraints (h⊕ ) in 3D or 2D space (j⊕ ), modeled at the equation level (i⊕), can be handled by the method.
• Criteria set 4 The over-constraints are generated in a single pass, since the Jacobian matrix analysis is performed on the whole system at once (k⊕). However, debugging the over-constraints is not addressed (l ).
His method selects two points in the affine space to determine the existence of geometric over-constraints. If the Jacobian matrix has full rank at either point, then there is no over-constraint. However, if the rank of the Jacobian matrix is deficient at both points, then there exist geometric over-constraints.
Numerical Probabilistic Method (NPM)
The roots of a system of equations can sometimes be hard to find or may not even exist; in these cases, the affine space does not exist. Sebti Foufou et al. [START_REF] Foufou | Numerical decomposition of geometric constraints[END_REF] suggest a Numerical Probabilistic Method (NPM), which tests the Jacobian matrix at random points instead of points of the affine space. However, there is a risk that the Jacobian matrix is row-rank deficient at the chosen points because the corresponding configurations happen to be singular. They suggest testing more points to reduce the probability of such a case and, to gain more confidence, consider that testing at 10 different points should be sufficient. NPM is practical in terms of computation but is not sound in theory, since the testing points are not necessarily in the affine space.
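A minimal numpy sketch of NPM follows (the number of trials, tolerance and names are ours); the jacobian argument is any user-supplied function returning the m × n Jacobian of the constraint equations at a point.

import numpy as np

def npm_over_constrained(jacobian, n_vars, trials=10, tol=1e-9, seed=None):
    """NPM sketch: evaluate the Jacobian at random points; the system is flagged
    as over-constrained only if it is row-rank deficient at every tested point
    (an unlucky singular sample could still fool the test)."""
    rng = np.random.default_rng(seed)
    best_rank, n_eqs = 0, 0
    for _ in range(trials):
        J = np.asarray(jacobian(rng.normal(size=n_vars)))
        n_eqs = J.shape[0]
        best_rank = max(best_rank, np.linalg.matrix_rank(J, tol=tol))
    return best_rank < n_eqs, best_rank

# Toy example: two distance equations written between the same pair of 2D points
def jac(q):
    x1, y1, x2, y2 = q
    row = np.array([2*(x1-x2), 2*(y1-y2), -2*(x1-x2), -2*(y1-y2)])
    return np.vstack([row, 2*row])           # the second row is dependent

print(npm_over_constrained(jac, 4))          # (True, 1)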
Results of evaluating the method are as follows:
• Criteria set 1 The method enables the identification of numerical over-constraints (a⊕). However, it can neither distinguish redundant from conflicting constraints nor find the spanning group of an over-constraint (b,c ).
• Criteria set 2 The method can be used to decompose a system into rigid subsystems (e ). However, decomposition into over-constrained components as well as the analysis of singular configurations are not supported (d,f ).
• Criteria set 3 The method analyzes both linear and non-linear equation systems. Therefore, any geometry (g⊕ ) with non-linear and linear constraints (h⊕ ) in 3D or 2D space (j⊕ ), modeled at the equation level (i⊕), can be handled by the method.
• Criteria set 4 Numerical over-constraints are detected all at once (k⊕) but debugging them is not supported (l ).
Witness Configuration Method (WCM)
Instead of randomly selecting configurations, Michelucci et al. suggested studying the Jacobian structure at witness configurations where the incidence constraints are satisfied (Michelucci and Foufou, 2006b). The witness configuration and the target configuration share the same Jacobian structure, which is non-singular in the affine space. As a consequence, all the numerical over-constraints can be identified (Michelucci and Foufou, 2006a). More recently, Moinet et al. developed tools to identify conflicting constraints by analyzing the witness of a linearized system of equations [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. Their approach has been successfully applied to the well-known Double-Banana geometry.
For a geometric constraint system represented by a set of equations F(U, X) = 0, U denotes a set of parameters with prescribed values U_W (T for target) and X is the vector of unknowns; the solution is denoted X_T. A witness is a couple (U_W, X_W) such that F(U_W, X_W) = 0. Most of the time, U_W and X_W differ from U_T and X_T respectively. The witness (U_W, X_W) is not the solution but shares the same combinatorial features with the target (U_T, X_T), even if the witness and the target lie on two distinct connected components of the solution set. Therefore, analyzing a witness allows the detection of the numerical over-constraints of a system ([START_REF] Michelucci | Using the witness method to detect rigid subsystems of geometric constraints in CAD[END_REF]; Michelucci et al., 2006). These numerical over-constraints include not only the structural ones but also the geometric redundancies. The method proceeds in two steps: step one aims at generating the witness configuration, while at step two QR is applied to the Jacobian matrix A. As a result, the rows of equations are re-ordered by P and the number of basis constraints is revealed by r. Finally, coming back to the re-ordered original equations, the first r equations are the basis constraints while the remaining ones are the numerical over-constraints. Note that QR can be replaced by G-J in this process; the result would differ from that of QR since the two methods adopt different row-sorting strategies.
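Assuming a generic witness X_W is available, the coarse detection step of the WCM can be sketched as follows with numpy/scipy (QR with column pivoting applied to the transposed Jacobian, as for the linear case above; the function name and tolerance are ours).

import numpy as np
from scipy.linalg import qr

def wcm_coarse_detection(jacobian, x_witness, tol=1e-9):
    """WCM coarse step: split the equations, evaluated at the witness, into
    basis constraints and numerical over-constraints."""
    J = np.asarray(jacobian(x_witness), dtype=float)   # m x n Jacobian at the witness
    m = J.shape[0]
    _, R, piv = qr(J.T, pivoting=True)                 # J.T[:, piv] = Q @ R
    r = int(np.sum(np.abs(np.diag(R)) > tol))          # numerical rank
    basis = sorted(int(i) for i in piv[:r])
    over = sorted(int(i) for i in piv[r:m])
    return basis, over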
The results of evaluating the method are the same as those of NPM (section 2.4.2), except that the property Correct is retained (m⊕). Michelucci et al. (Michelucci and Foufou, 2006a) have proved that the WCM can identify all the dependencies among constraints: if these dependent constraints are removed, the remaining constraints are independent.
WCM Extension
Thierry et al. [START_REF] Thierry | Extensions of the witness method to characterize under-, over-and well-constrained geometric constraint systems[END_REF] extend the WCM to incrementally detect over-constrainedness and thus to compute a well-constrained boundary system. They also design the so-called W-decomposition to identify all well-constrained subsystems, which manages to decompose systems that are non-decomposable by classic combinatorial methods.
Results of evaluating the method are as follows:
• Criteria set 1 The results of the evaluation within this set of criteria are the same as those of NPM.
• Criteria set 2 The W-decomposition enables the efficient identification of the maximal well-constrained subsystems of an articulated system and further decomposes a rigid system into well-constrained subsystems (e ), but finding over-constrained components is not discussed (d?). Finding the spanning groups is not supported (f ).
• Criteria set 3 The results are the same as those of NPM.
• Criteria set 4 Working on the witness, the naive idea would be to try to remove constraints one by one and, at each step, recompute the rank to determine whether the constraint is redundant with the remaining set. However, the authors point out that proceeding this way is expensive. They consider an incremental construction of the geometric constraint system that identifies the set of redundant constraints at no additional cost compared with the basic detection of redundancy (k ). The method does not support debugging over-constraints (l?) [START_REF] Thierry | Extensions of the witness method to characterize under-, over-and well-constrained geometric constraint systems[END_REF].
Generating a witness configuration
Sometimes, certain geometric elements happen to be drawn with specific properties (collinearity, coplanarity, etc.) that do not represent an actual constraint; the sketch is then not typical of the expected solution (for example, it does not satisfy the incidence constraints) and thus cannot be used as a witness configuration. A witness configuration is generic when it remains rigid before and after an infinitesimal perturbation; likewise, if the sketch is not rigid before the perturbation, it should remain non-rigid after it (combinatorial rigidity). For example, figure 2.30-a) is not generic: a small perturbation on the dimensions of the bars results in a non-rigid sketch, shown in figure 2.30-b). Conversely, figure 2.30-b) is generic: if a small perturbation is introduced in its dimensions, it remains non-rigid. Usually, non-generic sketches contain aligned line segments, as in figure 2.30-a); the collinearity induces artificial redundancy between the constraints associated with the collinear vectors. As a result, before using the WCM, one has to make sure that the witness configuration is typical of the expected solution.
Here, we adapted the algorithm of Moinet [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF] to generate generic witness configurations; a small numerical sketch is given after the list of steps. Other methods for generating witness configurations can be found in [START_REF] Thierry | Extensions of the witness method to characterize under-, over-and well-constrained geometric constraint systems[END_REF][START_REF] Kubicki | Witness computation for solving geometric constraint systems[END_REF]. Moinet's algorithm contains the following steps:
1. Compute the Jacobian matrix of the system of equations.
2. Calculate the rank r_old of the Jacobian matrix at the initial sketch.
3. Randomly perturb the initial sketch (usually provided by the user), regenerate the Jacobian matrix, and recompute the rank r_new at the new position (new sketch).
4. If r_new > r_old, replace the initial sketch by the new one and reiterate from the third step.
5. Otherwise, the old sketch is generic.
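The following is a compact numerical transcription of these steps; the perturbation amplitude, iteration cap and tolerance are choices of ours and not part of the original algorithm.

import numpy as np

def generic_witness(jacobian, x0, max_iter=50, scale=1e-3, tol=1e-9, seed=None):
    """Perturb an initial sketch until its Jacobian rank stops increasing."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    r_old = np.linalg.matrix_rank(jacobian(x), tol=tol)
    for _ in range(max_iter):
        x_new = x + scale * rng.normal(size=x.shape)
        r_new = np.linalg.matrix_rank(jacobian(x_new), tol=tol)
        if r_new > r_old:              # the perturbed sketch is "more generic"
            x, r_old = x_new, r_new
        else:
            return x, r_old            # rank stable: the sketch is generic
    return x, r_old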
Hybrid methods
Serrano's method
Serrano analyzes systems of equations (h⊕) to select a well-constrained, solvable subset from candidate constraints [START_REF] Serrano | Constraint management in conceptual design[END_REF]. His method first detects structural over-constraints (a ) when some equations remain uncovered after maximum matching. To further detect numerical over-constraints (a⊕) within strongly connected components (e ), symbolic and numerical methods are used. The symbolic method consists of pure symbolic operations, where constraints are eliminated one by one by substituting one variable into the other equations until a final expression is obtained. Non-linear equations are also linearized and the G-J elimination method is applied to analyze them. The process is repeated until redundant and conflicting constraints are all distinguished (b⊕). Moreover, the author suggests that the spanning group of an over-constraint is the set of constraints within the same strongly connected component (c⊕). However, linearizing non-linear systems for detection can produce wrong results (an example will be shown in section 4.2.4).
His constraint manager enables designers to generate geometric over-constraints iteratively (k ). When a geometric over-constraint is detected (l⊕), the constraint manager provides three alternatives, from which users can select the one that best satisfies their needs. Finally, as the modeling is done with equations (i⊕), his method is applicable to both free-form and Euler geometries (g⊕ ), linear and non-linear constraints (h⊕ ), and 3D and 2D configurations (j⊕ ).
The results of all the evaluations are summarized in table 2.7.
Numerical analysis
Since the constraints of the two configurations are linear, both G-J and QR can be used. Both methods identify equations {e7, e8}, respectively equations {e15, e16}, as the numerical over-constraints of Curve 2, respectively of Curve 3. However, in terms of fine detection, only G-J further identifies the conflicting and redundant equations. For Curve 2 (upper part of figure 2.35), equation e7 is conflicting (the last term of the row is not null) and e8 is redundant (the last term of the row is null). Similarly, for Curve 3 (lower part of figure 2.35), equations e15 and e16 are redundant (zero values in the last column). Thus, as equations {e7, e8} come from the same geometric constraint p4, respectively {e15, e16} come from constraint p8, they cannot be split in two and the whole constraint p4 is to be considered as conflicting, respectively p8 as redundant. The result of the whole detection process is shown in table 2.10. For free-form linear constraint systems, G-J is capable of performing both coarse and fine detection, whereas QR is applicable to coarse detection only.
Non-linear use case: Double Banana
Problem description
This section presents a non-linear use case with the Double Banana geometry (figure 2.36). The variables, constraints and parameters of the Double Banana configuration are exactly the same as in the work of Moinet et al. [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. The difference is that their modeling is based on a coordinate-free system whereas here it is Cartesian-based. As shown in figure 2.36, the Double Banana is defined by 8 points whose 3D coordinates are unknown; there are thus 24 variables whose values are to be found. The Double Banana is constrained by 18 non-linear equations imposing distances between couples of points, as detailed in figure 2.37. The figure in Fig. 9 is drawn using the vertices coordinates (listed in Fig. 8); they do not fulfil the length specifications (also in Fig. 8). The resolution of the GCSP is assigned to the solver developed by the authors [5]. The expected result is a geometry that conforms to all 18 given specifications. As this case study is over-constrained, the solver is unable to find a geometry that satisfies the 18 specifications. Consequently, this research proposes to add a numerical problem analysis before solving the GCSP. The result of this analysis gives a set of compatibility equations required to find a new set of consistent specifications stored in S′.
Analysis of the numerical problem
The numerical analysis of the problem is performed by adapting the witness method to the case of the double banana structure.
The first part consists in finding a generic sketch. A linear system is generated from the initial configuration and its rank is calculated as 17. The points of the initial sketch are then perturbed randomly, a second system is generated and its rank is calculated: it is also 17. It can therefore be concluded that the initial object is generic. Furthermore, the rank of matrix A is lower than the number of constraints (17 < 18); we can thus assume (based on Michelucci's results) that the case is over-constrained. Two options are then offered to the users: either they reformulate the problem themselves, or they let the algorithm manage this overstress. We now describe the second scenario. In this example, an extract of the matrix A of size 18 × 144 (144 = 18 · 8) is given in (34) (see Box I). The computation of the Gaussian elimination using the Matlab rref() function gives the matrix presented in (35) (see Box II).
From this point we can first observe that the 18th row is full of 0s, meaning that the 18th specification is redundant. Secondly, it can be seen that the columns numbered {12, 18, 20, 21, . . . , 144} do not contain any elimination value, which means that the values of the corresponding unknowns can be chosen freely. This second feature is not considered in the method presented.
Consequently, to have a well-posed problem, the proposed solution is to release the 18th constraint. The new reduced problem is therefore composed of 17 specifications which are stored in S.
This new well-posed problem is sent to the solver. A solution is found in 4 iterations. In Fig. 10, we can see all the configurations of the object obtained during the solving; an object can be drawn at each step of the algorithm. Convergence is reached very quickly: the desired configuration is almost obtained after only one iteration, and the 4th stage is needed for the object to reach standard convergence (ϵ = 1e-6 in Algorithm 1).
The solution computed is a set of points called P′, with the same topology as before. The geometrical solution is illustrated by Fig. 11. The analysis of this solution gives the length parameters presented in Table 1. It can be seen that the 18th constraint that was removed is equal to L_18 = 27.622761 instead of 32 (the designer's original goal). This solution is not the exact desired solution; it is probably just one solution close to the designer's aims. The geometrical solver also gives the compatibility equations (in this example, only one equation describes the dependence between the 18 parameters, since there are only 17 independent ones) [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF].
Structural analysis: BFS and D-M decomposition
As in the previous example, the structural analysis methods are tested first. The results show that the whole system forms a single connected component that is structurally under-constrained. However, we will see in the following that a conflicting constraint therefore remains undetected.
Numerical analysis
The results for coarse and fine detection are shown in table 2.11 and table 2.12 respectively. Since two different row-sorting strategies are used, two different numerical over-constraints are detected (e14 for G-J and e9 for QR) and neither of them is the same as the one found by Moinet et al. (e18). At the geometric level, the identified equations correspond to lengths imposed between points of the Double Banana. Two aspects can thus be discussed when analyzing the results.
First, it is interesting to know how far the lengths 14 (A14), 9 (A9) and 18 (A18) are from their initial specifications. This can be achieved by computing the deviation between the lengths that should be reached (i.e. the imposed lengths detailed in figure 2.37) and the lengths obtained by releasing the corresponding over-constraint, then solving and evaluating the new lengths. The column dev of table 2.11 gathers the deviations obtained for the identified over-constraints. For example, the identified over-constraint e14 is associated with the length 14 (A14), initially set to 35. If this equation is removed, the system can be solved and we obtain 14 (A14) = 35.84, which gives a relative deviation of 0.84/35 = 0.024. The deviations obtained using G-J or QR are quite similar and rather small (about 10 times smaller) compared to the deviation 4.38/32 ≈ 0.137 obtained when removing e18, identified by the algorithm of Moinet et al. [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF].
Then, it is interesting to know the degree of dependency between the constraints of the final system. This can be evaluated by computing the condition number of the Jacobian matrix at the solution point of the new system generated after removing the over-constraint (i.e. 14 (A14), 9 (A9) or 18 (A18)). The results are shown in the column cond of table 2.11. The dependencies between the remaining constraints are lower for G-J and QR than for the method of Moinet et al.
Table 2.12: Fine detection using witness configuration on the Double Banana.
In terms of fine detection methods, table 2.12 shows that equations e14 and e9 are found to be conflicting by both the incremental solving and the optimization methods, the former outperforming the latter in computational time. Grobner-Basis, however, does not converge in this amount of time, and a '?' is therefore put in the table. This is consistent with the discussion in [START_REF] Latham | Connectivity analysis: a tool for processing geometric constraints[END_REF] stating that Grobner-Basis is applicable to small systems with fewer than 10 non-linear algebraic equations. Finally, the method of Moinet et al. also identifies e18 as a conflicting constraint, but their paper does not address the time issue, which can therefore not be compared.
Testing different witness configurations
In the work of Moinet et al., the Double Banana configuration is linearized and the conflicting constraint e18 is detected from the linearized system [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. Their linearization is performed on the witness configuration generated from the initial sketch. This section extends their work by linearizing the system at witnesses generated not from the initial sketch but from random configurations. The idea is to better understand whether the selection of a witness configuration affects the detection and identification of conflicting constraints.
Table 2.13: Linearization at random witnesses to identify conflicting equations.
Table 2.13 gathers the results of this analysis. It shows that there are cases where conflicting constraints are wrongly detected: here, witness configurations 6, 8, 9 and 10 do not allow a proper identification of conflicting constraints. For example, let us consider the witness configuration 9 illustrated in figure 2.38, for which the coordinates of the points are given in table 2.14. This configuration is generic (Michelucci et al., 2006) because the rank of the matrix remains constant (and equal to 17 in the present case) even after a random perturbation of all the coordinates. The linear analysis identifies e13 as a conflicting constraint. We remove e13 and solve the remaining system using the solver based on the Levenberg-Marquardt algorithm of the MATLAB toolbox. The remaining system and the corresponding initial sketch from which the solver starts are shown in figure 2.39. The system is the same as the original one except that the edge constraint A13 is removed. However, in this case, the solution process does not converge. We cannot conclude that the remaining system is non-solvable, since the Levenberg-Marquardt algorithm may converge to a local minimum rather than a global one. When applying an incomplete solver to a system of equations, if the solving process converges then the system is indeed solvable; otherwise, complete solvers should be applied to determine the solvability of the system, which will yield reliable results on the redundant/conflicting status of an over-constraint.
Conclusion
This chapter has introduced several approaches for finding over-constraints and identifying conflicting as well as redundant equations in linear and non-linear geometric systems. Structural and numerical methods have been tested and analyzed, with results on B-Spline curves and on the Double-Banana use case. Several benefits have been obtained. The definition of over-constraints with respect to free-form geometries has been formalized. Local segments of free-form geometries can be found by BFS. Linear methods such as G-J and QR are capable of analyzing linear equation systems; these methods can also be used as part of the witness method to analyze non-linear equation systems, in which case particular attention has to be paid to the choice of the witness. In order to stay consistent with the user-specified requirements, the discussion is first performed at the level of the equations but is then moved to the level of the geometry and of the design intent, considering the constraints encapsulating the different equations. As is often the case in geometric constraint solving, large systems are first decomposed into subsystems which are then analyzed using numerical methods. Second, the treatment of the over-constraints could be extended by considering the variables as well as the objective function to be minimized as potential parameters to be found, so as to get a solution closer to the design intent (this will be addressed formally in the next chapter). Considering free-form geometries, particular attention could be paid to the treatment of configurations where the knots of the knot sequences as well as the weights of the NURBS are considered as unknowns, especially when considering free-form surface deformation. This will be further discussed in the next chapter.
Chapter 3
Detection and resolution in NURBS-based systems
In this chapter, we first introduce two detection frameworks (section 3.1) based on a combination of the methods discussed in the previous chapter. Then, we propose an original decision-support approach to address over-constrained geometric configurations. It focuses particularly on the detection and resolution of redundant and conflicting constraints, as well as on finding the corresponding spanning groups (section 3.4), when deforming NURBS patches.
Based on a series of structural decompositions coupled with numerical analyses, the proposed approach handles both linear and non-linear constraints (section 3.2). Since the result of this detection process is not unique, several criteria are introduced to drive the designer in identifying which constraints should be removed to minimize the impact on his/her original design intent (section 3.3).
Detection framework: first scenarios
In real-life applications, debugging geometric constraint systems can be done in two different ways. First, as within CAD modelers, designers can detect over-constraints interactively during the modeling process in 2D sketches (figure 1.9); in this case, the constraints are added incrementally. The other way of debugging is to analyze a system of constraints that already exists. Here, it is assumed that all the constraints and associated equations have been predefined and that the analysis is to be performed on the whole system. Based on the work of the previous chapter, we propose two detection frameworks corresponding to these two debugging ways: incremental detection and decremental detection. Both of them are based on a combination of structural and algebraic methods. These methods are listed in tables 3.1 and 3.2.
Incremental detection framework
Here, we assume that a constraint C is to be added to a set of constraints S. This framework tests whether C is an over-constraint with respect to S. The first method we use is either D-M or MWM (table 3.1), which detects structural over-constraints through maximum matching or maximum b-matching. The method is applied to the new group S + C after adding C. If C is unmatched, then C is a structural over-constraint. Otherwise, we apply the WCM method (table 3.2) to detect numerical over-constraints of S + C. If the rank of the new system S + C is larger than that of S at the witness configuration, C is an independent constraint; otherwise it is a numerical over-constraint, and whether it is redundant or conflicting can be checked using the Grobner basis or the incremental solving method (figure 3.5). In this case, since the constraints are added incrementally, the users can be informed directly whenever a newly inserted constraint is detected as an over-constraint.
In the context of this PhD thesis, such a scenario is not necessarily the best one: the over-constraints detected in this way do not necessarily reflect the design intent, but rather the modeling process and its numerous modeling steps. However, one advantage is the ability to detect and treat the over-constraints as soon as they appear in the modeling process.
Decremental detection framework
Decremental detection analyzes a set of existing constraints. The constraint set and its associated equation set are initially represented by a bipartite graph. Structural over-constraints are identified using either D-M or MWM (table 3.1) if there exist unmatched constraints after maximum matching (or b-matching); they are removed and the system is then updated. If there are no unmatched constraints, strongly connected components (irreducible subsystems of D-M or balanced sets of MWM) are generated by a fine decomposition of the system. Then, algebraic methods are used to detect numerical over-constraints inside each component. Since the strongly connected components linked by the solving order actually form a DAG structure, the components corresponding to the source vertices are analyzed first. Once an over-constraint is found, it is removed; the system is then updated and the corresponding bipartite graph is rebuilt. The detection process finishes when no more numerical over-constraints are found.
Contrary to the previous scenario, the advantage here is that the decision on what to do regarding the detected over-constraints can be taken on the entire system, thus better taking the design intent into account. Conversely, if the system to be checked is rather large, it can be more difficult to take decisions at the end than during the modeling process.
Necessity of combining decomposition and algebraic methods
Drawbacks of algebraic analysis The previous sections have presented algebraic approaches as a means to process systems of equations. They include:
• Symbolic methods for determining polynomial ideal membership in algebraically closed fields, such as the Grobner basis and Wu-Ritt methods, which are capable of distinguishing redundant and conflicting equations.
• Numerical methods analyzing the Jacobian matrix of a system to find its rank-deficient rows, such as G-J, QR, NPM and WCM: G-J and QR distinguish linear redundant and conflicting equations, while NPM and WCM can only identify over-constrained equations.
These direct algebraic solvers deal with general systems of polynomial equations, i.e. they do not exploit geometric domain knowledge. Symbolic methods have at least exponential time complexity and they are slow in practice as well. In addition, algebraic methods do not take into account design considerations and thus cannot assist in the conceptual design process.
Benefits of system decomposition Generally, decomposition exploits the local support property of NURBS geometry, which allows knowing the distribution of local segments as well as the constrained status of each local part. If some parts are locally over-constrained, users can treat these parts directly without globally inserting unnecessary control points. More specifically, the coarse decomposition returned by some methods enables a user to debug (e.g. identify over-/under-constrained components) at a level that corresponds to the view the user has of the system. For this reason, it is generally desirable to respect a coarse decomposition induced by a high-level user manipulation of entities (e.g. mechanical components). Sitharam et al. proposed to adapt graph-based recursive assembly methods to this requirement [START_REF] Sitharam | Solving minimal, wellconstrained, 3d geometric constraint systems: combinatorial optimization of algebraic complexity[END_REF]. Moreover, decomposition methods are appealing for the drastic gain in efficiency they offer: for example, symbolic methods cannot be applied directly to large systems due to their high computational cost, but they are applicable to the small subsystems obtained after system decomposition.
Therefore, an algorithm should use geometric domain knowledge to develop a plan for locating the local parts of a configuration and decomposing each local part into subsystems as small as possible, so that algebraic methods can be applied recursively. It is also desirable that the algorithm provides a coarse decomposition that directly supports debugging.
A Decomposition-Detection plan
The first requirement of a Decomposition-Detection (D-D) plan is that it should be able to find the local segments of free-form configurations. Secondly, a D-D plan should decompose a constraint system into small subsystems and analyze these subsystems using algebraic methods. Since the time cost of over-constraint detection is proportional (at least polynomial) to the size of a system, the second requirement is that the subsystems should be as small as possible so that algebraic methods can analyze them as quickly as possible. If a subsystem contains no over-constraints, it should be solved and its solution substituted into the entire system, resulting in a simpler system. As shown in figure 3.3, a D-D plan initially decomposes a system S into local segments {S_1, ⋯, S_i, ⋯, S_n}, using for instance the local support property of NURBS curves and surfaces. Then, for each local segment S_i, the D-D plan proceeds by iteratively applying the following steps at each iteration j:
1. Find the small subsystem SS_{i,j} of the current local part S_i. Since the small subsystems are linked by the solving sequence, the ones at the source of the sequence should be chosen first (SS_{i,1}).
2. Detect numerical over-constraints in SS_{i,j} using the algebraic methods of section 2.4.2. Users can either remove or modify them once they are detected. Otherwise, solve SS_{i,j} directly using an algebraic solver.
3. Replace SS_{i,j-1} by an abstraction or simplification T_{i,j-1}(SS_{i,j-1}), thereby replacing the entire system E_{i,j-1} by a simplification E_{i,j} = T_{i,j-1}(E_{i,j-1}). The simplification can be the removal/modification of the over-constraints, or solving SS_{i,j-1} and substituting the solution into E_{i,j-1}. The latter operation can potentially generate over-constraints, since the solution of SS_{i,j-1} may cause some equations of E_{i,j-1} to become satisfied or unsatisfied (section 3.3).
The decremental framework can be adapted and incorporated into a D-D plan to analyze S_i. In this way, S_i is initially represented by a bipartite graph. SS_{i,j} corresponds to the strongly connected component of the j-th iteration, which is analyzed by an algebraic method (T_{i,j}(SS_{i,j})). The analysis results can then be used to simplify E_{i,j} through T_{i,j}(E_{i,j}); as a result, E_{i,j} is updated to E_{i,j+1}. More details of the proposed decomposition plan are given in section 3.2.
A generic approach coupling structural decompositions and numerical analyses
This section describes our approach for detecting and treating redundant and conflicting geometric constraints. The main idea is to decompose the system of equations into smaller blocks that can be analyzed iteratively using dedicated numerical methods. The overall framework and algorithm are first introduced before detailing the different steps involved [START_REF] Hu | Over-constraints detection and resolution in geometric equation systems[END_REF].
Overall detection framework
The overall framework is shown in figure 3.4. It is based on three nested loops: the structural decomposition into connected components (CC); the structural decomposition of a CC into its subparts (G1, G2, G3) and its corresponding DAG of strongly connected components (SCC); and the iterative numerical analysis of these SCC. Pseudo-codes for the main procedures are provided in section 3.2.3.
Loop among connected components
The system of equations (SE) is initially represented by a graph structure G, where nodes correspond to variables and edges to equations. The structure is first decomposed into n connected components {CC 1 , • • • , CC n } using Breadth First Search (BFS) [START_REF] Leiserson | A work-efficient parallel breadth-first search algorithm (or how to cope with the nondeterminism of reducers)[END_REF]. Such a decomposition is made possible thanks to the local support property of NURBS or simply when using constraints which decouple what happens along the x, y and z directions of the reference frame (e.g. position or coincidence constraints). As a result, geometric over-constraints can be detected separately for each CC i .
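For illustration, this first loop can be sketched with networkx, representing each equation by the set of variables it couples; this is a simplified stand-in for the variable/equation graph used in the approach, and the names are ours.

import networkx as nx

def split_into_components(equations):
    """Group equations into independent connected components.

    `equations` maps an equation id to the iterable of variables it uses.
    Returns a list of (variable_set, equation_ids) pairs, one per component.
    """
    g = nx.Graph()
    for _, variables in equations.items():
        variables = list(variables)
        g.add_nodes_from(variables)
        for u, v in zip(variables, variables[1:]):   # chain the shared variables
            g.add_edge(u, v)
    components = []
    for var_set in nx.connected_components(g):
        eqs = [eq for eq, vs in equations.items() if set(vs) & var_set]
        components.append((var_set, eqs))
    return components

# Example: two decoupled constraint groups
eqs = {'e1': ['x1', 'x2'], 'e2': ['x2', 'x3'], 'e3': ['x4', 'x5']}
print(split_into_components(eqs))   # two components, analyzed separately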
Loop among subparts obtained by D-M decomposition
The D-M decomposition is used to structurally decompose CC_i into a maximum of three subparts: G_i1 (over-constrained subpart), G_i2 (well-constrained subpart) and G_i3 (under-constrained subpart). Each subpart (if it exists) will be analyzed iteratively using the third nested loop explained below.
However, a single pass of the third loop on each G_ij is not sufficient. Indeed, any pass may lead to the removal of constraints, which modifies the structure of the graph. The superscript d is used to note that CC^d_i (resp. G^d_ij) refers to CC_i (resp. G_ij) after its d-th D-M decomposition.
Although the number of passes required is unknown in advance, it is guaranteed that the process will converge to a state where only one subpart G i3 is left. In other words, constraints will be either removed or moved to the third subpart along the process.
Loop among strongly connected components
In addition to the subparts, the D-M decomposition also provides a DAG for each CC^d_i. The nodes of this DAG are strongly connected components SCC^d_ijk. The edges of this DAG (purple arrows in figure 3.4) denote solving dependencies between the SCC^d_ijk and may cross the boundaries of the subparts G^d_ij. In the following, linkedSCC(G^d_ij) refers to the operation that obtains (the subpart of) this DAG from the d-th D-M decomposition of CC^d_i that corresponds to the given subpart.
The third loop consists in trying iteratively (in the order induced by the DAG dependencies) to find numerical over-constraints in each SCC^d_ijk or, when it is solvable, to propagate its solution to the other blocks. Since blocks are strongly connected, there is only one potential solution to each block, unless it contains only variables, and this latter case can only be encountered in a third subpart G^d_i3. The process works only on the top level of the DAG (red blocks in the figure) because the equations of these blocks do not use variables from other blocks, whereas the other levels of the DAG (blue blocks in the figure) use variables from other blocks while containing the same number of variables and equations.
For each red block, and as shown in the top-left part of figure 3.4, an appropriate numerical method (numFindRC in the figure and in the pseudo-code) tries to find redundant (R) or conflicting (C) constraints. These over-constraints are then removed from the currently analyzed connected component CC^d_i. If the block is solvable, its (unique) solution is propagated to the dependent blocks, which may lead to additional redundant or conflicting constraints being detected and removed from CC^d_i. Once all red blocks have been analyzed, this part of the DAG is recomputed (potentially turning blue blocks into red ones) until all blocks are analyzed. There is, however, no need to recompute the D-M decomposition on the whole CC^d_i: it is sufficient to recompute only the maximum matching for the current subpart by calling linkedSCC again. The superscript m is used to note that G^dm_ij (resp. SCC^dm_ijk) refers to G^d_ij (resp. SCC^d_ijk) after its m-th matching. Although the number of passes required is unknown in advance, it is guaranteed, according to our testing experience, that the process will converge to a state where there are either no blocks left or only blocks containing just variables (which is only possible for the third subpart G^d_i3). In other words, constraints and variables are removed until we obtain an under-constrained system with multiple solutions, meaning no more propagation is possible. In that last step, as shown in the bottom-left part of the figure, the remaining system is analyzed for numerical conflicts and the process proceeds with the next connected component.
Strongly connected components analysis
This section discusses the techniques used to analyze the strongly connected components SCC^dm_ijk. This corresponds to the numFindRC function of Algorithm 2 (section 3.2.3), used to find the redundant (R) and conflicting (C) constraints of a component if they exist; otherwise, the component is solved and its solution is propagated to the whole system. Here, QR is used for linear systems.
For non-linear systems, methods can be of three types. As discussed in section 2.5.2, symbolic methods like the Grobner basis suffer from a high computational cost when analyzing large systems of equations, and the solving process sometimes does not converge (table 2.12). Numerical methods like WCM, on the other hand, can detect the numerical over-constraints of non-linear systems in polynomial time (Michelucci and Foufou, 2006a) but cannot further distinguish redundant from conflicting constraints. To find them, WCM should therefore be combined with methods able to make this distinction. In this section, we show how these methods are combined to detect the redundant and conflicting constraints of a system. The different combinations are illustrated in figure 3.5 and discussed in the next paragraphs.
Let us define a constraints system with a set of polynomial equations:
f_1(x_1, x_2, ⋯, x_n) = 0
f_2(x_1, x_2, ⋯, x_n) = 0
⋮
f_r(x_1, x_2, ⋯, x_n) = 0
f_{r+1}(x_1, x_2, ⋯, x_n) = 0
⋮
f_m(x_1, x_2, ⋯, x_n) = 0    (3.1)
By using WCM, we find that equations 1 to r are the basis constraints and equations (r+1) to m are the over-constraints (figure 3.5). To distinguish redundant and conflicting constraints, further steps are needed. They are detailed in the following paragraphs.
Figure 3.5: Distinguishing redundant and conflicting constraints among the over-constraints f_j = 0 (j = r+1, ⋯, m) with respect to the basis constraints f_1 = 0, ⋯, f_r = 0, by combining WCM with the Grobner basis, incremental solving and optimization methods.
WCM+GB (Grobner Basis)
The Grobner Basis method for distinguishing redundant and conflicting equations can be briefly described as follows. Consider a set of polynomials f_0, f_1, ⋯, f_s ∈ C[x_1, ⋯, x_n] and assume that the reduced Grobner basis rgb_old of the ideal ⟨f_1, ⋯, f_s⟩ satisfies rgb_old ≠ {1} and rgb_old ≠ {0} with respect to any ordering. To know whether f_0 = 0 is a conflicting, redundant or independent equation, we compute the reduced Grobner basis rgb_new of the ideal ⟨f_0, f_1, ⋯, f_s⟩. If rgb_new = {1}, f_0 = 0 is a conflicting equation; if rgb_new ≡ rgb_old, f_0 = 0 is a redundant equation; if rgb_old ⊂ rgb_new, f_0 = 0 is an independent equation (paragraph 2.4.2).
To distinguish redundant and conflicting constraints within the set of over-constraints, we first compute the reduced Grobner basis rgb_basis of the basis constraints {f_1, ⋯, f_r}; then we add one equation f_j = 0, j ∈ {r+1, ⋯, m}, of the over-constraint list to the basis constraints and compute the reduced Grobner basis rgb_new of the new group. Of course, rgb_basis satisfies rgb_basis ≠ {1} and rgb_basis ≠ {0}, since the equations it contains are all independent. Moreover, rgb_new satisfies either rgb_new ≡ rgb_basis or rgb_new = {1}, indicating that f_j = 0 is redundant or conflicting respectively. Following the same procedure, the equations of the over-constraint set are tested one by one with respect to the basis constraints. As a result, the redundant and conflicting constraints in the set of over-constraints are distinguished.
WCM+IS (Incremental Solving)
The Incremental Solving method tests redundant and conflicting constraints by directly testing the solvability of systems of equations. The solvability of the basis constraints {f_1 = 0, ⋯, f_r = 0} is tested first; it is indeed solvable. Then, we incrementally insert an over-constraint f_j = 0, j ∈ {r+1, ⋯, m}, into the set of basis constraints, forming a new group of equations {f_1 = 0, ⋯, f_r = 0, f_j = 0}. If the new group is solvable, then f_j = 0 is redundant, otherwise it is conflicting.
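A possible numerical sketch of this test uses a local least-squares solver; as noted for the Double Banana case, a non-converging local solver does not prove conflict, so the classification below is only as reliable as the underlying solver (names, starting point and tolerance are ours).

import numpy as np
from scipy.optimize import least_squares

def classify_by_incremental_solving(basis_funcs, f_j, x0, tol=1e-8):
    """WCM + Incremental Solving (sketch): append f_j to the solvable basis set
    {f_1 = 0, ..., f_r = 0} and test whether the enlarged system still admits
    a zero-residual solution."""
    def residual(x):
        return np.array([f(x) for f in basis_funcs] + [f_j(x)])
    sol = least_squares(residual, np.asarray(x0, dtype=float))
    solvable = np.linalg.norm(residual(sol.x)) < tol
    return 'redundant' if solvable else 'conflicting'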
WCM+OP(Optimization)
In the work of Ge et al. [START_REF] Ge | Geometric constraint satisfaction using optimization methods[END_REF], an optimization method is used to address geometric constraint satisfaction problems. They proposed to convert the system of equations F = {f_1, f_2, ⋯, f_n} into the sum of squares σ = ∑_{i=1}^{n} f_i² and to find the minimal value of σ. If σ_min is not equal to 0, the system is inconsistent. The method is mainly used to decide whether a system is consistent or not; in other words, it can be used to detect the existence of conflicting constraints. However, we extend the method into two approaches (Optimization 1 and Optimization 2) to test the redundancy or conflict of f_j = 0, j ∈ {r+1, ⋯, m}, with respect to the basis constraints {f_1 = 0, ⋯, f_r = 0}.
• Optimization 1 A constraint satisfaction problem is set up by transforming the over-constraint f_j = 0 into an objective function F = f_j²(X):
minimize_X  F = f_j²(X)   subject to   f_i(X) = 0,  i = 1, …, r.
If F min is equal to 0, then f j = 0 is redundant; otherwise, it is conflicting.
• Optimization 2 The constraint set {f_1 = 0, ⋯, f_r = 0, f_j = 0} is transformed into the following CSP:
minimize_X  F = f_j²(X) + ∑_{i=1}^{r} f_i²(X)
If F min is equal to 0, then f j = 0 is redundant; otherwise, it is conflicting.
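Both variants map directly onto a generic non-linear programming solver. The sketch below implements Optimization 1 with scipy's SLSQP; the threshold on F_min, the function names and the toy example are ours.

import numpy as np
from scipy.optimize import minimize

def classify_by_optimization(basis_funcs, f_j, x0, tol=1e-8):
    """Optimization 1 (sketch): minimize f_j(X)^2 subject to f_i(X) = 0, i = 1..r.
    F_min close to 0 means f_j = 0 is redundant, otherwise it is conflicting."""
    constraints = [{'type': 'eq', 'fun': f} for f in basis_funcs]
    res = minimize(lambda x: f_j(x) ** 2, np.asarray(x0, dtype=float),
                   constraints=constraints, method='SLSQP')
    return 'redundant' if res.fun < tol else 'conflicting'

# Example: circle of radius 1 as the basis; test a redundant and a conflicting relation
basis = [lambda x: x[0]**2 + x[1]**2 - 1.0]
print(classify_by_optimization(basis, lambda x: 2*x[0]**2 + 2*x[1]**2 - 2.0, [0.5, 0.5]))  # redundant
print(classify_by_optimization(basis, lambda x: x[0]**2 + x[1]**2 - 4.0, [0.5, 0.5]))      # conflicting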
Pseudo-code
Algorithm 3 Structural decomposition
1: SE ← System of Equations
2: G ← Graph(SE)
3: [CC_1, ⋯, CC_n] ← BFS(G)
4: for i = 1 to n do
5:   [G^1_{i1}, G^1_{i2}, G^1_{i3}] ← DM(CC_i)
6:   CC^1_i ← CC_i
7:   for j = 1 to 3 do
8:     d ← 1
9:     continue ← True
10:    while continue & G^d_{ij} ≠ ∅ do
11:      [continue, CC^{d+1}_i] ← findRC(CC^d_i, G^d_{ij})
12:      d ← d + 1
13:      [G^d_{i1}, G^d_{i2}, G^d_{i3}] ← DM(CC^d_i)
14:    end while
15:  end for
16: end for

Algorithm 2 findRC (excerpt)
15:      C ← checkConflicting(CC^d_i)
16:    end if
17:    CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
18:  end if
19:  if l == N then   ▷ all red blocks contain only variables
20:    [R, C] ← numFindRC(CC^d_i)
21:    CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
22:    return False, CC^d_i
23:  end if
24: end for
25: G^{d(m+1)}_{ij} ← update(CC^d_i)
26: m ← m + 1
27: [SCC^{dm}_{ij1}, ⋯, SCC^{dm}_{ijN}] ← linkedSCC(G^{dm}_{ij})
28: end while
29: return True, CC^d_i

As shown in the listings above, we provide the pseudo-code for the two main procedures of the approach, surrounded by dotted rectangles in figure 3.4.
Rough time complexity analysis
Assume that the system of equations has V variables and E equations. It is initially decomposed into N connected components. Then, for each component i, the D-M decomposition is applied M_i times to analyze the DAGs (figure 3.6). For one DAG j, we assume that there are E_ij equations, V_ij variables and K_ij red blocks. Among these blocks, K_ij-non contain non-linear equations and K_ij-lin contain linear equations. Finally, for one block k, we assume that there are m_ijk equations and n_ijk variables.
Figure 3.6: The system of equations is decomposed by BFS into components 1 to N; each component is analyzed by D-M into DAGs 1 to M_i and red blocks, with E_ij, V_ij, m_ijk and n_ijk denoting the corresponding numbers of equations and variables.
The time complexity of the methods used in the basic operations is summarized in table 3.3:
BFS (operates on the whole system of equations): O(V + E)
D-M (operates on one DAG, the j-th) [START_REF] Pothen | Computing the block triangular form of a sparse matrix[END_REF]: O(E_ij V_ij) + O(E_ij)
Grobner basis (operates on one red block, the k-th; n_ijk is the number of variables and d the maximal total degree of the input polynomials) [START_REF] Dubé | The structure of polynomial ideals and Gröbner bases[END_REF]: 2(d²/2 + d)^{2^{n_ijk − 1}}
Linear analysis by QR (one red block): O(n_ijk³)
Table 3.3: Time complexity for individual operations
As a result, by adding the basic operations together, the total time complexity can be estimated as:
O(V + E) + Σ_{i=1}^{N} Σ_{j=1}^{M_i} [ O(E_ij V_ij) + O(E_ij) + Σ_{k=1}^{K_ij-non} 2(d²/2 + d)^(2^(n_ijk - 1)) + Σ_{k=1}^{K_ij-lin} n_ijk³ ]    (3.2)

The above expression estimates an upper bound on the computation time for detecting over-constraints with the QR or Gröbner basis methods. However, for detecting over-constraints using Optimization or Incremental solving, the time complexity is hard to estimate because solving non-linear equations is an iterative process whose convergence speed is problem-dependent. Therefore, estimating the time complexity of the whole algorithm by considering only the number of equations and variables is not sufficient; such an estimation can even be misleading in some cases. Empirical experiments on computation time are reported in section 4.4.
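To get a feel for how the bound (3.2) behaves, the short sketch below evaluates it for a hypothetical decomposition; the sizes are made-up illustration values, not measurements from the thesis.

    def complexity_bound(V, E, dags):
        """Evaluate the upper bound (3.2) for a given decomposition.

        dags: one entry per D-M pass, each a dict with the DAG size (E_ij, V_ij),
              the non-linear red blocks as (n_ijk, d) pairs and the linear red
              blocks as their numbers of variables n_ijk.
        """
        total = V + E  # BFS over the whole system
        for dag in dags:
            total += dag["E_ij"] * dag["V_ij"] + dag["E_ij"]       # one D-M pass
            for n, d in dag["nonlinear_blocks"]:                   # Grobner bound
                total += 2 * (d**2 / 2 + d) ** (2 ** (n - 1))
            for n in dag["linear_blocks"]:                         # linear elimination
                total += n**3
        return total

    # Hypothetical decomposition: two D-M passes on a 62-equation, 288-variable system.
    passes = [
        {"E_ij": 62, "V_ij": 288, "nonlinear_blocks": [(3, 2)], "linear_blocks": [6, 6]},
        {"E_ij": 40, "V_ij": 200, "nonlinear_blocks": [], "linear_blocks": [12]},
    ]
    print(f"upper bound ~ {complexity_bound(288, 62, passes):.3e} elementary operations")

The doubly exponential Gröbner term quickly dominates as soon as a non-linear red block involves more than a handful of variables, which is why keeping those blocks small through decomposition matters.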
Validation and evaluation of the solutions
Section 1.3 has introduced the multiple ways to model requirements within an optimization problem by specifying an unknown vector X, the constraints to be satisfied F (X) = 0 and the function G(X) to minimize.
The approach described in this section allows for the identification of redundant and conflicting equations. Correctness is ensured since it consists of a fixed-point algorithm that only stops when the system is solvable. Additionally, any removed equation is guaranteed to be either conflicting or redundant with the remaining set. It has thus been shown that the set of equations F (X) = 0 can be decomposed in two subsets: F b (X) = 0 containing the basis equations, and F o (X) = 0 the over-constrained ones.
To stay close to the requirements the designer has in mind, the proposed approach then moves from the level of equations to the level of constraints. Thus, the geometric constraints associated with the equations F_o(X) = 0 are analyzed, and all the equations related to those constraints are gathered together in a new set of equations F'_o(X) = 0. Of course, the equations F_o(X) = 0 are included in the set F'_o(X) = 0. Finally, the equations related to constraints which are neither conflicting nor redundant form the other set F'_b(X) = 0. This transformation allows working at the level of the constraints and not at the level of the equations, which is much more convenient for the end-user interested in working at the level of geometric requirements.
Since this decomposition is not unique, it gives rise to various potential final solutions (interactive decomposition has not been considered in this thesis). Therefore several criteria are now introduced to evaluate these solutions according to the initial design intent. To be able to characterize the quality of the obtained solutions, the set of user-specified parameters P is introduced. This set gathers together all the parameters the designer can introduce to define the constraints his/her shape has to satisfy. For example, the distance d imposed between two points of a NURBS surface is a parameter characterizing a part of the design intent. Then, the idea is to evaluate how much the solutions deviate from the initial design intent and notably in terms of the parameters P .
To do so, the optimization problem containing the basis constraints is solved:

    minimize G(X)   subject to   F_b(X) = 0        (3.3)
and the solution X* is then used to evaluate the unsatisfied over-constraints F_o(X*) as well as the real values P* of the user-specified parameters P. For example, if the user-specified distance d between the two patches cannot be met, then the real distance d* will be measured on the obtained solution. From this solution, it is then possible to evaluate three quality criteria:
• Deviation in terms of parameters/constraints: this criterion measures how far/close the real values P* of the parameters are from the user-specified parameters P. It helps understanding whether the design intent is preserved in terms of parameters and consequently in terms of constraints.

    df = Σ_i |P*_i - P_i| / Σ_i |P_i|    (3.4)
• Deviation in terms of function to minimize: this criterion directly evaluates how much the objective function G has been minimized. Here, the function is simply computed from the solution X* of the optimization problem. To preserve the design intent, this value is to be minimized; it can thus be used to compare several solutions.

    dg = G(X*)    (3.5)
• Degree of near-dependency: the rank deficiency of the Jacobian matrix at the witness clearly reveals the dependencies between constraints. However, for NURBS-based equation systems, the constraints can be independent but close to being dependent. In this case, the Jacobian matrix of F_b(X) at the solution point X* is ill-conditioned and the corresponding solution can be of bad quality. The third criterion thus evaluates the condition number (cond) of the Jacobian matrix as a measure of near-dependency (Kincaid and Cheney):

    cond = cond( J_{F_b}(X*)^T J_{F_b}(X*) )    (3.6)
Finally, even if those criteria characterize the quality of the solution X* with respect to the design intent, they have not been combined into a unique indicator. Thus, the results of the next chapter will be evaluated by analyzing and comparing those three criteria for each solution.
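As an illustration, once a solution X* is available the three criteria can be computed in a few lines. The sketch below is a generic NumPy illustration rather than the thesis implementation; the finite-difference Jacobian is an assumption made for the example.

    import numpy as np

    def jacobian_fd(F, x, eps=1e-6):
        """Finite-difference Jacobian of a vector-valued function F at x."""
        f0 = np.asarray(F(x), dtype=float)
        J = np.zeros((f0.size, x.size))
        for k in range(x.size):
            xk = x.copy()
            xk[k] += eps
            J[:, k] = (np.asarray(F(xk), dtype=float) - f0) / eps
        return J

    def quality_criteria(P_user, P_real, G, F_basis, x_star):
        """Evaluate df (3.4), dg (3.5) and cond (3.6) at the solution x_star."""
        P_user = np.asarray(P_user, dtype=float)
        P_real = np.asarray(P_real, dtype=float)
        df = np.sum(np.abs(P_real - P_user)) / np.sum(np.abs(P_user))
        dg = G(x_star)
        J = jacobian_fd(F_basis, x_star)
        cond = np.linalg.cond(J.T @ J)
        return df, dg, cond

A large condition number flags a nearly dependent basis, which is exactly the situation reported in the result tables of the next chapter.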
Finding spanning groups
Within one loop of analyzing strongly connected components, numerical over-constraints can be found either by analyzing the strongly connected components directly, or indirectly during the process of propagating solutions. In this section, we first discuss how the spanning groups of these over-constraints can be found within one loop, and then how to link the spanning groups of different loops together so as to obtain the final spanning groups of the over-constraints.
In one loop
In this section, we assume that the DAG structure of one loop is the one shown in figure 3.7(a). Block A is solvable while block I contains over-constraints. After solving block A and propagating the solutions to the other blocks, the constraints of block B become either satisfied or unsatisfied. For the over-constraints of block I, the spanning groups lie inside the block itself, whereas for those of block B the spanning groups lie outside the block, since these over-constraints are triggered by the solutions of block A.
Inside block
For an over-constraint E oi of a set of over-constraints E o in a block, its spanning group E sg is the set of basis constraints E b of that block. Since in a strongly connected component every vertex is reachable from every other vertex, the over-constraint can be reached from all the basis constraints through the variables they share.
Outside block
In this case, the spanning group of an over-constraint gathers together the constraints of the block whose solution triggers the satisfaction or violation of the over-constraint.
Take figure 3.7 as an example. At step a), blocks A and I are analyzed. The result shows that block A is solvable while block I contains numerical over-constraints, namely I.2 and I.5. As discussed in the previous section, the spanning group of these over-constraints is {I.1, I.3, I.4}. In step c), block A is solved and its solution is propagated through the DAG structure. As a result, the constraints of block B, {B.1, B.2}, are over-constraints whose spanning group is the set of constraints of block A, {A.1, A.2, A.3, A.4}. They are summarized in table 3.4.
Table 3.4: Two types of over-constraints and spanning groups
Thus, there exist two types of over-constraints: over-constraints detected based on the basis constraints inside the same block (type I), and over-constraints detected based on the solutions coming from other blocks (type II). The former are detected by the numerical methods discussed in section 3.2.2; the latter are detected during the process of propagating solutions.
Finding the spanning group of type II
The spanning group of a type-I over-constraint is easy to find, since the basis constraints lie within the same block as the over-constraint. In this paragraph, we propose a method for finding the spanning group of a type-II over-constraint based on a directed graph. We again take figure 3.7 as an example. Since the constraints of block B are fully fed by the solutions of block A, the spanning group can be traced back through the feeding variables. The steps to find the spanning groups are the following (a minimal code sketch of the final traversal is given after the list):
1. Equation graphs are generated from the equations. Note that, here, we assume that the variables within a block are shared by every two equations, so that the block is a strongly connected component (figure 3.8 a).

2. Directed graphs are generated by transforming the edges into directed edges pointing from an equation to a variable. As shown in figure 3.8 b), a bi-directed edge is also used to link the feeding variables between the two blocks; these variables are exactly the same variables.

3. After solving, new directed edges are generated by pointing from the variables to the equations. As shown in step c) of the figure, they are the red edges in block A, obtained by simply reversing the edges of step b).

4. Starting from B.1 and B.2, the spanning groups are the equations reached along the directed edges, which is the set of equations {A.1, A.2, A.3, A.4} in this case.
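The reachability step of point 4 can be prototyped with a plain dictionary-based directed graph and a depth-first search. The sketch below encodes a toy version of the example of figures 3.7 and 3.8; the node names and edges are assumptions made for the illustration, and the helper mirrors the role played by the descendantEquations service in the pseudo-code of the next section.

    def descendant_equations(graph, start_nodes):
        """Collect all equation nodes reachable from start_nodes in a directed graph.

        graph: dict mapping node -> list of successor nodes.
        Equation nodes are identified here by the naming convention 'Block.index'.
        """
        seen, stack = set(), list(start_nodes)
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph.get(node, []))
        return {n for n in seen - set(start_nodes) if "." in n}

    # Toy graph: after solving block A, edges point from its variables back to A's
    # equations, and the over-constraints B.1, B.2 point to the variables feeding them.
    graph = {
        "B.1": ["xa"], "B.2": ["xb"],
        "xa": ["A.1", "A.2"], "xb": ["A.3", "A.4"],
    }
    print(descendant_equations(graph, ["B.1", "B.2"]))   # {'A.1', 'A.2', 'A.3', 'A.4'}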
Linking loops together
The previous paragraph has shown how to find the spanning group of an over-constraint within one loop of D-M decomposition. However, as shown in figure 3.4, the DAG structure G^dm_ij is updated frequently when looping on the analysis of the SCC^dm_ijk, which means that a constraint may go through many loops before it is finally detected as an over-constraint. For example, in figure 3.9, three groups of over-constraints are detected during the three loops of applying the D-M decomposition (table 3.5).
Table 3.5: Over-constraints detected in the three loops of figure 3.9
The spanning groups of the three over-constraints are summarized in table 3.6. The spanning group of D.1 is {I.3, I.4}, which comes directly from red block I. The spanning group of E.3 is more complicated, however. In loop c), the spanning group is {E.1, E.2}. But in loop b), the solution of block C is propagated to the equations of block E, resulting in the red block E of loop c); the equations of block E in loop c) are therefore equivalent to the equations of blocks C and E in loop b), and the spanning group of E.3 is extended to {E.1, E.2, C.1, C.2, C.3, I.1}. Similarly, in loop a), the solution of block A updates the structure of block C. By adding the equations of block A, the final spanning group of E.3 is {E.1, E.2, C.1, C.2, C.3, I.1, A.1, A.2, A.3, A.4}. Compared to E.3, F.1 is of type II and thus has no spanning group inside block F; its final spanning group is a set of equations outside the block (table 3.6, third column).
Table 3.6: Spanning groups of the over-constraints D.1, E.3 and F.1
General case
In our algorithm, the variables of a constraint can be substituted at different loops of the D-M decomposition before it is finally detected as an over-constraint. For example, in figure 3.10, the variables of constraints E.3 and F.1 are fed by the solution of block C, which has itself been fed by the solution of block A. More general cases for constraints E.3 and F.1 are shown in figure 3.11 a) and b) respectively: the variables of an over-constraint or of its basis constraints can be substituted at different D-M loops before the final identification. Note that we use E_i to represent a set of equations in a block. For example, in step n-1) of figure 3.12, the solutions of equation set E6 are substituted directly into the over-constraint (type II), while the solutions of equation sets E4 and E5 are first propagated to equation sets E2 and E3 respectively; then, in step n), the solutions of E2 and E3 are directly fed to the over-constraint. In the latter case, the solutions of E4 and E5 are fed indirectly to the over-constraint at step n-1).
Pseudocode
This section provides the pseudo-code for finding the spanning groups. We modified the previous algorithms and generated the new ones.
Algorithm 5 Structural decomposition
 1: SE ← System of Equations
 2: G ← Graph(SE)
 3: [CC_1, …, CC_n] ← BFS(G)
 4: for i = 1 to n do
 5:   [G^1_i1, G^1_i2, G^1_i3] ← DM(CC_i)
 6:   CC'_i ← directedGraph(CC_i)      (directions are from equations to variables)
 7:   CC^1_i ← CC_i
 8:   CC'^1_i ← CC'_i
 9:   for j = 1 to 3 do
10:     d ← 1
11:     continue ← True
12:     while continue & G^d_ij ≠ ∅ do
13:       [continue, CC^{d+1}_i, CC'^{d+1}_i] ← findRCandSpanningGroup(CC^d_i, G^d_ij, CC'^d_i)
14:       d ← d + 1
15:       [G^d_i1, G^d_i2, G^d_i3] ← DM(CC^d_i)
16:     end while
17:   end for
18: end for

findRCandSpanningGroup(CC^d_i, G^d_ij, CC'^d_i):
  [SCC^{d1}_ij1, …, SCC^{d1}_ijN] ← linkedSCC(G^d_ij)
  m ← 1
  G^{d1}_ij ← G^d_ij
  while [SCC^{dm}_ij1, …, SCC^{dm}_ijN] ≠ ∅ do
    l ← 0
    for k = 1 to N do
      …
      C ← checkConflicting(CC^d_i)
      spanningGroupC ← descendantEquations(CC'^d_i, C)
          (all the descendant equations starting from C in CC'^d_i)
      else
        spanningGroupR ← descendantEquations(CC'^d_i, R ∪ SCC^{dm}_ijk.basis)
            (all the descendant equations starting from R and the basis equations of SCC^{dm}_ijk)
        spanningGroupC ← descendantEquations(CC'^d_i, C ∪ SCC^{dm}_ijk.basis)
            (all the descendant equations starting from C and the basis equations of SCC^{dm}_ijk)
      end if
      CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
      end if
      if l == N then      (all red blocks contain only variables)
        [R, C] ← numFindRC(CC^d_i)
        CC^d_i ← removeRCfromCC(CC^d_i, [R, C])
        …
      end if
    …

One can notice that these algorithms do not really return the redundant and conflicting equations, as they are removed during the detection process. The spanning groups are identified and the corresponding lists of equations are saved outside the algorithms.
Conclusion
In this chapter, an approach for finding all the over-constraints in free-form geometric configurations has been introduced. It relies on a coupling between structural decompositions and numerical analysis. The approach has several benefits: it is able to distinguish between redundant and conflicting constraints; it is applicable to both linear and non-linear constraints; and it applies the numerical methods on small sub-blocks of the original system, thus allowing to scale to some large configurations. Additionally, since the set of over-constraints of a system is not unique, it has been shown that our approach is able to provide different sets depending on the selected structural decomposition, and criteria have been proposed to compare them and assist the user in choosing the constraints he/she wants to remove. Even if the kernel of the algorithm works on equations and variables, the decision is taken by considering the geometric constraints specified by the designer at a high level. Finally, a method for finding the spanning group of an over-constraint has been proposed. For a given over-constraint, there may exist many spanning groups; our algorithm finds one of them based on a directed graph. In the next chapter, the detection and resolution processes, as well as the search for spanning groups, are illustrated with results on both academic and industrial examples.
Chapter 4
Results and discussion
This chapter presents three configurations on which the proposed over-constraints detection and resolution technique has been tested. The first one concerns the academic Double-Banana testing case widely studied in the literature (section 4.1); it is used to compare our solution to the ones generated by others. The two other examples are more industrial and concern the shaping of a teapot (section 4.2) and of a glass (section 4.3), both composed of several NURBS patches. The relationship between time complexity and the size of a system, and the influence of the specified tolerances on the detection results, are discussed in section 4.4.
Double-Banana testing case
The variables X, the constraints F (X) = 0, and the parameters P of the Double-Banana testing case are exactly the same as the ones tested by [START_REF] Moinet | Defining tools to address over-constrained geometric problems in Computer Aided Design[END_REF]. The configuration as well as the parameters are discussed in section 2.5.2.
Modeling
The system to be solved is composed of 18 equations (figure 4.1) and 24 variables. The variables are the coordinates (x_i, y_i, z_i) of the 8 vertices Pt_i of the Double-Banana geometry. The incidence matrix of the equations is shown in table 4.1.
Detection process
The incidence matrix describes the relationships between variables and equations. The adjacency matrix, which can be obtained from the incidence matrix, describes the relationships between variables. Based on the adjacency matrix, the constraint graph between variables is drawn in figure 4.2.
Structural analysis. For the local part, the D-M decomposition is used for structural analysis. As shown in table 4.2, the whole part is structurally under-constrained. The maximum matching of the part is the diagonal (marked red) of the matrix. The strongly connected components and the solving sequence between them form a DAG structure, shown in figure 4.3. Since all the red blocks contain only variables, the whole part has to be analyzed together; the process thus follows the bottom part of the algorithm of figure 3.4.

Numerical analysis. The WCM analysis is used within our numFindRC function and an over-constraint is detected; more specifically, the equation e9 is detected here. Using our Incremental Solving approach, the equation is further characterized as a conflicting one (type I). The equation e9 is therefore removed and the system is solved using the initial positions of the nodes as initial values of the variables. Using the results, the equation e9 is then re-evaluated and the associated parameter is compared to the user-specified value. In the present case, e9 is not satisfied since it is equal to 44.47, compared to the initial user-specified requirement of 45 (figure 2.37). Thus, the deviation from the design intent is df = 0.53/45. Our algorithm gives a solution that is much closer to the initial design intent than the algorithm of Moinet et al., and the remaining system is less ill-conditioned after removing the conflicting constraint (table 4.4). Actually, the algorithm of Moinet et al. identifies e18 as a conflicting equation and its removal induces a deviation df = 4.38/32 from the initial design intent.
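The WCM-based detection inside numFindRC boils down to a rank analysis of the Jacobian evaluated at a witness (a random, generic configuration). The sketch below is a simplified NumPy stand-in for that idea, not the thesis implementation: equations whose Jacobian row does not increase the rank are flagged as numerical over-constraints. The finite-difference gradients, the toy equations and the tolerance are assumptions of the example.

    import numpy as np

    def witness_over_constraints(equations, n_vars, tol=1e-9, rng=np.random.default_rng(0)):
        """Flag equations whose gradient at a random witness depends on the previous ones."""
        x_w = rng.standard_normal(n_vars)          # witness configuration

        def grad(f, x, eps=1e-6):
            g = np.zeros(n_vars)
            for k in range(n_vars):
                xk = x.copy(); xk[k] += eps
                g[k] = (f(xk) - f(x)) / eps
            return g

        rows, over = [], []
        for i, f in enumerate(equations):
            g = grad(f, x_w)
            if np.linalg.matrix_rank(np.vstack(rows + [g]), tol) == len(rows):
                over.append(i)                      # rank did not increase
            else:
                rows.append(g)
        return over

    # Toy system with 3 variables: f2 = f0 + f1, so its gradient is dependent at any witness.
    eqs = [lambda x: x[0]**2 + x[1] - 1,
           lambda x: x[1] + x[2] - 2,
           lambda x: x[0]**2 + 2*x[1] + x[2] - 3]
    print(witness_over_constraints(eqs, 3))   # -> [2]

Distinguishing redundant from conflicting equations then requires solving the basis system and re-evaluating the flagged equation, as done by the Incremental Solving step described above.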
Sketching a 3D teapot
In this example, we show how the proposed over-constraints detection and resolution approach can support the sketching of a 3D teapot (figure 4.4). The designer sketches the teapot following his/her design intent and the associated requirements.
Modeling
Here, the objective is to modify the spout and the lid of the teapot by specifying the following elements:

- Coincidence and tangency: in figure 4.7, 8 coincidence and 8 tangency constraints are specified to maintain the continuity between surf34 and surf36, and between surf27 and surf28. They are labeled from 1 to 16 and they generate 8 × 3 linear and 8 × 3 non-linear equations, labeled from 1 to 48 (table 4.5). The non-linearity comes from the use of the vector product to express the collinearity of the normals.

Overall, there are 20 geometric constraints generating 52 equations in the set F(X) = 0. Some of those constraints are intentionally added as conflicting ones, and the purpose of this section is to see how our algorithm can detect and remove them without affecting the design intent too much at the level of the constraints.
Detection Process
Finding local parts. The BFS decomposes the whole system into 59 unconnected components, 51 of which contain only variables (figure 4.9). Each of the 8 other components contains equations, and they are analyzed further by our algorithm.
Analysis of the local parts
Analyzing component 1 (CC1). The incidence matrix between the variables and the equations is shown in table 4.6. The evolution of G^1_11 is shown in figure 4.10. Since all the red blocks contain only variables, all the equations of CC1 are analyzed at once. As a result, equation e6 is redundant (type I). The spanning group of e6 is {e1, e2, e3, e4, e5}.
Analyzing component 2 (CC2). Component CC2 is very similar to component CC1: their equations differ only in the names of the variables and equations. Following an analysis similar to that of CC1, equation e24 is found to be redundant (type I) with {e19, e20, e21, e22, e23}, which corresponds to its spanning group.
Analyzing component 3 (CC3). Component CC3 contains 6 equations with 30 variables. The red blocks of the DAG structure (figure 4.11) contain only variables and thus the whole system of equations is analyzed together. As a result, equation e10 is redundant (type I) with {e7, e8, e9, e11, e12}, which corresponds to its spanning group.

Analyzing component 6 (CC6). In the evolution of G^1_63, the red blocks contain only variables; the algorithm therefore takes all the equations as input, and equations e34, e35 and e36 are all redundant with {e31, e32, e33} (type I).

Analyzing component 8 (CC8). In the evolution of G^1_81, the red blocks are solvable and their solutions are propagated to equations e41, e42 and e40. The differences are -2.94e-7, -5.49e-12 and -1.29e-11 respectively, and these equations are redundant with {e37, e39}, {e37, e38, e39} and {e37, e39} respectively (type II).
Table 4.12: Component 6 (CC6, 6 × 12)

Table 4.17: Component 8 (CC8, 6 × 3)
        x1  x2  x3
  e37   1   0   0
  e38   0   1   0
  e39   0   0   1
  e40   1   0   1
  e41   1   1   1
  e42   1   0   1
Result of linearization
Linearization. In the works of Moinet et al. and of Serrano, non-linear constraint systems are linearized so that linear detection methods can be applied. The linearization is based on a Taylor-series expansion at a given point, and the linear detection methods are QR (used by Hu et al.) and G-J elimination (used by Moinet and Serrano).
To know whether the linearization of a non-linear system affects the detection results or not, the equation system of the teapot geometry is linearized at the witness configuration obtained from the initial sketch. Then, our algorithm is used to detect the numerical over-constraints. In our algorithm, redundant and conflicting constraints are distinguished by comparing the difference Δ = |P_0 - P*| between the initially specified value of the over-constraint (e(X_0) = P_0) and its final value (e(X*) = P*) after releasing the over-constraints, with respect to a defined tolerance. The value of the tolerance used in this testing case is 1e-4. The deviations Δ from the design intent after removing the identified over-constraints are summarized in table 4.23 for both the original system and the linearized one.
From the table, it is obvious that the differences for {e6, …, e36} in the linearized system are much higher than those in the original system. This is because the linearized system ignores the second-order truncation error of the Taylor series expanded at the witness to the first order. It can moreover cause independent constraints to be detected as over-constraints, which may be wrong in some cases ({e46, e48}, for example). Therefore, the linearization of a non-linear system is unreliable both for detecting numerical over-constraints and for distinguishing redundant from conflicting constraints; as a result, it is not recommended when dealing with non-linear systems. This testing example has shown that our algorithm works well on free-form configurations and, more specifically, that it can deal properly with the local support property. However, the detection is still at the level of equations and the resolution of geometric over-constraints is not addressed. All these issues are discussed on a 3D glass geometry in the next section.
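The linearization discussed above is the first-order Taylor expansion F_lin(X) = F(X_0) + J(X_0)(X - X_0) taken at the witness X_0. The short sketch below is an illustrative comparison with made-up toy equations, not the teapot system, of how a redundant constraint can be misclassified once linearized.

    # Non-linear pair: f1 = x - 1, f2 = x**2 - 1.  f2 is redundant with f1
    # (both vanish at x = 1), but its linearization at a witness is not.
    f1 = lambda x: x - 1.0
    f2 = lambda x: x**2 - 1.0

    x_w = 3.0                                      # witness configuration (arbitrary)
    df2 = 2.0 * x_w                                # exact derivative of f2 at the witness
    f2_lin = lambda x: f2(x_w) + df2 * (x - x_w)   # first-order Taylor expansion

    x_star = 1.0                                   # solution of the basis constraint f1 = 0
    print("non-linear residual :", f2(x_star))       # 0.0  -> redundant
    print("linearized residual :", f2_lin(x_star))   # -4.0 -> wrongly flagged as conflicting

The neglected second-order term is exactly what makes the linearized residual non-zero here, which mirrors the behaviour reported for the teapot in table 4.23.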
Sketching a 3D glass
In this example, the idea is to show how the proposed over-constraints detection and resolution approach can support the sketching of a 3D glass composed of 4 connected NURBS patches. The designer sketches his/her design intent and associated requirements.
Modeling
Here, the objective is to modify the upper part of the glass by specifying the following elements:
• Variables: each patch has degree 5 × 5 and a control polygon made of 16 × 6 control points, whose coordinates are the variables of our optimization process (figure 4.19a). Since the objective is to modify the upper part of the glass, the designer selects how many rows of control points are blocked and how many can move. For example, if the designer wishes to free the 4 upper rows of control points of the four patches, then there will be 4 × (6 × 4) × 3 = 288 variables in the unknown vector X. The results will be illustrated with 4 and 5 rows free to move.

• Constraints: three types of constraints are used to specify how the shape of the 3D glass has to evolve:

- Distance: 2 distance constraints are defined between the opposite sides of the patches (figure 4.19d). They are labeled 5 and 6 and they generate 2 × 1 non-linear equations, labeled 13 and 14 (table 4.24).
-Coincidence and tangency: 8 coincidence and 8 tangency constraints are specified to maintain the continuity between the upper parts of the patches during the deformation (figure 4.19b).
They are labeled from 7 to 22 and they generate 8 × 3 linear and 8 × 3 non-linear equations labeled from 15 to 62 (table 4.24). The non-linearity comes from the use of the vector product to express the collinearity of normals.
Overall, there are 22 geometric constraints generating 62 equations in the set F(X) = 0. Some of those constraints are conflicting, and the purpose of this section is to see how our algorithm can detect and remove them without affecting the design intent too much.
• Objective function: since the proposed approach removes the identified over-constraints, the resulting system of equations F_b(X) = 0 (section 3.3) may become under-constrained and a function G(X) has to be minimized. Here, the idea is to make use of the approach of Pernot et al. (2005) to define two types of deformation behavior: either a minimization of the variation of the shape (G_1(X)) between the initial and final configurations, or a minimization of the area of the final shape (G_2(X)). In terms of design intent, the first one tends to preserve the initial shape of the glass, whereas the second forgets the initial shape and tends to generate surfaces similar to tensile structures.

When analyzing component CC2, the red block is solved and its solution is fed to the component containing e14. As a result, e14 is conflicting and its spanning group is {e4, e5, e6, e10, e11, e12}; e14 is removed and the system is updated to CC^1_2. In the evolution of G^3_23 (figure 4.23), all the generated red blocks contain only variables and the equations of these blocks are analyzed together. As a result, the equations {e32, e56, e38, e62} are found to be redundant. The spanning group of each of them is {e10, e11, e12, e27, e28, e29, e30, e31, e33, e34, e35, e36, e37, e4, e5, e6, e51, e52, e53, e54, e55, e57, e58, e59, e60, e61}. The detection results of CC2 are summarized in table 4.29.

All the configurations are then solved while acting on both the number of upper rows free to move (N_rows = 4 or 5) and the objective function to be minimized (either G_1(X) or G_2(X)). The effects of applying the two objective functions on configuration 1 are shown in figure 4.25 a). The results are gathered together in tables 4.33 and 4.34; each configuration is evaluated through the three previously introduced criteria dg, df and cond, and some solutions are shown in figure 4.24.

One can first notice that, depending on the configuration, the deviation df on the constraints varies. For example, with N_rows = 4 and while minimizing G_1(X), configuration 7 generates a solution that is closer to the design intent than configuration 6 (0.10684 < 0.12607 in table 4.33). For configuration 3, the deviation from the design intent in terms of constraints is clearly more important when minimizing the area of the final surface than when minimizing the shape variation (0.2288 > 0.10179 in table 4.33); this is clearly visible on figures 4.24c1 and 4.24c2. But the deviation dg_i on the objective function to be minimized also varies. While considering the minimization of the shape variation, one can see that configuration 3 is less interesting than configuration 1 in the sense that it minimizes the shape variation less (15459.52 > 13801.04 in table 4.33).
Finally, for a given configuration, one can notice that when the number of free rows increases, i.e. when there is more freedom, the objective function decreases and the solution is therefore closer to the design intent. This is visible when comparing the values of tables 4.33 and 4.34. For example, in figure 4.25 b), the shape variation of configuration 1 with 5 free rows is more minimized than with 4 free rows (11266.93 < 13801.04). Thus, the selection of the variables X is also important when setting up the optimization problem, as it affects the design intent.

We implemented Algorithms 1 and 2 (section 3.2.3) in MATLAB. Our code was executed on a Dell Precision M4800 under Windows 7. To evaluate the effect of the system decomposition, experiments are conducted on the glass example and the results are compared with a detection method without decomposition, namely WCM + IS (section 3.2.2), shown in figure 4.26. The experiments are based on a series of glasses with different numbers of variables and constraints (coincidence and tangency constraints). As shown in figure 4.27, the variables are the coordinates of the control points of rows 1 to 3, 4, 5 and 6, corresponding to 216, 288, 360 and 432 variables respectively. The coincidence and tangency constraints of points 1-8 are added incrementally: they are all turned off (marked 0) at the beginning, and then turned on (marked 1) one by one from point 1 to point 8 until all the constraints are added (all the constraints marked red). The computation times with respect to 3, 4, 5 and 6 rows of variables are shown in figures 4.28, 4.29, 4.30 and 4.31 respectively. For a given set of variables, if the number of equations is less than 20, the effect of the system decomposition is not obvious; however, when more equations are added, the system decomposition significantly reduces the computation time. On average over figures 4.28 to 4.31, the detection with decomposition is 6 times faster than without decomposition. Since the computation time strongly depends on the system to be analyzed, a theoretical complexity analysis has not been carried out here; the reader can anyhow check section 3.2.4 for a first understanding of the complexity issues, without considering the solving process.
Results with respect to tolerance
In our algorithm, the separation between the basis constraints (E_b) and the over-constraints (E_o) is based on the rank computation of the Jacobian matrix at the witness configuration, where the number of basis constraints is equal to the value of the rank. However, the latter depends on a tolerance: we use the Singular Value Decomposition (SVD) to compute the rank, which is taken equal to the number of singular values larger than the tolerance (tol_rank).
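A minimal NumPy illustration of this tolerance-dependent rank computation is given below; the matrix is a made-up example, not one of the thesis' Jacobians.

    import numpy as np

    def rank_with_tolerance(J, tol_rank):
        """Rank of J = number of singular values greater than tol_rank."""
        sigma = np.linalg.svd(J, compute_uv=False)
        return int(np.sum(sigma > tol_rank)), sigma

    # Nearly dependent rows: the third row differs from the sum of the first two
    # by only 1e-9 in its last entry, so the detected rank changes with tol_rank.
    J = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 1e-9]])
    for tol in (1e-12, 1e-6):
        r, sigma = rank_with_tolerance(J, tol)
        print(f"tol_rank={tol:g}  rank={r}  singular values={np.round(sigma, 12)}")

With a very small tolerance the third row is counted as independent, while a looser tolerance treats it as a near-dependency, which is precisely why the detection results vary with tol_rank.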
Formally, the singular value decomposition of an m × n real or complex matrix M is a factorization of the form UΣV*, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix. The diagonal entries σ_i of Σ are known as the singular values of M.

Table 4.35: Number of identified numerical over-constraints with respect to tol_rank (columns) and tol_rc (rows) on the teapot

In the Double-Banana geometry, our approach generates a solution that is much closer to the initial design intent than the one of Moinet et al. In the teapot geometry, we concluded that the linearization of non-linear systems is not reliable, since it induces more over-constraints than there really are. In the glass geometry, we illustrated that the selection of the set of over-constraints influences the deviation from the initial design intent. Finally, we showed the efficiency of the decomposition method used in our algorithm and demonstrated that the specification of the tolerances affects the detection results.
Conclusion and perspectives
The general objective of this work was the detection and treatment of geometric over-constraints during the manipulation of free-form surfaces. It results in an algorithm that detects geometric over-constraints (redundant and conflicting constraints) and finds the corresponding spanning groups. The detection of the over-constraints is performed at the level of equations but their treatment is performed at the level of geometry, which enables a manipulation of geometric constraints well suited to engineering design; in this way, the treatment tries to maintain the design intent. This work has been decomposed into two main parts: the definition of geometric over-constraints and the methods and tools for their detection (Chapter 2), and the selection and integration of these methods to give rise to the algorithms identifying the geometric over-constraints of free-form configurations, together with high-level manipulations of these over-constraints (Chapter 3). The proposed approach allows the detection and treatment of redundant and conflicting constraints. The proposed concepts and algorithms have been implemented and tested on both academic and industrial examples (Chapter 4).
Several practical conclusions have already introduced various perspectives relative to the developed modules. This final section of the manuscript mostly focuses on other perspectives which have not been discussed before.
Basic definitions, detection methods and tools... Chapter 2 of this document describes a set of basic definitions of geometric over-constraints, together with methods and tools that are able to detect them. Since geometric over-constraints can be defined structurally or numerically, the corresponding detection methods and tools are classified into structural and numerical detection methods. However, these structural definitions and detection methods are mainly
for systems containing only Euler geometries. For systems made from free-form geometries, there are no structural definitions, i.e. definitions based on DoF counting. As a result, we adopt numerical definitions of geometric over-constraints for free-form geometries, which require a constraint system represented in equation form and methods working at the level of equations. To select the methods that are useful for identifying geometric over-constraints in free-form configurations, several test cases are used, where the methods are tested and compared with respect to specified criteria.
One perspective stems from this part of the work: formal definitions of geometric over-constraints should be given for free-form geometry while considering its local support property. Also, the structural representation of free-form configurations as well as the corresponding decomposition methods should be discussed; the decomposition should take the local support property into account as well.
Over-constraints detection and resolution in geometric equation systems... Free-form configurations can be represented with a set of polynomial equations, and the problem is thus transformed into finding numerical over-constraints in this equation set. More specifically, since numerical over-constraints are either redundant or conflicting, sets of consistent and inconsistent equations are to be detected. To find them, chapter 3 describes a tool which combines structural decompositions with numerical analysis methods. In addition, our approach is able to provide different sets of over-constraints depending on the selected structural decomposition, and criteria are proposed to compare them and assist the user in choosing the constraints he/she wants to remove. Moreover, the spanning groups of the over-constraints are detected, which helps users quickly locate which set of constraints generates these over-constraints and makes the debugging process easier. The kernel of the proposed approach works on equations and variables, but the decision is taken by considering the geometric constraints specified by the designer at the geometric level. In chapter 4, the over-constraints detection and resolution processes have been described and analyzed with results on both academic and industrial examples. Our approach uses a general DoF-based constriction check enhanced by a WCM-based validation in a recursive assembly way, which allows interleaving the decomposition and recombination of the system of equations. As shown in the testing cases, it outperforms the existing detection methods with respect to generality and reliability, where generality refers to the range of types of geometries and constraints that can be handled, and reliability refers to the fact that the detected over-constraints satisfy our definitions of redundant and conflicting constraints.
A number of perspectives stem from this part of work:
• We have restricted the variables to the control points in this manuscript. Parameters like the degrees of the curves/surfaces or the weights of the NURBS can also be set as variables, leaving more freedom to the users when manipulating free-form geometries. Generally speaking, our algorithm can analyze such cases as well, since it is initially designed for analyzing systems of polynomial equations; its generality could be tested when parameters other than the coordinates of the control points are set as variables. However, our algorithm cannot be directly used for cases where the knots of the knot sequences are set as unknowns. This is due to the fact that, in this case, computing positions and derivatives on the curves/surfaces uses a recursive approach: no equations can be generated without knowing the values of the knot sequences, and without equations our approach cannot be set up.
• This gives rise to a second perspective: developing tools to detect over-constrained configurations when no equations (black-box constraints) are available. Such tools should detect inconsistencies and give feedback to the experts on how to modify them.
• An automation of the process should assist the designer in selecting the set of over-constraints that deviates least from his/her original design intent. As it is, the designer has access to three main criteria (dg, df, cond), which can be difficult to analyze for a non-expert. Thus, higher-level criteria should be imagined on top of those ones.
• The approach can be made interactive, i.e. allowing the designer to select between the different conflicting sets along the process, or even modify the faulty constraints.
• A complete solver, such as one based on interval analysis, could be used to solve the systems of equations. The current solver in our detection framework is based on the Levenberg-Marquardt (LM) algorithm, which may converge to a local minimum rather than the global one when solving a system of equations. Interval analysis is an approach that puts bounds on rounding and measurement errors in mathematical computation and thus yields reliable results. Such a solver could be incorporated into our detection framework to analyze the academic/industrial cases and compare the results with those of the LM solver.
Table 1 :
1 le système restant est moins mal-conditionné après la suppression de la contrainte conflictuelle (table 1). En fait, l'algorithme de Moinet et al. identifie e18 comme une équation conflictuelle et son retrait induit une déviation df = 4.38/32 de l'intention initiale de conception. Comparaison entre notre algorithme et l'approche de Moinet sur le cas du test Double-Banane. Puisque l'approche proposée supprime les sur-contr aintes identifiées, le système d'équations résultant F b (X) = 0 (section 0.3.4) peut devenir sous-contraint et un la fonction G(X) doit être minimisée. Ici, l'idée est de faire usage de l'approche de Pernot et al. pour définir deux types de comportement de déformation (Pernot et al., 2005): soit une minimisation de la variation de la forme (G 1 (X)) entre les configurations initiale et finale, soit la minimisation de la forme finale (G 2 (X)). En termes d'intention de conception, le premier a tendance à conserver la forme initiale du verre, alors que le second oublie la forme initiale et tend à générer des surfaces similaires aux structures de traction.
Méthode Witness Sur-contrainte df cond
Notre esquisse initiale e9 0.53/45 16.97
Moinet et al. esquisse initiale e18 4.38/32 73.33
0.4.2 Esquisser un verre 3D
Dans cet exemple, l'idée est de montrer comment l'approche proposée de
détection et de résolution de sur-contraintes peut supporter l'esquisse d'un
verre 3D composé de 4 patchs NURBS connectés. Le concepteur esquisse son
intention de conception et les exigences associées. L'objectif est de modifier
• Objective function:
Table 4 :
4 Evaluation des 9 configurations avec N rows = 4.
Minimization of G 1 (X) Minimization of G 2 (X)
Config. dg 1 df cond dg 2 df cond
1 11266.93 0.10000 4.0149e17 85121.36 0.10000 3.3441e17
2 14719.05 0.10280 4.6031e17 86295.47 0.25034 6.5355e19
3 12506.55 0.10277 1.7748e19 85190.96 0.20076 1.0972e18
4 9944.87 0.11452 1.7903e18 79428.31 0.20592 1.7041e18
5 8799.29 0.11448 6.1454e17 77800.57 0.22919 1.0218e18
6 12561.66 0.13935 4.1681e18 69603.16 0.76646 8.5100e16
7 10441.11 0.10684 1.0862e18 69502.72 0.70009 2.3460e18
8 11134.09 0.12097 2.5394e18 71465.72 0.65901 1.5773e18
9 11877.59 0.12601 1.3790e19 67661.55 0.80372 8.4472e17
Table 5: Evaluation of the 9 configurations with N rows = 5.
0.5 Conclusion and future work

In this work, an approach for finding all the over-constraints in free-form geometric configurations has been introduced. It relies on a coupling between structural decompositions and numerical analysis. The process and its algorithm have been described and analyzed, with results on both academic and industrial examples. The approach has several advantages: it is able to distinguish redundant from conflicting constraints; it is applicable to both linear and non-linear constraints; and it applies numerical methods to small sub-blocks of the original system, thus allowing it to scale to large configurations. Moreover, since the set of over-constraints of a system is not unique, it has been shown that our approach is able to provide different sets depending on the selected structural decomposition, together with criteria to compare them and help the user choose the constraints he/she wants to remove. Even though the kernel of the algorithm works on equations and variables, the decision is taken by considering the geometric constraints specified by the designer at a high level.
Table 2.1: Evaluations of definitions
Table 2.2: Symbols used to characterize the approaches.
Symbols Criteria
Not adapted/Incomplete
⊕ Well adapted/Complete
? Not appreciable
No meaning/Not applicable
Table 2.3: Criteria attached to the detection level (set 1)
Detection level: gradation of criteria
Level   Criteria                ⊕ (well adapted)   (not adapted)
a       type                    numerical          structural
b       redundant/conflicting   yes                no
c       spanning group          yes                no
Table 2.5: Criteria related to system modeling (set 3)
Table 2.10: Testing results of Curve 2 and Curve 3
Curve 2: BFS finds 2 local parts; D-M finds structurally under- and well-constrained subparts for each local part; coarse detection (G-J elimination and QR factorization) flags e7 and e8 as over-constraints; fine detection (G-J elimination) identifies e7 as conflicting and e8 as redundant.
Curve 3: BFS finds 2 local parts; D-M finds structurally under- and well-constrained subparts for each local part; coarse detection (G-J elimination and QR factorization) flags e15 and e16 as over-constraints; fine detection (G-J elimination) identifies e15 and e16 as redundant.
node 1 2 3 4 5 6 7 8
witness 9 X Y Z 29 51 26 54 30 41 34 13 26 51 1 21 14 17 18 49 36 48 30 29 61 7 5 46
witness 9' X Y Z 30 51 27 54 29 40 34 13 27 52 1 21 13 18 18 48 35 49 31 30 61 8 5 46
Table 2.14: Coordinates of witness configurations 9 and 9', where the latter is obtained by randomly inserting a perturbation into the coordinates of the former.
The selected structural and algebraic methods are summarized in table 3.1 and table 3.2 respectively, and are discussed in the next subsection.
Detection framework: first scenarios

Table 3.1: Structural methods selected from table 2.7
Level      Modeling          Method   Strong connected components
equation   bipartite graph   D-M      irreducible subsystems
geometry   bipartite graph   MWM      balanced sets
Table 3.2: Algebraic methods selected from table 2.7
                          Linear Method   Non-linear Method
over-constraints          WCM             WCM
redundancies/conflicts    G-J/QR          Gröbner basis / Incremental solving
3.1.1 Incremental detection framework
The detected over-constraints and their spanning groups are summarized in table 3.6. The spanning group of D.1 is {I.3, I.4}, which come directly from red block I. The spanning group of E.3 is more involved, however. In loop c), the spanning group is {E.1, E.2}. But in loop b), the solution of block C is propagated to the equations of block E, resulting in red block E in loop c). Therefore, the equations of block E in loop c) are equivalent to the equations of blocks C and E in loop b), and the spanning group of E.3 is extended to {E.1, E.2, C.1, C.2, C.3, I.1}. Similarly, in loop a), the solution of block A updates the structure of block C. By adding the equations of block A, the final spanning group of E.3 is {E.1, E.2, C.1, C.2, C.3, I.1, A.1, A.2, A.3, A.4}. In contrast to E.3, F.1 is of type II and thus has no spanning group inside block F; its final spanning group is a set of equations outside the block (table 3.6, third column).

Table 3.6: Over-constraints and their spanning groups
Over-constraint   Spanning group
D.1               I.3, I.4
E.3               E.1, E.2, C.1, C.2, C.3, I.1, A.1, A.2, A.3, A.4
F.1               equations outside block F (see text)
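The accumulation of a spanning group during propagation can be sketched as follows; this is a minimal illustrative sketch (not the thesis implementation), and the block contents and dependency graph below are simplified placeholders following the loops a), b) and c) described above.

```python
# Minimal sketch: accumulate the spanning group of an over-constraint while
# propagating solved blocks through the dependency DAG (illustrative data).
from collections import defaultdict

dag = {"A": ["C"], "C": ["E"]}                       # block -> dependent blocks
equations = {
    "A": ["A.1", "A.2", "A.3", "A.4"],
    "C": ["C.1", "C.2", "C.3", "I.1"],
    "E": ["E.1", "E.2", "E.3"],
}
ancestry = defaultdict(list)                          # equations injected into a block

def propagate(solved_block):
    """Push the solved block's equations (and its own ancestry) to dependents."""
    inherited = ancestry[solved_block] + equations[solved_block]
    for dep in dag.get(solved_block, []):
        ancestry[dep].extend(e for e in inherited if e not in ancestry[dep])

def spanning_group(over_constraint, block):
    """Other equations of the block plus all inherited equations."""
    local = [e for e in equations[block] if e != over_constraint]
    return local + ancestry[block]

for solved in ["A", "C"]:          # blocks solved before E is analyzed
    propagate(solved)
print(spanning_group("E.3", "E"))  # E.1, E.2 plus the equations of A and C
```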
Table 4.1: Incidence matrix of the equations of the Double Banana geometry

Table 4.4: Comparison between our algorithm and Moinet's approach on the Double-Banana testing case
Method          Witness          Over-constraint   df        cond
Ours            initial sketch   e9                0.53/45   16.97
Moinet et al.   initial sketch   e18               4.38/32   73.33

It has been shown that our algorithm works well on this geometry. To see whether it also works well on other configurations, we introduce the following industrial example: a 3D teapot, whose geometry and constraints are much more complex.
Constraint   Equations   Type         Component
1 1-3 linear 1
2 4-6 non-linear 1
3 7-9 linear 3
4 10-12 non-linear 3
5 13-15 linear 5
6 16-18 non-linear 5
7 19-21 linear 2
8 22-24 non-linear 2
9 25-27 linear 4
10 28-30 non-linear 4
11 31-33 linear 6
12 34-36 non-linear 6
13 37-39 linear 8
14 40-42 non-linear 8
15 43-45 linear 7
16 46-48 non-linear 7
17 49 linear 5
18 50 linear 5
19 51 linear 7
20 52 linear 7
Table 4.5: Typology of constraints and equations involved in the description of the 3D teapot sketching example
Results and discussion

Table 4.11: Third time D-M decomposition on CC5 (d=3)
Table 4.13: First time D-M decomposition on CC6 (d=1)
Analyzing component 7 (CC7). Component 7 contains 8 equations and 6 variables (table 4.14); after D-M decomposition, it is initially decomposed as detailed in the results below.
Table 4.16: Second time D-M decomposition on CC7 (d=2)
Figure 4.17: Evolution of G 1 72
Analyzing component 8 (CC8). Component 8 contains 6 equations and 3 variables (table 4.17). After D-M decomposition, it is initially decomposed into a structurally over-constrained subpart (G 10 81 ) (table 4.18). For G 10 81 , the analysis at the step of G 11 81 is illustrated in figure 4.18.
Detected over-constraints and the corresponding spanning groups are summarized in table 4.20, which shows which equation is redundant or conflicting with which group of equations. However, this cannot be presented directly to users for debugging purposes, since users do not work at the level of the equations but at the level of the constraints. To make it easier to understand, the over-constraints and their spanning groups at the level of the geometry are shown in table 4.21. The correspondence between constraints and equations is given in table 4.5.

Table 4.18: First time D-M decomposition on CC8 (d=1)
Figure 4.18: Evolution of G 1 81
Finally, the proposed detection and resolution method has been applied to the 8 components, and the detection results are summarized in table 4.19.
Component 1 2 3 4 5 6 7 8 Over √ √ √ Initial D-M Well Under √ √ √ √ √ √ √ √ D-M times 1 1 1 1 3 1 2 1 Redundant e6 e24 e10 e28 e16/e17/e18 e34/e35/e36 e40/e41/e42 Over-constraints Type Conflicting I I I I I e50 I e51/e52 II Type II II
Table 4.19: Detection results of the teapot geometry with non-linear equations directly
Table 4.20: Over-constraint and the spanning group at the level of equations on the teapot

Over-constraint   Spanning group
constraint 1      constraint 2
constraint 8      constraint 7
constraint 4      constraint 3
constraint 10     constraint 9
constraint 18     constraint 17
constraint 6      constraint 5, 17
constraint 12     constraint 11
constraint 19     constraint 15
constraint 20     constraint 15
constraint 14     constraint 13
Table 4.21: Over-constraints and the spanning groups at the level of geometries on the teapot
The results are summarized in table 4.22. Compared with table 4.19, {e34, e35} are now detected as conflicting and {e46, e48} are new conflicting constraints.
Component 1 2 3 4 5 6 7 8 Initial D-M Over Well Under √ √ √ √ √ √ √ √ √ √ √ D-M times 1 1 1 1 3 1 2 1 Over-constraints Redundant Type Conflicting e6 e24 e10 e28 e16/e17/e18 I e50 e36 I e34/e35 e51/e52, e46/e48 e40/e41/e42 II Type I I I I II I II,I
Table 4.22: Detection results of linearized teapot geometry
Table 4.23: Deviations (∆) from the design intent when removing over-constraints detected in the original and linearized systems.
Original Linearized
over-constraint ∆ over-constraint ∆
e6 1.32e-14 e6 0.0118
e24 6.46e-15 e24 0.0111
e10 5.31e-15 e10 4.55e-04
e28 2.91e-14 e28 0.0017
e16 3.10e-09 e16 3.69e-05
e17 1.70e-09 e17 2.01e-05
e18 1.11e-13 e18 1.47e-08
e34 2.68e-14 e34 0.0096
e35 2.17e-14 e35 0.0052
e36 1.35e-14 e36 3.97e-07
e40 -1.29e-11 e40 -1.29e-11
e41 2.94e-07 e41 2.94e-07
e42 -5.46e-12 e42 -5.46e-12
e50 -1.40e+01 e50 -1.40e+01
e51 -2.52 e51 -2.52
e52 9.16 e52 9.16
e46 0.0074
e48 0.0034
After applying BFS, as visible in figure 4.20, there are 98 unconnected components (red vertices) in the graph, 96 of which contain only variables (right part of the figure). The other two components, CC1 and CC2, contain both variables and equations. Since the two components are similar, the process is illustrated only on component CC2. After applying the D-M decomposition (first time) on the initial CC 0 2, we obtain a structurally over-constrained subpart (G 1 21, green section of table 4.26), a well-constrained subpart (G 1 22, yellow section of table 4.26) and an under-constrained subpart (G 1 23, gray section of table 4.26). The maximum matching of these subparts is the diagonal (marked red) of each section. The process of analyzing each subpart is illustrated as follows:
• Evolution of the G 1 21 subpart. As shown in figure 4.21, G 1 21 is initially set to G 10 21 before maximum matching, and the strongly connected blocks are generated. After applying orderedLinkedSCC, the strongly connected components {SCC 11 211, ..., SCC 11 215} are generated as red blocks and G 10 21 is updated to G 11 21. Since these blocks are solvable, the solutions are propagated to the other dependent blocks, and the same procedure is applied on G 11 21.
Sketching a 3D glass
Figure 4.20: Constraint graph between variables
4.3.2 Detection Process
Finding local parts
Constraint   Equations   Type         Component
4            1-3         linear       1
2            4-6         linear       2
1            7-9         linear       1
3            10-12       linear       2
5            13          non-linear   1
6            14          non-linear   2
7            15-17       linear       1
8            18-20       non-linear   1
9            21-23       linear       1
10           24-26       non-linear   1
11           27-29       linear       2
12           30-32       non-linear   2
13           33-35       linear       2
14           36-38       non-linear   2
15           39-41       linear       1
16           42-44       non-linear   1
17           45-47       linear       1
18           48-50       non-linear   1
19           51-53       linear       2
20           54-56       non-linear   2
21           57-59       linear       2
22           60-62       non-linear   2
Table 4.24: Typology of constraints and equations involved in the description of the 3D glass sketching example

Analysis of the local parts
Table 4.25: Component 2 (CC2 31 × 48)
• Evolution of the G 2 22 subpart. For the new system CC 1 2, the D-M decomposition is applied a second time, resulting in a structurally well-constrained subpart (G 2 22, yellow section of table 4.27) and an under-constrained subpart (G 2 23, gray section of table 4.27). As shown in figure 4.22, after applying orderedLinkedSCC to G 20 22, the strongly connected components {SCC 21 221, ..., SCC 21 226} are solvable red blocks. No numerical over-constraints are found and the solutions are propagated to the other dependent blocks. The system is now updated to CC 2 2.
• Evolution of the G 3 23 subpart. After applying the D-M decomposition a third time to the new system CC 2 2, only a structurally under-constrained subpart G 3 23 is generated. As shown in figure 4.23, orderedLinkedSCC is then applied to G 30 23.
Figure 4.21: Evolution of G 1 21
Figure 4.22: Evolution of G 2 22
Table 4.29: Detection results of CC2 of the glass geometry
Component: 2
Initial D-M: over-constrained √, well-constrained √, under-constrained √
D-M times: 3
Over-constraints: redundant e32/e56/e38/e62 (type I); conflicting e14 (type II)
Table 4.33: Evaluation of the 9 configurations with N rows = 4
          Minimization of G 1 (X)               Minimization of G 2 (X)
Config.   dg 1        df        cond            dg 2        df        cond
1         11266.93    0.10000   4.0149e17       85121.36    0.10000   3.3441e17
2         14719.05    0.10280   4.6031e17       86295.47    0.25034   6.5355e19
3         12506.55    0.10277   1.7748e19       85190.96    0.20076   1.0972e18
4         9944.87     0.11452   1.7903e18       79428.31    0.20592   1.7041e18
5         8799.29     0.11448   6.1454e17       77800.57    0.22919   1.0218e18
6         12561.66    0.13935   4.1681e18       69603.16    0.76646   8.5100e16
7         10441.11    0.10684   1.0862e18       69502.72    0.70009   2.3460e18
8         11134.09    0.12097   2.5394e18       71465.72    0.65901   1.5773e18
9         11877.59    0.12601   1.3790e19       67661.55    0.80372   8.4472e17

Table 4.34: Evaluation of the 9 configurations with N rows = 5
A number of perspectives arise from this work. First, further automation of the process should help the designer choose the set of over-constraints to remove. The designer has access to three main criteria (dg, df, cond), which may be difficult to analyze for a non-expert; higher-level criteria should therefore be devised on top of them. Second, the approach could be made interactive, i.e. allow the designer to choose between the different conflicting sets along the process, or even to modify the faulty constraints. Finally, it is planned to extend this work so that it can be used to detect and explain geometric configurations which, even when solvable, result in poor quality designs.
Benchmark and analysis of use cases
The previously introduced detection methods have been classified and summarized in table 2.7, which clearly shows their theoretical capabilities. Their practical capabilities are studied in this section on different use cases.

Problem description

The first use case corresponds to the deformation of a B-spline curve of degree 3 defined by 13 control points and a knot sequence U = {u0, u0, u0, u0, u1, ..., u9, u10, u10, u10, u10} with u0 = 0 and u10 = 1; the knot values are known. This knot sequence defines 10 segments [ui, ui+1] for which it is important to understand whether they are under-, well- or over-constrained. Figure 2.31 shows the B-spline curve with 10 of the control points (green circles) free to move and the remaining ones fixed (red triangles). The deformation is driven by 9 position constraints (p1-p9). Finally, the deformation problem is defined by 10 couples of variables (xj, yj), which are the coordinates of the free control points, and 18 equations ek with k ∈ {1..18}. Free control points and position constraints are presented in Table 2.8 together with the corresponding variables and equations. As shown in Figure 2.31, the curve is composed of 10 local segments according to the 10 intervals of the knot vector. The results of a manual DoF analysis of each segment are presented in table 2.9. The curve is globally under-constrained but contains locally under-/well-/over-constrained subparts. However, DoF-based counting does not accurately reflect the non-solvability of this system. More precisely, the result in Figure 2.32 shows that constraints p1 in Seg1 and p5 in Seg7 are structurally under-constrained, constraints {p6, p7, p8, p9}
in Seg9 are structurally over-constrained, and the remaining constraints {p2, p3, p4} in Seg3 are structurally well-constrained. This result matches well with table 2.9. Moreover, since equations {e16, e17} are unmatched, constraints {p8, p9} are structurally over-constrained. Structural analysis using the D-M decomposition thus gives the same results as the manual DoF analysis. However, the D-M decomposition fails when it comes to detecting numerical over-constraints. New examples are therefore introduced to illustrate such configurations. Figure 2.33 shows two new configurations obtained by modifying the previously defined constraints. For Curve 2, a point of the curve is assigned to two different positions (p3 = p4), thus creating a conflict. For Curve 3, a point of the curve is assigned to two similar constraints (p8 = p9), thus creating a redundancy.
As illustrated in figure 2.34, the D-M decomposition gives the same results for the two configurations. Thus, the subtle difference between the two configurations cannot be distinguished, which confirms the need for further analyzing the system using numerical methods.
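The distinction a numerical method makes between the two configurations can be illustrated with the following minimal sketch (not taken from the thesis): it builds the linear system of point-position constraints on a cubic B-spline and inspects it with rank tests. The interior knot values, the constrained parameter and the target values are illustrative assumptions.

```python
# Minimal sketch: distinguishing redundant from conflicting position constraints
# on a cubic B-spline via numerical rank tests (illustrative data).
import numpy as np
from scipy.interpolate import BSpline

k = 3                                                            # cubic B-spline
t = np.concatenate(([0.0]*4, np.linspace(0.1, 0.9, 9), [1.0]*4)) # clamped knot vector (17 knots)
n = len(t) - k - 1                                               # 13 control points

def basis_row(u):
    """Row of basis values N_i(u), i = 0..n-1."""
    return np.array([BSpline(t, np.eye(n)[i], k)(u) for i in range(n)])

# Two position constraints imposed at the same parameter u = 0.5:
# identical targets -> redundancy, different targets -> conflict.
A = np.vstack([basis_row(0.5), basis_row(0.5)])
b_conflict = np.array([2.0, 3.0])
b_redundant = np.array([2.0, 2.0])

def classify(A, b, tol=1e-8):
    rA = np.linalg.matrix_rank(A, tol)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]), tol)
    if rA == A.shape[0]:
        return "independent"
    return "conflicting" if rAb > rA else "redundant"

print(classify(A, b_conflict))   # -> conflicting (as for Curve 2)
print(classify(A, b_redundant))  # -> redundant  (as for Curve 3)
```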
Detection framework: first scenarios
Finding local parts
The Breadth First Search (BFS) is used to find the unconnected components of the constraint graph. As revealed by BFS, the constraint graph contains only one component, meaning that there is only one local part in the whole system. The incidence matrix between the equations and the variables of this system is given in table 4.1.
Results and discussion
Analyzing component 5 (CC5). Component 5 contains equations and 12 variables (table 4.8). After D-M decomposition, it is initially decomposed into structurally over-constrained (G 10 51 ), well-constrained and under-constrained subparts (table 4.9). For G 10 51 , at the step of G 11 51 , the red block is solvable and the solution is propagated to the block containing equation e50. As a result, the difference is -1.40e+01 and e50 is conflicting with e49 (type II). After launching the D-M decomposition a second time, the system is decomposed into structurally well-constrained (G 20 52 ) and under-constrained subparts (table 4.10). The evolution of G 20 52 is shown in figure 4.13. However, no over-constraints are identified and the solution of the red block is propagated to the other blocks. Finally, after applying the D-M decomposition three times, the whole system is structurally under-constrained (table 4.11). Since all red blocks contain only variables, all the equations in table 4.11 are analyzed as a whole. As a result, equations e16, e17 and e18 are redundant (type I). The spanning group of these three equations is {e13, e14, e15, e49}.
After D-M decomposition, component 7 is initially decomposed into a structurally over-constrained subpart (G 10 71 ) and a well-constrained subpart (table 4.15). For G 10 71 , at the step of G 11 71 , the red block is solvable and the solution is propagated to the block of e52 and the block of e51. The differences are -2.52 and 9.16 respectively, and both are conflicting with e43 (type II). After launching the D-M decomposition a second time, the system is structurally well-constrained (G 20 72 ). The evolution of G 20 72 is shown in figure 4.17. However, no over-constraints are identified and the solutions of the red blocks are propagated to the other blocks.
Over-constraints and the spanning groups
The analysis of these two components gives rise to the identification of 2 conflicting equations (e13, e14), which correspond to the position or distance constraints. In addition, 8 redundant equations are detected, which are contained in 8 tangent constraints (section 4.3.1). Since the result of the detection process is not unique, 9 configurations are obtained and gathered in Table 4.32. Here, one has to remember that even if the detection process identifies conflicting equations, our algorithm removes the constraints associated with those equations. For example, configuration 1 considers that the two distance constraints (one between patches P1 and P4, and the other between P2 and P3) are to be removed (0 in the table) and the 4 position constraints are kept (1 in the table).

Table 4.30: Over-constraints and their spanning groups at the level of equations on the glass
Over-constraint     Spanning group
e38, e56, e62       e4, e5, e6, e10, e11, e12, e27, e28, e29, e30, e31, e33, e34, e35, e36, e37, e51, e52, e53, e54, e55, e57, e58, e59, e60, e61
e24, e42, e48       e1, e2, e3, e7, e8, e9, e15, e16, e17, e19, e20, e21, e22, e23, e25, e26, e39, e40, e41, e43, e44, e45, e46, e47, e49, e50
The rank of M is equal to the number of singular values σ i of its singular value decomposition (M = U Σ V^T) that are greater than tol_rank.
Moreover, the tolerance can also affect the number of conflicting and redundant constraints. In our algorithm, the detected over-constraints are of two types: type I and type II. Further distinguishing the redundant from the conflicting constraints inside these two types requires a solving and a feeding process respectively. On the one hand, the termination of a solving process is determined by termination tolerances on the function value, on the first-order optimality, and on the change of the variables between steps. On the other hand, substituting a solution X0 into a constraint e(X) = P gives a new value e(X0) = P'. Comparing the absolute difference |P - P'| with the tolerance determines whether the constraint is conflicting (|P - P'| > tolerance) or redundant (|P - P'| ≤ tolerance). Here, we call this tolerance tol_rc.
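The role of the two tolerances can be summarized by the following minimal sketch (illustrative, not the thesis code): tol_rank drives the numerical rank of a matrix through its SVD, and tol_rc decides whether a candidate over-constraint is redundant or conflicting; the matrix, constraint and values below are placeholders.

```python
# Minimal sketch of the two tolerances tol_rank and tol_rc (illustrative data).
import numpy as np

def numerical_rank(M, tol_rank):
    # rank = number of singular values above tol_rank
    sigma = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(sigma > tol_rank))

def classify_constraint(e, X0, P, tol_rc):
    # e: callable evaluating the left-hand side of the constraint e(X) = P
    P_new = e(X0)
    return "conflicting" if abs(P_new - P) > tol_rc else "redundant"

M = np.array([[1.0, 2.0], [2.0, 4.0 + 1e-9]])        # nearly rank-deficient matrix
print(numerical_rank(M, tol_rank=1e-6))               # -> 1
print(classify_constraint(lambda X: X[0] + X[1],
                          X0=np.array([1.0, 1.0]),
                          P=2.5, tol_rc=1e-6))        # -> conflicting
```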
In this section, we use the teapot example to show the influence of the two tolerances (tol_rc and tol_rank) on the results of the over-constraint detection. Both tol_rc and tol_rank range from 10^-1 to 10^-10. Note that, in the teapot testing case (section 4.2), the two tolerances were both fixed to 10^-6. The results are summarized in tables 4.35, 4.36 and 4.37, where the results of the testing case of section 4.2 are highlighted.
Table 4.36: Number of identified conflicting constraints with respect to tol_rank (columns) and tol_rc (rows) on the teapot

From the three tables, we can see that both tolerances influence the number of redundant constraints (table 4.37) and of numerical over-constraints (table 4.35). The number of conflicting constraints, however, remains the same, since the differences of the conflicting constraints in figure 4.8 are larger than the range of specified tolerances. This case is specific in that these differences were intentionally made large at the outset to test the algorithm. In practical computations, one has to pay attention to the specification of the two thresholds which, as demonstrated in this example, affect the detection results.
Actually, the two thresholds are to be consistent with respect to the accuracy of the adopted CAD modeler. So, if the CAD modeler is using an accuracy of 1e-4, then our algorithm should be set up with this value for the two thresholds.
Conclusion
In this work, the over-constraint detection and resolution process has been described and analyzed, with results on both academic and industrial examples. More specifically, on the Double-Banana geometry, we showed that removing the over-constraint identified by our approach leads to a smaller deviation from the design intent and to a better-conditioned remaining system than the approach of Moinet et al.
Conclusion and perspectives
• Find a minimal spanning group of an over-constraint. Our method finds one spanning group, but not necessarily a minimal one; this could be extended in the future. Presenting the user with a minimal spanning group of an over-constraint would help him/her avoid considering too many constraints when debugging an over-constrained system.
Towards advanced detection of solvable configurations generating poor quality solutions...
It is planned to extend this work so that it can be used to detect and explain geometric configurations which, even when solvable, result in poor quality designs. In the context of free-form surface deformation, poor-quality configurations are of various types: surfaces with ridges, saddles, troughs, domes, saddle ridges, etc. Using artificial intelligence techniques, the rules linking the input parameters to the generation of bad shapes could be obtained by learning from carefully selected bad-quality examples. Once these rules are obtained, they can be used on a new case to estimate a priori the impact of the input parameters without having to perform it.

This thesis proposes an original decision-support approach to address over-constrained geometric configurations during computer-aided design. It focuses in particular on the detection and resolution of redundant and conflicting constraints when deforming free-form surfaces made of NURBS patches. Based on a series of structural decompositions coupled with numerical analyses, the proposed approach handles both linear and non-linear constraints. The structural decompositions are particularly efficient because of the local support property of NURBS. Since the result of this detection process is not unique, several criteria are introduced to help the designer identify which constraints to remove in order to minimize the impact on his/her initial design intent. Thus, even if the kernel of the algorithm works on equations and variables, the decision is taken by considering the geometric constraints specified by the user.
Keywords: product development process, free-form surface deformation, linear and non-linear equations, locally over-constrained subparts, redundant and conflicting constraints, structural decomposition, numerical analysis, decision support, design intent
DETECTION AND TREATMENT OF INCONSISTENT OR LOCALLY OVER-CONSTRAINED CONFIGURATIONS DURING THE MANIPULATION OF 3D GEOMETRIC MODELS MADE OF FREE-FORM SURFACES
ABSTRACT: Today's CAD modelers need to incorporate more and more advanced functionalities for the modeling of high-quality products defined by complex shapes.
There is notably a need for developing decision support tools to help designers during the free-form geometric modeling phase of the Product Design Process (PDP). Actually, the final shape of a product often results from the satisfaction of requirements sketched during an incremental design process. The requirements can be seen as constraints to be satisfied. To shape a free-form object, designers have to specify the geometric constraints the object has to satisfy. Most of the time, those constraints will generate a set of linear and non-linear equations linking variables whose values have to be found.
During the modeling process, designers may involuntarily express the same requirement several times using different constraints, thus leading to redundant equations, or they may involuntarily generate conflicting equations resulting in unsatisfiable configurations. Additionally, due to the local support property of NURBS, equations may not span all variables, thus resulting in locally over-constrained subparts.
This work proposes an original decision-support approach to address over-constrained geometric configurations in Computer-Aided Design. It focuses particularly on the detection and resolution of redundant and conflicting constraints when deforming freeform surfaces made of NURBS patches. Based on a series of structural decompositions coupled with numerical analyses, the proposed approach handles both linear and nonlinear constraints. The structural decompositions are particularly efficient because of the local support property of NURBS. Since the result of this detection process is not unique, several criteria are introduced to drive the designer in identifying which constraints should be removed to minimize the impact on his/her original design intent. Thus, even if the kernel of the algorithm works on equations and variables, the decision is taken by considering the user-specified geometric constraints. |
01762672 | en | [
"sdv.neu",
"sdv.ib.ima"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01762672/file/EpiToolsSEEG.pdf | Samuel Medina Villalon
Rodrigo Paz
Nicolas Roehri
Stanislas Lagarde
Francesca Pizzo
Bruno Colombet
Fabrice Bartolomei
Romain Carron
Christian-G Bénar
email: christian.benar@univ-amu.fr
Epitools, a software suite for presurgical brain mapping in epilepsy : Intracerebral EEG
Keywords: Epilepsy, SEEG, automatic segmentation, contacts localization, 3D rendering, CT
Introduction
About one third of patients with epilepsy are refractory to medical treatment [START_REF] Kwan | Early identification of refractory epilepsy[END_REF]. Some of them suffer from focal epilepsy; for these patients, epilepsy surgery is an efficient option [START_REF] Ryvlin | Epilepsy surgery in children and adults[END_REF]. The goal of epilepsy surgery is to remove the epileptogenic zone (EZ), i.e., the structures involved in the primary organization of seizures, and to spare the functional cortices (e.g. involved in language or motor functions). For this purpose, a pre-surgical work-up is needed including a non-invasive phase with clinical history and examination, cerebral structural and functional imaging (MRI and PET), neuropsychological assessment and long-term surface EEG recordings. Nevertheless, for about one quarter of these patients, it is difficult to correctly identify the EZ and/or its relationship with functional areas. For these difficult cases, an invasive study with intracranial EEG is required [START_REF] Jayakar | Diagnostic utility of invasive EEG for epilepsy surgery: Indications, modalities, and techniques[END_REF]. Stereo-EEG (SEEG) is a powerful method to record local field potentials from multiple brain structures, including mesial and subcortical structure. It consists in a stereotaxic surgical procedure performed under general anesthesia and aiming at implanting 10 to 20 electrodes within the patient's brain. Each electrodes being made up of 5 to 18 contacts, [START_REF] Mcgonigal | Stereoelectroencephalography in presurgical assessment of MRI-negative epilepsy[END_REF], [START_REF] Cossu | Explorations préchirurgicales des épilepsies pharmacorésistantes par stéréo-électro-encéphalographie : principes, technique et complications[END_REF]. SEEG allows high resolution mapping of both the interictal, i.e. between seizures, and ictal activity, and helps delineate the epileptic network, determine which specific areas need to be surgically removed [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF], and identify the functional regions to be spared [START_REF] Fonseca | Hemispheric lateralization of voice onset time (VOT) comparison between depth and scalp EEG recordings[END_REF]. The SEEG interpretation crucially relies on combined use of three sources of information: clinical findings, anatomy and electrophysiology. Therefore, it is of utmost importance to precisely localize the origin of the SEEG signal within the patient's anatomy, as was done in the field of EcoG [START_REF] Dykstra | Individualized localization and cortical surface-based registration of intracranial electrodes[END_REF], [START_REF] Groppe | iELVis : An open source MATLAB toolbox for localizing and visualizing human intracranial electrode data[END_REF].
The intracerebral position of SEEG electrodes is usually assessed visually by clinicians, based on the registration of the MRI scan onto the CT image. However, this approach is time consuming (100 to 200 contacts per patient) and a potential source of error. Software allowing automatic localization of electrode contacts within the patient's MRI would thus be very helpful. Such software should be able to perform: i) an automatic and optimal localization of contact positions with respect to each patient's cerebral anatomy, ii) an automatic labelling of the SEEG contacts within an individualized atlas.
Currently, there isn't any software available to perform automatically such processes. Indeed, previous studies have proposed semi-automatic registration of intracranial electrodes contacts, based on Computerized tomography (CT) and Magnetic resonance imaging (MRI) images and prior information such as planed electrodes trajectory [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF], [START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF] or manual interactions [START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF], [START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF].
On the other hand, advances in signal analysis (including connectivity) of epileptic activities in SEEG have been major in recent years and now constitute key factors in the understanding of epilepsy [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. The translation of such advanced analyses to clinical practice is challenging but should lead to improvement in SEEG interpretation. Therefore, there is a need to easily apply some signal analysis to raw SEEG signal and then to graphically display their results in the patient's anatomy.
Our objective was thus to design a suite of software tools, with user-friendly graphical user interfaces (GUIs), which enables to i) identify with minimal user input the location of SEEG contacts within individual anatomy ii) label contacts based on a surface approach as implemented in FreeSurfer (Fischl, 2012) or MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF] iii) interact with our in-house AnyWave software [START_REF] Colombet | AnyWave: A cross-platform and modular software for visualizing and processing electrophysiological signals[END_REF] and associated plugins in order to display signal processing results in the patient's MRI or in a surface rendering of the patient's cortex. Hereafter, we will describe step by step the implementation of our suite "EpiTools". The suite as well as the full documentation are freely available on our web site http://meg.univ-amu.fr/wiki/GARDEL:presentation. It includes mainly the following software programs: GARDEL (for "GUI for Automatic Registration and Depth Electrodes Localization"), the 3Dviewer for 3D visualization of signal processing results within the AnyWave framework.
Material and Methods
The complete pipeline is illustrated in Fig. 1. Firstly, the FreeSurfer pipeline (1) can optionally be run on the MRI image to obtain the pial surface and atlases. Then, the "GARDEL" tool (2) co-registers the MRI to the CT scan, automatically detects the SEEG contacts and labels them if an atlas is available; it saves the electrode coordinates for later reuse. Finally, the "3Dviewer" tool (4) displays signal analysis results, obtained with "AnyWave" (3) and its plugins, inside the patient's individual brain mesh or on MRI slices. The inputs are: i) pre-implantation MR and post-implantation CT images, ii) if needed, the results of the FreeSurfer pipeline (pial surface and atlases) (Fischl, 2012) or MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF], iii) SEEG electrophysiological signal processing results. The output consists of a 3D visualization of these data in the patient's own anatomical rendering, a selection of the SEEG contacts found inside grey matter for signal visualization in the AnyWave software, and the brain matter or label associated with each contact. The different tools of our pipeline are written in Matlab (Mathworks, Natick, MA) and can be compiled to be used as standalone programs on all operating systems (requiring only the freely available Matlab runtime).
SEEG and MRI data
In our center, SEEG exploration is performed using intracerebral multiple-contact electrodes (Dixi Medical (France) or Alcis (France): 10-18 contacts with length 2 mm, diameter 0.8 mm, and 1.5 mm apart; for details see [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF]). Some electrodes include only one group of contacts regularly spaced (5,10, 15 and 18 contacts per electrode), and others are made up of 3 groups of 5 or 6 contacts spaced by 7-11 mm (3x5 or 3x6 contacts electrodes). There are two types of electrodes implantation: orthogonal and oblique. Electrodes implanted orthogonally are almost orthogonal to the sagittal plane and almost parallel to the axial and coronal planes. Electrodes implanted obliquely can be implanted with variable angle. The choice of the type and number of electrodes to be implanted depends on clinical needs. Intracerebral recordings were performed in the context of their pre-surgical evaluation. Patients signed informed consent, and the study was approved by the Institutional Review board (IRB00003888) of INSERM (IORG0003254, FWA00005831).
All MR examinations were performed on a 1.5 T system (Siemens, Erlangen, Germany) during the weeks before SEEG implantation. The MRI protocol included at least T1-weighted gradient-echo, T2weighted turbo spin-echo, FLAIR images in at least two anatomic planes, and a 3D-gradient echo T1 sequence after gadolinium based contrast agents (GBCA) injection. Cerebral CT (Optima CT 660, General Electric Healthcare, 120 kV, 230-270 FOV, 512x512 matrix, 0.6mm slice thickness), without injection of contrast agents, were performed the day after SEEG electrodes implantation. Each CT scan was reconstructed using the standard (H30) reconstruction kernel to limit the level of streaks or flaring.
The AnyWave framework
Our tools GARDEL and the 3Dviewer are intended to interact with the AnyWave1 software, developed in our institution for the visualization of electrophysiological signals and for subsequent signal processing [START_REF] Colombet | AnyWave: A cross-platform and modular software for visualizing and processing electrophysiological signals[END_REF]. AnyWave is multi-platform and allows to add plugins created in Matlab or Python2 . Some specific plugins were created and added to implement measure for SEEG analysis as Epileptogenicity Index (EI) [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF], non-linear correlation h 2 [START_REF] Wendling | Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG[END_REF], Interictal spikes or high frequency oscillations (HFO) detections [START_REF] Roehri | What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations[END_REF][START_REF] Roehri | Time-Frequency Strategies for Increasing High-Frequency Oscillation Detectability in Intracerebral EEG[END_REF] as well as graph measures [START_REF] Courtens | Graph Measures of Node Strength for Characterizing Preictal Synchrony in Partial Epilepsy[END_REF].
Electrode contact segmentation and localization
GARDEL localizes the SEEG contacts in the patient's anatomy. Unlike most existing techniques [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF][START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF][START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF][START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF], it relies on an automatic segmentation of each contact. Minimal user intervention is required (only for attributing the electrode names). It also labels each individual contact with respect to a chosen atlas (Desikan-Kiliany (Desikan et al., 2006), Destrieux [START_REF] Destrieux | Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature[END_REF], or in particular MarsAtlas [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF]).
GARDEL takes as input a post-implantation CT-scan (where the SEEG contacts are visible as hyper density signals areas) and an anatomical MRI Volume. DICOM, NIFTI or Analyze formats are accepted.
Classically, clinical images are in the DICOM format, and are converted into a NIfTI format for easier manipulation, using a DICOM to NIfTI converter3 that can be launched within GARDEL. Image opening is made thanks to Jimmy Shen ("Tools for NIfTI and ANALYZE image") toolbox4 .
Since MRI and CT images have different resolution, size and origin, spatial alignment and registration are performed. In order to maintain the good quality of the CT image and keep an optimal visualization of electrodes, it was preferred to register the MRI to the CT space. The MRI is registered using maximization of Normalized Mutual Information and resliced into the CT space using a trilinear interpolation, thanks to the SPM85 toolbox [START_REF] Penny | Statistical Parametric Mapping: The Analysis of Functional Brain Images[END_REF].
With both images co-registered in the same space, the next step is to segment the electrodes on the CT scan. To do so, the resliced MRI is segmented into 3 regions: white matter, grey matter and cerebrospinal fluid using SPM (spm_preproc function). These three images are combined into a mask that enables us to remove extra-cerebral elements such as skull and wires (spm_imcalc function).
In order to segment the electrodes, a threshold is derived from the histogram of grey values of the masked CT image. Since electrode intensity values are significantly greater than those of brain structures [START_REF] Hebb | Imaging of deep brain stimulation leads using extended hounsfield unit CT[END_REF], electrode voxels are defined as outliers, with a threshold based on the quartiles: Thr = Q3 + 1.5*IQR, with Q3 the third quartile and IQR the inter-quartile range (Q3-Q1); this threshold can also be adjusted. The electrode segmentation is divided into 2 steps: the first step aims at segmenting each electrode individually and the second step aims at separating each contact of a given electrode. Thus, once the CT images have been thresholded, the resulting binary image is dilated using mathematical morphology (MATLAB Image Processing Toolbox, imdilate function) in order to bind the contacts of the same electrode together. We then find each connected component, which corresponds to one electrode, and label them separately (bwconncomp function) (Fig. 2a). This results in one mask per electrode. These masks are iteratively applied to the non-dilated thresholded CT to obtain a binary image of the contacts. We then apply a distance transform of the binary image followed by a watershed segmentation [START_REF] Meyer | Topographic distance and watershed lines[END_REF]
(Matlab watershed function).
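As an illustration of this sequence of operations, the following minimal Python sketch (SciPy/scikit-image, not GARDEL's Matlab implementation) applies the same steps to a masked CT volume; the dilation radius and the local-maximum window are illustrative assumptions.

```python
# Minimal sketch: outlier threshold on the masked CT, per-electrode grouping by
# dilation and connected components, then per-electrode watershed on contacts.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_contacts(ct_masked):
    """ct_masked: 3D CT intensities restricted to the brain mask (0 outside)."""
    vals = ct_masked[ct_masked > 0]
    q1, q3 = np.percentile(vals, [25, 75])
    thr = q3 + 1.5 * (q3 - q1)                      # Thr = Q3 + 1.5*IQR
    contacts_bin = ct_masked > thr                  # hyper-dense voxels = candidate contacts

    # Dilation binds the contacts of one electrode into a single component
    electrode_labels, n_elec = ndi.label(ndi.binary_dilation(contacts_bin, iterations=3))

    all_contacts = np.zeros(ct_masked.shape, dtype=int)
    for e in range(1, n_elec + 1):
        elec_contacts = contacts_bin & (electrode_labels == e)   # non-dilated contacts of electrode e
        dist = ndi.distance_transform_edt(elec_contacts)
        peaks = (dist == ndi.maximum_filter(dist, size=3)) & elec_contacts
        markers, _ = ndi.label(peaks)
        labels = watershed(-dist, markers, mask=elec_contacts)   # one label per contact
        offset = all_contacts.max()
        all_contacts[labels > 0] = labels[labels > 0] + offset
    return electrode_labels, all_contacts
```

As discussed in the next paragraph, such a watershed step may over- or under-segment some contacts, which is why a model-based repair of each electrode is applied afterwards.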
A first issue is that the watershed technique may oversegment some contacts, i.e. identify several contacts instead of one, or may miss some contacts because the contacts were too small or removed after thresholding. To solve this issue, we calculated the distance between contacts within each electrode as well as their individual volume. We removed the outliers of each feature and reconstructed the missing contacts by building a model of the electrode using the median distance and the direction vector of the electrode. The vector is obtained by finding the principal component of the correctly segmented contacts. If contacts are missed between consecutive contacts, i.e. if the distance between two consecutive contacts is greater than the median, it is divided by the median to estimate the number of missing contacts, which are then placed on the line formed by the two contacts, equally spaced. If contacts are missed at the edge of the electrode, we place contacts at the median distance of the last contact on the line given by the direction vector until the electrode mask is filled. This method allows adding missing contacts within or at the tip of the electrode. It could add contacts inside the guide screw, if it belongs to the electrode mask, but can easily be deleted after a quick review. This method enables reconstructing electrodes even if they are slightly bent, and the error made during the reconstruction of missing contacts are minimized using piece-wise linear interpolation (in contrast with simple linear interpolation) (Fig. 2b).
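The following minimal Python sketch (illustrative, not GARDEL's Matlab code) shows the idea of this repair for interior gaps: the electrode axis is taken as the principal direction of the detected centroids, and missing contacts are inserted on the segment between consecutive detections using the median inter-contact spacing.

```python
# Minimal sketch: fill in missing contacts along an electrode axis using the
# median inter-contact distance (interior gaps only; illustrative code).
import numpy as np

def repair_electrode(centroids):
    """centroids: (n, 3) array of detected contact centroids (in mm), n >= 2."""
    c = np.asarray(centroids, dtype=float)
    direction = np.linalg.svd(c - c.mean(axis=0))[2][0]   # principal axis of the electrode
    c = c[np.argsort(c @ direction)]                      # sort contacts along the axis
    gaps = np.linalg.norm(np.diff(c, axis=0), axis=1)
    step = np.median(gaps)                                # typical inter-contact spacing
    repaired = [c[0]]
    for p, q, gap in zip(c[:-1], c[1:], gaps):
        n_missing = int(round(gap / step)) - 1            # contacts missing in this gap
        for m in range(1, n_missing + 1):
            repaired.append(p + (q - p) * (m * step / gap))  # insert equally spaced contacts
        repaired.append(q)
    return np.vstack(repaired)
```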
A second issue is the segmentation of the 3x5 or 3x6 contacts electrodes, which are classically detected as 3 different electrodes. This is solved in two steps. Firstly, for each 5 or 6 contacts electrode, their direction vector is calculated. We then compute a dot product between them to check if they are collinear and group them. Secondly, only on these subsets of electrodes, dot products between a vector of a given electrode and vectors constructed with a point of this electrode and points of other electrodes are calculated to check if they need to be grouped.
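A minimal sketch of the collinearity test used to merge such electrode fragments is given below (illustrative, not GARDEL's Matlab code; the 5-degree angular tolerance is an assumption).

```python
# Minimal sketch: decide whether two detected electrode fragments are collinear
# and should be merged into one 3x5/3x6 electrode (illustrative tolerance).
import numpy as np

def are_same_electrode(dir_a, point_a, dir_b, point_b, angle_tol_deg=5.0):
    dir_a = dir_a / np.linalg.norm(dir_a)
    dir_b = dir_b / np.linalg.norm(dir_b)
    parallel = abs(dir_a @ dir_b) > np.cos(np.radians(angle_tol_deg))   # same orientation
    link = point_b - point_a
    link = link / np.linalg.norm(link)
    aligned = abs(dir_a @ link) > np.cos(np.radians(angle_tol_deg))     # fragments lie on one line
    return parallel and aligned
```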
A third issue is the difficulty of segmenting oblique electrodes, because the CT resolution is not high enough in the vertical direction. To solve it, a second method for contact localization is applied after the fully automatic one, based on pre-defined electrode characteristics (we have built-in templates for the Alcis and DIXI electrodes used in our center). The directions of the clustered electrodes that failed the previous step are obtained by finding the principal component of each point composing these electrodes. Then, based on these electrode characteristics and the estimated size of the clustered electrodes, a match is made and the electrodes can be reconstructed. Fig. 2c displays the end of the segmentation process. Electrodes are projected on the maximum intensity projection image of the CT scan.
After the automatic segmentation, the user has the possibility to delete and/or add manually electrodes and/or single contacts. In order to create an electrode, the user has to mark the first contact, as well as another one along the electrode, and to choose the electrode type. This electrode will be created with respect to pre-defined electrodes characteristics (number of contacts, contact size and spacing). In order to manually correct contact position, contacts can be deleted or added one by one. Contacts numbers will be reorganized automatically.
Another important feature of GARDEL is the localization of each contact within patients' anatomy. As manual localization is usually time consuming, our goal is to localize precisely and to label automatically each contact in the brain and in individualized atlases. Atlases from FreeSurfer (Desikan-Killiany [START_REF] Desikan | An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest[END_REF] or Destrieux [START_REF] Destrieux | Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature[END_REF]) or "MarsAtlas" [START_REF] Auzias | MarsAtlas: A cortical parcellation atlas for functional mapping[END_REF] can be imported after having performed a transformation from FreeSurfer space to the native MRI space. For each contact, its respective label and those of its closest voxels neighbors (that formed a 3x3x3 cube) are obtained. In case of multiple labels in this region of interest, the most frequent one is used to define contact label. GARDEL provides for each SEEG contact its situation within the grey and white matter and its anatomical localization based on this atlas.
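The 3x3x3 majority vote can be sketched as follows (illustrative Python, not GARDEL's Matlab code); it assumes an atlas volume already resampled in the same space as the contacts, with contact positions expressed in voxel indices.

```python
# Minimal sketch: label a contact by the most frequent atlas label in the
# 3x3x3 voxel neighborhood around its position (illustrative code).
import numpy as np
from collections import Counter

def label_contact(atlas, voxel_ijk):
    """atlas: 3D array of integer labels; voxel_ijk: contact position in voxels."""
    i, j, k = [int(round(v)) for v in voxel_ijk]
    block = atlas[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2, max(k - 1, 0):k + 2]
    return Counter(block.ravel().tolist()).most_common(1)[0][0]   # majority label
```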
At the end, contacts coordinates, their situation within cerebral grey or white matter and anatomical labels can be saved, for later use in the 3Dviewer (see below). GARDEL also saves automatically AnyWave montages (selection of contacts located inside grey matter and grouped them by area (frontal, occipital, parietal, temporal) for visualization of SEEG signals in AnyWave software). Fig. 3 displays output rendering of GARDEL tool. It is possible to display electrode in patient anatomy (Fig. 3a), in an atlas (Fig. 3b) or all electrodes in a surface rendering of the cortex (Fig. 3c).
3D representation (3Dviewer)
3Dviewer tool, closely linked to GARDEL, permits to display in a 3D way a series of relevant information inside individual mesh of the cortex:
-SEEG electrodes, -mono-variate values such as the Epileptogenicity Index, spike or high frequency oscillation rate, -bi-variate values such as non-linear correlation h 2 or co-occurrences graph.
Data required for this tool are the following: patient's MRI scan, electrodes coordinates as given by GARDEL, pial surface made by FreeSurfer or cortex mesh made by any other toolbox (e.g. SPM), and mono-variate or bi-variate values based on format created by AnyWave and associated plugins. The parameters to be displayed can be easily set by the user: mesh, electrodes, mono or bi-variate values.
Each SEEG contact is reconstructed as a cylinder with the dimension of electrodes used. Mono-variate values can either be displayed as a sphere or a cylinder with diameter and color proportional to the values. Bi-variate values (connectivity graphs) are displayed as cylinders with diameter and color proportional to the strength of the value and an arrow for the direction of the graph. Views can be switched from 3D to 2D. In the 2D view SEEG contacts are displayed on the MRI with mono-variate values in color scale (Fig. 4a). Values are listed as tables.
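As an illustration of this kind of rendering (not the actual 3Dviewer, which is written in Matlab), the following minimal Python/matplotlib sketch displays contact positions as 3D markers whose size and color are proportional to a mono-variate value such as the Epileptogenicity Index; the coordinates and values are placeholders.

```python
# Minimal sketch: 3D display of SEEG contacts with marker size and color
# proportional to a mono-variate value (illustrative data).
import numpy as np
import matplotlib.pyplot as plt

coords = np.random.rand(20, 3) * 100        # placeholder contact coordinates (mm)
values = np.random.rand(20)                  # placeholder mono-variate values (e.g. EI in [0, 1])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2],
                s=20 + 200 * values,         # size proportional to the value
                c=values, cmap="hot")        # color proportional to the value
fig.colorbar(sc, label="mono-variate value")
ax.set_xlabel("x (mm)"); ax.set_ylabel("y (mm)"); ax.set_zlabel("z (mm)")
plt.show()
```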
As explained above, the 3Dviewer allows the 3D visualization of several signal analysis measures obtained from AnyWave software within the patient's anatomy (Fig. 4b). One type of measure is the quantification of ictal discharges as can be done by the Epileptogenicity Index [START_REF] Bartolomei | Epileptogenicity of brain structures in human temporal lobe epilepsy: A quantified study from intracerebral EEG[END_REF].
Briefly, the EI is a tool quantifying epileptic fast discharge during a seizure, based on both the increase of the high frequency content of the signal and on the delay of involvement at seizure onset. It gives for each channel a measure between 0 and 1 that can be used to assess the epileptogenicity of the underlying cortex. Typically, a threshold between 0.2 and 0.3 is used for delineating the epileptogenic zone. The results could be automatically exported as a Microsoft Excel file with numerical data.
Another type of measure comes from the interictal activity, i.e. the activity between seizures. Both spikes and high frequency oscillations (HFO) are markers of epileptic cortices, and their detection and quantification important in clinical practice. Spikes and HFO are detected by the Delphos plugin.
This detector is able to automatically detect in all channels, oscillations and spikes based on the shape of peaks in the normalized ("ZH0") time-frequency image [START_REF] Roehri | Time-Frequency Strategies for Increasing High-Frequency Oscillation Detectability in Intracerebral EEG[END_REF][START_REF] Roehri | What are the assets and weaknesses of HFO detectors? A benchmark framework based on realistic simulations[END_REF]. Results could be exported as histogram of spike, HFO rates and combination of these two markers. A step further, the co-occurrence of inter-ictal paroxysms could bring some information about the network organization of the spiking cortices. Co-occurrence graphs can be built using the time of detection of spikes or HFOs [START_REF] Malinowska | Interictal networks in Magnetoencephalography[END_REF].
Furthermore, epilepsy also leads to connectivity changes both during and between seizures. The study of such changes is important in the understanding of seizure organization, semiology, seizure onset localization, etc. The non-linear correlation h 2 is a connectivity analysis method, based on non-linear regression, usefully applied in the study of epilepsy [START_REF] Wendling | Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG[END_REF] [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. It is computed within the core part of AnyWave. The results can be visualized as a graph with weighted edges, directionality and delay, and can also be exported to a Matlab file in order to perform other connectivity analyses. The GraphCompare plugin quantifies the number and the strength of links between selected contacts based on the h2 results in order to compare, with statistical testing, the connectivity of a period of interest with that of a reference period [START_REF] Courtens | Graph Measures of Node Strength for Characterizing Preictal Synchrony in Partial Epilepsy[END_REF]. The results are represented in the form of boxplots and histograms showing the total degree and strength of the nodes and the degree distribution of the entire network. Statistical results are also automatically exported. Finally, graphs of the edges with significant changes can also be displayed.

Validation of the detection of SEEG contacts

We investigated 30 patients with 10 to 18 electrodes each, resulting in a total of 4590 contacts. The validation was made by an expert clinician (SL). The measure was the concordance of each reconstructed contact with the real contact as visualized on the native CT. Then, we estimated the sensitivity and precision of GARDEL. Sensitivity was defined as the number of good detections of contacts (true positives) divided by the total number of contacts of the patient, and precision as the number of good detections of contacts over the total number of detections.
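For clarity, the two metrics reduce to the following simple ratios; the counts used below are the pooled numbers reported in the next paragraph (4590 contacts, 129 missed, 243 false detections), which give values close to the reported per-patient means.

```python
# Minimal sketch of the two validation metrics with the pooled counts reported
# in the text (the paper reports per-patient means).
total_contacts = 4590
missed = 129
false_detections = 243

true_positives = total_contacts - missed
sensitivity = true_positives / total_contacts                      # ~0.972
precision = true_positives / (true_positives + false_detections)   # ~0.948
print(round(sensitivity, 3), round(precision, 3))
```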
Our segmentation tool has a mean sensitivity of 97% (first quartile 98%, median and third quartile 100%) and a mean precision of 95% (first quartile 92%, median and third quartile 100%). Only a small subset of contacts was missed (129 out of 4590) and few false detections were obtained (243). The decrease in performance is mostly due to oblique electrodes that are not clearly distinguishable in some CT scans or when electrodes cross each other.
We also performed a multi-rater comparison to confirm our results. We chose 3 more patients (more than 600 detections) whose detections were validated by 3 raters. We obtained an inter-rater agreement of 83.1% using Krippendorff's alpha. Discrepancies were due to oblique electrodes. When separating electrode types, we obtained full agreement among raters for orthogonal electrodes (477 contacts) and an alpha of 0.78 for oblique electrodes (130 contacts).
The comparison with another tool (iElectrodes [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF]) showed a mean difference of 0.59 mm between the rounded voxel coordinates obtained by the two tools. This difference could potentially be explained by the rounding effect of the centroid computation at the voxel level (our CT images had a 0.42 × 0.42 × 0.63 mm voxel size). Therefore, our method appears to be efficient and can be used in clinical practice. It is automatic (except for naming electrodes), contrary to previous studies that used planned electrode trajectories [START_REF] Arnulfo | Automatic segmentation of deep intracerebral electrodes in computed tomography scans[END_REF][START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF] or manual interaction [START_REF] Princich | Rapid and efficient localization of depth electrodes and cortical labeling using free and open source medical software in epilepsy surgery candidates[END_REF][START_REF] Qin | Automatic and precise localization and cortical labeling of subdural and depth intracranial electrodes[END_REF].
In the few cases where there are errors, the segmentation can easily be corrected manually (as explained in the Methods section). Moreover, automatic contact labelling using an individualized atlas allows faster and more robust interpretation of SEEG data in relation to the patient's anatomy. It is also possible to co-register a post-resection MRI with the electrode positions to identify whether electrode sites were resected. However, such registration has to be performed with care for large resections, for which the brain tissues may have moved (Supp. Fig. 1).
Nevertheless, a limitation of our tool is that, in the current version, it requires the inter-contact distance to be the same across electrodes. Indeed, to estimate the position of missing contacts, it uses the median distance between all detected contacts.
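As a rough illustration of that idea (this is not the GARDEL source code; function and variable names are ours, and contacts are assumed to be ordered along a straight electrode), missing contacts can be extrapolated from the median spacing of the detected centroids:

```python
# Minimal sketch (not GARDEL source code): estimate missing contact
# positions on a linear SEEG electrode from the median inter-contact
# spacing of the detected contacts.
import numpy as np

def complete_electrode(detected_xyz, n_expected):
    """detected_xyz: (k, 3) array of detected centroids, k >= 2,
    ordered from the deepest contact outwards."""
    pts = np.asarray(detected_xyz, dtype=float)
    # Electrode direction estimated from the first and last detections.
    axis = pts[-1] - pts[0]
    axis /= np.linalg.norm(axis)
    # Median spacing between consecutive detected contacts
    # (robust to an occasional missed or spurious detection).
    step = np.median(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # Rebuild the full contact series from the deepest detection.
    return np.array([pts[0] + i * step * axis for i in range(n_expected)])

contacts = complete_electrode([[10, 12, 30], [13.5, 12, 30], [17, 12, 30]],
                              n_expected=5)
print(contacts)
```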
Validation of labelling from Atlases
The second step was to validate the concordance between the obtained label of each contact and its actual anatomical location. For this purpose, a senior neurosurgeon (RC) reviewed the data of 3 patients to check whether our tool properly assigned each contact to its correct label according to a given atlas (a different atlas per patient). The concordance results are the following: 534 of 598 contacts were accurately labeled (89.3%), 28 were uncertain (4.7%), i.e., contacts difficult to label automatically because of their location at the junction between two areas or between grey and white matter, and 36 (6%) were wrong. These errors were mostly due to incorrect segmentation of individual MRIs because of abnormalities/lesions, or in rare cases to a mismatch between the atlas label and the clinician's labeling. It should be noted that the segmentation can be corrected at the level of the FreeSurfer software (see footnote 6).
Results across patients were concordant (90%, 88% and 89%).
3Dviewer: representation of physiological data
Signal processing is increasingly used for the analysis of SEEG [START_REF] Bartolomei | Defining epileptogenic networks: Contribution of SEEG and signal analysis[END_REF]. The major interest of our pipeline is the possibility of representing the results of advanced electrophysiological signal analyses on the patient's MRI scan. This is the goal of the 3Dviewer. Data can be displayed on the patient's 3D mesh or on MRI slices in the three spatial planes (Fig. 4). These two modes of representation are complementary for SEEG interpretation, making it possible not only to visualize the estimated epileptic abnormalities in 3D, but also to localize them precisely within brain structures and to provide potentially useful guidance for surgical planning on 2D MRI slices.
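A minimal sketch of this kind of display is shown below. It is not the 3Dviewer implementation: the volume, contact coordinates and values are synthetic placeholders, and in practice the patient's MRI would be loaded (e.g. with nibabel) rather than simulated.

```python
# Hedged sketch (not the 3Dviewer code): display per-contact values
# over an axial MRI slice. A synthetic volume stands in for the T1 scan.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
mri = rng.normal(100, 20, size=(256, 256, 160))      # surrogate T1 volume
contacts_vox = np.array([[120, 90, 80], [118, 95, 80], [116, 100, 80]])
values = np.array([0.2, 0.7, 0.9])                   # e.g. spike rate per contact

z = 80                                               # axial slice index
plt.imshow(mri[:, :, z].T, cmap="gray", origin="lower")
sel = contacts_vox[:, 2] == z                        # contacts lying in this slice
plt.scatter(contacts_vox[sel, 0], contacts_vox[sel, 1],
            c=values[sel], cmap="hot", s=40)
plt.colorbar(label="measure at contact")
plt.savefig("contacts_on_axial_slice.png")
```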
CONCLUSION
In this study, we presented a suite of tools, called EpiTools, to be used for SEEG interpretation and related clinical research applications. The SEEG section of EpiTools is mainly based on 2 distinct parts.
The first part, GARDEL, is designed for automatic electrode segmentation and labelling. It is, to the best of our knowledge, the first software to perform automatic segmentation, electrode grouping and contact labelling within an individual atlas, requiring the user only to name the electrodes and to correct the results if necessary. We validated the contact detection and obtained good results both for sensitivity and precision. The second part consists of the 3Dviewer, which displays the results of signal processing at the contact locations on the patient's MRI scan or on a 3D surface rendering. It creates an advanced link between individual anatomy and electrophysiological data analysis. In the future, we will present the application of EpiTools to non-invasive electrophysiological data such as EEG and MEG.
Fig. 1 Scheme of EpiTools pipeline for intracerebral EEG. Firstly, FreeSurfer(1) can be run to obtain
Fig. 2 Steps of electrodes segmentation a) steps of the clustering of contacts within electrodes (left
Fig. 3 Results of GARDEL tool a) MRI co-registered on the CT image with one electrode and its
Fig. 4 Overview of 3Dviewer tool results a) coronal, sagittal and axial planes of the patient MRI with
1 Available at meg.univ-amu.fr
2 Available at https://www.python.org/
3 Available at http://fr.mathworks.com/matlabcentral/fileexchange/42997-dicom-to-nifti-converter--nifti-tooland-viewer
4 Available at https://fr.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image
5 Available at http://www.fil.ion.uclac.uk/spm/
6 Available at https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/TroubleshootingData
Acknowledgements
The authors wish to thank Olivier Coulon and Andre Brovelli for useful discussions on CT-MRI coregistration and MarsAtlas/Brainvisa use. We thank Dr Gilles Brun for helping with the writing of the methods about CT imaging. The calculation of Krippendorff's alpha for this paper was generated using the Real Statistics Resource Pack software (Release 5.4). Copyright (2013-2018) Charles Zaiontz. www.real-statistics.com. This work has been carried out within the FHU EPINEXT with the support of the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French Government program managed by the French National Research Agency (ANR). Part of this work was funded by a joint Agence Nationale de la Recherche (ANR) and Direction Générale de l'Offre de Santé (DGOS) grant "VIBRATIONS" ANR-13-PRTS-0011-01. Part of this work was funded by a TechSan grant from the Agence Nationale de la Recherche "FORCE" ANR-13-TECS-0013. F Pizzo was funded by a "Bourse doctorale jeune médecin" from Aix Marseille Université (PhD program ICN).
All these results can be imported into the 3Dviewer to be displayed in the patient's anatomy.
Validation methodology
We validated two aspects of our segmentation tool. Firstly, the validity of the segmentation of SEEG contacts: the detected centroids were superimposed on the CT images, and clinicians assessed whether each centroid fell entirely within the hyper-intensity region corresponding to the contact in the CT image. We performed a multi-rater analysis in order to evaluate the inter-rater agreement (using Krippendorff's alpha [START_REF] Krippendorff | Reliability in Content Analysis[END_REF]). Moreover, we compared our tool with another one (iElectrodes [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF]): in 3 patients, we randomly picked 20 contacts detected by both software applications and calculated the mean distance between their rounded voxel coordinates in the CT space.
Secondly, the validation of the anatomical labels: the expert neurosurgeon verified whether the label assigned to each contact was concordant with the true anatomical location of the contact. A contact was rated "good" if it lay within the correct anatomical region, "uncertain" if it lay at the boundary between two areas, and "wrong" if the assigned region was discordant with its actual location.
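As an illustration of the agreement statistic used for the multi-rater analysis, nominal Krippendorff's alpha can be computed by hand as below. This is only a sketch for the complete-data, nominal case (the authors used the Real Statistics Excel add-in), and the example ratings are invented.

```python
# Hedged sketch: nominal Krippendorff's alpha for a raters-by-contacts
# table of judgements, assuming no missing ratings.
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """ratings: (n_raters, n_units) array of category codes, no missing values."""
    ratings = np.asarray(ratings)
    m, n_units = ratings.shape
    cats = np.unique(ratings)
    idx = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))               # coincidence matrix
    for u in range(n_units):
        vals = ratings[:, u]
        for a in range(m):
            for b in range(m):
                if a != b:
                    o[idx[vals[a]], idx[vals[b]]] += 1.0 / (m - 1)
    n_c = o.sum(axis=1)
    n_total = n_c.sum()
    d_o = o.sum() - np.trace(o)                        # observed disagreement
    d_e = (np.outer(n_c, n_c).sum() - (n_c ** 2).sum()) / (n_total - 1)
    return 1.0 - d_o / d_e

# Three raters scoring 6 contacts: 1 = "inside the contact", 0 = "outside".
ratings = np.array([[1, 1, 0, 1, 1, 0],
                    [1, 1, 0, 1, 1, 0],
                    [1, 1, 0, 1, 0, 0]])
print(round(krippendorff_alpha_nominal(ratings), 3))
```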
RESULTS and DISCUSSION
Validation of electrodes Segmentation
Manual localization is time consuming: it takes on average 49 minutes for 91 contacts as reported in [START_REF] Blenkmann | iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization[END_REF], or 75 minutes for one implant segmentation as reported in [START_REF] Narizzano | SEEG assistant: a 3DSlicer extension to support epilepsy surgery[END_REF]. Our method takes on average 19 minutes of automated processing per implantation (including MRI-to-CT coregistration, brain segmentation and contact localization) on a machine with two 1.90 GHz processors of 6 cores each and 16 GB of RAM (the automatic segmentation process is parallelized across the available cores), plus manual corrections if needed. Thus, our method saves significant time for users.
Concerning the validation of our method, the first important step was to validate the localization of the SEEG contacts.
Conflict of Interest:
The authors declare that they have no conflict of interest.
Authors' contributions
01762687 | en | ["shs.sport", "shs.sport.ps"] | 2024/03/05 22:32:13 | 2010 | https://insep.hal.science//hal-01762687/file/152-%20Effects%20of%20a%20trail%20running%20competition.pdf
Christophe Hausswirth
Julien Louis
Romuald Lepers
Fabrice Vercruyssen
Jeanick Brisswalter
email: brisswalter@unice.fr
C S Easthope
Effects of a trail
Keywords: Trail running, Ultra long distance, Master athlete, Eccentric contractions, Muscle damage, Efficiency
Introduction
While the popularity of trail running events has increased over the past 5 years [START_REF] Hoffman | The Western States 100-Mile Endurance Run: participation and performance trends[END_REF], limited information is available concerning the physiological responses of the runner during this type of contest. Trails can be defined as ultra-long-distance runs lasting over 5 h which are performed in a mountain context, involving extensive vertical displacement (both uphill and downhill). One of the main performance-determining components of trail runs is exercise duration. In general, ultra-endurance exercises such as marathon running, road cycling, or Ironman triathlons are well known to impose a strenuous physical load on the organism, which leads to decreases in locomotion efficiency and concomitant substrate changes [START_REF] Brisswalter | Carbohydrate ingestion does not influence the charge in energy cost during a 2-h run in welltrained triathletes[END_REF][START_REF] Fernström | Reduced efficiency, but increased fat oxidation, in mitochondria from human skeletal muscle after 24-h ultraendurance exercise[END_REF], thermal stress coupled with dehydration [START_REF] Sharwood | Weight changes, medical complications, and performance during an Ironman triathlon[END_REF], oxidative stress [START_REF] Nieman | Vitamin E and immunity after the Kona triathlon world championship[END_REF][START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF], and, specifically in running events, structural muscle damage [START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF]. The second major characteristic of trail running events is the large proportion of eccentric work performed during the downhill segments of the race. Eccentric contractions involve force generation in a lengthening muscle and are known to cause severe structural damage in muscles, affecting their contractile and recuperative properties [START_REF] Nicol | The stretch-shortening cycle: a model to study naturally occurring neuromuscular fatigue[END_REF]. Several studies in the last decade have investigated the effects of long-distance runs performed on level courses. Collective results show an increased release of muscular enzymes into the plasma, a structural disruption of the sarcomere, a substantial impairment in maximal force generation capacity (Lepers et al. 2000a;[START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF][START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF][START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]), and a decrease in post-race locomotion efficiency [START_REF] Millet | Influence of ultra-long term fatigue on the oxygen cost of two types of locomotion[END_REF][START_REF] Millet | Running from Paris to Beijing: biomechanical and physiological consequences[END_REF], indicating that muscles are progressively damaged during the exercise.
Specifically, maximal isometric knee extension force has been reported to decrease by 24% after a 30-km running race [START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF], by 28% after 5 h of treadmill running [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF] and by 30% after a 65-km ultra-marathon [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]. Recently, [START_REF] Millet | Running from Paris to Beijing: biomechanical and physiological consequences[END_REF] reported a 6.2% decrease in running efficiency 3 weeks after an 8,500-km run between Paris and Beijing performed in 161 days. [START_REF] Gauché | Vitamin and mineral supplementation and neuromuscular recovery after a running race[END_REF] reported that maximal voluntary force decreased by 37% at the end of a prolonged trail run. Repeated eccentric contractions may also affect locomotion efficiency, as demonstrated by [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF], who observed a decrease of 3.2% in running efficiency 48 h after a 30-min downhill run. In a similar vein, [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF] found a 6% decrease in cycling efficiency after 10 series of 25 repetitions of squats, an eccentric exercise. Repeated eccentric contractions, independent of their context, seem to induce a decrease in locomotion efficiency, even if efficiency is evaluated in concentrically dominated cycling [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF]. Based upon the reviewed literature, it was assumed that trail running races would accentuate muscle damage when compared to level running, due to the large proportion of eccentric contractions occurring in the successive downhill segments of courses, and would therefore lead to both a decrease in muscular performance and a decrease in locomotion efficiency. Few studies so far have analyzed the physiological aspects of trail running. The existing studies mainly focused on the origin of the decline in contraction capacity (e.g. [START_REF] Miles | Carbohydrate influences plasma interleukin-6 but not C-reactive protein or creatine kinase following a 32-km mountain trail race[END_REF][START_REF] Gauché | Vitamin and mineral supplementation and neuromuscular recovery after a running race[END_REF]) or on pacing strategies during the race [START_REF] Stearns | Influence of hydration status on pacing during trail running in the heat[END_REF]. To our knowledge, only limited data are available on the impact of this type of event on locomotion efficiency [START_REF] Millet | Influence of ultra-long term fatigue on the oxygen cost of two types of locomotion[END_REF].
A further characteristic of trail running competitions is the increasing participation of master athletes [START_REF] Hoffman | The Western States 100-Mile Endurance Run: participation and performance trends[END_REF]. Tanaka and Seals defined master athletes in their 2008 article as individuals who regularly participate in endurance training and who try to maintain their physical performance level despite the aging process. In a competition context, competitors are traditionally classified as master athletes when over 40 years of age, the age at which a first decline in peak endurance performance is observed [START_REF] Lepers | Age related changes in triathlon performances[END_REF][START_REF] Sultana | Effects of age and gender on Olympic triathlon performances[END_REF][START_REF] Tanaka | Endurance exercise performance in masters athletes: age-associated changes and underlying physiological mechanisms[END_REF]. The aging process induces a great number of structural and functional transformations, which lead to an overall decline in physical capacity (Thompson 2009). The current rise in the average age in western countries has created the need to design strategies that increase functional capacity in older people and thereby improve their standard of living (e.g. [START_REF] Henwood | Short-term resistance training and the older adult: the effect of varied programmes for the enhancement of muscle strength and functional performance[END_REF]). Observing master athletes can give insight into age-induced changes in physiology and adaptability, thus enabling scientists to develop more concise and effective recuperation and mobilization programs. Recent studies have shown that master endurance athletes are able to maintain their performance despite exhibiting the structural changes in muscle performance and in maximal aerobic power which are classically associated with aging [START_REF] Lepers | Age related changes in triathlon performances[END_REF][START_REF] Tanaka | Endurance exercise performance in masters athletes: age-associated changes and underlying physiological mechanisms[END_REF][START_REF] Bieuzen | Age-related changes in neuromuscular function and performance following a high-intensity intermittent task in endurance-trained men[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF].
In this context, the first purpose of our study was to evaluate the muscle performance and efficiency of runners participating in a long-distance trail competition. Trail running was expected to cause substantial muscle damage and a decrease in locomotion efficiency. The second purpose was to compare changes in these parameters between young and master participants. We hypothesized that neuromuscular alterations following the competition would be greater in master athletes compared to young athletes.
Materials and methods
Subjects
Eleven young and fifteen master athletes, all well-motivated, volunteered to participate in this study. The characteristics of the subjects are shown in Table 1. All subjects had to be free from present or past neuromuscular and metabolic conditions that could have affected the recorded parameters. The subjects had regular training experience in long-distance running prior to the study (8.4 ± 6.0 years for the young vs. 13.3 ± 7.8 years for the master runners), and had performed a training program of 72.1 ± 25.1 and 74.1 ± 23.6 km/week, respectively, for young and masters during the 3 months preceding the experiment. The local ethics committee (St Germain en Laye, France) reviewed and approved the study before its initiation and all subjects gave their informed written consent before participation.
Experimental procedure
The study was divided into four phases: preliminary testing and familiarization, pre-testing, trail race intervention and post-testing (see Fig. 1). During the first phase, subjects were familiarized with the test scheme and location, and preliminary tests were performed. During the third phase, subjects performed a 55-km trail running race in a medium-altitude mountain context. During the second and fourth phases, muscle performance and efficiency were analyzed and blood samples were collected. All physiological parameters were recorded 1 day before (pre) and 3 days after the trail running race (post 1, 24, 48, and 72 h).
Preliminary session
During a preliminary session that took place 1 month before the experiment, 26 subjects (11 young and 15 masters) underwent an incremental cycling test at a self-selected cadence on an electromagnetically braked ergocycle (SRM, Schoberer Rad Messtechnik, Jülich, Welldorf, Germany). In accordance with the recommendations of the ethics committee and the French Medical Society, a cycle ergometer protocol was chosen to evaluate ventilatory parameters and efficiency, even though a running protocol would have been preferred. These considerations were based on the assumption that extremely fatigued subjects would have difficulties running on a treadmill and that this might lead to injuries. [START_REF] Moysi | Effects of eccentric exercise on cycling efficiency[END_REF] have shown that eccentric muscle damage affects locomotion efficiency and ventilatory parameters in a cycling protocol in a similar way to a running protocol. The ergocycle allows subjects to maintain a constant power output that is independent of the selected cadence, by automatically adjusting torque to angular velocity. The test consisted of a warm-up lasting 6 min at 100 W, and an incremental period in which the power output was increased by 30 W each minute until volitional exhaustion. During this incremental cycling exercise, oxygen uptake (VO2), minute ventilation (VE), and respiratory exchange ratio (RER) were continuously measured every 15 s using a telemetric system (Cosmed K4b2, Rome, Italy). The criteria used for the determination of VO2max were a plateau in VO2 despite an increase in power output, a RER above 1.1, and a heart rate (HR) above 90% of the predicted maximal HR [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. Maximal oxygen uptake (VO2max) was determined as the average of the last three highest VO2 values recorded (58.8 ± 6.5 ml/min/kg for the young vs. 55.0 ± 5.8 ml/min/kg for the master athletes). The ventilatory threshold (VT) was determined according to the method described by Wasserman et al. (1973). The maximal aerobic power output (MAP) was the highest power output completed in 1 min (352.5 ± 41.1 W for the young vs. 347.6 ± 62.9 W for the master athletes).
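As a hedged sketch of the VO2max data reduction just described (the plateau criterion is not implemented, and all numbers and variable names below are invented), VO2max can be taken as the mean of the three highest 15-s samples while the attainment criteria are checked alongside:

```python
# Illustrative reduction of an incremental test: VO2max as the average of
# the three highest VO2 samples, with RER and HR criteria from the text.
import numpy as np

def vo2max_from_test(vo2, rer, hr, age):
    vo2 = np.asarray(vo2, dtype=float)
    vo2max = np.mean(np.sort(vo2)[-3:])          # average of the 3 highest values
    hr_max_pred = 220 - age                       # predicted maximal heart rate
    criteria = {
        "rer_above_1.1": float(np.max(rer)) > 1.1,
        "hr_above_90pct_pred": float(np.max(hr)) > 0.9 * hr_max_pred,
    }
    return vo2max, criteria

vo2 = [48.0, 52.0, 55.0, 57.0, 58.5, 58.2]       # ml/min/kg, last stages (invented)
rer = [0.98, 1.02, 1.08, 1.10, 1.14, 1.16]
hr = [140, 155, 168, 176, 182, 186]
print(vo2max_from_test(vo2, rer, hr, age=27))
```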
Race conditions
The running event was a 55-km trail race involving a 6,000-m vertical displacement (3,000 m up and 3,000 m down). The starting point and finishing line were at 694-m altitude, and the highest point of the race was at 3,050 m. Due to the competitive nature of the intervention, each subject was well motivated to perform maximally over the distance. From the initial group (11 young and 15 masters), only three subjects (one young and two master athletes) did not finish the course. Therefore, all data presented correspond to the group of finishers (10 young and 13 master athletes). Physical activity after the race was controlled (walking activities were limited and massages were prohibited). The mean race times performed by the subjects are shown in Table 1.
Maximal isometric force and muscle properties
Ten minutes after the sub-maximal cycling exercise, the maximal voluntary isometric force of the right knee extensor (KE) muscles was determined using an isometric ergometer chair (type: J. Sctnell, Selephon, Germany) connected to a strain gauge (Type: Enertec, schlumberger, Villacoublay, France). Subjects were comfortably seated and the strain gauge was securely connected to the right ankle. The angle of the right knee was fixed at 100° (0° = knee fully extended). Extraneous movement of the upper body was limited by two harnesses enveloping the chest and the abdomen. For each testing session, the subjects were asked to perform three 2-3 s maximal isometric contractions (0 rad/s) of the KE muscles. The subjects were verbally encouraged and the three trials were executed with a 1-min rest period. The trial with the highest force value was selected as the maximal isometric voluntary contraction (MVC, in Newton). In addition to MVC, the M-wave of the vastus lateralis was recorded from a twitch evoked by an electrical stimulation. Changes in neuromuscular properties were evaluated throughout all the testing sessions (Lepers et al. 2000b;[START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]. Electrical stimulation was applied to the femoral nerve of the dominant leg according to the methodology previously described by [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]. The following parameters of the muscular twitch were obtained: (a) peak twitch (Pt), i.e. the highest value of twitch tension production and (b) contraction time (Ct), i.e. the time from the origin of the mechanical response to Pt.
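As a hedged illustration of how these two twitch parameters could be extracted from a recorded force trace (the onset criterion, function names and the toy twitch shape below are our assumptions, not the authors' processing):

```python
# Illustrative sketch: peak twitch (Pt) and contraction time (Ct) from an
# evoked twitch force trace sampled at fs, with Ct measured from the onset
# of the mechanical response to the peak.
import numpy as np

def twitch_parameters(force, fs, onset_fraction=0.05):
    force = np.asarray(force, dtype=float)
    baseline = force[: int(0.05 * fs)].mean()     # pre-stimulus baseline window
    f = force - baseline
    peak_idx = int(np.argmax(f))
    pt = f[peak_idx]                              # peak twitch (N)
    # Onset = first sample exceeding a small fraction of the peak (assumption).
    onset_idx = int(np.argmax(f > onset_fraction * pt))
    ct_ms = (peak_idx - onset_idx) / fs * 1000.0  # contraction time (ms)
    return pt, ct_ms

fs = 2000.0
t = np.arange(0, 0.4, 1 / fs)
simulated = 36 * (np.exp(-t / 0.08) - np.exp(-t / 0.02))   # toy twitch shape
trace = np.r_[np.zeros(int(0.1 * fs)), simulated]          # 100 ms pre-stimulus
print(twitch_parameters(trace, fs))
```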
EMG recordings
During the MVC, the electrical activity of the vastus lateralis (VL) muscle was monitored using bipolar surface electrodes (Blue Sensor Q-00-S, Medicotest SARL, France). The pairs of pre-gelled Ag/AgCl electrodes (inter-electrode distance = 20 mm; electrode area = 50 mm²) were applied along the fibers at the height of the muscle belly, as recommended by the SENIAM. A low skin impedance (<5 kΩ) was obtained by abrading and cleaning the area with an alcohol wipe. The impedance was subsequently measured with a multimeter (Isotech, IDM 93N). To minimize movement artifacts, the electrodes were secured with surgical tape and cloth wrap. A ground electrode was placed on a bony site over the right anterior superior spine of the iliac crest. To ensure that the electrodes were precisely at the same place for each testing session, the electrode location was marked on the skin with an indelible marker. EMG signals were pre-amplified (Mazet Electronique Model, Electronique du Mazet, Mazet Saint-Voy, France) close to the detection site (common-mode rejection ratio = 100 dB; Z input = 10 GΩ; gain = 600; bandwidth frequency = 6-1,600 Hz). EMG data were sampled at 1,000 Hz and quantified using the root mean square (RMS). The maximal RMS EMG of the VL muscle was set as the maximal 500-ms RMS value found over the 3-s MVC (i.e., 500-ms window width, 1-ms overlap) using the proprietary software Origin 6.1. During the evoked stimulation performed before the MVC, the peak-to-peak amplitude (PPA) and peak-to-peak duration (PPD) of the M-wave were determined for the VL muscle. Amplitude was defined as the sum of the absolute values of the maximum and minimum points of the biphasic (one positive and one negative deflection) M-wave. Duration was defined as the time from the maximum to the minimum point of the biphasic M-wave.
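A minimal sketch of the RMS reduction described above is given below (the study used Origin 6.1, so this is only an equivalent computation on surrogate data; the 1-ms step interprets the stated "1-ms overlap" as the window increment):

```python
# Maximal RMS over a 3-s MVC using a sliding 500-ms window moved in 1-ms steps.
import numpy as np

def max_rms(emg, fs, window_s=0.5, step_s=0.001):
    emg = np.asarray(emg, dtype=float)
    win = int(window_s * fs)
    step = max(1, int(step_s * fs))
    best = 0.0
    for start in range(0, len(emg) - win + 1, step):
        seg = emg[start:start + win]
        best = max(best, float(np.sqrt(np.mean(seg ** 2))))
    return best

fs = 1000.0                                     # sampling rate stated in the text (Hz)
emg = np.random.randn(int(3 * fs)) * 0.2        # surrogate 3-s MVC trace
print(max_rms(emg, fs))
```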
Blood markers of muscle damages
For each evaluation series, 15 ml of blood was collected into vacutainer tubes via antecubital venipuncture. The pre-exercise sample was preceded by a 10-min rest period. Once the blood sample was taken, the tubes were gently mixed by inversion and placed on ice for 30 s before centrifugation (10 min, 3,000 rpm, 4°C). The obtained plasma sample was then stored in multiple aliquots (Eppendorf type, 500 µl per sample) at -80°C until analyzed for the markers described below. All assays were performed in duplicate on first thaw. As a marker of sarcolemma disruption, the activities of the muscle enzymes creatine kinase (CK) and lactate dehydrogenase (LDH) were measured spectrophotometrically in the blood plasma using commercially available reagents (Roche/Hitachi, Meylan, France).
Locomotion efficiency
Subjects were asked to perform a cycling control exercise (CTRL) at a self-selected cadence on the same ergocycle as used in the preliminary session. This cycling exercise involved 6 min at 100 W followed by 10 min at a relative power output corresponding to the ventilatory threshold. For each subject and each cycling session, metabolic data were continuously recorded to assess cycling efficiency.
Efficiency can be expressed as the ratio between the (external) power output and the ensuing energy expenditure (EE). Efficiency may, however, be calculated in a variety of ways [START_REF] Martin | Has Armstrong's cycle efficiency improved?[END_REF]. In this study, two types of efficiency calculation were employed: gross efficiency (GE) and delta efficiency (DE). GE is defined as work rate divided by energy expenditure and was calculated using the following equation [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF]:
Gross efficiency (%) = [work rate (W) / energy expenditure (W)] × 100
DE is considered by many to be the most valid estimate of muscular efficiency [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF][START_REF] Coyle | Cycling efficiency is related to the percentage of type I muscle fibers[END_REF]). DE calculations are based upon a series of work rates which are then subjected to linear regression analysis. In this study, work rates were calculated from the two intensity tiers completed in the test described at the beginning of this section.
Delta efficiency (%) = [Δ work rate (W) / Δ energy expenditure (W)] × 100, i.e., the slope of the work rate-energy expenditure regression expressed as a percentage
In order to obtain precise values of the work rate used in the efficiency calculations, power output was assessed from the set work rate and the true cadence as monitored by the SRM crank system. EE was obtained from the rate of oxygen uptake, using the equations developed by [START_REF] Brouwer | One simple formulate for calculating the heat expenditure and the quantities of carbohydrate and fat oxidized in metabolism of men and animals from gaseous exchange (oxygen intake and calorie acid output) and urine-N[END_REF]. These equations take substrate utilization into account by calculating the energetic value of oxygen based on the RER value. To minimize a potential influence of the VO2 slow component, which might vary between subject groups, the mean EE during the 3rd to 6th minute was used in the calculations of GE and DE.
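The sketch below illustrates how GE and DE could be computed from such data. The caloric equivalent of oxygen is approximated by linear interpolation between standard table values (≈4.69 kcal·L⁻¹ at RER 0.70 and ≈5.05 kcal·L⁻¹ at RER 1.00) as a stand-in for the Brouwer equations, and all numerical inputs are invented:

```python
# Hedged sketch of the efficiency indices (GE, DE) described above.
import numpy as np

KCAL_PER_MIN_TO_W = 4184.0 / 60.0               # 1 kcal/min expressed in watts

def energy_expenditure_w(vo2_l_min, rer):
    # Approximate caloric equivalent of O2 as a function of RER.
    kcal_per_l = np.interp(rer, [0.70, 1.00], [4.69, 5.05])
    return vo2_l_min * kcal_per_l * KCAL_PER_MIN_TO_W

def gross_efficiency(power_w, vo2_l_min, rer):
    return 100.0 * power_w / energy_expenditure_w(vo2_l_min, rer)

def delta_efficiency(powers_w, vo2s_l_min, rers):
    ee = [energy_expenditure_w(v, r) for v, r in zip(vo2s_l_min, rers)]
    slope = np.polyfit(ee, powers_w, 1)[0]      # d(work rate) / d(energy expenditure)
    return 100.0 * slope

# Two work-rate tiers (100 W and a power near the ventilatory threshold).
print(gross_efficiency(100.0, vo2_l_min=1.55, rer=0.88))
print(delta_efficiency([100.0, 230.0], [1.55, 3.10], [0.88, 0.97]))
```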
Statistical analysis
All data presented are mean ± SD (tables and figures). Each dependent variable was compared between the different testing conditions using a two-way ANOVA with repeated measures (period × group). Newman-Keuls post-hoc tests were applied to determine between-mean differences if the analysis of variance revealed a significant main effect of period or a group × period interaction. For all statistical analyses, P < 0.05 was accepted as the level of significance.
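For illustration, a group (between-subjects) × period (within-subjects) design of this kind could be analyzed as a mixed ANOVA, e.g. with the third-party pingouin package. This does not reproduce the Newman-Keuls post-hoc procedure, and the data frame layout, column names and surrogate values below are assumptions of ours:

```python
# Hedged sketch: mixed ANOVA (between: group, within: period) on surrogate data.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for group, n in [("young", 10), ("master", 13)]:
    for s in range(n):
        for period in ["pre", "post", "post24", "post48", "post72"]:
            rows.append({"subject": f"{group}{s}", "group": group,
                         "period": period,
                         "mvc": 500 + rng.normal(0, 40)})   # surrogate MVC values
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="mvc", within="period",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```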
Results
Muscular performance
In all evaluations, MVC values of master athletes were significantly lower than those of the young group (-21.8 ± 4.6%, P < 0.01). One hour after the intervention (post), maximal isometric strength values of the knee extensors decreased significantly compared with pre-race values, in proportions that did not differ significantly between young (-32%, P < 0.01) and master athletes (-40%, P < 0.01). MVC values of young subjects returned to baseline at post 24 h, at which time the MVC reduction in masters remained significant (-13.6%, P = 0.04). A significant decrease in EMG activity (RMS) during the MVC of the vastus lateralis (VL) muscle was observed at 1 and 24 h post-exercise, without any difference between groups or periods. Compared with pre-race values, post-exercise MVC RMS values decreased by 40.2 ± 19% in young and by 42 ± 19.2% in masters (P < 0.01) (Fig. 2).
Muscular twitch and M-wave properties
Before the race, no significant effect of age was observed on peak twitch torque (Pt) or contraction time (Ct). One hour after the race, no effect was recorded on Ct or Pt in either group. At post 24 h, a slower contraction time (Ct) and a lower peak twitch torque (Pt) were recorded in both groups. Compared to pre-race values, Pt decreased by 18.2% (P = 0.04) in young and by 23.5% (P = 0.02) in master runners at post 24 h. These alterations in twitch properties returned close to pre-test values in young subjects, but remained significant at 48 h (P = 0.03) and 72 h (P = 0.04) in master subjects (Table 2).
Before the race, no significant effect of age was observed on PPA or PPD values of the M-wave for the VL muscle (Table 2). One hour after the race, a significant increase in PPD was observed in both groups. This increase remained significant in the master athlete group at post 24 h (P = 0.02), while the young group returned close to baseline values. Furthermore, in masters, PPA values decreased below pre-race values 48 h (P = 0.04) and 72 h (P = 0.02) after the race, while no effects were observed in young subjects.
Blood markers of muscle damages
Twenty-four hours (Post 24 h) after the race, the plasma activities of CK and LDH increased significantly in comparison to pre-race values, with a greater increase for master subjects (P = 0.04). CK and LDH values remained significantly elevated at post 48 h and post 72 h, without any difference between groups (Table 3).
Locomotion efficiency and cycling cadence
Gross efficiency (GE), delta efficiency (DE) and cadence values are presented in Table 4. No significant difference in GE, DE or cadence was observed between groups before the race. After the race, the results indicated a significant decline in GE from post 24 to 72 h that was not group-specific (mean GE decrease in young vs. masters, in % of pre-race values: -4.7 vs. -6.3%, respectively). In both groups, VE increased at post 24, 48 and 72 h in comparison to pre-test values (mean VE increase in young vs. masters, in % of pre-race values: +11.7 vs. +10.1%, respectively). No significant change in DE was observed in young subjects after the race. On the contrary, a significant decrease in DE was recorded in master subjects (DE decrease in master athletes at post 24 h (P = 0.02), post 48 h (P = 0.01) and post 72 h (P = 0.03), in % of pre-race values: -10.6, -10.4 and -11.5%, respectively).
Post-race cadence was significantly higher in all post-race evaluations for young subjects when compared with masters. Results indicate a significant increase in cycling cadence post 24 h (+4.4%, P = 0.04), post 48 h (+10.6%, P = 0.03), and post 72 h (+17%, P < 0.01) for young, and only in post 48 h (+3.9%, P = 0.04) and post 72 h (+10.8%, P = 0.03) for master athletes.
Discussion
The objective of the present study was to investigate the changes in muscular performance and locomotion efficiency induced in well-trained endurance runners by a trail running competition, and to compare these with literature data obtained on level courses. The participation of two different age groups of runners (young vs. masters) allowed an additional study of the effect of age on fatigue generation and recuperation. The main results of our study indicate that: (1) post-run muscular performance and locomotion efficiency decline, while the associated concentrations of muscle-damage-indicating blood markers rise, regardless of age, and (2) differences between groups become apparent only in the recuperation phase (post 24-72 h).
The running event analyzed in this study was a 55-km trail race featuring a 6,000-m vertical displacement (3,000 m up and 3,000 m down). The average race time was 06:45 ± 00:45. As stated, the main performance components of trail running are exercise duration and vertical displacement (uphill and downhill). From this perspective, trail running competitions induce an intensive physical workload on the organism. Considering the popularity of trail running and the abundance of competitions over the world, it appears important to precisely characterize the acute physiological reactions consecutive to such events.
One of the most significant consequences of the race was a reduction in muscular performance. The recorded data show a significant decline in maximal force-generating capacity in young (-32%) and master athletes (-40%) 1 h post-race. The intervention seems to have decreased MVC to a slightly greater extent than: (i) prolonged level runs such as 5 h of treadmill running (-28%, [START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]) or a shorter 30-km race (-24%, [START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF]), or (ii) a race of longer duration but with lower altitude change (-30%, [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]). An adjustment for workload distribution, age, training status and a methodological standardization would have to be conducted before a precise comparison between races is possible. It is generally accepted, though, that the structural muscle damage leading to MVC loss is generated by the eccentric muscle contractions occurring in running [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF][START_REF] Millet | Mechanisms contributing to knee extensor strength loss after prolonged running exercise[END_REF][START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF][START_REF] Place | Time course of neuromuscular alterations during a prolonged running exercise[END_REF]. As a logical consequence, interventions with a greater overall percentage of eccentric force production should induce a more pronounced MVC decline in comparison to level courses of the same duration, in which the eccentric component is not as pronounced; the MVC data recorded in this study thus conform to this idea. Additional studies will need to clarify the relative contribution of duration versus altitude changes to the decrease in muscular capacities following prolonged running exercises.
After the race (post 24-72 h), MVC values progressively returned to their pre-race level. In addition, the results indicate a significant decrease in VL muscle activity (i.e. RMS values) recorded during the MVC performed 1 h after the race, which persisted until 72 h after the race. Further parameters used to characterize muscular fatigue included muscular twitch and M-wave properties. Pt decreased significantly 24 h after the race, accompanied by a concomitant increase in Ct from 24 to 72 h after the race, albeit only in masters. The main explanation for these perturbations of contractile parameters could be an alteration of the excitation-contraction coupling process, which can be attributed to several mechanisms including, but not limited to, reduced Ca2+ release from the sarcoplasmic reticulum (Westerblad et al. 1991), a decrease in blood pH and a reduced rate or force of crossbridge latching [START_REF] Metzger | Effects of tension and stiffness due to reduced pH in mammalian fast-and slow-twitch skinned skeletal muscle fibres[END_REF]. An increase in Ct after the race could also indicate an impairment of type II muscle fibers (i.e. fast-contracting fibers), which may be compensated for by the more fatigue-resistant type I muscle fibers (i.e. slow-contracting fibers). Twitch muscle properties were unchanged at 1 h post-race, alterations appearing only 24 h after the race and later. This phenomenon might suggest that muscle fatigue was counterbalanced by potentiation mechanisms occurring immediately after the race [START_REF] Baudry | Postactivation potentiation influences differently the nonlinear summation of contractions in young and elderly adults[END_REF][START_REF] Shima | Mechanomyographic and electromyographic responses to stimulated and voluntary contractions in the dorsiflexors of young and old men[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF]. In contrast, M-wave PPD was significantly increased immediately after the race (post) and tended to return to basal values 24-72 h after the race. The master group still exhibited an increased M-wave PPD at 24 h post-race. As previously described in the literature, these changes in M-wave parameters suggest an alteration in muscle excitability, probably generated by impairments in neuromuscular propagation due to an increase in sarcolemma permeability to sodium, potassium and chloride [START_REF] Lepers | Advances in neuromuscular physiology of motor skills and muscle fatigue[END_REF]. These results support the assumption that muscle damage develops through trail running. The data recorded for muscle-damage-indicating blood markers underscore this observation. A post-race increase in the plasma activity of muscle enzymes (CK and LDH), which persisted for several days after the race (Table 3), was recorded. Similarly, [START_REF] Suzuki | Changes in markers of muscle damage, inflammation and HSP70 after an Ironman triathlon race[END_REF] reported a significant increase in CK and LDH activities in the plasma soon after an Ironman triathlon, which remained elevated until 1 day after the race.
Intracellular enzymes such as CK and LDH indicate that the muscle injury arises from myofibrillar disruption [START_REF] Clarkson | Muscle function after exercise-induced muscle damage and rapid adaptation[END_REF][START_REF] Noakes | Effect of exercise on serum enzyme activities in humans[END_REF], and they are classically used to assess the loss of sarcolemmal integrity after strenuous exercises [START_REF] Overgaard | Effects of running distance and training on Ca2+ content and damage in human muscle[END_REF]. As neither CK nor LDH is considered a redundant indicator (Warren et al. 1999), the analysis was augmented by the acquisition of further physiological variables.
As an important determinant of performance in endurance events, locomotion efficiency is classically surveyed in athletes in order to evaluate the effects of particular training periods [START_REF] Santalla | Muscle efficiency improves over time in world-class cyclists[END_REF]. It has been reported that even small increments in cycling efficiency may lead to major improvements in endurance performance [START_REF] Moseley | The reliability of cycling efficiency[END_REF]. The efficiency of physical work is a measure of the body's effectiveness in converting chemical energy into mechanical energy. Efficiency was calculated here as described in the Methods section, i.e., as the quotient of work rate and energy expenditure [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF]. A decrease in locomotion efficiency can therefore be interpreted as either a relative increase in energy expenditure or a relative decrease in work rate. Considering that work rate was fixed for all the tests, increased energy expenditure remains the only viable option. Recorded values show a decline in GE in both groups of athletes after the race, which persisted until 72 h post-race. Although commonly employed, GE has been criticized for its inclusion, in the denominator, of energy-delivery processes that do not contribute to the production of mechanical work. Therefore, locomotion efficiency was also evaluated through the DE calculation, which many consider to be the most valid estimate of muscular efficiency [START_REF] Gaesser | Muscular efficiency during steady-rate exercise: effects of speed and work rate[END_REF][START_REF] Coyle | Cycling efficiency is related to the percentage of type I muscle fibers[END_REF][START_REF] Mogensen | Cycling efficiency in humans is related to low UCP3 content and to type I fibres but not to mitochondrial efficiency[END_REF]. While GE values declined in both groups, DE values declined only in the masters group (post 24, 48 and 72 h), thus confirming the increase in energy expenditure required to ensure a continuous power output. This phenomenon is largely related to a decline in muscular performance. In order to produce the same locomotive work as in the pre-race condition, strategies such as an increase in the spatio-temporal recruitment of muscle fibers or an increase in cycling cadence, involving a concomitant increase in VE (Table 4), could be engaged. The attained results provide evidence of an alteration of cycling efficiency in both groups tested.
The second aim of this study was to analyze age-related effects on muscular performance and cycling efficiency after the trail race by comparing physiological variables recorded in young and master athletes. Race completion time did not significantly differ between groups (06:42 ± 00:51 vs. 06:51 ± 00:47 for young vs. masters, respectively). Despite the structural and functional alterations typically observed during the aging process, master athletes were able to produce the same level of performance as the young group. This observation confirms the realistic possibility of preventing the age-related decline of physical performance through physical activity.
The analysis of muscular performance in the two groups of athletes shows a classical decline in maximal force-generating capacity in masters (-21.8 ± 4.6%) compared with the young group for all testing sessions performed before and after the race [START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF][START_REF] Louis | Muscle strength and metabolism in master athletes[END_REF]. Results additionally indicate a similar decrease in MVC values at 1 h post-race in both age groups which, in the master subjects only, persisted until 24 h after the race, suggesting a slower recovery. Based on the results of [START_REF] Coggan | Histochemical and enzymatic characteristics of skeletal muscle in master athletes[END_REF], which were confirmed by Tarpenning et al. (2004), the age-induced decrease in MVC values in master athletes comparable to our experimental population can be mainly explained by neural factors such as muscle recruitment and/or specific tension. The twitch-based assessment of muscular function seems to confirm this hypothesis. This study is, to our knowledge, the first to present twitch and M-wave data for master athletes after a trail running competition. As previously described in studies on fatigue induced by long-distance exercise in young subjects (e.g. [START_REF] Millet | Alterations of neuromuscular function after an ultramarathon[END_REF]), Pt and Ct parameters were altered 24 h after the race, in similar proportions in both groups tested. The alterations in muscular properties persisted several days after the race in masters only, further supporting the idea of a slower muscle recovery in this group. Master M-wave PPD values increased in proportions similar to those observed in the young group at 1 h post-race, and returned to pre-race values in all the following testing conditions. In contrast, master M-wave PPA values decreased significantly from 48 to 72 h after the race, while this decline was marginal in young athletes. Despite a similar training status in young and master athletes, the values of these parameters show a greater alteration of muscular function (i.e. contractility and excitability) in masters after the race, indicating a slower recovery of muscle strength. An assessment of VL muscle activity shows that this effect is not caused by an age-induced impairment of muscle activation, as MVC RMS values declined in similar proportions in both groups after the race.
As depicted in Table 3, CK and LDH activities in plasma increased in similar proportions after the race in both groups, indicating a similar level of muscular deterioration between groups following the trail competition. This is in line with the above-mentioned results, as the proportion of the competition-induced reduction in MVC was similar between groups.
This might support the idea that regular endurance training reinforces active muscles, and therefore limits the structural and functional changes classically associated with aging [START_REF] Lexell | Human aging, muscle mass, and fiber type composition[END_REF].
Results of this study show an effect of aging on cycling efficiency before and after the running race. While GE declined in similar proportions in both groups after the race, DE declined only in masters at 24, 48 and 72 h after the race (Table 4). The GE decline in both groups could be mainly related to increases in energy-delivery processes that do not contribute to mechanical work. Variations in these processes originate from modifications in cycling kinematics (e.g. cycling cadence) or muscular contraction patterns (e.g. recruitment of subsidiary muscles, increase in antagonistic co-activation) in fatigued muscles and must be considered when interpreting GE [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF]. The decline of DE in masters could be strongly related to alterations in muscular performance, provoking an increase in muscle activity in cycling to produce the same external work. [START_REF] Gleeson | Effect of exercise-induced muscle damage on the blood lactate response to incremental exercise in humans[END_REF] suggested that an increase in type II fiber recruitment may occur when exercise is performed in a fatigued state. In addition, if force-generating capacity was compromised, more motor units would have to be activated to achieve the same submaximal force output, resulting in a concomitant increase in metabolic cost [START_REF] Braun | The effects of a single bout of downhill running and ensuing delayed onset of muscle soreness on running economy performed 48 h later[END_REF]. Such an effect could contribute to the significantly higher VE shown in the present study. The results demonstrate that master athletes reached a level of fatigue similar to that of young athletes through the race, but recovered significantly more slowly. The hypothesis that master athletes reach a higher level of fatigue through similar exertion can therefore no longer be supported. This was surprising, as the input parameters of the master athletes were considerably lower, and therefore either a lower performance or a greater fatigue would be expected. Thus, it must be surmised that masters economize energy expenditure in some form over the length of the course, for example through adaptations in strategy or locomotor patterns.
Conclusion
The aim of this study was to assess physiological responses to an exhaustive trail running competition and to analyze possible differences between young and master athletes.
A 55-km ultra-endurance event was used as a fatigue-generating intervention. An especially large amount of muscular fatigue was generated through the large proportion of eccentric contractions occurring during the downhill sections of the race. Results indicate an acute fatigue in all subjects (young and masters), mainly reflected by decreases in muscle performance. Race performance was similar between groups, and so was the fatigue generated. The post-race time course of CK and of the neuromuscular properties suggests slower recuperation kinetics in the master subjects. The results attained in this study indicate that regular endurance training cannot halt the age-related decline in muscle performance, but that performance level can nonetheless be maintained by global or local strategy adaptations or by as yet unknown adaptations at the physiological level.
Tarpenning KM, Hamilton-Wessler M, Wiswell RA, Hawkins SA (2004)
Table 2 Twitch and M-wave parameters of the vastus lateralis muscle before (Pre), and 1 h (Post), 24 h (Post 24), 48 h (Post 48) and 72 h (Post 72) after the race

Variable            Pre           Post          Post 24        Post 48        Post 72
Twitch
Pt (N)    Young     36 (9)        35 (11)       29 (11)*       34 (9)†        35 (12)†
          Master    36 (11)       34 (12)       27 (12)*       28 (08)*       29 (12)*
Ct (ms)   Young     63.3 (13.7)   63.4 (10.6)   68.8 (11.2)*   64.7 (9.5)†    66.9 (10.3)
          Master    61.3 (15.6)   64.9 (17.4)   71.1 (12.9)*   73.2 (10.8)*   76.2 (12.7)
M-wave
PPA (mV)  Young     3.5 (1.4)     3.6 (1.6)     3.9 (1.5)      3.4 (1.7)      3.0 (1.4)
          Master    3.4 (1.5)     3.1 (1.3)     3.1 (1.5)      2.4 (1.4)*     2.3 (0.7)*
PPD (ms)  Young     7.6 (1.5)     9.2 (1.2)*    7.0 (2.2)†     7.0 (2.5)      7.3 (2.8)
          Master    7.9 (1.5)     9.5 (2.5)*    9.3 (2.8)*     7.8 (2.7)      7.6 (3.3)

Mean (SD) values of 10 young and 13 master athletes are shown. Pt peak twitch, Ct contraction time, HRt half-relaxation time, PPA peak-to-peak amplitude, PPD peak-to-peak duration
* Significantly different from pre-exercise (P < 0.05)
† Significantly different from master (P < 0.05)
Table 3 Changes in muscle-damage-indicating blood markers for young and master athletes before (Pre), 24 h (Post 24 h), 48 h (Post 48 h) and 72 h (Post 72 h) after the race

Variable    Group    Normal range   Pre         Post 24 h       Post 48 h    Post 72 h
CK (U/l)    Young    50-230         135 (26)    1,470 (565)*†   909 (303)*   430 (251)*
            Master   50-230         138 (107)   1,559 (593)*    920 (298)*   531 (271)*
LDH (U/l)   Young    120-245        229 (52)    528 (164)*      453 (65)*    410 (65)*
            Master   120-245        194 (63)    482 (142)*      468 (105)*   473 (165)*

CK creatine kinase, LDH lactate dehydrogenase
* Significantly different from pre-exercise (P < 0.05)
† Significantly different from masters (P < 0.05)
Table 4 Changes in efficiency, ventilation and cycling cadence for young and masters during cycling exercises performed before (Pre), 24 h (Post 24), 48 h (Post 48) and 72 h (Post 72) after the race

Variable   Pre   Post 24   Post 48   Post 72
GE (%)
01762706 | en | ["shs.sport", "shs.sport.ps"] | 2024/03/05 22:32:13 | 2005 | https://insep.hal.science//hal-01762706/file/154-%20Modification%20of%20cycling%20biomechanics.pdf
Véronique Tricot
Thierry Bernard
Fabrice Vercruyssen
Christophe Hausswirth
Jeanick Brisswalter
Modification of Cycling Biomechanics During a Swim-to-Cycle Trial
Keywords: pedal rate, resultant torque, asymmetry, neuromuscular fatigue
have shown a significant decrease in stride length (SL) and a significant increase in stride rate (SR) during a 3,000-m run undertaken at constant velocity (5.17 m•s -1 ), while [START_REF] Brisswalter | Variability in energy cost and walking gait during race walking in competitive race walkers[END_REF] did not observe any variation in gait kinematics during a 3-hr walk at competition pace. The recent appearance of multisport activities such as triathlon (swimming-cycling-running) raises new questions relative to the influence of locomotion mode transitions on kinematic adaptation during each discipline.
Triathlon events are characterized by a variety of distances ranging from Sprint (750-m swim, 20-km cycle, 5-km run) to . From the shortest to the longest distances the relative duration of the swimming, cycling, and running parts represents 18% to 10%, 52% to 56%, and 30% to 34%, respectively [START_REF] Dengel | Determinants of success during triathlon competition[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF]. Researchers have found significant correlations between cycling and running duration and triathlon performance, whereas no such relationship has been reported with swimming time [START_REF] Dengel | Determinants of success during triathlon competition[END_REF][START_REF] Schabort | Prediction of triathlon race time from laboratory testing in national triathletes[END_REF]. Consequently, many studies involved in performance optimization have focused on the adaptation of the locomotor pattern after the cycle-to-run transition [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF][START_REF] Millet | Duration and seriousness of running mechanics alterations after maximal cycling in triathletes[END_REF][START_REF] Quigley | The effects of cycling on running mechanics[END_REF].
One of the most studied parameters in the literature concerns stride characteristics (i.e., SL and SR). The influence of a prior cycling task on these variables during running is not clearly established. [START_REF] Quigley | The effects of cycling on running mechanics[END_REF] found no significant effect of a prior 30-min cycling bout on SL and SR measured during running. In contrast with these results, [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] observed a significant decrease in SL (7%) at the start of the running bout of a simulated triathlon (30-min swimming, 60-min cycling, 45-min running) compared with an isolated 45-min run. Moreover, it was reported that decreasing the cycling metabolic load by modifying the geometry of the bicycle frame for a 40-km trial involved significantly higher SL (12%) and SR (2%) during the first 5-km of the subsequent running bout. These increases in stride characteristics resulted in a faster mean running speed [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF].
It is often suggested that these modifications of running kinematics account for the increase in running energy demand classically observed during prolonged exercises. Indeed, significantly higher energy costs have been reported when SL was either increased or decreased from the freely chosen SL (e.g., [START_REF] Cavanagh | The effect of stride length variation on oxygen uptake during distance running[END_REF]. The same phenomenon is also observed during cycling, whereby a cadence associated with a minimization of energy demand (energetically optimal cadence) could be identified around 75 rev•min -1 [START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF]. However, to our knowledge the variations in cycling kinematics and energy expenditure have never been examined in the context of a swim-to-cycle trial.
Within this framework, the main purpose of the present study was to investigate the influence of a prior 750-m swim on cycling kinematics. A secondary purpose was to relate these alterations to the metabolic demand of cycling.
Methods
Eight well trained and motivated male triathletes (age 27 ± 6 yrs; height 182 ± 8 cm; weight 72 ± 7 kg; body fat 12 ± 3%) participated in this study. They had been competing in the triathlon at the interregional or national level for at least 3 years.
Mean ± SD training distances per week were 5.6 ± 2.3 km in swimming, 65 ± 33 km in cycling, and 32 ± 16 km in running, which represented 131 ± 54 min, 150 ± 76 min, and 148 ± 79 min for these three disciplines, respectively. This training program was mostly composed of technical workouts and interval training in swimming, aerobic capacity (outdoor) and interval training (cycle ergometer) in cycling, and fartlek and interval training in running. It included only one cross-training session (cycle-to-run) per week. These training volumes and intensities are relatively low when compared to the training load usually experienced by triathletes of this level, partly because the experiment was undertaken in winter when triathletes decrease their training load in all three disciplines. The participants were all familiarized with laboratory testing. They were fully informed of the procedures of the experiment and gave written informed consent prior to testing. The project was approved by the local ethics committee for the protection of individuals (Saint-Germain-en-Laye, France).
On their first visit to the laboratory the triathletes underwent two tests. The first test was to determine their leg dominance, in which the 8 participants were classified by kicking dominance according to the method described by [START_REF] Daly | Asymmetry in bicycle ergometer pedaling[END_REF]. The second test was a laboratory incremental test on a cycle ergometer to determine maximal oxygen uptake (VO 2 max) and maximal aerobic power (MAP). After a 6-min warm-up at 150 W, power output was increased by 25 W every 2 minutes until volitional exhaustion. The criteria used for determining VO 2 max were: a plateau in VO 2 despite the increase in power output, a heart rate (HR) over 90% of the predicted maximal heart rate (220 -age in years, ± 10), and a respiratory exchange ratio (RER) over 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: Review and commentary[END_REF].
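For illustration, these three criteria can be checked with a few lines of code; the sketch below is ours (it is not part of the original protocol), and the helper name and example values are only illustrative.

```python
# Illustrative sketch (not from the original protocol): counting how many of the
# three VO2max criteria described above are met for one incremental test.
def vo2max_criteria_met(vo2_plateau: bool, hr_max: float, age: int, rer_max: float) -> int:
    criteria = [
        vo2_plateau,                      # plateau in VO2 despite increased power output
        hr_max >= 0.9 * (220 - age),      # HR over 90% of the predicted maximum
        rer_max > 1.15,                   # respiratory exchange ratio over 1.15
    ]
    return sum(criteria)

# Values close to those reported in the Results section -> 2 criteria out of 3
print(vo2max_criteria_met(vo2_plateau=True, hr_max=191, age=27, rer_max=1.06))
```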
On their subsequent visits to the laboratory the triathletes underwent 4 submaximal sessions separated by at least 48 hours (Figure 1). The first session was always a 750-m swim performed alone at a sprint triathlon competition pace (SA trial). It was used to determine the swimming intensity for each participant. The 3 other sessions, presented in counterbalanced order, comprised 2 swim-to-cycle trials and one isolated cycling trial. The cycling test was a 10-min ride on the bicycle ergometer at a power output corresponding to 75% of MAP and at the freely chosen cadence (FCC). During the isolated cycling trial (CTRL trial) this test was preceded by a warm-up on the cycle ergometer at a power output corresponding to 30% of MAP for the same duration as the SA swim. During the swim-to-cycle transitions, the cycling test was preceded either by a 750-m swim performed alone at the pace adopted during SA (SCA trial), or by a 750-m swim at the same pace in a drafting position (i.e., swimming directly behind a competitor in the same lane, SCD trial). The same lead swimmer was used for all participants. He was a highly trained triathlete competing at the international level. In order to reproduce the swimming pace adopted during the SA trial, the triathletes were informed of their performance every 50 m via visual feedback.
The swim tests took place in the outdoor Olympic swimming pool of Hyères (Var, France); the participants wore a neoprene wet suit (integral wet suit Aqua-® , Pulsar 2000; thickness: shoulders 1.5 mm, trunk 4.5 mm, legs 1.5 mm, arms 1.5 mm). The cycling tests were conducted near the swimming pool in order to standardize the duration of the swim-to-cycle transition (3 min). The intensity of 75% of MAP was chosen because it was close to the pace adopted by triathletes of the same level during sprint triathlon competitions [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] and close to the intensity used in a recent study on the cycle-to-run transition in trained triathletes (e.g., [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF].
All triathletes rode an electromagnetically braked cycle ergometer (SRM Jülich, Welldorf, Germany) equipped with their own pedals and adjusted according to their anthropometrical characteristics. This system can maintain a constant power output independent of the pedal rate spontaneously adopted by the participants. Power output is continuously calculated as shown in Equation 1:
Power (W) = Torque (Nm) × Angular Velocity (rad•s -1 )    (1)
The torque generated at the crank axle is measured by 20 strain gauges situated between the crank arms and the chain-rings. The deformation of the strain gauges is proportional to the resultant force acting tangentially on the crank (i.e., effective pedaling torque). Pedal rate and torque are inductively transmitted to the power control unit with a sampling frequency of 500 Hz and data are averaged every 5 seconds and recorded by the power control unit. The SRM system was calibrated prior to each trial. It has been shown to provide a valid and reliable measure of power output when compared with a Monark cycle ergometer [START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF]. From these data, several parameters were calculated for each pedal revolution. An average value of these parameters was then computed during the last 30 s of each minute (a computation sketch is given after the following list):
The mean value of the resultant torque exerted during the downstroke of the dominant leg (MTD, in Nm) and during the downstroke of the nondominant leg (MTND, in Nm);
The maximal (peak) value of the resultant torque exerted during the downstroke of the dominant leg (PTD, in Nm) and during the downstroke of the nondominant leg (PTND, in Nm);
The crank angle corresponding to PTD (AD, in degrees) and to PTND (AND, in degrees). Crank angle was referenced to 0° at top dead center (TDC) of the right crank arm and to 180° at the TDC of the left crank arm (thus the right-leg downstroke was from 0° to 180° and the left-leg downstroke was from 180° to 360°). The crank angle for the left-leg downstroke was then expressed relative to the TDC of the left crank arm (i.e., 180° was subtracted from the value obtained).
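The sketch below, written here for illustration only (function and variable names are not from the SRM software), shows how such per-downstroke metrics can be derived from sampled crank angle and torque signals; mapping the right/left results to the dominant/nondominant leg then depends on each participant's dominance.

```python
import numpy as np

def downstroke_metrics(angle_deg: np.ndarray, torque_nm: np.ndarray):
    """Mean torque, peak torque and crank angle at peak for each downstroke.

    angle_deg: crank angle of the right crank arm (0-360 deg, 0 = right TDC).
    torque_nm: resultant torque at the crank axle for the same samples.
    """
    right = (angle_deg >= 0) & (angle_deg < 180)     # right-leg downstroke
    left = (angle_deg >= 180) & (angle_deg < 360)    # left-leg downstroke

    def stats(mask, offset):
        torque, angle = torque_nm[mask], angle_deg[mask] - offset
        peak_index = np.argmax(torque)
        return torque.mean(), torque.max(), angle[peak_index]

    # left-leg angles are re-expressed relative to the left TDC (minus 180 deg)
    return {"right": stats(right, 0.0), "left": stats(left, 180.0)}
```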
In addition to the biomechanical analysis, the participants were asked to report their perceived exertion (RPE) immediately after each trial using the 15-graded Borg scale from 6 to 20 [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. Moreover, physiological effort during swimming and cycling was assessed from heart rate (HR), oxygen uptake (VO 2 ), and lactate values. During the cycling trials, VO 2 was recorded by the Cosmed K4b 2 telemetric system (Rome, Italy) recently validated by [START_REF] Mclaughlin | Validation of the Cosmed K4 b 2 portable metabolic system[END_REF], and HR was continuously monitored during swimming and cycling using a cardiofrequency meter (Polar Vantage, Tampere, Finland). Blood lactate concentration (LA, mmol•L -1 ) was measured by the Lactate Pro™ LT-1710 portable lactate analyzer (Arkray, KDK, Kyoto, Japan) from capillary blood samples collected from the participants' earlobes immediately after swimming (L1) and at the 3rd and 10th min of cycling (L2, L3).
From these data, cycling gross efficiency (GE, %) was calculated as the ratio of work accomplished•min -1 (kJ•min -1 ) to metabolic energy expended•min -1 (kJ•min -1 ) [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF]. Since the relative intensity of the cycling bouts could be above the ventilatory threshold (VT), the aerobic contribution to metabolic energy was calculated from the energy equivalents for oxygen (according to the respiratory exchange ratio value) and a possible anaerobic contribution was estimated using the blood lactate increase with time (lactate: 63 J•kg -1 •mM -1 , di Prampero 1981). For this calculation, the VO 2 and lactate increases were estimated from the difference between the 10th and 3rd minutes.
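To make this computation explicit, the following sketch (ours, not taken from the study) implements the gross-efficiency ratio; the oxygen energy equivalent of roughly 20.9 kJ per litre of O 2 and the example input values are assumptions for illustration, since the exact equivalent depends on the respiratory exchange ratio.

```python
# Illustrative gross-efficiency computation; constants and inputs are assumed.
def gross_efficiency(power_w, vo2_l_min, o2_equiv_kj_per_l,
                     lactate_increase_mmol_l, body_mass_kg, interval_min=7.0):
    work_rate = power_w * 60.0 / 1000.0                    # external work, kJ/min
    aerobic = vo2_l_min * o2_equiv_kj_per_l                # aerobic energy, kJ/min
    # anaerobic estimate: 63 J per kg per mM of accumulated lactate (di Prampero, 1981)
    anaerobic = 63e-3 * body_mass_kg * lactate_increase_mmol_l / interval_min
    return 100.0 * work_rate / (aerobic + anaerobic)       # GE in %

# Example with plausible values (259 W, 3.9 L/min, +0.3 mmol/L over 7 min, 72 kg)
print(round(gross_efficiency(259, 3.9, 20.9, 0.3, 72), 1))   # about 19 %
```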
All the measured variables were expressed as mean and standard deviation (M ± SD). Differences in biomechanical and physiological parameters between the three conditions (CTRL vs. SCA; CTRL vs. SCD; SCA vs. SCD trials) as well as differences between the values recorded during the downstroke of the dominant and the downstroke of the nondominant leg (MTD vs. MTND; PTD vs. PTND; AD vs. AND) were analyzed using a Wilcoxon test. The significance level was set at p < 0.05.
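For readers less familiar with this non-parametric paired comparison, a hypothetical example of such a test is given below; the numbers are invented for illustration and are not the study's data.

```python
# Hypothetical paired comparison with the Wilcoxon signed-rank test (made-up data).
from scipy.stats import wilcoxon

pedal_rate_scd = [84, 86, 79, 90, 88, 83, 85, 85]   # one value per triathlete
pedal_rate_sca = [89, 92, 85, 97, 94, 88, 90, 87]

statistic, p_value = wilcoxon(pedal_rate_scd, pedal_rate_sca)
print(f"W = {statistic}, p = {p_value:.3f}")   # difference considered significant if p < 0.05
```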
Results
The test to determine leg dominance showed that among the 8 participants, 6 were left-leg dominant and only 2 were right-leg dominant. During the maximal test, the observation of a plateau in VO 2 (mean value for VO 2 max: 68.6 ± 7.5 ml•min -1 •kg -1 ) and HRmax and RERmax values (respectively, 191 ± 8 beats•min -1 and 1.06 ± 0.05) showed that 2 out of 3 criteria for attainment of VO 2 max were met [START_REF] Howley | Criteria for maximal oxygen uptake: Review and commentary[END_REF]. MAP values (342 ± 41 W) were close to those previously obtained for triathletes of the same level [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF]. The cycling exercises were performed at a mean power output of 259 ± 30 W, which was 75% of MAP.
There was no significant difference in performance between the two swimming trials (respectively for SCD and SCA: 653 ± 43 s, and 654 ± 43 s, p > 0.05). The mean velocity of the two 750-m was therefore 1.15 m•s -1 . In contrast, HR values recorded during the last 5 min of swimming and blood lactate concentration measured immediately after swimming were significantly lower in the SCD trial than in the SCA trial (mean HR values of 160 ± 16 vs. 171 ± 18 beats•min -1 , and blood lactate concentrations of 5.7 ± 1.8 vs. 8.0 ± 2.1 mmol•L -1 , respectively, for SCD and SCA trials, p < 0.05). Furthermore, RPE values recorded immediately after swimming indicated that the participants' perception of effort was significantly lower in the SCD trial than in the SCA trial (13 ± 2 vs. 15 ± 1, corresponding to "rather laborious" vs. "laborious," respectively, for SCD and SCA trials, p < 0.05).
The biomechanical and physiological parameters measured during cycling in CTRL, SCD, and SCA trials are listed in Tables 1 and 2. No significant difference between the SCD and CTRL trials was observed for any of the biomechanical parameters measured. Moreover, in spite of significantly higher blood lactate levels during cycling in the SCD compared with the CTRL trial (Table 2, p < 0.05), the GE and VO 2 values recorded during these two trials were not significantly different (Table 2, p > 0.05).
Table 1 shows that several biomechanical parameters recorded during cycling were significantly different in the SCA trial compared to the SCD trial. The participants adopted a significantly lower pedal rate after the swimming bout performed in a drafting position than after the swimming bout performed alone (Table 1, p < 0.05). Consequently, the mean resultant torque measured at the crank axle was significantly higher in SCD than in SCA (MTD and MTND, Table 1, p < 0.05). A higher resultant peak torque was also observed during the SCD trial compared with the SCA trial. However, this difference was significant only during the downstroke of the nondominant leg (PTND, Table 1, p < 0.05). Moreover, MTD was significantly higher when compared with MTND in all trials (CTRL, SCD, and SCA, Table 1, p < 0.05), suggesting an asymmetry between the mean torques exerted by the dominant leg and the nondominant leg.
A significant difference was also shown between PTD and PTND during the SCA and SCD trials only (Table 1, p < 0.05). The results concerning physiological parameters demonstrate that swimming in a drafting position induced significantly lower VO 2 and blood lactate values during subsequent cycling compared with a prior swimming bout performed alone (Table 2, p < 0.05). Therefore, cycling gross efficiency was significantly higher in the SCD trial than in the SCA trial (Table 2, p < 0.05). Finally, the SCA trial was also characterized by a significantly lower RMT and significantly higher lactate values measured during cycling compared with the CTRL trial (Tables 1 and 2, p < 0.05).
Discussion
The main results of this study indicated that the lower metabolic load when swimming in a drafting position resulted in a modification of the locomotor pattern adopted in cycling and a better efficiency of cycling after the swim-to-cycle transition. Indeed, a significantly lower pedal rate and significantly higher mean and peak resultant torques were observed in the SCD trial compared with the SCA trial.
The observation of a decrease in movement frequency following prior swimming in a drafting position is in accordance with results previously observed during the cycle-to-run transition. Indeed, [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] compared the influence of two drafting strategies during the 20-km cycling stage of a sprint triathlon on subsequent 5-km running performance, i.e., drafting continuously behind a leader or alternating drafting and leading every 500 m at the same pace. They showed that the continuous drafting strategy during cycling involved a significant decrease in SR (6.6%) during the first km of running when compared to the alternate drafting strategy.
More recently, [START_REF] Bernard | Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes[END_REF] observed that the lower metabolic load associated with the adoption of a pedal rate of 60 vs. 100 rev•min -1 during a 20-min cycling bout at an average power output of 276 W resulted in a significant decrease in SR during the first 500-m of a subsequent 3,000-m run undertaken at competition pace (1.48 ± 0.03 Hz vs. 1.51 ± 0.05 Hz, respectively, for the 60 and 100 rev•min -1 conditions). The decrease in pedal rate measured in the present study (5.8%) is comparable to these previous results. Furthermore, [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] reported that the lower running SR following the continuous cycling drafting strategy was accompanied by a significant improvement in running speed (4.2%). This suggests that the decrease in pedal rate observed in the present study may lead to performance improvement during the cycling bout of the SCD trial compared with the SCA trial. Since these cycling bouts were undertaken at the same power output, a better performance in this case could be achieved through a decrease in energy expenditure.
The main hypothesis proposed in the literature to account for the modifications of SR or pedal rate during multisport activities is relative to the relationship between the movement frequencies of successive exercises. [START_REF] Bernard | Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes[END_REF] suggested that the decrease in SR observed in their study at the start of the 3,000-m run in the 60 vs. the 100 rev•min -1 condition was directly related to the lower pedal rate adopted during prior cycling. In the present study, the decrease in pedal rate observed in the SCD trial compared with the SCA trial could in part be accounted for by the locomotor pattern adopted during prior swimming. Indeed, significant decreases in arm stroke frequency have been reported when swimming in a drafting position vs. swimming alone, from 2.5% to 5.6% [START_REF] Chatard | Performance and drag during drafting swimming in highly trained triathletes[END_REF][START_REF] Chatard | Drafting distance in swimming[END_REF]. If the participants adopt the same leg kick pattern in the drafting and isolated conditions (i.e., 2-or 6-beat per arm stroke for triathletes), a lower absolute frequency of leg kicks could therefore be expected in the SCD trial compared with the SCA trial. This lower kick rhythm could be partly responsible for the significant decrease in pedal rate at the onset of cycling in SCD vs. SCA.
The lower pedal rate observed in the present study was associated with a significantly higher cycling gross efficiency in the SCD trial compared to the SCA trial. The energy expenditure occurring during prolonged exercises, considered one of the main determinants of successful performance [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] O'toole | Applied physiology of a triathlon[END_REF], is commonly related to the modifications in the biomechanical aspects of locomotion [START_REF] Hausswirth | Relationships between running mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF]. Among the biomechanical parameters often suggested to account for the increase in energy expenditure during cycling events, the pedal rate spontaneously adopted by the athletes is one of the most frequently cited in the literature [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF]. Several studies have shown a curvilinear relationship between VO 2 or energy cost and pedal rate during short cycling trials (maximal duration of 30 min) performed by triathletes. This relationship allowed them to determine an energetically optimal cadence (EOC), defined as the pedal rate associated with the lower VO 2 or energy cost value, ranging from 72.5 to 86 rev•min -1 among the studies [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling: Effect of exercise duration[END_REF][START_REF] Vercruyssen | Effect of exercise duration on optimal pedaling rate choice in triathletes[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF].
Interestingly, the value of EOC seems to depend on cycling intensity. Indeed, a linear increase in EOC has been reported in trained cyclists when power output was raised from 100 to 300 W [START_REF] Coast | Linear increase in optimal pedal rate with increased power output in cycle ergometry[END_REF]. In the present study, the pedal rate adopted by the triathletes in the SCD trial (85 rev•min -1 ) was in the range of the EOC identified in previous studies. This could help explain the lower energy expenditure observed in this trial compared with the SCA trial. When the pedal rate is increased above the EOC, as is the case in the SCA trial, a higher metabolic load is classically reported. This elevation in energy expenditure could be related either to an increase in internal work occurring during repetitive limb movements [START_REF] Francescato | Oxygen cost of internal work during cycling[END_REF], or to a higher cost of ventilation [START_REF] Coast | Ventilatory work and oxygen consumption during exercise and hyperventilation[END_REF].
The adoption of a particular pedal rate by cyclists or triathletes during competition could be related to several criteria. The parameters most often cited in the literature are a minimization of neuromuscular fatigue, a decrease in the force applied on the cranks, and a reduction of stress in the muscles of the lower limbs [START_REF] Neptune | A theoretical analysis of preferred pedaling rate selection in endurance cycling[END_REF][START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF][START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF]. Within this framework, pedaling at a higher rate is theoretically associated with a lowering of the pedaling force produced by the muscles of the lower limbs to maintain a given power output, delaying the onset of local neuromuscular fatigue [START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF].
This relationship has been reported in several studies, showing that the peak forces applied on the pedals were significantly lower with the elevation of pedal rate at constant power output [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF][START_REF] Sanderson | The influence of cadence and power output on the biomechanics of force application during steady-rate cycling in competitive and recreational cyclists[END_REF]. But [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF] also showed that the average resultant pedal force produced during short cycling trials varied with the pedal rate adopted by the participants, reaching minimum values at 90 rev•min -1 when power output was 100 W and 100 rev•min -1 for a higher power output (200 W). Moreover, [START_REF] Neptune | A theoretical analysis of preferred pedaling rate selection in endurance cycling[END_REF] have reported that during a 5-min ride at 265 W, the minimum values of 9 indices representative of muscle activation, force, stress, and endurance for 14 muscles of the lower limbs were obtained when the participants adopted a pedal rate of 90 vs. 75 and 105 rev•min -1 . They referred to this cadence (90 rev•min -1 ) as the theoretical mechanical optimal cadence. In the SCA trial of the present study, the pedal rate adopted by the triathletes (91.1 rev•min -1 ) corresponds to the mechanical optimal cadence. It could therefore be suggested that following the high-intensity swim, involving a possible decrease in leg muscular capacity, the triathletes were more fatigued and intrinsically adopted a pedal rate close to the mechanical optimal cadence in order to minimize neuromuscular fatigue. Conversely, following the swim at a lower relative intensity (SCD trial), they were less fatigued and therefore spontaneously chose a lower pedal rate associated with higher torques. However, this hypothesis must be considered with caution because the decreases in resultant torques observed with increasing cadence in the present study are smaller than the decreases in peak forces reported in previous studies for comparable power outputs (4.2% for this study vs. 13% for [START_REF] Patterson | Bicycle pedalling forces as a function of pedalling rate and power output[END_REF]).
The variation of the torques exerted during cycling in the 3 trials of this study shows an asymmetry between the dominant leg and the nondominant leg. Indeed, the MTD were significantly higher compared to the MTND in all trials and the PTD were significantly higher than the PTND in the SCD and SCA trials. The observation of significantly higher torques or forces exerted by the dominant leg compared to the other leg during cycling has been classically reported in the literature [START_REF] Daly | Asymmetry in bicycle ergometer pedaling[END_REF][START_REF] Sargeant | Forces applied to cranks of a bicycle ergometer during one-and two-leg cycling[END_REF]. This asymmetry seems to become more important with fatigue. Within this framework, [START_REF] Mccartney | A constant-velocity cycle ergometer for the study of dynamic muscle function[END_REF] found that the difference in maximal peak torque production between legs during a short-term all-out ride increased with the duration of exercise, reaching more than 15% for some participants at the end of exercise.
The results of the present study are in accordance with these findings. Indeed, the differences in torque between the dominant and nondominant legs were more important in the trials preceded by a swimming bout compared with the CTRL trial (5.0% and 5.6% vs. 4.0%, respectively, in SCD, SCA, and CTRL for mean torques; 5.8% and 5.5% vs. 3.6%, respectively, in SCD, SCA, and CTRL for peak torques). In addition, the difference between PTD and PTND in the CTRL trial was not statistically significant. Finally, although the PTND was significantly higher in the SCD trial compared to the SCA trial, no significant difference between these trials was observed for PTD. This suggests that the dominant leg might be less sensitive to fatigue and/or cadence manipulation compared to the weakest leg. In the context of the swim-to-cycle trial of a sprint triathlon, asymmetry might only have a small influence on performance for two main reasons. First, the resultant torques vary to a small extent between the dominant and nondominant legs. Second, the duration of the cycling leg of sprint triathlons is usually quite short (approx. 29 min 22 sec in the study by [START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF], and therefore fatigue is less likely to occur than in long-distance events.
In conclusion, this study shows that the conditions of a 750-m swimming bout could affect biomechanical adaptation during subsequent cycling. In particular, when prior swimming was performed in a drafting position, the triathletes adopted a significantly lower pedal rate associated with higher mean and peak resultant torques recorded at the crank axle. These modifications could be partly explained by the delayed appearance of fatigue in the cycling bout of the SCD trial compared with the SCA trial. Needed are further studies that include measurements of the force applied on the pedals so we can more precisely examine the influence of prior swimming on the biomechanics of force application during cycling at a constant power output.
Figure 1 -Experimental protocol. L: blood sampling; K4b 2 : installation of the Cosmed K4b 2 analyzer.
Table 1 Mean Values of Biomechanical Parameters Recorded During Cycling in CTRL, SCD, and SCA Trials

                          CTRL           SCD            SCA
Pedal rate (rev•min -1 )  85.1 ± 8.1     85.0 ± 6.2     90.2 ± 8.8 b
MTD (Nm)                  31.2 ± 3.5     31.3 ± 3.2     30.2 ± 3.5 b
MTND (Nm)                 30.0 ± 3.6     29.8 ± 3.4     28.6 ± 3.5 a,b,c
PTD (Nm)                  46.2 ± 5.2     47.4 ± 5.1     46.0 ± 6.2
PTND (Nm)                 44.6 ± 7.7     44.8 ± 6.6     43.6 ± 6.9 b,c
AD (degrees)              80.3 ± 10.7    83.3 ± 6.5     81.8 ± 8.3
AND (degrees)             79.3 ± 13.6    82.4 ± 11.5    82.7 ± 16.7

Note: MTD = mean resultant torque exerted during downstroke of dominant leg; MTND = mean resultant torque exerted during downstroke of nondominant leg; PTD = peak resultant torque exerted during downstroke of dominant leg; PTND = peak resultant torque exerted during downstroke of nondominant leg; AD = crank angle corresponding to PTD; AND = crank angle corresponding to PTND.
Significantly different, p < .05: a from CTRL trial; b from SCD trial; c from dominant side.
Table 2 Mean Values of Physiological Parameters Recorded During Cycling in CTRL, SCD, and SCA Trials

                        CTRL           SCD            SCA
VO 2 (L•min -1 )        3.90 ± 0.50    3.83 ± 0.53    4.03 ± 0.54 b
LA (mmol•L -1 ) 3 min   4.5 ± 0.8      6.6 ± 1.9      7.9 ± 1.9 a,b
LA (mmol•L -1 ) 10 min  4.8 ± 2.2      6.8 ± 2.4      8.2 ± 2.7 a,b
GE (%)                  19.1 ± 1.5     19.5 ± 1.6     18.5 ± 0.6 b

Note: VO 2 = oxygen uptake; LA = blood lactate concentration; GE = gross efficiency.
Significantly different, p < .05: a from CTRL trial; b from SCD trial.
Acknowledgments
The authors are grateful to all the triathletes who took part in the experiment and acknowledge their wholehearted cooperation and motivation. |
01762716 | en | [
"info.info-dc",
"info.info-cl",
"info.info-se"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01762716/file/FGCS2018_CoMe4ACloud_AuthorsVersion.pdf | Zakarea Al-Shara
email: zakarea.al-shara@imt-atlantique.fr
Frederico Alvares
email: frederico.alvares@imt-atlantique.fr
Hugo Bruneliere
email: hugo.bruneliere@imt-atlantique.fr
Jonathan Lejeune
email: jonathan.lejeune@lip6.fr
Charles Prud'homme
email: charles.prudhomme@imt-atlantique.fr
Thomas Ledoux
email: thomas.ledoux@imt-atlantique.fr
CoMe4ACloud: An End-to-end Framework for Autonomic Cloud Systems
Keywords: Cloud Computing, Autonomic Computing, Model Driven Engineering, Constraint Programming
Autonomic Computing has largely contributed to the development of self-manageable Cloud services. It notably allows freeing Cloud administrators of the burden of manually managing varying-demand services, while still enforcing Service-Level Agreements (SLAs). All Cloud artifacts, regardless of the layer carrying them, share many common characteristics. Thus, it should be possible to specify, (re)configure and monitor any XaaS (Anything-as-a-Service) layer in an homogeneous way. To this end, the CoMe4ACloud approach proposes a generic model-based architecture for autonomic management of Cloud systems. We derive a generic unique Autonomic Manager (AM) capable of managing any Cloud service, regardless of the layer. This AM is based on a constraint solver which aims at finding the optimal configuration for the modeled XaaS, i.e. the best balance between costs and revenues while meeting the constraints established by the SLA. We evaluate our approach in two different ways. Firstly, we analyze qualitatively the impact of the AM behaviour on the system configuration when a given series of events occurs. We show that the AM takes decisions in less than 10 seconds for several hundred nodes simulating virtual/physical machines. Secondly, we demonstrate the feasibility of the integration with real Cloud systems, such as Openstack, while still remaining generic. Finally, we discuss our approach according to the current state-of-the-art.
Introduction
Nowadays, Cloud Computing is becoming a fundamental paradigm which is widely considered by companies when designing and building their systems. The number of applications that are developed for and deployed in the Cloud is constantly increasing, even in areas where software was traditionally not seen as the core element (cf. the relatively recent trend on Industrial Internet of Things and Cloud Manufacturing [START_REF] Xu | From Cloud Computing to Cloud Manufacturing[END_REF]). One of the main reasons for this popularity is the Cloud's provisioning model, which allows for the allocation of resources on an on-demand basis. Thanks to this, consumers are able to request/release compute/storage/network resources, in a quasi-instantaneous manner, in order to cope with varying demands [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF].
From the provider perspective, a negative consequence of this service-based model is that it may quickly lead the whole system to a level of dynamicity that makes it difficult to manage (e.g., to enforce Service Level Agreements (SLAs) by keeping Quality of Service (QoS) at acceptable levels). From the consumer perspective, the large amount and the variety of services available in the Cloud market [START_REF] Narasimhan | State of Cloud Applications and Platforms: The Cloud Adopters' View[END_REF] may turn the design, (re)configuration and monitoring into very complex and cumbersome tasks. Despite several recent initiatives intending to provide a more homogeneous Cloud management support, for instance as part of the OASIS TOSCA [START_REF]Topology and Orchestration Specification for Cloud Applications (TOSCA)[END_REF] initiative or in some European funded projects (e.g., [START_REF] Ardagna | MODAClouds: A Model-driven Approach for the Design and Execution of Applications on Multiple Clouds[END_REF] [START_REF] Rossini | Cloud Application Modelling and Execution Language (CAMEL) and the PaaSage Workflow[END_REF]), current solutions still face some significant challenges.
Heterogeneity. Firstly, the heterogeneity of the Cloud makes it difficult for these approaches to be applied systematically in different possible contexts. Indeed, Cloud systems may involve many resources potentially having various and varied natures (software and/or physical). In order to achieve well-tuned Cloud services, administrators need to take into consideration specificities (e.g., runtime properties) of several managed systems (to meet SLA guarantees at runtime). Solutions that can support in a similar way resources coming from all the different Cloud layers (e.g., IaaS, PaaS, SaaS) are thus required.
Automation. Cloud systems are scalable by definition, meaning that a Cloud system may be composed of large sets of components and hence of software structures too complex to be handled manually in an efficient way. This concerns not only the base configuration and monitoring activities, but also the way Cloud systems should behave at runtime in order to guarantee certain QoS levels and expected SLA contracts. As a consequence, solutions should provide means for gathering and analyzing sensor data, making decisions and re-configuring (i.e., translating the decisions taken into actual actions on the system) when relevant.
Evolution. Cloud systems are highly dynamic: clients can book and release "elastic" virtual resources at any moment in time, according to given SLA contracts. Thus, solutions need to be able to reflect and support transparently the elastic and evolutionary aspects of services. This may be non-trivial, especially for systems involving many different services.
In this context, the CoMe4ACloud collaborative project 1 relies on three main pillars: Modeling/MDE [START_REF] Schmidt | Guest editor's introduction: Model-driven engineering[END_REF], Constraint Programming [START_REF]Handbook of Constraint Programming[END_REF], and Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF]. Its primary goal is to provide a generic and extensible solution for the runtime management of Cloud services, independently from the Cloud layer(s) they belong to. We claim that Cloud systems, regardless of the layer in the Cloud service stack, share many common characteristics and goals, which can serve as a basis for a more homogeneous model. In fact, systems can assume the role of both consumer/provider in the Cloud service stack, and the interactions among them are governed by SLAs. In general, Anything-as-a-Service (XaaS) objectives are very similar when generalized to a Service-Oriented Architecture (SOA) model: (i) finding an optimal balance between costs and revenues, i.e., minimizing the costs due to other purchased services and penalties due to SLA violation, while maximizing revenues related to services provided to customers; (ii) meeting all SLA or internal constraints (e.g., maximal capacity of resources) related to the concerned service.
In previous work, we relied on the MAPE-K Autonomic Computing reference architecture as a means to build a generic Autonomic Manager (AM) capable of managing Cloud systems [START_REF] Lejeune | Towards a generic autonomic model to manage cloud services[END_REF] at any layer. The associated generic model basically consists of graphs and constraints formalizing the relationships between the Cloud service providers and their consumers in a SOA fashion. From this model, we automatically generate a constraint programming model [START_REF]Handbook of Constraint Programming[END_REF], which is then used as a decision-making and planning tool within the AM.
This paper builds on our previous work in that we provide further details on the constraint programming models and translation schemes. Above all, in this work, we show how the generic model layer seamlessly connects to the runtime layer, i.e., how monitoring data from the running system are reflected to the model and how changes in the model (performed by the AM or human administrators) are propagated to the running system. We provide some examples showing this connection, notably over an infrastructure based on OpenStack [START_REF]OpenStack Open Source Cloud Computing Software[END_REF]. We evaluate experimentally the feasibility of our approach by conducting a quantitative study over a simulated IaaS system. The objective is to analyze the AM behaviour in terms of adaptation decisions as well as to show how well it scales, considering the generic nature of the approach. Concretely, the results show the AM takes decisions in less than 10 seconds for several hundred nodes simulating virtual/physical machines, while remaining generic.
The remainder of the paper is structured as follows. In Section 2, we provide the background concepts for the good understanding of our work. Section 3 presents an overview of our approach in terms of architecture and underlying modeling support. In Section 4, we provide a formal description of the AM and explain how we designed it based on Constraint Programming. Section 5 gives more details on the actual implementation of our approach and its connection to real Cloud systems. In Section 6, we provide and discuss related performance evaluation data. We describe in details the available related work in Section 7 and conclude in Section 8 by opening on future work.
Background
The CoMe4ACloud project is mainly based on three complementary domains of computer science.
Autonomic Computing
Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF] emerged from the necessity to autonomously manage complex systems in which manual maintenance by humans becomes infeasible, such as those in the context of Cloud Computing. Autonomic Computing provides a set of principles and a reference architecture to help the development of self-manageable software systems. Autonomic systems are defined as a collection of autonomic elements that communicate with each other. An autonomic element consists of a single autonomic manager (AM) that controls one or many managed elements. A managed element is a software or hardware resource similar to its counterpart found in non-autonomic systems, except for the fact that it is adapted with sensors and actuators so as to be controllable by autonomic managers.
An autonomic manager is defined as a software component that, based on high-level goals, uses the monitoring data from sensors and the internal knowledge of the system to plan and execute actions on the managed element (via actuators) in order to achieve those goals. It is also known as a MAPE-K loop, as a reference to Monitor, Analyze, Plan, Execute, Knowledge.
As previously stated, the monitoring task is in charge of observing the data collected by software or hardware sensors deployed in the managed element. The analysis task is in charge of finding a desired state for the managed element by taking into consideration the monitored data, the current state of the managed element, and adaptation policies. The planning task takes into consideration the current state and the desired state resulting from the analysis task to produce a set of changes to be performed on the managed elements. Those changes are actually performed in the execution task within the desired time with the help of the actuators deployed on the managed elements. Last but not least, the knowledge in an autonomic system assembles information about the autonomic element (e.g., system representation models, information on the managed system's states, adaptation policies, and so on) and can be accessed by the four tasks previously described.
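The skeleton below gives a rough idea of how these four tasks can be organized around a shared knowledge base; it is only an illustrative sketch written for this description, and the class and method names are hypothetical rather than part of CoMe4ACloud.

```python
# Schematic MAPE-K loop; all names are hypothetical and the logic is simplified.
class AutonomicManager:
    def __init__(self, sensors, actuators, knowledge):
        self.sensors = sensors          # deployed on the managed element
        self.actuators = actuators      # deployed on the managed element
        self.knowledge = knowledge      # models, states, adaptation policies

    def monitor(self):
        return {sensor.name: sensor.read() for sensor in self.sensors}

    def analyze(self, observed):
        # derive a desired state from observations, current state and policies
        return self.knowledge.desired_state(observed)

    def plan(self, desired):
        # changes leading from the current state to the desired one
        return self.knowledge.diff(self.knowledge.current_state(), desired)

    def execute(self, actions):
        for action in actions:
            self.actuators[action.target].apply(action)

    def loop_once(self):
        self.execute(self.plan(self.analyze(self.monitor())))
```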
Constraint Programming
In autonomic computing systems that manage the dynamics of Cloud systems, methods are needed to take goals or utility functions into consideration. In this work, the goals and utility functions are defined in terms of constraint satisfaction and optimization problems. To this end, we rely on Constraint Programming to model and solve this kind of problem.
Constraint Programming (CP) is a paradigm that aims to solve combinatorial problems defined by variables, each of them associated with a domain, and constraints over them [START_REF]Handbook of Constraint Programming[END_REF]. Then a general purpose solver attempts to find a solution, that is, an assignment of each variable to a value from its domain which meets the constraints it is involved in. Examples of CP solvers include free open-source libraries, such as Choco solver [START_REF] Prud'homme | Choco Documentation, TASC, LS2N CNRS UMR 6241[END_REF], Gecode [START_REF]The Gecode Team, Gecode: A generic constraint development environment[END_REF] or OR-tools [START_REF]The OR-Tools Team, Google optimization tools[END_REF], and commercial software, such as IBM CPLEX CP Optimizer [START_REF]Cplex cp optimizer[END_REF] or SICStus Prolog [START_REF] Andersson | Sicstus prolog user's manual[END_REF].
Modeling a CSP
In Constraint Programming, a Constraint Satisfaction Problem (CSP) is defined as a tuple ⟨X, D, C⟩ and consists of a set of n variables X = {X_1, X_2, . . . , X_n}, their associated domains D, and a collection of m constraints C. D refers to a function that maps each variable X_i ∈ X to its respective domain D(X_i). A variable X_i can be assigned to integer values (i.e., D(X_i) ⊆ Z), to a set of discrete values (i.e., D(X_i) ⊆ P(Z)) or to real values (i.e., D(X_i) ⊂ R). Finally, C corresponds to a set of constraints {C_1, C_2, . . . , C_m} that restrict the possible values the variables can be assigned to. A constraint C_j is defined as a relation over a subset of variables {X_j1, X_j2, . . . , X_jnj} ⊆ X, that is, C_j ⊆ D(X_j1) × D(X_j2) × . . . × D(X_jnj); a tuple of values (v_1, v_2, . . . , v_nj) satisfies C_j if it belongs to this relation.
Solving a CSP with CP
In CP, the user provides a CSP and a CP solver takes care of solving it. Solving a CSP ⟨X, D, C⟩ is about finding a tuple of values (v_1, v_2, . . . , v_n) such that v_i ∈ D(X_i) for every variable X_i ∈ X and all the constraints C_j ∈ C are met. In the case of a Constraint Optimization Problem (COP), that is, when an optimization criterion has to be maximized or minimized, a solution is the one that maximizes or minimizes a given objective function f : D(X) → Z.
A constraint model can be achieved in a modular and composable way. Each constraint expresses a specific sub-problem, from arithmetical expressions to more complex relations such as AllDifferent [START_REF] Régin | Generalized arc consistency for global cardinality constraint[END_REF] or Regular [START_REF] Pesant | A regular language membership constraint for finite sequences of variables[END_REF]. A constraint not only defines a semantic (AllDifferent: variables should take distinct values in a solution; Regular: an assignment should respect a pattern given by an automaton) but also embeds a filtering algorithm which detects values that cannot be extended to a solution. Modeling a CSP hence consists in combining constraints together, which offers both flexibility (the model can be easily adapted to needs) and expressiveness (the model is almost human readable). Solving a CSP consists in an alternation of a propagation algorithm (each constraint removes forbidden values, if any) and a Depth First Search algorithm with backtracking to explore the search space.
Overall, the advantages of adopting CP as a decision-making modeling and solving tool are manifold: no particular knowledge is required to describe the problem, adding or removing variables/constraints is easy (and thus useful when code is generated), and the general purpose solver can be tweaked easily.
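As a concrete illustration of these notions, the short sketch below models and solves a toy COP with a general-purpose CP solver. We use the Python API of OR-Tools (one of the solvers cited above) purely as an example; the variables and constraints are made up and do not correspond to the CoMe4ACloud constraint program.

```python
# Toy CSP/COP: three distinct integer variables with an arithmetic constraint
# and a criterion to maximize (illustrative only).
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
z = model.NewIntVar(0, 10, "z")

model.AddAllDifferent([x, y, z])   # global constraint: pairwise distinct values
model.Add(x + y + z <= 15)         # arithmetic constraint
model.Maximize(2 * x + y)          # optimization criterion (COP)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.Value(z))
```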
Model-driven Engineering
Model Driven Engineering (MDE) [START_REF] Schmidt | [END_REF][START_REF] Bézivin | Model Driven Engineering: An Emerging Technical Space[END_REF], more generally also referred to as Modeling, is a software engineering paradigm relying on the intensive creation, manipulation and (re)use of various and varied types of models. In an MDE/Modeling approach, these models are actually the first-class artifacts within related design, development, maintenance and/or evolution processes concerning software as well as their environments and data. The main underlying idea is to reason as much as possible at a higher level of abstraction than the one usually considered in more traditional approaches, e.g. which are often source code-based. Thus the focus is strongly put on modeling, or allowing the modeling of, the knowledge around the targeted domain or range of problems. One of the principal objectives is to capitalize on this knowledge/expertise in order to better automate and make more efficient the targeted processes.
Since several years already, there is a rich international ecosystem on approaches, practices, solutions and concrete use cases in/for Modeling [START_REF] Brambilla | Model-Driven Software Engineering in Practice[END_REF]. Among the most frequent applications in the industry, we can mention the (semi-)automated development of software (notably via code generation techniques), the support for system and language interoperability (e.g. via metamodeling and model transformation techniques) or the reverse engineering of existing software solutions (via model discovery and understanding techniques). Complementarily, another usage that has increased considerably in the past years, both in the academic and industrial world, is the support for developing Domain-Specific Languages (DSLs) [START_REF] Fowler | Domain-Specific Languages[END_REF]. Finally, the growing deployment of so-called Cyber-Physical Systems (CPSs), that are becoming more and more complex in different industry sectors (thanks to the advent of Cloud and IoT for example), has been creating new requirements in terms of Modeling [START_REF] Derler | Modeling Cyber-Physical Systems[END_REF].
Approach Overview
This section provides an overview of the CoMe4ACloud approach. First, we describe the global architecture of our approach, before presenting the generic metamodels that can be used by the users (Cloud Experts and Administrators) to model Cloud systems. Finally, to help Cloud users deal with cross-layer concerns and SLAs, we provide a service-oriented modeling extension for XaaS layers.
Architecture
The proposed architecture is depicted in Figure 1. In order to better grasp the fundamental concepts of our approach, it is important to reason about the architecture in terms of life-cycle. The life-cycle associated with this architecture involves both kinds of models. A topology model t has to be defined manually by a Cloud expert at design time (step 1). The objective is to specify a particular topology of system to be modeled and then handled at runtime, e.g., a given type of IaaS (with Virtual/Physical Machine nodes) or SaaS (with webserver and database components). The topology model is used as the input of a specific code generator that parameterizes a generic constraint program that is integrated into the Analyzer (step 2) of the generic AM.
The goal of the constraint program is to automatically compute and propose a new suitable system configuration model from an original one. Hence, in the beginning, the Cloud Administrator must provide an initial configuration model c0 (step 3), which, along with the forthcoming configurations, is stored in the AM's Knowledge base. The state of the system at a given point in time is gathered by the Monitor (step 4) and represented as a (potentially new) configuration model c0'. It is important to notice that this new configuration model c0' reflects the running Cloud system's current state (e.g., a host that went down or a load variation), but it does not necessarily respect the constraints defined by the Cloud Expert/Administrator, e.g., if a PM crashed, all the VMs hosted by it should be reassigned, otherwise the system will be in an inconsistent state. To that effect, the Analyzer is launched (step 5) whenever a new configuration model exists, whether it results from modifications that are manually performed by the Cloud Administrator or automatically performed by the Monitor. It takes into account the current (new) configuration model c0' and the related set of constraints encoded in the CP itself. As a result, a new configuration model c1 respecting those constraints is produced. The Planner produces a set of ordered actions (step 6) that have to be applied in order to go from the source (c0') to the target (c1) configuration model.
Finally, the Executor (step 7) relies on actuators deployed on the real Cloud System to apply those actions. This whole process (from steps 4 to 7) can be reexecuted as many times as required, according to the runtime conditions and the constraints imposed by the Cloud Expert/Administrator. It is important to notice that configuration models are meant to be representations of actual Cloud systems at given points in time. This can be seen with configuration model c (stored within the Knowledge base) and Cloud system s in Figure 1, for instance. Thus, the content of these models has to always reflect the current state of the corresponding running Cloud systems. More details on how we ensure the required synchronization between the model(s) and the actual system are given in Section 5.2.
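To illustrate the role of the Planner described above, the following deliberately simplified sketch derives a list of actions by comparing a source and a target configuration; it is our own illustration (the actual CoMe4ACloud Planner also orders actions according to the impacts declared in the topology model), and the data layout is hypothetical.

```python
# Simplified planning-by-diff sketch; configurations map node ids to the set of
# ids of the nodes they are linked to (a rough stand-in for the real models).
def plan(source, target):
    actions = []
    for node in target.keys() - source.keys():
        actions.append(("activate", node))            # new nodes are enabled first
    for node in target:
        old_links = source.get(node, set())
        for succ in target[node] - old_links:
            actions.append(("link", node, succ))
        for succ in old_links - target[node]:
            actions.append(("unlink", node, succ))
    for node in source.keys() - target.keys():
        actions.append(("deactivate", node))          # removed nodes are disabled last
    return actions

# Example: one VM is migrated from pm1 to pm2 and a new VM is started on pm2.
print(plan({"vm1": {"pm1"}}, {"vm1": {"pm2"}, "vm2": {"pm2"}}))
```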
Generic Topology and Configuration Modeling
One of the key features of the CoMe4ACloud approach is the high-level language and tooling support, whose objective is to facilitate the description of autonomic Cloud systems with adaptation capabilities. For that purpose, we strongly rely on an MDE approach that is based on two generic metamodels.
As shown in Figure 2, the Topology metamodel covers 1) the general description of the structure of a given topology and 2) the constraint expressions that can be attached to the specified types of nodes and relationships. Starting with the structural aspects, each Cloud system's Topology is named and composed of a set of NodeTypes and corresponding RelationshipTypes that specify how to interconnect them. It can also have some global constraints attached to it.
Each NodeType has a name, a set of AttributeTypes and can inherit from another NodeType. It can also have one or several specific Constraints attached to it. Cloud experts can declare the impact (or "cost") of enabling/disabling nodes at runtime (e.g., a given type of Physical Machine/PM node takes a certain time to be switched on/off). Each AttributeType has a name and value type. It allows indicating the impact of updating related attribute values at runtime. A ConstantAttributeType stores a constant value at runtime, a CalculatedAttributeType allows setting an Expression automatically computing its value at runtime.
Any given RelationshipType has a name and defines a source and target NodeType. It also allows specifying the impact of linking/unlinking corresponding nodes via relationships at runtime (e.g., migrating a given type of Virtual Machine/VM node from a type of PM node to another one can take several minutes). One or several specific Constraints can be attached to a RelationshipType.
A Constraint relates two Expressions according to a predefined set of comparison operators. An Expression can be a single static IntegerValueExpression or an AttributeExpression pointing to an AttributeType. It can be a NbConnectionExpression representing the number of NodeTypes connected to a given NodeType or RelationshipType (at runtime) as predecessor/successor or source/target respectively. It can also be an AggregationExpression aggregating the values of an AttributeType from the predecessors/successors of a given NodeType, according to a predefined set of aggregation operators. It can be a BinaryExpression between two (sub)Expressions, according to a predefined set of algebraic operators. Finally, it can be a CustomExpression using any available constraint/query language (e.g., OCL, XPath, etc.), the full expression simply stored as a string. Tools exploiting corresponding models are then in charge of processing such expressions.
As shown in Figure 3, the Configuration part of the language is lighter and directly refers to the Topology one. An actually running Cloud system Configuration is composed of a set of Nodes and Relationships between them. Each Node has an identifier and is of a given NodeType, as specified by the corresponding topology. It also comes with a boolean value indicating whether it is actually activated or not in the configuration. This activation can be reflected differently in the real system according to the concerned type of node (e.g., a given Virtual Machine (VM) is already launched or not). A node contains a set of Attributes providing name/value pairs, still following the specifications of the related topology.
Each Relationship also has an identifier and is of a given RelationshipType, as specified again by the corresponding topology. It simply interconnects two allowed Nodes together and indicates if the relationship can be possibly changed (i.e., removed) over time, i.e., if it is constant or not.
Service-oriented Topology Model for XaaS layers
The Topology and Configuration metamodels presented in the previous section provide a generic language to model XaaS systems. Thanks to that, we can model any kind of XaaS system that can be expressed by a Directed Acyclic Graph (DAG) with constraints having to hold at runtime. However, this level of abstraction can also be seen as an obstacle for some Cloud Experts and Administrators to model elements really specific to Cloud Computing. Thus, in addition to the generic modeling language presented before, we also provide in CoMe4ACloud an initial set of reusable node types which are related to the core Cloud concepts. They constitute a base Service-oriented topology model which basically represents XaaS systems in terms of their consumers (i.e., the clients that consume the offered services), their providers (i.e., the required resources, also offered as services) and the Service Level Agreements (SLA) formalizing those relationships. Figure 4 shows an illustrative graphical representation of an example configuration model using the pre-defined node types. Root node types. We introduce two types of root nodes: RootProvider and RootClient. In any configuration model, there can only exist one node of each root node type. These two nodes do not represent a real component of the system but can rather be seen as theoretical nodes. A RootProvider node (resp. RootClient node) has no target node (resp. source node) and is considered as the final target (resp. initial source). In other words, a RootProvider node (resp. RootClient node) represents the set of all the providers (resp. the consumers) of the managed system. This allows grouping all features of both provider and consumer layers, especially the costs due to operational expenses of services bought from all the providers (represented by attribute SysExp in a RootProvider node) and revenues thanks to services sold to all the consumers (represented by attribute SysRev in a RootClient node).
SLA node types. We also introduce two types of SLA nodes: SLAClient and SLAProvider. In a configuration model, SLA nodes define the prices of each service level that can be provided and the amount of penalties for violations. Thus, both types of SLA nodes provide different attributes representing the different prices, penalties and the current cost or revenue (total cost) induced by the current set of bought services (cf. the Service node types below) associated with it. An SLAClient node (resp. SLAProvider node) has a unique source (resp. target) which is the RootClient node (resp. RootProvider node) in the configuration. Consequently, the attribute SysRev (resp. SysExp) is equal to the sum of the total cost attributes of its source nodes (resp. target nodes).
Service node types. A SLA defines several Service Level Objectives (SLO) for each provided service [START_REF] Kouki | Csla: a language for improving cloud sla management[END_REF]. Thus, we have to provide base Service node types: each service provided to a client (resp. received from a provider) is represented by a node of type ServiceClient (resp. ServiceProvider). The different SLOs are attributes of the corresponding Service nodes (e.g., configuration requirements, availability, response time, etc.). Since each Service node is linked with a unique SLA node in a configuration model, we define an attribute that designates the SLA node related to a given Service node. For a ServiceClient node (resp. ServiceProvider node), this attribute is named sla client (resp. sla prov) and its value is a node ID, which means that the node has a unique source (resp. target) corresponding to the SLA.
Internal Component node type. InternalComponent represents any kind of node of the XaaS layer that we want to manage with the Generic AM (contrary to the previous node types, which are theoretical nodes provided as core Cloud concepts). Thus, it is a common super-type of node to be extended by users of the CoMe4ACloud approach within their own topologies (e.g., cf. Listing 1 from Section 5.1). A node of this type may be used by another InternalComponent node or by a ServiceClient node. Conversely, it may require another InternalComponent node or a ServiceProvider node to work.
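As an illustration of how these pre-defined types chain together on the client side of Figure 4, the sketch below builds a small RootClient → SLAClient → ServiceClient → InternalComponent path; the enum, record and identifier names are hypothetical and merely mirror the base node types described above.

import java.util.List;

// Hypothetical sketch of the service-oriented base node types and one client-side chain.
public final class ServiceTopologySketch {

    enum BaseType { ROOT_CLIENT, ROOT_PROVIDER, SLA_CLIENT, SLA_PROVIDER,
                    SERVICE_CLIENT, SERVICE_PROVIDER, INTERNAL_COMPONENT }

    record TypedNode(String id, BaseType type) { }
    record Link(TypedNode source, TypedNode target) { }

    public static void main(String[] args) {
        TypedNode rootClient = new TypedNode("rootC", BaseType.ROOT_CLIENT);          // unique per configuration
        TypedNode sla        = new TypedNode("sla1", BaseType.SLA_CLIENT);            // prices and penalties
        TypedNode service    = new TypedNode("vmService1", BaseType.SERVICE_CLIENT);  // SLO attributes
        TypedNode component  = new TypedNode("vm1", BaseType.INTERNAL_COMPONENT);     // managed resource

        // RootClient is the initial source; the SLA groups the services sold to that client,
        // and each service is realized by one or more internal components.
        List<Link> clientSide = List.of(new Link(rootClient, sla),
                                        new Link(sla, service),
                                        new Link(service, component));
        clientSide.forEach(l -> System.out.println(l.source().id() + " -> " + l.target().id()));
    }
}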
Decision Making Model
In this section, we describe how we formally modeled the decision making part (i.e., the Analyzer and Planner ) of our generic Autonomic Manager (AM) by relying on Constraint Programming (CP).
Knowledge (Configuration Models)
As previously mentioned, the Knowledge contains models of the current and past configurations of the Cloud system (i.e., managed element). We define formal notations for a configuration at a given instant according to the XaaS model described in Figure 3.
The notion of time and configuration consistency
We first define T, the set of instants t representing the execution time of the system, where t_0 is the instant of the first configuration (e.g., the very first configuration model initialized by the Cloud Administrator, cf. Figure 1).
The XaaS configuration model at instant t is denoted by c_t, organized as a Directed Acyclic Graph (DAG), where vertices correspond to nodes and edges to relationships of the configuration metamodel (cf. Figure 3). CSTR_{c_t} denotes the set of constraints of configuration c_t. Notice that these constraints refer to those defined in the topology model (cf. Figure 2). The property satisfy(cstr, t) is verified at t if and only if the constraint cstr ∈ CSTR_{c_t} is met at instant t. The system is satisfied (satisfy(c_t)) at instant t if and only if ∀cstr ∈ CSTR_{c_t}, satisfy(cstr, t). Finally, the function H(c_t) gives the score of the configuration c at instant t: the higher the value, the better the configuration (e.g., in terms of balance between costs and revenues).
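A possible, simplified reading of satisfy(c_t) and H(c_t) in code form is sketched below; the record and method names are ours, and the constraint shown is only an illustrative example.

import java.util.List;
import java.util.function.Predicate;

// Sketch of configuration consistency and scoring (hypothetical names).
public final class ConsistencySketch {

    record Configuration(List<Predicate<Configuration>> constraints, double sysRev, double sysExp) {

        // satisfy(c_t): every constraint attached to the configuration holds
        boolean satisfied() {
            return constraints.stream().allMatch(cstr -> cstr.test(this));
        }

        // H(c_t): the higher the balance between revenues and expenses, the better
        double score() {
            return sysRev - sysExp;
        }
    }

    public static void main(String[] args) {
        Predicate<Configuration> nonNegativeBalance = c -> c.sysRev() >= c.sysExp(); // example constraint
        Configuration c = new Configuration(List.of(nonNegativeBalance), 120.0, 80.0);
        System.out.println("satisfy = " + c.satisfied() + ", H = " + c.score());
    }
}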
Nodes and Attributes
Let n_t be a node at instant t. As defined in Section 3.2, it is characterized by: a node identifier (id_n ∈ ID_t), where ID_t is the set of existing node identifiers at t and id_n is unique ∀t ∈ T; a type (type_n ∈ TYPES); a set of predecessors (preds_n^t ∈ P(ID_t)) and successors (succs_n^t ∈ P(ID_t)) nodes in the DAG; a set of constraints CSTR_n^t specific to the type (cf. Figure 2); and a set of attributes (atts_n^t) defining the node's internal state. Note that ∀n_t^a, n_t^b ∈ c_t with id_{n^a} ≠ id_{n^b}, id_{n^b} ∈ succs_{n^a}^t ⇔ id_{n^a} ∈ preds_{n^b}^t. It is worth noting that the notion of predecessors and successors here is implicit in the notion of Relationship of the configuration metamodel.
An attribute att_t ∈ atts_n^t at instant t is defined by: a name name_att, which is constant ∀t ∈ T, and a value denoted val_att^t ∈ R ∪ ID_t (i.e., an attribute value is either a real value or a node identifier).
Configuration Evolution
The Knowledge within the AM evolves as configuration models are modified over time. In order to model the transition between configuration models, the time T is discretized by the application of a transition function f on c_t such that f(c_t) = c_{t+1}. A configuration model transition can be triggered in two ways: by an event function, i.e., an internal event (e.g., the Cloud Administrator initializes (adds) a software component/node, a PM crashes) or an external event (e.g., a new client arrival), which in both cases alters the system configuration and thus results in a new configuration model (cf. function event in Figure 5); this function typically models the Monitor component of the AM. Or by the AM itself, which performs the function control. This function ensures that satisfy(c_{t+1}) is verified, while maximizing H(c_{t+1}) 2 and minimizing the transition cost to change the system state between c_t and c_{t+1}. It characterizes the execution of the Analyzer, Planner and Executor components of the AM.
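The alternation between the event and control functions can be summarized by the following skeleton; the interfaces are hypothetical simplifications of the Monitor and of the Analyzer/Planner/Executor chain.

// Sketch of the event/control alternation (interfaces are hypothetical).
public final class TransitionLoopSketch {

    interface Configuration { boolean satisfied(); double score(); }

    interface Monitor { Configuration event(Configuration current); }             // internal/external events

    interface AutonomicManager { Configuration control(Configuration current); }  // Analyzer + Planner + Executor

    static void run(Configuration c0, Monitor monitor, AutonomicManager am) {
        Configuration current = c0;
        while (true) {
            Configuration observed = monitor.event(current);   // c_t, possibly unsatisfiable
            current = am.control(observed);                     // c_{t+1}: satisfiable, score maximized
            assert current.satisfied();
        }
    }
}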
Figure 5 illustrates a transition graph among several configurations. It shows that an event function potentially moves the current configuration away from an optimal configuration, and that a control function tries to bring it closer to a new optimal configuration while respecting all the system constraints.
Analyzer (Constraint Model)
In the AM, the Analyzer component is achieved by a constraint solver. A Constraint Programming model [START_REF]Handbook of Constraint Programming[END_REF] needs three elements to find a solution: a static set of problem variables, a domain function, which associates to each variable its domain, and a set of constraints. In our model, the graph corresponding to the configuration model can be considered as a composite variable defined in a domain. For the constraint solver, deciding to add a new node to the configuration is impossible, as it would imply adding new variables to the constraint model during the evaluation. We hence have to define a set N_t corresponding to an upper bound of the node set of c_t, i.e., c_t ⊆ N_t. More precisely, N_t is the set of all existing nodes at instant t. Every node n_t ∉ c_t is considered as deactivated and does not take part in the running system at instant t.
Each existing node has consequently a boolean attribute called "activated" (cf. Node attribute activated in Figure 3). Thanks to this attribute the constraint solver can decide whether a node has to be enabled (true value) or disabled (false value).
The property enable(n_t) holds if and only if n is activated at t. This property has an incidence on the two neighbor sets preds_n^t and succs_n^t. Indeed, when enable(n_t) is false, n_t has no neighbor, because n does not depend on any other node and no node may depend on n. The set N_t can only be changed by the Administrator or by the Monitor when it detects, for instance, a node failure or a new node in the running system (managed element), meaning that a node will be removed from or added to N_{t+1}. Figure 6 depicts an example of two configuration transitions. At instant t, there is a node set N_t = {n_1, n_2, ..., n_8} and c_t = {n_1, n_2, n_5, n_6, n_7}. Each node color represents a given type defined in the topology (cf. Figure 3). In the next configuration, at t+1, the Monitor component detects that component n_6, of a green type, has failed, leading the managed system to an unsatisfiable configuration. At t+2, the control function detects the need to activate a deactivated node of the same type in order to replace n_6 by n_8. This scenario may match the configuration transitions from conf 1 to conf 3 in Figure 5.
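To illustrate how such activation decisions can be encoded for a constraint solver, the sketch below declares, with the Choco solver that the implementation relies on (cf. the Experimental Testbed section), one boolean activation variable per existing node and boolean link variables between nodes. It only encodes the activation attribute together with the neighbor rules imposed by the configuration constraints below; typing, acyclicity and attribute constraints are deliberately omitted, and all names are ours.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;

// Simplified encoding of node activation and neighbor rules with Choco.
public final class ActivationSketch {
    public static void main(String[] args) {
        int n = 4; // |N_t|: every existing node, activated or not
        Model model = new Model("xaas-config");

        // activated[i] is true iff node i takes part in the running configuration c_t
        BoolVar[] activated = model.boolVarArray("activated", n);
        // link[i][j] is true iff node j is a successor of node i
        BoolVar[][] link = model.boolVarMatrix("link", n, n);

        for (int i = 0; i < n; i++) {
            IntVar succ = model.intVar("succ#" + i, 0, n);
            IntVar pred = model.intVar("pred#" + i, 0, n);
            model.sum(link[i], "=", succ).post();                // number of successors of i
            BoolVar[] incoming = new BoolVar[n];
            for (int j = 0; j < n; j++) incoming[j] = link[j][i];
            model.sum(incoming, "=", pred).post();                // number of predecessors of i

            // a deactivated node has no neighbor at all
            model.ifThen(activated[i].not(), model.arithm(succ, "=", 0));
            model.ifThen(activated[i].not(), model.arithm(pred, "=", 0));
            // a (non-root) activated node needs at least one successor and one predecessor
            model.ifThen(activated[i], model.arithm(succ, ">", 0));
            model.ifThen(activated[i], model.arithm(pred, ">", 0));
        }

        if (model.getSolver().solve()) {
            for (int i = 0; i < n; i++)
                System.out.println("node " + i + " activated = " + activated[i].getValue());
        }
    }
}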
Configuration Constraints
The Analyzer should not only find a configuration that satisfies the constraints. It should also consider the objective function H() that is part of the configuration constraints. The graph representing the managed element (the running Cloud system) has to meet the following constraints:
1. any deactivated node n_t at t ∈ T has no neighbor: n_t does not depend on other nodes and there is no node that depends on n_t. Formally, ¬enable(n_t) ⇒ succs_n^t = ∅ ∧ preds_n^t = ∅;

2. except for root node types (cf. Section 3.3), any activated node has at least one predecessor and one successor. Formally, enable(n_t) ⇒ |succs_n^t| > 0 ∧ |preds_n^t| > 0;

3. if a node n_{t_i} is enabled at instant t_i, then all the constraints associated with n (link and attribute constraints) will be met in a finite time. Formally, enable(n_{t_i}) ⇒ ∃t_j ≥ t_i, ∀cstr ∈ CSTR_n^{t_i}, cstr ∈ CSTR_n^{t_j} ∧ enable(n_{t_j}) ∧ satisfy(cstr, t_j);

4. the function H() is equal to the balance between the revenues and the expenses of the system (cf. Figure 4). Formally, H(c_t) = att_rev^t - att_exp^t, where att_rev^t ∈ atts^t_{n_RC} with name_{att_rev} = SysRev, and att_exp^t ∈ atts^t_{n_RP} with name_{att_exp} = SysExp (n_RC and n_RP being the RootClient and RootProvider nodes).

Execution of the Analyzer

The Analyzer needs four inputs to process the next configuration:

a) the current configuration model, which may not be satisfiable (e.g., c0 in Section 3.1);

b) the most recent satisfiable configuration model (e.g., c0 in Section 3.1);

c) an expected lower bound of the next balance. This value depends on the reason why the AM has been triggered. For instance, we know that if the reason is a new client arrival or if a provider decreases its prices, the expected balance must be higher than the previous one. Conversely, if there is a client departure, we can estimate that the lower bound of the next balance will be smaller (in this case, the old balance minus the revenue brought by the client). This forces the solver to find a solution with a balance greater than or equal to this input value;

d) a boolean that indicates whether to use the Neighborhood Search Strategy, which is explained below.

Algorithm 1: Global algorithm of the Analyzer. Analyse(CurrentConf, SatisfiableConf, MinBalance, withNeighborhood) returns a satisfiable Configuration: it first builds the constraint model from CurrentConf (line 3) and an initializer from SatisfiableConf, MinBalance and withNeighborhood (line 4), then iterates while no solution is found and no error is raised (lines 5 to 13), raising an error when it becomes impossible to initialize variables.

Algorithm 1 is the global algorithm of the Analyzer, which mimics Large Neighborhood Search [START_REF] Shaw | Using constraint programming and local search methods to solve vehicle routing problems[END_REF]. This strategy consists of a two-step loop (lines 5 to 13) which is executed after the constraint model is instantiated in the solver (line 3) from the current configuration (i.e., variables and constraints are declared). First, in line 7, some variables of the solver model are selected to be fixed to their value in the previous satisfiable configuration (in our case, the b) parameter). This reduces the number of values some variables X_s can be assigned to: D_0(X_j) ⊂ D(X_j), ∀X_j ∈ X_s.
Variables not selected to be fixed represent a variable area (VA) in the DAG. It corresponds to the set of nodes in the graph for which the solver is able to change the successor and predecessor links. Such a restriction makes the search space to be explored by the solver smaller, which tends to reduce the solving time. The way variables are selected is managed by the initializer (line 4). Note that when the initializer is built, the initial variable area VA_i, where i = 0, contains all the deactivated nodes and any node whose state has changed since the last optimal configuration (e.g., attribute value modification, disappearance/appearance of a neighbour).

Then, the solver tries to find a solution for the partially restricted configuration (line 9). If a solution is found, the loop breaks and the new configuration is returned (line 11). Otherwise, the variable area is extended (line 7). A call for a new initialization at iteration i means that the solver has proved that there is no solution at iteration i-1. Consequently, a new initialization leads to relaxing the previous VA_{i-1}: D_{i-1}(X_j) ⊆ D_i(X_j) ⊆ D(X_j), ∀X_j ∈ X_s. At iteration i, VA_i is equal to VA_{i-1} plus the sets of successors and predecessors of all nodes in VA_{i-1}. Finally, if no solution is found and the initializer is not able to relax domains anymore, D_i(X_j) = D(X_j), ∀X_j ∈ X_s, the Analyzer throws an error.
This mechanism brings three advantages: (1) it reduces the solving time because domain cardinalities are restrained, (2) it limits the set of actions in the plan, thus achieving one of our objectives, and (3) it tends to produce configurations that are close to the previous one in terms of activated nodes and links.

Note that without the Neighborhood Search Strategy, the initial variable area VA_0 is equal to the whole graph, thus leading to a single iteration.
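The overall shape of this two-step loop can be summarized by the following skeleton; the interfaces and method names are hypothetical abstractions of the solver and of the initializer, not the actual implementation.

import java.util.Set;

// Illustrative skeleton of the Analyzer loop (names are ours).
public final class AnalyzerSketch {

    interface SolverModel {
        void reset();
        void fixOutside(Set<String> variableArea);  // fix variables outside VA to their last satisfiable value
        boolean solve();                            // try to find a satisfiable configuration
        Configuration extractSolution();
    }

    interface Initializer {
        Set<String> initialArea();                  // deactivated nodes + nodes changed since the last optimum
        boolean canRelax(Set<String> area);
        Set<String> relax(Set<String> area);        // add successors/predecessors of nodes already in VA
    }

    static final class Configuration { /* nodes, relationships, attributes */ }

    static Configuration analyse(SolverModel solver, Initializer init, boolean withNeighborhood) {
        Set<String> variableArea = withNeighborhood ? init.initialArea() : null; // null = whole graph
        while (true) {
            solver.reset();
            if (variableArea != null) solver.fixOutside(variableArea);
            if (solver.solve()) return solver.extractSolution();
            if (variableArea == null || !init.canRelax(variableArea))
                throw new IllegalStateException("no satisfiable configuration found");
            variableArea = init.relax(variableArea); // VA_i = VA_{i-1} plus its neighbors
        }
    }
}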
Planner (Differencing and Match)
The Planner relies on differencing and match algorithms for object-oriented models [START_REF] Xing | Umldiff: An algorithm for object-oriented design differencing[END_REF] to compute the differences between the current configuration and the new configuration produced by the Analyzer. From a generic point of view, there exist five types of actions: enable and disable a node; link and unlink two nodes; and update an attribute value.
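A simplified differencing step could look as follows; the flattened data model (maps of node activations, link identifiers and attribute values) and all names are our own illustration. The resulting list of generic actions is what the Executor then translates through the adaptors described below.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the Planner's differencing step (simplified data model).
public final class PlannerSketch {

    record Action(String kind, String subject, String detail) { }

    // nodes: node id -> activated? ; links: "src->dst" ; attrs: "nodeId.attr" -> value
    static List<Action> diff(Map<String, Boolean> curNodes, Map<String, Boolean> newNodes,
                             Set<String> curLinks, Set<String> newLinks,
                             Map<String, String> curAttrs, Map<String, String> newAttrs) {
        List<Action> plan = new ArrayList<>();
        for (var e : newNodes.entrySet()) {
            boolean was = curNodes.getOrDefault(e.getKey(), false);
            if (!was && e.getValue()) plan.add(new Action("enable", e.getKey(), ""));
            if (was && !e.getValue()) plan.add(new Action("disable", e.getKey(), ""));
        }
        for (String l : newLinks) if (!curLinks.contains(l)) plan.add(new Action("link", l, ""));
        for (String l : curLinks) if (!newLinks.contains(l)) plan.add(new Action("unlink", l, ""));
        for (var e : newAttrs.entrySet())
            if (!e.getValue().equals(curAttrs.get(e.getKey())))
                plan.add(new Action("update", e.getKey(), e.getValue()));
        return plan;
    }
}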
Implementation Details
In this section, we provide some implementation details regarding the modeling languages and tooling support used by the users to specify Cloud systems as well as the mechanisms of synchronization between the models within the Autonomic Manager and the actual running Cloud systems.
A YAML-like Concrete Syntax
We propose a notation to allow Cloud experts to quickly specify their topologies and initialize related configurations. It also permits sharing such models in a simple syntax that can be directly read and understood by Cloud administrators. We first built an XML dialect and prototyped an initial version, but we observed that it was too verbose and complex, especially for newcomers. We also thought about providing a graphical syntax via simple diagrams. While this seems appropriate for visualizing configurations, it makes topology creation/editing more time-consuming (writing is usually faster than diagramming for Cloud technical experts). Finally, we designed a lightweight textual syntax covering both topology and configuration specifications.
To provide a syntax that looks familiar to Cloud users, we considered YAML and its TOSCA version [START_REF]YAML (TOSCA Simple Profile[END_REF], featuring most of the structural constructs we needed (for topologies and configurations). We decided to start from this syntax and complement it with the elements specific to our language, notably concerning expressions and constraints, which are not supported in YAML (cf. Section 3.2). We also ignored some constructs from TOSCA YAML that are not required in our language (e.g., related to interfaces, requirements or capabilities). Moreover, we can still rely on other existing notations. For instance, by translating a configuration definition from our language to TOSCA, users can benefit from the GUI offered by external tooling such as Eclipse Winery [START_REF]Winery project[END_REF].
As shown on Listing 1, for each node type the user gives its name and the node type it inherits from (if any) (cf. Section 3.3). Then she describes its different attribute types via the properties field, following the TOSCA YAML terminology.
Similarly, for each relationship type the expert gives its name and then indicates its source and target node types.
As explained before (and not supported in TOSCA YAML), expressions can be used to indicate how to compute the initial value of an attribute type. For instance, the variable ClusterCurConsumption of the Cluster node type is initialized at configuration level by computing a product between the values of other variables. Expressions can also be used to attach constraints to a given node/relationship type. For example, in the node type Power, the value of the variable PowerCurConsumption has to be less than or equal to the value of the constant PowerCapacity (at configuration level).
As shown on Listing 2, for each configuration the user provides a unique identifier and indicates which topology it is relying on. Then, for each actual node/relationship, its particular type is explicitly specified by directly referring to the corresponding node/relationship type from a defined topology. Each node describes the values of its different attributes (calculated or set manually), while each relationship describes its source and target nodes.
Synchronization with the running system
We follow the principles of Models@Runtime [START_REF] Blair | Models@ run.time[END_REF], by defining a bidirectional causal link between the running system and the model. The idea is to decouple the specificities of the causal link, w.r.t. the specific running subsystems, while keeping the Autonomic Manager generic, as sketched in Figure 7. It is important to recall that the configuration model is a representation of the running system and it can be modified in three different situations: (i) when the Cloud administrator manually changes the model; (ii) when it is time to update the current configuration with data coming from the running system, which is done by the Monitor component; and (iii) when the Analyzer decides for a better configuration (e.g., with a higher balance function), in which case the Executor performs the necessary actions on the running Cloud systems. Therefore, the causal link with the running system is defined by two different APIs, which allow reflecting both the changes performed by the generic AM on the actual Cloud systems and the changes that occur on the system at runtime back to the generic AM. To that effect, we propose the implementation of an adaptor for each target running system (managed element).
From the Executor component perspective, the objective is to translate generic actions, i.e., enable/disable and link/unlink nodes and update attribute values, into concrete operations (e.g., deploy a VM on a given PM) to be invoked over actuators of the different running subsystems (e.g., Openstack, AWS, Moodle, etc.). From the Monitor point of view, the adaptors' role is to gather information from sensors deployed at the running subsystems (e.g., a PM failure, a workload variation) and translate it into the generic operations to be performed on the configuration model by the Monitor, i.e., add/remove/enable/disable a node, link/unlink nodes and update an attribute value.
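The two causal-link APIs described above could be captured by interfaces along these lines; the method names are ours, and each concrete adaptor (OpenStack, AWS, Moodle, etc.) would implement them on top of the corresponding client library.

// Sketch of the two causal-link APIs (hypothetical method names).
public final class AdaptorApiSketch {

    // Used by the Executor: generic actions translated into technology-specific operations.
    interface ExecutorAdaptor {
        void enable(String nodeId);
        void disable(String nodeId);
        void link(String sourceId, String targetId);
        void unlink(String sourceId, String targetId);
        void updateAttribute(String nodeId, String attribute, Object value);
    }

    // Used by the Monitor: changes observed on the running system, reflected back into the model.
    // Unlike the Executor-side API, it may also add and remove nodes.
    interface MonitorCallback {
        void addNode(String nodeId, String nodeType);
        void removeNode(String nodeId);
        void enableNode(String nodeId);
        void disableNode(String nodeId);
        void linkNodes(String sourceId, String targetId);
        void unlinkNodes(String sourceId, String targetId);
        void updateAttribute(String nodeId, String attribute, Object value);
    }
}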
It should be noticed that the difference between the two APIs is the possibility to add and remove nodes in the configuration model. In fact, the resulting configuration from the Analyzer does not imply the addition or removal of any node, since the constraint solver may not add/remove variables during the decision-making process, as already explained in Section 4.2. The Cloud Administrator and the Monitor, on the contrary, may modify the configuration model (that is given as input to the constraint solver) by removing and adding nodes as a reflection of both the running Cloud system (e.g., a PM that crashed) and new business requirements or agreements (e.g., a client that arrives or leaves). Notice also that the adaptors and the Monitor component are the entry points of the running subsystems and the generic AM, respectively. Thus, the adaptors and the Monitor are the entities that actually have to implement the APIs. We rely on a number of libraries (e.g., AWS Java SDK 3, Openstack4j 4) that ease the implementation of adaptors. For example, Listing 3 shows an excerpt of the implementation of the enable action for a VM node in Openstack4j: the adaptor builds a server-creation request whose availability zone encodes the target Cluster and PM (availabilityZone(targetCluster + ":" + targetPM)) and then boots the corresponding server via os.compute().servers().boot(...). For the full implementation and for more examples, please see https://gitlab.inria.fr/come4acloud/xaas.
Performance Evaluation
In this section, we present an experimental study of our generic AM implementation applied to an IaaS system. The main objective is to analyze qualitatively the impact of the AM behaviour on the system configuration when a given series of events occurs, and notably the time required by the constraint solver to take decisions. Note that the presented simulation focuses on the performance of the controller. Additionally, we also experimented with the same scenario on a smaller system but in a real OpenStack IaaS infrastructure 5. In a complementary manner, a more detailed study of the proposed model-based architecture (and notably its core generic XaaS modeling language) can be found in [START_REF] Bruneliere | A Model-based Architecture for Autonomic and Heterogeneous Cloud Systems[END_REF], where we show the implementation of another use case, this time for a SaaS application 6.
The IaaS system
We relied on the same IaaS system whose models are presented in Listings 1 and 2 to evaluate our approach. In the following, we provide more details. For the sake of simplicity, we consider that the IaaS provides a unique service to its customers: compute resources in the form of VMs. Hence, there exists a node type VMService extending the ServiceClient type (cf. Section 3.3). A customer can specify the required number of CPUs and RAM as attributes of a VMService node. The prices for a unit of CPU/RAM are defined inside the SLA component, that is, inside the SLAVM node type, which extends the SLAClient type of the service-oriented topology model. Internally, the system has several InternalComponents: VMs (represented by the node type VM) are hosted on PMs (represented by the node type PM), which are themselves grouped into Clusters (represented by the node type Cluster). Each enabled VM has exactly one successor node of type PM and exactly one predecessor of type VMService. This is represented by a relationship type stating that the predecessors of a PM are the VMs currently hosted by it. The main constraint of a VM node is to have the number of CPUs/RAM equal to the attributes specified in its predecessor VMService node. The main constraint of a PM is to keep the sum of resources allocated to VMs less than or equal to its capacity. A PM has a mandatory link to its Cluster, which is also represented by a relationship in the configuration model. A Cluster needs electrical power in order to operate and has an attribute representing the current power consumption of all hosted PMs. The PowerService type extends the ServiceProvider type of the service-oriented topology model, and it corresponds to an electricity meter. A PowerService node has an attribute that represents the maximum capacity in terms of kilowatt-hours, which bounds the sum of the current consumption of all Cluster nodes linked to this node (PowerService). Finally, the SLAPower type extends the SLAProvider type and represents a signed SLA with an energy provider by defining the price of a kilowatt-hour.
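For illustration, the core PM capacity rule can be written for the Choco solver roughly as follows; the sizes, capacities and variable names are arbitrary examples, not the actual generated model.

import org.chocosolver.solver.Model;
import org.chocosolver.solver.variables.BoolVar;
import org.chocosolver.solver.variables.IntVar;

// Simplified sketch of the VM-to-PM placement and PM capacity constraint.
public final class PlacementSketch {
    public static void main(String[] args) {
        int nbVms = 3, nbPms = 2;
        int[] vmCpu = {2, 4, 8};          // CPUs requested by each VMService
        int pmCpuCapacity = 32;           // CPU capacity of each PM

        Model model = new Model("vm-placement");
        // hosted[v][p] is true iff VM v is linked to PM p
        BoolVar[][] hosted = model.boolVarMatrix("hosted", nbVms, nbPms);

        // each enabled VM has exactly one successor of type PM
        for (int v = 0; v < nbVms; v++)
            model.sum(hosted[v], "=", 1).post();

        // on each PM, the allocated CPUs may not exceed its capacity
        // (the capacity bound is enforced through the domain of cpuUsed)
        for (int p = 0; p < nbPms; p++) {
            BoolVar[] onP = new BoolVar[nbVms];
            for (int v = 0; v < nbVms; v++) onP[v] = hosted[v][p];
            IntVar used = model.intVar("cpuUsed#" + p, 0, pmCpuCapacity);
            model.scalar(onP, vmCpu, "=", used).post();
        }

        if (model.getSolver().solve())
            for (int v = 0; v < nbVms; v++)
                for (int p = 0; p < nbPms; p++)
                    if (hosted[v][p].getValue() == 1)
                        System.out.println("VM " + v + " -> PM " + p);
    }
}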
Experimental Testbed
We implemented the Analyzer component of the AM by using the Java-based constraint solver Choco [START_REF] Prud'homme | Choco Documentation, TASC, LS2N CNRS UMR 6241[END_REF]. For scalability purposes, the experimentation simulates the interaction with the real world, i.e., the role of the components Monitor and Executor depicted in Figure 1, although we have experimented with the same scenario on a smaller system (fewer PMs and VMs) in a real OpenStack infrastructure 7. The simulation has been conducted on a single-processor machine with an Intel Core i5-6200U CPU (2.30GHz) and 6GB of RAM running Linux 4.4.
The system is modeled following the topology defined in Listing 1, i.e., compute services are offered to clients by means of Virtual Machine (VM) instances. VMs are hosted by PMs, which in turn are grouped into Clusters of machines. As Clusters require electricity in order to operate, they can be linked to different power providers, if necessary (cf. Section 6.1). The snapshot of the running IaaS configuration model (the initial one as well as the ones associated with each instant t ∈ T) is described and stored with our configuration DSL (cf. Listing 2). At each simulated event, the file is modified to apply the consequences of the event on the configuration. After each modification due to an event, we activated the AM to propagate the modification on the whole system and to ensure that the configuration meets all the imposed constraints.
The simulated IaaS system is composed of 3 clusters of homogeneous PMs. Each PM has 32 processors and 64 GB of RAM memory. The system has two power providers: a classical power provider, that is, a brown energy provider, and a green energy provider.
The current consumption of a turned-on PM is the sum of its idle power consumption (10 power units) when no guest VM is hosted and an additional consumption due to allocated resources (1 power unit per CPU and per RAM unit allocated). In order to avoid degrading the analysis performance by considering too many physical resources compared to the number of consumed virtual resources, we limit the number of unused PM nodes in the configuration model while ensuring a sufficient amount of available physical resources to host a potential new VM.
In the experiments, we considered five types of event:
AddVMService (a): a new customer arrives and requests x VMService nodes (x ranges from 1 to 5). The required configuration of this request (i.e., the number of CPUs and RAM units and the number of VMService nodes) is chosen independently, with a random uniform law. The number of required CPUs ranges from 1 to 8, and the number of required RAM units ranges from 1 to 16 GB. The direct consequence of such an event is the addition of one SLAVM, x VMService nodes and x VM nodes in the configuration model file. The aim of the AM after this event is to enable the x new VMs and to find the best PM(s) to host them.

leavingClient (l): a customer decides to cancel the SLA definitively. Consequently, the corresponding SLAVM, VMService and VM nodes are removed from the configuration. After such an event, the aim of the AM is potentially to shut down the concerned PM or to migrate other VMs to this PM in order to minimize the revenue loss.

GreenAvailable (ga): the Green Power Provider significantly decreases the price of the power unit to a value below the price of the Brown Energy Provider. The consequence of that event is the modification of the price attribute of the green SLAPower node. The expected behaviour of the AM is to enable the green SLAPower node in order to consume a cheaper service.

GreenUnavailable (gu): contrary to the GreenAvailable event, the Green Power Provider resets its price to the initial value. Consequently, the Brown Energy Provider becomes again the most interesting provider. The expected behaviour of the AM is to disable the green SLAPower node to the benefit of the classical power provider.

CrashOnePM (c): a PM crashes. The consequence on the configuration is the suppression of the corresponding PM node in the configuration model. The goal of the AM is to potentially turn on a new PM and to migrate the VMs which were hosted by the crashed PM.
In our experiments, we consider the following scenario over both analysis strategies, without neighborhood and with neighborhood, described in Section 4.2.2. Initially, in the configuration at t_0, no VM is requested and the system is turned off. At the beginning, the unit price of the green power provider is twice as high as the price of the other provider (8 against 4). The unit selling price is 50 for a CPU and 10 for a RAM unit. Our scenario consists in repeating the following sequence of events: 5 AddVMService, 1 leavingClient, 1 GreenAvailable, 1 CrashOnePM, 5 AddVMService, 1 leavingClient, 1 GreenUnavailable and 1 CrashOnePM. This allows showing the behaviour of the AM for each event with different system sizes. We show the impact of this scenario on the following metrics: the amount of power consumption for each provider (Figures 8a and 8c); the amount of VMService nodes and the size of the system in terms of number of nodes (Figure 8f); the configuration balance (function H()) (Figure 8e); the latency of the Choco solver to take a decision (Figure 8g); the number of PMs being turned on (Figure 8d); and the size of the generated plan, i.e., the number of required actions to produce the next satisfiable configuration (Figure 8b).
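The scenario can be driven by a simple loop that alternates event injection, AM activation and metric recording, in the spirit of the sketch below; the enum values mirror the five event types above, while the interface and its methods are hypothetical.

import java.util.List;

// Sketch of a simulation driver for the scenario (hypothetical interface).
public final class ScenarioSketch {

    enum Event { ADD_VM_SERVICE, LEAVING_CLIENT, GREEN_AVAILABLE, GREEN_UNAVAILABLE, CRASH_ONE_PM }

    interface ConfigurationFile {
        void apply(Event e);            // edit the configuration model as the event's consequence
        void runAutonomicManager();     // Analyzer + Planner propagate the change and restore the constraints
        void recordMetrics();           // balance H(), #PMs on, power per provider, solving time, plan size
    }

    static void run(ConfigurationFile conf, int repetitions) {
        List<Event> sequence = List.of(
                Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE,
                Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE,
                Event.LEAVING_CLIENT, Event.GREEN_AVAILABLE, Event.CRASH_ONE_PM,
                Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE,
                Event.ADD_VM_SERVICE, Event.ADD_VM_SERVICE,
                Event.LEAVING_CLIENT, Event.GREEN_UNAVAILABLE, Event.CRASH_ONE_PM);
        for (int i = 0; i < repetitions; i++) {   // the sequence is repeated to grow the system
            for (Event e : sequence) {
                conf.apply(e);
                conf.runAutonomicManager();
                conf.recordMetrics();
            }
        }
    }
}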
The x-axis in Figure 8 represents the logical time of the experiment in terms of configuration transitions. Each colored area in this figure includes two configuration transitions: the event immediately followed by the control action. The color differs according to the type of the fired event. For the sake of readability, the x-axis does not begin at the initiation instant but when the number of nodes reaches 573, and events are tagged with the initials of the event's name.
Analysis and Discussion
First of all, we can see that both strategies globally have the same behaviour whatever the received event. Indeed, in both cases a power provider is deactivated when its unit price becomes higher than that of the second one (Figures 8a and 8c). This shows that the AM is capable of adapting the choice of provided service according to current prices and thus benefits from sales promotions offered by its providers. When the amount of requests for VMService increases (Figure 8f) on a regular basis, the system power consumption increases (Figures 8a and 8c) sufficiently slowly so that the system balance also increases (Figure 8e). This can be explained by the ability of the AM to decide to turn on a new PM in a just-in-time way, that is, the AM tries to allocate newly arriving VMs on already enabled PMs. Conversely, when a client leaves the system, as expected, the number of VMService nodes decreases, but we can see that the number of PMs remains constant during this event, leading to a more pronounced decrease of the system balance. Consequently, we can deduce that the AM has decided in this case to privilege the reconfiguration cost criterion at the expense of the system balance criterion. Indeed, we can notice in Figure 8b that the number of planning actions remains limited for the event l.
However, we can observe some differences in the values between both strategies. The main difference is in the decision to turn on a PM in the events AddVMService and CrashOnePM. In the AddVMService event, the neighborhood strategy favors the start-up of new PMs, contrary to the other strategy, which favors the use of PMs already turned on. Consequently, the neighborhood strategy increases the power consumption, leading to a less interesting balance. This can be explained by the fact that the neighborhood strategy avoids modifying existing nodes, which limits its capacity for action. Indeed, this is confirmed in Figure 8b, where the curve of the neighborhood strategy is mostly lower than the other one. However, the solving time is worse (Figure 8g) because several iterations are needed to reach the minimal variable area (VA) required to find a solution.
Conversely, in the CrashOnePM event, we note that the number of PMs mostly stays the same with the neighborhood strategy, while the other one systematically starts up a new PM. This illustrates the fact that, in case of node disappearance, the neighborhood strategy tries to use the existing nodes as much as possible by modifying them as little as possible. Without neighborhood, the controller is able to modify directly all variables of the model. As a result, it is more difficult to find a satisfiable configuration, which comes at the expense of a long solving time (Figure 8g). Finally, in order to keep an acceptable solving time while limiting the number of planning actions and maximizing the balance, it is interesting to choose the strategy according to the event. Indeed, the neighborhood strategy is efficient to repair node disappearances, but the system balance may be lower in case of a new client arrival. Although our AM is generic, we could observe that with the appropriate strategy, it can take decisions in less than 10 seconds for several hundred nodes. In terms of a CSP problem, the considered system size corresponds to an order of magnitude of 1 million variables and 300000 constraints. Moreover, the taken decisions systematically increase the balance in case of favorable events (new service request from a client, price drop from a provider, etc.) and limit its degradation in case of adverse events (component crash, etc.).
Related Work
In order to discuss the proposed solution, we identified common characteristics we believe important for autonomic Cloud (modeling) solutions. Table 1 compares our approach with other existing work regarding different criteria: 1) Genericity -The solution can support all Cloud system layers (e.g., XaaS), or is specific to some particular and well-identified layers; 2) UI/Language -It can provide a proper user interface and/or a modeling language intended to the different Cloud actors; 3) Interoperability -It can interoperate with other existing/external solutions, and/or is compatible with a Cloud standard (e.g., TOSCA); 4) Runtime support -It can deal with runtime aspects of Cloud systems, e.g., provide support for autonomic loops and/or synchronization.
In the industrial Cloud community, there are many existing multi-cloud APIs/libraries 8 9 and DevOps tools 10 11. APIs enable IaaS provider abstraction, therefore easing the control of many different Cloud services, and generally focus on the IaaS client side. DevOps tools, in turn, provide scripting languages and execution platforms for configuration management. They rather provide support for the automation of the configuration, deployment and installation of Cloud systems in a programmatical/imperative manner.
The Cloudify12 platform overcomes some of these limitations. It relies on a variant of the TOSCA standard [START_REF]Topology and Orchestration Specification for Cloud Applications (TOSCA)[END_REF] to facilitate the definition of Cloud system topologies and configurations, as well as to automate their deployment and monitoring. In the same vein, Apache Brooklyn13 leverages Autonomic Computing [START_REF] Kephart | The Vision of Autonomic Computing[END_REF] to provide support for runtime management (via sensors/actuators allowing for dynamically monitoring and changing the application when needed). However, both Cloudify and Brooklyn focus on the application/client layer and are not easily applicable to all XaaS layers. Moreover, while Brooklyn is very handy for particular types of adaptation (e.g., imperative event-condition-action ones), it may be limited to handle adaptation within larger architectures (i.e., considering many components/services and more complex constraints). Our approach, instead, follows a declarative and holistic approach which is more appropriated for this kind of context.
Recently, OCCI (Open Cloud Computing Interface) has become one of the first standards in Cloud. The kernel of OCCI is a generic resource-oriented metamodel [START_REF] Nyren | Open cloud computing interface -core, specification document[END_REF], which lacks a rigorous and formal specification as well as the concept of (re)configuration. To tackle these issues, the authors of [START_REF] Merle | A precise metamodel for open cloud computing interface[END_REF] specify the OCCI Core Model with the Eclipse Modeling Framework (EMF) 14, whereas its static semantics is rigorously defined with the Object Constraint Language (OCL) 15. An EMF-based OCCI model can ease the description of a XaaS, which is enriched with OCL constraints and thus verified by many MDE tools. The approach, however, does not cope with the autonomic decisions that have to be made at runtime in order to meet those OCL invariants. The European project 4CaaSt proposed the Blueprint Templates abstract language [START_REF] Nguyen | Blueprint Template Support for Engineering Cloud-based Services[END_REF] to describe Cloud services over multiple PaaS/IaaS providers. In the same direction, the Cloud Application Modeling Language [START_REF] Bergmayr | UML-based Cloud Application Modeling with Libraries, Profiles, and Templates[END_REF] studied in the ARTIST EU project [START_REF] Menychtas | Software Modernization and Cloudification Using the ARTIST Migration Methodology and Framework[END_REF] suggests using profiled UML to model (and later deploy) Cloud applications regardless of their underlying infrastructure. Similarly, the mOSAIC EU project proposes an open-source and Cloud vendor-agnostic platform [START_REF] Sandru | Building an Open-Source Platform-as-a-Service with Intelligent Management of Multiple Cloud Resources[END_REF]. Finally, StratusML [START_REF] Hamdaqa | A Layered Cloud Modeling Framework[END_REF] provides another language for Cloud applications dealing with different layers to address the various Cloud stakeholders' concerns. All these approaches focus on how to enable the deployment of applications (SaaS or PaaS) on different IaaS providers. Thus they are quite layer-specific and do not provide support for autonomic adaptation.
The MODAClouds EU project [START_REF] Ardagna | MODAClouds: A Model-driven Approach for the Design and Execution of Applications on Multiple Clouds[END_REF] introduced some support for runtime management of multiple Clouds, notably by proposing CloudML as part of the Cloud Modeling Framework (CloudMF) [START_REF] Ferry | Towards Model-Driven Provisioning, Deployment, Monitoring, and Adaptation of Multi-cloud Systems[END_REF][START_REF] Ferry | CloudMF: Applying MDE to Tame the Complexity of Managing Multi-cloud Applications[END_REF]. As in our approach, CloudMF provides a generic provider-agnostic model that can be used to describe any Cloud provider, as well as mechanisms for runtime management relying on Models@Runtime techniques [START_REF] Blair | Models@ run.time[END_REF]. In the PaaSage EU project [START_REF] Rossini | Cloud Application Modelling and Execution Language (CAMEL) and the PaaSage Workflow[END_REF], CAMEL [START_REF] Achilleos | Business-Oriented Evaluation of the PaaSage Platform[END_REF] extended CloudML and integrated other languages such as the Scalability Rule Language (SRL) [START_REF] Domaschka | Towards a Generic Language for Scalability Rules[END_REF]. However, contrary to our generic approach, in these cases the adaptation decisions are delegated to third-party tools and tailored to specific problems/constraints [START_REF] Silva | Model-Driven Design of Cloud Applications with Quality-of-Service Guarantees: The MODAClouds Approach[END_REF]. The framework Saloon [START_REF] Quinton | SALOON: a Platform for Selecting and Configuring Cloud Environments[END_REF], also developed in this same project, relies on feature models to provide support for automatic Cloud configuration and selection. Similarly, [START_REF] Dastjerdi | An effective architecture for automated appliance management system applying ontology-based cloud discovery[END_REF] proposes the use of ontologies to express variability in Cloud systems. Finally, Mastelic et al. [START_REF] Mastelic | Towards uniform management of cloud services by applying model-driven development[END_REF] propose a unified model intended to facilitate the deployment and monitoring of XaaS systems. These approaches fill the gap between application requirements and cloud provider configurations but, unlike our approach, they focus on the initial configuration (at deploy-time), not on the runtime (re)configuration.
Recently, the ARCADIA EU project proposed a framework to cope with highly adaptable distributed applications designed as micro-services [START_REF] Gouvas | A Context Model and Policies Management Framework for Reconfigurable-by-design Distributed Applications[END_REF]. While at a very early stage and with a different scope than ours, it may be interesting to follow this work in the future. Among other existing approaches, we can cite the Descartes modeling language [START_REF] Kounev | A Model-Based Approach to Designing Self-Aware IT Systems and Infrastructures[END_REF], which is based on high-level metamodels to describe resources, applications, adaptation policies, etc. On top of Descartes, a generic control loop is proposed to fulfill some requirements for quality-of-service and resource management. Quite similarly, Pop et al. [START_REF] Pop | Support Services for Applications Execution in Multiclouds Environments[END_REF] propose an approach to support the deployment and autonomic management at runtime on multiple IaaS. However, both approaches target only Cloud systems structured as a SaaS deployed on an IaaS, whereas our approach allows modeling Cloud systems at any layer.
In [START_REF] Mohamed | An autonomic approach to manage elasticity of business processes in the cloud[END_REF], the authors extend OCCI in order to support autonomic management for Cloud resources, describing the needed elements to make a given Cloud resource autonomic regardless of the service level. This extension allows autonomic provisioning of Cloud resources, driven by elasticity strategies based on imperative Event-Condition-Action rules. The adaptation policies are, however, focused on the business applications, while our declarative approach, thanks to a constraint solver, is capable of controlling any target XaaS system so as to keep it close to a consistent and/or optimal configuration.
In [START_REF] García-Galán | User-centric Adaptation of Multi-tenant Services: Preference-based Analysis for Service Reconfiguration[END_REF], feature models are used to define the configuration space (along with user preferences) and game theory is considered as a decision-making tool. This work focuses on features that are selected in a multi-tenant context, whereas our approach targets the automated computation of SLA-compliant configurations in a cross-layer manner.
Several approaches on SLA-based resource provisioning based on constraint solvers have been proposed. Like in our approach, the authors of [START_REF] Dougherty | Model-driven auto-scaling of green cloud computing infrastructure[END_REF] rely on MDE techniques and constraint programming to find consistent configurations of VM placement in order to optimize energy consumption, but no modeling or high-level language support is provided. Moreover, the focus remains on the IaaS infrastructure, so there is no cross-layer support. In [START_REF] Ghanbari | Optimal autoscaling in a iaas cloud[END_REF], the authors propose a new approach to autoscaling that utilizes a stochastic model predictive control technique to facilitate resource allocations and releases meeting the SLO of the application provider while minimizing their cost. They also use a convex optimization solver for cost functions, but no detail is provided about its implementation. Besides, the approach addresses only the relationship between the SaaS and IaaS layers, while in our approach any XaaS service can be defined.
To the best of our knowledge, there is currently no work in the literature that features at the same time genericity w.r.t. the Cloud layers, interoperability with standards (such as TOSCA), high-level modeling language support and some autonomic runtime management capabilities. The proposed model-based architecture described in this paper is an initial step in this direction.
Conclusion
The CoMe4ACloud architecture is a generic solution for the autonomous runtime management of heterogeneous Cloud systems. It relies on a generic model which unifies the main characteristics and objectives of Cloud services. This model enabled us to derive a unique and generic Autonomic Manager (AM) capable of managing any Cloud service, regardless of the layer. The generic AM is based on a constraint solver which reasons on very abstract concepts (e.g., nodes, relations, constraints) and tries to find the best balance between costs and revenues while meeting the constraints regarding the established Service Level Agreements and the service itself. From the Cloud administrators' and experts' point of view, this is an interesting contribution because it frees them from the difficult task of conceiving and implementing purpose-specific AMs. Indeed, this task can now be simplified by expressing the specific features of the XaaS Cloud system with a domain-specific language based on the TOSCA standard. Our approach was evaluated experimentally, with a qualitative study. Results have shown that, although generic, our AM is able to find satisfiable configurations within reasonable solving times by taking into account the established SLA and by limiting the reconfiguration overhead. We also showed how we managed the integration with real Cloud systems such as OpenStack, while remaining generic.
For future work, we intend to apply CoMe4ACloud to other contexts somehow related to Cloud Computing. For instance, we plan to experiment with our approach in the domain of the Internet of Things or Cloud-based Internet of Things, which may bring challenges regarding scalability in terms of model size. We also plan to investigate how our approach could be used to address self-protection, that is, to be able to deal with security aspects in an autonomic manner. Last but not least, we believe that the constraint solver may be insufficient to make decisions in a durable way, i.e., by considering the past history or even possible future states of the managed element. A possible alternative to overcome this limitation is to combine our constraint-programming-based decision-making tool with control-theoretical approaches for computing systems.
Figure 1: Overview of the model-based architecture in CoMe4ACloud.
Figure 2: Overview of the Topology metamodel - Design time.
Figure 3: Overview of the Configuration metamodel - Runtime.
Figure 4: Example of configuration model using base Service-oriented node types (illustrative representation).
Figure 5: Examples of configuration transition in the set of configurations.
Figure 6: Examples of configuration transitions.
Listing 1: Topology excerpt.

Topology: IaaS
node_types:
  InternalComponent:
    ...
  PM:
    derived_from: InternalComponent
    properties:
      impactOfEnabling: 40
      impactOfDisabling: 30
      ...
  VM:
    derived_from: InternalComponent
    properties:
      ...
  Cluster:
    derived_from: InternalComponent
    properties:
      constant ClusterConsOneCPU:
        type: integer
      constant ClusterConsOneRAM:
        type: integer
      constant ClusterConsMinOnePM:
        type: integer
      variable ClusterNbCPUActive:
        type: integer
        equal: Sum(Pred, PM.PmNbCPUAllocated)
      variable ClusterCurConsumption:
        type: integer
        equal: ClusterConsMinOnePM * NbLink(Pred) + ClusterNbCPUActive * ClusterConsOneCPU + ClusterConsOneRAM * Sum(Pred, PM.PmSizeRAMAllocated)
  Power:
    properties:
      variable PowerCurConsumption:
        type: integer
        equal: Sum(Pred, Cluster.ClusterCurConsumption)
    constraints:
      PowerCurConsumption
        less_or_equal: PowerCapacity
  ...
relationship_types:
  VM_To_PM:
    valid_source_types: VM
    valid_target_types: PM
  PM_To_Cluster:
    valid_source_types: PM
    valid_target_types: Cluster
  Cluster_To_Power:
    valid_source_types: Cluster
    valid_target_types: Power
  ...

Listing 2: Configuration excerpt (attribute values of a Cluster node, e.g., ClusterCurConsumption: 0, ClusterNbCPUActive: 0, ClusterConsOneCPU: 1, ClusterConsOneRAM: 0).
Figure 7: Synchronization with real running system.
Listing 3: Excerpt of an IaaS adaptor with OpenStack.
Table 1: Comparison of Cloud (modeling) solutions - ✓ for full support, ∼ for partial support. Columns: Genericity, UI/Language, Interoperability, Runtime support. Rows: APIs/DevOps, Cloudify, Brooklyn, [32], [33], [34], [35], [36], [37], [38, 39], [40], [41], [42], [43][44][45], [46], [47], [48], [49], [50], [51], CoMe4ACloud.
1. https://come4acloud.github.io/
2. Since the search for an optimal configuration (a configuration where the function H() has the maximum possible value) may be too costly in terms of execution time, it is possible to assume that the execution time of the control function is limited by a bound set by the administrator.
3. https://aws.amazon.com/fr/sdk-for-java/
4. http://www.openstack4j.com
5. CoMe4ACloud Openstack Demo: http://hyperurl.co/come4acloud_runtime
6. CoMe4ACloud Moodle Demo: http://hyperurl.co/come4acloud
7. CoMe4ACloud Openstack Demo: http://hyperurl.co/come4acloud_runtime
8. Apache jclouds: https://jclouds.apache.org
9. Deltacloud: https://deltacloud.apache.org
10. Puppet: https://puppet.com
11. Chef: https://www.chef.io/chef/
12. http://getcloudify.org
13. https://brooklyn.apache.org
14. https://eclipse.org/modeling/emf
15. http://www.omg.org/spec/OCL
01762840 | en | [ "phys.cond.cm-scm" ] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01762840/file/TardaniLangmuir2018_postprint.pdf | Franco Tardani
Wilfrid Neri
Cecile Zakri
Hamid Kellay
Annie Colin
Philippe Poulin
Shear Rheology Control of Wrinkles and Patterns in Graphene Oxide Films
Drying graphene oxide (GO) films are subject to extensive wrinkling, which largely affects their final properties. Wrinkles were shown to be suitable in biotechnological applications; however, they negatively affect the electronic properties of the films. Here, we report on wrinkle tuning and patterning of GO films under stress-controlled conditions during drying. GO flakes assemble at an air-solvent interface; the assembly forms a skin at the surface and may bend due to volume shrinkage while drying. We applied a modification of evaporative lithography to spatially define the evaporative stress field. Wrinkle alignment is achieved over cm² areas. The wavelength (i.e., wrinkle spacing) is controlled in the μm range by the film thickness and GO concentration. Furthermore, we propose the use of nanoparticles to control capillary forces to suppress wrinkling. An example of a controlled pattern is given to elucidate the potential of the technique. The results are discussed in terms of classical elasticity theory. Wrinkling is the result of bending of the wet solid skin layer assembled on a highly elastic GO dispersion. Wavelength selection is the result of energy minimization between the bending of the skin and the elastic deformation of the GO supporting dispersion. The results strongly suggest the possibility to tune wrinkles and patterns by simple physicochemical routes.
INTRODUCTION
Graphene oxide (GO) is broadly used for the manufacture of graphene-based membranes and films. These are currently investigated for a vast number of technological (electronics, [START_REF] Eda | Chemically Derived Graphene Oxide: Towards Large Area Thin Film Electronics and Optoelectronics[END_REF][START_REF] Yan | Graphene based flexible and stretchable thin film transistors[END_REF][START_REF] Chee | Flexible Graphene Based Supercapacitors: A Review[END_REF] optics, optoelectronics, [START_REF] Eda | Chemically Derived Graphene Oxide: Towards Large Area Thin Film Electronics and Optoelectronics[END_REF][START_REF] Chang | Graphene Based Nanomaterials: Synthesis, Properties, and Optical and Optoelectronic Applications[END_REF] filtration systems [START_REF] Liu | Graphene based membranes[END_REF]) and biotechnological (biomedical devices and tissue engineering [START_REF] Kumar | Comprehensive Review on the Use of Graphene Based Substrates for Regenerative Medicine and Biomedical Devices[END_REF]) applications.
The oxygen-containing groups make GO soluble in water and thus easy to handle under environmentally friendly conditions. Graphene-like properties are quickly restored after chemical [START_REF] Chua | Chemical reduction of graphene oxide: a synthetic chemistry viewpoint[END_REF] or thermal [START_REF] Pei | The reduction of graphene oxide[END_REF] reduction. Moreover, the large shape anisotropy of the sheets assures liquid crystalline (LC) behavior even under dilute conditions. [START_REF] Aboutalebi | Spontaneous Formation of Liquid Crystals in Ultralarge Graphene Oxide Dispersions[END_REF][START_REF] Xu | Aqueous Liquid Crystals of Graphene Oxide[END_REF][START_REF] Kim | Graphene Oxide Liquid Crystals[END_REF] Under shear, GO sheets tend to align and flatten due to the suppression of thermal undulations. [START_REF] Poulin | Superflexibility of graphene oxide[END_REF] Exotic assemblies can be expected if instabilities set in during deposition, by analogy to lamellar phases. [START_REF] Diat | Effect of shear on a lyotropic lamellar phase[END_REF] Some effort was recently devoted to exploiting the LC behavior to tune the film structure. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF]
The drying of a colloidal dispersion is a quite complex process. Changes in concentration and viscosity and the setting up of instabilities are only some of the processes the film undergoes during drying. [START_REF] Routh | Drying of thin colloidal films[END_REF] The coffee ring [START_REF] Deegan | Capillary flow as the cause of ring stains from dried liquid drops[END_REF] effect and cracking [START_REF] Singh | Cracking in Drying Colloidal Films[END_REF] may affect the macroscopic homogeneity of the manufactured film. Dry graphene oxide deposits are commonly characterized by extensive wrinkling. [START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF][START_REF] Ahmad | Water assisted stable dispersal of graphene oxide in non dispersible solvents and skin formation on the GO dispersion[END_REF] The phenomenon is associated with the application of tensile/compressive stresses in 2D systems. In the case of drying GO dispersions, the stress can arise from water evaporation. During evaporation, GO flakes progressively assemble in a stack fashion. The presence of hydrogen bonds increases interflake friction, and sliding is impeded. This no slide condition results in buckling and folding of the microscopic flake assembly [START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF][START_REF] Ahmad | Water assisted stable dispersal of graphene oxide in non dispersible solvents and skin formation on the GO dispersion[END_REF][START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] as a consequence of volume shrinkage.
The presence of ripples and wrinkles may affect the final properties of the film. [START_REF] Cote | Tunable assembly of graphene oxide surfactant sheets: wrinkles, overlaps and impacts on thin film properties[END_REF] In principle, this effect can be desirable or not depending on the final application. Wrinkling can be sought after in biological applications. Indeed, wrinkled GO films are suitable for anisotropic cell growth [START_REF] Wang | wavelength tunable graphene based surface topographies for directing cell alignment and morphology[END_REF] and antibacterial activity. [START_REF] Zou | Wrinkled Surface Mediated Antibacterial Activity of Graphene Oxide Nanosheets[END_REF] Wrinkling can also improve film flexibility. A wrinkled layer can be stretched or bent with a reduced propensity for cracking. In addition, the excess surface area of wrinkles can be useful to enhance the energy storage capabilities of supercapacitors. However, wrinkling should be suppressed in the preparation of other electronic devices. For instance, wrinkled films showed a very large scattering of measured sheet resistance values. [START_REF] Cote | Tunable assembly of graphene oxide surfactant sheets: wrinkles, overlaps and impacts on thin film properties[END_REF] Wrinkles are also undesirable for optical applications in which the film roughness and inhomogeneity yield light scattering. It is therefore critical to tune the formation of wrinkles and to develop a better understanding to control their structure or even to suppress them.
Nowadays, it is possible to induce controlled wrinkle formation through the application of known mechanical stresses to dried films. For instance, mono- or few-layer graphene films were transferred onto prestrained stretchable substrates. [START_REF] Zang | Multifunctionality and control of the crumpling and unfolding of large area graphene[END_REF][START_REF] Thomas | Controlled Crumpling of Graphene Oxide Films for Tunable Optical Transmittance[END_REF] As the strain was released, the films started wrinkling. In another example, the strain appeared spontaneously when ultrathin graphite films were suspended on predefined trenches. [START_REF] Bao | Controlled ripple texturing of suspended graphene and ultrathin graphite membranes[END_REF] In this latter case, a mechanical stress was also thermally induced by the mismatch between the thermal expansion coefficient of the graphene layer and that of the substrate. Concerning GO liquid crystalline systems, only a few research efforts can be found in the literature. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF] In the first case, [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF][START_REF] Hong | Water front recession and the formation of various types of wrinkles in dried graphene oxide droplets[END_REF] the authors related the shear banding of the LC flow alignment to the wrinkles that appear after drying. In a more recent study on wrinkling, thick films were produced at high temperature (>50 °C) and low blade-coating velocities. [START_REF] Hong | Controlling wrinkles and assembly patterns in dried graphene oxide films using lyotropic graphene oxide liquid crystals[END_REF] Different types of wrinkles were observed as a function of the drying conditions.
Here, we show by using suspensions with different compositions that the rheological properties of the suspensions are key features in controlling the wrinkles and patterns formed upon drying.
We show in particular how the wavelength of the wrinkles varies with the composition of the GO solution, without any obvious link with pre existing ordering in the GO liquid crystal films and without using prestretched substrates. A correlation with mechanical and shear rheological properties of the GO dispersion is discussed. We demonstrate the possibility to pattern wrinkles through the development of an evaporative lithography method. [START_REF] Harris | Patterning Colloidal Films via Evaporative Lithography[END_REF] The latter is shown to be negative, by contrast to other common colloid films, with the accumulation of particles in regions with lower evaporation rates. The distinctive negative evaporative lithography is also driven by the rheological properties of the GO dispersions. Finally, we show how the inclusion of spherical nanoparticles can be used to reduce, and even completely suppress, the formation of wrinkles, therefore providing a method to make perfectly flat GO films from drying solutions. When used alone, the spherical nanoparticles typically form films which crack upon drying. This mechanism which is opposite to buckling and wrinkling results from the tension of capillary bridges between the particles. Here this mechanism is used to balance the spontaneous tendency of GO films to wrinkle.
The present results therefore provide a comprehensive basis to better control and tune the formation and structure of wrinkles in GO based films.
EXPERIMENTAL SECTION
Materials. Commercial graphene oxide solutions in water from Graphenea were used. The graphene oxide concentration is 4.0 mg/ mL. The flake lateral size is a few micrometers. Ludox HS 40 silica nanoparticles were used as additives. Dispersions are provided by Aldrich with a batch concentration of 40 wt % in water. The average nanoparticle diameter is 12 nm.
Sample Preparation. GO concentrated dispersions were obtained with a two step centrifugation process. [START_REF] Poulin | Superflexibility of graphene oxide[END_REF] The first step was a mild centrifugation (1400g, 20 min) used to remove possible aggregates. The collected dispersion was centrifuged for 45 min at 50 000g. This second step allowed the separation of a concentrated LC pellet of graphene oxide in water. The concentration, determined through the dry extract, was (4.3 ± 0.3) wt %. All the other concentrations were obtained by dilution in water. Care was taken to ensure homogeneous mixing after water addition. Usually, diluted samples were vortex mixed for at least 30 s.
Nanoparticles and GO dispersions were obtained by mixing the two dispersions in a volume ratio of 1:10. After being mixed, the hybrid systems were bath sonicated for 15 min.
Rheology. Rheological measurements were performed with an AR2000 stress controlled rheometer from TA Instruments. All of the samples were analyzed with 40 mm cone and plate geometry at 25.0 °C. Evaporation was avoided by the use of a trap to keep the humidity rate constant during the measurements.
Film Preparation. GO films were prepared with a doctor blade combined with rod coating technology. A drop of dispersion was put on a glass substrate, and coating proceeded at a velocity, v, of 1 in./s (2.5 cm/s). Different velocities did not show any particular effect, so the slowest available was chosen. The average film size was 2.5 × 1.5 cm 2 . To avoid geometrical issues, a constant amount of material was used to obtain films with the same surface and shape. [START_REF] Kassuga | The effect of shear and confinement on the buckling of particle laden interfaces[END_REF] Care was taken to avoid substrate inclination by fixing it with a tape and checking for planarity with a bubble level. The films were produced under controlled conditions of temperature and relative humidity of 26.0 °C and 35%, respectively. The films were not removed until they were completely dry.
Dynamic Evaporative Lithography. Film drying was performed in a confined environment. Soon after coating, a mask was placed above the film, at a distance of ∼300 μm. The mask size was 2.0 × 7.5 cm 2 . The mask ensured evaporation in a well controlled manner. Then the mask was moved at a velocity of 5-10 μm/s, according to the contact line recession speed. Faster or slower velocities caused only a small perturbation in wrinkle order, maintaining the wavelength and alignment the same.
Film Characterization. The films were characterized with optical and electronic microscopy. The thickness was determined through the use of an OM Contour GTK optical profilometer. Data were analyzed with the Vision 64 software under white light illumination vertical scanning interferometry mode (VSI). Films were cut at different points to assess the different regions and to get an average film thickness.
Wavelength Determination. The wrinkle spacing (i.e., wavelength, λ) was determined through a Gaussian analysis of distributions of 100 distance measurements extracted from 5 pictures per sample. One-dimensional fast Fourier transformation (FFT) of linear grayscale profiles extracted from the optical micrographs was also used to confirm the obtained results.
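As an illustration of this analysis, the following minimal Python sketch estimates a dominant wrinkle spacing from a single grayscale line profile via a 1D FFT; the profile array, pixel size, and peak-picking choice are illustrative assumptions and do not reproduce the exact processing used here.

import numpy as np

def dominant_wavelength(profile, pixel_size_um):
    """Estimate the dominant spatial period (in micrometers) of a 1D grayscale profile."""
    signal = profile - np.mean(profile)                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))                   # amplitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=pixel_size_um)    # cycles per micrometer
    peak = np.argmax(spectrum[1:]) + 1                       # skip the zero-frequency bin
    return 1.0 / freqs[peak]                                 # wavelength in micrometers

# Example with a synthetic profile: 30 um wrinkles sampled at 1 um/pixel plus noise.
x = np.arange(0, 1000.0, 1.0)
profile = np.cos(2 * np.pi * x / 30.0) + 0.2 * np.random.randn(x.size)
print(dominant_wavelength(profile, pixel_size_um=1.0))       # close to 30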
RESULTS
Film Casting. GO films were prepared from water-based dispersions in the GO concentration range of 0.6-4.3 wt %. Coating was performed in a controlled shear-rate range, γ̇ (= v/H_w) ≈ 100-500 s⁻¹, where v (2.5 cm/s) is the velocity of the moving blade and H_w is the thickness of the wet film. The investigated systems showed shear-thinning behavior over the whole range of tested shear rates in the rheology characterizations. In confined geometries, the systems form shear bands, as already observed elsewhere. [START_REF] Luo | Nematic Order Drives Macroscopic Patterns of Graphene Oxide in Drying Drops[END_REF] However, these flow-induced structures rapidly disappeared during drying.
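A quick consistency check of the quoted shear-rate range, assuming only the stated blade velocity and wet-film thicknesses of roughly 50-250 μm (the thickness values are an assumption inferred from γ̇ = v/H_w, not reported explicitly here):

# Shear rate during blade coating, gamma_dot = v / H_w.
v = 2.5e-2                      # blade velocity in m/s (2.5 cm/s)
for H_w_um in (250.0, 50.0):    # assumed wet-film thicknesses in micrometers
    H_w = H_w_um * 1e-6         # convert to meters
    print(f"H_w = {H_w_um:5.0f} um -> shear rate = {v / H_w:6.0f} 1/s")
# Prints roughly 100 1/s and 500 1/s, matching the quoted 100-500 1/s range.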
The progressive rearrangement of flakes and the setting up of concentration gradients produced wrinkled films with no controlled structures. Control over temperature and humidity conditions was not sufficient to avoid this problem. For this reason, we developed a technique to induce progressive directional evaporation. The choice was motivated by the need to control solvent flow to promote flake assembly and solid film growth along a particular direction. The technique is schematized in Figure 1. Soon after casting, the wet films were covered with masks of different heights H m . The mask-wet film distance (H m -H w ) was fixed at ∼300 μm to stop evaporation under the covered area. The end of the film was left uncovered to start evaporation. The films progressively dried as the mask was retracted at a constant velocity, v m . During the whole process, wrinkles (Figure 1c) on the dry film developed from the wavy surface of the wet part. The controlled drying allowed control over wrinkle patterns. An average alignment was obtained along the drying direction. The volume shrinkage of the film induced a compressive stress that caused elastic instabilities. As observed elsewhere, [START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] rehydration relaxed and suppressed the formed wrinkles. The wrinkles appeared birefringent under crossed polarizers.
Surface Wrinkling Characterization. As soon as the wet film started drying, wrinkles appeared on the surface at the free air-water interface (i.e., outside the mask). These surface wrinkles were compressed and folded after complete drying. This behavior is reminiscent of the behavior of polymeric systems [START_REF] Pauchard | Mechanical instability induced by complex liquid desiccation[END_REF] for which a skin layer forms and buckles. The pinning of the contact line and surface tension on the (covered) wet side clamped the skin at these two ends (i.e., the wet and the dry film boundaries) of the receding front. Then, evaporation induced a reduction in volume associated with a compressive strain producing wrinkling. This process finally allowed the alignment of the wrinkles perpendicular to the receding liquid front. The spacing of parallel wrinkles was analyzed as indicated in the Experimental Section. In Figure 2, 1D FFTs taken from grayscale profiles of the microscopic pictures are shown. For all of the tested concentrations and film thicknesses, nearly the same spacing was found for wet and dry films from a given system, even though the spacing distribution appears broader for dry films.
Concentration and Thickness Effects. The wrinkle spacing, λ, was determined as a function of GO concentration and dry film thickness, H, as shown in Figure 3. First, the solid content of the dry film was kept constant. The wavelength was measured for films obtained from dispersions of different GO concentrations. To achieve this, the film area was kept constant, and H w was changed proportionally to the GO concentration. As shown in Figure 3a, λ decreases with a power law like decay at increasing dispersion concentration. Second, films of different H (i.e., final solid content) were produced. A certain GO dispersion was deposited at different H w values (i.e., different final deposit, H). The process was repeated for three different GO concentrations. In this second case, λ was found to increase linearly with H (Figure 3b). The slope of this linear increase changes with concentration.
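A minimal fitting sketch of the two reported trends (a power-law-like decay of λ with concentration at fixed dry thickness, and a linear increase of λ with H at fixed concentration) is given below; it requires NumPy and SciPy, and the numerical arrays are placeholders standing in for values read off Figure 3, not the actual measurements.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

# Placeholder (concentration, wavelength) pairs standing in for Figure 3a.
conc = np.array([0.7, 1.0, 1.7, 2.5, 4.3])        # GO wt %
lam = np.array([120.0, 90.0, 60.0, 45.0, 30.0])   # wavelength in micrometers

def power_law(c, prefactor, exponent):
    return prefactor * c**exponent

(prefactor, exponent), _ = curve_fit(power_law, conc, lam, p0=(100.0, -1.0))
print(f"lambda ~ c^({exponent:.2f}) at fixed dry thickness")

# Placeholder (dry thickness, wavelength) pairs standing in for one curve of Figure 3b.
H = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # dry thickness in micrometers
lam_H = np.array([20.0, 35.0, 52.0, 68.0, 83.0])  # wavelength in micrometers
fit = linregress(H, lam_H)
print(f"lambda ~ {fit.slope:.1f} * H + {fit.intercept:.1f} (linear in H)")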
Thin Films. Very thin films were prepared by a modification of the blade coating approach. A flexible scraper was attached to the blade to reproduce the technique used in ref 31 (inset of Figure 3c). At very low thickness (inset of Figure 3c), films showed colors from thin film interference, and no more wrinkles were detected. The thickness boundary h between the wrinkling of thick films and no wrinkling of thin films was determined for different GO concentrations (Figure 3c). The absence of wrinkling in thin films reflects the absence of skin formation during drying. [START_REF] De Gennes | Solvent evaporation of spin cast films: ″crust″ effects[END_REF][START_REF] Bornside | Spin coating: One dimensional model[END_REF] Actually because of its very thin structure, the film dries as a whole gel. Similar behavior was reported for polymer films. Following de Gennes, [START_REF] De Gennes | Solvent evaporation of spin cast films: ″crust″ effects[END_REF] a skin forms when the polymer concentration in the top layer of an evaporating solution increases above a given value. A steady state solvent evaporative current is then established in the skin. By comparing solvent diffusion in the vapor phase with that through the skin, he determined a limiting skin thickness (τ 0 ) of around 70 nm. Therefore, for films of comparable thickness a steady state solvent evaporation is set throughout the whole film height, and no skin formation is expected. The measured h was actually in the same range for polymer systems, representing a good approximation of the skin layer (τ) of a (hypothetical) thick film (i.e., h ≈ τ).
Nanoparticle Effect. The effect of added spherical nanoparticles on film structure was investigated. First, it was checked that the nanoparticles had no effect on the GO dispersion phase behavior in the concentration range of the present study. No phase separation or destabilization was observed for weeks. Then, films were prepared at a fixed H_w of 40 μm. A map of the different film structures is reported in Figure 4a. Three different situations were observed: wrinkling (filled squares), flattening (open squares), and cracking (crosses). Under certain conditions of GO and NP concentrations, wrinkling was completely suppressed. The addition of NPs favored the casting of smoother films. The threshold concentration of NPs needed to remove wrinkling increased with GO concentration. Above a certain threshold of NP content, films underwent cracking. At the highest GO concentration used, it was not possible to reduce wrinkling without cracking the dry film.
Controlled Patterns. By using evaporative lithography, it is possible to control not only wrinkle alignment but also the formation of particular patterns. Two examples are reported in Figure 5. Films were dried under a fixed holed mask. The 1 mm diameter holes were hexagonally arranged with an average pitch of d_h ≤ 1 mm. The mask pattern was accurately reproduced on the films. Circular menisci were first formed under the open holes, where evaporation was higher. The meniscus diameter grew with time. Finally, different menisci joined under the covered part of the mask. This differential evaporation was responsible for the formation of hexagonal features with higher edges. Surprisingly, this behavior differs from conventional colloidal evaporative lithography. The present case is peculiar to a negative evaporative lithography, as the higher features were cast under the covered part, in contrast to the case of conventional nanoparticle aqueous dispersions. [START_REF] Harris | Patterning Colloidal Films via Evaporative Lithography[END_REF] The reverse situation was reported only in solvent mixtures (i.e., water-ethanol), when Marangoni effects play a role. [START_REF] Harris | Marangoni Effects on Evaporative Lithographic Patterning of Colloidal Films[END_REF] The same peculiar patterns were reproduced in pure GO as well as in GO-NP hybrid systems. However, the presence of NPs completely removed wrinkles. This was particularly evident through cross-polarizing microscopy. Hybrid films did not show any birefringence because of the absence of wrinkles. The wrinkle arrangement was the consequence of the stress field arising from uneven evaporation rates. Inside the open holes, small wrinkles were aligned perpendicularly with respect to the meniscus contact line, while higher crumpled regions appeared under the covered parts. In the hybrid systems, no small wrinkles were detected, while higher crumpled deposits were still present. The overall pattern appeared more blurred.
DISCUSSION
Wrinkling is a universal phenomenon when applying a compression/tension to elastic films. The general theory of Cerda and Mahadevan [START_REF] Cerda | Geometry and Physics of Wrinkling[END_REF] defines the wrinkle wavelength, λ, as follows
λ ≈ (B/K)^(1/4)   (1)
where B is the film bending stiffness (B = E_f τ³, with E_f the Young's modulus and τ the film thickness) and K is the stiffness of an effective elastic foundation. It is possible to address a particular case by knowing the physics of the system. The present system is composed of a rigid film lying on an elastic support made of the GO suspension in a gel-like liquid crystal state. In that case, K takes the form of an elastic bulk modulus, giving
λ ≈ τ (E_f/E_s)^(1/3)   (2)
The wavelength, λ, is proportional to the top layer thickness, τ, with a slope defined by the mismatch of the film (E_f) and substrate (E_s) Young's moduli. GO exhibits surfactant-like behavior, exposing the hydrophobic moiety at the air-water interface. [START_REF] Kim | Graphene Oxide Sheets at Interfaces[END_REF] In particular, the GO used here possesses the proper C/O ratio to be easily entrapped at the interface. [START_REF] Imperiali | Interfacial Rheology and Structure of Tiled Graphene Oxide Sheets[END_REF] In the concentration range used, due to their large lateral size, flake diffusion is limited. If the evaporation is quite fast, GO flakes tend to accumulate at the interface. Routh and Zimmerman 38 defined a specific Peclet number, Pe ≈ H_w Ė/D, to quantify the ratio between diffusion (D) and the evaporation rate (Ė). At Pe ≫ 1, evaporation overtakes diffusion and skinning is expected. Approximating Ė as the water front velocity (∼μm/s) and the diffusion coefficient as that of a sphere of 1 μm size, one obtains Pe ≈ 8, fulfilling the above condition. Thus, a skin layer is expected to form on the top of the film. An analogous process was already observed in polymeric systems. [START_REF] Ciampi | Skin Formation and Water Distribution in Semicrystalline Polymer Layers Cast from Solution: A Magnetic Resonance Imaging Study[END_REF] The high frictional forces among the flakes and the pinning of the contact line suppress slippage, and the skin is finally folded under compression. [START_REF] Guo | Hydration Responsive Folding and Unfolding in Graphene Oxide Liquid Crystal Phases[END_REF] The resultant in-plane compressive stress is directed by the receding water front. Under this hypothesis, eq 2 could explain the different dependencies shown in Figure 3. A skin layer of thickness τ is suspended on foundations with different elasticities (i.e., GO bulk dispersions). The shear elastic modulus (G′) of bulk GO dispersions increases with concentration, as observed in rheology experiments. In principle, knowledge of the Poisson ratio of the GO foundation layer allows the determination of the bulk Young modulus, E_s. However, the qualitative trend in E_s with concentration can still be defined by a rough approximation, E_s ≈ G′.
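A back-of-the-envelope sketch of this Peclet estimate is given below; the wet-film thickness, the front velocity, and the Stokes-Einstein diffusivity of a 1 μm sphere are assumed illustrative values, so the resulting number only shows that Pe ≫ 1 rather than reproducing the quoted Pe ≈ 8 exactly.

import math

k_B, T, eta = 1.38e-23, 298.0, 1.0e-3      # J/K, K, Pa*s (water)
radius = 0.5e-6                            # sphere of 1 um size (radius in m)
D = k_B * T / (6 * math.pi * eta * radius) # Stokes-Einstein diffusivity, m^2/s

H_w = 40e-6                                # assumed wet-film thickness, m
E_dot = 1e-6                               # assumed water-front (evaporation) velocity, m/s

Pe = H_w * E_dot / D
print(f"D ~ {D:.1e} m^2/s, Pe ~ {Pe:.0f}") # Pe comes out on the order of 10^2 >> 1, so skinning is expected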
Using eq 2, an apparent bending stiffness B and its dependence on concentration can be inferred from the data in Figure 3, as shown in Figure 6. This bending stiffness increases as a power law (B ∝ c^b) with an exponent b close to 3 (3.3). Considering the relation B = E_f τ³, the behavior can be explained by assuming a direct proportionality of the skin thickness, τ, with GO concentration. If one considers a Young modulus on the order of ∼10² GPa, [START_REF] Jimenez Rioboo | Elastic constants of graphene oxide few layer films: correlations with interlayer stacking and bonding[END_REF] then a skin layer in the range of 10-100 nm is expected. These considerations are actually in agreement with the data obtained for very thin films (Figure 3c). The approximations of the Young and bending moduli are quite different from those obtained for monolayer graphene oxide. [START_REF] Imperiali | Interfacial Rheology and Structure of Tiled Graphene Oxide Sheets[END_REF] However, the present situation concerns a multilayer assembly, and the overall mechanical properties derive in a complex way from those of the monolayer. Compressive rheology characterization would be more appropriate in this particular situation. [START_REF] De Kretster | Compressive rheology: an overview[END_REF] However, shear and compression properties have been shown to behave in qualitatively the same manner. [START_REF] Zhou | Chemical and physical control of the rheology of concentrated metal oxide suspensions[END_REF] Therefore, the considerations above can give an idea of our wrinkling mechanics. Unfortunately, characterization of the skin layer is challenging, and a complete quantitative characterization of the film is not possible.
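The inversion of eq 2 behind this estimate can be written out explicitly; the wavelength and the two moduli below are assumed order-of-magnitude inputs (λ of a few tens of micrometers, E_s taken as the measured G′ of the dispersion, E_f ~ 10² GPa), so the output is only an order-of-magnitude skin thickness.

# Invert lambda ~ tau * (E_f / E_s)^(1/3) to estimate the skin thickness tau.
def skin_thickness_nm(wavelength_um, E_film_Pa, E_substrate_Pa):
    tau_um = wavelength_um * (E_substrate_Pa / E_film_Pa) ** (1.0 / 3.0)
    return tau_um * 1e3   # micrometers -> nanometers

for G_prime in (1e2, 1e3):                       # assumed G' of the GO dispersion, Pa
    tau = skin_thickness_nm(wavelength_um=50.0,  # assumed wrinkle wavelength
                            E_film_Pa=1e11,      # E_f ~ 10^2 GPa
                            E_substrate_Pa=G_prime)
    print(f"G' = {G_prime:.0e} Pa -> tau ~ {tau:.0f} nm")
# Gives roughly 50-110 nm, i.e. the same order as the 10-100 nm range quoted above.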
Wrinkle alignment is a consequence of a resultant tensile stress applied perpendicularly to the drying front as the volume decreased with the contact lines of the skin pinned. The mask drives the formation of a horizontal drying front. This front separates the dry film from the liquid film under the mask. A skin on the drying front is pinned at the contact line and stretched backward by the liquid surface tension. These constraints drive a unidirectional tensile stress in response to the normal compression due to volume shrinkage. The shape of the mask assures negligible skin formation at the lateral sides controlling the alignment.
The addition of nanoparticles modifies the physics of the system. In particular, we are interested here in particles that form cracks when used alone, a behavior exactly opposite to the case of GO systems, which have extra surface area. The relatively low nanoparticle concentration presently used does not affect the phase behavior and the bulk mechanical properties of the dispersions. But other effects come into play. As is known, above a critical thickness, films made solely of nanoparticles tend to crack. During evaporation, a close-packed configuration of NPs forms at the contact line. As the solvent recedes (vertically) from this NP front, a large negative capillary pressure is produced in the menisci between particles. [START_REF] Lee | Why Do Drying Films Crack?[END_REF] This generates a compression stress normal to the surface and a large in-plane tensile stress, as the rigid substrate prevents lateral deformation. Crack formation will release the tensile stress, at the expense of surface energy. The present situation is more complicated due to the presence of two different particles with two distinct limiting behaviors, one leading to the formation of cracks and the other to the formation of wrinkles. Actually, it was observed that the in-plane stress induced by NPs at the surface of the films could even balance the stress that normally generates wrinkles. This can result in the total disappearance of wrinkles. The large size difference between flakes and nanoparticles is expected to produce stratification. With the same Pe argument as for skin formation, one can expect an uneven distribution of flakes and nanoparticles during drying. [START_REF] Routh | Drying of thin colloidal films[END_REF] In a recent simulation, [START_REF] Cheng | Dispersing Nanoparticles in a Polymer Film via Solvent Evaporation[END_REF] the distribution of NPs in a skinning polymer solution was attributed mostly to NP-polymer and NP-solvent interactions. The nanoparticles used here are hydrophilic and have a better affinity for the solvent. In this case, NPs are expected to accumulate under the superficial skin. Indeed, no NPs are visible at the top of the dry films in experiments. During evaporation, volume shrinkage will set the NP layers under compression. As the NPs are not deformable and their movement is hindered under confinement (i.e., the GO stack assembly), an in-plane tensile stress is produced. This stress will first flatten the GO skin layer as long as excess surface area is available from the wrinkles, and then again produce cracks. We can infer that the shape of the boundary lines comes from a complex interplay of wrinkle amplitude, wavelength, and dry thickness effects.
The possibility of patterning a peculiar structure with negative lithography can be explained by considering the yield stress nature of GO liquid crystal dispersions. [START_REF] Poulin | Superflexibility of graphene oxide[END_REF] Menisci are created under the open areas of the mask due to solvent loss (Figure 5a). The stress generated by a Laplace pressure gradient (σ = 2γ/R ≈ 14 Pa, for water surface tension and hole size R = 1 mm) is not high enough to overtake the yield stress (i.e., σ y > 14 Pa for 2.0 wt % GO, SI) and to induce viscous flow. Therefore, the films will retain the resulting deformation, generating the pattern. As already mentioned, a compressive yield stress would be more appropriate, but the consideration still holds.
CONCLUSIONS
By taking advantage of dynamic evaporative lithography and tuning the rheological properties of GO dispersions, we were able to control the wrinkling of GO films. The obtained wrinkles were aligned over a macroscopic area for different GO deposits. The wavelength was tuned by changes in the concentration and thickness of the films. The phenomenon was attributed to the formation of a skin layer subjected to compressive strain during drying. It was shown that this compressive stress can be balanced by a tensile stress to get rid of the wrinkles. The latter was simply obtained with the addition of nanoparticles, making the concept easily implementable in applications.
Controlling the phenomenon of wrinkling is critical in the fabrication of particular patterns, as illustrated in the present work. Notably, wrinkling can be altered without affecting the final macroscopic texture. This is actually the case of evaporative lithography in hybrid systems. This is important since wrinkles can affect the properties of the films.
As already mentioned, [START_REF] Zou | Wrinkled Surface Mediated Antibacterial Activity of Graphene Oxide Nanosheets[END_REF] the relation between GO film roughness and its antimicrobial activity has been demonstrated. We expect that tuning the wrinkle spacing may add selectivity. Patterned films can be also used as templates for controlled nanoparticle deposition. In principle, anisotropic wettability may also be obtained.
There are still some open questions related to the physical mechanisms involved in the investigated phenomena. We used a modification of the elasticity theory to qualitatively describe the mechanics of the system. The foundation considered in the theory is a purely elastic material, whereas in our case the material is viscoelastic. Moreover, the whole process is dynamic, as the top layer is forming and growing continuously. Therefore, more accurate theories are needed to account for all of these phenomena. From an experimental point of view, the characterization of the skin layer would also be interesting. The determination of the skin thickness would allow us to confirm our hypothesis. It would also be interesting to look at the NPs' spatial distribution during film drying. The characterization of the NPs and their packing would be helpful in quantifying the stress built up during drying.
Figure 1. Film casting and drying is shown in (a), along with the overall wrinkle alignment (2D FFT inset) (b) and the microscopic appearance of a wrinkle (c). v and v_m indicate the directions of blade and mask movement, and H_m, H_w, and H are the mask, wet film, and dry film heights, respectively. Scale bars are (b) 500 μm and (c) 500 nm.
Figure 2. Wavelength determination examples: optical micrograph, grayscale profile, and 1D FFT for the wet (a) and dry (b) films. The scale bar is 100 μm.
Figure 3. λ (μm) reported as a function of GO wt % at a fixed dry thickness of 1.3 ± 0.5 μm (a), and (b) at different dry thicknesses (H, μm) for 4.3 wt % (squares, red line), 1.7 wt % (circles, green line), and 0.68 wt % (triangles, blue line) GO dispersions. (c) Thin-film thickness, h (nm), as a function of GO wt %. (Insets) The blade with a flexible scraper and the appearance of a film (scale bar 1.0 cm).
Figure 4. Effect of NP addition. (a) Wrinkling phase diagram for hybrid films at a fixed H_w of 40 μm: wrinkled (black squares), smooth (empty squares), and cracked (crosses) films. (b, c) SEM and (d) optical images of smooth, wrinkled, and cracked films, respectively. Scale bars are 20 μm (b), 30 μm (c), and 250 μm (d).
Figure 5. Patterning of GO (2.0 wt %) and GO + NP (2.0 and 0.1 wt %, respectively) films: (a) schematic representation of the holed mask with the drying and patterning scheme; cross-polarizing micrographs showing the obtained patterns for (b) the pure GO and (c) the GO-NP film. (Inset) Optical micrograph of the hybrid film, obtained in reflection. The scale bar is 1000 μm.
Figure 6. Bending stiffness (B = E_s λ³, N·m) reported as a function of GO wt %.
ACKNOWLEDGMENTS
We acknowledge C. Blanc, collaborator at the GAELIC project, for fruitful discussions and E. Laurichesse for some technical support of the experimental part.
Funding
The research in the manuscript was conducted in the framework of the A.N.R. funded GAELIC project.
AUTHOR INFORMATION
Corresponding Author *E-mail: franco.tardani@gmail.com.
Notes
The authors declare no competing financial interest.
Emanuele Caglioti
François Golse
The Boltzmann-Grad limit of the periodic Lorentz gas in two space dimensions
The periodic Lorentz gas is the dynamical system corresponding to the free motion of a point particle in a periodic system of fixed spherical obstacles of radius r centered at the integer points, assuming all collisions of the particle with the obstacles to be elastic. In this Note, we study this motion on time intervals of order 1/r as r → 0 + .
Résumé
The Boltzmann-Grad limit of the periodic Lorentz gas in two space dimensions. The periodic Lorentz gas is the dynamical system corresponding to the free motion in the plane of a point particle bouncing elastically off a system of disks of radius r centered at the points with integer coordinates. We study this motion for r → 0+ on time scales of order 1/r.
Abridged French version
The Lorentz gas is the dynamical system corresponding to the free motion of a point particle in a system of circular obstacles of radius r centered at the vertices of a lattice in R², assuming that the collisions between the particle and the obstacles are perfectly elastic. The trajectories of the particle are then given by formulas (2). The Boltzmann-Grad limit for the Lorentz gas consists in assuming that the obstacle radius r → 0+ and in observing the dynamics of the particle over long time spans, of order 1/r; see (3) for the Boltzmann-Grad scaling in dimension 2.
The trajectories of the particle can be expressed in terms of the obstacle-to-obstacle transfer map T_r defined by (8), where the notation Y denotes the inverse of the map (7). This map associates, with any impact parameter h′ ∈ [-1, 1] corresponding to a particle leaving the surface of an obstacle in the direction ω ∈ S¹, the impact parameter h at the next collision, together with the time s elapsed until that collision. (For a definition of the notion of impact parameter, see [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF].)
One is thus reduced to studying the limiting behavior of the transfer map T_r as r → 0+. Proposition 0.1 When 0 < ω_2 < ω_1 and α = ω_2/ω_1 ∉ Q, the transfer map T_r is approximated, up to O(r²), by the map T_{A,B,Q,N} defined in formula (14). For arbitrary ω ∈ S¹, one reduces to the above case by the symmetry (15).
The parameters A, B, Q, N mod. 2 entering the asymptotic transfer map are defined from the continued fraction expansion (9) of α by formulas (11) and (12).
These formulas show that the parameters A, B, Q, N mod. 2 are very strongly oscillating functions of the variables ω and r. It is therefore natural to look for the limiting behavior of the transfer map T_r in a topology that is weak with respect to the dependence on the direction ω. One shows in this way that, for every h′ ∈ [-1, 1], the family of maps ω → T_r(h′, ω) converges in the sense of Young measures (see for instance [START_REF] Tartar | Compensated compactness and applications to partial differential equations[END_REF] pp. 146-154 for a definition of this notion) as r → 0+ to a probability measure P(s, h|h′) ds dh independent of ω:
Theorem 0.2 For every Φ ∈ C_c(R*_+ × ]-1, 1[) and every h′ ∈ ]-1, 1[, the limit (16) holds in L^∞(S¹) weak-* as r → 0+, where the probability measure P(s, h|h′) ds dh is the image of the probability measure µ defined in (17) under the map (A, B, Q, N) → T_{A,B,Q,N}(h′) of formula (14). Moreover, this transition probability density P(s, h|h′) satisfies the properties (18).
The theorem above is the main result of this Note: it shows that, in the Boltzmann-Grad limit, the obstacle-to-obstacle transfer is naturally described by a transition probability density P(s, h|h′), where s is the time elapsed between two successive collisions with the obstacles (in the time scale of the Boltzmann-Grad limit), h is the impact parameter at the next collision and h′ the one corresponding to the previous collision.
The fact that the transition probability P(s, h|h′) is independent of the direction suggests the independence hypothesis (H) for the quantities A, B, Q, N mod. 2 corresponding to successive collisions. Theorem 0.3 Under hypothesis (H), for every probability density f^in ∈ C_c(R² × S¹), the distribution function f_r ≡ f_r(t, x, ω) of kinetic theory, defined by (3), converges in L^∞(R_+ × R² × S¹) to the limit (22) as r → 0+, where F is the solution of the Cauchy problem (21) posed in the extended phase space (x, ω, s, h) ∈ R² × S¹ × R*_+ × ]-1, 1[. In the case of independent, Poisson-distributed random obstacles, Gallavotti showed that the Boltzmann-Grad limit of the Lorentz gas obeys the Lorentz kinetic equation (4). The periodic case is completely different: on the basis of estimates (cf. [START_REF] Bourgain | On the distribution of free path lengths for the periodic Lorentz gas[END_REF] and [START_REF] Golse | On the distribution of free path lengths for the periodic Lorentz gas II. M2AN Modél[END_REF]) of the exit time from the domain Z_r defined in (1), one proves that the Boltzmann-Grad limit of the periodic Lorentz gas cannot be described by the Lorentz equation (4) on the classical phase space R² × S¹ of kinetic theory: see [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF]. If hypothesis (H) below holds, the kinetic model (22) in the extended phase space would therefore provide the equation that should replace the classical Lorentz kinetic equation (4) in the periodic case.
The Lorentz gas
The Lorentz gas is the dynamical system corresponding to the free motion of a single point particle in a periodic system of fixed spherical obstacles, assuming that collisions between the particle and any of the obstacles are elastic. Henceforth, we assume that the space dimension is 2 and that the obstacles are disks of radius r centered at each point of Z 2 . Hence the domain left free for particle motion is
Z_r = {x ∈ R² | dist(x, Z²) > r}, where it is assumed that 0 < r < 1/2.   (1)
Assuming that the particle moves at speed 1, its trajectory starting from x ∈ Z_r with velocity ω ∈ S¹ at time t = 0 is t → (X_r, Ω_r)(t; x, ω) ∈ R² × S¹ given by
Ẋ_r(t) = Ω_r(t) and Ω̇_r(t) = 0 whenever X_r(t) ∈ Z_r,
X_r(t + 0) = X_r(t - 0) and Ω_r(t + 0) = R[X_r(t)] Ω_r(t - 0) whenever X_r(t - 0) ∈ ∂Z_r,   (2)
denoting ˙ = d/dt and R[X_r(t)] the specular reflection on ∂Z_r at the point X_r(t) = X_r(t ± 0).
f_r(t, x, ω) := f^in(rX_r(-t/r; x, ω), Ω_r(-t/r; x, ω)) whenever x ∈ Z_r.   (3)
We are concerned with the limit of f r as r → 0 + in some appropriate sense to be explained below. In the 2-dimensional setting considered here, this is precisely the Boltzmann-Grad limit.
In the case of a random (Poisson), instead of periodic, configuration of obstacles, Gallavotti [START_REF] Gallavotti | Rigorous theory of the Boltzmann equation in the Lorentz gas[END_REF] proved that the expectation of f r converges to the solution of the Lorentz kinetic equation for (x, ω) ∈ R 2 × S 1 :
(∂_t + ω·∇_x) f(t, x, ω) = ∫_{S¹} ( f(t, x, ω - 2(ω·n)n) - f(t, x, ω) ) (ω·n)_+ dn,   f|_{t=0} = f^in.   (4)
In the case of a periodic distribution of obstacles, the Boltzmann-Grad limit of the Lorentz gas cannot be described by a transport equation as above: see [START_REF] Golse | On the Periodic Lorentz Gas and the Lorentz Kinetic Equation[END_REF] for a complete proof, based on estimates on the free path length to be found in [START_REF] Bourgain | On the distribution of free path lengths for the periodic Lorentz gas[END_REF] and [START_REF] Golse | On the distribution of free path lengths for the periodic Lorentz gas II. M2AN Modél[END_REF]. This limit involves instead a linear Boltzmann equation on an extended phase space with two new variables taking into account correlations between consecutive collisions with the obstacles that are an effect of periodicity: see Theorem 4.1.
The transfer map
Denote by n x the inward unit normal to Z r at the point x ∈ ∂Z r , consider
Γ_r^± = {(x, ω) ∈ ∂Z_r × S¹ | ±ω·n_x > 0},   (5)
and let Γ ± r /Z 2 be the quotient of Γ ± r under the action of Z 2 by translation on the x variable. For (x, ω) ∈ Γ + r , let τ r (x, ω) be the exit time from x in the direction ω and h r (x, ω) be the impact parameter:
τ_r(x, ω) = inf{t > 0 | x + tω ∈ ∂Z_r}, and h_r(x, ω) = sin(ω, n_x).   (6)
Obviously, the map
Γ_r^+/Z² ∋ (x, ω) → (h_r(x, ω), ω) ∈ ]-1, 1[ × S¹   (7)
coordinatizes Γ_r^+/Z², and we henceforth denote Y_r its inverse. For each r ∈ ]0, 1/2[, consider now the transfer map
T_r : ]-1, 1[ × S¹ → R*_+ × ]-1, 1[ defined by
T_r(h′, ω) = ( rτ_r(Y_r(h′, ω)), h_r( X_r(τ_r(Y_r(h′, ω)); Y_r(h′, ω)), Ω_r(τ_r(Y_r(h′, ω)); Y_r(h′, ω)) ) ).   (8)
For a particle leaving the surface of an obstacle in the direction ω with impact parameter h ′ , the transition map T r (h ′ , ω) = (s, h) gives the (rescaled) distance s to the next collision, and the corresponding impact parameter h. Obviously, each trajectory (2) of the particle can be expressed in terms of the transfer map T r and iterates thereof. The Boltzmann-Grad limit of the periodic Lorentz gas is therefore reduced to computing the limiting behavior of T r as r → 0 + , and this is our main purpose in this Note.
We first need some pieces of notation. Assume ω = (ω 1 , ω 2 ) with 0 < ω 2 < ω 1 , and α = ω 2 /ω 1 ∈]0, 1[\Q. Consider the continued fraction expansion of α:
α = [0; a_0, a_1, a_2, ...] = 1/(a_0 + 1/(a_1 + ...)).   (9)
Define the sequences of convergents (p n , q n ) n≥0 and errors (d n ) n≥0 by the recursion formulas
p_{n+1} = a_n p_n + p_{n-1},  p_0 = 1,  p_1 = 0,
q_{n+1} = a_n q_n + q_{n-1},  q_0 = 0,  q_1 = 1,
d_n = (-1)^{n-1} (q_n α - p_n),   (10)
and let
N(α, r) = inf{ n ≥ 0 | d_n ≤ 2r√(1 + α²) },  and  k(α, r) = -⌊ (2r√(1 + α²) - d_{N(α,r)-1}) / d_{N(α,r)} ⌋.   (11)
Proposition 2.1 For each ω = (cos θ, sin θ) with 0 < θ < π/4, set α = tan θ and ε = 2r√(1 + α²), and
A(α, r) = 1 - d_{N(α,r)}/ε,  B(α, r) = 1 - (d_{N(α,r)-1} - k(α, r) d_{N(α,r)})/ε,  Q(α, r) = ε q_{N(α,r)}.   (12)
In the limit r → 0+, the transition map T_r defined in (8) is explicit in terms of A, B, Q, N up to O(r²):
T_r(h′, ω) = T_{A(α,r),B(α,r),Q(α,r),N(α,r)}(h′) + (O(r²), 0) for each h′ ∈ ]-1, 1[.   (13)
In the formula above,
T_{A,B,Q,N}(h′) = (Q, h′ - 2(-1)^N (1 - A))        if (-1)^N h′ ∈ ]1 - 2A, 1],
T_{A,B,Q,N}(h′) = (Q′, h′ + 2(-1)^N (1 - B))       if (-1)^N h′ ∈ [-1, -1 + 2B[,
T_{A,B,Q,N}(h′) = (Q′ + Q, h′ + 2(-1)^N (A - B))   if (-1)^N h′ ∈ [-1 + 2B, 1 - 2A],   (14)
for each (A, B, Q, N) ∈ K := ]0, 1[³ × Z/2Z, with the notation Q′ = (1 - Q(1 - B))/(1 - A).
The proof uses the 3-term partition of the 2-torus defined in section 2 of [START_REF] Caglioti | On the distribution of free path lengths for the periodic Lorentz gas III[END_REF], following the work of [START_REF] Blank | Thom's problem on irrational flows[END_REF].
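For readers who want to experiment with these quantities, a small Python sketch of the construction is given below; it follows (9)-(12) in spirit, but the convergent indexing is the standard one and may be shifted by one index with respect to the convention of (10), and the sample values of α and r are arbitrary illustrative choices.

import math

def transfer_parameters(alpha, r):
    """Sketch of (10)-(12): A, B, Q and the parity of N for irrational alpha in (0,1)."""
    eps = 2.0 * r * math.sqrt(1.0 + alpha * alpha)
    p_prev, q_prev = 1, 0        # p_{-1}, q_{-1}
    p_cur, q_cur = 0, 1          # p_0,  q_0
    d_prev, d_cur = 1.0, alpha   # |q_{-1} alpha - p_{-1}|, |q_0 alpha - p_0|
    x, n = alpha, 0
    while d_cur > eps:           # N(alpha, r): first index with d_n <= eps
        a = int(1.0 / x)         # next partial quotient of alpha = [0; a_1, a_2, ...]
        x = 1.0 / x - a
        p_prev, p_cur = p_cur, a * p_cur + p_prev
        q_prev, q_cur = q_cur, a * q_cur + q_prev
        d_prev, d_cur = d_cur, abs(q_cur * alpha - p_cur)
        n += 1
    k = -math.floor((eps - d_prev) / d_cur)       # integer from (11)
    A = 1.0 - d_cur / eps
    B = 1.0 - (d_prev - k * d_cur) / eps
    Q = eps * q_cur
    return A, B, Q, n % 2

# Example: alpha = sqrt(2) - 1 with a small obstacle radius.
print(transfer_parameters(math.sqrt(2.0) - 1.0, 1e-3))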
For ω = (cos θ, sin θ) with arbitrary θ ∈ R, the map h′ → T_r(h′, ω) is computed using Proposition 2.1 in the following manner. Set θ̃ = θ - mπ/2 with m = [(2/π)(θ + π/4)] and let ω̃ = (cos θ̃, sin θ̃). Then
T_r(h′, ω) = (s, h), where (s, sign(tan θ̃) h) = T_r(sign(tan θ̃) h′, ω̃).   (15)
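A direct transcription of the three branches of (14) into Python might look as follows; it is only a sketch for experimenting with the asymptotic map, and the sample parameter values in the usage line are arbitrary.

def T(A, B, Q, N, h_prime):
    """Asymptotic transfer map T_{A,B,Q,N}(h') of formula (14); returns (s, h)."""
    sign = -1.0 if N % 2 else 1.0                    # (-1)^N
    Q_prime = (1.0 - Q * (1.0 - B)) / (1.0 - A)
    u = sign * h_prime
    if 1.0 - 2.0 * A < u <= 1.0:                     # first branch
        return Q, h_prime - 2.0 * sign * (1.0 - A)
    if -1.0 <= u < -1.0 + 2.0 * B:                   # second branch
        return Q_prime, h_prime + 2.0 * sign * (1.0 - B)
    return Q_prime + Q, h_prime + 2.0 * sign * (A - B)   # third branch

print(T(A=0.3, B=0.4, Q=0.5, N=0, h_prime=0.9))      # lands in the first branch: (0.5, -0.5)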
3. The Boltzmann-Grad limit of the transfer map T r
The formulas (11) and (12) defining A, B, Q, N mod. 2 show that these quantities are strongly oscillating functions of the variables ω and r. In view of Proposition 2.1, one therefore expects the transfer map T r to have a limit as r → 0 + only in the weakest imaginable sense, i.e. in the sense of Young measuressee [START_REF] Tartar | Compensated compactness and applications to partial differential equations[END_REF], pp. 146-154 for a definition of this notion of convergence.
The main result in the present Note is the theorem below. It says that, for each h ′ ∈ [-1, 1], the family of maps ω → T r (h ′ , ω) converges as r → 0 + and in the sense of Young measures to some probability measure P (s, h|h ′ )dsdh that is moreover independent of ω.
Theorem 3.1 For each Φ ∈ C_c(R*_+ × [-1, 1]) and each h′ ∈ [-1, 1],
Φ(T_r(h′, ·)) → ∫_0^∞ ∫_{-1}^{1} Φ(s, h) P(s, h|h′) ds dh in L^∞(S¹_ω) weak-* as r → 0+,   (16)
where the transition probability P(s, h|h′) ds dh is the image of the probability measure on K given by
dµ(A, B, Q, N) = (6/π²) 1_{0<A<1} 1_{0<B<1-A} 1_{0<Q<1/(2-A-B)} (dA dB dQ)/(1 - A) (δ_{N=0} + δ_{N=1})   (17)
under the map K ∋ (A, B, Q, N) → T_{A,B,Q,N}(h′) ∈ R_+ × [-1, 1]. Moreover, P satisfies:
(s, h, h′) → (1 + s) P(s, h|h′) is piecewise continuous and bounded on R_+ × [-1, 1] × [-1, 1],
and P(s, h|h′) = P(s, -h|-h′) for each h, h′ ∈ [-1, 1] and s ≥ 0.   (18)
The proof of (16)-(17) is based on the explicit representation of the transition map in Proposition 2.1, together with Kloosterman sum techniques as in [START_REF] Boca | The distribution of the free path lengths in the periodic two-dimensional Lorentz gas in the small-scatterer limit[END_REF]. The explicit formula for the transition probability P is very complicated and we do not report it here; however, it clearly entails the properties (18).
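The push-forward structure of Theorem 3.1 lends itself to a simple Monte Carlo illustration: sample (A, B, Q, N) from µ using the density in (17) and histogram the images T_{A,B,Q,N}(h′). The sketch below assumes the function T defined in the sketch above; the hierarchical sampler is exact for the density in (17), but the sample size and h′ are arbitrary illustrative choices, and none of this is the explicit formula for P.

import math, random

def sample_mu():
    """Sample (A, B, Q, N) from the probability measure mu of (17)."""
    # The marginal density of A is (12/pi^2) * log(2 - A)/(1 - A), bounded by 12/pi^2 < 1.25.
    while True:
        A = random.random()
        density = (12.0 / math.pi**2) * math.log1p(1.0 - A) / (1.0 - A)
        if random.random() * 1.25 < density:
            break
    # Given A, B on (0, 1 - A) has density proportional to 1/(2 - A - B): invert its CDF.
    u = random.random()
    B = (2.0 - A) - (2.0 - A) ** (1.0 - u)
    # Given (A, B), Q is uniform on (0, 1/(2 - A - B)); N is 0 or 1 with equal probability.
    Q = random.random() / (2.0 - A - B)
    N = random.randint(0, 1)
    return A, B, Q, N

def empirical_transition(h_prime, n_samples=20000):
    """Monte Carlo image of mu under (A, B, Q, N) -> T_{A,B,Q,N}(h'), approximating P(s, h|h')."""
    return [T(*sample_mu(), h_prime) for _ in range(n_samples)]

samples = empirical_transition(h_prime=0.3)
mean_s = sum(s for s, _ in samples) / len(samples)
print(f"Monte Carlo mean of s under P(., .|h'=0.3): {mean_s:.3f}")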
The Boltzmann-Grad limit of the Lorentz gas dynamics
For each r ∈ ]0, 1/2[, denote by dγ_r^+(x, ω) the probability measure on Γ_r^+ that is proportional to ω·n_x dx dω. This probability measure is invariant under the billiard map
B_r : Γ_r^+ ∋ (x, ω) → B_r(x, ω) = (x + τ_r(x, ω)ω, R[x + τ_r(x, ω)ω]ω) ∈ Γ_r^+.   (19)
For (x_0, ω_0) ∈ Γ_r^+, set (x_n, ω_n) = B_r^n(x_0, ω_0) and
α_n = min(|ω_{n,1}/ω_{n,2}|, |ω_{n,2}/ω_{n,1}|) for each n ≥ 0, and define
b_r^n = (A(α_n, r), B(α_n, r), Q(α_n, r), N(α_n, r) mod. 2) ∈ K for each n ≥ 0.   (20)
We make the following asymptotic independence hypothesis: for each n ≥ 1 and each Ψ ∈ C([-1, 1] × K^n),
(H)  lim_{r→0+} ∫_{Γ_r^+} Ψ(h_r, ω_0, b_r^1, ..., b_r^n) dγ_r^+(x_0, ω_0) = ∫_{-1}^{1} (dh′/2) ∫_{S¹} (dω_0/2π) ∫_{K^n} Ψ(h′, ω_0, β_1, ..., β_n) dµ(β_1) ... dµ(β_n).
Under this assumption, the Boltzmann-Grad limit of the Lorentz gas is described by a kinetic model on the extended phase space R² × S¹ × R_+ × [-1, 1], unlike the Lorentz kinetic equation (4), which is set on the usual phase space R² × S¹.
Theorem 4.1 Assume (H), and let f^in be any continuous, compactly supported probability density on R² × S¹. Denoting by R[θ] the rotation of an angle θ, let F ≡ F(t, x, ω, s, h) be the solution of
(∂_t + ω·∇_x - ∂_s) F(t, x, ω, s, h) = ∫_{-1}^{1} P(s, h|h′) F(t, x, R[π - 2 arcsin(h′)]ω, 0, h′) dh′,
F(0, x, ω, s, h) = f^in(x, ω) ∫_s^∞ ∫_{-1}^{1} P(τ, h|h′) dh′ dτ,   (21)
where (x, ω, s, h) runs through R² × S¹ × R*_+ × ]-1, 1[. Then the family (f_r)_{0<r<1/2} defined in (3) satisfies
f_r → ∫_0^∞ ∫_{-1}^{1} F(·, ·, ·, s, h) ds dh in L^∞(R_+ × R² × S¹) weak-* as r → 0+.   (22)
For each (s_0, h_0) ∈ R_+ × [-1, 1], let (s_n, h_n)_{n≥1} be the Markov chain defined by the induction formula
(s_n, h_n) = T_{β_n}(h_{n-1}) for each n ≥ 1, together with ω_n = R[2 arcsin(h_{n-1}) - π] ω_{n-1},   (23)
where the β_n ∈ K are independent random variables distributed under µ. The proof of Theorem 4.1 relies upon approximating the particle trajectory (X_r, Ω_r)(t) starting from (x_0, ω_0) in terms of the following jump process with values in R² × S¹ × R_+ × [-1, 1], with the help of Proposition 2.1:
(X_t, Ω_t, S_t, H_t)(x_0, ω_0, s_0, h_0) = (x_0 + tω_0, ω_0, s_0 - t, h_0) for 0 ≤ t < s_0,
(X_t, Ω_t, S_t, H_t)(x_0, ω_0, s_0, h_0) = (X_{s_n} + (t - s_n)ω_n, ω_n, s_{n+1} - t, h_n) for s_n ≤ t < s_{n+1}.   (24)
Unlike in the case of a random (Poisson) distribution of obstacles, the successive impact parameters on each particle path are not independent and uniformly distributed in the periodic case; likewise, the successive free path lengths on each particle path are not independent with exponential distribution. The Markov chain (23) is introduced to handle precisely this difficulty.
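Assuming the sampler sample_mu and the map T from the sketches above, the Markov chain (23) can be simulated along the following lines; the initial impact parameter, initial direction, and trajectory length are arbitrary illustrative choices, and the sketch simulates the limiting chain, not the billiard dynamics itself.

import math

def simulate_chain(h0, n_steps, omega0=(1.0, 0.0)):
    """Iterate the Markov chain (23): flight times, impact parameters and directions."""
    h, omega = h0, omega0
    history = []
    for _ in range(n_steps):
        s, h_next = T(*sample_mu(), h)                   # (s_n, h_n) = T_{beta_n}(h_{n-1})
        phi = 2.0 * math.asin(h) - math.pi               # rotation angle R[2 arcsin(h_{n-1}) - pi]
        omega = (math.cos(phi) * omega[0] - math.sin(phi) * omega[1],
                 math.sin(phi) * omega[0] + math.cos(phi) * omega[1])
        history.append((s, h_next, omega))
        h = h_next
    return history

path = simulate_chain(h0=0.1, n_steps=5)
for step, (s, h, omega) in enumerate(path, 1):
    print(f"n={step}: flight time s={s:.3f}, impact parameter h={h:+.3f}")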
Figure 1. Left: the transfer map (s, h) = T_r(h′, v), with h′ = sin(n_{x′}, v) and h = sin(n_x, v). Right: particles leaving the surface of one obstacle will next collide with one of generically three obstacles. The figure explains the geometrical meaning of A, B, Q.
Miroslava Černochová
email: miroslava.cernochova@pedf.cuni.cz
Tomáš Jeřábek
email: tomas.jerabek@pedf.cuni.cz
DIYLab as a way for Student Teachers to Understand a Learning Process
Keywords: Digital Literacy, DIY, DIYLab Activity, Visualisation of Learning Process, Student Teacher
The authors introduce their experiences gained in the EU project Do It Yourself in Education: Expanding digital literacy to foster student agency and collaborative learning (DIYLAB). The project aimed to design an educational procedure based on DIY philosophy, with a student-centred and heuristic approach to learning focused on digital literacy development, and later to verify it in teaching practice in primary and secondary schools and HEIs in Finland, Spain and the Czech Republic. In the Czech Republic the DIYLAB project was realized as a teaching approach in initial teacher education at Bachelor and MA level for ICT, Biology, Primary Education and Art Education student teachers. DIYLab activities represented occasions for student teachers to bring in interesting problems related to their study programmes and also to their after-school interests. An integral part of DIYLab activities was problem visualisation using digital technology; visuals, films, animations, etc. served as a basis for assessing both pupils' digital competence and their problem-solving capability. DIYLab has influenced student teachers' pedagogical thinking about how to develop pupils' digital literacy and how to assess digital literacy development as a process and not as a digital artefact. Following the project, the DIYLab approach is being included in future Bachelor and MA level initial teacher education with the aim of teaching student teachers (1) to design DIY activities for digital literacy development that support inter-disciplinary relations in school education, and (2) to use digital technology to oversee and assess learning as a process.
Introduction
Since 2006, ICT has been included in the curriculum as compulsory for primary, lower and upper secondary education in the Czech Republic with the aim of developing digital literacy and the use of ICT skills so that pupils are enabled to use standard ICT technology. In schools there is, however, a tendency for ICT lessons to focus on outcomes which can be produced by basic formats using a typical office suite (for example, using a word processor, spreadsheet and presentation program) and on the ability to search and to process information (primarily via the Internet).
Some pupils and HEI students express their discontent with how digital technology is used in schools. They show teachers that they are more experienced in using digital technology than their teachers are. The 2006 ICT curriculum is a thing of the past. It does not sufficiently reflect new and advanced technology, nor the need to implement innovative teaching approaches to digital literacy development which emphasise the educational potential of new ways and strategies of learning, not just user skills.
Thus the Czech government approved in 2014 the Strategy of Digital Education ([START_REF] Moeys | Strategie digitálního vzdělávání[END_REF]) that would contribute, among other things, (1) to create conditions for the development of the digital literacy and computational thinking of pupils, and (2) to create conditions for the development of the digital literacy and computational thinking of teachers.
The DIYLab, being grounded in DIY (do-it-yourself) philosophy, is an example of an innovative approach to education which worked in schools and enabled the development of digital literacy. Pupils at all levels were at first cautious but attracted to the idea and motivated to learn. Teachers were empowered by coming to understand another strategy to enable students' learning.
The DIY Concept
The concept of DIY is not totally new. It can be found when speaking of the development, for example, of amateur radio as a hobby. The DIY movement has developed and spread step by step into different branches (technical education, Art, science, etc.). It has common features: it brings together enthusiastic people (who have the same aim and interest) to solve in a creative way interesting problems in their field and mutually to share "manuals" on how to proceed or how "you can do it yourself".
Globally, there is a generation of DIY enthusiasts and supporters who join in various communities or networks. There is nothing that could limit activities of this generation of creative and thoughtful people; if they need to know something to be able to realise their DIY ideas they learn from one another. The DIY generation very often uses ICT for their creative initiatives. The DIY generation visualises stories to document the process explaining how problems were solved to be shared as tutorials by others. Freedom to make and to create using ICT is perceived as freedom of access, in the choice of tools and technology, and a release from reliance on specific software and hardware tools; it is using a variety of resources, making copies and sharing outcomes and methods.
Implementation of DIY into Education
To apply DIY in schools means enabling pupils to bring into school interesting ideas from the extra-curricular environment and to create conditions for their exploration; to put them within a school curriculum and to (re)arrange circumstances for collaboration and sharing experiences, in a similar way to that in which scientists and experts might work. Through such processes pupils can use their knowledge and skills from different subjects and interests so they can keep discovering new things and interdisciplinary contexts and connections ( [START_REF] Sancho-Gil | Envisioning DIY learning in primary and secondary schools[END_REF]). In such activities pupils organise themselves, their procedures and processes; principles of autonomous learning are thus being put into practice.
According to Y. Kafai and K. Peppler ( [START_REF] Kafai | Youth, Technology, and DIY: Developing Participatory Competencies in Creative Media Production[END_REF]), it is possible to incorporate DIY activities into programming, designing models, constructing robots, creating manuals (tutorials) on how to do or how to learn something (for example, how to count using an abacus). Thus, DIY can potentially contribute to further mastery in the use of digital technology and consequently improve digital literacy. DIY in school contexts corresponds with the concept of learning as "the natural, unstoppable process of acquiring knowledge and mastery" and being aware that "the vast majority of the learning in your life doesn't happen when you're a kid in school" ( [6, p. 22]).
This article focuses on DIYLab activities implemented through the project DIYLab in teacher education at the Faculty of Education but also includes an example from a school since one of the participant teachers was also a student teacher at the Faculty.
First, it was necessary to map student teachers' and teacher educators' ideas about whether, why and how to bring topics from students' outside activities into their study programme. The student teachers were expected to integrate an interesting problem of their own related to their life or hobbies, but unfortunately the majority of them did not come with any initiative of their own, waiting instead for a task formulated by their teachers. It emerged that the student teachers who participated in DIYLab had not been used to bringing their extra-curricular interests, hobbies, or expertise into university study. After that, it was necessary to specify a framework and key features of the DIY activities which were consequently realised by student teachers.
A Model of DIYLab Activity
Imperative in the practice of DIYLab is that all who apply the DIY idea in their activities endeavour to share with the outside world how they proceeded and how they solved a problem. They develop tutorials which visually (using movie, animation, etc.) document the process, explaining how problems were solved and what was learned. This means of transmitting to others how to proceed may be perceived as an author's self-reflection of his/her learning. Story-telling -a narrative assemblage -is a very important attribute of the DIY creation process ([4, p. 300]). The publishing of a procedure or a manual on how to create or produce something or how something was made can help others to produce something similar; it can help others to learn new methods or to create something completely new and original. The concept of DIY aligns with young people's experiences who point out that in schools "we miss so much of the richness of real learning, which relies on failure, trial and error, getting to know people, and reaching for things you didn't think were possible" ([6, p. 75]).
Key features of DIYLab activities
A model of DIYLab activities was based on DIY philosophy, which is a student-centred, heuristic approach to learning and problem-solving and which implies six pedagogical principles for approaches to learning (Table 1). A key aim of DIY activities, beyond solving a problem, is to provide a manual on how to solve the problem. This "handbook" is then published in a form which can be shared with others; the most accessible and easily understood way is a visible form (e.g., video, animation).
(2) To have the characteristics of inquiry-based teaching and learning methods: DIY communities dedicate their time to original problems which have not been solved and which are different from traditional school tasks.
(3) To support transdisciplinary knowledge: to enable pupils to bring into school interesting ideas from the extra-curricular environment and to create conditions for their exploration. If pupils have an interesting problem to be solved, they do not worry about which school subject it relates to. (J. Sancho et al. [10])
(4) To contribute to autonomous / self-regulated learning: documenting how to proceed for others may be perceived as an author's self-reflection on his/her learning. DIY communities enjoy finding a solution, "Building new tools and paths to help all of us learn".
The requirement to visualise a learning process about "how the problem can be solved" as a message for others has several reasons. Firstly, visual tools are normally understood as comprehensible regardless of which languages we speak. Secondly, we are all, student teachers included, increasingly surrounded by visual stories (e.g. YouTube, animated instructions for passengers on how to behave during a flight). The skills required to use digital technology for visualisation fully correspond to the concept of digital literacy defined by Y. Eshet-Alkalai ([3, p. 93]): "digital literacy involves more than the mere ability to use software or operate a digital device; it includes a large variety of complex cognitive, motor, sociological, and emotional skills, which users need in order to function effectively in digital environments." Eshet-Alkalai's conceptual model of digital literacy consists of five digital literacy thinking skills, including the "photo-visual digital thinking skill: Modern graphic based digital environments require scholars to employ cognitive skills of "using vision to think" in order to create photo-visual communication with the environment. This unique form of digital thinking skill helps users to intuitively "read" and understand instructions and messages that are presented in a visual-graphical form, as in user interfaces and in children's computer games." ([3, p. 93])
Specification of a Research Field
The implementation of the DIYLab project in teacher education was an opportunity to focus on the development of pedagogical thinking in student teachers; primarily to enrich by using an innovative didactical approach to digital literacy development, and to understand better the learning process.
The project DIYLab was expected to answer questions such as how much student teachers are capable of visualising their learning, which type of visual description (narration) student teachers would produce or how difficult it is for them to visualise their learning process. The student teachers had not been used to considering why and how to visualise the learning process. They had studied learning theory in aspects of pedagogy and psychology. Therefore, it was expected that they would find visualising their learning processes challenging because they had never undertaken such a pedagogical task. During their HEI study, they mainly do self-reflections from didactic situations, teaching processes or learning only in oral or written forms, but not in a visual manner.
Research Methodology
Participatory Action Research -PAR ( [START_REF] Bergold | Participatory Research Methods: A Methodological Approach in Motion [110 paragraphs[END_REF]; ( [START_REF] Kemmis | Exploring the relevance of critical theory for action research: Emancipatory action research in the footsteps of Jürgen Habermas[END_REF]; [START_REF] Kemmis | Participatory action research. Communicative action and the public sphere[END_REF]) was the research methodology adopted since it allowed active engagement, intervention and the opportunity for participant observation. The approach was also consonant with the democratic values implicit in the above-stated DIY philosophy. The impact of DIY approach on teacher education was studied using qualitative research methods (focus groups, questionnaire surveys, interviews, observations and analyses of student teachers' DIY outcomes). Teacher educators evaluated not only the originality of student teachers' DIYLab activity procedures, but also how much these DIY activities corresponded to the six pedagogical principles (see Fig. 1) and to what extent student teachers managed to visualise a process and their ways of thinking and learning.
Characteristics of student teachers who participated in DIYLab activities
From January 2015 to January 2016, at the Faculty of Education, 192 part-time and full-time student teachers (aged at least 20 years) and eight teacher educators from four departments (IT and Technical Education, Art Education, Biology and Environmental Studies, and Primary Education) were introduced to the DIY philosophy within compulsory courses focused on pedagogy, ICT education, computing education, biology, educational technology, multimedia etc.
Analysis of Some DIYLab Activities Performed by Student Teachers
The student teachers worked on 16 themes for DIYLab activities (the full list, with outcome counts, is given at the end of this section). Some of the outcomes were published on the HUB (hub.diylab.eu).
Ways in which the DIYLab activities met the defined requirements
(1) Collaborative learning
The collaborative approach to DIYLab activities was the most irregular one, and was dependent on each particular process and students. For the part-time student teachers who live and work in different places of the Czech Republic and only meet during classes at the Faculty of Education collaboration and co-operation with their coscholars are more fitting and appropriate than for full-time students. Some DIYLab activities were extremely specialised and tasks had to be solved individually.
(2) Inquiry-based teaching and learning
DIYLab activities were not, for the student teachers, the routine tasks usually assigned in seminars. In some cases, the student teachers faced technological problems (see Building android apps or the specific solution for Installing a camera in a birdhouse for the subject Multimedia Systems), in other cases they faced more theoretical didactic problems (see Collection of examples of problems which human cannot solve without using computer).
(3) Trans-disciplinary knowledge
Almost all of DIYLab activities had trans-disciplinary overlap. In some cases, the trans-disciplinary co-operation became obvious only thanks to the DIYLab project and had an impact in forming a student teachers' professional competence of selfreflection (e.g. How I'm becoming a teacher). Nearly thirty per cent (28.5%) of ICT student teachers stated that in their DIYLab activity they did not use knowledge from other subjects; if there was any required knowledge from other subjects, it was mostly from physics, mathematics, English, geography, medicine or cinematography or computer science, and rarely from biology, chemistry or art. Nevertheless, they appreciated the opportunity to collaborate with students of other study specialisations very much, and due to such collaboration from their point of view they learned a great deal.
(4) Autonomous / Self-regulated learning
The dimensions of independent learning and self-regulation underpin the whole process and were actively promoted, taking due account of the diversity of the students and their willingness to learn by new means. The student teachers appreciated the DIYLab approach to learning from two perspectives: they learned (a) another approach to solve an issue, and (b) how to properly lay out their work, to visualise and organise tasks in order to find solutions.
(5) Digital literacy improvement / Digital competence
In carrying out the DIYLab activities the student teachers worked with quite a narrow range of hardware and software which was largely determined by the technical equipment available at the Faculty of Education or in the resources available through their respective Bachelor's and Master's degree studies. Most of the student teachers involved in DIYLab activities were ICT students. In general, in the case of ICT student teachers it was virtually impossible to determine any improvement or progress in their digital literacy. Based on the outputs, the students were mostly using video, presentation and text editors. For ICT student teachers, the majority of DIY activities were only an opportunity (sometimes a routine) to apply their digital literacy skills to solving problems, while for Art or Biology Education student teachers DIY activities distinctively contributed to improvement of their skills in using digital technology. As a result of involvement in DIYLab they learned to create animations etc. DIY activities with ICT student teachers increased their didactic thinking about the role and possibilities of digital technology in education; besides that, they assisted their Art Education peers to be able to do animations or Biology student teachers to design a technological solution and to install a camera in a birdhouse.
(6) Connection to study programmes / curriculum
The student teachers did their DIYLab activities during one semester as a part of their final work with the aim of gaining credits and grades. Each DIYLab outcome consisted of two main parts: (i) a product as a solution of the problem (e.g., SW application, a set of 3D tools, models, database, mechanical drawing, electric circuit, robot), and (ii) a digital object (e.g. video, movie, animation) which visualises a process demonstrating how student teachers were progressing, how they learned a problem and how they managed to resolve a DIYLab activity. Fig. 1 shows the results of a questionnaire survey focused on how teacher educators evaluated their DIYLab activities connecting the six pedagogical principles. From this evaluation the following average values for each item are derived: contribution to autonomous / self-regulated learning (4.8), digital competence improvement (4.4), connection with the curriculum (4.3), support to trans-disciplinary knowledge (3.9), inquiry-based teaching and learning (3.6), and support for collaborative learning (3.1). The majority of problems solved in DIYLab were not characteristic of inquirybased problems. Teacher educators invested a lot of time facilitating student teachers to develop a DIYLab idea. For the teacher educators it was not always easy to motivate their students to bring their own projects. Students seemed to be afraid to step into new territory. The main motivation to carry out their DIY activities for some student teachers lay in getting credits, not in solving problems. In part-time study, there was not much time for defining and understanding a problem for inquiry-based teaching and inter-disciplinary links. Potentially, this is an advantage since it may have contributed to increased online collaboration between students and an increase in collaborative learning. There were several factors (teacher educator, student, solved problem, study specialisation, motivation, experiences, etc.) that influenced a way how particular pedagogical principles were accomplished in each individual DIYLab activity (Fig. 1).
Examples of DIYLab activities carried out by Bc degree student teachers
The student teachers on ICT Bachelor Studies' courses counted on their teachers to assign them a topic. Although some of them work in computer companies or specialise in some aspect of computing, they rarely came up with their own proposals. When they did have ideas for topics for DIYLab activities, these were related to their hobbies (e.g. diving, gardening, theatre). Some of them were surprised that they had to do something linking knowledge and experiences from different branches or disciplines.
For example, there was a student who was interested in scuba diving and who proposed a project, Diver's LogBook (see http://hub.diylab.eu/2016/01/27/diverslogbook/). Another student is part of a theatre group, Kašpárek and Jitřenka, and she decided to initiate a project entitled, Database Development -database of theatre ensembles (http://hub.diylab.eu/2016/01/27/database-development-database-oftheatre-ensembles/). Bc. student teachers of Biology who studied the life of birds in a nesting box directed their activities to a project, Bird House (http://hub.diylab.eu/2016/01/27/bird-house/). In courses focused on digital technology, one ICT student looked for a solution as to How to create an animated popup message in Adobe After Effects.
Bachelor student teachers weren't used to thinking about what and how they had learned, much less how to visualise their own learning process. They didn't consider thinking about learning and reflection on DIYLab activity to be "professional". Unlike Art Education students, the ICT student teachers are advanced in digital technology, but they lack knowledge and skills to observe and to visually display and present processes. Bachelor ICT student teachers very often reduced a visualisation of their DIYLab procedure to a set of screenshots. They recognised DIYLab only from the technological point of view and the extent to which software and hardware were applied. Generally, for Bc. student teachers it was very difficult to visualise their learning in DIY activities. They were not particularly interested in the pedagogical concept of learning and how to visualise its process because in Bc. study programmes the main focus is on acquiring knowledge from particular branches (Biology, ICT, Art, etc.), rather than understanding the learning process involved.
Examples of DIY activities carried out by MA degree student teachers
The MA degree student teachers did their DIYLab activities predominantly within didactic subjects or courses making limited use of technology. The majority of them were part-time students who work in schools as unqualified teachers of ICT or Informatics subjects, and so most of them tried to apply the DIYLab idea in their teaching with their pupils.
MA student teachers thought about and mediated the topics and the purpose of DIYLab activities more deeply than Bachelor-level students mainly due to the fact that they realised their DIYLab activities primarily in courses focused on didactics' aspects and contexts. MA student teachers elaborated some general themes proposed by their teacher educators.
The requirement to record and visualise a learning process did not surprise MA student teachers. They understand how important it is to visualise a learning process from a pedagogical point of view. Data taken from such visualisation can help teachers to understand better the learning outcomes of their pupils. However, they had no experience in the process of visualisation. Similarly to the Bachelor-level students, they very often reduced a visualisation of DIYLab procedure to a set of screenshots. A few of them did an animation of their way of thinking about the DIYLab activity. (e.g., Problems which a human cannot solve without using a computertomography). Some of them did a tutorial (An animated story about a small wizard, https://www.youtube.com/watch?v=QA1skX4GiBI). Some of them developed a methodical guide how to work with pupils (DIY_Little Dances in Scratch -Start to move_CZ, http://hub.diylab.eu/2016/01/11/little-dances-in-scratch-start-tomove/diy_little-dances-in-scratch-start-to-move_cz/), some did a comic strip.
Examples of DIY activities carried out by pupils and completed in lessons managed by ICT student teachers on school practice
Some part-time ICT student teachers decided to apply DIYLab to their class teaching in schools where their pupils did similar DIY activities. All these experiences from schools demonstrate a great enthusiasm and motivation to learn and to solve problems related to after-school activities and through which they develop their digital literacy. For example a girl (aged 15) enjoys recording and editing digital sounds in her free time. She designed a DIYLab activity as a sound story-telling of a boy who would like to meet his girlfriend (https://www.youtube.com/watch?v=a8TzZCAzxKo). She describes how she produced the story-telling as a movie in which she explains what she did, how she collected sounds and which software applications were used in her work (https://www.youtube.com/watch?v=jbSID9_B72k).
Conclusions
Although the EU project has now ended, the Faculty of Education will continue DIYLab activities as a compulsory assignment and an integrated component on courses for Bachelor degree ICT study and for full-time and part-time student teacher of MA degree ICT study. Great attention will be given to (i) ways how to motivate student teachers to choose appropriate topics from their after-school interests and hobbies and to design DIY activities for inquiry-based learning, (ii) methodological approaches as to how to visualise a process of learning in Informatics, Computing or ICT subjects; diaries, scenarios, process-folios or log books used in Art Education or in technical or technological oriented branches will be used as an inspiration for such an approach in ICT teacher education, and (iii) how to support a close interdisciplinary collaboration among student teachers and teacher educators.
The challenge for the Czech context is to change the culture from teacher dependency to students as independent, autonomous learners in the classroom. Creativity in content, methods and pedagogy are absolute requirements to achieve this goal. The DIYLab project showed differences and limits in the culture of approaches to creativity from a pedagogical point of view: if we compare DIY learning in educational practice in Prague with approaches to creative learning in Barcelona, which is a popular place for international creative artists and DIY communities, then in the Czech context DIYLab will need much longer to break free from the bureaucratic concept of teaching and the assessment of learning outcomes.
The evaluative criteria used to frame the DIY process, and the parameters analysed, enabled this analysis and could support the design and thinking of teachers who are considering using the DIY method.
[Table 1, continued] "… voluntarily spend a lot of time in intense learning, they tackle highly technical practices, including film editing, robotics, and writing novels among a host of other activities across various DIY networks." (5) to improve digital literacy / digital competence: to develop the photo-visual digital thinking skill as a component of digital literacy (Y. Eshet-Alkalai). (6) to be connected with the curriculum: school curriculum / study programme for HEI students.
Fig. 1. Teacher educators' evaluation of their DIYLab activities using a scale 0-5, with 0 being no accomplishment and 5 being maximal accomplishment. (Source: [2])
Table 1. Six pedagogical principles for the design of DIYLab activities. Columns: Feature of DIYLab activity | Idea | Authors, resources.
(1) to support collaborative learning: Members of DIY communities collaborate mutually.
The student teachers worked on 16 themes for DIYLab activities and produced 81 digital outcomes of different quality and content within one semester: Multimedia project (6 DIY digital objects / 11 students), Design of Android applications (4/11), Little dances with Scratch (4/12), Collection of examples of problems which human cannot solve without using computer (6/28), Contemporary trends in WWW pages development (5/9), Teaching learning object development (4/12), Wiki of teaching activities (1/8), Educational robotics project (6/16), Anatomy and morphology of plants (5/20), Biological and geological technology - field trips (2/3), How I'm becoming a teacher (17/23), Animated stories (11/13), and Teaching with tablets (10/26).
01762974 | en | [
"math.math-ap"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01762974/file/GRE_2102_MD3.pdf | Tomasz Dębiec
email: t.debiec@mimuw.edu.pl
Marie Doumic
email: marie.doumic@inria.fr
AND Piotr Gwiazda
email: pgwiazda@mimuw.edu.pl
Emil Wiedemann
email: wiedemann@ifam.uni-hannover.de
Tomasz Dębiec
Sorbonne Universités
Relative Entropy Method for Measure Solutions of the Growth-Fragmentation Equation
Keywords: measure solutions, growth-fragmentation equation, structured population, relative entropy, generalised Young measure
INTRODUCTION
Structured population models were developed for the purpose of understanding the evolution of a population over time -and in particular to adequately describe the dynamics of a population by its distribution along some "structuring" variables representing e.g., age, size, or cell maturity. These models, often taking the form of an evolutionary partial differential equation, have been extensively studied for many years. The first age structure was considered in the early 20th century by Sharpe and Lotka [START_REF] Sharpe | A problem in age-distribution[END_REF], who already made predictions on the question of asymptotic behaviour of the population, see also [START_REF] Kermack | A contribution to the mathematical theory of epidemics[END_REF][START_REF] Kermack | Contribution to the mathematical theory of epidemics. ii. the problem of endemicity[END_REF]. In the second half of the 20th century size-structured models appeared first in [START_REF] Bell | Cell growth and division: I. a mathematical model with applications to cell volume distributions in mammalian suspension cultures[END_REF][START_REF] Sinko | A new model for age-size structure of a population[END_REF]. These studies gave rise to other physiologically structured models (agesize, saturation, cell maturity, etc.).
The object of this note is the growth-fragmentation model, which has been found to fit many different contexts: cell division, polymerisation, neurosciences, prion proliferation, or even telecommunication. In its general linear form, the model reads as follows.
\partial_t n(t,x) + \partial_x\big(g(x)\,n(t,x)\big) + B(x)\,n(t,x) = \int_x^{\infty} k(x,y)\,B(y)\,n(t,y)\,\mathrm{d}y,
\qquad g(0)\,n(t,0) = 0, \qquad n(0,x) = n^0(x).
(1.1)
Here n(t, x) represents the concentration of individuals of size x ≥ 0 at time t > 0, g(x) ≥ 0 is their growth rate, B(x) ≥ 0 is their division rate and k(x, y) is the proportion of individuals of size x created out of the division of individuals of size y. This equation incorporates a very important phenomenon in biology -a competition between growth and fragmentation. Clearly they have opposite dynamics: growth drives the population towards a larger size, while fragmentation makes it smaller and smaller. Depending on which factor dominates, one can observe various long-time behaviour of the population distribution.
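As a purely illustrative aside (not part of the original analysis), this competition can be visualised numerically with an assumed, simple set of coefficients: constant growth g = 1, division rate B(x) = x, and the equal-mitosis kernel (each dividing individual of size y produces two fragments of size y/2, so the fragmentation integral reduces to 4B(2x)n(t,2x)). The sketch below uses a basic upwind-in-size, explicit-in-time discretisation; the renormalised shape of n settles onto a steady profile and the Malthus parameter is estimated from the mass growth.

```python
import numpy as np

# Illustrative sketch (assumed coefficients, not those analysed in the paper):
#   d_t n + d_x(g n) + B(x) n = 4 B(2x) n(t, 2x),  with g = 1 and B(x) = x (equal mitosis).
L, Nx = 6.0, 600
dx = L / Nx
x = np.arange(Nx) * dx            # grid nodes, x[0] = 0
g, B = 1.0, x.copy()              # constant growth rate, linear division rate

n = np.exp(-((x - 1.0) ** 2) / 0.05)     # arbitrary initial size distribution
n /= np.sum(x * n) * dx                  # normalise the mass  int x n dx  to 1
dt = 0.4 * dx / g                        # CFL-limited explicit time step
idx = np.arange(Nx // 2)

lam_est = 0.0
for step in range(8000):
    flux = g * n
    transport = -(flux - np.concatenate(([0.0], flux[:-1]))) / dx   # upwind, zero influx at x = 0
    gain = np.zeros_like(n)
    gain[idx] = 4.0 * B[2 * idx] * n[2 * idx]                       # mitosis gain term
    n = n + dt * (transport - B * n + gain)
    mass = np.sum(x * n) * dx
    lam_est = np.log(mass) / dt          # instantaneous Malthus-parameter estimate
    n /= mass                            # renormalise: plays the role of n e^{-lambda t}

# after the loop, n (suitably normalised) approximates the steady profile N up to a constant
print("estimated Malthus parameter lambda:", lam_est)
```

The chosen kernel conserves mass and produces exactly two fragments per division, so it is consistent with the admissibility conditions imposed on k later in the paper; any quantitative output of this sketch is of course tied to these assumed coefficients.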
Many authors have studied the long-time asymptotics (along with well-posedness) of variants of the growth-fragmentation equation, see e.g. [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF][START_REF] Doumic | Eigenelements of a general aggregation-fragmentation model[END_REF][START_REF] Michel | Existence of a solution to the cell division eigenproblem[END_REF][START_REF] Mischler | Spectral analysis of semigroups and growth-fragmentation equations[END_REF][START_REF] Perthame | Exponential decay for the fragmentation or cell-division equation[END_REF]. The studies which establish convergence, in a proper sense, of a (renormalised) solution towards a steady profile were until recently limited only to initial data in weighted L 1 spaces. The classical tools for such studies include a direct application of the Laplace transform and the semigroup theory [START_REF] Mischler | Spectral analysis of semigroups and growth-fragmentation equations[END_REF]. These methods could also provide an exponential rate of convergence, linked to the existence of a spectral gap.
A different approach was developed by Perthame et al. in a series of papers [START_REF] Michel | General entropy equations for structured population models and scattering[END_REF][START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF][START_REF] Perthame | Exponential decay for the fragmentation or cell-division equation[END_REF]. Their Generalised Relative Entropy (GRE) method provides a way to study long-time asymptotics of linear models even when no spectral gap is guaranteed -however failing to provide a rate of convergence, unless an entropy-entropy dissipation inequality is obtained [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF]. Recently Gwiazda and Wiedemann [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF] extended the GRE method to the case of the renewal equation with initial data in the space of non-negative Radon measures. Their result is motivated by the increasing interest in measure solutions to models of mathematical biology, see e.g. [START_REF] Carrillo | Structured populations, cell growth and measure valued balance laws[END_REF][START_REF] Gwiazda | A nonlinear structured population model: Lipshitz continuity of measure valued solutions with respect to model ingredients[END_REF] for some recent results concerning well-posedness and stability theory in the space of non-negative Radon measures. The clear advantage of considering measure data is that it is biologically justified -it allows for treating the situation when a population is initially concentrated with respect to the structuring variable (and is, in particular, not absolutely continuous with respect to the Lebesgue measure). This is typically the case when departing from a population formed by a unique cell. We refer also to the recent result of Gabriel [START_REF] Gabriel | Measure solutions to the conservative renewal equation[END_REF], who uses the Doeblin method to analyze the long-time behaviour of measure solutions to the renewal equatio n.
Let us remark that the method of analysis employed in the current paper is inspired by the classical relative entropy method introduced by Dafermos in [START_REF] Dafermos | The second law of thermodynamics and stability[END_REF]. In recent years this method was extended to yield results on measure-valued-strong uniqueness for equations of fluid dynamics [START_REF] Brenier | Weak-strong uniqueness for measure-valued solutions[END_REF][START_REF] Feireisl | Dissipative measurevalued solutions to the compressible Navier-Stokes system[END_REF][START_REF] Gwiazda | Weak-strong uniqueness for measurevalued solutions of some compressible fluid models[END_REF] and general conservation laws [START_REF] Christoforou | Relative entropy for hyperbolic-parabolic systems and application to the constitutive theory of thermoviscoelasticity[END_REF][START_REF] Demoulini | Weak-strong uniqueness of dissipative measure-valued solutions for polyconvex elastodynamics[END_REF][START_REF] Gwiazda | Dissipative measure valued solutions for general conservation laws[END_REF]. See also [START_REF] Dębiec | Relative entropy method for measure-valued solutions in natural sciences[END_REF] and refereces therein.
The purpose of this paper is to generalise the results of [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF] to the case of a general growthfragmentation equation. Similarly as in that paper we make use of the concept of a recession function to make sense of compositions of nonlinear functions with a Radon measure. However, the appearance of the term H (u ε (t, x))u ε (t, y) in the entropy dissipation (see (3.8) below), which mixes dependences on the variables x and y, poses a novel problem, which is overcome by using generalised Young measures and time regularity.
The current paper is structured as follows: in Section 2 we recall some basic results on Radon measures, recession functions and Young measures as well as introduce the assumptions of our model, in Section 3 we state and prove the GRE inequality, which is then used to prove a longtime asymptotics result in Section 4.
DESCRIPTION OF THE MODEL
2.1. Preliminaries.
In what follows we denote by R + the set [0, ∞). By M (R + ) we denote the space of signed Radon measures on R + . By Lebesgue's decomposition theorem, for each µ ∈ M (R + ) we can write
µ = µ a + µ s ,
where µ a is absolutely continuous with respect to the Lebesgue measure L 1 , and µ s is singular. The space M (R + ) is endowed with the total variation norm
\|\mu\|_{TV} := \int_{\mathbb{R}_+} \mathrm{d}|\mu|,
and we denote \|\mu\|_{TV} = TV(\mu). By the Riesz Representation Theorem we can identify this space with the dual space to the space C_0(\mathbb{R}_+) of continuous functions on \mathbb{R}_+ which vanish at infinity. The duality pairing is given by
\langle \mu, f \rangle := \int_0^{\infty} f(\xi)\,\mathrm{d}\mu(\xi).
By M + (R + ) we denote the set of positive Radon measures of bounded total variation. We further define the ϕ-weighted total variation by µ TV ϕ := R + ϕd|µ| and correspondingly the space M + (R + ; ϕ) of positive Radon measures whose ϕ-weighted total variation is finite. We still denote TV (µ) = µ TV ϕ . Of course we require that the function ϕ be non-negative. In fact, for our purposes ϕ will be strictly positive and bounded on each compact subset of (0, ∞).
We say that a sequence ν n ∈ M (R + ) converges weak * to some measure ν ∈ M (R + ) if
\langle \nu_n, f\rangle \longrightarrow \langle \nu, f\rangle \quad \text{for each } f \in C_0(\mathbb{R}_+).
By a Young measure on R + × R + we mean a parameterised family ν t,x of probability measures on R + . More precisely, it is a weak * -measurable function (t, x) → ν t,x , i.e. such that the mapping
(t,x) \mapsto \langle \nu_{t,x}, f\rangle \ \text{is measurable for each } f \in C_0(\mathbb{R}_+).
Young measures are often used to describe limits of weakly converging approximating sequences to a given problem. They serve as a way of describing weak limits of nonlinear functions of the approximate solution. Indeed, it is a classical result that a uniformly bounded measurable sequence u n generates a Young measure by which one represents the limit of f (u n ), where f is some non-linear function, see [?] for sequences in L ∞ and [START_REF] Ball | A version of the fundamental theorem for Young measures[END_REF] for measurable sequences.
This framework was used by DiPerna in his celebrated paper [START_REF] Diperna | Measure-valued solutions to conservation laws[END_REF], where he introduced the concept of an admissible measure-valued solution to scalar conservation laws. However, in more general contexts (e.g. for hyperbolic systems, where there is usually only one entropy-entropyflux pair) one needs to be able to describe limits of sequences which exhibit oscillatory behaviour as well as concentrate mass. Such a framework is provided by generalised Young measures, first introduced in the context of incompressible Euler equations in [START_REF] Diperna | Oscillations and concentrations in weak solutions of the incompressible fluid equations[END_REF], and later developed by many authors. We follow the exposition of Alibert, Bouchitté [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF] and Kristensen, Rindler [START_REF] Kristensen | Characterization of Generalized Gradient Young Measures Generated by Sequences in W 1,1 and BV[END_REF].
Suppose f : R n → R + is an even continuous function with at most linear growth, i.e.
| f (x)| ≤ C(1 + |x|)
for some constant C. We define, whenever it exists, the recession function of f as
f^{\infty}(x) = \lim_{s\to\infty} \frac{f(sx)}{s} = \lim_{s\to\infty} \frac{f(-sx)}{s}.
Definition 2.1. The set F (R) of continuous functions f : R → R + for which f ∞ exists and is continuous on S n-1 is called the class of admissible integrands.
By a generalised Young measure on Ω = R + × R + we mean a parameterised family (ν t,x , m) where for (t, x) ∈ Ω, ν t,x is a family of probability measures on R and m is a nonnegative Radon measure on Ω. In the following, we may omit the indices for ν t,x and denote it simply (ν, m).
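As an illustration (standard textbook examples, not taken from this paper; c denotes an arbitrary constant), two admissible integrands and their recession functions are
f(u)=\sqrt{1+u^{2}}:\quad f^{\infty}(u)=\lim_{s\to\infty}\tfrac{1}{s}\sqrt{1+s^{2}u^{2}}=|u|, \qquad f(u)=|u-c|:\quad f^{\infty}(u)=\lim_{s\to\infty}\tfrac{1}{s}\,|su-c|=|u|,
where for the second integrand both one-sided limits in the definition coincide even though f is not even. On \Omega=(0,1), the purely oscillating sequence u_{n}(x)=\operatorname{sign}\sin(2\pi nx) generates
\nu_{x}=\tfrac12\delta_{-1}+\tfrac12\delta_{+1},\qquad m=0,\qquad f(u_{n})\,\mathrm{d}x\ \overset{*}{\rightharpoonup}\ \tfrac12\big(f(-1)+f(1)\big)\,\mathrm{d}x,
while the purely concentrating sequence u_{n}(x)=n\,\mathbf{1}_{(0,1/n)}(x) generates
\nu_{x}=\delta_{0},\qquad m=\delta_{0},\qquad f(u_{n})\,\mathrm{d}x\ \overset{*}{\rightharpoonup}\ f(0)\,\mathrm{d}x+f^{\infty}\,\delta_{0}.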
The following result gives a way of representing weak * limits of sequences bounded in L 1 via a generalised Young measure. It was first proved in [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF]Theorem 2.5]. We state an adaptation to our simpler case. Proposition 2.2. Let (u n ) be a bounded sequence in L 1 loc (Ω; µ, R), where µ is a measure on Ω. There exists a subsequence (u n k ), a nonnegative Radon measure m on Ω and a parametrized family of probabilities (ν ζ ) such that for any even function f ∈ F (R) we have
f(u_{n_k}(\zeta))\,\mu \ \overset{*}{\rightharpoonup}\ \langle \nu_{\zeta}, f\rangle\,\mu + f^{\infty} m \qquad (2.1)
Proof. We apply Theorem 2.5. and Remark 2.6 in [START_REF] Alibert | Non-uniform integrability and generalized Young measures[END_REF], simplified by the fact that f is even and that we only test against functions f independent of x. Note that the weak * convergence then has to be understood in the sense of compactly supported test functions ϕ ∈ C 0 (Ω, R).
The above proposition can in fact be generalised to say that every bounded sequence of generalised Young measures possesses a weak * convergent subsequence, cf. [26, Corollary 2.] Proposition 2.3. Let (ν n , m n ) be a sequence of generalised Young measures on Ω such that
• The map x \mapsto \langle \nu^n_{x}, |\cdot|\rangle is uniformly bounded in L^1,
• The sequence (m^n(\Omega)) is uniformly bounded.
Then there is a generalised Young measure (ν, m) on Ω such that (ν n , m n ) converges weak * to (ν, m).
2.2. The model. We consider the growth-fragmentation equation under a general form:
\partial_t n(t,x) + \partial_x\big(g(x)\,n(t,x)\big) + B(x)\,n(t,x) = \int_x^{\infty} k(x,y)\,B(y)\,n(t,y)\,\mathrm{d}y,
\qquad g(0)\,n(t,0) = 0, \qquad n(0,x) = n^0(x). \qquad (2.2)
We assume n^0 \in \mathcal{M}^+(\mathbb{R}_+).
The fundamental tool in studying the long-time asymptotics with the GRE method is the existence and uniqueness of the first eigenelements (λ , N, ϕ), i.e. solutions to the following primal and dual eigenproblems.
\frac{\partial}{\partial x}\big(g(x)N(x)\big) + (B(x)+\lambda)N(x) = \int_x^{\infty} k(x,y)B(y)N(y)\,\mathrm{d}y,
\qquad g(0)N(0) = 0, \qquad N(x) > 0 \ \text{for } x>0, \qquad \int_0^{\infty} N(x)\,\mathrm{d}x = 1, \qquad (2.3)
-g(x)\frac{\partial}{\partial x}\varphi(x) + (B(x)+\lambda)\varphi(x) = B(x)\int_0^{x} k(y,x)\varphi(y)\,\mathrm{d}y,
\qquad \varphi(x) > 0, \qquad \int_0^{\infty} \varphi(x)N(x)\,\mathrm{d}x = 1. \qquad (2.4)
We make the following assumptions on the parameters of the model.
B \in W^{1,\infty}(\mathbb{R}_+, \mathbb{R}_+^*), \quad g \in W^{1,\infty}(\mathbb{R}_+, \mathbb{R}_+^*), \quad \forall x \ge 0,\ g \ge g_0 > 0, \qquad (2.5)
k \in C_b(\mathbb{R}_+ \times \mathbb{R}_+), \quad \int_0^{y} k(x,y)\,\mathrm{d}x = 2, \quad \int_0^{y} x\,k(x,y)\,\mathrm{d}x = y, \qquad (2.6)
k(x, y<x) = 0, \quad k(x, y>x) > 0. \qquad (2.7)
These guarantee in particular existence and uniqueness of a solution n ∈ C (R + ; L 1 ϕ (R + )) for L 1 initial data (see e.g. [START_REF] Perthame | Kinetic formulation of conservation laws[END_REF]), existence of a unique measure solution for data in M + (R + ) (cf. [START_REF] Carrillo | Structured populations, cell growth and measure valued balance laws[END_REF]), as well as existence and uniqueness of a dominant eigentriplet (λ > 0, N(x), ϕ(x)), cf. [START_REF] Doumic | Eigenelements of a general aggregation-fragmentation model[END_REF]. In particular the functions N and ϕ are continuous, N is bounded and ϕ has at most polynomial growth. In what follows N and ϕ will always denote the solutions to problems (2.3) and (2.4), respectively. Let us remark that in the L 1 setting we have the following conservation law
\int_0^{\infty} n_\varepsilon(t,x)\,e^{-\lambda t}\varphi(x)\,\mathrm{d}x = \int_0^{\infty} n^0(x)\,\varphi(x)\,\mathrm{d}x. \qquad (2.8)
2.3. Measure and measure-valued solutions. Let us observe that there are two basic ways to treat the above model in the measure setting. The first one is to consider a measure solution, i.e. a narrowly continuous map t \mapsto \mu_t \in \mathcal{M}^+(\mathbb{R}_+), which satisfies (2.2) in the weak sense, i.e. for each \psi \in C^1_c(\mathbb{R}_+ \times \mathbb{R}_+),
-\int_0^{\infty}\!\!\int_0^{\infty} \big(\partial_t \psi(t,x) + \partial_x \psi(t,x)\,g(x)\big)\,\mathrm{d}\mu_t(x)\,\mathrm{d}t + \int_0^{\infty}\!\!\int_0^{\infty} \psi(t,x)\,B(x)\,\mathrm{d}\mu_t(x)\,\mathrm{d}t
= \int_0^{\infty}\!\!\int_0^{\infty} \psi(t,x)\int_x^{\infty} k(x,y)B(y)\,\mathrm{d}\mu_t(y)\,\mathrm{d}x\,\mathrm{d}t + \int_0^{\infty} \psi(0,x)\,\mathrm{d}n^0(x). \qquad (2.9)
Thus a measure solution is a family of time-parameterised non-negative Radon measures on the structure-physical domain R + .
The second way is to work with generalised Young measures and corresponding measurevalued solutions. To prove the generalised relative entropy inequality, which relies on considering a family of non-linear renormalisations of the equation, we choose to work in this second framework.
A measure-valued solution is a generalised Young measure (ν, m), where the oscillation measure is a family of parameterised probabilities over the state domain R + such that equation (2.2) is satisfied by its barycenters ν t,x , ξ , i.e. the following equation
\partial_t\big(\langle \nu_{t,x}, \xi\rangle + m\big) + \partial_x\Big(g(x)\big(\langle \nu_{t,x}, \xi\rangle + m\big)\Big) + B(x)\big(\langle \nu_{t,x}, \xi\rangle + m\big) = \int_x^{\infty} k(x,y)B(y)\,\langle \nu_{t,y}, \xi\rangle\,\mathrm{d}y + \int_x^{\infty} k(x,y)B(y)\,\mathrm{d}m(y) \qquad (2.10)
holds in the sense of distributions on R * + × R * + . It is proven in [START_REF] Gwiazda | A nonlinear structured population model: Lipshitz continuity of measure valued solutions with respect to model ingredients[END_REF] that equation (2.2) has a unique measure solution. To this solution one can associate a measure-valued solution -for example, given a measure solution t → µ t one can define a measure-valued solution by
\Big\langle \delta_{\frac{\mathrm{d}\mu^a_t}{\mathrm{d}L^1}}, \mathrm{id} \Big\rangle = \mu^a_t, \qquad m = \mu^s_t,
where \frac{\mathrm{d}\mu_1}{\mathrm{d}\mu_2} denotes the Radon-Nikodym derivative of \mu_1 with respect to \mu_2. However, clearly, the measure-valued solutions are not unique -- since the equation is linear, there is freedom in choosing the Young measure as long as the barycenter satisfies equation (2.10). For example, a different measure-valued solution can be defined by
\Big\langle \tfrac{1}{2}\,\delta_{2\frac{\mathrm{d}\mu^a_t}{\mathrm{d}L^1}} + \tfrac{1}{2}\,\delta_{\{0\}}, \mathrm{id} \Big\rangle = \mu^a_t.
Uniqueness of measure-valued solution can be ensured by requiring that the generalised Young measure satisfies not only the equation, but also a family of nonlinear renormalisations. This was the case in the work of DiPerna [START_REF] Diperna | Measure-valued solutions to conservation laws[END_REF], see also [START_REF] Dębiec | Relative entropy method for measure-valued solutions in natural sciences[END_REF].
To establish the GRE inequality which will then be used to prove an asymptotic convergence result, we consider the measure-valued solution generated by a sequence of regularized solutions. This allows us to use the classical GRE method established in [START_REF] Perthame | Transport equations in biology[END_REF]. Careful passage to the limit will then show that analogous inequalities hold for the measure-valued solution.
GRE INEQUALITY
In this section we formulate and prove the generalised relative entropy inequality, our main tool in the study of long-time asymptotics for equation (2.2). We take advantage of the well-known GRE inequalities in the L 1 setting. To do so we consider the growth-fragmentation equation (2.2) for a sequence of regularized data and prove that we can pass to the limit, thus obtaining the desired inequalities in the measure setting.
Let n 0 ε ∈ L 1 ϕ (R + ) be a sequence of regularizations of n 0 converging weak * to n 0 in the space of measures and such that TV(n 0 ε ) → TV(n 0 ). Let n ε denote the corresponding unique solution to (2.2) with n 0 ε as an initial condition. Then for any differentiable strictly convex admissible integrand H we define the usual relative entropy
\mathcal{H}_\varepsilon(t) := \int_0^{\infty} \varphi(x)N(x)\,H\!\left(\frac{n_\varepsilon(t,x)\,e^{-\lambda t}}{N(x)}\right)\mathrm{d}x
and entropy dissipation
D^H_\varepsilon(t) = \int_0^{\infty}\!\!\int_0^{\infty} \varphi(x)N(y)B(y)k(x,y)\left[ H\!\left(\frac{n_\varepsilon(t,y)e^{-\lambda t}}{N(y)}\right) - H\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right) - H'\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)\left(\frac{n_\varepsilon(t,y)e^{-\lambda t}}{N(y)} - \frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)\right]\mathrm{d}x\,\mathrm{d}y.
Then, as shown e.g. in [START_REF] Michel | General entropy equations for structured population models and scattering[END_REF], one can show that
\frac{\mathrm{d}}{\mathrm{d}t}\int_0^{\infty} \varphi(x)N(x)\,H\!\left(\frac{n_\varepsilon(t,x)e^{-\lambda t}}{N(x)}\right)\mathrm{d}x = -D^H_\varepsilon(t) \qquad (3.1)
with right-hand side being non-positive due to convexity of H. Hence the relative entropy is non-increasing. It follows that
\mathcal{H}_\varepsilon(t) \le \mathcal{H}_\varepsilon(0) \quad \text{and, since } H \ge 0, \quad \int_0^{\infty} D^H_\varepsilon(t)\,\mathrm{d}t \le \mathcal{H}_\varepsilon(0). \qquad (3.2)
In the next proposition we prove corresponding inequalities for the measure-valued solution generated by the sequence n ε . This result is an analogue of Theorem 5.1 in [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF]. Proposition 3.1. With notation as above, there exists a subsequence (not relabelled), generating a generalised Young measure (ν, m) with m = m t ⊗ dt for a family of positive Radon measures m t , such that
\lim_{\varepsilon\to 0}\int_0^{\infty} \chi(t)\,\mathcal{H}_\varepsilon(t)\,\mathrm{d}t = \int_0^{\infty} \chi(t)\left[\int_0^{\infty} \varphi(x)N(x)\,\langle \nu_{t,x}(\alpha), H(\alpha)\rangle\,\mathrm{d}x + \int_0^{\infty} \varphi(x)N(x)\,H^{\infty}\,\mathrm{d}m_t(x)\right]\mathrm{d}t \qquad (3.3)
for any \chi \in C_c([0,\infty)), and
\lim_{\varepsilon\to 0}\int_0^{\infty} D^H_\varepsilon(t)\,\mathrm{d}t = \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} \varphi(x)N(y)B(y)k(x,y)\,\big\langle \nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha)\big\rangle\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t
+ \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} \varphi(x)N(y)B(y)k(x,y)\,\big\langle \nu_{t,x}(\alpha),\ H^{\infty} - H'(\alpha)\big\rangle\,\mathrm{d}m_t(y)\,\mathrm{d}x\,\mathrm{d}t \ \ge\ 0. \qquad (3.4)
We denote the limits on the left-hand sides of the above equations by ∞ 0 χ(t)H (t) dt and ∞ 0 D H (t) dt, respectively, thus defining the measure-valued relative entropy and entropy dissipation for almost every t. We further set
\mathcal{H}(0) := \int_0^{\infty} \varphi(x)N(x)\,H\!\left(\frac{(n^0)^a(x)}{N(x)}\right)\mathrm{d}x + \int_0^{\infty} \varphi(x)\,H^{\infty}\!\left(\frac{(n^0)^s}{|(n^0)^s|}(x)\right)\mathrm{d}|(n^0)^s(x)|. \qquad (3.5)
We then have d dt H (t) ≤ 0 in the sense of distributions, (3.6)
and ∞ 0 D H (t)dt ≤ H (0). (3.7)
Proof. The function t → ∞ 0 n ε (t, x)e -λt ϕ(x)dx is constant and the function N is strictly positive on (0, ∞). Therefore the sequence u ε (t, x) := n ε (t,x)e -λt N(x) is uniformly bounded in L ∞ (R + ; L 1 ϕ,loc (R + )). Hence we can apply Proposition 2.2 to obtain a generalised Young measure (ν, m) on
R + × R + . Since u ε ∈ L ∞ (R + ; L 1 ϕ,loc (R + )), we have m ∈ L ∞ (R + ; M (R + ; ϕ))
. By a standard disintegration argument (see for instance [START_REF] Evans | Weak convergence methods for nonlinear partial differential equations[END_REF]Theorem 1.5.1]) we can write the slicing measure for m, m(dt, dx) = m t (dx) ⊗ dt, where the map t → m t ∈ M + (R + ; ϕ) is measurable and bounded.
By Proposition 2.2 we have the weak * convergence
H(u_\varepsilon(t,x))\,(\mathrm{d}t \otimes \varphi(x)\mathrm{d}x) \ \overset{*}{\rightharpoonup}\ \langle \nu_{t,x}, H\rangle\,(\mathrm{d}t \otimes \varphi(x)\mathrm{d}x) + H^{\infty}\,m.
Testing with (t,x) \mapsto \chi(t)N(x), where \chi \in C_c(\mathbb{R}_+), we obtain (3.3). Further, the convergence \int_0^{\infty}\chi(t)\mathcal{H}_\varepsilon(t)\,\mathrm{d}t \to \int_0^{\infty}\chi(t)\mathcal{H}(t)\,\mathrm{d}t implies (3.6), since for \mathcal{H}_\varepsilon we have the corresponding inequality (3.1).
We now investigate the limit as \varepsilon \to 0 of \int_0^{\infty} D^H_\varepsilon(t)\,\mathrm{d}t. Denoting \Phi(x,y) := k(x,y)N(y)B(y), we have
D^H_\varepsilon(t) = \int_0^{\infty}\!\!\int_0^{\infty} \Phi(x,y)\varphi(x)\big[H(u_\varepsilon(t,y)) - H(u_\varepsilon(t,x)) - H'(u_\varepsilon(t,x))u_\varepsilon(t,y) + H'(u_\varepsilon(t,x))u_\varepsilon(t,x)\big]\,\mathrm{d}x\,\mathrm{d}y. \qquad (3.8)
We consider each of the four terms of the sum separately on the restricted domain [0, T ] × [η, K] 2 for fixed T > 0 and K > η > 0. Let D H ε,η,K denote the entropy dissipation with the integrals of (3.8) each taken over the subsets [η, K] of R + .
We now apply Proposition 2.2 to the sequence u ε , the measure dt ⊗ ϕ(x)dx on the set [0, T ] × [η, K]. The first two and the last integrands of D H ε,η,K (t) depend on t and only either on x or on y. Therefore we can pass to the limit as ε → 0 by Proposition 2.2 using a convenient test function. More precisely, testing with (t, x) → K η Φ(x, y)dy, we obtain the convergence
-\int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H(u_\varepsilon(t,x))\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t \ \longrightarrow\ -\int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,\langle \nu_{t,x}, H\rangle\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t - \int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H^{\infty}\,\mathrm{d}m_t(x)\,\mathrm{d}y\,\mathrm{d}t,
and, noticing that the recession function of \alpha \mapsto \alpha H'(\alpha) is H^{\infty},
\int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,x)\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t \ \longrightarrow\ \int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,\langle \nu_{t,x}, \alpha H'(\alpha)\rangle\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}t + \int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H^{\infty}\,\mathrm{d}m_t(x)\,\mathrm{d}y\,\mathrm{d}t.
Likewise, using (t,y) \mapsto \frac{1}{\varphi(y)}\int_\eta^{K} \Phi(x,y)\varphi(x)\,\mathrm{d}x, we obtain
\int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H(u_\varepsilon(t,y))\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t \ \longrightarrow\ \int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,\langle \nu_{t,y}, H\rangle\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t + \int_0^{T}\!\!\int_\eta^{K}\!\!\int_\eta^{K} \Phi(x,y)\varphi(x)\,H^{\infty}\,\mathrm{d}m_t(y)\,\mathrm{d}x\,\mathrm{d}t.
There remains the term of D H ε,η,K in which the dependence on u ε combines x and y. To deal with this term we separate variables by testing against functions of the form f 1 (x) f 2 (y). We then consider
-\int_0^{T}\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t = -\int_0^{T}\left(\int_\eta^{K} f_1(x)H'(u_\varepsilon(t,x))\,\mathrm{d}x\right)\left(\int_\eta^{K} f_2(y)u_\varepsilon(t,y)\,\mathrm{d}y\right)\mathrm{d}t.
The integrands are now split, one containing the x dependence, and one the y dependence. However, extra care is required here to pass to the limit. As the Young measures depend both on time and space, it is possible for the oscillations to appear in both directions. We therefore require appropriate time regularity of at least one of the sequences to guarantee the desired behaviour of the limit of the product. Such requirement is met by noticing that since
u_\varepsilon \in C([0,T]; L^1_\varphi(\mathbb{R}_+)) uniformly, we have u_\varepsilon uniformly bounded in W^{1,\infty}([0,T]; (\mathcal{M}^+(\mathbb{R}_+), \|\cdot\|_{(W^{1,\infty})^*})), cf. [8, Lemma 4.1]. Assuming f_2 \in W^{1,\infty}(\mathbb{R}_+), we therefore have t \mapsto \int_\eta^{K} f_2(y)u_\varepsilon(t,y)\,\mathrm{d}y \in W^{1,\infty}([0,T]).
This in turn implies strong convergence of \int_\eta^{K} f_2(y)u_\varepsilon(t,y)\,\mathrm{d}y in C([0,T]), by virtue of the Arzelà-Ascoli theorem. Therefore we have (noting that (H')^{\infty} \equiv 0 by the sublinear growth of H')
-\int_0^{T}\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t = -\int_0^{T}\left(\int_\eta^{K} f_1(x)H'(u_\varepsilon(t,x))\,\mathrm{d}x\right)\left(\int_\eta^{K} f_2(y)u_\varepsilon(t,y)\,\mathrm{d}y\right)\mathrm{d}t
\longrightarrow -\int_0^{T}\left(\int_\eta^{K} f_1(x)\,\langle \nu_{t,x}, H'\rangle\,\mathrm{d}x\right)\left(\int_\eta^{K} f_2(y)\,\langle \nu_{t,y}, \mathrm{id}\rangle\,\mathrm{d}y\right)\mathrm{d}t - \int_0^{T}\left(\int_\eta^{K} f_1(x)\,\langle \nu_{t,x}, H'\rangle\,\mathrm{d}x\right)\left(\int_\eta^{K} f_2(y)\,\mathrm{d}m_t(y)\right)\mathrm{d}t
= -\int_0^{T}\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,\langle \nu_{t,x}, H'(\alpha)\rangle\,\langle \nu_{t,y}, \xi\rangle\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t - \int_0^{T}\!\!\int_{[\eta,K]^2} f_1(x)f_2(y)\,\langle \nu_{t,x}, H'(\alpha)\rangle\,\mathrm{d}m_t(y)\,\mathrm{d}x\,\mathrm{d}t.
By density of the linear space spanned by separable functions in the space of bounded continuous functions of (x, y) we obtain
-\int_0^{T}\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,H'(u_\varepsilon(t,x))\,u_\varepsilon(t,y)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t \ \longrightarrow\ -\int_0^{T}\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\langle \nu_{t,x}, H'(\alpha)\rangle\,\langle \nu_{t,y}, \xi\rangle\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t - \int_0^{T}\!\!\int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\langle \nu_{t,x}, H'(\alpha)\rangle\,\mathrm{d}m_t(y)\,\mathrm{d}x\,\mathrm{d}t.
Gathering all the terms we thus obtain the convergence as ε → 0
\int_0^{T} D^H_{\varepsilon,\eta,K}(t)\,\mathrm{d}t \ \longrightarrow\ \int_0^{T} D^H_{\eta,K}(t)\,\mathrm{d}t \quad \text{with}
D^H_{\eta,K}(t) := \int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\big\langle \nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha)\big\rangle\,\mathrm{d}x\,\mathrm{d}y + \int_{[\eta,K]^2} \Phi(x,y)\varphi(x)\,\big\langle \nu_{t,x}(\alpha),\ H^{\infty} - H'(\alpha)\big\rangle\,\mathrm{d}m_t(y)\,\mathrm{d}x.
Observe that since Φ is non-negative and H is convex, the integrand of D H ε,η,K is non-negative. Hence so is the integrand of the limit. Therefore, by Monotone Convergence, we can pass to the limit η → 0, K → ∞, and T → ∞ to obtain
0 \le \lim_{\varepsilon\to 0}\int_0^{\infty} D^H_\varepsilon(t)\,\mathrm{d}t = \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} \varphi(x)N(y)B(y)k(x,y)\,\big\langle \nu_{t,y}(\xi)\otimes\nu_{t,x}(\alpha),\ H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha)\big\rangle\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t
+ \int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} \varphi(x)N(y)B(y)k(x,y)\,\big\langle \nu_{t,x}(\alpha),\ H^{\infty} - H'(\alpha)\big\rangle\,\mathrm{d}m_t(y)\,\mathrm{d}x\,\mathrm{d}t.
Finally we note that by the Reshetnyak continuity theorem, cf. [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF][START_REF] Kristensen | Relaxation of signed integral functionals in BV[END_REF] we have the convergence H ε (0) → H (0). Together with (3.2) this implies (3.6).
LONG-TIME ASYMPTOTICS
In this section we use the result of the previous section to prove that a measure-valued solution of (2.2) converges to the steady-state solution. More precisely we prove Theorem 4.1. Let n 0 ∈ M (R + ) and let n solve the growth-fragmentation equation (2.2). Then
\lim_{t\to\infty}\int_0^{\infty} \varphi(x)\,\mathrm{d}\big|n(t,x)e^{-\lambda t} - m_0 N(x)L^1\big| = 0 \qquad (4.1)
where m 0 := ∞ 0 ϕ(x)dn 0 (x) and L 1 denotes the 1-dimensional Lebesgue measure.
Proof. From inequality (3.7) we see that D H belongs to L 1 (R + ). Therefore there exists a sequence of times t n → ∞ such that
lim n→∞ D H (t n ) = 0.
Consider the corresponding sequence of generalised Young measures (ν t n ,x , m t n ). Thanks to the inequality H (t) ≤ H (0) this sequence is uniformly bounded in the sense that
sup n ∞ 0 ϕ(x)N(x) ν t,x (α), |α| dx + ∞ 0 ϕ(x)N(x)dm t n (x) < ∞. (4.2)
Therefore by the compactness property of Proposition 2.3 there is a subsequence, not relabelled, and a generalised Young measure ( νx , m) such that
(ν t n ,x , m t n ) * ( νx , m)
in the sense of measures. We now show that the corresponding "entropy dissipation"
D^H_{\infty} := \int_0^{\infty}\!\!\int_0^{\infty} \Phi(x,y)\varphi(x)\,\big\langle \bar\nu_{y}(\xi)\otimes\bar\nu_{x}(\alpha),\ H(\xi) - H(\alpha) - H'(\alpha)(\xi-\alpha)\big\rangle\,\mathrm{d}x\,\mathrm{d}y + \int_0^{\infty}\!\!\int_0^{\infty} \Phi(x,y)\varphi(x)\,\big\langle \bar\nu_{x}(\alpha),\ H^{\infty} - H'(\alpha)\big\rangle\,\mathrm{d}\bar m(y)\,\mathrm{d}x \qquad (4.3)
is zero. To this end we argue that
D H ∞ = lim n→∞ D H (t n ).
Indeed this follows by the same arguments as in the proof of Proposition 3.1. In fact now the "mixed" term poses no additional difficulty as there is no time integral. It therefore follows that
D H ∞ = 0. (4.4)
As H is convex, both integrands in (4.3) are non-negative. Therefore (4.4) implies that both the integrals of D H ∞ are zero. In particular
∞ 0 ∞ 0 H(ξ ) -H(α) -H (α)(ξ -α)d νx (α)d νy (ξ ) = 0,
and since the integrand vanishes if and only if ξ = α, this implies that the Young measure ν is a Dirac measure concentrated at a constant. Then the vanishing of the second integral of D H ∞ implies that m = 0. Moreover, the constant can be identified as
m_0 := \int_0^{\infty} \varphi(x)\,\mathrm{d}n^0(x) \qquad (4.5)
by virtue of the conservation in time of \int_0^{\infty} \varphi(x)e^{-\lambda t}\langle \nu_{t,x}, \cdot\,\rangle\,\mathrm{d}x + \int_0^{\infty} \varphi(x)e^{-\lambda t}\,\mathrm{d}m_t(x). By virtue of Proposition 2.2 with H = |\cdot - m_0| it then follows that
\lim_{n\to\infty}\int_0^{\infty} \varphi(x)\,\mathrm{d}\big|n(t_n,x)e^{-\lambda t_n} - m_0 N(x)L^1\big| = 0,
which is the desired result, at least for our particular sequence of times. Finally, we can argue that the last convergence holds for the entire time limit t \to \infty, invoking the monotonicity of the relative entropy \mathcal{H}. Indeed, the choice H = |\cdot - m_0| in (3.5) yields the monotonicity in time of \int_0^{\infty} \varphi(x)\,\mathrm{d}\big|n(t,x)e^{-\lambda t} - m_0 N(x)L^1\big|, and the result follows.
CONCLUSION
In this article, we have proved the long-time convergence of measure-valued solutions to the growth-fragmentation equation. This result extends previously obtained results for L 1 ϕ solutions [START_REF] Michel | General relative entropy inequality: an illustration on growth models[END_REF]. As for the renewal equation [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF], it is based on extending the generalised relative entropy inequality to measure-valued solutions, thanks to recession functions. Generalised Young measures provide an adequate framework to represent the measure-valued solutions and their entropy functionals.
Under slightly stronger assumptions on the fragmentation kernel k, e.g. the ones assumed in [START_REF] Cáceres | Rate of convergence to the remarkable state for fragmentation and growth-fragmentation equations[END_REF], it has been proved that an entropy-entropy dissipation inequality could be obtained. Under such assumptions, we could obtain in a simple way a stronger result of exponential convergence, see the proof of Theorem 4.1. in [START_REF] Gwiazda | Generalized entropy method for the renewal equation with measure data[END_REF]. However the aboveseen method allows us to extend the convergence to spaces where no spectral gap exists [START_REF] Bernard | Asymptotic behavior of the growth-fragmentation equation with bounded fragmentation rate[END_REF].
A specially interesting case of application of this method would be critical cases where the dominant eigenvalue is not unique but is given by a countable set of eigenvalues. It has been proved that for L 2 initial conditions, the solution then converges to its projection on the space spanned by the dominant eigensolutions [START_REF] Bernard | Cyclic asymptotic behaviour of a population reproducing by fission into two equal parts[END_REF]. In the case of measure-valued initial condition, due to the fact that the equation has not anymore a regularisation effect, the asymptotic limit is expected to be the periodically oscillating measure, projection of the initial condition on the space of measures spanned by the dominant eigensolutions. This is a subject for future work.
Acknowledgements. T. D. would like to thank the Institute for Applied Mathematics of the Leibniz University of Hannover for its warm hospitality during his stay, when part of this work was completed. This work was partially supported by the Simons Foundation grant 346300 and the Polish Government MNiSW 2015-2019 matching fund. The research of T. D. was supported by National Science Center (Poland) 2014/13/B/ST1/03094. M.D.'s research was supported by the Wolfgang Pauli Institute (Vienna) and the ERC Starting Grant SKIPPERAD (number 306321). P. G. received support from National Science Center (Poland) 2015/18/M/ST1/00075.
01762992 | en | [
"sdu.astr",
"sdu.astr.im"
] | 2024/03/05 22:32:13 | 2014 | https://hal.science/hal-01762992/file/1710.03509.pdf | Enrico Marchetti
Laird Close
Jean-Pierre Véran
Johan Mazoyer
email: johan.mazoyer@obspm.fr
Raphaël Galicher
Pierre Baudoz
Patrick Lanzoni
Frederic Zamkotsian
Gérard Rousset
Frédéric Zamkotsian
Deformable mirror interferometric analysis for the direct imagery of exoplanets
Keywords: Instrumentation, High-contrast imaging, adaptive optics, wave-front error correction, deformable mirror
INTRODUCTION
Direct imaging of exoplanets requires the use of high-contrast imaging techniques, among which is coronagraphy. These instruments diffract and block the light of the star and allow us to observe the signal of a potential companion. However, these instruments are drastically limited by aberrations, introduced either by the atmosphere or by the optics themselves. The use of deformable mirrors (DM) is mandatory to reach the required performance. The THD bench (French acronym for very high-contrast bench), located in the Paris Observatory, in Meudon, France, uses coronagraphy techniques associated with a Boston Micromachines DM. [START_REF] Bifano | Adaptive imaging: MEMS deformable mirrors[END_REF] This DM is a Micro-Electro-Mechanical System (MEMS), composed of 1024 actuators. In March 2013, we brought this DM to the Laboratoire d'Astrophysique de Marseille (LAM), France, where we studied precisely the performance and defects of this DM on the interferometric bench of this laboratory. The results of that study, conducted in collaboration with F. Zamkotsian and P. Lanzoni from LAM, are presented here.
We first describe the MEMS DM, the performance announced by Boston Micromachines, and its assumed state before this analysis (Section 2). In the same section, we also present the interferometric bench at LAM. The results of this analysis are then presented in several parts. We first describe the overall shape and surface quality of the analyzed DM (Sections 3 and 4). We then analyze accurately the influence function of an actuator and its response to the application of different voltages (Section 5), first precisely for one actuator and then extended to the whole DM. Finally, special attention will be paid to the damaged actuators that we identified (Section 6). We will present several causes of dysfunction and possible solutions.
THE MEMS DM AND THE LAM INTERFEROMETRIC BENCH
2.1 The MEMS DM: specifications and damaged actuators
Out of the 1024 actuators, only 1020 are used because the corners are fixed. We number our actuators from 0 (bottom right corner) to 1023 (top left corner), as shown in Figure 1. The four fixed corner actuators are therefore numbers 0, 31, 992 and 1023. The edges of the DM are also composed of fixed, unnumbered actuators. The inter-actuator pitch is 300 µm, for a total DM size of 9.3 mm. Boston Micromachines announces a subnanometric minimum stroke and a total stroke of 1.5 µm. All the values presented in this paper, unless stated otherwise, are expressed in mechanical deformation of the DM surface (which is half the phase deformation introduced by a reflection on this surface). The flattened-DM surface quality is given by Boston Micromachines as 30 nm (root mean square, RMS).
The electronics of the DM allows us to apply voltages between 0 and 300 V, coded on 14 bits. The minimum stroke is therefore 300/2^14 V, or 18.3 mV. To protect the surface, the maximum voltage for this DM is 205 V.
We use percentages to express the accessible voltages: 0% corresponds to 0 V, while 100% corresponds to a voltage of 205 V. Each percent is thus a voltage of 2.05 V. The higher the voltage, the more the actuator is pulled towards the DM. A voltage of 100% thus provides the minimum value of the stroke, and a voltage of 0% its maximum. The minimum stroke of each actuator is 8.93 x 10^-3 %. This value was checked on the THD bench by measuring the minimum voltage needed to produce an effect in the pupil plane after the coronagraph. The gain measurements in Section 5 will allow us to check the specifications for the maximum and minimum stroke of an actuator.
Figure 1: The numbering starts at 0 in the bottom right corner and ends at 1023 in the top left corner. The actuators 841 and 197, in red, considered defective, were not used. Therefore, the pupil used is reduced (only 27 actuators along the diameter of the pupil) and offset on the DM.
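A short numerical check of the figures quoted above; the corner indices and pupil size follow from the 32 x 32 layout, and the snippet is illustrative rather than part of the original analysis.

```python
# Voltage quantisation and actuator geometry of the 32 x 32 MEMS DM (illustrative check).
N_BITS = 14
V_FULL_SCALE = 300.0          # V, electronics full scale
V_MAX = 205.0                 # V, protection limit used on the bench
PITCH_UM = 300.0              # inter-actuator pitch in micrometres

v_step = V_FULL_SCALE / 2 ** N_BITS          # 0.0183 V = 18.3 mV
percent_step = v_step / (V_MAX / 100.0)      # 8.93e-3 % of the 0-205 V range

side = 32
corners = [0, side - 1, side * (side - 1), side * side - 1]   # 0, 31, 992, 1023
usable = side * side - len(corners)                           # 1020 actuators

print(f"voltage step      : {v_step * 1e3:.1f} mV")
print(f"percent step      : {percent_step:.2e} %")
print(f"corner actuators  : {corners}, usable: {usable}")
print(f"27-actuator pupil : {27 * PITCH_UM / 1e3:.1f} mm across")
```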
Before March 2013, we thought that two actuators were unusable (they did not follow our voltage commands): actuator 841, which could follow its neighbors if they were actuated, and actuator 197, which seemed stuck at the 0% value. These actuators will be studied specifically in Section 6. To avoid these actuators, the pupil before this analysis was reduced (only 27 actuators across the pupil diameter) and offset (see Figure 1).
2.2 Analysis at LAM: interferometric bench and process
The interferometric bench at LAM [2] was developed for the precise analysis of DMs. Figure 2 shows the diagram of the Michelson interferometer. The source is a broadband light, which is filtered spatially by a pinhole and spectrally at λ = 650 nm (this is the wavelength of the THD bench). In the interferometer, one of the mirrors is the DM to analyze (Sample in Figure 2). The other is a plane reference mirror (Reference flat in Figure 2). At the end of the other arm of the interferometer, a CCD detector (1024x1280) is placed. A lens system can be inserted in front of the camera to choose between a large field (40 mm wide, covering the whole DM) and a smaller field (a little less than 2 mm wide, or 6x6 actuators). Both fields will be used in this study. The phase measurement is done using the method of Hariharan. [START_REF] Hariharan | Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm[END_REF] We introduce 5 phase differences in the reference arm:
{-2π/2, -π/2, 0, π/2, 2π/2}, (1)
and record the images with the CCD. The phase difference φ between the two arms can then be measured using:
\tan(\varphi) = \frac{I_{-\pi/2} - I_{\pi/2}}{2 I_{0} - I_{-2\pi/2} - I_{2\pi/2}}. \qquad (2)
Assuming a null phase on the reference mirror, the phase on the DM is just φ. Since the phase is only known between 0 and 2π, the overall phase is unwrapped using a path-following algorithm. This treatment can sometimes be difficult in areas with a very high phase gradient. Finally, we obtain the surface deformation of the DM by multiplying by λ/(2π) and dividing by 2 (to convert the optical path difference into mechanical movement of the DM).
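A possible implementation of this five-frame reduction is sketched below. It is illustrative only: variable names are ours, and the unwrapping step uses a standard library routine rather than the path-following algorithm mentioned above.

```python
import numpy as np
from skimage.restoration import unwrap_phase   # 2-D phase unwrapping (one possible choice)

def surface_from_frames(frames, wavelength_nm=650.0):
    """Five-frame reduction of interferograms taken at phase shifts
    {-2pi/2, -pi/2, 0, +pi/2, +2pi/2}; returns the mechanical surface in nm."""
    I_m2, I_m1, I_0, I_p1, I_p2 = frames            # ordered as the shifts above
    # wrapped phase difference between the two arms, following Eq. (2) as written
    # (some formulations of the Hariharan algorithm carry a factor 2 in the numerator)
    phi = np.arctan2(I_m1 - I_p1, 2.0 * I_0 - I_m2 - I_p2)
    phi = unwrap_phase(phi)                         # remove the 2*pi ambiguity
    opd = phi * wavelength_nm / (2.0 * np.pi)       # optical path difference in nm
    return opd / 2.0                                # mechanical deformation (reflection: factor 2)

# usage sketch: frames = a stack of five CCD images, e.g. an array of shape (5, 1024, 1280)
```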
The accuracy in the phase measurement is limited by the aberrations of the reference plane mirror and by the differential aberrations in the arms of the interferometer. However, the performance obtained on the measurement of the mechanical deformation of the DM are subnanometric. [START_REF] Liotard | Static and dynamic microdeformable mirror characterization by phase-shifting and time-averaged interferometry[END_REF] We can also retrieve the amplitude on the surface using:
M = \frac{3\sqrt{4\left(I_{-\pi/2} - I_{\pi/2}\right)^{2} + \left(2 I_{0} - I_{-2\pi/2} - I_{2\pi/2}\right)^{2}}}{2\left(I_{-2\pi/2} + I_{-\pi/2} + I_{0} + I_{\pi/2} + I_{2\pi/2}\right)}. \qquad (3)
We now present the results of this analysis.
Figure 3 caption (end): Finally, the vertical lines indicate the limits of the 27x27 actuator pupil before March 2013 (red dotted line) and a centered pupil of the same size (brown dotted line). Right: same cross sections for different voltages applied to all the actuators (from 0% to 90%). Piston was removed and the curves were superimposed. The abscissas are measured in inter-actuator pitch and the y-axis is in nanometers.
GENERAL FORM OF THE DM
Figure 3 (left) shows, as a black solid curve, a cross section of the DM over the entire surface (in one of the main directions). We applied the same voltage of 70% to all the actuators. The x-axis is measured in inter-actuator pitch and the mechanical deformation on the y-axis is in nanometers. The first observation is that a uniform voltage on all the actuators does not correspond to a flat DM surface. The general shape is a defocus over the entire surface of approximately 500 nm (peak-to-valley, PV). The position of the 27x27 actuator pupil on the THD bench before March 2013 is drawn with red vertical lines. The brown vertical lines indicate a pupil of the same size, centered on the DM. The "natural" defocus of the DM in a 27 actuator pupil is about 350 nm (PV).
Figure 3 (right) represents the same cross section when different uniform voltages are applied to the DM (from 0% to 90%). Piston was removed and we superimposed these curves, which show that this defocus shape is present in the same proportions at all voltages. Due to slightly different gains between the actuators, there are small variations of the actuators at the center between the various applied voltages.
The theoretical stroke of an actuator is 1.5 µm and could normally compensate for this defocus by pulling the center actuators by 500 nm while leaving the ones on the edges at low voltages. However, this correction would cost a third of the theoretical maximum stroke on the center actuators. The solution chosen on our bench is to place the coronagraph mask outside the focal plane. Indeed, at a distance d from the focal plane, the defocus introduced is:
\mathrm{Defoc}_{PV} = \frac{d}{8\,(F/D)^2}, \qquad (4)
in phase difference, in PV, where F/D is the focal ratio of our bench. With the specifications of our bench, we chose d = 7 cm, which corresponds to the introduction of a defocus (in phase error) of 700 nm (PV), which exactly compensates the 350 nm (PV) of defocus (in mechanical stroke) in our 27 actuator pupil. We can therefore choose the voltages around a uniform value on the bench. Before the analysis at LAM, we chose a voltage of 70%, for reasons which are discussed in Section 5.2.
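As an illustration of Equation (4), the short sketch below computes the mask displacement d needed to introduce a given peak-to-valley phase defocus. The focal ratio value is only a placeholder inferred from the numbers quoted above (d = 7 cm for 700 nm PV); it is not a specification taken from the bench.

```python
# Hedged sketch of Eq. (4): Defoc_PV = d / (8 (F/D)^2), solved for d.
def mask_offset_for_defocus(defoc_pv_m, f_over_d):
    """Distance d (m) from the focal plane producing a peak-to-valley phase defocus defoc_pv_m (m)."""
    return defoc_pv_m * 8.0 * f_over_d ** 2

# 350 nm (PV) of mechanical defocus corresponds to 700 nm (PV) in phase (double pass on reflection).
d = mask_offset_for_defocus(700e-9, f_over_d=112)  # F/D ~ 112 is an assumed, illustrative value
print(f"coronagraph mask displacement: {d * 100:.1f} cm")  # ~7 cm
```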
We also note, on the black solid line of Figure 3 (left), the large variation at the edges of the DM (550 nm PV in only one actuator pitch) when the same voltage of 70% is applied to all actuators. This variation tends to decrease when lower voltages are applied to the actuators on the edges. However, this edge must not be included in the pupil. Note that the pupil prior to the analysis in Marseille (in red vertical lines) was very close to these edges.
On the edges, we can clearly see the "crenelations" created by the DM actuators. To measure these deformations, I removed the lower frequencies (including all the frequencies accessible to the DM) numerically with a smoothing filter. The result is plotted in Figure 3 (left) as a blue solid line. We clearly see this crenelation effect increase as we approach the edges. Once again, it is better to center the pupil to avoid the edge actuators. These effects are the main causes of the poor surface quality of MEMS DMs that we discuss in the next section. We now study the surface of our DM, first through the level of aberrations, then with the study of the Power Spectral Density (PSD).
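The smoothing-filter step described above can be sketched as follows; the Gaussian width and the function name are illustrative assumptions (any low-pass filter removing the DM-correctable frequencies would do), not the exact processing used for Figure 3.

```python
# Hedged sketch: isolate the high-frequency "crenelation" pattern by subtracting a
# low-pass (smoothed) version of the measured surface map.
import numpy as np
from scipy.ndimage import gaussian_filter

def crenelation(surface_nm, pixels_per_pitch):
    """surface_nm: 2-D surface map in nm; pixels_per_pitch: sampling of one inter-actuator pitch."""
    surface_nm = np.asarray(surface_nm, dtype=float)
    # A Gaussian about two pitches wide removes the frequencies the DM can correct,
    # i.e. everything below roughly (2 inter-actuator pitch)^-1.
    return surface_nm - gaussian_filter(surface_nm, sigma=2.0 * pixels_per_pitch)
```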
SURFACE QUALITY
In Figure 4 we show images of the surface of the DM obtained in large field (about 10 mm by 10 mm) on the left and in small field on the right, centered on the 4 by 4 central actuators (i.e. 1.2 mm by 1.2 mm). In both cases, we removed all frequencies reachable by the DM (below 0.5 (inter-actuator pitch)^-1) with a smoothing filter in post-processing to observe its fine structures. For example, the defocus mentioned in the previous section has been removed. In the large field, we can observe actuator 769 (in the lower left), which is fixed at the 0% value (see Section 6.3) and had gone unnoticed before this analysis, but there is no noticeable sign of the two known faulty actuators (see Figure 1). We also note the edges and corners, very bright, due to fixed actuators.
Figure 5: On the left, cross sections of two actuators observed in small field. Each point of these cross sections is an average over a width of 0.1 inter-actuator pitch, either avoiding centers and release-etch holes (curve "best case", in red) or, on the contrary, right through the center of an actuator (curve "worst case", in black). On the right, azimuthally averaged PSD measured on the whole DM (wide field) and on a few actuators (narrow field). The frequencies on the horizontal axis are measured in µm^-1 and the vertical axis is in nm^2.µm^2. The black dotted vertical lines indicate characteristic frequencies: the frequency of the actuators (1/300 µm^-1) and the maximum frequency correctable by the DM, (2 inter-actuator pitch)^-1, or 1/(2*300) µm^-1. Finally, in red, we fitted asymptotic curves.
Boston Micromachines quotes a surface quality of 30 nm (RMS) over the whole DM when it is in a "flat" position. Because of the large defocus defect, which we corrected using a defocus of the coronagraphic mask, we have not tried to obtain a flat surface on the DM to verify this number. However, an estimate of the remaining aberrations in a "flat" position can be made by removing in post-processing all the frequencies correctable by our DM. We measured the remaining aberrations without the edges and found 32 nm (RMS). This is slightly higher than the specifications of Boston Micromachines, but at least one of the actuators is broken. The same measurement on our actual offset 27 actuator pupil gives 8 nm (RMS), and 7 nm (RMS) for a centered pupil of the same size.
In the right image, we observe the details of an actuator. We observe three types of deformations:
• the center of the actuator, in black, with a size of about 25 µm
• the edges, which appear as two parallel lines separated by 45µm and of length one inter-actuator pitch (300µm)
• the release-etch holes of the membrane (4 in the central surface + 2 between the parallel lines of the edges).
In the principal directions of the DM, they appear every 150 µm and are only a few µm wide. These holes are presumably a consequence of the lithographic manufacturing process.
We measured cross sections along two actuators in the small field, shown in Figure 5. The horizontal axis is in inter-actuator pitch and the vertical axis is in nanometers. Each point of these cross sections is an average over a width of about 0.1 inter-actuator pitch. We placed these bands either right through the center of an actuator (curve "worst case", in black) or in a way that avoids both centers and release-etch holes (curve "best case", in red). The two bumps at 0.15 and 0.35 and at 1.15 and 1.35 inter-actuator pitch, common to both curves, correspond to the parallel lines at the edges of the actuators. They produce mechanical aberrations of 12 nm (PV). The centers, in the black curve, are located at 0.65 and 1.65 inter-actuator pitch. They introduce mechanical aberrations of 25 nm (PV). It is not certain that the aberrations in the release-etch holes are properly retrieved, for several reasons. First, their size is smaller than 0.1 inter-actuator pitch, so they are averaged in the cross section. We are also not sure that the phase is correctly retrieved in the unwrapping process, as it encounters a strong phase gradient in these holes. They produce aberrations of 20 nm (PV). In total, on one actuator, aberrations of 30 nm (PV) and 6 nm (RMS) are obtained.
Figure 5 (right) shows the azimuthally averaged PSD of the DM. The black curve represents the azimuthally averaged PSD for the whole DM (large field). We clearly observe the peak at the characteristic frequency of the DM (1/300 µm^-1), indicated by a black dotted line. We can also see peaks at other characteristic frequencies (1/(300*sqrt(2)) µm^-1, 2/300 µm^-1, ...). We took an azimuthal average to average over these frequencies and observe a general trend. We repeated the same operation for the PSD calculated on a small field. As shown in red in Figure 5 (right), we plotted the trends of these azimuthal PSDs, which show an asymptotic behavior in f^-4.4 for the large field and f^-3.3 for the small field. Indeed, very small defects can come from differential aberrations in the interferometer, deformation of the flat reference mirror or noise in the measurement. We therefore adopt the large-field value of f^-4.4 for the asymptotic behavior.
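A minimal sketch of such an azimuthally averaged PSD computation is given below. The normalization and the ring-binning choices are illustrative assumptions, not the exact convention used for Figure 5, and a square surface map is assumed for simplicity.

```python
import numpy as np

def azimuthal_psd(surface, pixel_size_um):
    """surface: square 2-D surface map; returns (spatial frequency in um^-1, ring-averaged PSD)."""
    n = surface.shape[0]
    psd2d = np.abs(np.fft.fftshift(np.fft.fft2(surface))) ** 2
    f1d = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size_um))
    fx, fy = np.meshgrid(f1d, f1d)
    fr = np.hypot(fx, fy).ravel()
    df = f1d[1] - f1d[0]
    bins = np.arange(0.0, fr.max() + df, df)
    idx = np.digitize(fr, bins)
    psd = psd2d.ravel()
    # Average the 2-D PSD in rings of width one frequency bin (empty rings -> NaN).
    means = [psd[idx == i].mean() if np.any(idx == i) else np.nan
             for i in range(1, len(bins))]
    return bins[1:], np.array(means)
```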
We now precisely study the behavior of a single actuator (Section 5), i.e. the influence function, the coupling with its neighbors, and the gain when different voltages are applied. For this analysis, we observe the behavior of a central actuator (number 528). We measure its influence function and the inter-actuator coupling, then study its gain and its maximum and minimum strokes. These measurements were conducted by applying to actuator 528 several voltages ranging from 10 to 90%, while the rest of the actuators are set at the 70% value.
BEHAVIOR OF A SINGLE ACTUATOR
Influence function and coupling
We study the influence function IF of an actuator, i.e. the movement of the surface when a voltage is applied to it. This influence function can be simulated using:5
IF(\rho) = \exp\left[\ln(\omega)\left(\frac{\rho}{d_0}\right)^{\alpha}\right], \qquad (5)
where ω is the inter-actuator coupling and d_0 is the inter-actuator pitch. Figure 6 shows the influence function of a central actuator (528). We first applied a voltage of 40% to this actuator (the others remaining at a voltage of 70%), then a voltage of 70%, and took the difference, shown in the left picture. This is therefore the influence function for a voltage step of -30%. We can observe that the influence function has no rotational symmetry. The main shape is a square, surrounded by a small negative halo.
We made cross sections in two directions: one of the main directions of the DM and one of the diagonals. The results are presented on a logarithmic scale in Figure 6 (center). The distance to the center of the actuator is in inter-actuator pitch. We applied an offset to plot negative values on a logarithmic scale and we indicate the zero level by a dotted blue line. In the main direction, a break in the slope is observed at a distance of 1 inter-actuator pitch. The influence of the actuator in this direction is limited to 2 inter-actuator pitch each way. In the diagonal direction, the secondary halo is about 3 nm deep, which is 0.5% of the maximum. Due to this halo, the influence is somewhat greater (however, less than 3 inter-actuator pitch).
Figure 6 (right) shows a cross section of the influence function in a principal direction of the DM. The inter-actuator coupling (height of the function at a distance of 1 inter-actuator pitch) is 12%. We fitted a curve using the function described in Equation 5 with this coupling and found α = 1.9. This shows that the central part of the influence function is almost Gaussian (α = 2), but this model does not account for the "wings".
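The fitted model can be written down directly; the sketch below simply evaluates Equation (5) with the measured coupling and exponent. The function and variable names are illustrative.

```python
import numpy as np

def influence_function(rho, omega=0.12, alpha=1.9):
    """Eq. (5): IF(rho) = exp[ln(omega) * (rho/d0)^alpha], with rho already in units of d0."""
    return np.exp(np.log(omega) * np.asarray(rho, dtype=float) ** alpha)

# Sanity checks: IF(0) = 1 at the actuator centre, IF(1) = omega (12 %) at the neighbour.
print(influence_function([0.0, 1.0]))   # -> [1.   0.12]
```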
Gain study
In this section, we measured the maximum of the influence function for different voltages applied to actuator 528, the others remaining at 70%. Figure 7 (left) shows the superposition of cross sections in a principal direction of the DM for voltage values of 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90%. We fitted Gaussian curves to these functions and observed that the maximum of the peak is always located at the same place, and that the width of the Gaussian is constant over the range of applied voltages. This shows that the influence function is identical for all the applied voltages. We plot the maxima of these curves as a function of voltage as red diamonds in Figure 7 (right). The scale of these maxima can be read on the left axis, in nanometers. We then fitted a quadratic gain (black solid curve) on this figure. This allows us to extrapolate the values for voltages of 0% and 100%. From this figure, it can be deduced that:
• the maximum stroke is 1100 nm (1.1 µm), slightly less than the value indicated by Boston Micromachines.
• the value of 70% is the one that allows the maximum stroke in both directions (545 nm when we push and 560 nm when we pull). If the actuator is at a value of 25%, we can only benefit from a maximum stroke of 140 nm in one direction. For this reason, we used to operate the DM at values around 70% before March 2013.
• the gain has a quadratic variation and therefore the minimum command step, in volts or in percent (8.93 10^-3 %), corresponds to different minimum strokes in nanometers depending on the location on this curve. We plotted the value of the minimum stroke in blue on the same plot (the scale of this curve, in nanometers, can be read on the right axis). We observe that a variation of 8.93 10^-3 % around 70% produces a minimum stroke of 0.14 nm, which is twice the movement produced by the same variation around 25% (0.07 nm).
Applying voltages around 70 % makes sense if we try to make the most of the stroke of the DM, but if we try to correct for small phase aberrations (which is our use of this DM), we should apply the lowest voltages possible.
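A hedged sketch of the argument above: with a quadratic stroke-voltage curve s(V), the displacement produced by the smallest command step dV = 8.93e-3 % is the local slope s'(V)·dV, which grows with the operating voltage. The coefficient below is a pure V² law scaled to a ~1.1 µm full stroke, an illustrative assumption rather than the actual fit of Figure 7, so only the trend (not the exact numbers quoted above) is meaningful.

```python
# Illustrative quadratic stroke-voltage model (not the bench's fitted gain curve).
a = 1100.0 / 100.0 ** 2   # nm per %^2, assuming ~1100 nm total stroke at 100 %
dV = 8.93e-3              # smallest applicable voltage step, in %

for V in (25.0, 70.0):
    # Local minimum stroke ~ s'(V) * dV = 2*a*V*dV, larger around 70 % than around 25 %.
    print(f"minimum stroke around {V:.0f}%: ~{2.0 * a * V * dV:.2f} nm")
```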
We observed the positions of all the actuators and verified that they are evenly distributed over the surface. The gains of all actuators are very close over the whole surface (a variation of 20% between the minimum and the maximum gain).
We finish this study with an inventory of the different failures that we encountered and the solutions that we were fortunately able to put in place to overcome them.
DAMAGED ACTUATORS
Before the analysis in March 2013, the actuators 841 and 197 were not responding correctly to our commands. A specific study on these actuators allowed us to overcome these dysfunctions and include them again in the pupil.
The slow actuator
We found that actuator 841 responded to our voltage commands but with a very long response time. The interferometric bench in Marseille is not suited for a temporal study (the successive introductions of path differences limit the measurement frequency). Therefore, I used the phase measurement method developed on the THD bench: the self-coherent camera, see Mazoyer et al. (2013).[START_REF] Mazoyer | Estimation and correction of wavefront aberrations using the self-coherent camera: laboratory results[END_REF] I examined the temporal response of actuator 841 after a command of +5% and compared it with the temporal response of a normal actuator (777). From a starting level of 70% for all of the actuators of the DM, we first sent a command to go to 75% to each of these two actuators, waited for this command to be executed, and then sent an order to return to the initial voltage. Figure 8 shows the results of this operation for a normal actuator (777, left) and for the slow actuator (841, right). The measurement period is on average 105 ms. The horizontal axis is the time (in seconds), with origin at the date at which the +5% command is sent. Our phase measurement method does not give an absolute measurement of the phase, so we normalized the result (0% is the mean level before the command, 100% is the mean level after the +5% command).
For the normal actuator, the response time is shorter than the measurement period (105 ms on average). This result is consistent with the response time of an actuator announced by Boston Micromachines (< 20 µs), although we cannot verify this value with this method. For the slow actuator, there is a much slower response in the rise as well as in the descent. We measured the response time to 95% of the maximum in the rise (7.5 s) and in the descent (8.1 s). However, as the static gain of this actuator is comparable with the gain of healthy actuators, we deduce that it goes slowly but surely to the right position.
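For reference, a minimal sketch of how such a 95% response time can be extracted from the normalized step response is given below; the function and array names are illustrative, not the actual THD bench code.

```python
import numpy as np

def response_time_95(times_s, levels_percent):
    """First sampling time at which the normalized response reaches 95 % of its final level."""
    levels = np.asarray(levels_percent, dtype=float)
    above = levels >= 95.0
    if not above.any():
        return np.nan
    return float(np.asarray(times_s, dtype=float)[np.argmax(above)])
```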
The coupled actuators
We realized that actuator 197 responded to the commands applied to actuator 863, at the other end of the DM. It seems that actuator 197 has a certain autonomy, but in case of large voltage differences applied to these two actuators, 197 follows the commands applied to actuator 863. We carefully verified that if we apply the same voltage to them, these two actuators respond correctly to the command and have gains comparable to the other actuators. Actuator 863 is fortunately on the edge of the DM, so we can center the pupil with no influence from this actuator. Since then, we have recentered the pupil to include actuator 197 again (see Figure 9). We systematically apply the same voltage to both actuators simultaneously.
The dead actuator
Finally, we noticed that actuator 769 does not respond at all to our commands. This actuator is on the far edge of the DM. It is possible that it broke during transportation to the LAM laboratory, but as it was far outside the pupil, we may have previously missed this failure. This actuator is fixed at the 0% value regardless of the applied voltage. However, we checked that it has no influence beyond 2 inter-actuator pitch.
CONCLUSION AND CONSEQUENCES ON THE BENCH
The identification of the faulty actuators and the solutions to overcome these dysfunctions have enabled us to recenter the pupil on our DM. Figure 9 shows the position of the pupil on the DM after the study at LAM. This centering has enabled us to move away from the edges of the DM. We also saw that this centering is preferable to limit the introduction into the pupil of aberrations at high frequencies not reachable by the DM. Finally, we recently lowered the average value of the voltages on the DM from 70% to 25% and improved by a factor of 2 the minimum stroke reachable by each actuator. These upgrades played an important role in the improvement of our performance on the THD bench.
Figure 1: Numbered actuators and position of the pupil on the DM before March 2013 (in green). The numbering starts at 0 in the bottom right corner and ends at 1023 in the top left corner. The actuators 841 and 197, in red, considered defective, were not used. Therefore, the pupil used was reduced (27 actuators along the diameter of the pupil only) and offset on the DM.
Figure 2: The interferometric bench at LAM. Figure from Liotard et al. (2005).
Figure 3: DM cross sections. Left: cross section of the whole DM surface in black when all actuators are at 70% in voltage. Each point on this curve corresponds to an average over an actuator-wide band. The blue line shows the result after removing the frequencies accessible to the DM (in post-processing, with a smoothing filter). The vertical lines indicate the limits of the 27x27 actuator pupil before March 2013 (red dotted line) and a centered pupil of the same size (brown dotted line). Right: same cross sections for different voltages applied to all the actuators (from 0% to 90%). Piston was removed and the curves were superimposed. The abscissas are measured in inter-actuator pitch and the y-axis is in nanometers.
Figure 4: Surface of the DM in large field on the left (the whole DM is about 10 mm by 10 mm) and in small field on the right, centered on the 4 by 4 central actuators (i.e. 1.2 mm by 1.2 mm). In both cases, all the actuators are set to 70% in voltage. To observe the fine structures of the DM, low frequencies were removed digitally in post-processing. On the left image, we can see actuator 769 (bottom left), which is fixed at 0% (see Section 6.3).
Figure 6: Influence function. Left: measurement of the influence function of a central actuator. Center: cross section of the influence function in logarithmic scale along a principal direction of the mirror and along a diagonal direction. Right: cross section of the influence function along a principal direction, on which is superimposed the cross section of a simulated influence function. The abscissas are in inter-actuator pitch and the vertical axes are in nanometers.
Figure 7: Study of one actuator: stroke and gain. Left: influence functions for different applied voltages. Right: maximum values of these influence functions in red and quadratic gain (black solid curve). The minimum percentage applicable (8.93 10^-3 %) can produce different minimum strokes depending on the position on this quadratic curve: we plot the minimum stroke around each voltage in blue (the scale of this curve, in nanometers, can be read on the right axis).
Figure 8: Study of the slow actuator. Temporal response to a +5% command in voltage for a normal actuator (left) and for actuator 841 (right). Starting from a voltage of 70%, we send a +5% command at 0 s, wait for this command to be applied and then send a -5% command, at 5.39 s for the normal actuator and at 20.14 s for actuator 841. The vertical axis is in % of the stroke, the abscissa in seconds since the +5% command. The dashed blue line indicates the sending of the -5% command.
Figure 9: This study allowed us to identify precisely the causes of the actuator failures and to recenter the pupil on the DM, including actuators 197 and 841.
[Figure 1 map: actuator numbering grid showing the damaged actuators and the pupil position before March 2013.]
[Figure 9 map: actuator numbering grid showing the fixed (dead) actuator, the coupled actuators, the slow actuator and the new pupil position.]
To see the latest results on this high contrast bench, see Galicher et al. (2014) 7 and Delorme et al. (2014). 8
ACKNOWLEDGMENTS
J. Mazoyer is grateful to the CNES and Astrium (Toulouse, France) for supporting his PhD fellowship. The DM study at LAM was funded by CNES (Toulouse, France).
Design and evaluation of a new diagnostic instrument for osmotic gradient ektacytometry
The ability of red blood cells (RBC) to change their shape under varying conditions is a crucial property allowing the cells to traverse capillaries narrower than their own diameter. Ektacytometry is a technique for measuring deformability by exposing a highly diluted blood sample to shear stress and evaluating the resulting elongation in RBC shape using a laser diffraction pattern.
Two main methods are used to characterize RBC deformability in ektacytometry:
(1) In "Shear Scan", the osmolality of the suspending medium is kept constant and the diffraction pattern is recorded while the applied shear stress is gradually increased, giving deformability as a function of shear stress.
(2) In "Osmotic Scan", the shear stress is kept constant, but RBCs are mixed in a medium where osmolality is increased gradually. The diffraction pattern then shows deformability as a function of osmolality. This mode is also known as "osmotic gradient ektacytometry", and provides information on membrane stiffness, intracellular viscosity and surface-area-to-volume ratio [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF].
Measurement of RBC deformability in Osmotic Scan mode has made it possible to diagnose several hereditary disorders related to cell membrane or hemoglobin defects such as spherocytosis, elliptocytosis, or stomatocytosis. RBC deformability depends on both cytosolic and membrane parameters, and measurement of the osmotic deformability profile has been used to monitor patient treatment aimed at normalization of RBC properties.
Ektacytometry has been dominated by the Taylor-Couette cylinder shearing technique for the last four decades. However, equipment based on flow channel shearing is simpler, cheaper and easier to maintain. In recent years, several studies have shown that the results obtained by this technique, in the shear scan ektacytometry mode, are valid for clinical use. However, no osmotic gradient ektacytometer based on a flow channel had been designed and built before the present research project.
In this dissertation the design, construction and testing of an osmotic gradient ektacytometer based on a flow channel is described in detail. The new instrument has several advantages over the currently used rotating cylinder technique. These are:
-Lower quantities of blood sample are required -important for newborn babies and experimental mouse models used in bio clinical studies.
-Closed circuit design giving better sterile conditions and lowering sample contamination risk.
-Closed circuit allows monitoring of additional parameters like oxygenation and temperature. It also permits the instrument to be more compact in size and lighter.
-No moving parts in the flow channel leading to lower power consumption, lower production/manufacturing cost, and simplification of maintenance.
-Test conditions are closer to blood vessel physiology.
-Provides better precision because shear stress does not depend on viscosity under constant pressure across the channel.
The research involved the design of several components:
-A new hydraulic system intended to create varying osmolality solution with a fixed viscosity.
-The use of various sensors and associated electronics for the measurement of temperature, conductivity, differential pressure and laser beam intensity.
-Design of an electronic board (based on a microcontroller programmed to control valves and pumps and to communicate with user interface program on a computer).
Comparative experiments were performed in order to determine the most effective means of measuring the elliptical shape of the diffraction pattern which represents the average RBC deformability. Results obtained by a camera were compared to those obtained by a four quadrant photodiode detector.
The study was also complemented by a theoretical analysis based on fluidic modeling, in order to correlate experimental results with the Taylor-Couette cylinder ektacytometer operation principle, and to optimize the design of the full device.
The shear stress distribution in a shear driven flow system (Taylor-Couette cylinders) is different from that generated in a pressure driven flow in a channel. In a Taylor-Couette system all RBCs are exposed to the same shear stress, but in a flow channel the distribution of shear stress is not uniform. This means that the interpretation of the diffraction pattern in these two different systems represents potentially different deformability "indexes". This difference is assessed theoretically, but the results need to be verified experimentally for a variety of pathologies in order to confirm that both systems give comparable results in clinical use. A small number of pilot experiments performed to date show comparable results.
The dissertation is divided into seven chapters including a general introduction and conclusions. A short description of the thesis structure is found at the end of the introduction chapter, section 1.6 "Aim, objectives and Dissertation Overview".
INTRODUCTION
Circulation of Red Blood Cells (RBC) throughout the vascular system is essential for delivery of oxygen to body tissues. Most of the oxygen exchange with the tissues is done in the smallest capillaries of the body, and the diameter of those is generally smaller, by as much as one third, than the diameter of the RBC. Therefore, the ability of RBC to repeatedly and reversibly deform is crucial for proper circulation and oxygen exchange. In fact, during the four months average lifespan of an RBC in humans, it goes through nearly a quarter of a million cycles of stretching and relaxation.
Flaws in RBC deformability result in failure of the RBCs to circulate properly in the capillaries, as well as in a decrease in their flow rate in macrovascular vessels, due to deterioration of the shear thinning property compared to healthy blood [START_REF] Chien | Biophysical behavior of red cells in suspensions[END_REF] [START_REF] Schmid-Schönbein | Influence of Deformability of Human Red Cells upon Blood Viscosity[END_REF]. Decrease in RBC deformability can result from hereditary or acquired pathologies such as sickle cell anemia, spherocytosis, elliptocytosis, stomatocytosis, auto-immune hemolytic anemia and malaria. RBC deformability disorders are genetically sustained in human populations in areas where malaria is endemic. Despite their severe health effects, they confer survival advantages for those affected by the heterozygous form who contract malaria [START_REF] Haldane | The Rate of Mutation of Human Genes[END_REF] [START_REF] Allison | Polymorphism and Natural Selection in Human Populations[END_REF]. For this reason the majority of these pathologies are geographically confined to the tropics and subtropics. Due to poverty in some of these regions, there is a lack of commercial interest and a striking lack of resources allocated for research, diagnosis and treatment in this field.
The study of Red Blood Cell (RBC) disorders is of primary importance, not only for diagnosis and treatment, but also because it may one day lead to understanding the mechanisms preventing malaria which may, in turn, contribute to a discovery of efficient preventive means.
My work has been motivated by the desire to contribute to both research and treatment of hereditary RBC disorders.
History
RBCs were first discovered by Jan Swammerdam in 1658, but a first published report only appeared in 1674, by Antonie Van Leeuwenhoek, who also described their shape-changing feature [START_REF] Van Leeuwenhoek | Microscopical observations concerning blood, milk, bones, the brain, spittle, and cuticula[END_REF]. A modern milestone in the field was the work of Fahraeus and Lindqvist showing that blood viscosity decreases strongly with reduced tube diameter (below a critical value of 0.3 mm) [START_REF] Fahraus | The viscosity of the blood in narrow capillary tubes[END_REF]. The role of deformability in the survival and proper functioning of RBCs has long been known. However, its measurement only started in 1964 when Rand and Burton introduced the micropipette aspiration technique [START_REF] Rand | Mechanical Properties of the Red Cell Membrane: I. Membrane Stiffness and Intracellular Pressure[END_REF]. In the following years several other methods of measuring RBC deformability were published, including filtration, optical trapping, rheoscopy, flow channels and ektacytometry, as detailed in § 2.1.
Blood
Blood is a non-Newtonian, viscoelastic and thixotropic suspension composed of roughly half plasma and half cellular particles. The particles are mostly cells and cell fragments: red blood cells (RBC), white blood cells (WBC), and thrombocytes (platelets). RBCs make up around 98% of the total volume of cells in the blood. The average hematocrit (the RBC fraction of the total blood volume) is 45% in men and 42% in women.
Plasma, the extracellular portion of blood, consists of water (92%), proteins, nutrients, minerals, waste products, hormones, carbon dioxide, glucose, electrolytes and clotting factors. It is a Newtonian fluid with normal viscosity range of 1.10 -1.30 cP at 37°C.
Blood viscosity, which has a crucial impact on circulation, may be subject to several anomalies and depends on hematocrit, RBC deformation, aggregation and plasma viscosity. These factors are dynamic and vary with shear stress. The normal blood viscosity range is 3-4 cP at 37°C.
Blood flow in vessels with diameters between 10 µm and 300 µm has significant non-Newtonian properties known as the Fahraeus-Lindqvist effect [START_REF] Fahraus | The viscosity of the blood in narrow capillary tubes[END_REF].
Red Blood Cells
Red blood cells account for well over half of the total number of cells in a human body (Bianconi et al., 2013). Around 2.4 million of them are replaced each second in an adult body [START_REF] Sackmann | Biological Membranes Architecture and Function[END_REF]. Mammalian RBCs are non-nucleated cells. Their nucleus and mitochondria are expelled during bone marrow extrusion and cell maturation. The resulting higher hemoglobin content improves their capacity for oxygen storage and diffusion. Moreover, the biconcave cell shape of normal RBCs, which improves their deformability and hence their circulation, would not be possible with the presence of a nucleus. The normal human RBC has an average diameter of 7.82 µm
(SD ±0.58 µm), and an average thickness of 2.58 µm (SD ±0.27 µm) at the wider parts
and 0.81µm (SD ±0.35µm) in the center [START_REF] Evans | Improved measurements of the erythrocyte geometry[END_REF]. RBCs of neonates, compared to adult cells, have 21% larger volume, 13% greater surface area and 11%
wider diameter [START_REF] Linderkamp | Geometry of Neonatal and Adult Red Blood Cells[END_REF].
The biconcave shape of a normal RBC gives it another advantage in achieving efficient oxygen transfer, since its average surface area of ~140 µm² is considerably higher than 97 µm², which is the surface area of a sphere having the same 90 fL volume [START_REF] Canham | The minimum energy of bending as a possible explanation of the biconcave shape of the human red blood cell[END_REF] [START_REF] Lenard | A note on the shape of the erythrocyte[END_REF]. The cell shape is a result of the interaction between the membrane, a network of skeletal proteins and the cytoplasm. Aging cells are removed from circulation by the spleen after a four-month average lifespan, during which they travel a distance of around 400 km [START_REF] Sackmann | Biological Membranes Architecture and Function[END_REF].
The capacity of RBCs to deform allows them to traverse capillaries narrower than their own diameter. Furthermore, the deformability of RBCs also improves circulation in macrovascular vessels, since it is responsible for the reduction in blood viscosity (Figure 2). The deformability of RBCs is defined as their ability to modify their shape in response to externally applied forces. In blood circulation these forces are forms of stress imposed either by fluid flow with cell-to-cell interactions or by a vessel wall.
Human RBCs take around one minute to complete one cycle of circulation. In their lifetime RBCs are repeatedly subject to deformation, in around one hundred and fifty thousand circulation cycles.
The degree and other characteristics of deformability of RBCs depend on multiple factors related not only to the general flow conditions such as vessel geometry, blood pressure, blood viscosity, hematocrit and temperature, but also to the cell biomechanical and biochemical characteristics.
Three main cell properties responsible for RBC deformability are: cell surface to volume ratio (S/V), intracellular viscosity and cell membrane viscoelastic properties [START_REF] Chien | Theoretical and experimental studies on viscoelastic properties of erythrocyte membrane[END_REF].
The biconcave geometry gives the cell an excess of surface area, allowing it to easily fold and elongate under smaller forces. The mean volume of an RBC is 94 fl (SD ±14 fl) and its membrane surface area is 135 µm² (SD ±16 µm²) (Evans and Fung, 1972) (McLaren et al., 1987). This represents an increase in membrane surface area, compared to a sphere of equal volume, by a factor of 1.4. The larger surface area allows for more efficient gas exchange and for substantial changes in shape under shearing forces, with very little area expansion. The maximal fractional area expansion producing lysis was found to lie between 2% and 4% [START_REF] Evans | Elastic area compressibility modulus of red cell membrane[END_REF]. The biconcave shape is due to the mechanical and biochemical properties of the membrane, which is composed of a phospholipid bilayer supported by cytoskeletal proteins.
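The excess-area factor quoted above follows from simple geometry; the short sketch below reproduces it from the mean volume and surface area given in the text.

```python
# Hedged sketch: surface area of a sphere with the same volume as an average RBC,
# and the resulting excess-area factor (values taken from the text).
import math

volume_um3 = 94.0          # mean RBC volume, 94 fl = 94 um^3
surface_um2 = 135.0        # mean RBC membrane area, um^2

r_equiv = (3.0 * volume_um3 / (4.0 * math.pi)) ** (1.0 / 3.0)
sphere_area = 4.0 * math.pi * r_equiv ** 2
print(f"equivalent sphere area: {sphere_area:.0f} um^2")        # ~100 um^2
print(f"excess-area factor: {surface_um2 / sphere_area:.2f}")   # ~1.4
```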
The RBC membrane surrounds a liquid cytoplasm whose viscosity depends mainly on the mean corpuscular hemoglobin concentration (MCHC). Its value is ~7 cP for a normal MCHC of ~32 g/dl. It rises sharply with MCHC in a nonlinear fashion: it is nearly quadrupled at an MCHC of 40 g/dl [START_REF] Chien | Red Cell Deformability and its Relevance to Blood Flow[END_REF]. MCHC varies with changes in plasma osmolality leading to hydration or dehydration of the cell. It also increases with cell aging, resulting in decreased deformability of old cells [START_REF] Sutera | Age-related changes in deformability of human erythrocytes[END_REF]. For a healthy RBC, the viscosity contrast λ, defined as the ratio of cytosol over blood plasma viscosity, is approximately λ = 5 [START_REF] Fischer | The red cell as a fluid droplet: tank tread-like motion of the human erythrocyte membrane in shear flow[END_REF] [START_REF] Fedosov | Deformation and dynamics of red blood cells in flow through cylindrical microchannels[END_REF]. The higher this contrast, the higher the threshold shear stress at which RBCs flowing in the macrocirculation change from tumbling motion (rigid body rotation) into tank-treading motion (where the membrane turns around the cell content) [START_REF] Fischer | Threshold shear stress for the transition between tumbling and tank-treading of red blood cells in shear flow: dependence on the viscosity of the suspending medium[END_REF]. The increase in threshold has an effect on the whole blood viscosity and hence results in a reduced flow rate. Tank-treading motion has an important effect on RBC deformability, since the induced cell membrane rotation transmits the external shear into the cell interior, causing it to participate in the flow and behave more like a fluid [START_REF] Keller | Motion of a tank-treading ellipsoidal particle in a shear flow[END_REF].
High elasticity of the membrane and cytoskeletal structure is an important determinant of the ability of RBCs to deform. The cell membrane can repeatedly elongate, under stress, to a length of over twice its diameter. This deformation is reversible, so the cell recovers its biconcave discoid shape after the shear stress is removed. In other words, the cell is highly elastic and also has a memory of its shape. The rim, for instance, is always formed by the same membrane elements.
These elements regain their original position after tank treading movement under shear [START_REF] Fischer | Shape Memory of Human Red Blood Cells[END_REF]. RBCs have a quick response time to deformation and recovery, of approximately 80 ms [START_REF] Chien | Theoretical and experimental studies on viscoelastic properties of erythrocyte membrane[END_REF].
Pathological disorders affecting RBC deformability
A short review of RBC abnormalities characterized by RBC deformability alteration follows with a focus on hereditary hemolytic anemias. Some of these present very similar symptoms and a correct specific diagnosis is critical.
Since most of these hereditary disorders confer resistance to Malaria they are significantly more prevalent in regions where Malaria is endemic [START_REF] Allison | Polymorphism and Natural Selection in Human Populations[END_REF][START_REF] Kwiatkowski | How Malaria Has Affected the Human Genome and What Human Genetics Can Teach Us about Malaria[END_REF].
Many studies demonstrate that RBC deformability is altered when either membrane or cytosol is altered. The source for these alterations could be either acquired or hereditary.
In these abnormalities, altered cell geometry is manifested in alteration of S/V ratio and results in decreased cellular deformability, compromised red cell function, osmotic fragility and lessened survival of the RBC. This is the case of sickle cell disease (SC), hereditary spherocytosis (HS), hereditary elliptocytosis (HE), hereditary stomatocytosis (HSt), thalassemia or malaria [START_REF] Allard | Red Cell Deformability Changes in Hemolytic Anemias Estimated by Diffractometric Methods (Ektacytometry) Preliminary Results[END_REF][START_REF] Mokken | The clinical importance of erythrocyte deformability, a hemorrheological parameter[END_REF].
The ektacytometer is useful, for both diagnosis and medical follow-up of treatment, in RBC abnormalities characterized by decreased cell deformability.
Ektacytometer shear scan curves show abnormal deformability, but they give similar curves for several pathologies; therefore they do not reveal enough information for specific differential diagnoses. Osmotic scan curves are distinctive for different pathologies. Some examples are found in Figure 3. Extensive information on the interpretation of osmotic scan ektacytometry curves can be found in [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF][START_REF] Johnson | Osmotic scan ektacytometry in clinical diagnosis[END_REF][START_REF] King | ICSH guidelines for the laboratory diagnosis of nonimmune hereditary red cell membrane disorders[END_REF][START_REF] Mohandas | Analysis of factors regulating erythrocyte deformability[END_REF].
Hereditary Spherocytosis (HS)
Hereditary spherocytosis affects all ethnic groups but is more common in people of northern European ancestry (2 in 10,000) [START_REF] Eber | Hereditary spherocytosis-defects in proteins that connect the membrane skeleton to the lipid bilayer[END_REF]. Clinical manifestations of HS are highly variable, ranging from mild to very severe anemia. A common feature of all forms of HS is loss of RBC membrane surface area, resulting in a change of cell shape from discocytes to stomatocytes to spherocytes (Figure 4).
Reduction in membrane surface results in reduced deformability and inability to effectively traverse the spleen, which sequesters and removes spherocytes from circulation. Splenectomy reduces the severity of anemia by increasing the life span of spherocytic red cells.
Hereditary Elliptocytosis (HE)
Although HE has a worldwide distribution, it is more common in malaria-endemic regions, with a prevalence between 0.6% and 1.6% in West Africa [START_REF] Dhermy | Spectrin-based skeleton in red blood cells and malaria[END_REF].
Most cases of HE are heterozygous and hence asymptomatic. Around 10% of patients, mostly homozygous for HE variants, experience mild to severe anemia, including the severe variant hereditary pyropoikilocytosis (HPP). A mechanically unstable membrane results in progressive loss of membrane surface area and transformation of cell shape from discocyte to elliptocyte, as a function of time in the circulation (Figure 4). As with HS, splenectomy increases the life span of fragmented red cells and thus reduces the severity of anemia.
Hereditary Ovalocytosis (SAO)
A rigid membrane is a distinguishing feature of hereditary ovalocytosis [START_REF] Mohandas | Rigid membranes of Malayan ovalocytes: a likely genetic barrier against malaria[END_REF]. The geographic distribution is highly correlated with malaria-infected zones, mainly in South East Asia. SAO prevalence can reach 6.6 to 20.9% in several groups of Malayan aborigines (Amato and Booth, 1977). Despite a marked increase in cell membrane rigidity (4 to 8 times less elastic than a normal membrane, as assessed by ektacytometry and micropipette aspiration), most affected people experience minimal hemolysis. Except for one reported case [START_REF] Picard | Homozygous Southeast Asian ovalocytosis is a severe dyserythropoietic anemia associated with distal renal tubular acidosis[END_REF], the homozygous form has not been described and is thought to be lethal [START_REF] Liu | The homozygous state for the band 3 protein mutation in Southeast Asian Ovalocytosis may be lethal[END_REF].
Hereditary Stomatocytosis (HSt)
HSt is a rare hemolytic anemia characterized by RBCs taking the form of a cup with a mouth-shaped (stoma) area. Several varieties of HSt have been identified, with the two main ones associated with the hydration status of the cell. The impact on osmotic gradient ektacytometry curves is a curve shifted to the left (DHSt) or to the right (OHSt) of a normal deformability curve.
Hereditary Xerocytosis (also called DHSt Dehydrated Hereditary Stomatocytosis)
DHSt is the least severe but the most common of the HSt conditions. Its prevalence is estimated at 1 patient out of 50,000 individuals, compared with 1 patient per 100,000 individuals for OHSt (Hereditary Stomatocytosis -Anaemias -Enerca, European Network for Rare and Congenital Anaemias). Due to dehydration, the cell MCHC and cytoplasmic viscosity are increased. Cell dehydration has a marginal effect on the survival of DHSt cells. DHSt is therefore associated with well compensated anemia and a mild to moderately enlarged spleen.
Overhydrated hereditary Stomatocytosis (OHSt)
OHSt is a rare disorder where RBC overhydration results in increased cell volume with no increase in surface area. Unlike in DHSt, cell survival in OHSt is significantly compromised, leading to moderately severe to severe anemia. Cells with increased sphericity are sequestered by the spleen. However, while splenectomy is beneficial in the management of HS and HE, it is completely contraindicated in both types of hereditary stomatocytosis because of an increased risk of venous thromboembolic complications. Osmotic scan ektacytometry is very important in providing a proper diagnosis of HSt.
Sickle cell disease
Sickle cell disease (SCD), or sickle cell anemia, is an inherited blood disorder characterized by abnormal mechanical and rheological behavior of RBCs. SCD is caused by sickle hemoglobin (HbS), a variant hemoglobin (Hb) molecule resulting from a point mutation in the β-globin gene. Upon deoxygenation, HbS polymerizes or self-assembles inside the RBC and significantly alters and damages the cytoskeleton and membrane cortex, resulting in a sickle-shaped RBC. This sickle RBC has decreased deformability, causing abnormal rheology in sickle-cell blood and eventually various complications of SCD: painful crises, ischemia and organ damage can result when microcirculation is impeded by the poorly deformable RBCs. As a result of these complications, a study conducted in the United States during the 1980s showed a decrease of roughly 25 to 30 years in the life expectancy of sickle cell anemia patients [START_REF] Platt | Mortality In Sickle Cell Disease -Life Expectancy and Risk Factors for Early Death[END_REF]. Over the last two decades, new treatments have improved life expectancy.
Aim, objectives and Dissertation Overview
The aim of this dissertation is to explore the theoretical basis, build a prototype and produce a proof of concept for a new diagnostic instrument for hereditary RBC disorders, based on a microfluidic method.
In order to achieve this aim I explored the following objectives:
1. Describe the current state-of-the-art in the field, including research and commercial applications, so as to place my work in the context of existing technology. This is described in chapter 2 "State of the Art".
2. Study the theoretical difference between the current method used in osmolality gradient Ektacytometry diagnosis based on rotating concentric cylinders, and my proposed microfluidic method. This is explored in chapter 3 "Comparison between Pressure and Shear driven flows for Osmolality Gradient Ektacytometry".
3. Compare the two methods of measurement of deformability of RBCs that are used in Ektacytometry: photodetectors and a camera. The comparison allowed me to choose the preferred method to employ in the current work.
This comparison is done in chapter 4 "Comparison between a camera and a four quadrant detector, in the measurement of red blood cell deformability as a function of osmolality".
4. Model the hydraulic system in order to evaluate flow rate magnitudes necessary to achieve proper osmolality increasing solution for Osmoscan curves. This is explored in chapter 5 "Modelling and Mathematical analysis".
5. Design and build a prototype, based on knowledge acquired in previous chapters, and compare measurement results with an Ektacytometer using rotating cylinders. This is explored in chapter 6 "Design and Proof of principle".
6. Conclusions from this study are presented in chapter 7.
STATE OF THE ART
Micropipette Aspiration (MA)
The micropipette aspiration technique was first introduced for the study of cell mechanics (Mitchison and Swann, 1954). The technique was later applied to RBC by [START_REF] Rand | Mechanical Properties of the Red Cell Membrane: I. Membrane Stiffness and Intracellular Pressure[END_REF] and was introduced as a laboratory clinical tool in the early 1970's by Evans [START_REF] Evans | New Membrane Concept Applied to the Analysis of Fluid Shearand Micropipette-Deformed Red Blood Cells[END_REF]. The RBC is aspirated by a glass micropipette of internal diameter between 1 and 3 µm, to which a negative pressure is applied. The length of the aspirated part of the cell depends on the amount and duration of the suction pressure, the micropipette diameter and the cell properties. There are three phases in this aspiration process. First a small part of the cell membrane produces a tongue, inside the micropipette, of an equibiaxial form (Figure 5). Increased aspiration pressure results in a longer aspirated section that starts to buckle. At a certain point, the aspiration pressure reaches a threshold level causing the cell to flow inside the tube. In some cases the cells are osmotically swollen into a nearly spherical form prior to the aspiration. Comparing their aspiration rate to that of cells in isotonic conditions can give supplementary information [START_REF] Hochmuth | Micropipette aspiration of living cells[END_REF]. The instrument consists of three parts: a pressure controller, a chamber where the cells are aspirated by the micropipette, and a microscope.
Different mechanical properties of the cell can be determined by various measurements:
-Time required for the aspirated part of the cell to withdraw and return to its original shape after release of pressure. The shape recovery time constant depends primarily on the membrane viscosity and its elasticity, cytoplasm viscosity and cell thickness.
-Pressure that is required to aspirate a length of the cell equal to the radius of the pipette.
-Ratio of the aspirated length to the micropipette radius at a given pressure.
-Critical pressure above which the cell flows inside the micropipette.
The MA technique allows measurement of different mechanical properties of individual cells such as membrane viscosity, elasticity and bending modulus (Hochmuth et al., 1979) [START_REF] Artmann | Temperature transitions of protein properties in human red blood cells[END_REF] [START_REF] Brooks | Rheology of blood cells[END_REF] [START_REF] Chien | Theoretical and experimental studies on viscoelastic properties of erythrocyte membrane[END_REF]. It is a useful research tool, but it is not practical for clinical use due to its time-consuming procedure. Moreover, due to numerous approximations, results need validation by other measurement methods.
Attempts at the automation of an MA system have not found a commercial application [START_REF] Heinrich | Automated, high-resolution micropipet aspiration reveals new insight into the physical properties of fluid membranes[END_REF] (Shojaei-Baghini et al., 2013).
Rheoscopy
This method was introduced in 1969 [START_REF] Schmid-Schoenbein | Microscopy and viscometry of blood flowing under uniform shear rate(rheoscopy)[END_REF] for the measurement of single red blood cell deformability. A rheoscope consists of a rheological chamber (flow system) placed on an inverted microscope stage and equipped with means for optical measurement.
One variant of rheoscope uses a transparent cone-plate viscometer as the rheological chamber. Cells suspended in the viscometer are subjected to a constant shear and are photographed through a microscope (Figure 6). Photographic images are measured and cell elongation (E) is calculated from the length and width of the ellipsoid image.
Elongation is defined as E= (L-W)/(L+W). Shear force acting on the RBC can be varied either by changing the viscosity of the suspending fluid or the speed of rotation of the cone. The effects on cell dimensions can be directly measured.
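As a minimal illustration (the function name is ours, not part of any rheoscope software), the elongation index defined above can be computed as:

```python
# Hedged sketch: elongation index of the ellipsoidal cell (or diffraction-pattern) image,
# E = (L - W)/(L + W): 0 for a circle, approaching 1 for a very elongated ellipse.
def elongation_index(length_um, width_um):
    return (length_um - width_um) / (length_um + width_um)

print(elongation_index(12.0, 4.0))   # e.g. 0.5 for a 3:1 aspect ratio
```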
A microfluidic Rheoscope is obtained by using a flow channel as the flow system [START_REF] Zhao | Microscopic investigation of erythrocyte deformation dynamics[END_REF].
Several versions of the Rheoscope, in which the deformation is measured automatically rather than directly observed, have been proposed using video or high-speed CCD cameras. One such design is based on rotating parallel plates establishing the shearing system, with cell deformability determined by an image processing algorithm [START_REF] Dobbe | Measurement of the distribution of red blood cell deformability using an automated rheoscope[END_REF]. Another design is based on a microfluidic channel [START_REF] Bransky | An automated cell analysis sensing system based on a microfabricated rheoscope for the study of red blood cells physiology[END_REF]. The automated versions of the Rheoscope can yield histograms of cell deformability and thus give information about the deformability distribution of an RBC population. However, the technology is still limited by time-consuming computer processing and by inaccuracies in image interpretation for complex geometrical forms.
Optical Tweezers (OT)
The founding principles of optical tweezers, also called «laser trapping», were first discovered by Arthur Ashkin in 1970 [START_REF] Ashkin | Acceleration and Trapping of Particles by Radiation Pressure[END_REF]. He showed that microscopic dielectric particles can be lifted and held, both in air and water, by highly focused optical beams [START_REF] Ashkin | Optical Levitation by Radiation Pressure[END_REF]. However, it was only in 1986 that these beams were employed in an instrument [START_REF] Ashkin | Observation of a single-beam gradient force optical trap for dielectric particles[END_REF]. The technique was later labeled «optical tweezers», since it allows picking up and moving small particles using an optical beam. The physical principle is based on a balance of two types of optical forces created by the interaction with a small spherical dielectric particle: light scattering forces acting along the wave propagation direction, and gradient electric forces acting in a lateral direction towards the beam center (Figure 7). The magnitude of these forces is of the order of piconewtons (pN).
In the case of transparent dielectric beads whose diameter is greater than the wavelength of the laser beam, the forces can be explained by the effect of refracted rays. The refracted rays exit the bead in a direction different from that in which they entered, indicating that the momentum of the light has changed. An opposite momentum change is imparted to the particle, according to conservation of linear momentum. A detailed explanation of the physical forces involved, using two distinct approaches, ray optics and the electric field associated with the light, can be found in Contemporary Physics by Justin E. Molloy and Miles J. Padgett [START_REF] Molloy | Lights, action: Optical tweezers[END_REF].
OT can be constructed by modifying an inverted microscope. A typical setup is shown in Figure 8. OT have been used to measure RBC deformability [START_REF] Hénon | A new determination of the shear modulus of the human erythrocyte membrane using optical tweezers[END_REF] [START_REF] Dao | Mechanics of the human red blood cell deformed by optical tweezers[END_REF]. An Nd:YAG laser source of 1064 nm wavelength is used because of the low absorption coefficient at this wavelength. The beam power is up to a few watts, and light absorption can damage the trapped cell by heating [START_REF] Block | Making light work with optical tweezers[END_REF]. Two microspheres are attached to opposite sides of the cell. One of the beads is also attached to the microscope slide, which is fixed to the stage; the other bead is lifted slightly by laser trapping. The stage is shifted in such a way that the cell is stretched (Figure 9) until it escapes from the trap. Nonlinear elastic and viscoelastic deformation of the cell under increasing optical force has been observed [START_REF] Mills | Nonlinear elastic and viscoelastic deformation of the human red blood cell with optical tweezers[END_REF]. From its dimensional variation in response to the optical force, using a mathematical model, the shear modulus of a normal RBC was calculated to be approximately 10 µN/m [START_REF] Dao | Mechanics of the human red blood cell deformed by optical tweezers[END_REF].
Atomic Force Microscopy (AFM)
Atomic Force Microscopy (AFM) is a scanning technique providing images of surfaces on a sub-nanometer (atomic) scale. It can also measure surface forces [START_REF] Binnig | Atomic Force Microscope[END_REF]. A probe composed of a spring arm (cantilever) and a microscopic tip moves across the surface in physical contact with it. The vertical displacement of the arm caused by the examined surface is amplified by a laser beam and a quadrant photodetector; the quadrant detector measures very small changes in the position of the laser beam reflected by the cantilever surface (Figure 10). The signal from the quadrant photodetector is used as feedback for the control of the microscope. AFM can also be used to measure individual RBC deformability. For this purpose a parabolic or spherical tip is used. A force on the order of several nN is applied to the tip and the stage displacement is first measured against a hard material and then against the cell. The Young's modulus E is calculated from the indentation relative to the applied force, taking into account the tip radius [START_REF] Weisenhorn | Deformation and height anomaly of soft surfaces studied with an AFM[END_REF].
The principle is based on an early instrument called Cell Poker (McConnaughey and Petersen, 1980).
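As an illustration only, the following Python sketch inverts the Hertz contact model for a spherical tip, F = (4/3)·E/(1-ν²)·√R·δ^(3/2), to obtain E from a single force-indentation pair. The thesis does not specify which contact model is used, and the tip radius, Poisson ratio and data values below are hypothetical.

import math

def young_modulus_hertz_sphere(force_N, indentation_m, tip_radius_m, poisson=0.5):
    """Invert the Hertz relation F = (4/3) * E/(1-nu^2) * sqrt(R) * delta^(3/2)
    for the Young's modulus E (spherical tip, small indentation). Assumed model."""
    if indentation_m <= 0:
        raise ValueError("indentation must be positive")
    return 0.75 * force_N * (1.0 - poisson**2) / (math.sqrt(tip_radius_m) * indentation_m**1.5)

# Hypothetical numbers: 2 nN force, 500 nm indentation, 1 um spherical tip.
E = young_modulus_hertz_sphere(2e-9, 500e-9, 1e-6)
print(f"Estimated Young's modulus: {E:.0f} Pa")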
Magnetic Twisting Cytometry
Magnetic Twisting Cytometry (MTC) applies a magnetic field of a precise magnitude to ferrimagnetic microbeads attached to the cell membrane. The resulting displacement of the microbeads is measured and gives information on the cell mechanical properties [START_REF] Wang | Mechanotransduction across the cell surface and through the cytoskeleton[END_REF]. Either a static or an oscillating magnetic field is used and the microbeads displacement is recorded by a CCD camera. The displacement is then computed from the image and the magnitude of rotation and translation is presented as a function of the applied torque (Figure 11).
The advantage of this method is that it allows the measurement of both static and dynamic (time dependent) deformability. Applying an oscillating magnetic force at different frequencies allows the frequency-dependent (viscoelastic) response of the cell to be probed.
Quantitative phase imaging
Quantitative phase imaging (QPI) has the potential of revealing unique cellular information. Unlike other measurement techniques described in this chapter, QPI does not employ an external force on the cell, but rather measures intrinsic nanoscale fluctuations (also called flickering) in cell membrane and uses them as an indicator of linear mechanical response, correlated to RBC deformability [START_REF] Brochard | Frequency spectrum of the flicker phenomenon in erythrocytes[END_REF]Popescu et al., 2006b). Measurement, by QPI, of linear response properties of the RBC membrane at varying states of osmotic stress makes it possible to experimentally probe the nonlinear elastic membrane response [START_REF] Park | Measurement of the nonlinear elasticity of red blood cell membranes[END_REF]. Some other QPI methods were used for RBC studies. Diffraction phase microscopy (DPM) has been used to measure and compare the shear moduli of healthy RBCs as well as those invaded by malaria organisms (Plasmodium falciparum) [START_REF] Park | Refractive index maps and membrane dynamics of human red blood cells parasitized by Plasmodium falciparum[END_REF]. Common-path diffraction optical tomography (cDOT) has been used to study blood storage effect on RBC surface area and deformability [START_REF] Park | Measuring cell surface area and deformability of individual human red blood cells over blood storage using quantitative phase imaging[END_REF].
Images obtained by this technique are shown in Figure 14.
Measurement techniques of RBC populations
Filtration
Filtration is a simple technique in which whole blood or a diluted blood suspension is pressed against a filter membrane with pores smaller than the cell diameter. Only deformable cells can pass through the filter. A filter with 5 µm pores is normally used, with gravity or a pressure gradient applied across the membrane; either a positive pressure [START_REF] Schmid-Schönbein | A simple method for measuring red cell deformability in models of the microcirculation[END_REF] or a negative pressure [START_REF] Reid | A simple method for measuring erythrocyte deformability[END_REF] can be used. The filterability index is given by the filtered volume per minute. This technique is very frequently used because of its low cost and simplicity. However, it has several disadvantages: results depend on cell size and on the occlusion of pores by white blood cells, and if washed erythrocytes are used, the washing not only risks removing non-deformable RBCs but is also time consuming [START_REF] Stuart | Erythrocyte rheology[END_REF]. Small non-deformable cells can pass through the filter pores and are counted as deformable. The method also has poor reproducibility and cannot detect minor deformability changes because of the lack of homogeneity between filters.
Cell Transit Analyzer
The cell transit analyzer is similar to filtration but allows for an automatic measurement of a large number of individual cells. A diluted suspension of RBC is put under pressure against a membrane with several dozens of 3µm to 6µm cylindrical pores. A pure physiological buffer is placed on the other side of the membrane. Two electrodes are placed in the reservoirs on both sides of the membrane. Conductance is measured between the electrodes using a 100 KHz signal. The passage of a non-conductive cell through a pore results in small perturbation in the overall conductivity. Counting the generated resulting pulses and analyzing their duration and rise time, gives indication of cell deformability [START_REF] Koutsouris | Determination of erythrocyte transit times through micropores. I--Basic operational principles[END_REF]. The measurement principle is an extension of the Coulter principle [START_REF] Frank | An Investigation of Particle Flow Through Capillary Models With the Resistive Pulse Technique[END_REF].
The advantage of this technique is that it allows the measurement of a large number of individual cells, giving information on the deformability distribution and detecting subpopulations with different abilities to deform. However, this technique, like the filtration technique, cannot measure severely non-deformable cells, since they occlude the pores; on the other hand, small non-deformable cells are counted as deformable. Using the length and intensity of the conductivity perturbation pulse as an indication of cell deformability is problematic, since it is influenced by cell size and by the shape the cell adopts while passing through the pore: some deformable cells adopt a cup shape that occludes the pore, and some are folded, leaving a slit that allows a path for the electrical current [START_REF] Reinhart | Folding of red blood cells in capillaries and narrow pores[END_REF].
Ektacytometry
Ektacytometry is an optical deformability measurement technique of RBC population, originally developed in France by Bessis in the 1970s [START_REF] Bessis | Diffractometric Method for Measurement of Cellular Deformability[END_REF]. In this technique a highly diluted red cell suspension is exposed to laser light and the diffraction pattern created by the laser beam, going through the suspension, is measured. When RBCs suspended in a high viscosity liquid, are exposed to a shear stress in a flow, they deform, which is in turn reflected by a change in the diffraction pattern of the laser beam. The diffracted pattern has the same eccentricity as the average of the exposed cells but is rotated by 90°. The measurement of the diffraction pattern eccentricity therefore provides a quantitative measure for the average RBC deformability. Two methods exist for the evaluation of the diffraction pattern elongation: Measuring light intensity with photodiodes at defined locations or using a camera with image analysis [START_REF] Finkelstein | Comparison between a Camera and a Four Quadrant Detector, in the Measurement of Red Blood Cell Deformability as a Function of Osmolality[END_REF]. The results are expressed as elongation index (EI) or deformation index
(DI); both reflect the aspect ratio of the elliptical diffraction pattern and increase with cell deformability (see section 4.3, Equation 24).
There are three different techniques for RBC shearing: cylindrical Couette flow (rotating cylinders), planar rotational Couette flow (parallel plates) and plane
Poiseuille flow (microfluidic channel).
In ektacytometry, two main modes are used to characterize RBC deformability:
Shear Scan, where the cells are kept at a constant osmolality and an increasing shear stress is applied (Figure 16), and Osmotic gradient (OsmoScan), where the shear stress is kept constant but the cells are mixed in a medium whose osmolality is gradually increased (Figure 17). The advantage of OsmoScan over Shear Scan is that it allows not only measuring the deformability of the cells but also attributing low deformability to specific cell defects, such as membrane defects or increased intracellular viscosity [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF][START_REF] Johnson | Osmotic scan ektacytometry in clinical diagnosis[END_REF]. Typical osmoscan curves for several RBC disorders are shown in Figure 17. Ektacytometry is a very practical laboratory technique, allowing the mean deformability of an RBC population to be evaluated in a quick, automated experiment. However, it does not give detailed information regarding the distribution of deformability in a mixed cell population.
Several geometries were used as shearing techniques: Couette cylinders, parallel plates, and flow channel. These geometries are described in detail in chapter 3.
Since my work is dedicated to the design and fabrication of a prototype for a new Ektacytometer, I review hereafter the currently available commercial Ektacytometers.
Ektacytometer Commercial Applications
The Technicon (Technicon Instruments, Tarrytown, NY, USA)
The Technicon Ektacytometer, built and commercialized in the early 1990s, comprises a shearing system based on two concentric transparent cylinders, with an inner cylinder diameter of 50.7 mm and a gap of 0.5 mm. The inner cylinder rotates at controlled speeds between 0 and 255 RPM while the outer cylinder is stationary. The sample preparation unit comprises a stir tank and peristaltic pumps, where blood is highly diluted with a transparent Polyvinylpyrrolidone (PVP) solution of 20 cP viscosity. The sample is pumped into the space between the cylinders and sheared by the inner cylinder rotation. A 1 mW HeNe laser beam of 632.8 nm wavelength and 1 mm diameter crosses the sheared blood sample.
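As a rough, non-authoritative illustration of the shear stresses such a geometry can produce, the sketch below applies the narrow-gap Couette approximation developed in chapter 3 (τ ≈ μωR/ΔR) to the dimensions quoted above; it is an order-of-magnitude estimate, not the instrument's calibration.

import math

def couette_shear_stress(rpm, radius_m, gap_m, viscosity_pa_s):
    """Narrow-gap Couette approximation: tau = mu * omega * R / gap."""
    omega = 2.0 * math.pi * rpm / 60.0          # angular velocity, rad/s
    shear_rate = omega * radius_m / gap_m       # 1/s
    return viscosity_pa_s * shear_rate          # Pa

R = 50.7e-3 / 2      # inner cylinder radius (m), from the dimensions above
gap = 0.5e-3         # gap between cylinders (m)
mu = 20e-3           # 20 cP PVP solution, in Pa.s

for rpm in (50, 100, 150, 255):
    print(f"{rpm:3d} RPM -> tau = {couette_shear_stress(rpm, R, gap, mu):5.1f} Pa")
# At the maximum speed of 255 RPM this gives roughly 27 Pa.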
The diffracted laser pattern is projected on a mask containing four holes nearly equidistant from the beam center (there is a slight difference between the distances of the two vertical holes and the two horizontal holes, in order to compensate for the lens effect of the cylinders). A quadrant photodiode detector is placed behind the mask and measures the light intensities behind the four holes (see section 4.3, Figure 32 and Equation 24). The light intensities through the two vertical holes A1, A2 are averaged, and so are those through the two horizontal holes B1, B2; errors due to imperfect centering are thus eliminated. In the case of a round diffraction image the average vertical and the average horizontal light intensities are equal and the elongation index is zero.
Relaxation scan: the viscometer rotates at a pre-set maximum speed and then abruptly stops. Elongation is measured every 20 ms for one second after rotation is stopped. This mode is used to study the speed at which the cell returns to its original form after the shear stress is removed.
The Lorca (Mechatronics, Hoorn)
The Lorca (also called Viscoscan) has a shearing geometry very similar to that of the Technicon Ektacytometer described above and to the original Ektacytometer prototype developed by Bessis and Mohandas [START_REF] Bessis | Mesure continue de la déformabilité cellulaire par une méthode diffractométrique[END_REF] [START_REF] Bessis | Diffractometric Method for Measurement of Cellular Deformability[END_REF] [START_REF] Bessis | Automated ektacytometry: a new method of measuring red cell deformability and red cell indices[END_REF]. The transparent cylinders have a gap of 0.3 mm between them. An increasing speed of rotation is applied to the internal cylinder, resulting in an increasing shear stress of 0.3 to 75 Pa acting on the RBCs suspended in the 30 cP solution between the cylinders [START_REF] Hardeman | Laser-assisted optical rotational cell analyser (L.O.R.C.A.). I: A new instrument for measurement of various structural hemorheological parameters[END_REF]. A laser diode traversing the diluted RBC suspension produces a diffraction pattern, which is captured by a CCD camera. An image processing algorithm calculates an elongation index from the image, and the corresponding shear stress is calculated from the speed of rotation, the viscosity and the cylinder geometry. The instrument is temperature controlled at 37°C.
Osmolality gradient ektacytometry (Osmoscan) is not available on this instrument.
The RheoScan-D
A blood sample diluted in a viscous transparent solution flows through a microfluidic channel driven by a diminishing vacuum pressure. A laser beam traverses the RBC suspension and creates a diffraction pattern projected on a screen. A CCD video camera captures the images and an Elongation Index (EI), as a measure of RBC deformability, is determined from the iso-intensity curves in the diffraction pattern using an ellipse-fitting program. The EI is recorded with respect to the average shear stress, which is calculated from the measured pressure. RBC average deformability is obtained for a range of shear stresses between 0 and 35 Pa (Sehyun [START_REF] Shin | Measurement of red cell deformability and whole blood viscosity using laser-diffraction slit rheometer[END_REF].
Osmolality gradient ektacytometry (Osmoscan) is not available on this instrument.
The Rheodyn SSD
The Rheodyn SSD evaluates the diffraction pattern with photodiodes [START_REF] Ruef | The rheodyn SSD for measuring erythrocyte deformability[END_REF]. Osmolality gradient ektacytometry (Osmoscan) is not available on this instrument either.
COMPARISON BETWEEN PRESSURE AND SHEAR DRIVEN FLOWS FOR OSMOLALITY GRADIENT EKTACYTOMETRY
Introduction
Different shearing geometries used in Ektacytometry were described in section 2.2.
In these geometries RBCs are exposed to flows of different character, resulting in different velocity and shear stress profiles. In this chapter the implications of these differences for Osmolality Gradient Ektacytometry are examined.
The classical result for the effective viscosity of a dilute suspension of spheres was further extended to spheroids, showing that the orientation of the spheroids has an influence on the effective viscosity of the suspension [START_REF] Jeffery | The Motion of Ellipsoidal Particles Immersed in a Viscous Fluid[END_REF] [START_REF] Leal | The effect of weak Brownian rotations on particles in shear flow[END_REF] [START_REF] Hinch | The effect of Brownian motion on the rheological properties of a suspension of non-spherical particles[END_REF] [START_REF] Mueller | The rheology of suspensions of solid particles[END_REF].
Figure 23 shows experimental and calculated viscosity of diluted RBC suspension at different concentrations (Shin et al., 2005a). We can conclude that the very low concentrations of RBC we use (hematocrit generally < 0.5%) allow us to neglect their influence on the solution viscosity. The equation for Reynolds Number (Re) characterizing a flow in a channel with height H much smaller than its width W is:
Equation 2: $\mathrm{Re} = \dfrac{\rho U H}{\mu}$
Where ρ,μ and U are the fluid density, viscosity and flow velocity respectively.
For instance, consider Re for a flow rate of 1 ml/min of a PVP (Polyvinylpyrrolidone) solution, with a density close to that of water, 10³ kg/m³ [START_REF] Bolten | Experimental Study on the Surface Tension, Density, and Viscosity of Aqueous Poly(vinylpyrrolidone) Solutions[END_REF], and a viscosity of µ = 2·10⁻² Pa·s, through a channel of H = 100 µm and W = 5 mm.
We can calculate the average flow velocity as the volumetric flow rate Q divided by the channel cross-section area A: Uavg = Q/A ≈ 3.33 cm/s. However, the distribution of flow velocity across the channel is parabolic (Equation 13), with a maximum velocity at the center, so the maximal (worst case) Reynolds number should be calculated in the lamina at the center of the channel. The maximum velocity is 1.5 times the average velocity (Equation 16). The height H that we take into account for the flow conditions around an RBC is the average undeformed cell diameter, 8 µm [START_REF] Yaginuma | Human red blood cell behavior under homogeneous extensional flow in a hyperbolicshaped microchannel[END_REF] (sometimes the particle radius is used instead [START_REF] Purcell | Life at low Reynolds number[END_REF]). Detailed calculations for the Stokes equation for the three geometries presented below can be found in (George [START_REF] Hirasaki | Chapter 8-laminar flows with dependence on one dimension, college study notes -Transport phenomena[END_REF] and (James O. [START_REF] Wilkes | Chapter 6. Solution of Viscous-Flow Problems -Fluid Mechanics for Chemical Engineers with Microfluidics and CFD[END_REF].
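A minimal sketch of this worst-case estimate, using only the values quoted above (1 ml/min of a 20 cP PVP solution in a 5 mm × 100 µm channel, and the 8 µm undeformed cell diameter as the length scale):

Q = 1e-6 / 60          # flow rate: 1 ml/min in m^3/s
W, H = 5e-3, 100e-6    # channel width and height (m)
rho = 1e3              # density of the PVP solution (kg/m^3)
mu = 2e-2              # dynamic viscosity (Pa.s)
d_cell = 8e-6          # undeformed RBC diameter (m)

U_avg = Q / (W * H)    # average velocity
U_max = 1.5 * U_avg    # centerline velocity of the parabolic profile
Re = rho * U_max * d_cell / mu

print(f"U_avg = {U_avg*100:.2f} cm/s, U_max = {U_max*100:.2f} cm/s, Re = {Re:.3f}")
# U_avg = 3.33 cm/s, U_max = 5.00 cm/s, Re = 0.020 -> well within the laminar regime.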
Which gives Re ≈ 0.02, well below unity, so the flow around the cell is laminar. In a Newtonian fluid the shear stress τ is proportional to the velocity gradient (shear rate). For example, in an incompressible and isotropic Newtonian fluid, the viscous stress is related to the shear rate by
Equation 3: $\tau = \mu \dfrac{\partial u}{\partial y} = \mu \dot{\gamma}$
Where μ is the dynamic viscosity, u is the velocity and $\dot{\gamma}$ is the shear rate.
In Osmolality Gradient Ektacytometry, shear stress is kept constant while deformability is measured as a function of osmolality. Therefore, it is of prime importance to understand and compare the character and magnitude of shear stress in the various shearing geometries used in Ektacytometry. Since Couette cylinders and planar rotating disks produce similar shear driven flows (when measured for high radii), our main comparison will be between these two flows versus the pressure driven flow produced in a channel.
In order to find the shear stress equations for the different geometries we take the velocity profile obtained by developing the Stokes equation. We may then find the shear rate by differentiation. Finally we obtain the shear stress by multiplying the shear rate by the dynamic viscosity.
Cylindrical Couette flow
Cylindrical Couette flow is a flow between two coaxial cylinders, created by their relative rotation (Figure 26). The difference in rotation speed between the cylinders imposes a shear on the fluid in the gap. Assuming a unidirectional, laminar, fully developed flow with no slip at the walls, we look for the expression of the velocity distribution across the gap between the cylinders. With the internal cylinder at rest and the external cylinder rotating at an angular velocity ω, using cylindrical coordinates, we get the following expression for the tangential velocity at radius r [START_REF] Chassaing | Mécanique des fluides: PC-PSI[END_REF]:
Equation 4: $U_\theta(r) = \dfrac{\omega R_2^2}{R_2^2 - R_1^2}\left(r - \dfrac{R_1^2}{r}\right)$
And the shear is obtained by differentiation of Equation 4 with respect to r.
Equation 5: $\dfrac{dU_\theta(r)}{dr} = \dfrac{\omega R_2^2}{R_2^2 - R_1^2}\left(1 + \dfrac{R_1^2}{r^2}\right) = \dfrac{\omega R_2^2\,(r^2 + R_1^2)}{(R_2 - R_1)(R_2 + R_1)\, r^2}$
We substitute $R_2 - R_1$ by $\Delta R$.
Assuming very large values for $R_1$, $R_2$ and a very small $\Delta R$, we can replace $R_1 + R_2$ by $2R$. Since $R_1 \le r \le R_2$, we can approximate r by the constant R. Approximating the equation of shear (Equation 5) then gives Equation 6, which is the same as for plane Couette flow:
Equation 6: $\dot{\gamma} = \dfrac{\omega R}{\Delta R}$
We obtain flow conditions similar to that of two parallel plates with one wall stationary and the other moving at a constant velocity (plane Couette flow), with constant shear rate across the gap for a given rotational speed.
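To illustrate how good this constant-shear approximation is for a realistic geometry, the sketch below evaluates the exact shear-rate profile of Equation 5 across the gap, taking the Technicon-like dimensions quoted in the previous chapter (R1 = 25.35 mm, gap 0.5 mm) and an arbitrary example angular velocity:

omega = 10.0                 # angular velocity of the rotating outer cylinder (rad/s), example value
R1, gap = 25.35e-3, 0.5e-3   # inner radius and gap (m), Technicon-like dimensions
R2 = R1 + gap

def shear_rate_exact(r):
    """dU/dr from Equation 5 for cylindrical Couette flow (outer cylinder rotating)."""
    return omega * R2**2 / (R2**2 - R1**2) * (1.0 + R1**2 / r**2)

gamma_inner = shear_rate_exact(R1)
gamma_outer = shear_rate_exact(R2)
gamma_plane = omega * R2 / gap        # plane Couette approximation (Equation 6)

print(f"shear rate at inner wall: {gamma_inner:.1f} 1/s")
print(f"shear rate at outer wall: {gamma_outer:.1f} 1/s")
print(f"plane Couette value     : {gamma_plane:.1f} 1/s")
# The two wall values differ by only about 2% (of the order of gap/radius),
# which justifies treating the shear rate as constant across the gap.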
Hence, for a Newtonian fluid, the shear rate ($\dot{\gamma}$) and the shear stress ($\tau$) are constant across the gap between the cylinders:
Equation 7: $\tau = \mu\dot{\gamma} = \dfrac{\mu\,\omega R}{\Delta R}$
Planar rotational Couette flow (parallel rotating plates)
Unlike the cylindrical Couette geometry, where the velocity is identical at every point on the rotating cylinder surface, the velocity of a point on the rotating plate is a function of the radius r. This means that the velocity of a particle in the liquid between the plates is a function not only of its distance from the rotating plate but also of its distance from the axis of rotation. Thus the shear stress in the θ direction is a function of the velocity gradients in both the r and z directions.
Since measurements are taken at a radius much bigger than the gap dimensions (r >> z), the velocity gradient in the z direction (∂U/∂z) is dominant and the contribution to shear of the velocity gradient in the r direction (∂U/∂r) is negligible (see Figure 28). The centrifugal and gravity induced flows can be neglected for small Reynolds numbers. Hence we assume a laminar unidirectional flow.
We may calculate the velocity profile by integrating the Stokes equation twice subject to the following boundary conditions:
$U(r, 0) = 2\pi r N$ at z = 0, and $U(r, H) = 0$ at z = H
Where N is the number of revolutions per second. Hence,
Equation 8: $U(r, z) = 2\pi r N\left(1 - \dfrac{z}{H}\right)$
Since the diameter of the laser beam used in Ektacytometry is less than 1 mm, we can assume that, when the beam is directed parallel to the plates' axis at a large value of r, the velocity is constant in r over the beam cross-section. In this case we replace the variable r by the constant R and we obtain a linear velocity profile.
Equation 9: $U(z) = 2\pi R N\left(1 - \dfrac{z}{H}\right)$
Thus differentiating in order to obtain shear, gives constant shear and constant shear stress across the gap, as in the case of the Couette cylinders (Equation 6 and Equation 7 respectively)
Equation 10: $\dot{\gamma}_z = \dfrac{dU(z)}{dz} = -\dfrac{2\pi R N}{H}$
Equation 11: $\tau = -\dfrac{2\pi \mu R N}{H}$
The minus sign comes from the fact that velocity is decreasing with increasing z value due to the rotation of the plate closer to the origin of axis.
In the case of a cone and plate geometry H can be expressed as a linear function of the radius r. In this case r and H are eliminated from Equation 8and the velocity becomes linearly dependent only on z. Thus differentiation with respect to z gives a constant shear rate and shear stress throughout the sample (Equation 10, Equation 11).
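A small numerical illustration of the two rotating-plate formulas (Equation 11 for parallel plates, and the cone-plate simplification); the radius, speed, gap and cone angle below are example values, not the dimensions of any specific instrument:

import math

mu = 20e-3        # solution viscosity (Pa.s)
N = 2.0           # rotation speed (revolutions per second), example value
R = 25e-3         # measurement radius (m), example value
H = 0.5e-3        # gap between parallel plates at radius R (m), example value

# Parallel plates (Equation 11): shear stress magnitude at radius R
tau_plates = 2.0 * math.pi * mu * R * N / H

# Cone and plate: H(r) = r * tan(alpha), so r cancels and tau is uniform over the sample
alpha = math.radians(1.0)                 # cone angle, example value
tau_cone = 2.0 * math.pi * mu * N / math.tan(alpha)

print(f"parallel plates, tau at R: {tau_plates:.1f} Pa")
print(f"cone-plate, tau (uniform): {tau_cone:.1f} Pa")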
Plane-Poiseuille flow
Forced flow through a rectangular channel is called Hele-Shaw flow. When the flow depends only on one variable it is called plane-Poiseuille flow. In the flow channels which we use, the width is much larger than the height (W>>H) and measurements are taken far from the walls, in the center of the channel. Hence, we get flow conditions that approximate a plane-Poiseuille flow.
When a fluid enters a narrow channel from a wider tube the flow profile is not immediately parabolic. The length in which entrance effects are dominant in a flow channel for low Reynolds numbers can be approximated by 0.6 times the hydraulic diameter [START_REF] Shah | Laminar flow forced convection in ducts: a source book for compact heat exchanger analytical data[END_REF] . In a rectangular channel the hydraulic diameter is conventionally defined by
Equation 12: $D_H = \dfrac{2\,W H}{W + H}$
For a channel 5 mm wide and 200 µm high, the flow is not yet fully developed over an entrance length of 0.23 mm, and for a channel 5 mm wide and 100 µm high the entrance length is 0.12 mm. Thus our measurements should take place at a distance greater than these values from the entrance in order to have a fully developed laminar flow.
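A quick check of these entrance lengths, combining Equation 12 with the 0.6·D_H rule quoted above:

def entrance_length(width_m, height_m, factor=0.6):
    """Low-Reynolds entrance length ~ 0.6 x hydraulic diameter (Equation 12)."""
    d_h = 2.0 * width_m * height_m / (width_m + height_m)
    return factor * d_h

for H in (200e-6, 100e-6):
    L_e = entrance_length(5e-3, H)
    print(f"H = {H*1e6:.0f} um -> entrance length = {L_e*1e3:.2f} mm")
# H = 200 um -> 0.23 mm ; H = 100 um -> 0.12 mm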
For the sake of simplicity, we assume a unidirectional flow. However, particles in a suspension exposed to Poiseuille flow are subjected to shear induced migration from high to low shear rate regions. In a channel or tube this means that particles assume an angular trajectory away from the vessel walls, towards the center [START_REF] Karnis | The kinetics of flowing dispersions[END_REF]. The degree of alignment of RBC with the flow direction increases with increasing flow rate (and hence shear rate) (H.L. [START_REF] Goldsmith | Deformation of human red cells in tube flow[END_REF]. Several experiments and simulations were done in order to evaluate the scale of this migration of particles. One of these gives the scale length in which particles migrate to the center as
$\dfrac{H^3}{a^2}$
where H is the channel height and a is the particle diameter
[START_REF] Nott | Pressure-driven flow of suspensions: simulation and theory[END_REF]. For a RBC of 8 µm diameter in a channel of 200 µm height, this gives a length of 125 mm, while for a channel of 100 µm height it is only 15.6 mm.
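A quick check of this H³/a² migration length scale for the two channel heights and an 8 µm cell (the conclusion drawn from these numbers follows below):

def migration_length(channel_height_m, particle_diameter_m):
    """Scale length over which particles migrate toward the channel center: H^3 / a^2."""
    return channel_height_m**3 / particle_diameter_m**2

a = 8e-6
for H in (200e-6, 100e-6):
    print(f"H = {H*1e6:.0f} um -> migration length = {migration_length(H, a)*1e3:.1f} mm")
# H = 200 um -> 125.0 mm ; H = 100 um -> 15.6 mm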
Our channels are 50 mm long; hence we conclude that in the case of the 200 µm high channel the position of the measurement along the channel is not critical, as long as it is more than 0.23 mm from the entrance. However, in the case of the 100 µm high channel, the measurement should take place as close as possible to the channel entrance, so that the RBC distribution is not yet significantly altered, but not too close (more than 0.12 mm) in order to avoid entrance effects. We assume a Newtonian incompressible fluid whose viscosity µ is constant.
For a pressure drop ∆p acting over a channel of length L we obtain a parabolic velocity profile. We assume a fully developed steady flow, where the pressure gradient along the channel is constant and equal to ∆p/L. Assuming H/W = 0 (W goes to infinity) and boundary conditions of no slip at the walls (U = 0), we get the parabolic velocity profile by integrating the Stokes equation twice. We thus obtain the classical solution for plane Poiseuille flow, namely
Equation 13: $U_Z(y) = U_0\left[1 - \left(\dfrac{2y}{H}\right)^2\right]$
Where 0 U is the maximum velocity, at the mid-plane:
$U_0 = \dfrac{\Delta P\, H^2}{8 \mu L}$
The average velocity is $U_{avg} = \dfrac{2}{3} U_0 = \dfrac{\Delta P\, H^2}{12 \mu L}$, so the volumetric flow rate through a channel of width W is
Equation 15: $Q = U_{avg}\, W H = \dfrac{W H^3\, \Delta P}{12 \mu L}$
and the maximum (centerline) velocity is
Equation 16: $U_{max} = U_0 = \dfrac{3}{2}\, U_{avg}$
The shear stress profile is
Equation 19: $\tau(y) = \mu\,\dot{\gamma}(y) = \dfrac{\Delta P}{L}\, y$
We obtain a linear shear stress, with zero shear stress at the center of the channel and maximum shear stress at the walls (Figure 30).
The shear stress at the wall is
Equation 20: $\tau_W = \dfrac{6 \mu Q}{W H^2}$
Substituting Q from Equation 15 gives
Equation 21: $\tau_W = \dfrac{H\,\Delta P}{2 L}$
and the average shear stress over the channel height is
Equation 22: $\tau_{avg} = \dfrac{H\,\Delta P}{4 L}$
Since shear stress is a viscous force, it may seem strange at first glance that we have produced flow conditions in which the shear stress is independent of the fluid viscosity (right-hand sides of Equation 19, Equation 21 and Equation 22). However, since shear stress is proportional not only to viscosity but to shear rate as well, increasing the viscosity of the solution while maintaining a constant pressure across the channel results in a decrease of the flow rate, and thus of the shear rate, by the same proportion, keeping the shear stress constant (the solution is Newtonian, so its viscosity is constant with respect to variations of shear; the increase of viscosity is due either to replacement of the solution or to temperature variation).
In practice, a reduction in viscosity results in a reduction of pressure across the channel. This reduction is measured by the pressure sensor and our pressure regulation system increases the pumping (flow rate) by the same rate in order to maintain constant pressure. This results in a constant shear stress.
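The sketch below illustrates this point for the channel of the earlier example (W = 5 mm, H = 100 µm, and the 50 mm length quoted for our channels): the wall shear stress computed from the flow rate and viscosity (Equation 20) coincides with the one computed from the pressure drop alone (Equation 21), and doubling the viscosity at constant pressure leaves it unchanged. The flow rate is the 1 ml/min example used earlier; the pressure drop follows from it.

W, H, L = 5e-3, 100e-6, 50e-3     # channel width, height, length (m)

def wall_shear_from_flow(Q, mu):
    return 6.0 * mu * Q / (W * H**2)          # Equation 20

def wall_shear_from_pressure(dP):
    return H * dP / (2.0 * L)                 # Equation 21 (viscosity does not appear)

def flow_from_pressure(dP, mu):
    return W * H**3 * dP / (12.0 * mu * L)    # Equation 15

mu1, Q1 = 20e-3, 1e-6 / 60                    # 20 cP, 1 ml/min
dP = 12.0 * mu1 * L * Q1 / (W * H**3)         # pressure drop giving that flow

print(f"tau_w from Q and mu : {wall_shear_from_flow(Q1, mu1):.1f} Pa")
print(f"tau_w from dP alone : {wall_shear_from_pressure(dP):.1f} Pa")

mu2 = 2.0 * mu1                               # viscosity doubles, pressure held constant
Q2 = flow_from_pressure(dP, mu2)              # flow rate halves...
print(f"tau_w after doubling mu at constant dP: {wall_shear_from_flow(Q2, mu2):.1f} Pa")
# All three values are identical (40 Pa for these inputs).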
In real life, rectangular channels always have an aspect ratio H/W greater than zero.
In our case we use channels with W=5 mm and H=0.2 mm or H=0.1 mm. We can check the precision of the approximated formulas we used, in both cases.
We examine the validity of the approximation assuming an aspect ratio H/W = 0, used in Equation 15 to Equation 22. The solution to the Stokes equation should be modified to take into account the rectangular channel cross-section with the H/W aspect ratios we actually use (Figure 31). There is an exact solution to the Stokes equation [START_REF] Cornish | Flow in a Pipe of Rectangular Cross-Section[END_REF], and a correction factor f was calculated and tabulated (Son, 2007):
Equation 23: $\tau_w = \dfrac{6 \mu Q}{W H^2}\cdot\dfrac{1}{f\!\left(\frac{H}{W}\right)}$
For our channel aspect ratios H/W of 0.02 and 0.04, we choose the closest values of f from the table [START_REF] Son | Determination of shear viscosity and shear rate from pressure drop and flow rate relationship in a rectangular channel[END_REF] and obtain flow rates close to the ones we calculated: Q = 3.13 ml/min compared to 3.24 ml/min in the first case, and 0.794 ml/min compared to 0.8 ml/min in the second. The differences are due to the manufacturer's correction for the non-zero H/W aspect ratio. As expected, the difference increases from 0.006 to 0.11 as the H/W ratio increases from 0.02 to 0.04.
Conclusions
Table 2 shows a summary of the compared features of the two main flows used in Osmotic Scan Ektacytometry, while Table 3 gives a comparison of the formulas developed above. The fact that the shear stress, and hence the measured deformability of RBCs, in a flow channel is not affected by small variations in solution viscosity presents an important advantage over Couette cylinders and rotating plates. In Couette cylinders and rotating plates, variation in solution viscosity is an important source of error. This variation is not due to variation of shear (the solution is Newtonian) but rather to temperature variation, and also to the fact that the characteristics of PVP solutes vary not only between suppliers but also between batches from the same supplier. For instance, Sigma-Aldrich, the producer of the PVP (average mol wt 360) widely used in Ektacytometry, gives in the product specification sheet a K value range of 80 to 100 (the K value is an index of the polymer molecular weight derived from relative viscosity measurements), so the viscosity of nominally identical solutions can differ appreciably. While some studies suggest that the diffraction pattern in Couette cylinders could give information regarding the distribution of deformability of a population of RBCs [START_REF] Dobbe | Measurement of the distribution of red blood cell deformability using an automated rheoscope[END_REF] [START_REF] Streekstra | Quantification of the fraction poorly deformable red blood cells using ektacytometry[END_REF], this would be more difficult to achieve in a flow channel due to the non-constant shear stress field.
Table 3. Comparison of the formulas for the two flows (Couette flow with rotating surface at radius R, angular velocity ω and gap ΔR, in the narrow-gap approximation; plane Poiseuille flow in a channel of height H and length L under a pressure drop ΔP, with y measured from the mid-plane).
Velocity profile: Couette $U_\theta(r) = \dfrac{\omega R_2^2}{R_2^2 - R_1^2}\left(r - \dfrac{R_1^2}{r}\right)$ ; Poiseuille $U_Z(y) = \dfrac{\Delta P\, H^2}{8 \mu L}\left[1 - \left(\dfrac{2y}{H}\right)^2\right]$
Average velocity: Couette $U_{avg} = \dfrac{\omega R}{2}$ ; Poiseuille $U_{avg} = \dfrac{\Delta P\, H^2}{12 \mu L}$
Maximum velocity: Couette $U_{max} = \omega R$ ; Poiseuille $U_{max} = \dfrac{\Delta P\, H^2}{8 \mu L}$
Shear rate profile: Couette $\dot\gamma = \dfrac{\omega R}{\Delta R}$ (constant) ; Poiseuille $\dot\gamma(y) = \dfrac{\Delta P}{\mu L}\, y$
Shear stress profile: Couette $\tau = \dfrac{\mu \omega R}{\Delta R}$ (constant) ; Poiseuille $\tau(y) = \dfrac{\Delta P}{L}\, y$
Shear stress at the walls: Couette $\tau_W = \dfrac{\mu \omega R}{\Delta R}$ ; Poiseuille $\tau_W = \dfrac{H\, \Delta P}{2 L}$
Average shear stress: Couette $\tau_{avg} = \dfrac{\mu \omega R}{\Delta R}$ ; Poiseuille $\tau_{avg} = \dfrac{H\, \Delta P}{4 L}$
COMPARISON BETWEEN A CAMERA AND A FOUR QUADRANT DETECTOR, IN THE MEASUREMENT OF RED BLOOD CELL DEFORMABILITY AS A FUNCTION OF OSMOLALITY
Introduction
In this study we compared two measurement and calculation methods of average RBC deformability, both derived from the laser diffraction pattern. One method used a four quadrant silicon photodiode; the other used a CCD camera. Two distinct methods are used for the calculation of deformability from measured data.
Theoretical background
Above a certain shear stress level, the flow of RBCs mixed in a highly diluted viscous solution is laminar and they are oriented in a plane perpendicular to the shear gradient [START_REF] Bayer | Discrimination between orientation and elongation of RBC in laminar flow by means of laser diffraction[END_REF]. Their morphology changes with increasing shear from biconcave to tri-axial ellipsoid. Consequently, the diffraction pattern, created by the laser beam traversing the RBCs changes from a circle to an ellipse with the same
eccentricity, but its major axis is oriented perpendicularly to the direction of flow [START_REF] Zahalak | Fraunhofer diffraction pattern of an oriented monodisperse system of prolate ellipsoids[END_REF] (T. [START_REF] Fischer | Tank tread motion of red cell membranes in viscometric flow: behavior of intracellular and extracellular markers[END_REF]. This 90° rotation of the diffraction pattern is due to the inverse correlation of size between the diffracted image and the exposed cells. The minor axis of the image corresponds to the major axis of the exposed cells and the major axis of the image corresponds to the minor axis of the exposed cells.
In ektacytometry, two main modes are used to characterize RBC deformability: In the first the cells are mixed with a high viscosity solution of normal physiological osmolality (for human plasma 290 mOsm/Kg) then increasing shear stress is applied to the mixture. The diffraction pattern therefore shows deformability as a function of shear stress at physiologic tonicity. In the second mode the shear stress is kept constant, but RBCs are mixed in a medium where osmolality is increased gradually. The diffraction pattern then shows deformability as a function of osmolality. This mode is called "osmotic gradient ektacytometry" or "Osmoscan", and provides information on membrane stiffness, intracellular viscosity and surfacearea-to-volume ratio [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF]. Both approaches have been used with different instrumentation and analysis methods of the diffraction pattern [START_REF] Mohandas | Analysis of factors regulating erythrocyte deformability[END_REF]) [START_REF] Shin | Validation and application of a microfluidic ektacytometer (RheoScan-D) in measuring erythrocyte deformability[END_REF] [START_REF] Hardeman | Laser-assisted optical rotational cell analyser (L.O.R.C.A.). I: A new instrument for measurement of various structural hemorheological parameters[END_REF], and published reports have compared methods of the deformability measurements under changing shear stress [START_REF] Shin | Validation and application of a microfluidic ektacytometer (RheoScan-D) in measuring erythrocyte deformability[END_REF] [START_REF] Hardeman | Laser-assisted optical rotational cell analyser (L.O.R.C.A.). I: A new instrument for measurement of various structural hemorheological parameters[END_REF][START_REF] Oguz K Baskurt | Comparison of three commercially available ektacytometers with different shearing geometries[END_REF]) [START_REF] Wang | Measurement of erythrocyte deformability by two laser diffraction methods[END_REF]. Both image analysis of the diffraction pattern and simple intensity measurement at discrete spots in the diffraction pattern have been used.
Comparison of these different methods of analysis of the diffraction pattern, between different instrumentation, while changing osmolality is more complex as several factors affect the final result. Both the rate of osmolality change and the presence of hemolyzed cells in the population, as well as different shape changes as the result of water uptake or release may affect the diffraction pattern. In order to address this, and allow direct comparison of the two ways to measure the diffraction pattern, we designed a measuring device and method to measure the same cells simultaneously by the two methods. The approach uses the same laser source, same
optics, same viscometer, same osmolality measurement sensor, same hydraulic system and gradient maker and the same solutions to measure the same cells at the same time. This eliminates virtually all potential errors that could result from differences between two similar but yet different samples measured on two different instruments.
Experimental apparatus and methods
Several studies use a camera and image analysis in ektacytometry [START_REF] Bayer | Discrimination between orientation and elongation of RBC in laminar flow by means of laser diffraction[END_REF]) (Shin et al., 2005b) and there exist two such commercial apparatus: Lorca [START_REF] Hardeman | Laser-assisted optical rotational cell analyser (L.O.R.C.A.). I: A new instrument for measurement of various structural hemorheological parameters[END_REF] and RheoScan-D300 [START_REF] Shin | Validation and application of a microfluidic ektacytometer (RheoScan-D) in measuring erythrocyte deformability[END_REF]. Our design employs a custom apparatus we built, based on a Technicon ektacytometer [START_REF] Mp | The Technicon Ektacytometer: automated exploration of erythrocyte function[END_REF].
Originally, RBC deformability was measured by projecting the diffracted laser beam on a mask with four equidistant holes behind which a four quadrant silicon detector was placed [START_REF] Groner | New optical technique for measuring erythrocyte deformability with the ektacytometer[END_REF]. In order to compare image analysis with the deformation computed using the simpler four quadrant detector, we designed a setup that permitted simultaneous measurement of the same sample by the two methods consisting of a beam splitter (Thorlabs) placed in the path of the postdiffraction laser beam splitting the diffraction image into two identical intensity beams, 90° apart. The design used the optical bench and the transparent Couette viscometer of an ektacytometer with a 632.8 nm helium-neon laser source of 1 mW (Lasos, Germany). An ARM microcontroller board (Embedded Artists LPC2148)
provided the interface to measure osmolality and temperature and to control the speed of the viscometer and the plumbing needed to create the osmotic gradient. A mixture of whole blood was introduced into the viscometer at a hematocrit of approximately 0.08% in phosphate buffered (pH 7.35) Polyvinylpyrrolidone (PVP, Sigma, St Louis) at 0.2 poise viscosity. The tonicity of the mixture was varied between 50 and 600 mOsmol/kg with a NaCl gradient, and cells were exposed to a constant shear stress of 159.3 dyn/cm² (15.93 Pa).
One image of the diffraction pattern was projected by the beam splitter on a mask with four holes, behind which was situated the four-quadrant detector. The other diffraction pattern was projected on a translucent screen, and behind it was placed a CCD camera. Data from both measurements were recorded by a computer equipped with an interface that allowed image analysis as well as signal detection of the quadrant detector.
The elongation index (EI), a measure of RBC mean deformability, was calculated from the signals of the four quadrant photodiodes measuring the projected diffraction pattern (Figure 32), using Equation 24where A and B are the signals on the long and short axis of the ellipsoid respectively. The use of two signals on each axis (A1, A2 and B1, B2) provides an average and compensates for slight difference in centering the beam on the mask.
Equation 24: $EI = \dfrac{(A_1 + A_2) - (B_1 + B_2)}{(A_1 + A_2) + (B_1 + B_2)}$
In the case of the camera, we used an image analysis algorithm (modified PINK library functions [START_REF] Couprie | Pink image processing library[END_REF]) to determine the elongation index by fitting the image, using iso-intensity curves, to an ellipsoid, determining the lengths of the short and long axes and computing EI = (L - S)/(L + S), as indicated in Figure 32. Both calculations of EI were performed simultaneously by a custom computer program, as indicated in Figure 34, and plotted as EI versus osmolality.
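A minimal sketch of the two EI computations (Equation 24 for the quadrant detector, and the axis-ratio formula used for the fitted ellipse); the intensity and axis values are arbitrary illustration numbers, not measured data:

def ei_quadrant(A1, A2, B1, B2):
    """Equation 24: EI from the four hole intensities (A: long axis, B: short axis)."""
    A, B = A1 + A2, B1 + B2
    return (A - B) / (A + B)

def ei_ellipse(L, S):
    """EI from the long (L) and short (S) axes of the fitted diffraction ellipse."""
    return (L - S) / (L + S)

# Arbitrary example values
print(round(ei_quadrant(1.30, 1.26, 0.70, 0.74), 3))   # ~0.28
print(round(ei_ellipse(190.0, 110.0), 3))              # ~0.267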
The EI determined by the image processing (bottom curve) could be directly compared to the one determined by the quadrant detector (top curve).
Results and discussion
Comparison of the two curves clearly showed that they closely track each other in shape (Figure 36). While the absolute value of the deformability index is different at isotonicity, the osmolality of the minimum (LP), the osmolality of the maximum (MP), and the decrease at higher osmolality are very similar using either method.
The difference in amplitude can be explained by the fact that the diffraction pattern is not linear and the light intensity ratio between the vertical and horizontal holes is higher than the ratio of the major to minor axis lengths of the diffracted ellipse. Importantly, regardless of the method used, we find a minimum around 150 mOsmol, which has been shown to correlate with the osmolality at which approximately 50% of the RBC have hemolyzed [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF], a maximum deformability around 290 mOsmol, and a sharp drop in deformation when the cell loses water at hyper-osmolalities.
Different samples from control individuals show slightly different results in LP, MP,
IP and HP based on the individual characteristics of the donors. These shifts are very similar with either detection method. Similarly it can be expected that differences between control and patient samples show the same trends using either of the two methods. A direct comparison of patient samples on the same machine was not performed, but a study on a family with hereditary Elliptocytosis using either the camera based LORRCA MaxSis (Mechatronics, Hoorn) or the Quadrant diode based ektacytometer showed similar results [START_REF] Franck | A Family with Hereditary Elliptocytosis: Variable Clinical Severity Caused by Three Mutations in the α-Spectrin Gene[END_REF]. Together these results
indicate that either detection method properly identifies the change of RBC deformability over a large range of osmolalities and that neither method can be identified as preferable to the other based on the final result.
Conclusions
Osmotic gradient ektacytometry is a very complex but valuable tool for the diagnosis of several (hereditary) RBC disorders. Our experiments lead us to conclude that either image analysis or the use of a quadrant detector result in clinically usable interpretation of RBC deformability.
Regardless of the analysis used, proper care of solution viscosity, temperature, pH, osmolality, and oxygenation is essential. However starting with the same samples our results show that the use of a very simple detection of intensity at discrete points of the diffraction pattern renders similar results as compared to a more complex and sophisticated analysis of the image.
We cannot exclude the possibility that changes in the shape of the diffraction pattern under different conditions will add more information. However, in most cases the simple intensity-based measurement appears to provide sufficient information for clinical interpretation.
MODELLING AND MATHEMATICAL ANALYSIS
Modeling using Matlab/ Simulink
Our aim is to measure deformability as a function of osmolality, with constant shear stress. For this purpose we maintain a constant flow rate in the flow cell, and mix blood with a solution of gradually increasing osmolality.
All solutions have the same viscosity and temperature. Our hydraulic system consists of a solution preparation unit (stir tank), the sample mixing stage and the flow cell (Figure 37) .
Let's examine first the system behavior, ignoring for the moment the flow control mechanism.
In order to get a flow with increasing solution osmolality we use a stir tank filled initially to a volume V0 with low osmolality of C0.
A solution of constant high osmolality Ch flows into the stir tank at a flow rate f1, increasing gradually the osmolality of the tank solution (permanently stirred).
Simultaneously, the solution flows out of the stir tank at a flow rate f2. Since the outflow rate is higher than the inflow rate (f2 > f1), the stir tank will empty gradually.
The constant high osmolality inflow is mixed into a decreasing volume and therefore increases the outflow osmolality. The solution flowing out of the stir tank is mixed with the blood sample, diluted in an isotonic solution of osmolality Ci, flowing at a rate f3.
Finally we obtain the blood mixed in a solution of Cc osmolality flowing into the flow cell at a rate f4 = f2+ f3.
Let V be the volume of solution in the stir tank.
Let y be the osmolality of the solution in the stir tank.
Let f1, f2, f3 and f4 be flow rates as shown in Figure 37.
All the above values are always positive.
Since our aim is to (nearly) empty the solution in the stir tank at the end of the experiment, the outflow is always higher than the inflow: $f_2 > f_1$.
The mass conservation equation in the stir tank may be written as
Equation 25: $\dfrac{dV}{dt} = f_1 - f_2$
In a similar manner the solution osmolality equation is
Equation 26: $\dfrac{d(yV)}{dt} = c_h f_1 - y f_2$
Developing the left-hand side gives
$y\dfrac{dV}{dt} + V\dfrac{dy}{dt} = c_h f_1 - y f_2$
Substituting for $\dfrac{dV}{dt}$ from Equation 25 gives
$y (f_1 - f_2) + V\dfrac{dy}{dt} = c_h f_1 - y f_2$
or
$V\dfrac{dy}{dt} = c_h f_1 - y f_1 = (c_h - y)\, f_1$
Finally
Equation 27: $\dfrac{dy}{dt} = \dfrac{(c_h - y)\, f_1}{V}$
Equation 25 and Equation 27 describe the behavior of the stir tank.
Compute now the flow rate f4 through the flow cell.
Equation 28: $f_4 = f_2 + f_3$
And the osmolality Cc in the flow cell
Equation 29: $C_c = \dfrac{y f_2 + C_i f_3}{f_2 + f_3}$
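As a cross-check of Equations 25 to 29, the following sketch integrates the stir-tank equations numerically with a simple Euler step (the thesis uses a Matlab/Simulink model for this purpose); all volumes, flow rates and osmolalities below are example values only:

def simulate(C0=30.0, Ch=600.0, Ci=290.0, V0=40.0, f1=2.0, f2=4.0, f3=1.0,
             dt=0.005, t_end=15.0):
    """Euler integration of Equations 25-29. Volumes in ml, flows in ml/min,
    osmolalities in mOsm/kg, time in minutes. Example values only."""
    V, y, t, out = V0, C0, 0.0, []
    while t < t_end and V > (f2 - f1) * dt:
        dV = (f1 - f2) * dt                      # Equation 25
        dy = (Ch - y) * f1 / V * dt              # Equation 27
        V, y, t = V + dV, y + dy, t + dt
        Cc = (y * f2 + Ci * f3) / (f2 + f3)      # Equations 28-29
        out.append((t, y, Cc))
    return out

for t, y, Cc in simulate()[::600]:
    print(f"t = {t:5.1f} min   stir tank y = {y:6.1f}   flow cell Cc = {Cc:6.1f}")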
A Matlab Simulink model lets us calculate the necessary flow rates in order to achieve a regular osmolality scan over time without running out of solution in the stir tank. The model is shown in Figure 38. In order to get a better understanding of the influence of the f1 and f2 parameters on the osmolality curve, we develop Equation 27 further, so that we can find the solution of the differential equation and see how it depends on the various parameters.
Assuming that the flow rates are constant throughout the experiment, we get
Equation 30: $V(t) = V_0 - (f_2 - f_1)\, t$
Substituting V in Equation 27 gives
Equation 31: $\dfrac{dy}{dt} = \dfrac{(c_h - y)\, f_1}{V_0 - (f_2 - f_1)\, t}$
This equation is of the form of a first order linear ordinary differential equation.
We now find the general solution of this equation with the help of Mathematica.
We get:
Equation 32: $Y(t) = \big((f_1 - f_2)t + V_0\big)^{-\frac{f_1}{f_1 - f_2}}\left[C_h\big((f_1 - f_2)t + V_0\big)^{\frac{f_1}{f_1 - f_2}} - (C_h - C_0)\, V_0^{\frac{f_1}{f_1 - f_2}}\right]$
Simplifying gives
Equation 33: $Y(t) = C_h + (C_0 - C_h)\left(\dfrac{V_0 - (f_2 - f_1)t}{V_0}\right)^{\frac{f_1}{f_2 - f_1}}$
Putting the time dependent component in the denominator gives
Equation 34: $Y(t) = C_h + \dfrac{C_0 - C_h}{\left(\dfrac{V_0}{V_0 - (f_2 - f_1)t}\right)^{\frac{f_1}{f_2 - f_1}}}$
From Equation 34 we see that the exponent $f_1/(f_2 - f_1)$ determines whether the solution is linear or not.
If the exponent equals 1 ($f_2 = 2 f_1$) then we get a linear progression of the osmolality y (Figure 39B).
If the exponent is greater than 1 ($f_1 < f_2 < 2 f_1$) then the curve is concave (Figure 39A).
If the exponent is lower than one ($f_2 > 2 f_1$) then the curve is convex (Figure 39C).
Simplifying further Equation 34 for the linear case we obtain
Equation 35: $Y(t)\big|_{f_2 = 2 f_1} = (C_h - C_0)\dfrac{f_1}{V_0}\, t + C_0$
We could actually get the results in Equation 35, in a simpler manner from Equation 31without finding the general solution to the differential equation:
In order for Y(t) to be linear, $\dfrac{dy}{dt}$ should be constant. We call this constant k and we get from Equation 31
Equation 36: $k = \dfrac{(C_h - y)\, f_1}{V_0 - (f_2 - f_1)\, t}$
And since Y(t) is linear
Equation 37: $y = C_0 + k t$
Substituting y in Equation 36 gives
Equation 38: $k = \dfrac{(C_h - C_0 - k t)\, f_1}{V_0 - (f_2 - f_1)\, t}$
Isolating k gives
Equation 39: $k = \dfrac{(C_h - C_0)\, f_1}{V_0 - (f_2 - 2 f_1)\, t}$
We see from Equation 39 that in order for k to be constant, the same condition for linearity that we found before, $f_2 = 2 f_1$, must be respected, and thus we get the same result as in Equation 35.
Equation 40: $Y(t) = (C_h - C_0)\dfrac{f_1}{V_0}\, t + C_0$
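A quick numerical check of this criterion, evaluating the closed-form solution of Equation 34 for three flow-rate ratios (the values of V0, C0, Ch and f1 are examples only):

def y_analytic(t, f1, f2, V0=40.0, C0=30.0, Ch=600.0):
    """Equation 34: stir-tank osmolality at time t for constant flow rates (f2 > f1)."""
    exponent = f1 / (f2 - f1)
    return Ch + (C0 - Ch) * ((V0 - (f2 - f1) * t) / V0) ** exponent

f1 = 2.0
for f2, label in ((6.0, "f2 = 3*f1   (exponent < 1, convex) "),
                  (4.0, "f2 = 2*f1   (exponent = 1, linear) "),
                  (3.0, "f2 = 1.5*f1 (exponent > 1, concave)")):
    ys = [y_analytic(t, f1, f2) for t in (0.0, 1.5, 3.0, 4.5)]
    steps = [round(b - a, 1) for a, b in zip(ys, ys[1:])]
    print(label, "increments per 1.5 min:", steps)
# Only f2 = 2*f1 gives equal increments, i.e. a linear osmolality ramp.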
Practical considerations
In our design we have several imposed restrictions and guidelines for choosing the right parameters.
The range of osmolality gradient is determined by biomedical needs and is in the range of 60 to 500 mOsm/kg.
For efficiency reasons the time of the scan should be as short as possible, but not too short: the blood cells should be exposed long enough to a given osmolality to complete either swelling or shrinking as water crosses the membrane. This equilibration time, together with the flow rate, sets the tube length needed between the mixing point and the flow cell (see chapter 6). For a given viscosity and flow cell, the flow rate f4 can then be calculated to achieve a shear stress that properly deforms the blood cells.
Calculations for this flow rate are found in the example below as well as in chapter 6.
f3 is calculated in relation to f4 in order to obtain the desired hematocrit (red-cell volume relative to total volume). The desired hematocrit in the flow cell is determined by the flow-cell geometry and the laser intensity (see chapter 6). Since the hematocrit in the flow cell is only a fraction of 1% and we want a homogeneous suspension, f3 should not be negligibly small with respect to f4. Hence whole blood (hematocrit of approximately 40%) is premixed 1 to 40 with an isotonic solution.
The initial volume of solution in the stir tank is chosen in a way that the tank is nearly empty at the end of the test, so no solution is wasted.
f2 is determined from f3 and f4 by Equation 28.
Finally, f1 is chosen to be half of f2 in order to obtain a linear progression of osmolality. Since the actual osmolality is measured by a conductivity sensor, a nonlinear increase in osmolality does not introduce measurement errors. A close-to-linear rise in osmolality is nevertheless preferred: it gives a roughly equal resolution of measurement points on the deformability vs. osmolality curve (since the acquisition rate is constant), and it makes it easier to predict the quantities of solution needed for the experiment, so as to avoid running out of solution or wasting leftovers.
Example
Initial conditions are: Ch = 600 mOsm/kg and C0 = 30 mOsm/kg. Further numerical examples can be found in the experimental section in chapter 6.
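To make the procedure concrete, the following Python sketch walks through one possible parameter choice. It is illustrative only: it assumes that Equation 28 simply expresses conservation of flow at the mixing T (f4 = f2 + f3), takes f4 ≈ 3.1 ml/min and V0 = 11 ml as plausible values consistent with the tube-length calculation quoted elsewhere in the thesis, and uses the 1:40 premix (≈1% hematocrit) together with a target flow-cell hematocrit of about 0.4%.

```python
# Hedged numerical sketch of the flow-rate selection (illustrative, not the
# thesis software). Assumption: Equation 28 is flow conservation at the T,
# f4 = f2 + f3. The values of f4, V0, Ch and C0 follow the in-text example.

Ch, C0 = 600.0, 30.0             # mOsm/kg: hypertonic feed, initial tank osmolality
f4 = 3.1                         # ml/min: total flow through the flow cell (sets the shear)
hct_premix, hct_cell = 1.0, 0.4  # %: premixed sample (1:40) and target flow-cell hematocrit
f3 = f4 * hct_cell / hct_premix  # ml/min: premixed-sample flow
f2 = f4 - f3                     # ml/min: stir-tank outflow (assumed Equation 28)
f1 = f2 / 2.0                    # ml/min: hypertonic inflow -> linear osmolality ramp

V0 = 11.0                        # ml: initial stir-tank volume
slope = (Ch - C0) * f1 / V0      # mOsm/kg per minute, from Equation 40
t_scan = (500.0 - C0) / slope    # minutes to sweep up to ~500 mOsm/kg
v_left = V0 - f1 * t_scan        # tank volume left (f2 - f1 = f1 in the linear case)
print(f"f1={f1:.2f}, f2={f2:.2f}, f3={f3:.2f}, f4={f4:.2f} ml/min")
print(f"scan time ~ {t_scan:.1f} min, residual tank volume ~ {v_left:.1f} ml")
```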
Conclusions
In osmotic scan ektacytometry, the deformability of RBCs is measured as a function of varying osmolality. In this chapter we presented an osmolality gradient generator based on a stir tank. Finding the necessary flow-rate values experimentally, in order to obtain a close-to-linear increase of osmolality, is very time consuming. Moreover, each time the flow rates change, new experiments would be needed to establish the right values. Using both modelling (Matlab/Simulink) and mathematical analysis (Mathematica), we found that the relation between flow rates that produces a linearly evolving osmolality is f_2 = 2 f_1. We demonstrated our findings with a numerical example. Moreover, we showed how to calculate, from the flow rate, the tube length necessary for RBC osmotic equilibration.
DESIGN AND PROOF OF PRINCIPLE
Introduction
The difference between the shear stress distributions experienced by an RBC population flowing under shear-driven and pressure-driven flows was explained in chapter 3. My aim in this chapter is to produce an experimental proof of concept in order to validate the possible clinical use of a pressure-driven microfluidic osmolality gradient Ektacytometer. To this end a microfluidic Ektacytometer prototype was designed and built, and the experimental results are compared with those obtained on a blood sample of the same volunteer by the Technicon (shear-driven) Ektacytometer.
This section presents the different stages of design of the microfluidic osmolality gradient Ektacytometer (FloDif) and some initial experimental results providing evidence for a proof of principle.
Process and venue used for testing
To establish the proof of principle of the theoretical framework presented in the previous chapters, I partnered with the Red Blood Cell laboratory at Children's Hospital Oakland Research Institute (CHORI), Oakland, California.
The CHORI laboratory was useful for the following reasons:
1. Availability of blood was restricted at my institute, but blood was widely available at CHORI, where IRB approval was obtained for our protocol.
2. The Red Cell Lab at CHORI is well respected, and the credibility of data collected there was expected to stand scientific scrutiny (http://www.rbclab.com/).
3. Dr Frans Kuypers, The director of the lab, is an expert in the measurement of RBC deformability. His previous collaborations with INSERM allowed me to be able to set up this collaboration, and provided me with access to the laboratory in Oakland to test both hardware and software components that were built in Paris.
4. Dr Kuypers has been involved in the maintenance and repair of virtually all of the original 10 Couette viscometers that were built as engineering prototypes by Technicon, based on the original design of Bessis and Mohandas in the 1980's [START_REF] Bessis | Automated ektacytometry: a new method of measuring red cell deformability and red cell indices[END_REF][START_REF] Bessis | Diffractometric Method for Measurement of Cellular Deformability[END_REF].
5. His lab is one of the few labs in the world that is able to provide the measurement of RBC deformability as a clinical service.
In sections 6.2.1 and 6.2.2 two designs are described:
The first is a complete instrument with software-driven automated plumbing, intended to show that a small bench instrument could be built for automated measurement of small amounts of collected blood. While the basic automation was successful, the final measurement of the deformability curves was hampered by noise in the data caused by the pumps available for testing. No budget was available for optimized replacement pumps, but this phase set the stage for a proposed design of a small-footprint automated FloDif for the clinical lab.
The second design, shown in section 6.2.2, is a simplified version of the first. It does not include complete automation, but it still allowed measurement of samples and comparison with the Technicon Ektacytometer on the same blood samples.
The experimental results (section 6.3.4) obtained with the second design show that the basic principles described in the previous chapters can be applied to a practical set-up that allows commercialization of a small, efficient and cost-effective instrument for measuring the deformability of RBCs in small blood samples. This potential instrument can be built on well-accepted principles and components so as to gain regulatory approval. This is important, as the final design provides opportunities for this novel instrument to be used in both research and clinical lab settings.
Hydraulic system
The hydraulic system consists of the following units: the flow cell, where measurements take place; the gradient generator, where the osmolality-increasing solution is prepared; the equilibration unit, where RBCs are mixed with the solution and allowed to equilibrate their internal osmotic pressure; and the flow generation units (air pressure unit and syringe pumps) (see Figure 44).
Initial design
Initially, the design contained quite a complex hydraulic system. This system was simplified at a later stage by discarding syringe pump B and valve B (as seen in Figure 42), which also improved pressure stability in the flow cell. A picture of the initial hydraulic system is shown in Figure 41, and a schematic description of the initial experimental setup with its five states of operation is shown in Figure 42.
Flow direction is indicated by red arrows; an open valve position is shown in green and a closed position in red. Table 4 shows the settings of the valves and the pump direction (push or pull) as they are applied in states 1 to 5. The following schematics show the five states of operation of one experiment. By changing only the valve settings, the syringe flow rate and the flow direction, the process avoids opening any containers or plumbing, thus preventing pressure leaks. By replacing the manual valves with three-way solenoid valves the system can be automated.
The gas pressure in 5 is tightly maintained by setting the pressure regulator 4 properly.
5: Airtight vessel.
6: Syringe 2 drops high-osmolality solution into the low-osmolality solution present in 5, which is mixed by the magnetic stirrer. The gas pressure in 5 pushes the fluid towards 7. Using syringe 2 ensures that the hypertonic solution is added at the same rate, independent of the pressure in 5. We also calculate the length of tube of 1.6 mm ID necessary to equilibrate the cells to the osmolality variations, between the T and the channel (see Figure 43).
We use 20 seconds as the reference time for osmotic cellular equilibration (the same value as in the Technicon Ektacytometer). The necessary tube length is calculated from the flow-rate column of Table 5.
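As a quick illustration of this calculation (not the thesis software), the sketch below computes the tube length that holds the fluid for the 20 s equilibration time at an assumed working flow rate of about 3.1 ml/min, the value used in the thesis' example calculation, for a 1.6 mm inner-diameter tube.

```python
# Illustrative check of the equilibration-tube length: the tube between the
# mixing T and the flow cell must hold the fluid for ~20 s at the working flow.
import math

flow_ml_min = 3.1          # ml/min through the tube (= f4), assumed example value
t_equil_s = 20.0           # s, reference osmotic equilibration time
radius_cm = 0.16 / 2       # cm, for a 1.6 mm inner-diameter tube

volume_needed = flow_ml_min * t_equil_s / 60.0          # cm^3 held during 20 s
length_cm = volume_needed / (math.pi * radius_cm ** 2)  # cm
print(f"required tube length ~ {length_cm:.1f} cm")      # ~51 cm, rounded up to 52 cm
```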
We can calculate the fluidic resistance of the channel and, assuming that the adapters and tubing leading to the channel have a much lower fluidic resistance, use this value to convert the measured pressure into a flow rate. The fluidic resistance is given in the last column. Note that the 0.15 mm slides are actually 0.1 mm sticky µ-Slide I Luer slides, intended to be fixed on a standard microscope slide; the additional height comes from the thickness of the sticky film. The selection of the slide determines the range of shear stress that can be applied. In our case, since we wish to compare our results to those obtained by the Technicon Ektacytometer in osmoscan mode, we use the same magnitude of average shear stress, which is 16 Pa (as calculated at the end of section 3.3). From Table 5 we see that for a 0.1 mm high channel the flow rate is a quarter of the flow rate in a 0.2 mm channel.
The lower flow rate has the advantage of using smaller quantities of blood and solutions. However, the lower channel height (0.1 mm) has larger relative dimensional variations, which can compromise reproducibility. According to the manufacturer, ibidi Luer slides are made of a polyethylene derivative with a linear expansion coefficient of 7×10⁻⁵ cm/cm·°C (method: ASTM D696). This gives a negligible height expansion of 0.007 µm/°C for the 0.1 mm channel and 0.014 µm/°C for the 0.2 mm channel. The height variation within one batch is ±13% for the 0.1 mm µ-Slide I and ±5% for the 0.2 mm µ-Slide I.
Table 6 shows the dimensions of three different flow cells from the same batch, as provided by the producer. In order to minimize errors due to variations in channel height, it is necessary to measure the flow rate each time the flow cell is replaced. In this way it is possible to compensate for variations in channel height by adjusting the pressure so as to keep the shear stress constant (see Equation 22, section 3.5). Another option would be to pre-select channels within a certain height tolerance. In conclusion, the 0.1 mm high channel is preferable for our application.
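For orientation, the sketch below uses the standard plane-Poiseuille relation for a wide rectangular channel, τ_wall = 6 µ Q / (w h²), to estimate the flow rate needed for a given wall shear stress (the thesis uses its own Equation 22 for this purpose). The 20 cP viscosity and the 32 Pa wall shear stress (twice the 16 Pa average shear) are assumptions chosen to be consistent with the values quoted in this chapter; the h² scaling reproduces the factor-of-four difference between the 0.1 mm and 0.2 mm slides.

```python
# Illustrative sketch of the flow-rate / shear-stress relation used for slide
# selection. Based on the standard plane-Poiseuille result tau_wall = 6*mu*Q/(w*h^2);
# the viscosity and wall shear values are assumptions, not thesis settings.

def flow_rate_ml_min(tau_wall_pa, h_m, w_m=5e-3, mu_pa_s=0.020):
    """Volumetric flow rate giving the requested wall shear stress."""
    q_m3_s = tau_wall_pa * w_m * h_m ** 2 / (6.0 * mu_pa_s)
    return q_m3_s * 1e6 * 60.0   # m^3/s -> ml/min

for h_mm in (0.1, 0.2, 0.4):
    q = flow_rate_ml_min(tau_wall_pa=32.0, h_m=h_mm * 1e-3)
    print(f"h = {h_mm} mm -> Q ~ {q:.2f} ml/min")
# Q scales with h^2, hence the 0.1 mm slide needs a quarter of the 0.2 mm flow rate.
```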
Optical system
The optical system consists of the HeNe 632.8 nm laser source, the adjusting mirrors, the microchannel diffraction unit and the photodetector measurement unit (Figure 45 to Figure 47). Two analog-to-digital acquisition systems were used. The first, based on an ARM microcontroller board (LPC2148), sends the conductivity, pressure and photodiode readings to the host PC through a serial port (Figure 52). The second uses a National Instruments acquisition system with a 12-bit A/D converter (USB6009), sending the same sensor measurements to the host PC through a USB port (Figure 48). These two systems were used alternately, according to the software option chosen: LabView or the custom software application. Further information can be found in section 6.2.6, which describes the software.
Conductivity measurement
Osmolality measurement is necessary in order to produce osmotic scan curves, where RBC deformability is plotted against osmolality. The usual way to measure osmolality is with either a freezing-point-depression or a vapour-pressure-depression osmometer. These techniques are expensive and difficult to implement in a small-footprint instrument. Instead, we use conductivity measurement as an indicator of the osmolality of our solutions. The ability of a solution to conduct electricity depends on its ionic character and the solute concentration. The osmolality of our solutions is dominated by the variation in sodium chloride (NaCl) concentration; all other components have constant concentrations throughout the measurement (see section 6.3.2.1). A calibration procedure records conductivity for calibrated solutions at seven different osmolalities (see 6.3.3 Osmolality calibration). A second-order polynomial approximation then gives the parameters used to calculate osmolalities between the calibrated points (Figure 58). In the first stage, a conductivity cell using two platinum wire electrodes and a 10 kΩ NTC temperature sensor was placed in a measurement unit connected in the flow line, very close to the channel (Figure 49, right); this was later replaced by two platinum wires introduced directly into the channel.
Power supply unit
The power supply is composed of three units. The switching power supply of 12V is designed to provide power to the valves and pressure pump. An analog +/-15V symmetric low noise power supply provides power to the opto-detector unit (5mA), the conductivity measurement board (19mA) and the pressure sensors (13mA). A
5V power supply provides power to the Microcontroller board and the Relay board.
The power supplies are seen in Figure 53 and Figure 54. The HeNe laser tube and the syringe pumps are powered by their own dedicated power supplies.
Sample preparation
A venous blood sample was obtained from a healthy volunteer. Two 6 ml tubes with ACD-B solution were used and kept refrigerated during the three days of experiments.
Estimation of dilution rate of blood
Experiments performed on a Couette cylinder Ektacytometer equipped with a four-quadrant detector show negligible errors due to variations in cell concentration and in mean cell hemoglobin content (MCHC) [START_REF] Groner | New optical technique for measuring erythrocyte deformability with the ektacytometer[END_REF][START_REF] Mohandas | Analysis of factors regulating erythrocyte deformability[END_REF]. However, in order to compare results between the microfluidic device and the Technicon Ektacytometer in a valid way, we aimed to have approximately the same number of cells in the volume of solution exposed to the laser beam in both devices. Since we used laser beams of roughly the same diameter and intensity, we could increase the hematocrit by the ratio of the gap between the Technicon cylinders to the microchannel height. So for a 0.1 mm high microchannel, compared to a 0.5 mm gap between the cylinders, we need a 5 times higher hematocrit. The suspension of whole blood introduced into the Technicon viscometer has a hematocrit of approximately 0.08%, so in the microchannel we need blood at a hematocrit of around 0.4%. That makes a dilution of whole blood of around 1 to 100, compared with the 1 to 500 dilution used in the Technicon Ektacytometer. We can estimate the number of cells diffracting the laser beam. The number of RBCs in normal blood is around 5 million per µl. Since we dilute the blood by 100 for the microchannel, we have 50 thousand cells per µl.
The volume exposed by a 1 mm diameter laser beam in a channel of 0.1 mm height is 0.1·π·0.5² mm³ = 0.0785 µl, so the number of cells exposed to the laser beam is 5·10⁴ × 0.0785 ≈ 3926. The actual amount of blood used in the experiments described in section 6.3.4 is 70 µl. This can be further reduced by increasing the laser intensity.
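The same estimate in a few lines of Python (illustrative only, using the numbers quoted above):

```python
# Illustrative check of the dilution and illuminated-cell estimate given above.
import math

rbc_per_ul_whole_blood = 5e6      # typical RBC count in whole blood
dilution = 100                    # 1:100 overall dilution for the 0.1 mm channel
cells_per_ul = rbc_per_ul_whole_blood / dilution          # = 5e4

beam_diameter_mm = 1.0
channel_height_mm = 0.1
illuminated_volume_ul = channel_height_mm * math.pi * (beam_diameter_mm / 2) ** 2  # mm^3 == µl
cells_in_beam = cells_per_ul * illuminated_volume_ul
print(f"illuminated volume ~ {illuminated_volume_ul:.4f} µl, "
      f"cells in the beam ~ {cells_in_beam:.0f}")   # ~0.0785 µl and ~3900 cells
```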
Solution preparation 9
Three kinds of aqueous solutions are used in osmotic gradient ektacytometry: hypotonic, isotonic and hypertonic. Isotonic solution is used as the sample diluent in Shear Scan mode and as pre-diluent in Osmoscan mode. The hypotonic and hypertonic solutions are used in order to create osmolality gradient for Osmoscan mode.
Polyvinylpyrrolidone (PVP) is the polymer used to achieve the necessary viscosity.
Sodium chloride (NaCl) is used to achieve the right tonicity.
Dibasic and monobasic anhydrous sodium phosphates (Na2HPO4, NaH2PO4) are used to achieve the right pH ("Technicon Ektacytometer User's manual,").
Osmolality calibration
Osmolality calibration is achieved by measuring conductivity for several calibration solutions of known osmolality. Curve fitting is performed by using the measured calibration points. The coefficients obtained are stored and used to determine osmolality from measured conductivity during experiments.
Calibration solutions are obtained by mixing the high and low solutions in the ratios specified in Table 9. Using the calibration solutions and distilled water, I measure conductivity at 8 points. The measured values are shown in Table 10; they are the output of a 10-bit analog-to-digital converter.
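The sketch below is an illustrative reconstruction of this calibration step (not the instrument firmware): it fits a second-order polynomial to the Table 10 values and then converts example ADC readings to osmolality, as is done during a scan.

```python
# Illustrative reconstruction of the osmolality calibration: second-order
# polynomial fit of osmolality vs. measured conductivity (10-bit ADC counts),
# using the calibration values of Table 10.
import numpy as np

adc_counts = np.array([24, 80, 400, 502, 654, 758, 1023])   # Table 10, conductivity
osmolality = np.array([0, 40, 150, 200, 300, 400, 750])     # Table 10, mOsm/kg

coeffs = np.polyfit(adc_counts, osmolality, deg=2)          # stored calibration coefficients
to_osmolality = np.poly1d(coeffs)

print("fit coefficients:", np.round(coeffs, 6))
for counts in (300, 600, 900):                              # example readings during a scan
    print(f"ADC {counts} -> ~ {to_osmolality(counts):.0f} mOsm/kg")
```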
Osmoscan curves
Figure 59 shows the curves of three experiments (green, purple and blue). We can see that the curves nearly overlap.
Conclusions
A microfluidic Osmotic Scan Ektacytometer was designed and built as a prototype.
Unlike in a Couette cylinder Ektacytometer, where the shear stress is constant across the gap between the cylinders, a cell population flowing in a microfluidic channel is subjected to a linear distribution of shear stress across the channel. For this reason, comparing the results of the two methods should be done with special care. Driving the flow at constant pressure (Equation 22) provides better precision, because the shear stress does not depend on viscosity when a constant pressure is maintained across the channel.
This new technique opens up the possibility of building a simple, small-footprint instrument that can be used with finger-prick amounts of blood. Finger pricks are already used for many applications (e.g. diabetes glucose monitoring), as they are much less invasive than a needle in the arm. The instrument could therefore prove useful in many settings.
This new technique still has to be validated by a series of experiments on both normal and pathological blood samples. Moreover, the claim that the flow channel offers better measurement precision has to be assessed in detail for each factor contributing to the potential cumulative measurement error, and proven experimentally.
Figure 2: Effect of deformability on high-shear viscosity of blood. From (J. Kim et al., 2015)
Figure 3: Osmotic deformability profiles of red cells from normal control, AIHA, HS, HE/HPP, DHSt, and hemoglobinopathies. From (King et al., 2015)
Figure 5: Deformation of a cell due to negative pressure in a micropipette. From (Mitchison and Swann, 1954)
Figure 6: Schematic diagram of a Rheoscope (Groner et al., 1980)
Figure 7: Optical tweezers forces. From [START_REF] Grier | A revolution in optical manipulation[END_REF]
Figure 9: A single beam RBC laser trapping technique (from Dao et al., 2003)
stage piezoelectric scanner. This way the tip follows the surface changes without breaking. The displacement of the tip is recorded and used to create the surface image. Several variants of the contact imaging mode are used: a tapping mode and a non-contact mode. Other deflection measurement methods are also used, such as piezoelectric detection, laser Doppler vibrometry, optical interferometry and capacitive detection.
Figure 10: Schematics of AFM operation. From [START_REF] Eaton | Atomic Force Microscopy[END_REF]
Figure 11: Schematic of RBC optical magnetic twisting cytometry (OMTC) tests. A magnetic field is applied normal to the magnetization of the bead to generate a torque on the bound bead. The applied torque deforms the cell, which causes the bead to rotate and translate. The in-plane displacement of the bead is tracked optically. From (Puig-de-Morales-Marinkovic et al., 2007)
Figure 12: Experimental setup for wDPM. From (Bhaduri et al., 2012)
Figure 14: Images obtained by cDOT, QPI method. 3D rendered iso-surfaces of refractive index maps of individual RBCs from (A) healthy, (B) iron deficiency anemia, (C) reticulocyte, and (D) hereditary spherocytosis red blood cells (Kim et al., 2014)
Figure 15: Schematic representation of the filtration apparatus. From (Schmid-Schönbein et al., 1973)
Figure 16: Typical shear scan deformability curve. From (Bayer et al., 1994)
Figure 17: Typical osmoscan curves. From (Lazarova et al., 2017).
first prototype of an Ektacytometer was developed in Bicêtre hospital near Paris, by Bessis and Mohandas in the early nineteen-seventies; it had a stationary internal cylinder and a rotating external cylinder. Nearly twenty years later a commercially available Ektacytometer was produced by Technicon (Technicon Instruments, Tarrytown, NY, USA). It was very similar to the Bicêtre hospital prototype but had an internal rotating cylinder and an external stationary one. Several dozen units were sold, but their production was stopped several years later. There are still a few functioning units in some hospitals in France and in the USA. Currently, three commercially available ektacytometers exist, with different shearing geometries: Couette cylinders (Lorca), rotating parallel plates (Rheodyn SSD) and a flow channel (RheoScan-D). All three allow only shear scans, and a comparison study was conducted between them (Oguz K. Baskurt, 2009). A recent variant of Lorca, called Lorrca MaxSis, based on the Couette cylinder shearing technique, allows osmotic gradient ektacytometry.
For a circular diffraction pattern the average vertical and horizontal light intensities should be equal. In the case of an elongated light diffraction pattern the average vertical light intensity will be greater than the average horizontal one (the elongated diffracted image is rotated 90° with respect to the elongated cells). Hence the difference between the vertical and horizontal light intensities gives an indication of the elongation; this difference is divided by the sum of intensities, A+B, so that elongation values stay within the limits of ±1. The Technicon Ektacytometer comprises four modes of operation. Shear scan (called RPM scan in the Technicon manual) and osmoscan were described earlier in section 2.1.3.3. Time scan: the viscometer rotates at a very high speed for four minutes; during this time elongation is measured and recorded. This mode is used for the fragility assay, where cells are submitted to very high shear stress and their fragmentation is monitored by the decrease of the elongation index with time.
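As a minimal illustration of this elongation-index computation (assuming A is the summed signal of the two vertical detector holes A1 and A2, and B the summed signal of the horizontal holes B1 and B2, as in Figure 32):

```python
# Illustrative sketch of the elongation index described above; the hole naming
# (A1, A2 vertical; B1, B2 horizontal) is an assumption based on Figure 32.

def elongation_index(a1, a2, b1, b2):
    """(A - B)/(A + B): 0 for a circular pattern, bounded by +/-1."""
    a, b = a1 + a2, b1 + b2
    return (a - b) / (a + b)

print(elongation_index(1.0, 1.0, 1.0, 1.0))   # round diffraction pattern -> 0.0
print(elongation_index(1.4, 1.4, 0.7, 0.7))   # elongated pattern -> ~0.33
```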
Figure 18: A block diagram of the Technicon Ektacytometer (Swaič, 2007)
Figure 19: Schematic diagram of the Lorca RBC deformability measurement system. From (Dobbe, 2002)
Figure 20: The LORRCA MAXSIS Ektacytometer. From (Lazarova, www document)
Figure 21: Schematic diagram of the RheoScan-D slit-flow ektacytometer (Shin et al., 2005a)
Figure 22: Various shearing geometries used to measure red blood cell deformability: (a) concentric cylinders, (b) cone and plate, (c) parallel disks, and (d) Poiseuille slit flow. From (J. Kim et al., 2015)
Figure 23: Viscosity of diluted blood suspension vs. hematocrit. From (Shin et al., 2005a)
Re ≈ 10³ (kg/m³) × 1.5 × 3.33·10⁻⁶ (m/s) × 8·10⁻⁶ (m) / 2·10⁻² Pa·s = 0.00002. For Couette cylinders (Figure 26) with R1 = 25.35 mm, a gap of ΔR = 0.5 mm and a rotation velocity of N = 150 RPM, the maximum velocity is Umax = 2πR2N/60 = 0.4 m/s, and Re ≈ 10³ (kg/m³) × 0.4 (m/s) × 8·10⁻⁶ (m) / 2·10⁻² Pa·s = 0.16. We see that in both geometries (Couette cylinders and a flow channel) Re is well below 1, and hence neglecting the inertial forces is justified.
We consider in our experiments a fully developed flow, where the flow is in a steady state. The importance of this notion can be observed in Figure 24 and Figure 25. A fully developed flow depends both on the time after which the pressure stabilizes and on the length at which entrance effects diminish.
Figure 24: Velocity profiles for Poiseuille flow: a) t = 0.02 s, b) t = 0.6 s. From (Schulz, 2011)
creates a velocity gradient across the gap and consequently a shear stress on cells suspended in the medium. This is the most widely used shearing geometry in Ektacytometry.
Figure 26: Couette cylinders
This means that the RBCs are subjected to equal shear stress, no matter what their position is across the gap ΔR. In order to verify how close the approximation in Equation 6 is for the dimensions used in ektacytometry, we can calculate the non-approximated shear at the walls of the internal and external cylinders of the Technicon Ektacytometer using Equation 5 and see how close the results are to the approximate value. For R1 = 25.35 mm, ΔR = 0.5 mm and a rotation velocity of N = 150 RPM, we get
Figure 27: Plane Couette flow: velocity profile of fluid between two parallel plates with one plate moving at constant velocity. From ("ANCEY, Christophe - Notes de cours. Mécanique des Fluides")
Figure 28: Cross section of parallel disk rheometer. From (James O. Wilkes, 2015)
Figure 29: Pressure driven flow: velocity profile of fluid through a channel.
Figure 30: Velocity parabolic profile and shear stress linear profile in Plane-Poiseuille flow. From [START_REF] Chung | Extrusion of Polymers: Theory and Practice[END_REF]
slightly worse than our worst case (0.04). It gives a correction factor of 974.
Figure 31: An illustration of the H/W ratio influence on the velocity profile in a rectangular channel, showing a flow profile inside a channel of H = 2h = 0.4 mm, W = 2b = 5 mm. Measurements should be done at a distance from the side walls of at least four times the channel height. From (Shear Stress and Shear Rates for ibidi μ-Slides - Based on Numerical Calculations, ibidi Application Note 11)
Finally, we can use Equation 22 to evaluate the flow rates required in order to produce, in the microchannels, an average shear stress equal to the 16 Pa used in
empirical parameter closely related to intrinsic viscosity). In order to keep experimental conditions constant, users of Ektacytometers are constrained to buy ready-made solutions with guaranteed viscosity. Therefore results obtained with a flow channel are expected not only to reduce cost but also to have better precision and better reproducibility. Since all RBCs are exposed to the same shear stress in Couette cylinders, their diffraction pattern can be correlated to their arithmetic mean deformability. In a flow channel, RBCs are exposed to a linear distribution of shear stress varying from a maximum at the wall to zero in the center. Therefore the diffraction pattern is correlated to the average shear stress, which is half of the wall shear stress (τw/2).
Figure 32: Screen shot of the diffraction image, indicating the position of the holes in the mask for signal detection by the quadrant detector (A1, A2, B1, B2) and the long (L) and short (S) axes, whose lengths are determined by the iso-intensity curves of the image analysis algorithm.
Figure 33: Modified Technicon Ektacytometer with a beam splitter and a camera
Figure 35: A typical osmotic deformability curve indicating the points used for comparison of the different measurements: the minimum at low osmolality (LP), the maximum deformability (MP), and the hypertonic osmolality (HP) at which EI = ½ EImax. The use of these parameters simplifies data presentation and interpretation. In order to facilitate the comparison of the results of the two methods, three indicator points are calculated and marked by the program on each curve, as shown in Figure 35: the hypotonic point (LP), the maximum (MP) where elongation reaches its maximum (EImax), and the hypertonic point (HP). As indicated by the averages and standard deviations, LP, MP and HP vary between individuals.
Figure 36: Experimental curve. The top curve was obtained by the four-quadrant detector and the bottom one by the camera. The Y axis plots the elongation index against osmolality on the X axis. The dispersion of points at low osmolality is due to a low concentration of RBCs at the beginning of the experiment.
which laser diffraction is used to routinely measure RBC deformability, either for the diagnosis of hereditary spherocytosis or elliptocytosis, or to monitor changes in deformability as the result of treatment, a simple measurement of four points seems fully adequate. The added complexity of image collection in real time, comparison of thousands of images and use of image analysis algorithms adds complexity and significant cost. The additional variables, such as the choice of possible regions of the image for interpretation, gain, aperture size, exposure time, saturation, blooming effect and sensor sensitivity degradation, seem unwarranted for routine measurements. The four-quadrant detector is limited by the need for calibration and centering, but its simplicity, constant conditions of use and simple signal processing allow the design of a simple and highly cost-effective instrument for routine measurement of RBC deformability. Additional studies are required to compare the repeatability of both methods, and to demonstrate that the two curves overlap for a variety of pathologies.
Figure 37: Schematic diagram of the FloDif hydraulic system. f1, f2, f3 and f4 denote flow rates. C0, Ch, Ci, Cc and y denote osmolality. V denotes volume.
Figure 38: Hydraulic system Matlab Simulink model at the top, with the Stir Tank subsystem at the bottom
determined by the flow rate (f4) and by the volume of the tubing between the sample mixing stage (the junction joining the solution f2 and the sample f3) and the flow cell. The flow rate through the slide (f4) is determined by the required shear stress, which in turn is affected by the internal size of the flow cell and the solution viscosity.
osmotic equilibration: Ltube = 3.1·20 / (60·π·0.08²) = 51.4 cm. So the final conditions are: V0 = 11 ml, Ch = 600 mOsm/kg, C0 = 30 mOsm/kg, Ltube = 52 cm.
Figure 40: Simulation example of the evolution of volume (green curve, right axis) and osmolality (blue, left axis) in the stir tank for V0 = 10 ml, Ch = 750 mOsm/kg, f1 = 1 ml/min, f2 = 2 ml/min. Calculations are stopped at a volume of 0.5 ml.
(http://www.rbclab.com/Pages/200/240/240.5/240%205.html) During this study, the functionality of the individual parts was tested in Oakland and then optimized based on the data acquired there. In subsequent phases these individual parts were combined in several designs to validate a complete instrument. The basic components and the final design are described in the following sections:
Section 6.2.3: Flow cell chip choice. A description of the available chips and the final choice, based on size considerations and test results.
Section 6.3.2.1: Media. Chosen to provide a proper environment for the red cell suspension, with optimal viscosity, pH and osmolality, including the design of a conductivity assessment to continuously measure the osmolality to which the cells are exposed in the flow cell.
Section 6.2.4: RBC concentrations. The optimized concentration of the RBC suspension that provides proper diffraction of the laser and proper flow in the flow cell. Access to hematology instrumentation aided this optimization.
Section 6.2.4: Laser and detector, including optimization of the optical pathway (orientation and distance of components).
Based on the optimized components and conditions, the individual parts were combined in several designs. Two distinct systems were successfully used: one employing an ARM microcontroller (LPC2148) communicating with a PC through a serial port, and another using a National Instruments acquisition system (USB6009) communicating with the host PC through a USB port. Software was designed to collect the relevant data from the different elements. Coding was done in embedded C, C++/CLI Visual Studio as well as LabView. These two systems were chosen to allow future inclusion in a commercial version of the final design. The software design includes sections for the alignment of the optics, measurement of the diffraction pattern, calibration of the conductometer, and measurement of the pressure gradient in the flow path.
Figure 41: Picture of the initial hydraulic system experimental setup.
Figure 42: Five states of operation for one experiment. Step 1: fill the LOW syringe with hypotonic solution. Step 2: put LOW (hypotonic) solution in the mix reservoir; air pressure is off. Step 3: fill syringe B with HIGH (hypertonic) solution and syringe C with the blood sample diluted in ISO (isotonic) solution, and start the air pressure in order to fill the flow cell with LOW (hypotonic) solution. Step 4: empty syringe A to waste. Step 5: the actual blood test run; a gradually increasing osmolality solution is pushed by air pressure from the mix reservoir to the flow cell and, on its way, is mixed with the diluted blood sample.
Figure 43: Second design, improved hydraulic system. 1: syringe with sample blood in isotonic solution; 2: PVP hypertonic solution; 3: air pump or compressed air tank; 4: pressure regulator; 5: airtight vessel; 6: magnetic stirrer; 7: T connection; 8: tubing of set length to equilibrate cells to osmolality; 9: conductivity and temperature sensors; 10: flow cell.
Description of the hydraulic system as shown in Figure 43:
7: T connection where the RBCs from syringe 1 meet (case A: osmoscan), or do not meet (case B: shear scan), the fluid from 5. A: osmotic deformability measurement: the ratio of the flows from 1 and 5 determines the hematocrit in the flow cell, and the total flow rate (1+5) determines the shear. This total flow, mainly determined by 5 (the main contributor) with 1 as an addition, should be kept constant; also, flow rate 1 << 5, to generate a proper osmolality gradient. B: no flow from 5 allows different rates from 1 for the measurement of deformability at different shear with a set osmolality, or for a fragility measurement.
8: The length of this tube, together with the flow rate, determines the time that the RBCs have to equilibrate (take up, or lose, water).
9: Conductivity measurement. There is a small offset between the measurement here and the actual osmolality at the measuring point; however, this can be calibrated for a given flow rate. Importantly, by filling 1 and 2 with known osmolalities we can calculate the change of osmolality in time, given a set low start volume.
10: Flow cell
Figure 44: Hydraulic system of the second design
(Figure 45 to Figure 47). In order to eliminate possible sources of error due to the stability and collimation of the laser beam, we chose a Helium-Neon laser source for our prototype. Once the proof of principle is established, a small-footprint laser diode can replace the HeNe tube. The vertical position of the flow cell is advantageous for quick evacuation of air bubbles. The filter covering the photodetector is a narrow 10 nm filter, which eliminates the influence of ambient light on the measurements.
Figure 45: Laser tube with optical system. The laser light path can be adjusted with two mirrors (vertical/horizontal) adjusted by set screws.
Figure 46: Flow cell with bottom feed line and top waste line. Flow cells can easily be switched by loosening the screws of the cell holder and sliding the cell in or out.
Figure 47: Four-quadrant detector on a sliding base. The detector front is covered with a filter.
Figure 48: Analog-to-digital conversion unit (NI USB6009) with relay card for solenoid valve automation.
measurement unit connected in the flow line very close to the channel (Figure 49, right). This was later replaced by two platinum wires introduced into the channel, serving as the electrodes of the conductivity measurement cell. In order to avoid electrolysis, the circuit injects a low-level alternating voltage of 1.5 kHz into the measured solution. The voltage drop between the electrodes depends on the solution conductivity. This signal is fed to a log amplifier, then rectified and followed by an integrator. The last stage of the circuit compensates the measured values for temperature variations with the help of a signal obtained from a negative-temperature-coefficient (NTC) sensor introduced in the flow channel. The conductivity board and sensors are shown in Figure 49.
Figure 49: Conductivity measurement card with conductivity and temperature measurement unit. The conductivity measurement unit was integrated into the flow cell at a later stage.
Figure 50: Four-quadrant detector circuit
Figure 52: ARM LPC2148 microcontroller board (right), conductivity measurement board (left) and quadrant photodetector analog interface (on breadboard)
JTAG adapter was used to program and debug the board. The ARM program (ArmEkta) reads the analog inputs from the conductivity, pressure and opto-detector boards and sends them to a PC through a serial port. Microsoft Visual Studio 2010 was used to develop the user interface program (WinEkta). This program communicates with the microcontroller board; it converts the conductivity into osmolality and the four-quadrant detector readings into a deformability index. The resulting curve is displayed on the computer screen as the blood test progresses and is finally stored on the hard disk. The program also allows comparison of curves and calculation of the indicator points. Figure 56 shows the software application in comparison mode, displaying two curves of the same blood sample taken before and after calibration. In the second option, the conductivity, pressure and opto-detector signals are read by the NI USB6009 A/D acquisition unit communicating with a LabView program on the host PC through a USB port (Figure 48 and Figure 57).
Figure 56: Screenshot of the WinEkta program
Figure 57: Computer program for user interface using LabView
Figure 58: Osmolality calibration curve obtained from Table 10
Figure 59: Curves obtained on the 0.1 mm flow cell before osmolality calibration. We can observe a good repeatability between the blue, green and purple curves. Pressure on the channel is 2.5 psi.
Figure 60 shows an experiment with the 0.1 mm flow cell, after osmolality calibration. The duration of the experiment was 7 minutes and the pressure on the channel was 2.84 psi. Solution consumption was: sample 1 ml (composed of 300 µl of blood per 10 ml of isotonic solution), High 1.5 ml, and stir tank 3 ml. The recommended 2:1 flow
Figure 61 shows the results obtained by the Technicon Ektacytometer on the same blood sample used in Figure 60. In order to facilitate the comparison of the results obtained by these two methods, three indicator points are calculated and marked on each curve, as shown in section 4.3 and Figure 35: the hypotonic point, where deformability is minimal in the hypotonic osmolality region; the maximum point, where deformability reaches its maximum; and the hypertonic point, where deformability has a value of half of the maximum. The designers of these methods chose different names for these same points: they are marked Hypo, Max and Hyper in Figure 60 and Omin, DImax and Ohyp in Figure 61.
Figure 60: Curve obtained on the flow cell of H = 0.1 mm after osmolality calibration. Pressure on the channel is 2.84 psi.
The analysis in section 3.4 shows that, in order to have the same average shear stress in both techniques, the wall shear stress in a micro-channel should be double that normally used in the Couette cylinders. Since the distribution of shear stress across the channel and across the gap between the cylinders is very different, an experimental proof of principle was necessary in order to validate results obtained by a microfluidic osmotic scan Ektacytometer. Osmoscan curves obtained by the FloDif are comparable to those obtained by the Technicon Ektacytometer, under equal average shear, for one blood sample. Both the curve shape and the magnitudes are very close (Figure 60 and Figure 61). A comparison of the indicator points of both curves is shown in Table 11. The osmolality of the minimum point (LP, marked Omin or Hypo), the osmolality of the maximum (MP, marked DImax or Max), and the decrease at higher osmolality (HP, marked Ohyp or Hyper) are very similar with either method. Since these data are based on a single experiment, the significance of the results lies more in the closely tracking shapes than in the close magnitude of the values. Importantly, regardless of the method used, we find a minimum around 150 mOsm/kg, which has been shown to correlate with the osmolality at which approximately 50% of the RBCs have hemolyzed [START_REF] Clark | Osmotic gradient ektacytometry: comprehensive characterization of red cell volume and surface maintenance[END_REF], a maximum deformability around 290 mOsm/kg, and a sharp drop in deformation when the cell loses water at hyperosmolalities. Different samples from control individuals show slightly different LP, MP and HP values, based on the individual characteristics of the donors; these shifts are very similar with either detection method. This indicates clearly that the microfluidic method properly identifies the change of RBC deformability over a large range of osmolalities. Due to lack of resources and time, only a very small number of experiments were performed. Nevertheless, this encouraging achievement can be considered as a proof of principle. It still has to be validated in a series of experiments on both normal and pathological blood samples, and a range of normal indicator-point values should be established. Moreover, the repeatability and reproducibility of the results should be further verified experimentally.
CONCLUSIONS
In this thesis I describe, for the first time, the design and construction of a prototype of a diagnostic instrument for several hereditary RBC disorders based on osmotic scan ektacytometry in a micro-channel. I provide experimental results that can serve as a proof of principle for further development. That was my main goal in this thesis, and it was accomplished. This new technique presents several advantages over the currently used rotating-cylinder technique:
- Lower quantities of blood sample are required - important for newborn babies and for the experimental mouse models used in bio-clinical studies.
- Closed-circuit design, giving better sterile conditions and lowering the risk of sample contamination.
- The closed circuit allows monitoring of additional parameters such as oxygenation and temperature.
- It also permits the instrument to be more compact in size and lighter.
- No moving parts in the flow channel (compared to the concentric rotating cylinders), leading to lower power consumption, lower production/manufacturing cost, and simpler maintenance.
- Test conditions closer to blood vessel physiology.
TABLE 1: LORRCA MAXSIS AND TECHNICON MAIN DIFFERENCES  36
TABLE 2: COMPARISON OF FEATURES OF CYLINDRICAL COUETTE FLOW AND PLANE-POISEUILLE FLOW  56
TABLE 3: COMPARISON OF FORMULAS FOR CYLINDRICAL COUETTE FLOW VS. PLANE-POISEUILLE FLOW  57
TABLE 4: THE TABLE INDICATES THE SETTINGS AS DEPICTED IN STEPS 1-5  87
TABLE 5: CALCULATED FLOW RATES, PRESSURES AND TUBE LENGTHS FOR THREE DIFFERENT HEIGHT IBIDI SLIDES (L=50MM, W=5MM). SEE MORE DETAILS ABOVE.  92
TABLE 6: CHANNEL HEIGHT MEASUREMENTS, WITHIN SAME BATCH, FOR IBIDI µ-SLIDE I (PROVIDED BY IBIDI)  93
TABLE 7: CONDUCTIVITY BOARD VOLTAGE MEASUREMENT FOR SEVERAL CONDUCTIVITY VALUES  97
TABLE 8: COMPARISON OF PRESSURE MEASUREMENTS BETWEEN A NEEDLE GAUGE AND TWO PRESSURE SENSORS  103
TABLE 9: COMPOSITION OF CALIBRATION SOLUTIONS  108
TABLE 10: MEASURED VALUES FOR OSMOLALITY CALIBRATION  109
TABLE 11: COMPARING THE INDICATOR POINTS ON CURVES IN FIGURE 60 AND FIGURE 61 FOR THE SAME BLOOD SAMPLE OBTAINED BY THE MICROCHANNEL DEVICE AND THE TECHNICON EKTACYTOMETER  112
FIGURE 61: CURVE OBTAINED ON THE TECHNICON EKTACYTOMETER.  111
2.1 RBC Deformability Measurement Techniques
2.1.1 Introduction
The RBC deformability measurement techniques can be categorized in two main groups: measurement of cell-population deformability and measurement of individual-cell deformability. Techniques used on individual cells are mostly used in research and are rarely suitable for clinical use.
2.1.2 Measurement techniques of single cells
2.1.2.1 Micropipette Aspiration (MA)
The micropipette aspiration technique was first developed by Mitchison and Swann in 1954 as an instrument they called the "Cell Elastimeter" (Mitchison and Swann, 1954).
(Evans,
Table 1: Lorrca MaxSis and Technicon main differences
                           Lorrca MaxSis      Technicon
Diffraction measurement    Camera             Photodiodes
Rotating cylinder          External           Internal
Viscosity of solutions     20 cP              30 cP
Laser source               Laser diode        Helium-Neon
2.2.3 RheoScan-D (RheoMeditech, Seoul, Korea)
Described as a laser diffraction slit-flow rheometer or microfluidic Ektacytometer, this instrument uses as its shearing device a disposable flow channel of precise dimensions (0.2 mm high × 4.0 mm wide × 40 mm long).
Table 2: Comparison of features of Cylindrical Couette flow and Plane-Poiseuille flow
Cylindrical Couette flow                                      Plane-Poiseuille flow
Constant wall velocity                                        Constant pressure
Shear driven flow                                             Pressure driven flow
Linear velocity profile                                       Parabolic velocity profile
Velocity, flow rate and shear rate independent of viscosity   Velocity, flow rate and shear rate dependent on viscosity
Homogeneous shear stress                                      Linear shear stress distribution
Shear stress proportional to viscosity                        Shear stress independent of viscosity
Table 3: Comparison of formulas for Cylindrical Couette flow vs. Plane-Poiseuille flow
Velocity profile
5
Table 6 : Channel height measurements, within same batch, for Ibidi µ-Slide I (provided by Ibidi)
6
µ-Slide I Luer sticky (assembled
µ-Slide I Luer (standard bottom) by hand)
Target
height 100µm 200µm 400µm 600µm 100µm 200µm 400µm 600µm
Measured
height in
µm 87.7 196.8 399.2 602.8 152.4 268.3 461.25 656.98
87.6 192.8 402.1 600.4 146.6 269.7 464.00 653.13
87.3 195.4 399.8 605.0 150.8 269.7 466.80 653.53
Table 7 shows the relation between conductivity and output voltage for the conductivity board. A variable resistance (potentiometer) was connected in place of the conductivity cell and a precision voltmeter was connected at the output. The potentiometer was adjusted to give several output levels and the resistance was measured for each given voltage. All measurements were conducted at a controlled room temperature of 25 °C.
Table 7: Conductivity board voltage measurement for several conductivity values
Resistance (Ω)   Conductivity (mΩ⁻¹)   Voltage (V)
20000            0.050                 0.120
10000            0.100                 0.150
5870             0.170                 0.300
3471             0.288                 0.600
2274             0.440                 1.000
800              1.250                 2.997
456              2.193                 5.000
256              3.906                 7.990
186              5.376                 10.000
Table 8: Comparison of pressure measurements between a needle gauge and the two pressure sensors used in our experiments. All three were connected to the same pressure source, controlled by an adjustable pressure regulator. See the calculation formulae above.
Needle gauge            Freescale sensor                   Honeywell sensor
cm Hg    psi     kPa    Vi0 (V)   P (psi)   P (kPa)        Vi1 (V)   P (psi)   P (kPa)
5        0.967   6.666  0.800     0.967     6.667          1.275     0.969     6.679
10       1.934  13.332  1.403     1.939    13.367          2.055     1.944    13.402
15       2.901  19.998  1.990     2.885    19.889          2.804     2.880    19.857
20       3.867  26.664  2.610     3.884    26.778          3.605     3.881    26.760
25       4.834  33.331  3.258     4.928    33.978          4.420     4.900    33.784
27       5.221  35.997  3.470     5.270    36.333          4.710     5.263    36.284
* Pressure source: Xavitech V200 pump
** Both sensors connected to the same pressure through a T connection
6.2.6 Software
Two distinct options were designed. These systems were chosen to allow future inclusion in a commercial version of the final design. One system has an integrated microcontroller communicating with a host PC that runs a user interface program. Crossworks for ARM microcontrollers, by Rowley, was used as the embedded C development environment on the PC. The Rowley CrossConnect
9 Design and evaluation of a new diagnostic instrument for osmotic gradient ektacytometryArie Finkelstein-September 2017Design and Proof of principle 109
Final %LOW %HIGH
Osmolality (40 mOsm/kg) (750 mOsm/kg)
40 100 0
100 91.6 8.4
150 84.5 15.5
200 77.5 22.5
300 63.4 36.6
400 49.3 50.7
750 0 100
Table 10: Measured values for osmolality calibration
Osmolality (mOsm/kg)   Conductivity (ADC counts)
750                    1023
400                    758
300                    654
200                    502
150                    400
40                     80
0                      24
There is a recent controversy regarding the disadvantage of the homozygous form (Pasvol, 2009).
A more detailed historical review can be found in [START_REF] Bessis | Discovery of the red blood cell with notes on priorities and credits of discoveries, past, present and future[END_REF] and [START_REF] Mohandas | Red cell membrane: past, present, and future[END_REF].
An intermediate regime of motion called swinging was found at around λ = 4, between the tumbling and tank-treading motion regimes [START_REF] References Abkarian | Swinging of Red Blood Cells under Shear Flow[END_REF].
This section is based on Pathobiology of Human Disease (Linda M. [START_REF] Mcmanus | Pathobiology of Human Disease: A Dynamic Encyclopedia of Disease Mechanisms[END_REF]) and Red cell membrane: past, present and future [START_REF] Mohandas | Red cell membrane: past, present, and future[END_REF].
This section is based on the following sources: [START_REF] Musielak | Red blood cell-deformability measurement: Review of techniques[END_REF] [START_REF] Baskurt | International Expert Panel for Standardization of Hemorheological Methods[END_REF] [START_REF] Dobbe | Engineering developments in hemorheology[END_REF] [START_REF] Kim | Measurement Techniques for Red Blood Cell Deformability: Recent Advances[END_REF] [START_REF] Kim | Advances in the measurement of red blood cell deformability: A brief review[END_REF].
State of the Art
Comparison between a camera and a four-quadrant detector in the measurement of red blood cell deformability as a function of osmolality
In many publications Dextran is used as an alternative thickening polymer instead of PVP [START_REF] Groner | New optical technique for measuring erythrocyte deformability with the ektacytometer[END_REF][START_REF] Kuypers | Use of ektacytometry to determine red cell susceptibility to oxidative stress[END_REF].
project of modernising the Technicon Ektacytometer owned by their laboratory.
Pressure measurement card
The pressure sensors were introduced in order to allow close monitoring of the pressure during experiments, and eventually a feedback-controlled pressure pump system. Two pressure sensors are used: one for the source pressure measurement and one for the pressure drop across the microchannel. Since the pressure on the channel determines the shear stress in the flow cell, I chose a higher-precision Honeywell sensor (HSCDRRT005PG2A5) with a total error below ±0.25% of full span. The source pressure is measured by a cheaper Motorola sensor (MPXV5050DP) with a total error below ±2.5% of full span. In order to validate the pressure measurement chain I connected a needle gauge pressure meter and the two pressure sensors to the same pressure source, a Xavitech V200 pump.
APPENDIX
Scientific communications by Arie Finkelstein
A list of my scientific communications is presented below. Some of them were produced in the framework of this thesis.
Internal communications
DESIGN AND EVALUATION OF A NEW DIAGNOSTIC INSTRUMENT FOR OSMOTIC GRADIENT EKTACYTOMETRY
Abstract
The ability of red blood cells (RBC) to change their shape under varying conditions is a crucial property allowing these cells to go through capillaries narrower than their own diameter. Ektacytometry is a technique for measuring deformability by exposing a highly diluted blood sample to shear stress and evaluating the resulting elongation of the RBC shape using a laser diffraction pattern. This work contributes to the design and evaluation of a new diagnostic technique based on osmotic scan ektacytometry, using a microfluidic method. It allows the measurement of the deformability of an RBC population as a function of varying medium osmolality. This measurement makes a differential diagnosis possible for any one of a number of RBC disorders presenting similar symptoms. It also permits the physician to follow the effects of treatments. Both theoretical aspects based on flow equations and a proof of principle are discussed. This new technique opens up the possibility of building the simple, small-footprint instrument described in this work, which can be used with finger-prick amounts of blood.
Different experimental ways to minimize the preforming defects of multilayered interlock dry fabric
Keywords: Fabrics/textiles, Lamina/ply, Preform, Defects
Anwar Shanwan, Samir Allaoui
INTRODUCTION
Long fiber-reinforced composites are widely used in various industries, especially in transportation, because they make it possible to obtain a light final product. Liquid Composite Molding (LCM) processes are among the most interesting manufacturing processes to produce composite parts with complex geometry, because they offer a good compromise in terms of repeatability. The first stage of this process (preforming) is delicate because it involves several deformation mechanisms, which are very different from those of steel sheet stamping [START_REF] Allaoui | Experimental tool of woven reinforcement forming International Journal of Material Forming[END_REF].
The quality of preforms with double-curved geometries depends on several parameters, such as the punch geometry, the relative orientation of the punch and fabric layers, and the blank-holder pressure. These parameters play a major role in the quality of the final shape in terms of defect appearance [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF].
The quality of the preform for a given shape and fabric, and hence the defects that may appear, can be predicted using finite element simulations [START_REF] Boisse | Modelling the development of defects during composite reinforcements and prepreg forming[END_REF][START_REF] Ten Thije | Large deformation simulation of anisotropic material using an updated lagrangian finite element method[END_REF][START_REF] Nosrat Nezami | Analyses of interaction mechanisms during forming of multilayer carbon woven fabrics for composite applications[END_REF][START_REF] Nosrat Nezami | Active forming manipulation of composite reinforcements for the suppression of forming defects[END_REF][START_REF] Hamila | A meso macro three node finite element for draping of textile composite performs[END_REF][START_REF] Allaoui | Experimental and numerical analyses of textile reinforcement forming of a tetrahedral shape[END_REF] or experimental studies [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Experimental and numerical analyses of textile reinforcement forming of a tetrahedral shape[END_REF][START_REF] Soulat | Experimental device for the performing step of the RTM process[END_REF][START_REF] Vanclooster | Experimental validation of forming simulations of fabric reinforced polymers using an unsymmetrical mould configuration[END_REF][START_REF] Chen | Defect formation during preforming of a bi-axial non-crimp fabric with a pillar stitch pattern[END_REF][START_REF] Lightfoot | Defects in woven preforms: Formation mechanisms and the effects of laminate design and layup protocol[END_REF][START_REF] Shan Liu | Investigation of mechanical properties of tufted composites: Influence of tuft length through the thickness reinforcement[END_REF]. In addition, during the manufacturing of composite parts, several layers of fabric are stacked together. As these layers (plies) are not linked together, they behave independently and can slide relative to one another, so that inter-ply friction is generated between them. Several studies showed that the preform quality depends strongly on the inter-ply friction that takes place between the superposed layers during forming [START_REF] Bel | Finite element model for NCF composite reinforcement preforming: Importance of inter-ply sliding[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Ten Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF][START_REF] Vanclooster | Simulation of multi-layered composites forming[END_REF][START_REF] Chen | Intra/inter-ply shear behaviors of continuous fiber reinforced thermoplastic composites in thermoforming processes[END_REF][START_REF] Hamila | Simulations of textile composite reinforcement draping using a new semi-discrete three node finite element[END_REF]. Moreover, the friction effect is more severe in the case of dry woven fabrics, due to shocks between the overhanging yarns of the superposed layers [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behavior[END_REF].
A recent study highlighted the influence and criticality of inter-ply friction according to the stacking sequence of the layers, especially when the inter-ply sliding is greater than the unit cell length of the fabric [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The aim of this study is to improve the quality of dry woven fabric preforms by reducing or eliminating defects through two approaches: the definition of the best process parameters, and the reduction of inter-ply friction by improving the interface between the layers.
MATERIAL AND METHODS
The tests presented in this paper are performed on a commercial woven composite reinforcement, a powdered interlock fabric denoted Hexcel G1151®, with an areal weight of 630 g/m². This fabric is composed of around 7.5 yarns/cm in the warp and weft directions.
The unit cell of G1151® consists of 6 warp yarns and 15 weft yarns distributed over three levels. In situ, the average yarn width is about 2 mm for the warp and 3 mm for the weft. A specific forming device developed at the LaMé laboratory was used to perform the shaping tests [START_REF] Allaoui | Experimental tool of woven reinforcement forming International Journal of Material Forming[END_REF][START_REF] Soulat | Experimental device for the performing step of the RTM process[END_REF]. This device is equipped with two CCD cameras to track the yarn positions and measure the in-plane shear of the reinforcement.
During the preforming process, there is a complex relationship between three parameters: the fabric mechanical properties, the forming process parameters and the punch (part) shape. This paper aims to improve the preforming quality of a dry interlock fabric and to avoid, as far as possible, the appearance of defects during the preforming of a given shape. To study a wide range of defects with maximum amplitudes, the tests were carried out using a highly non-developable, doubly curved shape (prismatic punch) having a triple point and small curvature radii (10 mm). The punch dimensions are shown in Figure 1 [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The different preforming configurations presented in this study are illustrated in Figure 2 [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF], where eight blank holders are used around the preform to apply a pressure of 0.15 bar on the fabric (Figure 2.a). The tests were done with a punch speed of 30 mm/min.
The same experimental conditions are used for both the monolayer and two-layer preforming tests.
In the case of monolayer preforming, several ply/punch orientations are used (α = 0°, 30°, 45°, 60° and 90°). The 0° orientation, considered as the reference configuration (Figure 2.a), means that the weft and warp directions of the stacked layers are parallel to the lateral edges of the punch faces. In the case of two-layer preforming, the tests are conducted by stacking one of the layers at 0° and the other at α° (Figure 2.b), with several configurations such as 0°/0°, 0°/90°, 0°/45°, 45°/0°, etc. Herein, α°/0° means that the upper layer is oriented at α° and the lower one at 0°.
RESULTS AND DISCUSSION
For the monolayer preforming configuration, the first tests were carried out with a monolayer oriented at 0° (reference configuration) under the same optimal conditions found in a previous study [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF] for the same type of fabric. The preforming tests showed a good preform quality at the macroscopic level (Figure 3.a), since the useful area of the preform does not exhibit wrinkle defects.
Nevertheless, at the mesoscopic level, "buckle" defects occur on the faces and edges of the prismatic preform where yarns are subjected to in-plane bending. These yarns then undergo out-of-plane buckling, so that the weaving pattern is no longer respected. In terms of shear angles, the maximum values are reached at the bottom corners of the preform (50° and 55°). These values are close to the locking angle of the interlock fabric. On the other hand, no wrinkle defects occurred in the useful area of the preform due to the coupling between shear and tension, which can delay the onset of wrinkles when the tension applied to the fabric increases [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Launay | Experimental analysis of the influence of tensions on in plane shear behaviour of woven composite reinforcements[END_REF][START_REF] Harrison | Characterising the shear-tension coupling and wrinkling behaviour of woven engineering fabrics[END_REF].
A comparison between the 0° configuration and those at 90°, 0°/0°, 0°/90°, 90°/0° and 90°/90° shows the same results with the same defects (Figure 3.b). In fact, in all of these cases, the relative orientation between the yarn networks and the punch remains unchanged, which confirms the effect of the relative punch/fabric position. The only difference between the 0° and 90° preforms is the inversion of the positions of the weft and warp networks.
Oriented monolayer preforming tests
The preforms obtained with oriented monolayers (α ≠ 0° and α ≠ 90°) show more extensive defects than the above-mentioned cases (0° and 90°), although the shear angle values remain in the same range as those obtained for the reference configurations.
Despite the small shear angles, wrinkles occur in the useful area, as illustrated in zone 1 of Figure 4 (case of a monolayer oriented at 30°). As shown in this figure, wrinkles appear at two opposite corners of the preform where the observed shear angles are low (22°). Moreover, there are no wrinkle defects on the frontal face (area 3), where high shear angles are observed (49°).
In addition, "buckle" defects are also observed in this preform (areas 2) and are located at different places compared with those obtained at the 0° monolayer orientation. These "buckle" defects are mainly due to bending stresses applied to the yarns during preforming. These observations correspond to those obtained in previous studies [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF] and confirm the significant effect of the relative punch/ply orientation on the preform quality for complex geometries. Consequently, in the case of oriented configurations (0° < α < 90°), the preform quality is not acceptable. A poor preform quality leads to aesthetic problems and non-compliance with the dimensional specifications. In addition, these defects may affect the mechanical performance of the final part [START_REF] Hörrmann | The effect of fiber waviness on the fatigue life of CFRP materials[END_REF][START_REF] Cruanes | Effect of mesoscopic out-of-plane defect on the fatigue behavior of a GFRP[END_REF]. Therefore, the preform quality needs to be improved.
Improvements can be achieved by different strategies, such as substituting the fabric with one that has better formability, changing the manufacturing process, applying the best manufacturing process parameters, and/or modifying the ply orientations and part geometry. From an industrial point of view, some of these strategies could be costly and/or time-consuming (change of process, change of reinforcement).
However, some parameters are often set by the technical specifications of the part (such as ply orientations, geometry, type of reinforcement, etc.). Furthermore, modifying such parameters can affect the entire system in which the part is to be integrated. Thus, the most interesting strategy is to modify the parameters that do not affect the specifications of the part. It is possible, for example, to improve the preform quality by optimizing the process parameters.
Hence, a new strategy based on modifying some process parameters, namely the blank-holder pressure and geometry, was adopted, while the other parameters, such as the orientation of the layers, the punch geometry and the type of fabric, remained fixed. Preforming tests were performed on oriented layers, and two parameters were changed: the blank-holder pressure and the blank-holder geometry. These changes were applied separately in order to analyze the results and determine which parameter has the greatest influence on the preform quality.
The tests showed that increasing the tensile force applied to the yarn networks, obtained by increasing the blank-holder pressure, delays the onset of wrinkles or avoids them [START_REF] Allaoui | Experimental preforming of highly double curved shapes with a case corner using an interlock reinforcement[END_REF][START_REF] Launay | Experimental analysis of the influence of tensions on in plane shear behaviour of woven composite reinforcements[END_REF][START_REF] Harrison | Characterising the shear-tension coupling and wrinkling behaviour of woven engineering fabrics[END_REF]. Therefore, the pressure was first increased up to 0.2 bar only on the two square blank holders located at the opposite corners B and D, where the wrinkle defects appear (Figure 4). The maximum pressure value was limited by the capacity of the compressed-air system.
The obtained results show that the wrinkles remain on the preform but their amplitude is slightly decreased (Figure 5). In addition, the shear angles did not change over the preform areas compared with the case where the pressure applied on the fabric is 0.15 bar (Figure 4).
The tests showed that the increase in blank-holder pressure did not completely avoid the wrinkles. Hence, an adapted blank-holder geometry was suggested to improve the preform quality [START_REF] Capelle | Complex shape forming of flax woven fabrics: Design of specific blank-holder shapes to prevent defects[END_REF]. A single blank holder surrounding the preform, which eliminates the gaps present in the initial configuration, was used to replace the eight individual ones. New tests were then performed with this geometry at a pressure of 0.15 bar. As shown in Figure 6, the obtained preform has a better quality because the decrease in wrinkle amplitude is larger than the one obtained by increasing the pressure. This means that the effect of the blank-holder geometry on the preform quality is more significant than the effect of pressure, because the blank holder controls the application of the forces and their distribution on the yarns. However, the change in blank-holder geometry did not completely eliminate the wrinkles.
For this reason, a third solution, combining the two previous strategies (pressure of 0.20 bar + single blank holder), was used to improve the quality of the preforms. In this case, a good quality was obtained without any wrinkle defects, as shown in Figure 7. Hence, combining several optimized parameters can avoid wrinkle defects.
On the other hand, buckle defects remain in the useful area in spite of the proposed solutions. The extent of the region affected by these defects and their amplitude are almost identical. Consequently, "buckle" defects cannot be completely avoided by the strategy used in this study. In fact, this defect is generated by the in-plane bending of yarns, which leads to their out-of-plane buckling, promoted by the fact that the fibers are continuous and not bonded together. To avoid this defect completely, it may be possible to change the nature of the yarns and/or their geometry. This solution is sometimes possible with natural yarns [START_REF] Capelle | Complex shape forming of flax woven fabrics: Design of specific blank-holder shapes to prevent defects[END_REF], which are close to a homogeneous material because the fibers are bonded together, but it is difficult or impossible to achieve in the case of carbon and glass yarns.
To conclude this part, the relative punch/layer orientation has a significant influence on the preform quality, whereas optimizing the process parameters (blank-holder geometry and/or applied pressure) can lead to further improvements in preform quality. In addition, the blank-holder geometry has a more significant effect than the tension applied to the yarn networks. Finally, the combination of these two solutions led to better results. However, the two improvements do not have an important influence on the mesoscopic defects (buckles) because they do not act on the mechanisms involved in the appearance of these defects.
Multi-layer preforming tests
The same approach used for the monolayer tests was applied to the two-layer preforming tests. In this section, we present the results for the 45°/0° preforms (45°/0° means that the oriented layer is the external one in the stacking order). The 45°/0° stacking sequence was chosen for two reasons: firstly, because it is more prone to defects than the 0°/45° configuration [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF] and, secondly, because it enables the required observations and measurements to be made on the outer layer (45°) in order to compare it with a monolayer preformed at 45°. The preforming results for the 45°/0° stacking sequence, obtained with the same initial process parameters, show more numerous defects than the 45° monolayer preform, and thus a poor quality is obtained, as shown in Figure 8. The type and location of these defects remain the same as those of the 45° monolayer configuration, but their amplitude and number are significantly higher, whereas the shear angle values remain relatively unchanged (Figure 8). This poor quality is attributed to inter-ply friction, as highlighted and demonstrated in a previous study [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF].
The inter-ply friction leads to the appearance of additional wrinkles in the center of the frontal face of the two-layer preforms, where the shear angles are highest. Moreover, when compared with the 30° and 30°/0° preforms, the 45° and 45°/0° configurations show more numerous defects as well as additional types of defects, such as weave pattern heterogeneity (Figure 4 and Figure 8). This increase in the type and amplitude of defects is induced by the effect of the relative punch/layer orientation.
To improve the quality of the 45°/0° preforms, the same strategy used for the oriented monolayers was applied. Draping tests were conducted first with an increase in the blank-holder pressure, then with the new blank-holder geometry, and finally by combining the two solutions. The results show an improvement in the two-ply preform quality with the same trend as observed for the monolayer preforms, i.e. the use of the new blank-holder geometry gives better results than the increase in pressure (Figure 9). When the two solutions are used together, their effects are combined and a greater improvement is obtained (Figure 10).
However, defects always remain on the preform in spite of these improvements. Hence, the combination of these two solutions did not completely avoid wrinkles.
In fact, in the 45°/0° configuration, there is an interface between the two stacked layers, which plays a major role in the preform quality. It has been shown in a previous study that the number and amplitude of defects increase because of inter-ply friction caused by the relative sliding between layers [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The fabric/fabric friction behavior is governed by a shock phenomenon occurring between the transverse overhanging yarns of each ply, which leads to signal variations with high amplitudes due to the high tangential forces generated by the shocks (Figure 11). These tangential forces locally hamper the sliding of the plies and lead to an increase in the appearance and amplitude of defects. The inter-ply friction effect is significant when the inter-ply sliding is larger than the unit cell length of the fabric. In the case of 45°/0°, the measured sliding distance can reach more than 70 mm while the unit cell length is about 8 mm.
To avoid shocks between overhanging yarns, it is necessary to reduce the inter-ply sliding or to decrease the inter-ply friction. Reducing the sliding distance remains difficult because it depends on both the ply/ply and punch/ply relative positions. However, it is possible to reduce the global ply/ply friction by making the dynamic friction smoother. To this end, the shock phenomenon occurring between yarns has to be avoided or reduced. This can be achieved by several solutions that require modifying the crimp, the fabric meso-architecture, the yarn shape and/or material, the surface treatment, etc. These improvements would therefore require a change of reinforcement or of its characteristics, which is sometimes not possible according to the technical specifications.
To overcome this problem, we proposed inserting an intermediate mat reinforcement layer between the two preformed plies. This solution does not require changing the fabric or its characteristics (crimp, meso-architecture, etc.). As the mat is not a woven fabric, no shocks take place between the woven plies. It is evident, however, that the mat insertion modifies the stack, and therefore the mechanical performance and the subsequent stage of the LCM process (resin injection/infusion), which has to be taken into account.
To verify this assumption, fabric/mat friction tests were conducted in order to compare their results with those of fabric/fabric. A commercial glass mat, with an areal weight of 300 g/m², was used for this study. These tests were done by means of an experimental test bench developed at the LaMé laboratory of Orléans University [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. The working principle of the bench is based on the sliding of two plane surfaces (Figure 12). A normal force FN is applied to the upper sample, which is fixed and connected to a tensile force sensor. The lower sample can be moved horizontally so that it generates a tangential force, which is measured by the sensor. Fabric/mat friction tests were carried out in the warp and weft directions according to the experimental conditions given in Table 1.
The obtained fabric/mat friction behaviors are presented in Figure 13, where smoother dynamic friction behaviors are observed in comparison with the interlock/interlock behavior (Figure 11 and Figure 13). This means that there are no yarn shocks during the relative sliding of the plies. In addition, the average values of the dynamic friction coefficient for fabric/mat are 0.25 in the warp direction and 0.35 in the weft direction. These values are at least halved in comparison with the fabric/fabric case, where the friction coefficient is around 0.61.
These results confirm our hypothesis and could therefore lead to an improvement in the preform quality of two stacked layers. To verify this, preforming tests were carried out after inserting a glass mat between the two layers of the 45°/0° configuration, with a 0.15 bar pressure applied by the single blank holder surrounding the preform (Figure 14). Figure 15 shows the positive effect of this strategy: the wrinkle defects have decreased significantly in comparison with the case illustrated in Figure 10. The remaining defects have a low amplitude, which can be considered negligible compared with the initial configuration.
Consequently, the defect amplitude is highly reduced thanks to the use of the mat. The improvements were especially observed at the B and D corners (Figure 15). In this case, the global improvement is due to two reasons. First, the mat prevents any direct contact between the overhanging yarns of the preformed G1151® layers, i.e. there are no shocks between the yarns of the two layers, even if the sliding between layers is larger than the unit cell length of the fabric. The use of an intermediate layer (glass mat) therefore plays an important role in the stabilization of the friction coefficient. In addition, the stress is also reduced during the sliding between the preformed layers.
Second, the mat allows a smooth friction during the inter-ply sliding, and so the friction coefficient is reduced. Hence, the wrinkle and buckle amplitudes are greatly reduced.
These results highlight the importance of the intermediate mat layer in reducing the friction coefficient and, consequently, the amplitude of the defects. In addition, the use of an intermediate mat layer stabilizes the variation of the friction coefficient during sliding, i.e. the variation amplitude of the friction coefficient in the case of mat/fabric friction was reduced to a quarter of that of the fabric/fabric case.
Finally, it can be concluded that combining the three previous solutions (new blank-holder geometry, pressure increase and mat insertion) allows a considerable reduction in the appearance of defects and in their amplitude. Thanks to the combination of these solutions, it was practically possible to avoid the appearance of defects, especially wrinkle defects, in the useful area of the preform.
Nevertheless, whatever the level of improvement, defects remain, even with small amplitude. For this reason, we decided to combine all the previous solutions with a last improvement, which is the use of the compaction effect between layers. Indeed, it has been shown that if the layer subject to defects is at the inner position relative to the punch, the outer ply applies a compaction effect that leads to a decrease in defects [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF] (case of the 0°/45° configuration). This configuration is all the more interesting because, in conventional laminates, a ply oriented at 0° or 90° is often placed on the outside of the stack, which makes this improvement industrially viable. Consequently, preforming tests were done by combining the four following solutions: use of the new blank-holder geometry; use of the optimal pressure value (0.15 bar); use of an intermediate mat layer between the preformed fabric layers; and laying the oriented ply below the non-oriented one, i.e. using the 0°/45° stacking sequence. As shown in Figure 16, the wrinkle defects completely disappeared thanks to the combination of these four solutions. Only the buckle defects remain on the preform, as shown on the central face of the preform (Figure 16).
Finally, each of these four improvement solutions was applied alone in this experimental study in order to classify them according to their influence on the appearance of defects. The defects and the preform quality obtained after applying each solution were compared quantitatively and qualitatively. The results are summarized in Table 2. The sign (+) means that the solution has a positive effect in avoiding the considered defect, while the sign (-) means a negative effect.
According to these results, the improvement solutions can be classified from the most significant to the least significant as follows:
1) Reduction and stabilization of the dynamic friction coefficient (by introducing an intermediate mat between the layers);
2) Adapted blank-holder geometry and number;
3) Laying the oriented layer below the non-oriented one;
4) Applying a tension, through the blank-holder pressure, on the yarn networks.
CONCLUSION
This study presents a strategy to improve the quality of complex dry preforms. The results showed that the inter-ply friction and the relative orientation between the layers and the punch significantly influence the preform quality by inducing numerous defects with large amplitudes and extent. Modifying the blank-holder geometry and increasing the blank-holder pressure improved the quality of the monolayer preforms; these changes made it possible to avoid wrinkles in the monolayer preforms. On the other hand, they did not bring significant improvements to the two-layer preform quality, since the inter-ply friction occurring during multi-layer preforming strongly affects the appearance of defects.
The reduction of the inter-ply friction can be achieved by several solutions; most of them imply a change of the reinforcement or of its characteristics, which is sometimes not possible according to the technical specifications of the composite part. The best solution proposed here is to insert a mat fabric between the preformed layers, which significantly decreases the number and amplitude of wrinkles. However, this modification of the stack has to be taken into account, as it will modify the mechanical performance of the material and the subsequent stage of the LCM process (resin injection/infusion).
The obtained results showed that the inter-ply friction is the most important parameter influencing the appearance of defects, followed by the blank-holder geometry, then the compaction between layers and finally the tension applied to the yarn networks. In conclusion, suitable technical solutions should be applied to improve the friction during the shaping of dry reinforcements in order to improve the preform quality, which is strongly affected by the friction between layers.
Figure 1: Punch dimensions
Figure 8: (a) external layer oriented at 45°, (b) internal layer at 0°; legend: wrinkle zones at the B and D corners and in the internal layer, buckle zones, zone of high shear (57° ± 4, 48° ± 2) without wrinkles
Figure 9: 45°/0° defects after using the new blank-holder geometry
Figure 10: 45°/0° defects after combining the new blank-holder geometry and the pressure increase
Figure 12: Friction bench principle
Figure 13: Fabric/mat friction behaviors in the warp and weft directions
Figure 14: Configuration of two-layer preforming (45°/0°) with mat insertion
Figure 16: Complete disappearance of the wrinkle defects when combining the four solutions: new blank-holder geometry, optimal pressure value, use of a mat layer and laying the oriented layer below the non-oriented one
Table 1: Experimental conditions of the friction tests
Table 2: Classification of the process parameters and their influence on the preform quality
Improvement solution | Wrinkles | Buckles | Global quality
Two layers in initial configuration 45°/0° | --- | - | -4
Pressure increase | -- | - | -3
New blank-holder geometry (rectangular form) | - | - | -2
Intermediate mat insertion | ++ | + | +3
Pressure decrease and new blank-holder geometry | +++ | + | +4
Pressure decrease, new blank-holder geometry and intermediate mat | ++++ | + | +5
Pressure decrease, new blank-holder geometry, intermediate mat and 0°/45° stacking sequence | ++++ | ++ | +6
01763182 | en | [ "spi.meca.mema" ] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763182/file/Mesoscopic%20and%20macroscopic%20friction%20behaviours.pdf
L Montero
S Allaoui
email: samir.allaoui@univ-orleans.fr
G Hivet
Characterisation of the mesoscopic and macroscopic friction behaviours of glass plain weave reinforcement
Keywords: A. Fabrics/textiles, A. Yarn, B. Mechanical properties, E. Preforming
Friction at different levels of the multi-scale structure of textile reinforcements is one of the most significant phenomena in the forming of dry fabric composites. This paper investigates the effect of the test conditions on fabric/fabric and yarn/yarn friction. Friction tests were performed on a glass plain weave and its constitutive yarns, varying the pressure and velocity.
The results showed that the friction behaviours at the two scales were highly sensitive to these two parameters. An increase in pressure led to a decrease in the friction coefficients until steady values were reached, while an increase in velocity led to an increase in the friction coefficients.
At each scale, the frictional behaviour of the material was significantly influenced by the structural reorganisation of the lower scale.
Introduction
Fibre-reinforced composite materials are gaining in popularity in industry because of their high performances, lightweight and design flexibility. In addition, textile composites offer sustainable solutions concerning environmental issues, for instance in transport sectors where decreasing the weight of the different structures can reduce fuel consumption and hence polluting emissions. However, even if fibrous composites appear to be a good solution, many issues remain, especially as regards mastering processes such as the predictability of the quality of the part, cycle time, cost price, etc.
Liquid Composite Moulding (LCM) processes are among the most attractive candidates to manufacture complex composite shapes with a high degree of efficiency (cost/time/quality).
The first step in LCM processes consists in forming the fibrous reinforcement. The mechanical behaviour of dry reinforcement with respect to the shape geometry is a key point in order to ensure both a correct final shape and good mechanical properties of the final part. In addition, during a multi-layer forming process, friction between the reinforcement layers and between tools and external layers has a significant effect on the quality of the preform obtained (appearance of defects) [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF]. However, the mechanisms governing the preforming of dry reinforcements are far from being fully understood [START_REF] Hivet | Analysis of woven reinforcement preforming using an experimental approach[END_REF]. During preforming, the reinforcements are subjected to different loadings such as tension, shear, compression, bending, and friction at different levels of the multi-scale structure of the textile reinforcement. Friction can cause local defects such as wrinkling or yarn breakage, significantly altering the quality of the final product, and can modify the final orientation of the fibres, which is crucial for the mechanical behaviour of the composite part. Friction also plays a significant role in the cohesion and the deformation mechanism of a dry fibrous network. Consequently, understanding the friction behaviour between reinforcements is necessary so as to understand, master and optimize the first forming step in LCM processes. A growing number of studies have therefore been conducted on the friction behaviour between fibrous reinforcements or on the relationship between friction and formability [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF][START_REF] Thije | A multi-layer triangular membrane finite element for the forming simulation of laminated composites[END_REF][START_REF] Hamila | Simulations of textile composite reinforcement draping using a new semi-discrete three node finite element[END_REF][START_REF] Gorczyca-Cole | A friction model for thermostamping commingled glass-polypropylene woven fabrics[END_REF][START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF][START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF][START_REF] Thije | Design of an experimental setup to measure tool-ply and plyply friction in thermoplastic laminates[END_REF]. However, since it is a complex issue due to the multi-scale fibrous nature of the reinforcements, considerable research remains to be done in order to fully understand this phenomenon.
Different kinds of studies on the frictional behaviours of textile and technical reinforcements have been conducted over the past years. These materials are in general defined in terms of their multi-scale character: macroscopic (fabric), mesoscopic (tow or yarn) and microscopic (fibre). Studies carried out at each scale, using different devices, show that depending on the scale considered, the behaviour obtained appears to be different.
At the microscopic scale, Nowrouzieh et al. evaluated experimentally and with a microscopic model the inter-fibre friction forces of cotton to study the fibre processing and the effect of these forces on the yarn behaviour [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF][START_REF] Nowrouzieh | Inter fiber frictional model[END_REF]. They found that the friction behaviour was correlated to the yarn strength and its irregularity (variation in the fibre section). The fibre with the highest friction coefficient produced more regular yarns. Analysis of the variance of the modelling results showed that inter-fibre friction was more sensitive to the normal load than to the velocity.
At the mesoscopic scale, the friction between various couples of materials such as tow/tow, tow/metal and tow/fabric has been studied on reinforcements made from different fibres (aramid, carbon and E-glass). The results demonstrated the significance of the relative orientation between the tows (parallel and perpendicular) on inter-tow friction for technical reinforcements [START_REF] Vidal-Sallé | Friction Measurement on Dry Fabric for Forming Simulation of Composite Reinforcement[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF]. The contact model proposed by Cornelissen provides a physical explanation for the experimentally observed orientation dependence in tow friction (tow/metal or inter-tow) [START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: A contact mechanics model of tow-metal friction[END_REF]. The mesoscopic frictional behaviour of carbon tows was explained by the microscopic constitution of the tow assuming a close packing of filaments which leads the normal load in a stationary tow to transfer from one layer of filaments to the layer beneath. Some recent papers deal with fabric/fabric and fabric/metal friction at the macroscopic scale.
Many of them deal with textile materials and focus on the effect of test conditions with the aim of improving the manufacturing process or adapting and functionalizing the final product (clothes). Ajayi studied the effect of the textile structure on its frictional properties by varying the yarn sett (number of yarns/cm) and the crimp while keeping the Tex and thickness constant [START_REF] Ajayi | Effects of fabric structure on frictional properties[END_REF]. The frictional properties increased by increasing the crimp (and thus the density), which was attributed to the knuckle effect of the textile. The term knuckle refers to the cross-over points of the warp and weft yarns making up the fabric. During the weaving process, knuckles generate yarn undulations, i.e. an irregular and rough surface of the fabric, because the two sets of yarns interlace with each other. The yarn undulation is characterised by the yarn knuckle, which is defined as the yarn crown. Furthermore, several studies have been conducted to understand the effect of the test conditions, such as atmospheric conditions which are relevant for textiles used for clothing especially as they are often made of natural materials. Several parameters, such as relative humidity, fabric structure, type of fibre material and direction of motion were found to exhibit an effect on the textile/textile friction while temperature (0-50°C) did not significantly influence the frictional parameters [START_REF] Arshi | Modeling and optimizing the frictional behavior of woven fabrics in climatic conditions using response surface methodology[END_REF]. Here again, the most significant parameter was related to the fabric structure. Das and co-workers [START_REF] Das | A study on frictional characteristics of woven fabrics[END_REF] examined the textile/textile and textile/metal frictional characteristics that simulate interaction between clothing items and fabric movement over a hard surface. They performed frictional tests with different normal pressures on commercial fabrics typically used in clothing industries in which some are composed of 100% of the same material while others are blended (made with two materials such as polyester/cotton). It was concluded that fabric friction is affected by the rubbing direction, type of fibre, type of blend, blend proportion, fabric structure and crimp. Fabric/metal friction is less sensitive to the rubbing direction.
A few studies deal with the macroscopic frictional response of technical reinforcements. These materials have many similarities with the textile family but also differences such as material constitution, unit cell size and some of the mechanisms involved during their frictional behaviour. A recent benchmark compared results obtained with different devices developed by teams working on this topic [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. Experimental tests on fabric/metal friction performed by the different teams on Twintex reinforcement exhibited an effect of pressure and velocity on the dynamic friction coefficient. In another study, the fabric/fabric friction behaviour was characterized using a specific device on different glass and carbon fabric architectures [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]. It was shown that the fabric/fabric friction was highly different and more complex than that of textile or homogeneous materials. The measured values varied by up to a factor of two during the friction test under the same conditions. A period and an amplitude that depend strongly on the relative positioning and shift of the two samples characterize the frictional signal. The period of the signal can be directly related to the unit-cell length (periodic geometry). In addition, the specificity of the fabric/fabric contact behaviour was found to be directly related to the shocks taking place between overhanging yarns. However, no studies were found in the literature addressing the interesting question of the effect of test conditions that are representative of the preforming of dry reinforcements on fabric/fabric friction behaviour.
The study carried out by Cornelissen is undoubtedly useful to build a relationship between the micro and the meso scales as regards friction, but extensive experimental work needs to be performed at the meso and macro scales in order to obtain enough data for the correct definition and identification of a future model. It is therefore necessary to study the variation in friction behaviour with respect to the normal pressure and velocity for different fabric architectures to contribute to a better understanding of fabric/fabric friction behaviour. This is the goal of the present paper.
Materials and Methods
Tested dry fabric
The experiments were conducted on a glass plain weave dry fabric (Figure 1.a). This balanced fabric has a thickness of 0.75 mm and an areal weight of 504 g/m². The width of the yarn is 3.75 mm and the average spacing between neighbouring yarns (in the weft and warp directions) is around 5 mm, including a 1.25 mm gap because the yarns are not tightened together. The unit cell length is ~10 mm. For the tow samples, the yarns were extracted from the woven fabric.
Description of the device
When undertaking experimental investigations of dry-fabric friction, the various mesoscopic heterogeneities, the different unit cell sizes and the anisotropy should be considered. This requires the use of specific experimental equipment designed to take these properties into account. A specific experimental device at the PRISME laboratory, presented in Figure 1.b, is dedicated to this task [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. The device consists of two plane surfaces, on which the two samples are fixed, sliding relative to each other. The bottom sample is fixed on a rigidly and accurately guided steel plate that can be moved horizontally in a fixed direction. The imposed velocity can vary from 0 to 100 mm/s. The top sample is fixed on a steel plate which is linked to a load sensor connected to a data acquisition system used to record tangential forces during the test. A dead weight on the top sample provides a constant normal load F_N. To obtain a uniform pressure distribution on the contact area of the samples, a calibration procedure was performed before testing to determine the optimal position of the dead weight [START_REF] Hivet | Design and potentiality of an apparatus for measuring yarn/yarn and fabric/fabric friction[END_REF]. For fabric/fabric experiments, the position of the dead weight was defined by using the mean of the tangential force. This approach gives an average position which limits the effect of specimen misalignment.
Test conditions
Before starting the experiments, the samples were conditioned in standard laboratory conditions (T ≈ 23 °C, RH ≈ 50%). To distinguish the different physical phenomena occurring during the friction tests, an acquisition frequency of 50 Hz was used. This value was chosen based on tests performed in a previous study on the same material with the same bench. The friction coefficient (µ) was calculated using Coulomb's theory:
µ = F_T / F_N = F_T / (M·g)    (1)
where F_T is the tangential load measured by the sensor, F_N is the normal load, M is the total mass of the upper specimen with the dead weight and g is the gravitational acceleration.
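As an illustration of Eq. (1), a minimal sketch converting a recorded tangential-force signal into an instantaneous friction coefficient; the force values and the 2.0 kg total mass in the usage line are placeholders, not the values used on the bench.

```python
G = 9.81  # gravitational acceleration, m/s^2

def friction_coefficient(f_tangential_n, total_mass_kg):
    """Instantaneous friction coefficient per Eq. (1): mu = F_T / (M * g)."""
    return [f / (total_mass_kg * G) for f in f_tangential_n]

# Hypothetical short force record (N) for an upper specimen + dead weight of 2.0 kg
mu = friction_coefficient([5.1, 6.3, 11.8, 7.0], total_mass_kg=2.0)
print(mu)
```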
To investigate the effect of the shaping process conditions on both the macroscopic and mesoscopic behaviours of the fabric, two kinds of friction tests were performed: fabric/fabric and yarn/yarn. Tests were conducted for four relative positions of the samples, varying the pressure and the test speed. For the relative positioning of the samples, four different orientations were tested: 0°/0°, 0°/90°, 90°/90° and 0°/45°. These configurations, generally used in laminates, exhibit the extreme friction coefficients (maximum and minimum) of two fabric plies [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]. The position 0°/0° was the reference one, and consisted in orienting the weft yarns of the two samples in the stroke direction. For 0°/90° and 0°/45°, the lower sample was kept along the same direction as the reference configuration, while the upper sample was rotated. For the 90°/90° configuration, the warp yarns of the two samples were oriented in the sliding direction.
For the yarn samples, the 0° orientation corresponds to the tows oriented in the movement direction, while 90° means that they are perpendicular.
Tests were conducted at five different pressures (3, 5, 10, 20 and 50 kPa), which are in the range of values involved during dry fabric preforming, and at a speed of 1 mm/s. This velocity is the one used in a previous study to investigate the effect of pressure on fabric/metal friction behaviour [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. It is of the order of magnitude of the inter-ply velocity values during the forming of dry reinforcements [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The pressure is calculated by dividing the normal force by the area of the upper specimen. This definition is the same for the fabric/fabric and yarn/yarn tests; indeed, each specimen (upper and lower) of the yarn/yarn tests contains several yarns placed next to each other. Consequently, the calculated pressure for the fabric/fabric tests is a nominal one rather than the real one, because contact between the two samples does not occur over this whole surface due to crimp and nesting.
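As an illustration of how this nominal pressure relates to the dead weight, a small sketch is given below; the 100 mm x 100 mm specimen area is a made-up value for illustration, not the actual sample size used on the bench.

```python
G = 9.81  # m/s^2

def dead_weight_mass_kg(target_pressure_pa, contact_area_m2):
    """Total upper mass (specimen + dead weight) giving a nominal pressure p = F_N / A."""
    return target_pressure_pa * contact_area_m2 / G

# Hypothetical 100 mm x 100 mm upper specimen (0.01 m^2)
for p_kpa in (3, 5, 10, 20, 50):
    print(f"{p_kpa:>2} kPa -> {dead_weight_mass_kg(p_kpa * 1e3, 0.01):.2f} kg")
```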
The velocity values selected were 0.1, 1, 10 and 50 mm/s. This gives a factor of 500 between the lowest and highest speeds, which covers the inter-ply sliding speed range during multi-layer forming whatever the laminate considered [START_REF] Allaoui | Effect of inter-ply sliding on the quality of multilayer interlock dry fabric preforms[END_REF]. The tests at different pressures were analysed to determine the pressure value at which the tests at various velocities were performed. This pressure was chosen so as to distinguish the effects of the two parameters and make the comparison reliable.
At least five tests were performed for each test case.
Results and discussion
FABRIC/FABRIC FRICTION BEHAVIOUR
Typical layer/layer friction behaviour for dry fabric is illustrated in Figure 2. This curve shows very clearly that the fabric/fabric friction behaviour is very different from the Coulomb/Amonton friction behaviour of a homogeneous material. A previous study showed that this behaviour is due to the superposition of two phenomena [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF]: yarn/yarn friction between the yarns of the two dry fabric plies, and shocks between the transverse overhanging yarns of each ply. During the weaving process, warp and weft yarns are manipulated in such a way that the two sets of yarns interlace with each other to create the required pattern of the fabric. The sequence in which they interlace with each other is called the woven structure (meso-structure). The yarns of one direction are bent around their crossing neighbour yarns, generating different crimps of the two networks resulting from the asymmetry of the weaving process. As a result, a height difference is obtained between the weft and the neighbouring warp, defined as the overhang depth or knuckle height (see Figure 3), which promotes the shock phenomenon (lateral compression of yarns). These shocks occur periodically and generate high tangential reaction forces (F) leading to a substantial increase in the maximum friction values. The periodicity of the shocks is linked to the fabric meso-architecture and to the relative position of the plies, since it is difficult to control the relative position of the two samples during the tests, especially for reinforcements with a small unit cell length. During the positioning of the two samples on each other before testing, one may obtain a configuration in which the two are perfectly superimposed (Figure 4.a) or laterally shifted (Figure 4.b). When the two plies are perfectly superimposed, the peak period is associated with the length of the sample unit cell, which is ~10 mm for the glass plain weave considered here. Figure 2 illustrates this configuration. On the other hand, when the plies are not perfectly superimposed (shifted samples), the peaks appear at periods equal to a portion (half in the case of the plain weave) of the fabric unit cell length.
In order to analyse the variation in the values of the fabric/fabric friction coefficient, the static friction coefficient (μs) was first taken as the highest peak at the beginning of the motion (e.g. around 4 seconds in Figure 2). After a zone containing the maximum peak (20 seconds in Figure 2), the dynamic friction domain can be considered as established. The dynamic friction coefficient (μk) can then be associated with the average of all the measured values. Moreover, the maximum values of the dynamic friction (peaks) and the minimum values (valleys) were measured in order to assess the effect of the test conditions on the shock phenomenon. The maximum and minimum friction coefficients are denoted μmaxi and μmini respectively. The mean and standard deviation (σ) of each are calculated. These measurements were only considered for friction tests in which the period was close to the length of the unit cell for the configurations 0°/0°, 0°/90° and 90°/90°. For 0°/45°, as it is difficult to distinguish the unit cell length in the signal, all the peaks and valleys were considered to calculate μmaxi and μmini.
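A possible post-processing sketch of the procedure just described (static coefficient from the initial peak, dynamic coefficient as the mean over the established domain, peak/valley statistics). The window boundaries and the peak-spacing parameter are illustrative defaults, not the values used in the study, and would have to be tuned to the actual signal and unit-cell period.

```python
import numpy as np
from scipy.signal import find_peaks

def friction_metrics(mu, time_s, t_static_end=5.0, t_dynamic_start=20.0,
                     peak_spacing_s=10.0):
    """Extract mu_s, mu_k and peak/valley statistics from a friction signal mu(t).

    t_static_end:    end of the initial window holding the static peak (s)
    t_dynamic_start: start of the established dynamic domain (s)
    peak_spacing_s:  minimum spacing between peaks, ~ unit-cell length / sliding speed (s)
    """
    mu, time_s = np.asarray(mu, dtype=float), np.asarray(time_s, dtype=float)
    dt = float(np.mean(np.diff(time_s)))
    min_dist = max(1, int(peak_spacing_s / dt))

    mu_s = mu[time_s <= t_static_end].max()          # static coefficient: initial peak
    dyn = mu[time_s >= t_dynamic_start]              # established dynamic domain
    mu_k = dyn.mean()                                # dynamic coefficient: mean value

    peaks, _ = find_peaks(dyn, distance=min_dist)    # shock peaks (mu_maxi)
    valleys, _ = find_peaks(-dyn, distance=min_dist)  # valleys (mu_mini)
    return {
        "mu_s": mu_s,
        "mu_k": mu_k,
        "mu_maxi": (dyn[peaks].mean(), dyn[peaks].std()),
        "mu_mini": (dyn[valleys].mean(), dyn[valleys].std()),
    }
```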
Effect of Normal Pressure
The first test parameter considered in this study was the normal pressure. Fabric/fabric friction experiments were conducted at five pressures: 3, 5, 10, 20 and 50 kPa. The results for the static friction coefficients (µs) and the mean values of the dynamic friction coefficients (µk) are presented in Table 1 and illustrated in Figure 5. It can be seen that the fabric/fabric static friction coefficients were higher than the dynamic coefficients in all the test configurations (Table 1) and had slightly higher standard deviations. This is a common observation in friction responses, which has also been noticed on textiles [START_REF] Ajayi | Effects of fabric structure on frictional properties[END_REF][START_REF] Das | A study on frictional characteristics of woven fabrics[END_REF]. Furthermore, the relative orientation of the two specimens has an effect on the frictional behaviour. In all cases, the static and dynamic friction coefficients were higher for the 0°/0° configuration than for the 90°/90° configuration (Figure 5, Figure 6 and Figure 7). The same trend has been observed on other fabric architectures, such as carbon interlock [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF], and can be attributed to the weaving effect (difference in crimp between the two yarn networks). As already mentioned, the fabric/fabric friction behaviour is governed by yarn/yarn friction and shocks between overhanging transverse yarns. The friction coefficient varied hugely (by up to a factor of two) because of the high tangential forces due to the second phenomenon, which dominates the global friction behaviour. According to the measurements obtained, the tangential reaction forces due to shocks between weft yarns (configuration 0°/0°) are higher than those between warp yarns (configuration 90°/90°). As the reinforcement is assumed to be balanced and the networks (weft and warp) are composed of the same yarns, this can be explained by the difference in crimp between the two networks resulting from the asymmetry of the weaving process. To confirm this, the crimp of the warp and weft yarns was measured according to the ASTM D3883-04 standard [START_REF]and Yarn Take-up in Woven Fabrics[END_REF]. The crimp obtained for the warp and weft yarns was 0.35% and 0.43% respectively, confirming that a higher crimp leads to higher tangential reaction forces due to yarn shocks. The increase in crimp results in a higher overhang of the more crimped network yarns and thus in an increase in the friction coefficient. This conclusion is in good agreement with the study by Ajayi [START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: A contact mechanics model of tow-metal friction[END_REF], which showed that an increase in the weft yarn density generated an increase in the frictional resistance of the textile.
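For reference, crimp is commonly expressed as the extra length of the straightened yarn relative to its extent in the fabric; a minimal sketch of that calculation is given below. The formula follows the usual textile definition and the sample lengths are hypothetical, not measurements from this study.

```python
def crimp_percent(straightened_yarn_length_mm, length_in_fabric_mm):
    """Crimp (%) = (straightened yarn length - length in fabric) / length in fabric * 100."""
    return 100.0 * (straightened_yarn_length_mm - length_in_fabric_mm) / length_in_fabric_mm

# Hypothetical example: a 250 mm strip from which the extracted yarn straightens to 251.1 mm
print(f"crimp = {crimp_percent(251.1, 250.0):.2f} %")  # ~0.44 %, same order as the measured weft crimp
```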
For the 0°/90° configuration, transverse yarns are warp yarns (with high overhang value) for the bottom sample and weft yarns (with a low overhang value) for the upper sample. The shock phenomenon occurs between warp yarns that have high crimp and weft yarns with low crimp.
As a result, the tangential forces obtained in this configuration and thus the friction coefficients remain in between those obtained in the 0°/0° and 90°/90° configurations (figure 5, figure 6 and figure 7). There are still some points for which this trend was not confirmed (e.g. 3 kPa and 5 kPa), especially for maximum and minimum friction coefficients (figure 7), which may be due to the imperfect superimposition of the two samples.
As expected, the lowest friction coefficients were obtained for the configuration 0°/45° (see figures 5, 6 and tables 1 and 2). The measured signal of the tangential force was smoother than in the other configurations (figure 8). The amplitude of the signal was very weak (figures 7 and 8), which means that the shocks between the overhanging yarns were not severe. In fact, shocks occurred between networks of yarns, one of which was oriented at 45°, which led to a very small instantaneous lateral contact width between yarns. As a result, the yarn/yarn friction in this configuration governed the frictional behaviour and consequently the measured dynamic coefficient was closer to that of the yarn/yarn at 0°/90° (see section 3.2). When the normal pressure increased, the static and dynamic friction coefficients decreased (figure 5 and figure 6). This observation is in agreement with the results obtained for fabric/metal tests [START_REF] Sachs | Characterization of the dynamic friction of woven fabrics: Experimental methods and Benchmark results[END_REF]. For the four configurations (0°/0°, 0°/90°, 90°/90° and 0°/45°), the friction coefficients significantly decreased in the pressure range of 3-20 kPa before converging to a steady value whatever the test configuration (figure 5, figure 6 and figure 7). The only singular point is the friction behaviour at 0°/90°, for which the maximum coefficient (μ_max) measured at a pressure of 5 kPa was higher than that measured at 3 kPa. This has an impact on the static and dynamic coefficient values, which show the same trend. This can be attributed to the low pressure. At this level of pressure (3 kPa), the shock phenomenon is not the predominant one and the behaviour is dominated by yarn/yarn friction. Thus, at this level of pressure, the friction behaviour tends closer to that of the 90°/90° configuration. This point will be addressed in future work.
Beyond a pressure of 20 kPa, the fabric/fabric friction behaviours levelled off. Thus, the static (µs) and dynamic (µk) coefficients reached stabilized values with a low standard deviation, which denotes the good reproducibility of these behaviours (Table 1). The maximum decrease was obtained for the 0°/0° configuration (around 30% for the static and dynamic coefficients), while in the 0°/45° configuration only a 7% and an 11% decrease were observed for µs and µk, respectively.
This decrease in the friction coefficients can be attributed to the effect of fabric compaction and its consequences at the mesoscopic and macroscopic levels. When the pressure increases, the upper sample exerts a greater compaction on the lower one. This leads to a high transverse compression strain of the fabric, inducing at the mesoscopic level a reduction of the yarn's overhang height and a spreading of the yarns. At the macroscopic level, the reduction in the thickness of the reinforcement can potentially lead to a lateral spreading of the fabric in the in-plane directions, and therefore a decrease in crimp. As the crimp at these stress states decreases, the texture ("roughness") of the fabric related to its meso-architecture decreases, and the contact area increases. However, the real contact area between the two samples, which drives the effective pressure, is difficult to quantify for fabric/fabric friction tests (due to nesting, for example), in contrast to fabric/metal [START_REF] Cornelissen | Dry friction characterisation of carbon fibre tow and satin weave fabric for composite applications[END_REF].
The reduction in yarn's overhang height due to compaction leads to the decrease of the tangential reaction forces between the transverse yarns of the samples. As a result, the maximum friction coefficient (µ max ), measured on the peaks of the curves, decreased (figure 7).
In addition, the yarn spreading associated with the decrease in their overhang height generated a lower fabric "roughness". Therefore, the tangential reaction force signal, due to this new "roughness", was also smoother, as observed at the 50 kPa pressure, while below 20 kPa the rises and falls in force at each peak were more pronounced and abrupt (figure 9). It can also be seen that the maximum friction coefficient µ_max continued to decrease slightly and was not completely stable at 50 kPa (figure 7), except for the 0°/45° configuration, where the shock phenomenon has a weak effect. Stabilization would probably occur at almost no meso-roughness, which might be achieved at very high pressures not encountered during the composite shaping process.
The minimum friction coefficient (µ min ) measured in the curve valleys, which is attributed to yarn/yarn friction [START_REF] Allaoui | Influence of the dry woven fabrics meso-structure on fabric/fabric contact behaviour[END_REF], was not affected by this phenomenon and its evolution followed the same trend as the global friction behaviour (figure 7).
Effect of Velocity
In order to evaluate the effect of the sliding velocity on the fabric/fabric frictional behaviour, tests were conducted at four velocities: 0.1, 1, 10 and 50 mm/s. These tests were performed at a pressure of 35 kPa, since the friction behaviour versus pressure has stabilized at this level. It was previously observed that the friction coefficients at 0°/90° were situated between those of the 0°/0° and 90°/90° configurations and generally close to the 0°/0° configuration. For this reason, only the 0°/0°, 0°/45° and 90°/90° configurations were tested here. The static and dynamic friction coefficients obtained are summarized in table 2. The coefficients were measured with a good reproducibility (maximum deviation of 12% for the dynamic coefficient).
The friction coefficients changed slightly regardless of the configuration tested. The static friction coefficient increased from 0.427 to 0.503 when the velocity increased from 0.1 to 50 mm/s in the 0°/0° orientation, an increase of approximately 17% (figure 10). On the other hand, the 90°/90° configuration was less sensitive to the velocity, with a maximum decrease of 9% at 50 mm/s compared to 0.1 mm/s, which remains in the order of magnitude of the experimental deviations. Once again, the 0°/0° configuration was more sensitive to the test parameters, which is likely due to its higher crimp and therefore the predominance of the shock phenomenon.
The same trend was observed for the dynamic friction coefficients, but to a lesser extent. The coefficients increased by 9% to 14% for all the configurations (figure 10). This slight increase in the dynamic friction is essentially due to the contribution of the minimum friction coefficient (µ_min), while the maximum (µ_max) decreased, as can be observed in figure 11.
Consequently, increasing the speed has more influence on the amplitude variation than on the average value. Increasing the speed leads to an increase in the frequency of shocks, which has two consequences: the kinetic energy of the yarns (of the lower sample) enables them to pass over the overhanging transverse yarns in a shorter time with a lower force, so the maximum friction coefficient decreases; this also causes an up and down movement of the upper sample that can be described as a stick-slip phenomenon.
Between two shocks, a steady sliding between yarns does not exist; consequently, the tangential force does not reach the stabilized value tending towards the friction coefficient of the yarns. Accordingly, the minimum coefficient increases.
For the 0°/45° configuration, both coefficients (µ_max and µ_min) increased. As discussed previously, the values obtained in this configuration are given for information only and are not related to the meso-architecture, in which case the values would have been different. Therefore, they cannot be used for comparison with the other configurations.
To summarize, as the velocity increases, the upper sample does not strictly follow the irregularities of the lower sample, which leads to a decrease in the signal variation, just as if the irregularities were lower. It can be concluded that at low velocities (0.1 to 1 mm/s) the dynamic friction coefficient can be considered as almost constant whatever the orientation, while beyond a velocity of 1 mm/s its evolution as a function of the velocity should be taken into account.
YARN/YARN FRICTION BEHAVIOUR
The second aim of this study was to determine the effect of the test parameters on the friction behaviour at the mesoscopic level. Yarn/yarn friction tests were therefore conducted, varying the pressure and the velocity. Only the 0°/0° (parallel case) and 0°/90° (perpendicular case) configurations were performed. Recall that the 0° orientation means that the yarns are oriented in the direction of the stroke and 90° in the transverse direction.
Effect of Normal Pressure
As in the fabric tests, yarn/yarn friction tests were conducted by varying the normal pressure at 1 mm/s. The results for the static (μs) and dynamic (μk) friction coefficients are summarized in table 3 and illustrated in figure 12 and figure 13. As expected, the static friction coefficients and their standard deviations were higher than those of the dynamic coefficients. Moreover, the friction values were greater at 0°/0° than at 0°/90°, which is in good agreement with previous studies carried out on carbon, aramid and glass yarns [START_REF] Vidal-Sallé | Friction Measurement on Dry Fabric for Forming Simulation of Composite Reinforcement[END_REF][START_REF] Cornelissen | Frictional behaviour of high performance fibrous tows: Friction experiments[END_REF].
The friction behaviour for the perpendicular configuration is mainly controlled by inter-fibre friction, while for the parallel case other phenomena are involved, among which: fibre bending, fibre reorganisation in the yarns, transverse compression, fibre damage, intermingling of fibres between yarns, etc. These phenomena are promoted by the spinning process, because fibres do not remain straight and some of them are damaged, as can be seen in figure 14. This generates reaction forces during the tests that lead to an increase in the friction forces. When the pressure increased, the static and dynamic coefficients decreased before reaching a plateau beyond 10 kPa. The decrease was larger in the parallel case (0°/0° configuration) than in the perpendicular one (0°/90°). At high pressure (50 kPa), the friction increases because the higher compression rate generates more fibre damage. This is illustrated in Figure 15, where the friction coefficient increases again after stabilization (beyond 40 s).
Observations performed using a microscope on this sample after the test showed a large number of broken fibres.
Effect of Velocity
The friction tests according to velocity were performed under 35 kPa as for the fabric/fabric tests. The results are summarized in table 4 and illustrated in figure 16. Once again, it was observed that the static friction coefficient was higher than the dynamic one.
As for fabric/fabric, the tow/tow friction behaviour remained unchanged in the 0.1-1 mm/s velocity range whatever the relative position of the samples. The friction then remained constant for the parallel yarns (0°/0°), while for the perpendicular (0°/90°) case an increase of more than 43% in the friction coefficient values was observed with increasing velocity. It can be concluded that while the velocity does not affect the phenomena (intermingling of fibres between yarns, fibre reorganisation in the yarns, bending, etc.) controlling inter-tow friction for parallel yarns, it significantly affects the response for the 0°/90° configuration, which is mainly controlled by inter-fibre friction. This trend is very different from the one observed on natural cotton fibres, which are more sensitive to pressure than to speed [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF][START_REF] Nowrouzieh | Inter fiber frictional model[END_REF]. The differences between these fibres and those used in the present study (glass) are mainly related to their mechanical behaviour (brittle vs ductile), their compressibility and their roughness. These characteristics are highly correlated with the friction coefficient [START_REF] Nowrouzieh | The investigation of frictional coefficient for different cotton varieties[END_REF]. As a conclusion, the fibre material significantly influences the effect of the test conditions on the microscopic friction behaviour (fibre/fibre), which results in the same effect at the mesoscopic level (yarn/yarn).
CONCLUSIONS
This study has highlighted the influence of test conditions on the frictional behaviour of dry reinforcements at the mesoscopic and macroscopic scales. It was found that the friction behaviour depends strongly on the relative orientation of the samples. Furthermore, experimental tests performed at the macroscopic level (fabric/fabric) showed that, for a given yarn, the friction coefficient is highly related to the yarn crimp because of the shock phenomenon occurring between transverse overhanging yarns. Friction coefficients decrease when the normal pressure increases until reaching steady values, which are almost identical whatever the relative orientation of the specimens. The greater decrease observed for the 0°/0° configuration can be attributed to the effect of the fabric compaction and its consequences on the yarn and fabric structure, i.e. a decrease in the yarn's overhang height and a spreading of the yarns leading to the decrease in crimp.
Velocity has the opposite effect on fabric/fabric friction since the coefficients increase with the velocity. The static friction coefficient and the 0°/0° configuration are more sensitive to this parameter. The dynamic friction coefficient remains almost unchanged at low velocities whatever the relative orientation of the two samples while it increases slightly for high speeds.
This increase is due to the contribution of the minimum friction coefficient (µ_min), which increases because the high frequency between two shocks does not allow the tangential force to stabilize at values tending towards the friction coefficient between yarns. However, the main effect of high speeds is a clear decrease in the amplitude variation of the friction response.
At the mesoscopic level, the results show the same trend as for macroscopic friction as a function of the test parameters. The parallel configuration 0°/0° is more sensitive to pressure while the 0°/90° is more influenced by velocity. This is due to the fact that friction behaviour in the perpendicular configuration is mainly controlled by inter-fibre friction while for the parallel case, other phenomena promoted by the spinning process are involved. It has been shown that the material constituting the fibres mainly influences the effect of the test conditions on the microscopic friction behaviour (fibre/fibre) which results in the same effect at the mesoscopic level (yarn/yarn).
We can conclude that at each scale, the frictional behaviour of the material studied here, which is heterogeneous and multiscale (micro-meso-macro), is governed by friction but is also significantly influenced by the structure of the lower scale. These structures (meso, micro) reorganise when test conditions such as pressure are varied, which leads to a variation in the friction behaviour. Thus, even if the same trends of the effect of test conditions are observed at different scales (meso, macro), they are caused by different mechanisms which are due to the structural reorganization at the lower scale.
Figure 1. Material and equipment of the study
Figure 4. Relative positioning of the samples
Figure 16. Evolution of yarn/yarn friction coefficients according to testing velocity at a pressure of 35 kPa. The error bars represent the standard deviations
Table 1. Fabric/fabric frictional characteristics as a function of the normal pressure at 1 mm/s.
Orientation Normal pressure (kPa) µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
3 0.5590 0.1461 26.13 0.4074 0.0155 3.80
5 0.5224 0.0690 13.21 0.3499 0.0148 4.22
0°/0° 10 0.4543 0.0442 9.72 0.3092 0.0131 4.23
20 0.3911 0.0276 7.06 0.2810 0.0162 5.78
50 0.3915 0.0339 8.65 0.2698 0.0120 4.45
3 0.4041 0.0467 11.55 0.3315 0.0198 5.98
5 0.5012 0.0878 17.51 0.3401 0.0461 13.54
0°/90° 10 0.4327 0.0375 8.66 0.2931 0.0067 2.30
20 0.3566 0.0142 3.97 0.2789 0.0396 14.20
50 0.3558 0.0320 9.00 0.2656 0.0107 4.02
3 0.3950 0.0210 5.33 0.3123 0.0223 7.12
5 0.3928 0.0705 17.94 0.2982 0.0030 1.02
90°/90° 10 0.3479 0.0287 8.25 0.2635 0.0167 6.34
20 0.3504 0.0158 4.51 0.2678 0.0042 1.58
50 0.3661 0.0354 9.67 0.2625 0.0062 2.35
3 0.2256 0.0320 14.18 0.1799 0.0070 3.90
5 0.2093 0.0133 6.35 0.1818 0.0159 8.75
0°/45° 10 0.1985 0.0162 8.14 0.1737 0.0039 2.27
20 0.2014 0.0040 1.99 0.1602 0.0019 1.17
50 0.2102 0.0073 3.49 0.1605 0.0012 0.75
Table 2. Experimental friction coefficients of fabric/fabric at 35 kPa according to velocity
Orientation Velocity (mm/s) Log [velocity] µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
0°/0° 0.1 -1.0 0.4274 0.0541 12.65 0.2846 0.0269 9.46
1.0 0.0 0.4275 0.0484 11.31 0.2760 0.0032 1.17
10.0 1.0 0.4338 0.0424 9.78 0.2928 0.0031 1.07
50.0 1.7 0.5031 0.0289 5.75 0.3098 0.0149 4.82
90°/90° 0.1 -1.0 0.3993 0.0130 3.25 0.2623 0.0136 5.20
1.0 0.0 0.4186 0.0358 8.55 0.2746 0.0084 3.06
10.0 1.0 0.3809 0.0394 10.33 0.2802 0.0057 2.03
50.0 1.7 0.3652 0.0089 2.42 0.2950 0.0185 6.26
0.1 -1.0 0.2072 0.0095 4.57 0.1687 0.0082 4.8858
0°/45° 1.0 0.0 0.2034 0.0009 0.43 0.1593 0.0074 4.6386
10.0 1.0 0.2267 0.0245 10.79 0.1640 0.0042 2.5559
50.0 1.7 0.2383 0.0146 6.12 0.1925 0.0025 1.2873
Table 3. Yarn/yarn frictional characteristics as a function of normal pressure at 1 mm/s
Orientation Normal pressure (kPa) µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
3 0.4037 0.1766 43.74 0.3188 0.0126 3.96
5 0.3478 0.0751 21.59 0.2831 0.0084 2.96
0°/0° 10 0.2670 0.0058 2.16 0.2584 0.0068 2.64
20 0.3336 0.0442 13.26 0.2822 0.0090 3.18
50 0.3332 0.0122 3.67 0.3055 0.0059 1.93
0°/90° 3 0.2165 0.0164 7.57 0.1867 0.0091 4.88
5 0.2186 0.0059 2.71 0.1913 0.0051 2.65
Table 4. Experimental friction coefficients of yarn/yarn at 35 kPa according to velocity
Orientation Velocity (mm/s) Log [velocity] µs Standard deviation [σ] [σ/µs]*100 (%) µk Standard deviation [σ] [σ/µk]*100 (%)
0.10 -1.0 0.3471 0.0017 0.49 0.2963 0.0412 13.89
0°/0° 1.00 0.0 0.3406 0.0500 14.67 0.2914 0.0224 7.69
10.00 1.0 0.3448 0.0320 9.28 0.2947 0.0104 3.53
50.00 1.7 0.3664 0.0564 15.39 0.2997 0.0063 2.10
0.1 -1.0 0.1757 0.0002 0.13 0.1563 0.0068 4.37
0°/90° 1.0 0.0 0.1825 0.0068 3.71 0.1739 0.0048 2.76
10.0 1.0 0.2163 0.0075 3.45 0.1829 0.0026 1.43
50.0 1.7 0.2458 0.0143 5.82 0.2243 0.0214 9.52
Acknowledgments
The research leading to these results received funding from the Mexican National Council of Science and Technology (CONACyT) under grant no I0010-2014-01. |
01763203 | en | [
"phys.meca.biom"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763203/file/3DAHM_HAERING.pdf | D Haering
C Pontonnier
G Nicolas
N Bideau
G Dumont
3D Analysis of Human Movement Science 2018
Introduction
In industrialized countries, musculoskeletal disorders (MSD) represent 80% to 90% of work-related disorders. Ulnar nerve entrapment (UNE) and epicondylitis are the most common elbow MSDs among manual workers (1,2). UNE and epicondylitis are associated with maintained elbow flexion, or near-maximal elbow extension coupled with large loads (3). Awkward shoulder postures while using the elbow increase MSD risk (4). The articular mechanical load can be estimated with respect to the maximal isometric torque obtained from dynamometric measurements [START_REF] Haering | Proc XXVI Congress ISB[END_REF]. Most studies focusing on elbow ergonomics considered tasks executed in a single, usually natural, shoulder configuration. A comparison between elbow isometric torque characteristics in natural and awkward shoulder configurations could help reduce the risk of elbow MSDs.
Research Question
The study highlights differences in elbow isometric torque characteristics when varying shoulder configurations and implications for ergonomics.
Methods
Dynamometric measurements and personalized torque-angle modelling were performed on a worker population to define elbow isometric torque characteristics during natural or awkward manual tasks.
Twenty-five middle-aged workers (33±6 years, 1.80±.07 m, 79±8 kg) participated in our study. One classical and five awkward shoulder configurations were tested: flexion 0° with external rotation (F0ER), 90° flexion with external rotation (F90ER), 180° flexion with external rotation (F180ER), 90° abduction with external rotation (A90ER), 90° abduction with internal rotation (A90IR), and 90° flexion with internal rotation (F90IR) (Fig. 1). Dynamometric measurements consisted in static calibration, submaximal concentric and eccentric warm-up, and isometric trials. Trials included 5 isometric contractions maintained for 5 s in flexion and extension evenly distributed through the angular range of movement of the participants.
A quadratic torque-angle model (6) was used to fit the isometric torque measurements, where the model parameters, namely the peak isometric torque Γ_max, the maximal range of motion RoM, and the optimal angle θ_opt, were optimized. The optimal isometric torque for awkward shoulder configurations was compared to the natural configuration in terms of: optimal model parameters (one-way repeated measures ANOVA), torque magnitude M and angle phase P (7).
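As an illustration of this fitting step, the sketch below adjusts a quadratic torque-angle relation to the five isometric measurements of one participant and configuration. The parameterization (peak torque at θ_opt, torque vanishing at θ_opt ± RoM/2) and all names are assumptions made for the example; the exact model of (6) may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def quad_torque(theta, gamma_max, theta_opt, rom):
    """Hypothetical quadratic torque-angle model: peak torque gamma_max at
    theta_opt, vanishing at theta_opt +/- rom/2."""
    return gamma_max * (1.0 - ((theta - theta_opt) / (rom / 2.0)) ** 2)

def fit_torque_angle(theta_deg, torque_nm):
    """Least-squares fit of the three model parameters to the isometric
    measurements of one participant/configuration."""
    p0 = [torque_nm.max(), theta_deg[np.argmax(torque_nm)],
          theta_deg.max() - theta_deg.min()]
    popt, _ = curve_fit(quad_torque, theta_deg, torque_nm, p0=p0)
    return dict(zip(["gamma_max", "theta_opt", "rom"], popt))
```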
Results
Significant effects of shoulder configuration on elbow peak isometric torque are shown (p<.01). In flexion, F0ER displays a larger Γ_max than F90ER and F180ER. In extension, F90ER shares the highest Γ_max with F0ER, both larger than F180ER (Table 1). Magnitude analysis also reveals that the maximal isometric torque over the full range of motion is overall the largest for A90IR in flexion or F0ER in extension.
No significant differences are found for the maximal range of motion RoM.
An effect of shoulder configuration on the elbow optimal angle θ_opt is found (p<.01). F0ER, F90ER and F180ER display the smallest optimal angles (closest to the anatomical reference) in flexion. Inversely, F0ER and A90IR show the largest θ_opt in extension. Phase analysis shows similar correspondences.
Table 1. Awkward versus natural shoulder configuration in terms of average isometric torque parameters (Γ_max [N.m], RoM [°], θ_opt [°]), torque magnitude M [%] and angle phase P [%], per torque direction (flexion, extension) and shoulder configuration.
Discussion
Peak isometric torque and magnitude results give a clear idea of elbow torque available for all shoulder configurations. Results confirm that natural position (F0ER) allows good compromise between peak torque and torque magnitude. For flexion, F90ER and F180ER appear as weakest configurations. Except for F90ER in elbow extension tasks, shoulder flexion should be minimized in strenuous working tasks. Those results agree with common ergonomic recommendations in terms of posture [START_REF] Mcatamney | [END_REF]9).
In flexion, F0ER with smaller optimal angle could also help reduce UNE occurrence by favoring tasks with less flexed elbow. Similarly in extension, F0ER and A90IR could help reduce epicondylitis by favoring less extended elbow work.
While A90IR appears as good alternate on torque and angle criterion, visibility issue might interfere.
Figure 1. Natural and awkward shoulder configurations tested for elbow torque on dynamometer. |
01763313 | en | [
"spi.mat",
"spi.meca",
"spi.meca.mema",
"spi.meca.solid"
] | 2024/03/05 22:32:13 | 2013 | https://hal.science/hal-01763313/file/cfm2013-YXIN7436-RR.pdf | R Moulart
R Rotinat
Elasto-plastic strain field measurement at micro-scale on 316L stainless steel
Keywords: méthode de grille, échelle micrométrique, élasto-plasticité, acier 316L grid method, micrometric scale, elasto-plastic behavior, stainless steel
Cette étude traite de la caractérisation du comportement mécanique d'un acier inoxydable austénitique (316L) à l'échelle micrométrique. Pour cela, la méthode de grille a été adaptée à cette échelle. Les grilles de pas d'environ 5 µm sont obtenues par photolithographie interférentielle sur une résine photosensible préalablement déposée à la surface de l'échantillon. L'éprouvette est soumise à une traction par paliers de chargement permettant ainsi la numérisation des grilles à l'aide d'un microscope interférométrique en lumière blanche pour chaque pas de chargement. Ces images ont ensuite été traitées pour en extraire le champ de déformations plan (résolution inférieure à 2×10 -3 pour une résolution spatiale de 20 µm). Ces déformations ont alors rendu compte de localisations des déformations compatibles avec les frontières des grains. Une tentative de détection de l'initiation de la plasticité locale a ensuite été mise en oeuvre : à partir des courbes contraintes-déformations moyennes par grain, il est possible de définir une valeur de limite élastique σ Y (à 0,02 % par exemple) locale. Il a alors été constaté que ces valeurs de σ Y locales ainsi déterminées peuvent varier de +/-20 MPa autour de la valeur macroscopique (180 MPa).
Introduction
To be able to precisely predict the global mechanical behaviour of a material, it is necessary to understand the physical phenomena taking place at the scale of its heterogeneities. Indeed, once the local mechanical behaviour is known, it is possible to deduce the overall properties via homogenisation schemes [START_REF] Qu | Fundamentals of Micromechanics of Solids[END_REF]. However, the experimental determination of the local behaviour still remains challenging as the classical experimental procedures are not robust anymore at a reduced scale. Consequently, the development of quantitative displacement and strain field measurement techniques at this scale is an important field that very much remains an open problem. Only a few papers can be found in the literature. Some works studied the plastic behaviour of an aluminium alloy under tensile test from moiré interferometry [START_REF] Nicoletto | On the visualization of heterogeneous plastic strains by Moiré interferometry[END_REF][START_REF] Guo | Study on deformation of polycrystalline aluminum alloy using moiré interferometry[END_REF] with a region of interest ranging from a few square millimetres to centimetres. The author managed to put in evidence the heterogeneous nature of the plastic strains due to the microstructure of the sample. More recently the digital image correlation (DIC) technique is also applied to study titanium alloy [START_REF] Lagattu | In-plane strain measurements on a microscopic scale by coupling digital image correlation and an in situ SEM technique[END_REF] with SEM images. Another interesting studies couple full-field strain measurements (using DIC with a scanning electronic microscope) with grain orientations through EBSD [START_REF] Héripré | Coupling between experimental measurements and polycrystal finite element calculations for micromechanical study of metallic materials[END_REF][START_REF] Padilla | Relating inhomogeneous deformation to local texture in zirconium through grain-scale digital image correlation strain mapping experiments[END_REF].
The present paper uses an original alternative to the above technique, with the hope to reach the right compromise between resolution and spatial resolution to study the transition between elastic and local microplastic behaviour. In this context, this work is aimed at characterising an austenitic stainless steel sample (FCC crystal system) under tensile loading at the scale of its grains in its elasto-plastic domain. This is done by applying the experimental procedure initially developed in the study of a ferritic steel [START_REF] Moulart | On the realization of microscopic grids for local strain measurement by direct interferometric photolithography[END_REF][START_REF] Moulart | Full-field evaluation of the onset of microplasticity in a steel specimen[END_REF]. This methodology is shortly recalled before introducing the results on the stainless steel sample.
Methodology
The micro-extensometric method used here has been developed previously [START_REF] Moulart | On the realization of microscopic grids for local strain measurement by direct interferometric photolithography[END_REF][START_REF] Moulart | Full-field evaluation of the onset of microplasticity in a steel specimen[END_REF]. It is based on the so-called "grid method" [START_REF] Surrel | Moiré and grid methods: a signal-processing approach[END_REF], which consists in analyzing the deformation of a periodic pattern deposited onto the surface under study, using a spatial phase-stepping algorithm adapted to the micrometric scale. The overall procedure is schematically summarized in figure 1 and recalled hereafter.
FIG. 1 - Schematic summary of the experimental micro-scale full-field strain measurement procedure
First of all, the samples are prepared: they are cut from a laminated plate of the studied stainless steel, then submitted to a recrystallization tempering heat treatment (to get bigger and equiaxed grains) and finally polished.
Cross-gratings (pitch: 5 µm) are then produced at the surface of the specimen by direct interferometric photolithography. This technique consists in recording the parallel interference fringes of two collimated laser beams on a photoresist layer spread onto the surface under study. Thus, the pitch can easily be adjusted, depending only on the wavelength of the light and the angle between the two incident beams interfering on the sample. In order to obtain cross-gratings (so as to have access to both components of the in-plane displacement), the process is simply repeated after rotating the sample by 90° (first step in figure 1). The deformation of the photoresist is assumed to follow exactly that of the underlying substrate.
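For two collimated beams crossing at an angle 2θ, the classical fringe spacing is p = λ/(2 sin θ), which is how the pitch is tuned. The short sketch below illustrates the order of magnitude; the 532 nm wavelength and the 3° half-angle are assumed values, not taken from the experimental set-up described here.

```python
import numpy as np

def fringe_pitch(wavelength_m, half_angle_rad):
    """Fringe spacing of two-beam interference: p = lambda / (2 sin(theta)),
    where 2*theta is the angle between the interfering beams."""
    return wavelength_m / (2.0 * np.sin(half_angle_rad))

# Example: a 532 nm laser (assumed value) and a ~3 degree half angle
# give a pitch of about 5 micrometres, the order of magnitude used here.
print(fringe_pitch(532e-9, np.deg2rad(3.05)) * 1e6, "um")
```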
Then, the samples can be submitted to a tensile test thanks to a small home-made mini tensile machine. Its kinematic chain has been designed to be symmetrical, with two mobile cross-heads, in order to avoid as much rigid body motion as possible in the middle part of the sample.
The profile images of the grating are then computed by the "grid method", based on a windowed spatial phase shifting algorithm [START_REF] Surrel | Moiré and grid methods: a signal-processing approach[END_REF]. Indeed, considering the sample grating as a periodic signal, it is possible to compute the phase value of this signal for each period, leading to the displacement maps [START_REF] Moulart | On the realization of microscopic grids for local strain measurement by direct interferometric photolithography[END_REF]. In this way, the spatial resolution of displacement is equal to the pitch of the grating These maps undergo a 2×2 "stitching" operation in order to enlarge the field of view (finally equal to 450×340 µm²).
From these fields and to get access to mechanical parameters, strain fields have to be calculated. For that, and considering that experimental data are noisy, it is necessary to use an appropriate smoothing procedure.
Here, it has been chosen to use a diffuse approximation (DA) scheme [START_REF] Devlin | Locally weighted regression: An approach to regression analysis by local fitting[END_REF][START_REF] Nayroles | La méthode des éléments diffus[END_REF]. DA is based on local weighted least-squares minimization using a polynomial diffuse approximation. It has shown that concerning the choice of the polynomial degree, the best compromise is 2 [START_REF] Feissel | Comparison of two approaches for differentiating full-field data in solid mechanics[END_REF].The DA approach is powerful technique which provides stable results but is also a time consuming technique. The strain resolution is deduced from the displacement resolution. For a full-field measurement method, the latter is often considered to be the standard deviation of the noise. By applying a smoothing/differentiation algorithm to the noise maps, it is possible to determine the strain resolution as the standard deviation of the generated ''noisy-strain" maps [START_REF] Pierron | Full-field assessment of the damage process of laminated composite open-hole tensile specimens. Part I: methodology[END_REF].
Finally, the observed microstructure of the region of interest is plotted on the strain maps to allow a comparison between it and the localizations of strain components. With this approach, it has been shown that the local strain resolution is inferior to 2×10 -3 for a spatial resolution of 20 µm [START_REF] Moulart | Full-field evaluation of the onset of microplasticity in a steel specimen[END_REF]. It has to be compared to the average size of a grain (≈ 100×100 µm²).
Results and discussion
The mechanical test performed on the stainless steel sample has consisted in a 52 load steps tensile test. Fig. 2 shows the stress-strain curves obtained from both a gauge glued onto the back side of the sample and an averaging of the ε xx maps on the whole surface of the region of interest. It also shows the three in-plane strain maps for the last stage of loading, once the material has reached its overall elasto-plastic domain.
FIG. 2 -Micro-extensometric results for the last step of loading
From this figure, two main remarks can be made:
The two stress-strain curves are quite different. This can be due to two main causes. The first possible cause is that some bending moment was superimposed on the tensile load during the test, leading to different longitudinal strain components on the two sides of the sample. The second possible cause is that the size of the observed region is smaller than that of the representative volume (which is possible considering the total number of grains).
The different strain maps for the last step of loading show localizations that are in accordance with the positions of the grain and twin boundaries. It has to be noted that on the micrograph made after the test on the sample, some slip lines can be seen on some of the regions where ε xx is maximal. This reinforces the confidence and relevancy of the obtained strain maps.
One way to exploit these full-field data in order to characterize the onset of micro-plasticity is to perform an averaging of the strain values over the surface of a single grain or a single twin (considered here as the same kind of "unitary cell" of the microstructure). The averaging has two main advantages:
it gives access to a single representative value for the whole grain;
on a metrological point of view, the averaging increases the signal to noise ratio.
This allows plotting a stress-strain curve per grain. Figure 3 shows such curves (grains 1 and 2 are grains that are submitted to large plastic deformations and show slip lines on the micrograph, whereas grains 3 and 4 are less deformed; see figure 2), to be compared to the one obtained by averaging over the whole surface of the region of interest.
These curves show that, for this stainless steel sample, the local strain can vary by a factor of 2.5 between a weakly solicited grain and a highly solicited one. From this observation, it can be asserted that the proposed technique should be able to give information about the local onset of micro-plasticity.
To this purpose, two semi-automatic procedures of determination of a local yield point have been implemented.
The first one is simply based on the transposition of the normalized calculation of the yield point to this scale: a value of the "0.02% yield stress" is thus proposed for the considered local regions. For that, a linear regression is made on the first strain/stress values (up to a stress value equal to 100 MPa, to be sure to stay in the elastic domain while having enough experimental values to get a relevant regression law). Then, the deduced straight line, shifted by a value of 0.02% in strain, is plotted. The intersection point between this line and the experimental curve gives the σ_Y 0.02% value (figure 4(a)). The second one consists in finding the intersection between the regression line previously obtained from the first set of values and a 4th-order polynomial curve computed by regression on the last stages of loading. This allows getting rid of the possible influence of the noise affecting the experimental data on the determination of σ_Y 0.02% by the previous method, and getting a value closer to the local loss of linearity (figure 4(b)). Thus, two values characterizing the onset of micro-plasticity are available. They, and the difference between them and the reference value obtained for the whole surface of the region of interest, are given in table 1.
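A possible implementation of the first procedure is sketched below; the 100 MPa elastic bound and the 0.02% offset follow the text, while the array names and the simple crossing detection are assumptions of the example.

```python
import numpy as np

def yield_stress_002(strain, stress, elastic_limit=100.0, offset=2e-4):
    """Sketch of the first procedure: a straight line is fitted on the
    points below `elastic_limit` (MPa), shifted by 0.02% in strain, and
    its first crossing with the experimental curve gives sigma_Y_0.02%."""
    m = stress <= elastic_limit
    slope, intercept = np.polyfit(strain[m], stress[m], 1)
    offset_line = slope * (strain - offset) + intercept
    # First point where the experimental curve falls below the offset line
    idx = np.argmax(stress < offset_line)
    return stress[idx]
```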
Conclusion
The exposed study, based on a full-filed micro-extensometric methodology, has been applied to a austenitic stainless steel sample submitted to a tensile test.
The proposed experimental procedure has confirmed its robustness and relevance to measure small local strains (strain resolution ≈ 2×10 -3 for a spatial resolution of about 20 µm). Moreover, it has allowed showing that the plastic domain starts earlier than the macroscopic yield point (which can be considered only as an average value of all the local yield points). This was known for a long time (especially when studying fatigue behaviour, where the permanent deformations appearing before the macroscopic yield stress is reached are decisive) but had hardly been experimentally measured.
Equipped with this method, a possible continuation to this work could be to combine it with thermographic observation to link the local measured plastic strains to the heat dissipation of a studied material and, thus, develop a tool to better characterize the endurance limit.
FIG. 3 - Average stress-strain curves for several grains or twins of the sample
FIG. 4 - The two ways to determine the local yield point
One can conclude from this table that grains 1 and 2, which are more solicited, enter their plastic domain earlier than the whole observed region (for a nominal tensile stress ≈ 20 MPa lower). On the contrary, grains 3 and 4, less solicited, enter their plastic domain later (for a nominal tensile stress ≈ 30 MPa higher). The difference between the two values of σ_Y 0.02% and σ_Y for grains 2 and 3 can be explained by the noisier experimental data for these grains.
Whole surface Grain 1 Grain 2 Grain 3 Grain 4 Average / 4 grains
σ Y 0.02% 180 MPa 164 MPa 164 MPa 211 MPa 204 MPa 185 MPa
Difference from
- -16 MPa -16 MPa 31 MPa 24 MPa 5 MPa
the whole surface
σ Y 145 MPa 122 MPa 105 MPa 195 MPa 171 MPa 148 MPa
Difference from
- -23 MPa -40 MPa 50 MPa 26 MPa 3 MPa
the whole surface
TAB. 1 - Values of local yield points for different grains of the sample |
01763320 | en | [
"phys.phys.phys-comp-ph",
"spi.signal"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763320/file/Time_Reversal_OFDM.pdf | Wafa Khrouf
email: wafa.khrouf@supcom.tn
Zeineb Hraiech
email: zeineb.hraiech@supcom.tn
Fatma Abdelkefi
email: fatma.abdelkefi@supcom.tn
Mohamed Siala
email: mohamed.siala@supcom.tn
Matthieu Crussière
email: matthieu.crussiere@insa-rennes.fr
On the Joint Use of Time Reversal and POPS-OFDM for 5G Systems
Keywords: Waveform Optimization, Waveform Design, Pulse Shaping, POPS-OFDM, Signal to Interference plus Noise Ratio (SINR), Time Reversal
This paper investigates the efficiency of the combination of the Ping-pong Optimized Pulse Shaping-Orthogonal Frequency Division Multiplexing (POPS-OFDM) algorithm with the Time Reversal (TR) technique. This algorithm optimizes the transmit and receive OFDM waveforms with a significant reduction in the system Inter-Carrier Interference (ICI)/Inter-Symbol Interference (ISI) and guarantees maximal Signal to Interference plus Noise Ratio (SINR) for realistic mobile radio channels in 5G Systems. To this end, we characterize the scattering function of the TR channel and we derive the closedform expression of the SINR as a Generalized Rayleigh Quotient. Numerical analysis reveals a significant gain in SINR and Out-Of-Band (OOB) emissions, brought by the proposed TR-POPS-OFDM approach.
I. INTRODUCTION
OFDM modulation has witnessed considerable interest from both academia and industry. This is due to many advantages, such as a low-complexity receiver, simplicity and an efficient equalization structure. Indeed, it has been adopted in various wired and wireless standards, such as ADSL, DVB-T, Wimax, WiFi, and LTE. Nevertheless, in its present form, OFDM presents several shortcomings and as such it is not capable of guaranteeing the quality of service of the new and innovative applications and services that will be brought by 5G systems. In fact, it has a high spectral leakage and it requires strict frequency synchronization because it uses a rectangular waveform in time, leading to significant sidelobes in frequency [START_REF] Yunzheng | A Survey: Several Technologies of Non-Orthogonal Transmission for 5G[END_REF]. As a consequence, any lack of perfect frequency synchronization, to be expected from most of the innovative Machine Type Communications (MTC) in 5G, causes significant Inter-Carrier Interference (ICI). In addition, a variety of services and new applications will be provided by 5G systems, such as high data-rate wireless connectivity, which requires large spectral and energy efficiency, and the Internet of Things (IoT), which requires robustness to time synchronization errors [START_REF] Luo | Signal Processing for 5G: Algorithms and Implementations[END_REF].
In order to overcome OFDM limitations and meet 5G requirements, various modulations have been suggested in the literature, such as Generalized Frequency Division Multiplexing (GFDM), Universal Filtered Multi-Carrier (UFMC) and Filter Bank Multi-Carrier (FBMC), which are proposed in the 5GNOW project [START_REF] Wunder | 5GNOW: Non-Orthogonal, Asynchronous Waveforms for Future Mobile Applications[END_REF]. It is shown in [START_REF] Wunder | 5GNOW: Non-Orthogonal, Asynchronous Waveforms for Future Mobile Applications[END_REF] that GFDM offers high flexibility for access to a fragmented spectrum and low Out-Of-Band (OOB) emissions. However, in contrast to UFMC, GFDM has a low robustness to frequency synchronization errors in the presence of Doppler spread. Moreover, like UFMC [START_REF] Schaich | Waveform Contenders for 5G -OFDM vs. FBMC vs. UFMC[END_REF], FBMC has a high spectral efficiency and a good robustness to ICI. Nevertheless, FBMC, because of its long shaping filters, cannot be used in the case of low latency, sporadic traffic and small data packet transmission. Furthermore, the authors in [START_REF] Siala | Novel Algorithms for Optimal Waveforms Design in Multicarrier Systems[END_REF] and [START_REF] Hraiech | POPS-OFDM: Ping-pong Optimized Pulse Shaping-OFDM for 5G Systems[END_REF] propose a new class of waveforms, namely POPS-OFDM, which iteratively maximizes the SINR in order to create optimal waveforms at the transmitter (Tx) and the receiver (Rx) sides. The obtained waveforms are well localized in the time and frequency domains and they are able to reduce the ISI and the ICI as they are not sensitive to time and frequency synchronization errors. Another alternative, which aims to reduce the interference in time caused by highly dispersive channels, is the time reversal (TR) technique, which has recently been proposed for wireless communications systems. Its time and space focusing properties make it an attractive candidate for green and multiuser communications [START_REF] Dubois | Performance of Time Reversal Precoding Technique for MISO-OFDM Systems[END_REF]. In fact, it reduces ISI at the Rx side and it mitigates the channel delay spread.
This paper aims to design new waveforms for 5G systems by combining the benefits of the POPS and TR techniques in terms of interference resilience. To this end, we analyze the corresponding system and derive the SINR expression. We also evaluate the performance of the proposed approach in terms of SINR and OOB emissions.
The remainder of this paper is organized as follows. Section II introduces the notations. In Section III, we present the system model. In Section IV, we focus on the derivation of the SINR expression for TR systems and describe the TR-POPS-OFDM algorithm for waveform design. Section V is dedicated to the illustration of the obtained optimization results and sheds light on the efficiency of the proposed TR-POPS-OFDM approach. Finally, Section VI presents the conclusion and perspectives of our work.
II. NOTATIONS
Boldface lower and upper case letters refer to vectors and matrices, respectively. The superscripts $(\cdot)^*$ and $(\cdot)^T$ denote the element-wise conjugation and the transpose of a vector or matrix, respectively. We denote by $\mathbf{v} = (\ldots, v_{-2}, v_{-1}, v_0, v_1, v_2, \ldots)^T = (v_q)_{q\in\mathbb{Z}} = (v_q)_q$ the infinite vector $\mathbf{v}$. In the last notation, $(v_q)_q$, where the set of values taken by $q$ is not explicitly specified, $q$ spans $\mathbb{Z}$.
Let $\mathbf{M} = (M_{pq})_{p\in\mathbb{Z},\, q\in\mathbb{Z}} = (M_{pq})_{pq}$ denote the infinite matrix $\mathbf{M}$. The matrix shift operator $\Sigma_k(\cdot)$ shifts all matrix entries by $k$ parallel to the main diagonal of the matrix, i.e. if $\mathbf{M} = (M_{pq})_{pq}$ is a matrix with $(p,q)$-th entry $M_{pq}$, then $\Sigma_k(\mathbf{M}) = (M_{p-k,q-k})_{pq}$. The symbol $\otimes$ is the convolution operator of two vectors and $\odot$ denotes the component-wise product of two vectors or matrices. We denote by $E$ the expectation operator and by $|\cdot|$ the absolute value.
III. SYSTEM MODEL
In this Section, we first present the TR principle. Then, we describe the channel and system models to which we will apply our approach.
A. Time Reversal Principle
The Time Reversal (TR) principle [START_REF] Fink | Time Reversal of Ultrasonic Fields -Part I: Basic Principles[END_REF], [START_REF] Lerosey | Time Reversal of Electromagnetic Waves[END_REF] comes from the acoustic research field and allows a wave to be localized in time and space. Such a technique can be exploited to separate users, addressed simultaneously on the same frequency band, by their different positions in space.
The use of TR in transmission systems has generated particular excitement, as it allows an ideal pulse in time and in space to be obtained from a channel with very high temporal dispersion. This property has several useful advantages in wireless communications, among which we cite the following:
• Negligible or null ISI, brought by a nearly "memoryless" equivalent channel. • Minimum inter-user interference thanks to spatial power localization, with negligible received power outside the focal spot targeted at a given Rx. • Physical-layer-secured data transmission towards a desired user, as other users located outside of the focal spot of the targeted user receive only little power. TR integration into a telecommunication system is very simple. It consists in applying a filter to the transmitted signal. We suppose that we have a perfect knowledge of the transmission channel and that it is invariant between the instants of its measurement and the application of the TR at the Tx side. This filter is made up of the Channel Impulse Response (CIR) reversed in time and conjugated. It has the form of a filter matched to the propagation channel, which guarantees optimal reception in terms of Signal to Noise Ratio (SNR). The transmitted signal then crosses an equivalent filter equal to the convolution between the channel and its time-reversed version.
B. Channel Model
We consider a Wide Sense Stationary Uncorrelated Scattering (WSSUS) channel in order to have more insight into the TR-POPS-OFDM performances in the general case. To simplify the derivations, we consider a discrete-time system. We denote by $T_s$ the sampling period and by $R_s = 1/T_s$ the sampling rate. We suppose that the channel is composed of $K$ paths and that the Tx has a perfect knowledge of the channel state at any time. Note that this hypothesis is realistic in the case of low Doppler spread. Let $\mathbf{h}^{(p)} = (h^{(p)}_0, h^{(p)}_1, \ldots, h^{(p)}_{K-1})^T$ be the discrete version of the channel at instant $p$, where $h^{(p)}_l = \sum_{m=0}^{M-1} h_{lm}\, e^{j2\pi \nu_{lm} p T_s}$ is the path corresponding to a delay $l T_s$, $M$ is the number of Doppler rays, and $h_{lm}$ and $\nu_{lm}$ denote respectively the amplitude and the Doppler frequency of the $l$-th path and the $m$-th Doppler ray. The ray amplitudes $h_{lm}$ are supposed to be centered, independent and identically distributed complex Gaussian variables with average powers $\pi_{lm} = E[|h_{lm}|^2]$. We denote $\pi_l = \sum_{m=0}^{M-1} \pi_{lm}$, with $\sum_{l=0}^{K-1} \pi_l = 1$.
The time-reversed version of the channel at instant $p$ can be written as:
$$\mathbf{g}^{(p)} = \left( h^{(p)*}_{K-1}, \ldots, h^{(p)*}_{1}, h^{(p)*}_{0} \right)^T. \quad (1)$$
When we apply the TR technique at the Tx in a Single Input Single Output (SISO) system, the equivalent channel experienced by the transmission at time instant $pT_s$ at the Rx can be seen as the convolution between the channel and its time-reversed version:
$$\mathbf{H}^{(p)} = \mathbf{h}^{(p)} \otimes \mathbf{g}^{(p)} = \left( H^{(p)}_{-(K-1)}, \ldots, H^{(p)}_{-1}, H^{(p)}_{0}, H^{(p)}_{1}, \ldots, H^{(p)}_{K-1} \right)^T, \quad (2)$$
where $H^{(p)}_k = \sum_{l=0}^{K-1-|k|} \sum_{m,m'=0}^{M-1} f(k,l,m,m')$, with
$$f(k,l,m,m') = \begin{cases} h^*_{lm}\, h_{l+k,m'}\, e^{j2\pi(\nu_{l+k,m'}-\nu_{lm})pT_s}, & \text{if } k \ge 0,\\ h_{lm}\, h^*_{l-k,m'}\, e^{-j2\pi(\nu_{l-k,m'}-\nu_{lm})pT_s}, & \text{otherwise.} \end{cases}$$
It should be noted that $H^{(p)}_{-k} = H^{(p)*}_{k}$, which means that the equivalent channel is Hermitian symmetric, and that the equivalent aggregate channel coefficients $H^{(p)}_k$ are still decorrelated, as in the actual channel.
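A short numerical sketch of this construction is given below: the TR prefilter is the conjugated, time-reversed CIR, and the equivalent channel obtained by convolution is Hermitian symmetric with its energy concentrated at lag zero. The 4-tap exponentially decaying channel is an arbitrary example, not the profile used later in the simulations.

```python
import numpy as np

def tr_equivalent_channel(h):
    """Time reversal at the Tx: the precoding filter is the conjugated,
    time-reversed CIR, and the equivalent channel is its convolution with
    the actual channel (Hermitian symmetric, peaked at lag 0)."""
    g = np.conj(h[::-1])                 # TR prefilter g^(p)
    H = np.convolve(h, g)                # equivalent channel H^(p), length 2K-1
    return g, H

# Example with an assumed 4-tap exponentially decaying random channel
rng = np.random.default_rng(0)
power = np.exp(-np.arange(4) / 2.0); power /= power.sum()
h = np.sqrt(power / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
g, H = tr_equivalent_channel(h)
print(np.abs(H))                         # maximum at the central tap (lag 0)
```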
C. OFDM System
In this paper, we consider a discrete time version of the waveforms to simplify the theoretical derivations that will be investigated.
Let $T$ and $F$ refer to the OFDM symbol duration and the frequency separation between two adjacent subcarriers, respectively. The sampling period is equal to $T_s = T/N$, where $N \in \mathbb{N}$. We denote by $\delta = \frac{1}{FT} = \frac{Q}{N}$ the time-frequency lattice density, where $Q = \frac{1}{T_s F} \le N$ is the number of subcarriers. We denote by $\mathbf{e} = (e_q)_q$ the sampled version of the transmitted signal at time $qT_s$, with a sampling rate $R_s = 1/T_s$, expressed as:
$$\mathbf{e} = \sum_{m,n} a_{mn}\, \boldsymbol{\varphi}_{mn}, \quad (3)$$
where $\boldsymbol{\varphi}_{mn} = (\varphi_{q-nN})_q \odot (e^{j2\pi mq/Q})_q$ is the time and frequency shifted version of the OFDM transmit prototype waveform $\boldsymbol{\varphi} = (\varphi_q)_q$, used to transmit the symbol $a_{mn}$. We suppose that the transmitted symbols are decorrelated, with zero mean and energy equal to $E = E[|a_{mn}|^2]\, \|\boldsymbol{\varphi}\|^2$.
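As an illustration of this signal model on a finite frame, the sketch below synthesizes $\mathbf{e}$ from a matrix of symbols and a prototype waveform shifted in time and frequency. The Gaussian prototype, the QPSK mapping and the frame length are assumptions of the example; only the lattice parameters Q = 128 and N = 144 are taken from the simulation section.

```python
import numpy as np

def pops_modulate(a, phi, N, Q):
    """Builds e = sum_{m,n} a[m, n] * phi_{mn}, where phi_{mn} is the
    prototype waveform phi shifted by n*N samples in time and modulated
    on subcarrier m (spacing 1/Q in normalized frequency)."""
    M, Nsym = a.shape                       # subcarriers x OFDM symbols
    L = len(phi)
    e = np.zeros((Nsym - 1) * N + L, dtype=complex)
    q = np.arange(L)
    for n in range(Nsym):
        for m in range(M):
            phi_mn = phi * np.exp(2j * np.pi * m * (q + n * N) / Q)
            e[n * N: n * N + L] += a[m, n] * phi_mn
    return e

# Example: QPSK symbols on Q = 128 subcarriers, N = 144 samples per symbol,
# with a (placeholder) Gaussian prototype waveform of duration 3T.
Q, N, Nsym = 128, 144, 8
rng = np.random.default_rng(1)
a = (rng.integers(0, 2, (Q, Nsym)) * 2 - 1
     + 1j * (rng.integers(0, 2, (Q, Nsym)) * 2 - 1)) / np.sqrt(2)
t = np.arange(3 * N) - 1.5 * N
phi = np.exp(-0.5 * (t / (0.4 * N)) ** 2); phi /= np.linalg.norm(phi)
e = pops_modulate(a, phi, N, Q)
```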
The received signal is expressed as:
$$\mathbf{r} = \sum_{m,n} a_{mn}\, \tilde{\boldsymbol{\varphi}}_{mn} + \mathbf{n}, \quad (4)$$
where $[\tilde{\boldsymbol{\varphi}}_{mn}]_q = \sum_{k=-(K-1)}^{K-1} H^{(q)}_k\, [\boldsymbol{\varphi}_{mn}]_{q-k}$ is the channel-distorted version of $\boldsymbol{\varphi}_{mn}$ and $\mathbf{n} = (n_q)_q$ is a discrete complex Additive White Gaussian Noise (AWGN), with zero mean and variance $N_0$.
The decision variable $\Lambda_{kl}$ on the transmitted symbol $a_{kl}$ is obtained by projecting $\mathbf{r}$ on the receive pulse $\boldsymbol{\psi}_{kl}$:
$$\Lambda_{kl} = \langle \boldsymbol{\psi}_{kl}, \mathbf{r} \rangle = \boldsymbol{\psi}_{kl}^H\, \mathbf{r}, \quad (5)$$
where $\boldsymbol{\psi}_{kl} = (\psi_{q-lN})_q \odot (e^{j2\pi kq/Q})_q$ is the time and frequency shifted version of the OFDM receive prototype waveform $\boldsymbol{\psi} = (\psi_q)_q$ and $\langle \cdot, \cdot \rangle$ is the Hermitian scalar product over the space of square-summable vectors.
IV. TR-POPS ALGORITHM
The main objective of this part is to optimize the waveforms at the Tx/Rx sides in our system based on TR technique. To this end, we adopt the POPS-OFDM principal [START_REF] Siala | Novel Algorithms for Optimal Waveforms Design in Multicarrier Systems[END_REF]. This algorithm consists in maximizing the SINR for fixed synchronization imperfections and propagation channel.
Without loss of generality, we will focus on the SINR evaluation for the symbol a 00 . Referring to (5), the decision variable on a 00 can be written as:
Λ 00 = a 00 ψ ψ ψ 00 , φ φ φ00 + (m,n) =(0,0)
a mn ψ ψ ψ 00 , φ φ φmn + ψ ψ ψ 00 , n and it is composed of three terms. The first term is the useful part, the second term is the ISI and the last term presents the noise term. Their respective powers represent useful signal, interference and noise powers in the SINR and which we derive their closed form expressions in the sequel. This SINR will be the same for all other transmitted symbols.
A. Average Useful, Interference and Noise Powers
The useful term is denoted $U_{00} = a_{00}\, \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{00} \rangle$. For a given realization of the channel, the average power of the useful term can be written as:
$$P^h_S = \frac{E}{\|\boldsymbol{\varphi}\|^2}\, \left| \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{00} \rangle \right|^2.$$
Thus, the useful power averaged over channel realizations is given by:
$$P_S = E\!\left[ P^h_S \right] = \frac{E}{\|\boldsymbol{\varphi}\|^2}\, E\!\left[ \left| \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{00} \rangle \right|^2 \right]. \quad (6)$$
The interference term, $I_{00} = \sum_{(m,n)\neq(0,0)} a_{mn}\, \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{mn} \rangle$, results from the contribution of all the other transmitted symbols $a_{mn}$, with $(m,n) \neq (0,0)$.
For a given realization of the channel, the average power of the interference term can be written as:
$$P^h_I = \frac{E}{\|\boldsymbol{\varphi}\|^2} \sum_{(m,n)\neq(0,0)} \left| \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{mn} \rangle \right|^2.$$
Therefore, the interference power averaged over channel realizations has the following expression:
$$P_I = E\!\left[ P^h_I \right] = \frac{E}{\|\boldsymbol{\varphi}\|^2} \sum_{(m,n)\neq(0,0)} E\!\left[ \left| \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{mn} \rangle \right|^2 \right], \quad (7)$$
where
$$E\!\left[ \left| \langle \boldsymbol{\psi}_{00}, \tilde{\boldsymbol{\varphi}}_{mn} \rangle \right|^2 \right] = \boldsymbol{\psi}^H\, E\!\left[ \tilde{\boldsymbol{\varphi}}_{mn} \tilde{\boldsymbol{\varphi}}_{mn}^H \right] \boldsymbol{\psi}. \quad (8)$$
In the sequel, we consider a diffuse scattering function in the frequency domain, with a classical Doppler spectral density, decoupled from the dispersion in the time domain. Hence,
$$E\!\left[ [\tilde{\boldsymbol{\varphi}}_{mn}]_p\, [\tilde{\boldsymbol{\varphi}}_{mn}]^*_q \right] = \sum_{k=-(K-1)}^{K-1} \Pi_k\, J_0^2\!\big(\pi B_d T_s (p-q)\big)\, [\boldsymbol{\varphi}_{mn}]_{p-k}\, [\boldsymbol{\varphi}_{mn}]^*_{q-k},$$
where $B_d$ is the Doppler spread, $J_0(\cdot)$ is the Bessel function of the first kind of order zero, and
$$\Pi_k = \begin{cases} \sum_{l=0}^{K-1} \pi_l^2 + \left( \sum_{l=0}^{K-1} \pi_l \right)^2, & \text{if } k = 0,\\ \sum_{l=0}^{K-1-|k|} \pi_l\, \pi_{l+|k|}, & \text{otherwise,} \end{cases}$$
is the average power of the global channel. Then the average useful and interference powers have the following expressions:
$$P_S = \frac{E}{\|\boldsymbol{\varphi}\|^2}\, \boldsymbol{\psi}^H\, \mathbf{KS}_{\boldsymbol{\varphi}}\, \boldsymbol{\psi} \quad \text{and} \quad P_I = \frac{E}{\|\boldsymbol{\varphi}\|^2}\, \boldsymbol{\psi}^H\, \mathbf{KI}_{\boldsymbol{\varphi}}\, \boldsymbol{\psi}, \quad (9)$$
where $\mathbf{KS}_{\boldsymbol{\varphi}}$ and $\mathbf{KI}_{\boldsymbol{\varphi}}$ are Hermitian, symmetric, positive semidefinite matrices:
$$\mathbf{KS}_{\boldsymbol{\varphi}} = \sum_{k=-(K-1)}^{K-1} \Pi_k\, \Sigma_k\!\left( \boldsymbol{\varphi}\boldsymbol{\varphi}^H \right) \odot \boldsymbol{\Lambda} \quad (10)$$
and
$$\mathbf{KI}_{\boldsymbol{\varphi}} = \sum_{n} \Sigma_{nN}\!\left( \sum_{k=-(K-1)}^{K-1} \Pi_k\, \Sigma_k\!\left( \boldsymbol{\varphi}\boldsymbol{\varphi}^H \right) \odot \boldsymbol{\Omega} \right) - \mathbf{KS}_{\boldsymbol{\varphi}}. \quad (11)$$
The entries of the matrices $\boldsymbol{\Lambda}$ and $\boldsymbol{\Omega}$ are defined as:
$$\Lambda_{pq} = J_0^2\!\big(\pi B_d T_s (p-q)\big)$$
and
$$\Omega_{pq} = \begin{cases} Q\, J_0^2\!\big(\pi B_d T_s (p-q)\big), & \text{if } (p-q) \bmod Q = 0,\\ 0, & \text{otherwise,} \end{cases} \quad \text{with } p, q \in \mathbb{Z}.$$
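A finite-window numerical sketch of this construction is given below; the infinite matrices, the sum over $n$ and the shift operator are truncated to the prototype support, and the component-wise products follow (10) and (11). The helper names and the truncation choices are assumptions of the example.

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order 0

def shift_diag(M, k):
    """Matrix shift operator Sigma_k: (Sigma_k M)[p, q] = M[p-k, q-k],
    with zero padding on a finite window."""
    out = np.zeros_like(M)
    if k >= 0:
        out[k:, k:] = M[:M.shape[0] - k, :M.shape[1] - k]
    else:
        out[:k, :k] = M[-k:, -k:]
    return out

def kernels(phi, Pi, Bd_Ts, N, Q, N0, E):
    """Truncated construction of KS_phi and KIN_phi following Eqs. (10)-(13),
    with Pi indexed from -(K-1) to K-1."""
    L = len(phi)
    p = np.arange(L)
    D = p[:, None] - p[None, :]
    Lam = j0(np.pi * Bd_Ts * D) ** 2
    Om = np.where(D % Q == 0, Q * Lam, 0.0)
    K = (len(Pi) + 1) // 2
    outer = np.outer(phi, np.conj(phi))
    KS = np.zeros((L, L), dtype=complex)
    KI_core = np.zeros((L, L), dtype=complex)
    for k in range(-(K - 1), K):
        KS += Pi[k + K - 1] * shift_diag(outer, k) * Lam
        KI_core += Pi[k + K - 1] * shift_diag(outer, k) * Om
    KIsum = sum(shift_diag(KI_core, n * N) for n in range(-(L // N), L // N + 1))
    KI = KIsum - KS
    KIN = KI + (N0 / E) * np.linalg.norm(phi) ** 2 * np.eye(L)
    return KS, KIN
```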
The noise term is given by $N_{00} = \langle \boldsymbol{\psi}_{00}, \mathbf{n} \rangle$. Thus, the average noise power is:
$$P_N = E\!\left[ \left| \langle \boldsymbol{\psi}_{00}, \mathbf{n} \rangle \right|^2 \right] = \boldsymbol{\psi}^H\, E\!\left[ \mathbf{n}\mathbf{n}^H \right] \boldsymbol{\psi}.$$
As the noise is supposed to be white, its covariance matrix is equal to $\mathbf{R}_{nn} = E\!\left[ \mathbf{n}\mathbf{n}^H \right] = N_0\, \mathbf{I}$, where $\mathbf{I}$ is the identity matrix. Consequently,
$$P_N = N_0\, \|\boldsymbol{\psi}\|^2. \quad (12)$$
B. Optimization Technique
The SINR expression is the following:
$$\mathrm{SINR} = \frac{P_S}{P_I + P_N} = \frac{\boldsymbol{\psi}^H\, \mathbf{KS}_{\boldsymbol{\varphi}}\, \boldsymbol{\psi}}{\boldsymbol{\psi}^H\, \mathbf{KIN}_{\boldsymbol{\varphi}}\, \boldsymbol{\psi}}, \quad (13)$$
where $\mathbf{KIN}_{\boldsymbol{\varphi}} = \mathbf{KI}_{\boldsymbol{\varphi}} + \frac{N_0}{E}\, \|\boldsymbol{\varphi}\|^2\, \mathbf{I}$.
Our optimization technique is an iterative algorithm where we maximize alternately the Rx waveform $\boldsymbol{\psi}$, for a given Tx waveform $\boldsymbol{\varphi}$, and the Tx waveform $\boldsymbol{\varphi}$, for a given Rx waveform $\boldsymbol{\psi}$.
Note that (13) can also be written as:
$$\mathrm{SINR} = \frac{\boldsymbol{\varphi}^H \mathbf{KS}_{\boldsymbol{\psi}}\,\boldsymbol{\varphi}}{\boldsymbol{\varphi}^H \mathbf{KIN}_{\boldsymbol{\psi}}\,\boldsymbol{\varphi}}, \qquad (14)$$
where $\mathbf{KS}_{\boldsymbol{\psi}}$ and $\mathbf{KIN}_{\boldsymbol{\psi}}$ are expressed as:
$$\mathbf{KS}_{\boldsymbol{\psi}} = \sum_{k=-(K-1)}^{K-1} \Pi_k\, \boldsymbol{\Sigma}^{k}\boldsymbol{\psi}\boldsymbol{\psi}^H \odot \boldsymbol{\Lambda} \qquad (15)$$
and
$$\mathbf{KIN}_{\boldsymbol{\psi}} = \mathbf{KI}_{\boldsymbol{\psi}} + \frac{N_0}{E}\,\|\boldsymbol{\psi}\|^2\,\mathbf{I} \qquad (16)$$
with
$$\mathbf{KI}_{\boldsymbol{\psi}} = \sum_{n} \boldsymbol{\Sigma}^{nN} \sum_{k=-(K-1)}^{K-1} \Pi_k\, \boldsymbol{\Sigma}^{k}\boldsymbol{\psi}\boldsymbol{\psi}^H \odot \boldsymbol{\Omega} \;-\; \mathbf{KS}_{\boldsymbol{\psi}}. \qquad (17)$$
Thus, the optimization problem is equivalent to maximizing a generalized Rayleigh quotient. As $\frac{N_0}{E}\|\boldsymbol{\varphi}\|^2 > 0$, we can affirm that $\mathbf{KIN}_{\boldsymbol{\varphi}}$ is always invertible and relatively well-conditioned.
The main steps of the proposed algorithm, presented in Figure 1, are the following:
• Step 1: We initialize the algorithm with $\boldsymbol{\varphi}^{(0)}$.
• Step 2: At iteration $(i)$, we compute $\boldsymbol{\psi}^{(i)}$ as the eigenvector of $(\mathbf{KIN}_{\boldsymbol{\varphi}^{(i)}})^{-1}\mathbf{KS}_{\boldsymbol{\varphi}^{(i)}}$ with maximum eigenvalue.
• Step 3: For the obtained $\boldsymbol{\psi}^{(i)}$, we determine $\boldsymbol{\varphi}^{(i+1)}$ as the eigenvector of $(\mathbf{KIN}_{\boldsymbol{\psi}^{(i)}})^{-1}\mathbf{KS}_{\boldsymbol{\psi}^{(i)}}$ with maximum eigenvalue.
• Step 4: We proceed to the next iteration, $(i + 1)$.
• Step 5: We stop the iterations when we obtain a negligible variation of the SINR. We note that eig, used in Figure 1, is a function that returns the eigenvector of a square matrix with the largest eigenvalue.
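A minimal numerical sketch of this alternating maximization is given below. The construction of the kernel matrices of Eqs. (10)-(11) and (15)-(17) is abstracted behind two user-supplied callables, and the random Hermitian matrices used at the end are stand-ins introduced here only so that the example executes; they are not the kernels of the actual TR-POPS system.

```python
import numpy as np
from scipy.linalg import eigh

def tr_pops_iteration(build_ks, build_kin, q_len, n_iter=20, tol=1e-6, seed=0):
    """Alternating maximization of the generalized Rayleigh quotient
    SINR = (w^H KS w) / (w^H KIN w), following Steps 1-5 above.
    build_ks / build_kin must return Hermitian matrices (KIN positive definite)
    computed from the current waveform; they stand for Eqs. (10)-(11), (15)-(17)."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal(q_len) + 1j * rng.standard_normal(q_len)  # phi^(0)
    phi /= np.linalg.norm(phi)
    sinr_prev = -np.inf
    for _ in range(n_iter):
        # Rx update: top generalized eigenvector of (KIN_phi)^-1 KS_phi
        _, vecs = eigh(build_ks(phi), build_kin(phi))
        psi = vecs[:, -1]
        # Tx update: top generalized eigenvector of (KIN_psi)^-1 KS_psi
        vals, vecs = eigh(build_ks(psi), build_kin(psi))
        phi = vecs[:, -1]
        sinr = vals[-1]                                # largest generalized eigenvalue
        if abs(sinr - sinr_prev) < tol * abs(sinr):    # Step 5: negligible variation
            break
        sinr_prev = sinr
    return phi, psi, sinr

# Random positive (semi)definite stand-ins, for demonstration only.
rng = np.random.default_rng(1)
a = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
b = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
ks_demo, kin_demo = a @ a.conj().T, b @ b.conj().T + np.eye(32)
phi_opt, psi_opt, sinr = tr_pops_iteration(lambda w: ks_demo, lambda w: kin_demo, q_len=32)
print("final SINR (dummy kernels):", sinr)
```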
V. SIMULATION RESULTS
In this section, the performance of the proposed TR-POPS technique is evaluated. To show the gain in terms of SINR and Power Spectral Density (PSD), a comparison with POPS-OFDM and with conventional OFDM with TR is also carried out.
The results of the POPS-OFDM algorithm applied to our TR-based system are obtained for a discrete time-frequency lattice. The optimal Tx/Rx waveform couple maximizing the SINR, $(\boldsymbol{\varphi}_{opt}, \boldsymbol{\psi}_{opt})$, is evaluated for a Gaussian initialization waveform $\boldsymbol{\varphi}^{(0)}$. We assume an exponentially decaying, truncated channel power profile. Figure 2 presents the evolution of the SINR versus the normalized Doppler spread $B_d/F$ for a normalized channel delay spread $T_m/T$, where $Q = 128$, $N = 144$, the lattice density is equal to $8/9$ and the waveform support duration is $D = 3T$. The obtained results demonstrate that the TR-POPS-OFDM approach improves the SINR with a gain of 2.3 dB for $B_d/F = 0.1$ compared with POPS-OFDM, and a gain that can reach 5.2 dB for $B_d/F = 0.02$ compared with conventional OFDM with TR. Moreover, this figure is a means to find the adequate couple $(T, F)$ for an envisaged application in order to ensure the desired transmission quality. Figure 4 shows that, thanks to the TR technique, the obtained optimal transmit waveform, $\boldsymbol{\varphi}_{opt}$, reduces the OOB emissions by about 40 dB compared to the POPS-OFDM system without TR. We present in Figure 5 the Tx/Rx waveforms, $\boldsymbol{\varphi}_{opt}$ and $\boldsymbol{\psi}_{opt}$, corresponding to the optimal SINR for $Q = 128$, $N = 144$, $FT = 1 + 16/128$, $B_d T_m = 0.001$ and $D = 3T$. Since the channel response is Hermitian and symmetric thanks to the TR effect, we obtain identical Tx/Rx waveforms, as illustrated in this figure.
VI. CONCLUSION
In this paper, we studied the association of POPS-OFDM algorithm with TR precoding technique to design novel waveforms for 5G systems. To this end, we presented the corresponding system model and we derived the analytical SINR expression. Despite the additional complexity of applying the combination process, simulation results showed that the proposed approach offers a highly flexible behavior and better performances in terms of maximization of the SINR and reduction of the ISI/ICI. Another possible challenging research axis consists in applying this combination in MIMO-OFDM and FBMC/OQAM systems.
Figure 1: Optimization philosophy.

Figure 2: Optimized SINR as a function of $B_d/F$ for $Q = 128$, SNR = 30 dB, $B_d T_m = 10^{-3}$ and $D = 3T$.

Figure 3 illustrates the effect of TR by showing the evolution of the SINR with respect to the time-frequency parameter $FT$. As in Figure 2, our proposed system outperforms the POPS-OFDM system and conventional OFDM with TR. The presented results reveal an increase in the obtained SINR that can reach 1.45 dB for $FT = 1 + 8/128$ compared with POPS-OFDM, and an increase of 4.5 dB for $FT = 1 + 48/128$ compared to conventional OFDM with TR.

Figure 3: SINR versus $FT$ for $Q = 128$, SNR = 30 dB, $B_d T_m = 10^{-2}$ and $D = 3T$.

Figure 4: PSD of the optimized transmit waveform for $Q = 128$, SNR = 30 dB, $B_d T_m = 10^{-3}$, $FT = 1.25$ and $D = 3T$.

Figure 5: Tx/Rx optimized waveforms for $D = 3T$.
01763324 | en | ["sdv.ib.ima", "sdv", "phys", "phys.cond.cm-ms"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01763324/file/Verezhak_ACOM_ActaBiomater_Hal.pdf

M. Verezhak, E. F. Rauch, M. Véron, C. Lancelon-Pin, J.-L. Putaux, M. Plazanet, A. Gourrier

Ultrafine heat-induced structural perturbations of bone mineral at the individual nanocrystal level
Keywords: Bone, mineral nanocrystals, hydroxyapatite, TEM, electron diffraction, heating effects
The nanoscale characteristics of the mineral phase in bone tissue such as nanocrystal size, organization, structure and composition have been identified as potential markers of bone quality. However, such characterization remains challenging since it requires combining structural analysis and imaging modalities with nanoscale precision. In this paper, we report the first application of automated crystal orientation mapping using transmission electron microscopy (ACOM-TEM) to the structural analysis of bone mineral at the individual nanocrystal level. By controlling the nanocrystal growth of a cortical bovine bone model artificially heated up to 1000 ºC, we highlight the potential of this technique. We thus show that the combination of sample mapping by scanning and the crystallographic information derived from the collected electron diffraction patterns provides a more rigorous analysis of the mineral nanostructure than standard TEM. In particular, we demonstrate that nanocrystal orientation maps provide valuable information for dimensional analysis. Furthermore, we show that ACOM-TEM has sufficient sensitivity to distinguish between phases with close crystal structures and we address unresolved questions regarding the existence of a hexagonal to monoclinic phase transition induced by heating. This first study therefore opens new perspectives in bone characterization at the nanoscale, a daunting challenge in the biomedical and archaeological fields, which could also prove particularly useful to study the mineral characteristics of tissue grown at the interface with biomaterials implants.
Introduction.
Bone tissue is a biological nanocomposite material essentially composed of hydrated collagen fibrils of ~100 nm in diameter and up to several microns in length, reinforced by platelet-shaped nanocrystals of calcium phosphate apatite of ~ 4×25×50 nm 3 in size [START_REF] Weiner | The Material Bone: Structure-Mechanical Function Relations[END_REF]. These mineralized fibrils constitute the building blocks of bone tissue, and their specific arrangement is known to depend primarily on the dynamics of the formation and repair processes. Since these cellular processes can occur asynchronously in space and time, the mineralized fibrils adopt a complex hierarchical organization [START_REF] Weiner | The Material Bone: Structure-Mechanical Function Relations[END_REF], which was shown to be a major determinant of the macroscopic biomechanical properties [START_REF] Zimmermann | Intrinsic mechanical behavior of femoral cortical bone in young, osteoporotic and bisphosphonate-treated individuals in low-and high energy fracture conditions[END_REF]. Extensive research programs are therefore currently focused on bone ultrastructure for biomedical diagnoses or tissue engineering applications.
However, structural studies at the most fundamental scales remain challenging due to the technical difficulties imposed by nanoscale measurements and by the tissue heterogeneity. Nevertheless, as a natural extension of bone mineral density (BMD) analysis, an important marker in current clinical studies, the following key characteristics of the mineral nanocrystals have been identified as potential markers of age and diseases: chemical composition, crystallinity vs disorder, crystal structure, size, shape and orientation [START_REF] Matsushima | Age changes in the crystallinity of bone mineral and in the disorder of its crystal[END_REF][START_REF] Boskey | Variations in bone mineral properties with age and disease[END_REF]. Recent progress in the field showed that in order to obtain a deeper medical insight into the mechanisms of bone function, several such parameters need to be combined and correlated to properties at larger length scales [START_REF] Granke | Microfibril Orientation Dominates the Microelastic Properties of Human Bone Tissue at the Lamellar Length Scale[END_REF]. Interestingly, from a totally different point of view, the archaeological community has drawn very similar conclusions concerning nanoscale studies for the identification, conservation and restoration of bone remains and artifacts [START_REF] Chadefaux | Archaeological Bone from Macro-to Nanoscale: Heat-Induced Modifications at Low Temperatures[END_REF].
From a materials science perspective, this is a well identified challenge in the analysis of heterogeneous nanostructured materials. Yet, technically, a major difficulty stems from the fact that most of the identified nanostructural bone markers require individual measurements on dedicated instruments which are generally difficult to combine in an integrative approach.
One 'gold standard' in nanoscale bone characterization is X-ray diffraction (XRD), which allows determining atomic-scale parameters averaged over the total volume illuminated by the X-ray beam. An important result from XRD studies conducted with laboratory instruments is that the bone mineral phase has, on average, a poorly crystalline apatite structure which, to a certain extent, is induced by a high fraction of carbonate substitutions [START_REF] De | Le substance minerale dans le os[END_REF]. Such studies enabled localization of a substantial number of elements other than calcium and phosphorus present in bone via ionic substitutions [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF], which can lead to serious pathological conditions, e.g. skeletal fluorosis [START_REF] Boivin | Fluoride content in human iliac bone: Results in controls, patients with fluorosis, and osteoporotic patients treated with fluoride[END_REF]. When an average description of bone properties is insufficient, synchrotron X-ray beams focused to a typical diameter of 0.1 -10 µm [START_REF] Schroer | Hard x-ray nanoprobe based on refractive x-ray lenses[END_REF][START_REF] Fratzl | Position-Resolved Small-Angle X-ray Scattering of Complex Biological Materials[END_REF][START_REF] Paris | From diffraction to imaging: New avenues in studying hierarchical biological tissues with x-ray microbeams (Review)[END_REF] operated in scanning mode allow mapping the microstructural heterogeneities. However, this remains intrinsically an average measurement and the current instrumentation limits prevent any analysis at the single mineral nanocrystal level.
Transmission electron microscopy (TEM) is a second 'gold standard' in bone characterization at the nanoscale. In high resolution mode, it allows reaching sub-angstrom resolution [START_REF] Xin | HRTEM Study of the Mineral Phases in Human Cortical Bone[END_REF] and therefore provides atomic details of the crystals. This increased resolution comes at the cost of the image field of view, which may not provide representative results due to the tissue heterogeneity. This limitation can partly be alleviated in scanning mode, which is more adapted to the collection of a large amount of data for statistical usage. In particular, for the process known as Automated Crystal Orientation Mapping (ACOM-TEM) [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF], diffraction patterns are systematically acquired while the electron beam is scanning micron-sized areas, such that the structural parameters of hundreds of individual nanocrystals may be characterized and used to reconstruct orientation maps with nanometer spatial resolution.
To our best knowledge, the present study is the first reported use of the ACOM-TEM method to analyze mineral nanocrystals in bone tissue. To demonstrate the potential of this technique for bone studies, a test object is required which structure should be as close as possible to native bone while offering a wide range of nanocrystal dimensions. Heated bone provides an ideal model for such purposes, ensuring a tight control over the nanocrystal size by adjusting the temperature.
This system was extensively studied in archeological and forensic contexts. Upon heating to 100-150 °C, bone is progressively dehydrated [START_REF] Legeros | Types of 'H2O' in human enamel and in precipitated apatites[END_REF] and collagen is considered to be fully degraded at ~ 400 °C [START_REF] Kubisz | Differential scanning calorimetry and temperature dependence of electric conductivity in studies on denaturation process of bone collagen[END_REF][START_REF] Etok | Structural and chemical changes of thermally treated bone apatite[END_REF]. Most X-ray studies concluded an absence of mineral crystal structure modifications before 400 °C, while a rapid crystal growth has been reported at ~ 750 °C [START_REF] Rogers | An X-ray diffraction study of the effects of heat treatment on bone mineral microstructure[END_REF][START_REF] Hiller | Bone mineral change during experimental heating: An X-ray scattering investigation[END_REF][START_REF] Piga | A new calibration of the XRD technique for the study of archaeological burned human remains[END_REF]. In a recent study we provided evidence that the mineral nanocrystals increase in size and become more disorganized at temperatures as low as 100 °C [START_REF] Gourrier | Nanoscale modifications in the early heating stages of bone are heterogeneous at the microstructural scale[END_REF]. In addition, many debates remain open concerning the nature of a postulated high temperature phase transition, the coexistence of different crystallographic phases, as well as the presence of ionic defects above and below the critical temperature of Tcr = 750 °C [START_REF] Greenwood | Initial observations of dynamically heated bone[END_REF]. The heated bovine cortical bone model therefore presents two main advantages to assess the potential of ACOM-TEM: 1) the possibility to fine-tune the mineral nanocrystal size upon heating and 2) the existence of a phase transition at high temperatures.
Using a set of bovine cortical bone samples in a control state and heated at eight temperatures ranging from 100 to 1000 °C, we show that ACOM-TEM provides enough sensitivity to probe fine crystalline modifications induced by heating; in particular, nanocrystal growth, subtle changes in stoichiometry and space group. Those results provide new insight into the detailed effects of heating on bone and validate the use of ACOM-TEM for fundamental studies of the nanoscale organization of bone tissue in different contexts.
Materials and methods.
Sample preparation: A bovine femur was obtained from the local slaughterhouse (ABAG, Fontanil-Cornillon, France). The medial cortical quadrant of a femoral section from the mid-diaphysis was extracted with a high precision circular diamond saw (Mecatome T210, PRESI) and fixed in ethanol 70 % for 10 days (supplementary information, Fig. S1). Nine 2×2×10 mm 3 blocks were cut in the longitudinal direction and subsequently dehydrated (48 hours in ethanol 70 % and 100 %) and slowly dried in a desiccator. One block was used as a control, while the others were heated to eight temperatures: 100, 200, 300, 400, 600, 700, 800 and 1000 °C for 10 min in vacuum (10 -2 mbar) inside quartz tube and cooled in air. The temperature precision of the thermocouple was ~ 2-3 °C and the heating rate was ~ 30-40 °C/min. The heating process resulted in color change, as shown in Fig. S2 of supplementary information. The samples were then embedded in poly-methyl methacrylate (PMMA) resin following the subsequent steps: impregnation, inclusion and solidification. For impregnation, a solution of methyl methacrylate (MMA) was purified by aluminum oxide and a solution of MMA was prepared with dibutyl phthalate in a 4:1 proportion (MMA1). The samples were kept at 4 °C in MMA1 for 5 days. For inclusion, the samples were stored in MMA1 solution with 1 w% of benzoyl peroxide for 3 days and in MMA1 solution with 2 w% of benzoyl peroxide for 3 days. The solidification took place in PTFE flat embedding molds covered by ACLAR film at 32 °C for 48 h. The resin-embedded blocks were then trimmed and cut with a diamond knife in a Leica UC6 ultramicrotome. The 50-nm-thick transverse sections (i.e., normal to the long axis of the femur) were deposited on 200 mesh Cu TEM grids coated with lacey carbon.
TEM data acquisition:
The measurements were performed using a JEOL 2100F FEG-TEM (Schottky ZrO/W field emission gun) operating at an accelerating voltage of 200 kV and providing an electron beam focused to 2 nm in diameter at sample position. A camera was positioned in front of the TEM front window to collect diffraction patterns as a function of scanning position with a frame rate of 100 Hz. The regions of interest were first selected in standard bright-field illumination (supplementary Fig. S3). The field of view for ACOM acquisition was chosen to be 400×400 nm 2 with a 10 ms acquisition time and a 2 nm step size. A sample-tocamera distance of 30 cm was chosen for all samples, except for the larger crystals treated at 800 and 1000 °C, for which a camera length of 40 cm was used, applying a precession angle of 1.2 ° at a frequency of 100 Hz in order to minimize dynamical effects [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF]. Following distortion and camera length corrections, a virtual brightfield image was reconstructed numerically by selecting only the transmitted beam intensities.
Radiation damage assessment:
No severe radiation damage was observed during ACOM-TEM data acquisition. This was assessed by independent bright-field acquisitions in the region close to the one scanned by ACOM-TEM for each heat-treatment temperature. These measurements were performed under the same conditions but with a smaller spot size of 0.7 nm (i.e. with a higher radiation dose) and were repeated 25 times to emphasize potential damage. Examples of bright-field images before the first and last (25th) frames are shown in Fig. S5 (supplementary information), showing very limited radiation damage.
ACOM-TEM analysis:
The data analysis relies on the comparison between the electron diffraction patterns collected at every scan position and simulated patterns (templates) calculated for a given crystal structure in all possible orientations [START_REF] Rauch | Automated crystal orientation and phase mapping in TEM[END_REF][START_REF] Rauch | Rapid spot diffraction patterns idendification through template matching[END_REF], thus allowing the reconstruction of crystal orientation maps (Fig. 1). The template matching was performed using the ASTAR software package from NanoMEGAS SPRL. In its native state, bone mineral is calcium phosphate close to a well-known hydroxyapatite, Ca 10 (PO 4 ) 6 (OH) 2 , a subset of the widespread geological apatite minerals [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. Hence, our initial model for the crystal structure is a hexagonal space group (P63/m) with 44 atoms per unit cell and lattice parameters of a = 9.417 Å; c = 6.875 Å [START_REF] Hughes | Structural variations in natural F, OH, and Cl apatites[END_REF]. Every i-th acquired diffraction pattern collected at position (x i ,y i ) was compared to the full set of templates through image correlation (template matching) and the best fit gave the most probable corresponding crystallographic orientation. This first result can thus be represented in the form of a color map representing the crystalline orientation (Fig. 1f). To assess the quality of the fit, a second map of the correlation index Q i can be used, defining Q i as:
$$Q_i = \frac{\sum_{j=1}^{m} P(x_j, y_j)\, T_i(x_j, y_j)}{\sqrt{\sum_{j=1}^{m} P^2(x_j, y_j)}\;\sqrt{\sum_{j=1}^{m} T_i^2(x_j, y_j)}}$$
where P(x j ,y j ) is the intensity of measured diffraction patterns and T i (x,y) corresponds to the intensity in every ith template. Q i compares the intensities of the reflection contained in the diffraction pattern, denoted by the function P(x,y), to the corresponding modeled intensities T i (x,y) in every i-th template in order to select the best match [START_REF] Rauch | Rapid spot diffraction patterns idendification through template matching[END_REF]. The degree of matching is represented in an 'index map' that plots the highest matching index at every location (Fig. 1g).
This parameter therefore weights the degree of correlation between the acquired and simulated diffraction patterns. If more than one phase is expected to be present, several sets of templates can simultaneously be fitted to the data in order to identify the best one. This allows constructing 'phase maps' in which each crystallographic phase is associated to a given color.
A critical aspect of the ACOM analysis is to judge the quality of the proposed crystal orientation. Indeed, it is worth emphasizing that the template matching algorithm always provides a solution, which requires evaluating the fidelity of the phase/orientation assignment, especially in the case of overlapping crystals. A reliability parameter R i was proposed to address this point. It is proportional to the ratio of the correlation indices for the two best solutions and is defined by:
$$R_i = 100\left(1 - \frac{Q_{i2}}{Q_{i1}}\right)$$
where Q i1 is the best solution (represented as a red circle on the stereographic projection in Fig. 1e) and Q i2 is the second best solution (shown with a green circle). Reliability values range between 0 (unsafe/black) and 100 (unique solution/white). In practice, a value above 15 is sufficient to ascertain the validity of the matching (Fig. 1h).
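The following short sketch reproduces these two metrics for one experimental pattern compared against a stack of simulated templates; the random arrays at the end are placeholders standing in for real diffraction data.

```python
import numpy as np

def matching_metrics(pattern, templates):
    """Correlation index Q_i (normalized cross-correlation of intensities, as
    defined above) for every template, plus the reliability
    R = 100 * (1 - Q_second_best / Q_best)."""
    p = pattern.ravel().astype(float)
    t = templates.reshape(len(templates), -1).astype(float)   # one template per row
    q = (t @ p) / (np.linalg.norm(p) * np.linalg.norm(t, axis=1))
    q_sorted = np.sort(q)
    best, second = q_sorted[-1], q_sorted[-2]
    r = 100.0 * (1.0 - second / best)
    return q, int(np.argmax(q)), r

# Placeholder data: one 64x64 "pattern" and 50 "templates".
rng = np.random.default_rng(0)
q, i_best, reliability = matching_metrics(rng.random((64, 64)), rng.random((50, 64, 64)))
```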
Results.
Individual nanocrystal visualization.
Bone nanocrystal orientation maps were derived for the set of heated bone samples (Fig. 2) with the corresponding collective orientations on the 0001 stereographic projection. A moderate increase in crystal size was observed below 800 °C, followed by a rapid growth at higher temperatures with a crystal shape change from platelet to polyhedral.
An important limitation of bright-field TEM is that size and geometry measurements such as in Fig. 1a are generally performed on the whole image. In our case, the nanocrystals can exhibit a broad distribution of orientations, such that size estimation is impractical due to the platelet-shaped crystal geometry and leads to overestimated values, as pointed out in earlier studies [START_REF] Ziv | Bone Crystal Sizes: A Comparison of Transmission Electron Microscopic and X-Ray Diffraction Line Width Broadening Techniques[END_REF]. An important advantage of ACOM is the additional visualization of the nanocrystal orientation, which allows restricting the dimensional analysis to crystals in the same orientation (the same color-code). For example, crystals displayed in red in Fig. 1f are oriented with their c-axis (longest axis) perpendicular to the scanning plane. The crystals displayed in green and blue have a 90° misorientation from this particular zone axis. While the colors are representative of the crystal orientations, the overall difference in the ratio of red versus green and blue colors at different temperatures reflects the spatial coherence of the phase index and, thus, the level of heterogeneity within a particular sample or between different samples. This fact is confirmed by additional scans with larger fields of view acquired for each sample of the temperature series (supplementary information, Fig. S4).
The crystallographic texture can be inferred from the stereographic projection along the 0001 direction showing that, below 800 °C, most regions consist of crystals mainly aligned along the c-axis, which is in good agreement with other larger scale XRD studies [START_REF] Voltolini | Hydroxylapatite lattice preferred orientation in bone: A study of macaque, human and bovine samples[END_REF]. At higher temperatures, we observed randomly oriented crystals resulting from a phase transition whose nature will be discussed in a later section.

Bone mineral nanocrystal size estimation.
The nanocrystal size was measured assuming two models: platelet (anisotropic) for the low temperature (LT) phase below 750 °C and polyhedral (isotropic) for the high temperature (HT) phase above 750 °C.
In the LT phase, the smallest platelet dimensions are obtained by line profiling (as displayed in Fig. 3a) of crystals having their c-axis aligned with the beam direction (represented by a red color). Crystal overlapping effects are not expected for this orientation as the long axis length of a platelet is comparable to the sample thickness. Therefore, the electron beam is predicted to pass through one crystal only, thus providing reliable size estimation.
For the HT phase, the nanocrystal size was determined using a spherical approximation (crystal diameters) based on the definition of grain boundaries, i.e. the locations where the misorientation between two pixels in the orientation map (Fig. 3c) is higher than a user-selected threshold value (5° in the present case). An example of a grain boundary map for the 800 °C sample is shown in Fig. 3d. An average grain size is then estimated using a sphere diameter weighted by the grain's area in order to avoid misindexing from numerous small grains (mainly noise).
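A minimal sketch of this area-weighted size estimate is given below; it assumes that the grain segmentation (5° misorientation threshold) has already been performed and is provided as a labelled map, and it uses the equivalent circle diameter of each 2D grain as a proxy for the sphere diameter.

```python
import numpy as np

def area_weighted_grain_diameter(grain_labels, pixel_size_nm):
    """Equivalent diameters and their area-weighted average from a labelled
    grain map (labels > 0; 0 = unindexed pixels). The area weighting reduces
    the influence of numerous tiny (noise) grains."""
    labels, counts = np.unique(grain_labels[grain_labels > 0], return_counts=True)
    areas = counts * pixel_size_nm ** 2            # grain areas in nm^2
    diameters = 2.0 * np.sqrt(areas / np.pi)       # equivalent circle diameter
    d_mean = np.sum(diameters * areas) / np.sum(areas)
    return diameters, d_mean

# Placeholder labelled map (200 x 200 pixels of 2 nm), for illustration only.
rng = np.random.default_rng(0)
demo_labels = rng.integers(0, 30, size=(200, 200))
diam, d_mean = area_weighted_grain_diameter(demo_labels, pixel_size_nm=2.0)
```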
The grain size distribution was then obtained for the HT phase (Fig. 3e). According to the two described models for the nanocrystal sizes (summarized in Fig. 3b), we found that the smallest bone mineral particle size rises, on average, from 3.5 nm in the control state to 5.1 nm at 700 °C. Subsequently, the average particle diameter dramatically increases upon further heating: up to 70 nm at 800 °C and 94 nm at 1000 °C.
Identification of the high temperature apatite phase.
While stoichiometric hydroxyapatite has a calcium-to-phosphate ratio of 1.67, bone mineral is known to accommodate ~ 7 wt.% of carbonate and numerous other ionic substitutions [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. To test the sensitivity of ACOM-TEM to changes in stoichiometry and therefore of space group, we used the data set containing the largest grains (1000 °C). We compared the fits obtained with the hydroxyapatite template against five template structures of different minerals occurring in nature with similar chemical composition and stoichiometry: alpha-tricalcium phosphate (α-TCP, α-Ca3(PO4)2, space group P21/a), beta-calcium pyrophosphate (β-Ca2P2O7, P41), tetracalcium phosphate (Ca4(PO4)2O, P21), CaO (Fm-3m) and whitlockite (R3c), whose chemical compositions and structures are shown in Fig. 4. Those phases were previously encountered in synthetic apatites subjected to heat treatments and could, therefore, be potential candidates for heated bone [START_REF] Berzina-Cimdina | Research of Calcium Phosphates Using Fourier Transform Infrared Spectroscopy[END_REF]. Since these minerals have different space groups, all possible orientations are described by different fractions of the stereographic projection allowed by symmetry, as shown by the color map shapes in Fig. 4.
Two criteria can be used to conclude that the hydroxyapatite template (Fig. 4a) provides the best solution: 1) the highest index value that characterizes the quality of the solution is nearly twice larger for hydroxyapatite than for other apatite structures; in addition, 2) a given particle is expected to be fitted with the same orientation if monocrystalline, resulting in a uniform color, which is only fulfilled for hydroxyapatite. This analysis provides a first proof-of-concept that ACOM-TEM has a sufficient sensitivity to identify subtle variations of the crystal lattice that can be expected in highly disordered biological mineral structures, such as bone mineral nanocrystals. Other apatite minerals which are not expected to be found in bone tissue but have close to hydroxyapatite stoichiometry and chemical composition such as brushite (space group Ia), monetite ( P 1 ) and tuite ( R 3 m ) [START_REF] Schofield | The role of hydrogen bonding in the thermal expansion and dehydration of brushite, di-calcium phosphate dihydrate[END_REF][START_REF] Catti | Hydrogen bonding in the crystalline state. CaHPO4 (monetite), P1 or P1? A novel neutron diffraction study[END_REF][START_REF] Sugiyama | Structure and crystal chemistry of a dense polymorph of tricalcium phosphate Ca3 (PO4)2: A host to accommodate large lithophile elements in the earth's mantle[END_REF][START_REF] Calvo | The crystal structure of whitlockite from the Palermo quarry[END_REF] were also used to test the ACOM-TEM sensitivity. I.e. if ACOM-TEM had resulted in an equal probability to find these phases, it would clearly have suggested a lack of precision of the method. The analysis shows that this is not the case, i.e. these phases did not allow describing bone data as well as hydroxyapatite (see Figure S5a-d in supplementary information).
Space group: monoclinic or hexagonal ?
A common issue in the identification of a hydroxyapatite phase at different temperatures is the hypothesis of the existence of a hexagonal (P6 3 /m) to monoclinic (P2 1 /b) phase transition above T cr = 750 °C. The corresponding structures are shown in Fig. 5a. However, such a transition was mainly predicted by theoretical models [START_REF] Slepko | Hydroxyapatite: Vibrational spectra and monoclinic to hexagonal phase transition[END_REF][START_REF] Corno | Periodic ab initio study of structural and vibrational features of hexagonal hydroxyapatite Ca10(PO4)6(OH)2[END_REF] and was only observed in artificially synthesized hydroxyapatite [START_REF] Ma | Hydroxyapatite: Hexagonal or monoclinic?[END_REF][START_REF] Ikoma | Phase Transition of Monoclinic Hydroxyapatite[END_REF][START_REF] Suda | Monoclinic -Hexagonal Phase Transition in Hydroxyapatite Studied by X-ray Powder Diffraction and Differential Scanning Calorimeter Techniques[END_REF]. From the theoretical point of view, the monoclinic hydroxyapatite structure is thermodynamically more stable than the hexagonal one. Nevertheless, the hexagonal phase allows an easier exchange of OH-groups with other ions, which is necessary for bone tissue functions.
This issue was therefore addressed by matching the hydroxyapatite templates of the hexagonal [START_REF] Hughes | Structural variations in natural F, OH, and Cl apatites[END_REF] (Fig. 5b) and the monoclinic [START_REF] Elliott | Monoclinic hydroxyapatite[END_REF] (Fig. 5c) structures with the 1000 °C bone data set. The phase map in Fig. 5d represents the structure with the highest index at each scan point. Based on the index values, as well as on the uniform color-code criterion for single crystals, one can conclude that the hexagonal space group is more probable than the monoclinic one for the bone mineral HT phase.

Discussion.
Our results provide a first demonstration that a structural analysis is possible at the single nanocrystal level within bone tissue using ACOM-TEM. This constitutes a valuable improvement combining the advantages of selected area electron diffraction (SAED) and TEM. While SAED produces a global diffraction pattern from all the nanocrystals probed by an electron beam defined by a micron sized aperture, ACOM-TEM allows a detailed structural analysis within a similar field of view.
The use of the artificially heated bone model shows that the phase and orientation can be unambiguously determined for temperatures above ~ 400 o C where the crystal size is > 5 nm (Fig. 2). Even below this temperature, where the situation is less clear, since the scanning resolution given by the beam size (~ 2 nm) is close to the actual nanocrystals size (~ 4-5 nm), the observation of larger fields of view reveals the presence of coherently oriented domains in the control sample with characteristic sizes of ~ 100 -200 nm which are compatible with the diameter of collagen fibrils (supplementary information, Fig. S4). Such crystallographic information is not available from standard bright-field TEM and can be used to obtain more rigorous estimates of the nanocrystals dimensions. An important limitation of a bright-field TEM assessment of the thickness of the mineral platelets is that the nanocrystals can be sectioned in different orientation, hence resulting in an artificial broadening in projection [START_REF] Ziv | Bone Crystal Sizes: A Comparison of Transmission Electron Microscopic and X-Ray Diffraction Line Width Broadening Techniques[END_REF]. These artifacts can be avoided using ACOM-TEM by only considering the platelets oriented on-edge, i.e. with the c-axis normal to the observation plane (Fig. 3). Similarly, depending on the thickness of the sample section, several nanocrystals may partly overlap, which further complicates the analysis. This case can be avoided by discarding the areas corresponding to a low correlation index and reliability parameters, i.e. to a poor structural refinement.
Our analysis reveals three stages of crystal growth upon heating: a first, moderate growth (from ~ 3.5 to 5.1 nm) between ambient temperature and ~ 700 o C, an order of magnitude increase in size (from ~ 5.1 to 70 nm) between 700-800 o C, followed by an additional growth (from 70 to 94 nm) between 800-1000 o C. This is in very good agreement with previous X-ray studies [START_REF] Rogers | An X-ray diffraction study of the effects of heat treatment on bone mineral microstructure[END_REF][START_REF] Piga | A new calibration of the XRD technique for the study of archaeological burned human remains[END_REF][START_REF] Piga | Is X-ray diffraction able to distinguish between animal and human bones?[END_REF]. However, X-ray scattering provides average information from all crystals illuminated by the beam, while ACOM-TEM allows a precise mechanistic interpretation of the heating process. In particular, the appearance of new uncorrelated orientations at temperatures > 700 o C strongly suggests a recrystallization process by fusion-recrystallization of smaller grains. Interestingly, this process can qualitatively be observed from 400 o C onwards (i.e. following total collagen degradation) in Fig. 2, as the structure qualitatively becomes more heterogeneous (larger polydispersity in crystal sizes) and disordered. However, there is a sharp transition from platelets to polyhedral crystals between 700-800 o C, clearly indicating a non-linear crystal growth.
One major difficulty in assessing the crystalline structure of bone is that the crystal chemistry is known to fluctuate, potentially giving rise to modulations of the intensity and breadth of the Bragg peaks which thus tend to overlap in XRD [START_REF] Posner | Crystal chemistry of bone mineral[END_REF][START_REF] Sakae | Historical Review of Biological Apatite Crystallography[END_REF]. For the same reason, the precise interpretation of Raman and FTIR spectra is still a matter of debate after decades of studies [START_REF] Wopenka | A mineralogical perspective on the apatite in bone[END_REF]. Since the crystal structure is used as input in the ACOM-TEM analysis, such fine deviations to an ideal crystal structure could not be assessed reliably. Nevertheless, the analysis conducted with different templates shows that this method permits a reliable distinction between different phases on the basis of intrinsic quality metrics (correlation index) and extrinsic ones, e.g. the spatial coherence (color uniformity) of the phase and orientation determination.
This allowed, in particular, testing the hypothesis proposed in previous XRD studies that diffraction patterns of heated bone could be equally well indexed with a monoclinic space group instead of a hexagonal one. The lattice parameters of the two structures were found to be very close, with a β angle close to 120 o for the monoclinic phase [START_REF] Piga | Is X-ray diffraction able to distinguish between animal and human bones?[END_REF]. The main difference is that the length of the b-axis can fluctuate significantly from the a-axis in the monoclinic case (contrary to the hexagonal case where the a-and b-axis are identical by definition). Thus, the difference between the two structures is relatively subtle but, in principle, a monoclinic structure should better account for a higher degree of crystallinity generated by heating, as demonstrated with synthetic apatites [START_REF] Ma | Hydroxyapatite: Hexagonal or monoclinic?[END_REF].
Our results conclusively show that even for samples treated at 1000 o C, the mineral is better represented by a hexagonal structure. This was further confirmed by a close manual examination of the proposed solutions for a number of representative diffraction patterns (example in supplementary information, Fig. S6). Because the monoclinic phase is more representative of stoichiometric hydroxyapatite, the fact that bone mineral is better indexed by a hexagonal group implies that there is still a substantial degree of disorder in the crystal structure even for bone heated at high temperatures.
It is important to note that the ACOM setup can be readily implemented in standard existing TEM instruments and can therefore provide a close-to routine basis for biological, medical and archeological studies. Given the resolution level, we believe that ACOM-TEM could be advantageously exploited to analyze the interface layer (typically < 1 µm) between biomaterials and bone formed at the surface of implants, a critical aspect of osseointegration [START_REF] Davies | Bone bonding at natural and biomaterial surfaces[END_REF][START_REF] Legeros | Calcium phosphate-based osteoinductive materials[END_REF]. TEM was widely used to investigate the tissue structure at this interface [START_REF] Grandfield | High-resolution three-dimensional probes of biomaterials and their interfaces[END_REF][START_REF] Palmquist | Bone--titanium oxide interface in humans revealed by transmission electron microscopy and electron tomography[END_REF], but the collective mineral nanocrystals structure and organization was never analyzed. Similarly, in the biomedical field, severe pathological perturbations of mineral nanocrystals have been reported in many bone diseases, e.g. osteoporosis [START_REF] Rubin | TEM analysis of the nanostructure of normal and osteoporotic human trabecular bone[END_REF], osteogenesis imperfecta [START_REF] Fratzl-Zelman | Unique micro-and nano-scale mineralization pattern of human osteogenesis imperfecta type VI bone[END_REF] and rickets [START_REF] Karunaratne | Significant deterioration in nanomechanical quality occurs through incomplete extrafibrillar mineralization in rachitic bone: Evidence from in-situ synchrotron X-ray scattering and backscattered electron imaging[END_REF], for which a detailed nanoscale description is still lacking [START_REF] Gourrier | Scanning small-angle X-ray scattering analysis of the size and organization of the mineral nanoparticles in fluorotic bone using a stack of cards model[END_REF]. Additionally, this method could also have a positive impact in the archaeological field, since diagenetic effects associated with long burial time of bone remains are known to affect the mineral ultrastructure in numerous ways, hence impacting the identification and conservation of bone artifacts [START_REF] Reiche | The crystallinity of ancient bone and dentine: new insights by transmission electron microscopy[END_REF]. Finally, it should be mentioned that ACOM-TEM would most likely benefit from more advanced sample preparation methods such as focused ion beam milling coupled to scanning electron microscopy (FIB-SEM) which has been shown to better preserve the tissue ultrastructure [START_REF] Jantou | Focused ion beam milling and ultramicrotomy of mineralised ivory dentine for analytical transmission electron microscopy[END_REF][START_REF] Mcnally | A Model for the Ultrastructure of Bone Based on Electron Microscopy of Ion-Milled Sections[END_REF][START_REF] Reznikov | Three-dimensional structure of human lamellar bone: the presence of two different materials and new insights into the hierarchical organization[END_REF].
Conclusion.
In the present work, we showed that both direct visualization of individual bone nanocrystals and structural information can simultaneously be accessed using ACOM-TEM analysis. The mineral nanocrystal orientation, crystallographic phase and symmetry can be quantified, even in biological samples such as bone tissue that are known to be very heterogeneous down to the nanoscale. Our analysis of a heated bone model points to crystal growth by fusion and recrystallization mechanisms, starting from ~ 400 °C onwards with a sharp transition between 700 °C and 800 °C. By testing different phases corresponding to deviations from the hydroxyapatite stoichiometry as input for the structural refinement, we were able to assess the sensitivity of ACOM-TEM. We tested the hypothesis of a monoclinic space group attribution to the bone sample heated at 1000 °C and found that a hexagonal structure was more probable, suggesting the presence of crystalline defects even after heating at high temperatures. We therefore believe that ACOM-TEM could have a positive impact on applied research in biomaterials development, biomedical investigations of bone diseases and, possibly, analysis of archaeological bone remains.
Fig. 1: Generalized scheme of ACOM-TEM acquisition and data interpretation. (a) Bright-field (BF) image of bone tissue with the illustration of the scan area (dot diameter is 2 nm, enhanced for visibility), (b) recorded set of diffraction patterns, (c) example of fit using (d) the structure template for hydroxyapatite. (e) Fraction of stereographic projection with inverse pole figure color map, (f) orientation, (g) index and (h) reliability maps (high values appear brighter). Scale bar: 100 nm.

Fig. 2: Mineral crystal orientation distribution in bone tissue as a function of temperature. Orientation maps reconstructed from ACOM-TEM from control to 1000 °C with corresponding inverse pole figure color map (scale bar: 100 nm). Inset: collective crystal orientations on 0001 stereographic projection with the color bar normalized to the total number of crystals per scan.

Fig. 3: Bone mineral crystal size evolution with temperature. (a) an example of size measurement by line profiling along the c-axis of the platelet-shaped crystals for the LT regime (700 °C); (b) average crystal size vs temperature (black: smallest crystal size, red: crystal diameter). Examples of polyhedral-shaped crystal diameter measurement for the HT regime (800 °C): (c, d) indicate orientation and grain boundary maps, respectively, and (e) represents the distribution of crystal diameters.

Fig. 4: Phase sensitivity. a-f, orientation maps for 1000 °C heated bone data fitted with six apatite structures with corresponding inverse pole figure color maps. Maximum index values (Imax) are given for comparison. The hydroxyapatite structure produces the best fit indicated by the highest Imax value and homogeneous colors within single crystals. Scale bar: 50 nm.

Fig. 5: Hexagonal vs monoclinic symmetry. (a) crystal structures of hexagonal and monoclinic hydroxyapatite (view along c-axis); (b, c) corresponding orientation maps for the 1000 °C sample with color code and maximum index values; (d) phase map showing mainly the presence of hexagonal phase. Scale bar: 50 nm.

Fig. S3: Regions of scans for the set of heat-treated bone samples. Bright field images with corresponding ACOM orientation maps. Scale bar in orientation maps: 100 nm.

Fig. S4: Larger fields of view scan regions for the set of heat-treated bone samples. Bright field images with corresponding ACOM orientation maps.
Acknowledgements.
The authors would like to acknowledge M. Morais from SIMaP for the support with the heating apparatus, D. Waroquy (ABAG, Grenoble, France) for providing the bovine samples, and the NanoBio-ICMG Platform (FR 2607, Grenoble) for granting access to the TEM sample preparation facility.
Author contributions.
M.V. ‡ , C.L.P. and J.L.P. prepared samples; M.V. ‡ , E.F.R., M.V., M.P. and A.G. performed the research; A.G., M.P. and E.F.R. provided the financial support for the project; M.V. ‡ and E.F.R. analyzed data; M.V. ‡ and A.G. wrote the paper with contributions from all authors; A.G. and M.P. designed the research.
Supplementary information |
01763341 | en | ["spi.gproc", "spi.meca.geme"] | 2024/03/05 22:32:13 | 2013 | https://hal.science/hal-01763341/file/MSMP_TI_2013_MEZGHANI.pdf

S. Mezghani (email: sabeur.mezghani@chalons.ensam.fr), I. Demirci, M. El Mansori, H. Zahouani

Energy efficiency optimization of engine by frictional reduction of functional surfaces of cylinder ring-pack system

Keywords: Honing process, Surface roughness, elastohydrodynamic friction, Cylinder engine

Nomenclature (excerpt): penalty term parameter; z_r, pressure viscosity index (Roelands), z_r = p_r/(ln(η_0) + 9.67)
Introduction
The surface features of a cylinder liner engine are the fingerprint of the successive processes the surface has undergone, and they influence the functional performance of the combustible engine [START_REF] Caciu | Parametric textured surfaces for friction reduction in combustion engine[END_REF][START_REF] Tomanik | Friction and wear bench tests of different engine liner surface finishes[END_REF][START_REF] Pawlus | A study on the functional properties of honed cylinders surface during runningin[END_REF][START_REF] Mcgeehan | A literature review of the effects of piston and ring friction and lubricating oil viscosity and fuel economy[END_REF][START_REF] Srivastava | Effect of liner surface properties on wear and friction in a non-firing engine simulator[END_REF]. Therefore, surfaces and their measurement provide a link between the manufacture of cylinder bores and their functional performances [START_REF] Whitehouse | Surfacesa link between manufacture and function[END_REF]. Hence, the quantitative characterization of surface texture can be applied to production process control and design for functionality [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF]. The optimum surface texture of an engine cylinder liner should ensure quick running-in, minimum friction during sliding, low oil consumption, and good motor engine operating parameters in terms of effective power and unitary fuel consumption. Increasingly stringent engine emissions standards and power requirements are driving an evolution in cylinder liner surface finish [START_REF] Lemke | Characteristic parameters of the abbot curve[END_REF]. Unfortunately, the full effect of different cylinder liner finishes on ring-pack performance is not well understood [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF].
In mass production of internal combustion engine cylinder liners, the final surface finish on a cylinder bore is created by an interrupted multistage abrasive finishing process, known as the plateau-honing process. In honing, abrasive stones are loaded against the bore and simultaneously rotated and oscillated. Characteristically, the resulting texture consists of a flat smooth surface with two or more bands of parallel deep valleys with stochastic angular position. Figure 1 shows a typical plateau-honed surface texture from an engine cylinder.
To guarantee efficient production at industrial level of a cylinder liner of specific shape with acceptable dimensional accuracy and surface quality, three honing stages are usually required: rough honing, finish honing, and plateau honing. The surface texture is presumably provided by the ″finish honing″ [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Sabri | Functional optimisation of production by honing engine cylinder liner[END_REF]. Thus careful control of this operation is central to the production of the structured surface so that the cylinder liner will fulfil its mechanical contact functionalities in piston ring/cylinder liner assemblies (i.e. running-in performance, wear resistance, load-carrying capacity, oil consumption, etc.) [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Pawlus | Effects of honed cylinder surface topography on the wear of piston-piston ring-cylinder assemblies under artistically increased dustiness conditions[END_REF]. ring-pack friction reduction through cylinder liner finish optimization it is necessary to be able to distinguish the effect of each process variable on the roughness of these honed surfaces [START_REF] De Chiffre | Quantitative Characterisation of Surface Texture[END_REF].
In this work, strategies for piston ring-pack friction reduction through cylinder liner finish optimization were analyzed with the goal of improving the efficiency of selection of the honing process variable. The fundamental aim was to find a relation between the honing operating variables and the hydrodynamic friction at the piston rings/cylinder interface. An additional aim was to determine how the cylinder surface micro-geometry of plateau-honed cylinders affects the predicted friction. Thus, an experimental test rig consisting of an industrial honing machine instrumented with sensors to measure spindle power, expansion pressure, and honing head displacement was developed. Honing experiments were carried out using honing stones with varying sizes of abrasive grits and varying expansion speeds, that is, the indentation pulse of the honing stone's surface against the liner wall. Furthermore, a numerical model of lubricated elastohydrodynamic contact was developed to predict the friction performances and lubricant flow of the various liner surface finishes. It uses the real topography of the liner surface as input. In fact previous studies have found that the detailed nature of the surface finish plays an important role in ring friction and oil film thickness predictions [START_REF] Jocsak | The effects of cylinder liner finish on piston ring-pack friction[END_REF]. An appreciation of the limitations of the surface roughness parameters commonly used in automotive industries in providing a link between the honing process and the generated surface performance in the hydrodynamic regime is presented.
Experimental procedure
In this work, honing experiments were carried out on a vertical honing machine with an expansible tool (NAGEL no. 28-8470) (Figure 2). The workpiece consists of four cylinder liners of a lamellar gray cast iron engine crankcase.
The steps involved in the fabrication of the cylinder liners before the finish-honing operation are boring and rough honing, respectively (Table 1). The finish-honing stones, which differ in abrasive grit size, received a treatment by impregnation with sulfur. Another interesting variation in the feed system is the expansion mechanism in the honing head, for which three expansion velocities "V e" (1.5 µm/s, 4 µm/s and 8 µm/s) were considered. All the other working variables were kept constant (Table 1). Note that the rough and finish honing operations use a mechanical expansion system and the plateau honing uses a hydraulic system. For each combination of grit size and expansion velocity, tests were repeated five times. Thus, the sensitivity of the produced surface finish to its generation process was considered.
Negative surface replicas made of a silicon rubber material (Struers, Repliset F5) were used to assess the texture of honed surfaces after the plateau-honing stage at the mid-height of the cylinder bore specimen. Topographical features of replica surfaces were measured in three locations by a three-dimensional white light interferometer, WYKO 3300 NT (WLI). The surface was sampled at 640 × 480 points with the same step scale of 1.94 μm in the x and y directions. The form component was removed from the acquired 3D data using a least-squares method based on cubic spline functions.
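As an illustration of this pre-processing step, the short sketch below removes a smooth form component from a measured height map by least squares; a low-order polynomial basis is used here as a simple stand-in for the cubic-spline basis mentioned above, and the synthetic surface at the end is only a placeholder for real WLI data.

```python
import numpy as np

def remove_form(z, order=3):
    """Least-squares form removal from a height map z (2D array).
    Returns the residual (roughness + waviness); the polynomial basis is a
    simplification of the cubic-spline fit used in the text."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.linspace(-1.0, 1.0, nx), np.linspace(-1.0, 1.0, ny))
    basis = [x.ravel() ** i * y.ravel() ** j
             for i in range(order + 1) for j in range(order + 1 - i)]
    a = np.column_stack(basis)
    coeffs, *_ = np.linalg.lstsq(a, z.ravel(), rcond=None)
    return z - (a @ coeffs).reshape(z.shape)

# Example on a synthetic 480 x 640 surface: tilted plane plus random roughness.
yy, xx = np.mgrid[0:480, 0:640]
surface = 0.01 * xx + 0.02 * yy + 0.5 * np.random.default_rng(0).standard_normal((480, 640))
roughness = remove_form(surface)
```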
We can assume that the initial roughness of the cylinder bore has no influence on the obtained surface texture in this study. It affects only the honing cycle and the stone life, that is, the wear of the abrasive grits. In fact, the thickness of the removed material after finish honing (32.17 ± 2.21 µm) is greater than the total height of the original surface, which was about 24.56 ± 6.75 µm. This means that the finish honing operation completely penetrates the original surface topography and generates a new surface texture.
Numerical model for hydrodynamic friction simulation in the piston ring-pack system
A numerical model was developed to estimate friction at the ring-liner-piston contact. It takes into account the real topography of the cylinder liner. The purpose of this model is to predict qualitatively the friction coefficient in order to optimize performance when the groove characteristics of the cylinder liner surfaces are varied.
Geometry definition
An incompressible viscous fluid occupying, at a given moment, a domain bounded by a smooth plane surface P and by a rough surface R is considered. This domain is represented in Figure 3 (the profile in the x2 direction is not represented). It extends from 0 to l1 in the x1 direction, from 0 to l2 in the x2 direction, and up to the local separation h(x1, x2) in the vertical direction (Figure 3: the separation field between a smooth surface and a rough one).
EHL Equations
To estimate the pressure distribution, film thickness, and friction coefficient, a full-system approach for elastohydrodynamic lubrication (EHL) was developed. The Reynolds equation has been written in dimensionless form using the Hertzian dry contact parameters and the lubricant properties at ambient temperature. To account for the effects of non-Newtonian lubricant behaviour, effective viscosities are introduced.
The boundary condition P = 0 and the cavitation (free boundary) condition P ≥ 0 are used everywhere; a special treatment of the cavitation condition is applied, as explained below. In this equation, the effective viscosities in the X and Y directions replace the Newtonian viscosity to account for the non-Newtonian behaviour. For point contact, it is not possible to derive these effective viscosities analytically. The perturbational approach described by Ehret et al. [START_REF] Ehret | On the lubricant transport condition in elastohydrodynamic conjunctions[END_REF] is used.
This analysis is based on the assumption that the shear stresses are only partially coupled and that the mean shear stress is negligible in the y direction [START_REF] Ehret | On the lubricant transport condition in elastohydrodynamic conjunctions[END_REF][START_REF] Greenwood | Two-dimensional flow of a non-Newtonian lubricant[END_REF].
In our model, the Eyring model is used. The perturbational approach leads to the following dimensionless effective viscosities:
where the dimensionless mean shear stress is written as:
where S is the slide-to-roll ratio, S = 2(u1 − u2)/(u1 + u2), and N is given by:
The constant parameters of Equation ( 5) are given in the nomenclature.
The lubricant's viscosity and density are considered to depend on the pressure according to the Dowson and Higginson relation [START_REF] Dowson | Elastohydrodynamic lubrication. The fundamentals of roller and gear lubrication[END_REF] (Eq. 6) and Roelands equation [START_REF] Roelands | Correlational aspects of the viscosity-temperature-pressure relationships of lubricant oil[END_REF] (Eq. 7):
where ρ0 is the density at ambient pressure.
where η0 is the viscosity at ambient pressure, pr is a constant equal to 1.96 × 10^8, and zr is the pressure-viscosity index (zr = 0.65).
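For reference, the two pressure-dependence laws can be coded directly. This is a minimal sketch: the Roelands constants are those quoted above and η0 is taken from Table 2, but the Dowson-Higginson coefficients (0.6e-9 and 1.7e-9 Pa^-1) and the ambient density are the commonly used values and are assumptions here, since the text does not give them.

```python
import math

P_R = 1.96e8      # Roelands reference pressure (Pa)
Z_R = 0.65        # Roelands pressure-viscosity index
ETA_0 = 0.04      # ambient-pressure viscosity (Pa.s), from Table 2
RHO_0 = 870.0     # ambient-pressure density (kg/m^3), assumed value

def roelands_viscosity(p, eta0=ETA_0):
    """Roelands equation (Eq. 7): viscosity as a function of pressure p (Pa)."""
    return eta0 * math.exp(
        (math.log(eta0) + 9.67) * (-1.0 + (1.0 + p / P_R) ** Z_R)
    )

def dowson_higginson_density(p, rho0=RHO_0):
    """Dowson-Higginson relation (Eq. 6): density as a function of pressure p (Pa)."""
    return rho0 * (1.0 + 0.6e-9 * p / (1.0 + 1.7e-9 * p))

# example: lubricant properties at 0.5 GPa
print(roelands_viscosity(0.5e9), dowson_higginson_density(0.5e9))
```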
The film thickness equation is given in dimensionless form by the following equation:
Rg(X, Y) is the height of the liner surface topography at each position (X, Y). H0 is a constant determined by the force balance condition:
The normal elastic displacement of the contacting bodies is obtained by solving the linear elasticity equations in three-dimensional geometry with appropriate boundary conditions [START_REF] Habchi | A full-system approach of the elastohydrodynamic line/point contact problem[END_REF][START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF]. The geometry (Ω) used (Figure 4) is large enough compared to the contact size (Ωc) to be considered a semi-infinite structure. The linear elasticity problem consists in finding the displacement vector U in the computational domain with the following boundary conditions:
In order to simplify the model, the equivalent problem defined by [START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF] is used to replace the elastic deformation computation for both contacting bodies. One of the bodies is assumed to be rigid while the other accommodates the total elastic deformation. The following material properties of the bodies are used (w denotes the dimensionless absolute value of the Z-component of the displacement vector):
where Ei and νi are the Young's modulus and Poisson's coefficient, respectively, of the material of contacting body i (i = 1, 2).
Finally, the friction coefficient is evaluated by the following formula:

Cavitation problem

A negative pressure appears in the resolution of the Reynolds equation. Physically, these negative pressures are not relevant: in such cases the fluid evaporates and the pressure is limited by the vapor pressure of the fluid. This process is cavitation. The problem is usually solved by setting the negative pressure to zero, which ensures zero pressure and zero pressure gradient on the free boundary. In the full-system approach this solution is not possible, and the penalty method is used as an alternative, as explained in [START_REF] Habchi | A full-system approach to elastohydrodynamic lubrication problems: application to ultra-low-viscosity fluids[END_REF]. This method was introduced in EHL by Wu [START_REF] Wu | A penalty formulation and numerical approximation of the Reynolds-Hertz problem of elastohydrodynamic lubrication[END_REF]. An additional penalty term is introduced in the Reynolds equation:

where the penalty coefficient is a large positive number and the other factor is the negative part of the pressure distribution. This penalty term constrains the system to P ≥ 0 and forces the negative pressures to zero.
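The effect of the penalty term can be illustrated on a much simpler problem than the full 3D finite element model: a rigid, isoviscous, one-dimensional dimensionless Reynolds equation with a parabolic gap. The geometry, the penalty value and the finite-difference discretization below are illustrative choices only.

```python
import numpy as np

# 1D dimensionless Reynolds problem: d/dX(H^3 dP/dX) = dH/dX, parabolic gap
n, xi = 401, 1e10                     # grid size and penalty coefficient
x = np.linspace(-3.0, 2.0, n)
dx = x[1] - x[0]
h = 1.0 + 0.5 * x**2                  # rigid parabolic film shape

p = np.zeros(n)
for _ in range(50):                   # iterate: the penalized set depends on P
    a = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(1, n - 1):
        hp = (0.5 * (h[i] + h[i + 1]))**3
        hm = (0.5 * (h[i] + h[i - 1]))**3
        a[i, i - 1] = hm / dx**2
        a[i, i + 1] = hp / dx**2
        a[i, i] = -(hp + hm) / dx**2
        b[i] = (h[i + 1] - h[i - 1]) / (2.0 * dx)
        if p[i] < 0.0:                # penalty term xi * P^- pushes P back to ~0
            a[i, i] -= xi
    a[0, 0] = a[-1, -1] = 1.0         # boundary condition P = 0 at both ends
    p_new = np.linalg.solve(a, b)
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new

print("min P =", p.min())             # ~0 instead of a large negative lobe
```

Without the penalty term, the diverging part of the gap produces a large negative pressure lobe; with it, the negative pressures collapse to values of order 1/xi, which is the behaviour exploited in the full-system formulation.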
Numerical procedure
The Reynolds equation, the linear elastic equations and the load balance equation are solved simultaneously using a Newton-Raphson procedure. The dimensionless viscosity, density and film thickness H in the Reynolds equation are replaced by the expressions given above. Except for the load balance equation, a standard Galerkin formulation is used; for the load balance equation, an ordinary integral equation is added directly with the introduction of the unknown H0. Unstructured variable tetrahedral meshing is used for both the Reynolds and the linear elastic equations, for a total of 100000 degrees of freedom. The iterative process is repeated until the maximum relative difference between two consecutive iterations reaches 10^-6. Table 2 summarizes the fluid properties and contact parameters used in our simulation. The difference between our model and the Venner and Lubrecht results [START_REF] Venner | Multilevel Methods in Lubrication[END_REF] is less than 1%.
This test confirms the validity of the model presented in this paper. Figure 5 shows an example of the pressure distribution and film thickness profiles along the central line in the X direction for a rough surface such as the one presented in Figure 6.
Table 3 Comparison of the current model with the Venner & Lubrecht model [START_REF] Venner | Multilevel Methods in Lubrication[END_REF]
Results and discussion
The numerical model presented in Section 3 was used to predict friction in the ring-liner-piston contact and to analyze possible friction reduction strategies in the piston ring-pack.
Table 4 gathers all the experimental and numerical results. For the simulation of the cylinder ring-pack contact, only the average friction coefficients are compared.
Cylinder liner surface roughness distribution can differ between surfaces with the same root mean square roughness. This difference can have a significant effect on the performance and behavior of the surface within the piston ring-pack system. The surface finish created by the honing process is controlled by the size and dispersion of the abrasive particles adhering to the surface of the honing sticks. Figure 6 shows the effect of varying the grit size of the abrasive honing stones used in the finish honing stage on the three-dimensional topographical features of the produced surface. As shown in Figure 6, coarse abrasive grits yield deeper and larger lubrication valleys and consequently rougher surfaces.
Influence of abrasive grit size and expansion velocity on the impregnated surface texture and its friction performance within the piston ring-pack system
As a result of these honing experiments carried out with various sizes of abrasive grits, Figure 7 presents the predicted values of the coefficient of friction and mean oil film thickness as a function of the abrasive grit size of the honing stone used at the finish honing stage. It demonstrates clearly that the surface texture achieved with finer abrasive grits yields a lower hydrodynamic friction coefficient in the cylinder ring-pack system than that obtained with coarse abrasive grits. Since the generated honed surfaces have the same honing cross-hatch angle, the differences in predicted hydrodynamic friction observed between these finishes are mainly a result of the surface peak and valley characteristics. Hence, an increase in valley volume may increase the oil flow through the valleys of the surface, yielding a decrease in the oil film thickness, which in turn induces an increase in hydrodynamic friction.
A reduction of the expansion speed leads to a lower valley depth in the surface texture generated during coarse honing, as observed in Figure 8. This figure also shows that the expansion velocity has no effect on the spatial morphology of the generated surface texture, and hence on the roughness scale, as demonstrated by multiscale surface analysis in [START_REF] Lemke | Characteristic parameters of the abbot curve[END_REF].
Relationship between liner surface friction performance and honing process efficiency
The specific energy is used as a fundamental parameter for characterizing the honing process.
It is defined as the energy expended per unit volume of material removed. The specific honing energy characterizes the mechanisms of material removal from the workpiece. It is calculated from the following relationships:
Esp = (Pm × t_honing) / Qw    (12)
where t_honing is the effective honing time, Pm is the average power absorbed by the honing process, calculated as the difference between the on-load power recorded during finishing and the average off-load power recorded before and after the test, and Qw is the volumetric removal given by the following equation:
Qw = π Hc (d² - D²) / 4    (13)
where Hc is the cylinder height, and D and d are the cylinder diameters before and after the finish honing operation, respectively. Figure 9, which presents a plot of the friction coefficient versus specific energy, highlights the link between the honing process operating conditions and the functional behavior of plateau-honed surfaces in the hydrodynamic lubrication regime. It suggests that the optimum coefficient of friction with good honing efficiency is reached by using a grit size of 80-100 µm and an expansion velocity equal to 4 µm/s. Furthermore, significantly smoother surfaces are produced by plateau-honing with fine abrasive grit sizes, due to the low indentation capacity of the fine abrasive grains. This generated surface texture also yields a low predicted coefficient of friction in the piston-ring-liner interface.
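Equations (12) and (13), as reconstructed above, reduce to a few lines of code. The numerical values below (bore height, diameters and mean power) are placeholders for illustration only, not measured values from the honing tests.

```python
import math

def volumetric_removal(h_c, d_before, d_after):
    """Eq. (13): volume of material removed from the bore, in mm^3."""
    return math.pi * h_c * (d_after**2 - d_before**2) / 4.0

def specific_energy(p_mean, t_honing, q_w):
    """Eq. (12): specific honing energy, in J/mm^3."""
    return p_mean * t_honing / q_w

# hypothetical example: 120 mm high bore, 64.4 um taken on the diameter
q_w = volumetric_removal(h_c=120.0, d_before=77.0, d_after=77.0644)
esp = specific_energy(p_mean=350.0, t_honing=15.0, q_w=q_w)   # W, s, mm^3
print(q_w, esp)
```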
However, the use of fine abrasives has the lowest efficiency. In fact, it yields a lower material removal rate and consumes a large specific energy due to the predominance of the plowing abrasion mechanism [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Sabri | Functional optimisation of production by honing engine cylinder liner[END_REF]. This yields a lower stone life and generates undue tool wear.
Thus, to improve honing efficiency, conventional abrasives can be replaced by superabrasive crystals, which do not wear or break as rapidly.
Roughness characteristics of optimal plateau-honed surface texture
To give a rough estimate of the potential side effects of the surface optimization, the surface roughness has been evaluated using the functional roughness parameters Rk (height of the roughness core profile), Rpk (reduced peak height), and Rvk (reduced valley height) given by the ISO 13565-2 standard [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF][START_REF] Jocsak | The effects of cylinder liner finish on piston ring-pack friction[END_REF]. These parameters are obtained from the analysis of a bearing curve (the Abbott-Firestone curve), which is simply a plot of the cumulative probability distribution of surface roughness height [START_REF] Sabri | Multiscale study of finish-honing process in mass production of cylinder liner[END_REF]. The peak height is an estimate of the small peak region lying above the roughness core, which is rapidly removed during running-in. Figures 10, 11, and 12 display, for different abrasive grit sizes and at various expansion velocities, the existing correlation between the predicted friction coefficient within the cylinder ring-pack system and the functional roughness parameters of the plateau-honed surfaces of the cylinder bore, Rpk, Rk, and Rvk, respectively [START_REF] Pawlus | Effects of honed cylinder surface topography on the wear of piston-piston ring-cylinder assemblies under artistically increased dustiness conditions[END_REF]. These bubble plots show that all optimal surfaces in the hydrodynamic lubrication regime belong to the domain defined by Rpk < 1 µm, Rk < 3 µm and Rvk < 2.5 µm. However, this critical domain does not guarantee the optimal behavior of the honed surface finish. For example, in Figure 12, the criterion Rvk < 2.5 µm cannot exclude honed surfaces that induce a high friction coefficient; the same observation can be made for the Rpk and Rk criteria. This suggests that the standard functional roughness parameters commonly used in the automotive industry cannot give a good classification of plateau-honed surfaces according to their functional performance. Table 5 shows the linear correlation coefficients between the roughness parameters and the predicted coefficients of friction.
Table 5 The linear correlation coefficient between roughness parameters and the coefficient of friction
Correlation coefficient between Rpk and µ 0.666
Correlation coefficient between Rk and µ 0.664
Correlation coefficient between Rvk and µ 0.658
Hence, these standard functional parameters are not sufficient to give a precise and complete functional description of "ideal" honed surfaces. This can be attributed to the fact that bearing curve analysis is one-dimensional and provides no information about the spatial characteristics and scale of surface roughness.
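For completeness, the bearing-curve construction and the Rk family discussed above can be reproduced numerically. The sketch below is a simplified illustration: it works directly on raw profile heights, skips the filtering chain required by the ISO 13565 series and uses a plain 40 % least-squares secant, so the values it returns only approximate the standardized parameters.

```python
import numpy as np

def abbott_curve(z):
    """Material ratio (Abbott-Firestone) curve: heights sorted high to low."""
    c = np.sort(np.asarray(z, dtype=float))[::-1]
    mr = np.linspace(0.0, 100.0, c.size)
    return mr, c

def rk_parameters(z):
    """Approximate Rk, Rpk, Rvk, Mr1, Mr2 from a roughness height sample."""
    mr, c = abbott_curve(z)
    n = c.size
    w = int(round(0.4 * n))                     # 40 % material-ratio window
    drops = c[:n - w] - c[w:]                   # height drop over each window
    i0 = int(np.argmin(drops))                  # flattest (least-slope) window
    slope, icpt = np.polyfit(mr[i0:i0 + w], c[i0:i0 + w], 1)
    top = icpt                                  # equivalent line at Mr = 0 %
    bot = icpt + slope * 100.0                  # equivalent line at Mr = 100 %
    rk = top - bot
    mr1 = mr[np.argmax(c <= top)]               # curve crosses the upper level
    mr2 = mr[np.argmax(c <= bot)]               # curve crosses the lower level
    dmr = mr[1] - mr[0]
    a1 = np.sum(np.clip(c - top, 0.0, None)) * dmr   # peak area above the core
    a2 = np.sum(np.clip(bot - c, 0.0, None)) * dmr   # valley area below the core
    rpk = 2.0 * a1 / mr1 if mr1 > 0 else 0.0    # equivalent-triangle heights
    rvk = 2.0 * a2 / (100.0 - mr2) if mr2 < 100 else 0.0
    return rk, rpk, rvk, mr1, mr2
```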
Conclusion
This work focused on developing ring-pack friction reduction strategies within the limitations of current production honing processes. First, three-dimensional honed surface topographies were generated under different operating conditions using an instrumented industrial honing machine. Then, the three-dimensional surface topography of each honed cylinder bore is input into a numerical model which allows the friction performance of a cylinder ring-pack system in an EHL regime to be predicted. The strategy developed allows manufacturing to be related to the functional performance of cylinder bores through characterization. The results show that an increase in grit size will lead to an increase in surface roughness, with deeper valleys leading to an increase in hydrodynamic friction. They also show that the standard functional surface roughness parameters which are commonly used in the automotive industries do not provide a link between the honing process and the generated surface performance in the hydrodynamic regime.
Note that the analysis presented in this study does not take into consideration the effects of cylinder surface topography on its ability to maintain oil, that is, the oil consumption level. Experimental studies using a reciprocating bench tester will be carried out to evaluate the effect of honing operating conditions and cylinder surface topography on scuffing and oil consumption.
Nomenclature
α  Pressure-viscosity coefficient (GPa⁻¹)
δ  Elastic deflection of the contacting bodies (m)
δ̄  Dimensionless elastic deflection of the contacting bodies
D, d  Cylinder diameter before and after the finish honing operation, respectively (mm)
E_eq, ν_eq  Equivalent Young's modulus (Pa) and Poisson's coefficient, respectively
E_i, ν_i  Young's modulus (Pa) and Poisson's coefficient of component i, respectively
E_r  Reduced modulus of elasticity
Esp  Specific honing energy (J/mm³)
η_0  Ambient-temperature, zero-pressure viscosity (Pa.s)
η*_x, η*_y  Effective viscosities in the X and Y directions
R_x  Radius of curvature in the x direction (m)
Rg  Height of the liner surface topography at each position (m)
ρ  Lubricant density (kg.m⁻³)
ρ̄  Dimensionless lubricant density (= ρ/ρ_0)
ρ_0  Lubricant density under ambient conditions (kg.m⁻³)
S  Slide-to-roll ratio: S = 2(u_1 - u_2)/(u_1 + u_2)
u_i  Surface velocity of body i in the x direction (m.s⁻¹)
u_m  Mean entrainment velocity (m.s⁻¹)
x, y, z  Space coordinates (m)
X, Y  Dimensionless space coordinates (= x/a, y/a)
w  Value of the Z-component of the displacement vector (m)
Figure 1 Typical plateau-honed surface texture
Figure 2 (a) Vertical honing machine with expansible tool; (b) Schematic representation of the honing head in continuous balanced movement.
Figure 3 The separation field between a smooth surface and a rough one
Figure 4 Scheme of the geometric model for the computations of the elastic deformation (1) and of the Reynolds equation (2).
Figure 5 Pressure (P) and film thickness (H) profiles along the central line in the X direction.
Figure 6 Three-dimensional topographies of plateau-honed surfaces produced using different abrasive grit sizes in the finish honing stage: (a) 40 µm, (b) 110 µm, and (c) 180 µm. (Process working variables: Ve in the finish honing stage = 4 µm/s; all other parameters are kept constant and are given in Table 1.)
Figure 7 Evolution of the friction coefficient of the cylinder ring-pack as a function of the abrasive grit size in the finish honing stage.
Figure 8 Three-dimensional topographies of plateau-honed surfaces produced using three different expansion velocities in the finish honing stage: (a) 1.5 µm/s, (b) 4 µm/s, and (c) 8 µm/s. (Process working variables: abrasive grit size in the finish honing stage equal to 110 µm; all other parameters are kept constant and are given in Table 1.)
Figure 9 Predicted coefficient of friction of surface textures generated with different honing grit sizes and indentation pulse as a function of the specific energy consumed during finish honing. (The size of the circles is proportional to the size of the honing abrasives, which varies from 30 µm to 180 µm.)
Figure 10 Predicted coefficient of friction vs. functional roughness parameter Rpk. (The size of the circles is proportional to the size of the honing abrasives, which varies from 30 µm to 180 µm.)
Figure 11 Predicted coefficient of friction vs. functional roughness parameter Rk.
Figure 12 Predicted coefficient of friction vs. functional roughness parameter Rvk.
Table 1 Honing working conditions
Honing process variables Rough honing Finish honing Plateau honing
V a : Axial speed (m/min) 28 28 28
V r : Rotation speed (rpm) 230 230 230
Honing time (sec) 20 15 2
Expansion type Mechanical Mechanical Hydraulic
V e : Expansion velocity (µm/s) 5 1.5, 4, and 8
Number of stones 6 6 6
Abrasive grit type Diamond Silicon carbide Silicon carbide
Grain size (µm) 125 30-180 30
Bond type Metal Vitrified Vitrified
Abrasive stone dimensions 2 × 5 × 70 6 × 6 × 70 6 × 6 × 70
(mm × mm × mm)
Table 2 Parameters for our simulations with rough surfaces
Parameter Value Parameter Value
F N (N) 500 (GPa -1 ) 22.00
u m (m.s -1 ) 10.0 R x (m) 0.04
η 0 (Pa.s) 0.04 E i (GPa) 210
i 0.3 τ 0 (MPa) 0.5
Table 3 gives the dimensionless central and minimum oil film thickness for the dimensionless Moes and Venner parameters M = 200 and L = 10.
Table 4 Honing process variables, roughness parameters of the honed surfaces and their predicted coefficients of friction
Ve Grit size Rpk Rk Rvk µ (%)
1.5 180 0.663 1.859 1.960 2.495
1.5 145 0.777 1.798 2.429 2.466
1.5 110 0.679 1.813 2.025 2.474
1.5 90 0.564 1.954 1.825 2.459
1.5 80 0.566 1.628 1.678 2.426
1.5 50 0.253 0.625 0.598 2.441
1.5 40 0.217 0.659 0.426 2.446
4 180 1.091 2.680 3.650 2.543
4 145 1.091 2.418 3.455 2.468
4 110 0.979 2.521 2.969 2.465
4 90 0.913 2.045 2.720 2.447
4 80 0.838 2.016 2.800 2.434
4 50 0.353 0.622 0.748 2.459
4 40 0.287 0.931 0.530 2.450
8 180 1.292 3.403 4.276 2.462
8 145 1.308 2.922 4.057 2.487
8 110 1.135 2.828 3.469 2.462
8 90 1.096 2.534 3.213 2.488
8 50 0.521 1.007 1.263 2.461
8 40 0.359 0.510 0.644 2.442 |
01763347 | en | [
"spi.mat",
"spi.meca",
"spi.meca.mema"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01763347/file/MSMP_JMPT_2016_FOUILLAND.pdf | K Le Mercier
email: kevin.lemercier@univ-valenciennes.fr
M Watremez
E S Puchi-Cabrera
L Dubar
J D Guérin
L Fouilland-Paillé
Dynamic recrystallization behaviour of spheroidal graphite iron. Application to cutting operations
Keywords: SG iron, Hot cutting, Dynamic recrystallization, Finite element modelling
To increase the competitiveness of manufacturing processes, numerical approaches are unavoidable. Nevertheless, a precise knowledge of the thermo-mechanical behaviour of the materials is necessary to simulate accurately these processes. Previous experimental studies have provided a limited information concerning dynamic recrystallization of spheroidal graphite iron under hot cutting operations. The purpose of this paper is to develop a constitutive model able to describe accurately the occurrence of this phenomenon. Compression tests are carried out using a Gleeble 3500 thermo-mechanical simulator to determine the hot deformation behaviour of spheroidal graphite iron at high strains. Once the activation range of the dynamic recrystallization process is assessed, a constitutive model taking into account this phenomenon is developed and implemented in the Abaqus/Explicit software.
Finally, a specific cutting test and its finite element model are introduced. The ability of the numerical model to predict the occurrence of dynamic recrystallization is then compared to experimental observations.
Over the past few years, austempered ductile iron emerged for its application in several fields such as automotive and railway industries. This specific spheroidal graphite iron provides an efficient compromise between specific mechanical strength, fracture toughness and resistance to abrasive wear. Therefore, this material is intended to substitute forged steels for the weight reduction of numerous manufactured components [START_REF] Kovacs | Development of austempered ductile iron (ADI) for automobile crankshafts[END_REF]. To reach these enhanced mechanical properties, austempered ductile iron is obtained by a specific thermo-mechanical treatment (Figure 1). This consists in an austenitization of the cast iron in the temperature range 1123 -1223 K followed by quenching to an austempering temperature of 523 to 623 K causing the transformation of the austenite phase into ausferrite. A combined casting and forging process prior to this specific quenching is often performed to reduce manufacturing costs [START_REF] Meena | Drilling performance of green austempered ductile iron (ADI) grade produced by novel manufacturing technology[END_REF]. Also, with the aim of increasing the competitiveness, the removal of risers and feeder head is then performed at about 1273 K just after the casting operation. However, this stage can give rise to severe surface degradations under the cut surface, compromising the process viability.
A recent experimental investigation, performed by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF], The plastic flow stress evolution of a material which undergoes DRX is shown schematically in Figure 2. At stresses less than the critical stress for the onset of DRX (σ c ), the material undergoes both WH and DRV. However, once σ c is exceeded DRX will become operative and the three processes will occur simultaneously. As the strain applied to the material increases, the volume fraction recrystallized dynamically (X v ) will also increase In numerical cutting models, the [START_REF] Johnson | A constitutive model and data for metals subjected to large strains, high strain rates and high temperatures[END_REF] constitutive formulation is generally implemented because of its simplicity and numerical robustness [START_REF] Limido | SPH method applied to high speed cutting modelling[END_REF]. However, this empirical law describes the flow stress as a function of the total strain applied to the material, which is not a valid state parameter, by means of a simple parametric power-law relationship. Such a description would be incompatible with the evolution of flow stress mentioned above. Therefore, the present paper deals with the development of a specific constitutive model which not only allows the DRX process to be considered, but also that expresses the flow stress in terms of valid state parameters. Thus, the first challenge is to determine the activation range in which DRX occurs, by means of hot compression tests. Then, the selected model is implemented in the finite element analysis of a specific cutting operation. Finally, the prediction of the numerical model concerning the occurrence of DRX is discussed in relation to the experimental observations.
Experimental techniques
Material
The material employed for the present study is an ASTM A536 100-70-03 iron similar to that employed by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF]. This spheroidal graphite iron (SGI) exhibits a pearlitic matrix at room temperature and a small amount of ferrite surrounding the graphite nodules, called bullseye ferrite (Figure 3). Its chemical composition is given in Table 1.
Mechanical characterization
The experiments were performed on a Gleeble 3500 thermo-mechanical testing machine.
Compression specimens of 10 mm in diameter and 12 mm in length were tested under constant deformation conditions in a vacuum chamber. The samples were heated at 5 K.s -1 to the testing temperature and then held for one minute at the test temperature. A K-thermocouple was welded at the half height of the specimen to ensure the temperature measurement. The tests were conducted at mean effective strain rates of 0.5, 1 and 5 s -1 , at nominal temperatures of 1073, 1173 and 1273 K. At the end of the tests, the specimens were air cooled. At least two tests were conducted for each deformation condition.
Cutting operation
These experiments were conducted on an orthogonal cutting test bench. The SGI specimens replicated the feeder head obtained just after casting. They were cylindrical, 10 mm in diameter, with a fillet radius of 2.5 mm at the pin base. At the beginning of the test, the specimens were heated in a furnace up to the required temperature. They were then clamped in a refractory insert bed, which prevented heat losses. Finally, during the cutting operation, the high-strength steel cutting tool moves against the cylinder, as shown in Figure 4. The experimental device includes a piezoelectric sensor for measuring the cutting loads. A high-speed camera records the cutting operation at a frequency of 15000 frames per second with a resolution of 768 x 648 pixels. A speckle pattern covering the tool allows the determination of the effective cutting velocity by means of digital image correlation (DIC). The tests were performed three times at a temperature of 1273 K, with a tool rake angle of -10° and an initial cutting speed of 1.2 m.s⁻¹. The choice of the negative rake angle was based on the results of the study conducted by [START_REF] Fouilland | Experimental study of the brittle ductile transition in hot cutting of SG iron specimens[END_REF], which revealed that such a rake angle allows the observation of the brittle-ductile transition.
Figure 5 illustrates the mean effective stress-effective strain curves obtained at different temperatures and strain rates. It was observed that the typical deviation of the flow stress values from the mean curve was about ±2 MPa. The experimental stress-strain curves exhibit the same shape as that portrayed in Figure 2, highlighting the occurrence of DRX during the compression tests.
Figure 5: Effective stress-effective strain curves obtained at different temperatures and strain rates.
The samples tested at 5 s⁻¹ were observed using optical microscopy. Figures 6a and 6b show the optical micrographs of the samples deformed at 1073 K and 1173 K. These microstructures present a significant variation in their pearlitic matrix from the original state (Figure 3). Indeed, ferrite grains are more prominent and their size is finer. The microstructure of the sample deformed at 1273 K, exhibited in Figure 6c, has a pearlitic matrix, and the crushed graphite nodules are surrounded by ferrite grains to a lesser extent.
The predominant change is observed with the pearlite grains, which have a finer size than in the original state. Figure 8 shows the experimental cutting forces and also multiple pictures extracted from the recording of the cutting operation, whose main steps are described below.
1. The pin is in full contact with the tool and its base is deformed plastically.
2. A shear band is detected on the pin base; the cutting forces reach a maximum.
Constitutive model employed for the description of the flow stress curves
In the past few years, several constitutive relations including DRX effects have been developed to describe the behaviour of austenite under hot working conditions. Most of these constitutive models are based on the dependence of different variables on the Zener-Hollomon parameter [START_REF] Kim | Study on constitutive relation of AISI 4140 steel subject to large strain at elevated temperatures[END_REF][START_REF] Lin | Constitutive modeling for elevated temperature flow behavior of 42CrMo steel[END_REF][START_REF] Lurdos | Empirical and physically based flow rules relevant to high speed processing of 304L steel[END_REF]Wang et al., 2012).
Description of the constitutive model
In the present work, the description of the flow stress curves corresponding to the SGI samples deformed under hot-working conditions has been carried out on the basis of the models earlier advanced by Puchi-Cabrera et al. for structural steels deformed under hot-working conditions [2011; 2013a; 2013b; 2014a; 2014b; 2015]. Accordingly, the flow stress data is employed for determining the main stress parameters characteristic of the deformation of the material under these conditions, which include: yield stress, critical stress for the onset of DRX, actual or hypothetical saturation stress (depending on deformation conditions) and actual steady-state stress. Additionally, both the Avrami exponent and the time required to achieve 50% DRX are determined from the work-softening transients present in some of the stress-strain curves. However, the model can be simplified on the basis of the experimental results reported by Jonas et al. (2009) and Quelennec et al. (2011), who demonstrated that, for a broad range of steel grades, the steady-state stress is equal to the critical stress for the nucleation of DRX.
$$\frac{d\sigma_\varepsilon}{d\varepsilon} = \frac{\mu(T)}{A}\left[1-\left(\frac{\sigma_\varepsilon-\sigma_y(T,\dot{\varepsilon})}{\sigma_{sat}(T,\dot{\varepsilon})-\sigma_y(T,\dot{\varepsilon})}\right)^{2}\right]\frac{\sigma_{sat}(T,\dot{\varepsilon})-\sigma_y(T,\dot{\varepsilon})}{\sigma_\varepsilon-\sigma_y(T,\dot{\varepsilon})} \qquad (1)$$
In the above evolution equation, σε represents the current flow stress, σy the yield stress, σsat the hypothetical saturation stress, µ(T) the temperature-dependent shear modulus, ε̇ the effective strain rate, T the deformation temperature and A a material parameter that could either be a constant or a function of the deformation conditions through the Zener-Hollomon parameter, which is defined as Z = ε̇ exp(Q/RT), where Q is an apparent activation energy for hot deformation and R is the universal gas constant. As shown in what follows, the constant A in the above equation is computed from the experimental stress-strain data corresponding to each stress-strain curve determined under constant deformation conditions.
Regarding the temperature-dependent shear modulus, it can be confidently computed from the equation [START_REF] Kocks | Laws for work-hardening and low-temperature creep[END_REF]:
µ(T ) = 88884.6 -37.3T , MPa (2)
The two stress parameters present in eq. (1) can also be expressed in terms of T and ε̇ by means of the well-established Sellars-Tegart-Garofalo (STG) model [1966]. For the yield stress, its functional dependence on T and ε̇ is expressed as:
$$\sigma_y(T,\dot{\varepsilon}) = \delta_y \sinh^{-1}\left[\left(\frac{\dot{\varepsilon}\exp\left(\frac{Q}{RT}\right)}{B_y}\right)^{1/m_y}\right] \qquad (3)$$
Whereas, for the hypothetical saturation stress:
$$\sigma_{sat}(T,\dot{\varepsilon}) = \delta_s \sinh^{-1}\left[\left(\frac{\dot{\varepsilon}\exp\left(\frac{Q}{RT}\right)}{B_s}\right)^{1/m_s}\right] \qquad (4)$$
In eqs. 3 and 4, δ y , B y and m y , as well as δ s , B s and m s represent material parameters and Q an apparent activation energy for hot deformation.
The actual steady-state stress (σ ss ), which as indicated above is considered to be equal to the critical stress for the onset of DRX (σ c ), can also be correlated with Z by means of the STG model, according to the following expression:
$$\sigma_{ss}(T,\dot{\varepsilon}) = \delta_{ss} \sinh^{-1}\left[\left(\frac{\dot{\varepsilon}\exp\left(\frac{Q}{RT}\right)}{B_{ss}}\right)^{1/m_{ss}}\right] \qquad (5)$$
As in equation 4, δ ss , B ss and m ss represent material parameters.
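Using the parameter values reported in Table 3 and the activation energy of 283.3 kJ·mol⁻¹ adopted later in the text, equations (3)-(5) can be evaluated directly. The sketch below simply wraps them in a helper function; it is an illustration, not the identification procedure itself.

```python
import numpy as np

R = 8.314          # universal gas constant, J/mol/K
Q = 283300.0       # apparent activation energy for hot working, J/mol

# (delta [MPa], B [1/s], m) for yield, saturation and steady-state stresses (Table 3)
STG = {
    "yield":        (19.0,  1.77e8,  3.00),
    "saturation":   (104.2, 3.47e10, 4.96),
    "steady_state": (88.3,  1.13e11, 3.74),
}

def zener_hollomon(strain_rate, T):
    """Z = strain_rate * exp(Q / RT), in 1/s."""
    return strain_rate * np.exp(Q / (R * T))

def stg_stress(which, strain_rate, T):
    """Sellars-Tegart-Garofalo expression, eqs. (3)-(5), result in MPa."""
    delta, B, m = STG[which]
    return delta * np.arcsinh((zener_hollomon(strain_rate, T) / B) ** (1.0 / m))

# example: deformation at 1173 K and 1 s^-1
for key in STG:
    print(key, round(stg_stress(key, 1.0, 1173.0), 1))
```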
From the computational point of view, equation 1 is firstly integrated numerically. If the resulting value of σ ε is less than σ c (σ c = σ ss ), the flow stress is determined by σ ε (σ = σ ε ), otherwise, the flow stress should be computed from a second evolution equation, which includes the description of the work-softening transient associated to DRX. This second evolution law is expressed as:
$$\frac{d\sigma}{d\varepsilon} = \frac{\mu(T)}{A}\left[1-\left(\frac{\sigma-\sigma_y+\Delta\sigma X_v}{\sigma_{sat}-\sigma_y}\right)^{2}\right]\frac{\sigma_{sat}-\sigma_y}{\sigma-\sigma_y+\Delta\sigma X_v} \;-\; \frac{n_{Av}\,\Delta\sigma\,(1-X_v)\ln 2}{\dot{\varepsilon}\, t_{0.5}^{\,n_{Av}}}\left[-\frac{t_{0.5}^{\,n_{Av}}\ln(1-X_v)}{\ln 2}\right]^{1-\frac{1}{n_{Av}}} \qquad (6)$$
Thus, the incremental change in the flow stress with strain, once DRX becomes operative, is observed to depend on σ, σ y , σ sat , σ ss , the dynamically recrystallized volume fraction, X v , the Avrami exponent, n Av , the time for 50% DRX, t 0.5 , µ(T ), ε, T and constant A.
Since plastic deformation of the material can occur under transient conditions involving arbitrary changes in temperature and strain rate, X v should also be computed from the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation expressed in differential form:
$$\frac{dX_v}{dt} = \frac{n_{Av}(1-X_v)\ln 2}{t_{0.5}^{\,n_{Av}}}\left[-\frac{t_{0.5}^{\,n_{Av}}\ln(1-X_v)}{\ln 2}\right]^{1-\frac{1}{n_{Av}}} \qquad (7)$$
However, in this case the change in the volume fraction recrystallized with time is also expressed in terms of the time required to achieve 50% DRX, t0.5, which can be conveniently computed by means of the simple parametric relationship proposed by Jonas et al. (2009), expressed as:
$$t_{0.5} = D\left[\dot{\varepsilon}\exp\left(\frac{Q}{RT}\right)\right]^{-q}\exp\left(\frac{Q_{DRX}}{RT}\right)\ \mathrm{s} \qquad (8)$$
In the above equation, D represents a material constant weakly dependent on the austenitic grain size, whereas q and Q_DRX represent a material parameter and the apparent activation energy for dynamic recrystallization, respectively. Thus, the constitutive description of the material is represented by equations 1 through 8. Clearly, two important features can be observed. Firstly, the flow stress is absolutely independent of the total strain (ε) applied to the material. Secondly, given that the flow stress is determined from the numerical integration of two evolution laws, such a parameter can be readily evaluated when the deformation of the material occurs under transient deformation conditions, which are characteristic of actual industrial hot deformation processes. The experimental flow stress data determined at different deformation temperatures and strain rates constitutes the raw data for the rational computation of the different material parameters involved.

Identification of the different parameters involved in the constitutive model

The precise determination of the different stress parameters involved in the constitutive description of the material, as well as of the time required to achieve 50% DRX, can be conducted by means of the individual modelling of each stress-strain curve determined under constant conditions of temperature and strain rate. Figures 10 through 12 illustrate the comparison of the experimental stress-strain curves and the predicted ones employing equations (1) and (6) of the constitutive formulation presented in the previous section. The accurate description of the experimental curves suggests that their individual modelling allows a precise and reliable identification of all the parameters of interest indicated above. Table 2 summarizes the values of the relevant parameters that were determined for each deformation condition. However, in order to formulate a global constitutive equation able to predict the flow stress of the material under arbitrary deformation conditions, the functional dependence of σy, σsat, σss, t0.5 and A (if any) on ε̇ and T should also be accurately established.
Previous studies conducted on 20MnCr5 steel deformed under hot-working conditions (Puchi-Cabrera et al., 2014a) indicated that a single activation energy value of approximately 283.3 kJ.mol⁻¹ could be satisfactorily employed for the computation of the Zener-Hollomon parameter (Z), as well as for the corresponding description of σy, σsat, σss and t0.5 as a function of deformation temperature and strain rate. Therefore, in the present work the same value will be employed for both purposes. Thus, Figure 13 illustrates the change in σy, σsat and σss as a function of Z, as well as their corresponding description according to the STG model (eqs. 3 through 5). Table 3 summarizes the values of the different material parameters involved.
As can be observed from Figure 13, the predicted change in each parameter with Z, indicated by the solid lines, describes quite satisfactorily the experimental data, which provides a reliable formulation for modelling purposes. An interesting feature that can be observed from this figure is that related to the behaviour exhibited by σ sat and σ ss .
The curve corresponding to the change in σsat with Z, in the temperature and strain rate intervals explored in the present work, is always above that corresponding to the change in σss. However, as Z increases above a value of approximately 10^16 s⁻¹, both curves tend to approach each other, which suggests that DRX will occur to a lesser extent and, therefore, that DRV will be the only operative dynamic restoration process. It also shows that, as the temperature decreases and the strain rate increases, the extent to which DRX continues to occur is more limited.
Regarding the temperature and strain rate description of t 0.5 , Figure 14 clearly illustrates that the simple parametric relationship given in eq.( 8) constitutes a quite satisfactory approach for the computation of such a parameter under arbitrary deformation conditions.
Figure 14: Evolution of the time required to achieve 50% DRX as a function of Z. The fitted relationship shown on the plot is t0.5 = 0.0032 Z^(-0.85) exp(283300/RT) s, with Z = ε̇ exp(283300/RT).
As indicated on the plot, this relationship can be simplified further by assuming that the apparent activation energy for DRX has the same magnitude than that for hot deformation, which reduces the number of material parameters in the global constitutive formulation without compromising the accuracy of the model prediction. The values of the different constants involved in eq.( 8) are shown on the plot.
Another important consideration of the proposed constitutive formulation involves the temperature and strain rate dependence of constant A, as can be clearly observed from
Table 2. Previous work conducted both on C-Mn, 20MnCr5 and Fe-Mn23-C0.6 steels [START_REF] Puchi-Cabrera | Constitutive description of a low C-Mn steel deformed under hot-working conditions[END_REF]2013a;2013b;2014a;2014b;[START_REF] Puchi-Cabrera | Constitutive description of FeMn23C0.6 steel deformed under hot-working conditions[END_REF] indicates that this constant in general does not exhibit any significant dependence on T and ε̇. However, in the present case it can be clearly observed that such a constant exhibits a significant dependence on deformation conditions, which should be taken into consideration in the global constitutive formulation. Thus, Figure 15 highlights the change in A as a function of Z, which clearly indicates that an increase in the Zener-Hollomon parameter value leads to a significant and unexpected increase in the athermal work-hardening rate of the material, θ0 = µ/A.
Finite element modelling of the cutting operation
Finite element model description
Finite element modelling and analysis of the cutting operation were performed with the Abaqus/Explicit software. Figure 20 shows the initial mesh of the workpiece and the cutting tool. The specimen was meshed using 8-node 3D solid elements (C3D8RT). A higher mesh density was applied to the pin, as compared to the rest of the model. In order to reduce the computing time of this analysis, only half of both the specimen and the tool were modelled. This results in 9710 elements. A symmetry condition on the z-axis was applied to both right faces of the workpiece and the tool. Appropriate boundary conditions were applied to constrain the bottom, front and left faces of the specimen. The tool, which is considered as a rigid surface, was constrained to move in the cutting direction, at a speed which varies with time, as shown in Figure 7. No other displacements were allowed. The contact between the tool and the workpiece was modelled using a Coulomb friction law. The friction coefficient has been identified by an inverse method and set to an almost constant value of 0.2. The workpiece was modelled as an elastic-plastic material with isotropic hardening.
Implementation of the constitutive model
Since the cutting operation is assumed to be performed at a mean constant temperature and strain rate, the integrated version of the constitutive formulation presented in section 4 was implemented using a VUHARD user subroutine (Figure 21). The variables to be defined in the subroutine are the flow stress, Σ, and its variations with respect to the total effective strain, the effective strain rate and the temperature.
ε_r = (1/2) (A/µ(T)) (σ_sat(T, ε̇) - σ_y(T, ε̇))
ε_c = -ε_r ln[1 - ((σ_ss(T, ε̇) - σ_y(T, ε̇)) / (σ_sat(T, ε̇) - σ_y(T, ε̇)))²]
If ε < ε_c:
    σ_ε = σ_y(T, ε̇) + (σ_sat(T, ε̇) - σ_y(T, ε̇)) [1 - exp(-ε/ε_r)]^(1/2)
    Required variables: Σ = σ_ε, ∂Σ/∂ε (T, ε̇), ∂Σ/∂ε̇ (T, ε), ∂Σ/∂T (ε, ε̇)
Else:
    t = (ε - ε_c) / ε̇
    X_v = 1 - exp[-ln(2) (t/t_0.5)^(n_Av)]
    Δσ = X_v (σ_sat(T, ε̇) - σ_ss(T, ε̇))
    Required variables: Σ = σ_ε - Δσ, ∂Σ/∂ε (T, ε̇), ∂Σ/∂ε̇ (T, ε), ∂Σ/∂T (ε, ε̇)
The volume fraction recrystallized, X v , was set as a solution-dependent variable within the subroutine. The critical strain for the onset of DRX (ε c ), the relaxation strain (ε r ) and the time during which DRX occurs (t) were introduced.
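A plain-Python transcription of the integrated algorithm of Figure 21 is given below as a reference sketch (the actual VUHARD subroutine is written in Fortran for Abaqus). The STG parameters come from Table 3 and the t0.5 fit from Figure 14; however, the Avrami exponent and the work-hardening constant A are not reported in the excerpted text, so the values used here (n_av = 2 and A = 250) are placeholders that must be replaced by the identified ones.

```python
import numpy as np

R, Q = 8.314, 283300.0

def mu(T):                       # shear modulus, Eq. (2), MPa
    return 88884.6 - 37.3 * T

def Z(eps_rate, T):              # Zener-Hollomon parameter, 1/s
    return eps_rate * np.exp(Q / (R * T))

def stg(delta, B, m, eps_rate, T):
    return delta * np.arcsinh((Z(eps_rate, T) / B) ** (1.0 / m))

def t05(eps_rate, T):            # time for 50 % DRX, fitted relationship (Fig. 14)
    return 0.0032 * Z(eps_rate, T) ** (-0.85) * np.exp(Q / (R * T))

def flow_stress(eps, eps_rate, T, A=250.0, n_av=2.0):
    """Integrated constitutive model (Figure 21 algorithm), stress in MPa.

    A and n_av are placeholder values, not the identified parameters.
    """
    sy   = stg(19.0,  1.77e8,  3.00, eps_rate, T)   # yield stress
    ssat = stg(104.2, 3.47e10, 4.96, eps_rate, T)   # hypothetical saturation stress
    sss  = stg(88.3,  1.13e11, 3.74, eps_rate, T)   # steady-state (= critical) stress

    eps_r = 0.5 * (A / mu(T)) * (ssat - sy)                       # relaxation strain
    eps_c = -eps_r * np.log(1.0 - ((sss - sy) / (ssat - sy))**2)  # critical strain

    s_wh = sy + (ssat - sy) * np.sqrt(1.0 - np.exp(-eps / eps_r))
    if eps < eps_c:                                  # WH + DRV only
        return s_wh
    t = (eps - eps_c) / eps_rate                     # time spent recrystallizing
    xv = 1.0 - np.exp(-np.log(2.0) * (t / t05(eps_rate, T)) ** n_av)
    return s_wh - xv * (ssat - sss)                  # DRX softening

print([round(flow_stress(e, 1.0, 1173.0), 1) for e in np.linspace(0.01, 0.8, 5)])
```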
As mentioned in the previous section, the cutting operation is conducted at a mean temperature of 1273 K and a mean strain rate of approximately 200 s -1 . Furthermore, the high cutting speed allows no time for heat transfer between the tool and the workpiece material. Thus, the model was assumed to be adiabatic. According to [START_REF] Soo | 3D FE modelling of the cutting of Inconel 718[END_REF],
this assumption is generally used for the simulation of high-speed manufacturing processes.
The initial temperature of the test was applied to the specimen. No fracture criterion is introduced in this simulation as the emphasis is put on the beginning of the hot cutting operation corresponding to the first 4 ms. Since the constitutive law has been implemented in its integrated form, it assumes that during the cutting operation the mean strain rate remains constant and therefore, no effect of crack propagation has been taken into consideration concerning the evolution of the volume fraction recrystallized dynamically. In order to take into account changes in temperature and strain rate during the cutting operation, the constitutive law should be implemented in its differential formulation.
Simulation results
Figure 22 illustrates the comparison between the predicted forces before crack initiation and the experimental forces. The normal force is predicted quite satisfactorily, whereas the cutting force is clearly overestimated between steps 1 and 2. However, at 4 ms, the prediction errors are about 2.8 percent and 1.8 percent for the cutting and normal forces, respectively. The gap between the finite element model and the experimental results can be explained by the fact that the SGI specimen is not perfectly clamped in its insert. The relative velocity between the tool and the pin is then less than expected at the beginning of the test. This results in a time lag between the predicted cutting force and the actual one. Figure 23 shows the von Mises stress (σeq) distribution within the specimen during the cutting operation. The fillet radius on the pin base is an area of stress concentration. The average value of the von Mises stresses in the fillet radius at 4 ms is about 250 MPa. Under these deformation conditions, DRX is then operative, as the effective stress associated with the WH and DRV curve is greater than the critical stress for the onset of DRX.
Abbreviations
DIC  Digital image correlation
DRV  Dynamic recovery
DRX  Dynamic recrystallization
JMAK  Johnson-Mehl-Avrami-Kolmogorov model
SGI  Spheroidal graphite iron
STG  Sellars-Tegart-Garofalo model
WH  Work hardening
Arabic symbols
A  Material parameter
B_s, B_ss, B_y  Material parameters in the STG model, s⁻¹
D  Material parameter, s
m_s, m_ss, m_y  Material parameters in the STG model
n_Av  Avrami exponent
Q  Apparent activation energy for hot-working, kJ.mol⁻¹
q  Material parameter
Q_DRX  Apparent activation energy for dynamic recrystallization, kJ.mol⁻¹
R  Universal gas constant, J.mol⁻¹.K⁻¹
T  Absolute temperature, K
t  Time during which DRX occurs, s
t_0.5  Time for 50 percent recrystallization, s
X_v  Volume fraction recrystallized
Z  Zener-Hollomon parameter, s⁻¹
Greek symbols
δ_s, δ_ss, δ_y  Material parameters in the STG model, MPa
ε  Total effective strain
Figure 1: Heat treatment example for austempered ductile iron.
Figure 2: Typical dynamic recrystallization hardening curve.
Figure 3: Optical micrograph of the ASTM A536 100-70-03 iron in its original state (etched with saturated nitric acid).
Figure 4: Clamped specimen and cutting tool.
Figure 6: Optical micrographs of the ASTM A536 100-70-03 iron deformed at 5 s⁻¹ and different temperatures (etched with saturated nitric acid).
Figure 8: Axial and normal forces with the corresponding pictures recorded by the high-speed camera.
Figure 10: Comparison of the experimental stress-strain curves and the constitutive formulation at 1073 K.
Figure 11: Comparison of the experimental stress-strain curves and the constitutive formulation at 1173 K.
Figure 12: Comparison of the experimental stress-strain curves and the constitutive formulation at 1273 K.
Figure 13: σy, σsat and σss as a function of Z.
Figure 15: A as a function of Z.
Figure 17: Comparison between predicted and experimental stress-strain curves at 1173 K.
Figure 18: Comparison between predicted and experimental stress-strain curves at 1273 K.
Figure 19: Maximum relative error between the computed and predicted values of the flow stress.
Figure 20: Isometric view of the initial mesh configuration.
Figure 21: Algorithm defined in the VUHARD subroutine.
Figure 22: Comparison between experimental and predicted forces.
Table 1: Chemical composition of the ASTM A536 100-70-03 iron.
Element C Si Mn S Cu Ni Cr Mo Mg
Composition (wt%) 3.35 2.72 0.16 0.009 0.87 0.71 <0.03 0.21 0.043
Table 2: Relevant parameters involved in the description of the individual stress-strain curves.
Table 3: Material parameters involved in the description of σy, σsat and σss as a function of Z, according to the STG model.
δy, MPa | By, s⁻¹ | my | δs, MPa | Bs, s⁻¹ | ms | δss, MPa | Bss, s⁻¹ | mss
19.0 | 1.77E+08 | 3 | 104.2 | 3.47E+10 | 4.96 | 88.3 | 1.13E+11 | 3.74
The activation range of dynamic recrystallization has been determined. A typical cutting process has been modelled both from the experimental and the numerical point of view. The numerical predictions agree with the experimental results and highlight some explanations concerning the occurrence of dynamic recrystallization within the shear zone. Currently, further investigations are being carried out in order to validate the proposed constitutive description by modelling other cutting configurations. Also, a fracture criterion is being characterized for the spheroidal graphite iron, in order to investigate the competition between material fracture and the occurrence of DRX during the cutting operation.
Acknowledgements
The present research work has been supported by the ARTS Carnot Institute and was made possible through the collaboration between MSMP and LAMIH laboratories. The authors gratefully acknowledge the support of this institute. They also express their sincere |
01763369 | en | [
"info.info-se"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763369/file/MSR18_paper%20%28camera-ready%20version%29.pdf | César Soto-Valero
email: cesarsotovalero@gmail.com
Johann Bourcier
email: johann.bourcier@irisa.fr
Benoit Baudry
email: baudry@kth.se
Detection and Analysis of Behavioral T-patterns in Debugging Activities
Keywords: Debugging interactions, developers' behavior, T-patterns analysis, empirical software engineering
A growing body of research in empirical software engineering applies recurrent patterns analysis in order to make sense of the developers' behavior during their interactions with IDEs. However, the exploration of hidden real-time structures of programming behavior remains a challenging task. In this paper, we investigate the presence of temporal behavioral patterns (T-patterns) in debugging activities using the THEME software. Our preliminary exploratory results show that debugging activities are strongly correlated with code editing, file handling, window interactions and other general types of programming activities. The validation of our T-patterns detection approach demonstrates that debugging activities are performed on the basis of repetitive and well-organized behavioral events. Furthermore, we identify a large set of T-patterns that associate debugging activities with build success, which corroborates the positive impact of debugging practices on software development.
INTRODUCTION
Debugging is a widely used practice in the software industry, which facilitates the comprehension and correction of software failures. When debugging, developers need to understand the pieces of the software system in order to successfully correct specific bugs. Modern Integrated Development Environments (IDEs) incorporate useful tools for facilitating the debugging process, allowing developers to focus only in their urgent needs during the fixing work. However, debugging is still a very challenging task that typically involves the interaction of complex activities through an intense reasoning workflow, demanding a considerable cost in time and effort [START_REF] Perscheid | Studying the advancement in debugging practice of professional software developers[END_REF].
Due to the complex and dynamic nature of the debugging process, the identification and analysis of repetitive patterns can benefit IDE designers, researchers, and developers. For example, IDE designers can build more effective tools to automate frequent debugging activities, suggesting related tasks, or designing more advanced code tools, thus improving the productivity of developers. Furthermore, researchers can better understand how debugging behavior is related to developers' productivity and code quality. Unfortunately, most of existing studies on debugging activities within IDEs do not consider the complex temporal structure of developers' behavior, thus including only information about a small subset of possible events in the form of data streams [START_REF] Parnin | Are Automated Debugging Techniques Actually Helping Programmers?[END_REF].
The detection of temporal behavioral patterns (T-patterns) is a relevant multivariate data analysis technique used in the discovery, analysis and description of temporal structures in behavior and interactions [START_REF] Magnusson | Discovering hidden time patterns in behavior: T-patterns and their detection[END_REF]. This technique allows to determine whether two or more behavioral events occur sequentially, within statistically significant time intervals.
In this paper, we perform a T-patterns analysis to study debugging behavior. More specifically, we examine the relations of debugging events with other developers' activities. Through the analysis of the MSR 2018 Challenge Dataset, consisting of enriched event streams of developers' interactions on Visual Studio, we guide our work by the following research questions:
• RQ 1 : What developing events are the most correlated with debugging activities? • RQ 2 : Can we detect behavioral T-patterns in debugging activities? • RQ 3 : Is the analysis of T-patterns a suitable approach to show the effect of systematic debugging activities on software development?
We aim to answer these questions by analyzing a set of 300 debugging sessions filtered from the MSR 2018 Challenge Dataset of event interactions. The objective of our analysis is twofold: (1) to provide researchers with useful information concerning the application of T-pattern analysis in the study of developers' behavior; and (2) to present empirical evidence about the influence of debugging on software development.
Previous studies analyzed debugging behavior using patterns detection methods. For example, in the development of automated debugging techniques for IDE tools improvement [START_REF] Parnin | Are Automated Debugging Techniques Actually Helping Programmers?[END_REF]. However, to the best of our knowledge, this is the first attempt of using T-patterns analysis to investigate debugging session data.
DATA MANAGEMENT
The dataset for the 2018 MSR Challenge, released on March 2017 by the KaVE Project1 , contains over 11M enriched events that correspond to 15K hours of working time, originating from a diverse group of 81 developers [START_REF] Proksch | Enriched Event Streams: A General Dataset For Empirical Studies On In-IDE Activities Of Software Developers[END_REF]. The data was collected using FeedBaG, an interaction tracker for Visual Studio, which was designed with the purpose of capturing a large set of different in-IDE interactions during software developing in the shape of enriched event streams [START_REF] Amann | FeedBaG: An interaction tracker for Visual Studio[END_REF].
The THEME software supports the detection, visualization and analysis of T-patterns. It has been successfully applied in many different areas, from behavioral interaction between human subjects and animals to neural interactions within living brains [START_REF] Magnusson | Discovering hidden temporal patterns in behavior and interaction: T-pattern detection and analysis with THEME[END_REF]. Since the data transferred by contributors is anonymous, we base our T-pattern analysis on the session Id that identifies a developer's work during each calendar day. Our filtering routine removes duplicate events and generates individual session files with a structure appropriate for THEME. Date-time information of triggered events is converted to epoch-second values, i.e., an integer representing the number of elapsed seconds from 1970-01-01T00:00:00Z. Only sessions with debugging interactions were retained for further analysis. Our resulting dataset contains 300 sessions and more than 662K events. Figure 1 shows an example of the data inputs: the variable vs. value correspondence table with the debugging-related event types filtered ("vvt.vvt") and a data file of debugging interactions ("DebuggingSession.txt").
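The filtering step can be reproduced along the following lines. The field names used here (sessionId, TriggeredAt, eventType) are illustrative placeholders, since the exact JSON schema of the FeedBaG event stream is not reproduced in this paper.

```python
import json
from datetime import datetime, timezone
from collections import defaultdict

def to_epoch_seconds(timestamp):
    """Convert an ISO-8601 timestamp to elapsed seconds since 1970-01-01T00:00:00Z."""
    return int(datetime.fromisoformat(timestamp).astimezone(timezone.utc).timestamp())

def build_sessions(event_lines):
    """Group de-duplicated events per session Id, keeping only debugging sessions."""
    sessions, seen = defaultdict(list), set()
    for line in event_lines:
        event = json.loads(line)
        key = (event["sessionId"], event["TriggeredAt"], event["eventType"])
        if key in seen:                       # drop duplicate events
            continue
        seen.add(key)
        sessions[event["sessionId"]].append(
            (to_epoch_seconds(event["TriggeredAt"]), event["eventType"])
        )
    # retain only sessions containing at least one debugging interaction
    return {
        sid: sorted(evts)
        for sid, evts in sessions.items()
        if any("Debug" in etype for _, etype in evts)
    }
```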
Our analysis goes beyond the discovery of events' associations. We are more interested in explaining those connections in terms of developers' behaviour by means of T-patterns analysis. In the following, we perform the events analysis using THEME software. First, we show how interesting Tpatterns can be detected and visualized through the finegrained inspection of interactions in individual debugging sessions. Next, we aim to find general behavioral patterns that occur within statistical significance time thresholds for all the debugging sessions studied.
T-PATTERNS ANALYSIS
In this section, we summarize the main concepts regarding the detection and analysis of T-patterns [START_REF] Magnusson | Discovering hidden time patterns in behavior: T-patterns and their detection[END_REF]. Through the use of an active debugging session as a case study, we illustrate the benefits of using the THEME software as a tool for exploring hidden real-time structures of programming behaviour in IDEs. Our general approach consists of three phases, the last of which is an evolution step that deals with redundant detections, where partial and equivalent patterns are removed. THEME provides statistical validation features, global and per pattern, using randomization or repeated Monte Carlo simulations [START_REF] Magnusson | Discovering hidden temporal patterns in behavior and interaction: T-pattern detection and analysis with THEME[END_REF].
T-patterns visualization. A T-pattern can be viewed as a hierarchical and self-similar pseudo fractal pattern, characterized by significant translation symmetry between their occurrences. Figure 2b shows the binary detection tree of a complex T-pattern of length 7 found in the debugging session of Figure 2a. The large vertical lines connecting event points indicate the occurrence time of the T-pattern. The node marked in green indicates an event that can be predicted from the earlier parts of the pattern (also called T-retrodictor).
GENERAL FINDINGS
We perform an exploratory data analysis to examine the association among events. We use the phi coefficient of correlation, a common measure for binary correlation, and the tidytext R package in order to visualize how often events appear together relative to how often they appear separately [START_REF] Silge | tidytext: Text Mining and Analysis Using Tidy Data Principles in R[END_REF]. Figure 3 shows the 10 developer activities that we find most correlated with debugging (𝜑 > 0.5). From the figure, we observe that debugging activities are strongly correlated with code editing, window interactions, document saving, and activity events. In addition, we found that code completion, keyboard navigation and short code editing events are not directly correlated with debugging activities. Based on the observation of Figure 3, we derive the answer to the RQ 1 as follows:
Answer to RQ 1 : Debugging activities are more correlated with editing, file handling, window interactions and activity events than with other general commands or event types.
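For readers who want to reproduce this kind of association analysis, the following minimal Python sketch computes the phi coefficient between two event types from per-session event collections; it is an illustration only and does not reproduce the authors' tidytext/widyr pipeline, and the event names passed as defaults are hypothetical.

    # Minimal sketch of the pairwise phi computation between two event types.
    from math import sqrt

    def phi(sessions, a="Debug.Start", b="EditEvent.Large"):
        # sessions: {session_id: iterable of event-type labels observed in that session}
        n11 = n10 = n01 = n00 = 0
        for events in sessions.values():
            has_a, has_b = a in events, b in events
            n11 += has_a and has_b
            n10 += has_a and not has_b
            n01 += (not has_a) and has_b
            n00 += (not has_a) and (not has_b)
        denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
        return (n11 * n00 - n10 * n01) / denom if denom else 0.0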
We are mostly interested in analyzing general patterns of events that occur within the debugging workflow. Such patterns allow for insights into the dynamic nature of the developer's behavior while debugging software. Accordingly, all debugging sessions were ordered and concatenated in time to form a single dataset for global analysis with THEME. Thus, the 300 debugging sessions were merged, resulting in a dataset with 263 different event types and more than 460K event occurrences.
The following search parameters were fit in THEME via grid search: (a) detection algorithm = FREE; (b) minimum number of occurrences of pattern = 10; (c) significance level = 0.0005 (0.05% probability of any CI relationship to occur by chance); (d) maximum number of hierarchical search levels = 10; (e) exclusion of frequent event types occurring above the mean number of occurrences of ±1 standard deviations.
For the above parameters, more than 12K T-patterns were detected. We ran the algorithm on 10 randomized versions of the data, using the same search parameters, to check whether the set of detected T-patterns differs significantly from those obtained randomly. Figure 4 shows the comparison between the distributions of the patterns detected on the original data and the average number of patterns detected after the randomization procedure. The incidence of T-patterns in real data was significantly greater than in its randomized versions. Accordingly, it is clear that the T-patterns detected in the original dataset were not obtained by chance. This result demonstrates that debugging activities are organized on the basis of behavioral events, which occur sequentially and within significant constraints on the time intervals that separate them. Based on this result, we derive the answer to the RQ 2 as follows:
Answer to RQ 2 : The validation of the T-patterns detected using THEME provides meaningful evidence about the presence of behavioral patterns in debugging activities. Once T-patterns have been detected, the next challenge is to select relevant T-patterns for subsequent analysis. We are interested in studying T-patterns that associate debugging activities with build results. To this end, we used the filters available in THEME, which allow searching for the presence of desired event types in patterns. We found a total of 735 T-patterns that directly associate debugging activities with successful builds, whereas only 67 T-patterns were found for unsuccessful builds. This result shows that, after a methodical sequence of debugging activities, developers generally have a much higher chance of achieving successful builds.
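The randomization check used above to validate RQ 2 can be approximated with a simple shuffling procedure like the following sketch (illustrative only; THEME's own randomization options may differ): event labels are permuted within each session so that label frequencies are preserved while real-time structure is destroyed.

    # Sketch of a label-shuffling randomization baseline.
    import random

    def randomize_session(events, seed=0):
        """events: list of (epoch_second, event_type) pairs for one session."""
        rng = random.Random(seed)
        times = [t for t, _ in events]
        labels = [lbl for _, lbl in events]
        rng.shuffle(labels)                  # keep the time points, permute the labels
        return list(zip(times, labels))

    def randomized_copies(sessions, n_copies=10):
        return [{sid: randomize_session(evs, seed=i) for sid, evs in sessions.items()}
                for i in range(n_copies)]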
Table 1 presents a global comparison between the T-patterns found in debugging sessions that are directly related with successful and unsuccessful build results. From the table, we can see that T-patterns related to successful builds occur more frequently and have a more complex structure, with higher values of pattern length and duration. On the other hand, T-patterns associated with unsuccessful builds present a simpler structure, with a mean length value of nearly 2 events only and a duration that is almost five times smaller than that of T-patterns associated with successful builds. This result shows that more complex debugging sessions (e.g., those in which developers utilize more specialized debugging tools or invest more time to complete) are more likely to pass the builds and correct software failures.
By analyzing the T-patterns of sessions with unsuccessful builds, we find that they contain mostly events that introduce minor changes in code (e.g., "Edit.Delete", "Edit.Paste"). We hypothesize that this type of debugging session was used to quickly trace the effect of these changes. Table 1 also shows representative examples of T-pattern occurrences for both types of build results. Based on the T-pattern analysis performed, we derive the answer to the RQ 3 as follows:
Answer to RQ 3 : The quantitative analysis of detected T-patterns in debugging sessions shows that, in general, complex debugging activities achieve successful builds.
CONCLUSION
In this paper, we introduced T-pattern analysis as a useful approach to better understand developers' behavior during in-IDE activities. Through the analysis of 300 sessions with debugging interactions, the results obtained using the THEME software bring evidence of the presence of common T-patterns during debugging. In particular, our analysis shows a strong connection between debugging activities and successful builds. We believe that the study of developers' activities using T-pattern analysis can advance the understanding of the complex behavioral mechanisms that mediate the process of software development, which can benefit both practitioners and IDE designers. In order to aid in future replication of our results, we make our THEME project, filtered dataset and R scripts publicly available online3 .
Figure 1: Data input structure for THEME software.
T-data. A T-data set consists of a collection of one or more T-series, where each T-series represents the occurrence points 𝑝1, ..., 𝑝𝑖, ..., 𝑝𝑛 of a specific type of event during some observation interval [1, 𝑇]. Figure 2a shows an example of T-data coded from a debugging session with 166 squared data points (event occurrences), 25 T-series (event types), and a duration of 823 units. Each T-series on the Y-axis represents an event activity triggered in the IDE during the session, while the X-axis is the time at which each specific event was invoked. For the search parameters used, the blue squares belong to detected T-patterns, while the red ones do not. T-pattern. A T-pattern is composed of 𝑚 ordered components 𝑋1 . . . 𝑋𝑖 . . . 𝑋𝑚, any of which may be occurrence points or T-patterns, on a single dimension (time in this case), such that, over the occurrences of the pattern, the distances 𝑋𝑖 𝑋𝑖+1, with 𝑖 = 1 . . . 𝑚 -1, vary within a significantly small interval [𝑑1, 𝑑2]𝑖, called a critical interval (CI). Hence, a T-pattern 𝑄 can be expressed as: 𝑄 = 𝑋1[𝑑1, 𝑑2]1 . . . 𝑋𝑖[𝑑1, 𝑑2]𝑖𝑋𝑖+1 . . . 𝑋𝑚-1[𝑑1, 𝑑2]𝑚-1𝑋𝑚, where 𝑚 is the length of 𝑄 and 𝑋𝑖[𝑑1, 𝑑2]𝑖𝑋𝑖+1 means that, within all occurrences of the pattern in the T-data, after an occurrence of 𝑋𝑖 at the instant 𝑡 there is a time window [𝑡 + 𝑑1, 𝑡 + 𝑑2]𝑖 within which 𝑋𝑖+1 will occur. Any T-pattern 𝑄 can be divided into at least one pair of shorter ones related by a corresponding CI: 𝑄𝑙𝑒𝑓𝑡[𝑑1, 𝑑2]𝑄𝑟𝑖𝑔ℎ𝑡. Recursively, 𝑄𝑙𝑒𝑓𝑡 and 𝑄𝑟𝑖𝑔ℎ𝑡 can thus each be split until the pattern 𝑋1 . . . 𝑋𝑚 is expressed as the 1 to 𝑚 terminals (occurrence points or event types) of a binary tree. T-pattern detection. The T-pattern detection algorithm consists of a set of routines for CI detection, pattern construction and pattern completeness competition. The algorithm works bottom-up, level-by-level, and uses competition and evolution to deal with redundant detections, where partial and equivalent patterns are removed. THEME provides statistical validation features, global and per pattern, using randomization or repeated Monte Carlo simulations [START_REF] Magnusson | Discovering hidden temporal patterns in behavior and interaction: T-pattern detection and analysis with THEME[END_REF].
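To make the critical-interval idea concrete, the following deliberately simplified Python sketch tests a single candidate interval [d1, d2] between two event types against a binomial independence baseline. It is only a rough approximation of the first level of Magnusson's detection algorithm, not THEME's implementation.

    # Crude critical-interval test: is B found in [t+d1, t+d2] after A more often than chance?
    from math import comb

    def binom_sf(k, n, p):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def ci_pvalue(a_times, b_times, d1, d2, T):
        hits = sum(any(t + d1 <= b <= t + d2 for b in b_times) for t in a_times)
        # chance that a window of length (d2 - d1 + 1) contains at least one B,
        # if the B occurrences were placed independently and uniformly on [1, T]
        p_b = min(len(b_times) / T, 1.0)
        p_window = 1.0 - (1.0 - p_b) ** (d2 - d1 + 1)
        return binom_sf(hits, len(a_times), p_window)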
(a) T-data representation. (b) T-pattern visualization.
Figure 2: T-patterns analysis of a debugging session; both panels were created with THEME.
Figure 3: Pairwise correlation between events related to debugging activities.
Figure 4: Distribution of T-pattern lengths detected in real and randomized data.
Table 1: Summary of T-patterns detected which reflect the relation of debugging activities with build results.

Build Result | Occurrence | Length | Duration | T-pattern Example
Successful | 735 | 4.87±0.72 | 580.09±232.51 | (Debug.Start((Debug.StepOver Debug.StopDebugging)BuildEvent.Successful))
Unsuccessful | 67 | | 120.71±35.91 | (Debug.Start(Edit.Delete(DocumentEvent.Saved BuildEvent.Unsuccessful)))
Available at http://www.kave.cc/datasets
For more information see http://patternvision.com
https://github.com/cesarsotovalero/msr-challenge2018 |
01763373 | en | [
"info.info-au"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763373/file/VSS18-v4.pdf | Tonametl Sanchez
email: t.sanchez@inria.fr
Andrey Polyakov
email: andrey.polyakov@inria.fr
Jean-Pierre Richard
email: jean-pierre.richard@centralelille.fr
Denis Efimov
email: denis.efimov@inria.fr
J.-P Richard
A robust Sliding Mode Controller for a class of bilinear delayed systems
In this paper we propose a Sliding Mode Controller for a class of scalar bilinear systems with delay in both the input and the state. Such a class is considered since it has shown to be suitable for modelling and control of a class of turbulent flow systems. The stability and robustness analysis for the reaching phase in the controlled system are Lyapunov-based. However, since the sliding dynamics is infinite dimensional and described by an integral equation, we show that the stability and robustness analysis is simplified by using Volterra operator theory.
I. INTRODUCTION
Turbulent Flow Control is a fundamental problem in several areas of science and technology, and improvements to address such a problem can produce a very favourable effect in, for example, costs reduction, energy consumption, and environmental impact [START_REF] Brunton | Closed-Loop Turbulence Control: Progress and Challenges[END_REF]. Unfortunately, in general, model based control techniques find several obstacles to be applied to the problem of Flow Control. One of the main difficulties is that the par excellence model for flow is the set of Navier-Stokes equations that is very complicated for simulation and control design [START_REF] Brunton | Closed-Loop Turbulence Control: Progress and Challenges[END_REF]. On the other hand, when the model is very simple it is hard to represent adequately the behaviour of the physical flow. In [1] the authors say that the remaining missing ingredient for turning flow control into a practical tool is control algorithms with provable performance guarantees. Hence, adequate models (a trade-off between simplicity, efficiency, and accuracy) and algorithms for Flow Control are required.
In [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF] a model for a flow system was proposed, such a model consists in a bilinear differential equation with delays in the input and in the state. An attractive feature of the model is that, according to the experimental results, with some few parameters the model reproduces the behaviour of the physical flow with a good precision. The justification for using such a kind of equations as models for flow systems was presented in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF]. We reproduce that reasoning as a motivational example in Section II.
For a particular case of the model introduced in [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF], a sliding mode controller was proposed in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. That control technique was chosen due to the switching features of the actuators. A good experimental performance was obtained with such a controller1 . Hence, it is worth to continue with the study of the class of bilinear delayed systems and to develop general schemes for analysis and control design. In this paper we design a Sliding Mode Controller for a subclass of such systems by following the idea proposed in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. Nonetheless, the result of this paper differs from [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] in the following points.
• The dynamics on the sliding surface is infinite dimensional and is described by an integral equation. In [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] the asymptotic stability of the sliding motion was analysed in the frequency domain. In this paper we propose, as one of the main contributions, to analyse the stability properties of such dynamics by considering it as a Volterra integral equation. This allows us to simplify the analysis and to give simple conditions to guarantee asymptotic stability of the solutions. Hence we avoid the necessity of making a frequency domain analysis to determine the stability properties of an infinite dimensional system. • The analysis of the reaching phase is Lyapunov-based, this is important because it is not only useful to establish stability properties, but also robustness, whose analysis is performed applying Volterra operator theory. • Although the systems considered in this paper and those in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] are similar, the assumptions on the parameters are different. This allows us to enlarge the class of systems which can be considered for the application of the proposed methodology. Paper organization: In Section III a brief description of the control problem is given. Some properties of the system's solutions are studied in Section IV. The design and analysis of the proposed controller are explained in Section V. A robustness analysis is given in Section VI. A numerical example is shown in Section VII. Some final remarks are stated in Section VIII.
Notation: $\mathbb{R}$ denotes the set of real numbers. For any $a \in \mathbb{R}$, $\mathbb{R}_{\geq a}$ denotes the set $\{x \in \mathbb{R} : x \geq a\}$, and analogously for $\mathbb{R}_{>a}$. For any $p \in \mathbb{R}_{\geq 1}$, $L^p(J)$ denotes the set of measurable functions $x : J \subset \mathbb{R} \to \mathbb{R}$ with finite norm $\|x\|_{L^p(J)} = \left(\int_J |x(s)|^p\, ds\right)^{1/p}$, and $L^\infty(J)$ denotes the set of measurable functions with finite norm $\|x\|_{L^\infty(J)} = \operatorname{ess\,sup}_{t\in J}|x(t)|$.
II. MOTIVATIONAL EXAMPLE
In this section we reproduce the example given in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF] on how a bilinear delayed differential equation can be obtained as a model for a flow system.
A unidimensional approximation to the Navier-Stokes equations is the Burgers' equation given by
$$\frac{\partial v(t, x)}{\partial t} + v(t, x)\frac{\partial v(t, x)}{\partial x} = \nu \frac{\partial^2 v(t, x)}{\partial x^2}\,, \qquad (1)$$
where v : R2 → R is the flow velocity field, x ∈ R is the spatial coordinate, and ν ∈ R ≥0 is the kinematic viscosity. Assume that x ∈ [0, F ] for some F ∈ R >0 . Suppose that v(t, x) = v(x -ct), i.e. the solution of (1) is a travelling wave with velocity c ∈ R ≥0 , it has been proven that (1) admits this kind of solutions [START_REF] Debnath | Nonlinear Partial Differential Equations for Scientists and Engineers[END_REF]. A model approximation of (1) can be obtained by discretizing (1) in the spatial coordinate. Here, we use central finite differences, for the spatial derivatives, with a mesh of three points (and step of h = F/2). Thus
$$\frac{\partial v(t, F/2)}{\partial t} + \frac{v(t, F/2)}{F}\left[v(t, F) - v(t, 0)\right] = \frac{4\nu}{F^2}\left[v(t, F) - 2v(t, F/2) + v(t, 0)\right]. \qquad (2)$$
Since v is assumed to be a travelling wave, it has a periodic pattern in space and time. In particular, note that $v(t, F/2) = v(F/2 - ct) = v(t + F/(2c), F) = v(t - F/(2c), 0)$. Now, define $y(t) = v(t, F)$ and $u(t) = v(t, 0)$; thus, (2) can be rewritten as
$$\dot{y}(t) = -\frac{1}{F}\, y(t - \varsigma)\, u(t - 2\varsigma) + \frac{1}{F}\, y(t)\, u(t - \varsigma) + \frac{4\nu}{F^2}\left[y(t - \varsigma) - 2y(t) + u(t - \varsigma)\right]$$
, where ς = F/2c. Hence, in [START_REF] Feingesicht | Nonlinear active control of turbulent separated flows: Theory and experiments[END_REF], [START_REF] Feingesicht | A bilinear input-output model with state-dependent delay for separated flow control[END_REF], [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF] the authors propose a more general model for separated flow control:
$$\dot{y} = \sum_{i=1}^{N_1} a_i\, y(t - \tau_i) + \sum_{j=1}^{N_2} \sum_{k=1}^{N_3} \left[\bar{a}_k\, y(t - \bar{\tau}_k) + b_j\right] u(t - \varsigma_k)\,. \qquad (3)$$
Observe that this approximating model still recovers two main features of the original flow model: first, it is nonlinear; and second, it is infinite dimensional.
III. PROBLEM STATEMENT
Consider the system
$$\dot{x}(t) = a_1 x(t - \tau_1) - a_2 x(t - \tau_2) + \left[c_1 x(t - \bar{\tau}_1) - c_2 x(t - \bar{\tau}_2) + b\right] u(t - \varsigma)\,, \qquad (4)$$
where $a_1, a_2, c_1, c_2, b, \tau_1, \tau_2, \bar{\tau}_1, \bar{\tau}_2 \in \mathbb{R}_{\geq 0}$. We assume that all the delays are bounded and constant. We also assume that the initial conditions of (4) are $x(t) = 0$ for all $t < 0$ and $x(0) = x_0$ for some $x_0 \geq 0$.
The control objective is to drive the state of the system to a constant reference x * ∈ R >0 . Such an objective must be achieved under the following general restrictions:
• Since the equation is used to model a positive physical system, some conditions on the model parameters have to be given to guarantee that the solutions of (4) can only take nonnegative values.
• Due to the physical nature of the on/off actuator, the control input is restricted to take values from the set {0, 1}.
IV. SYSTEM'S PROPERTIES
As stated in Section III we require some features of the solutions of (4) to guarantee that it constitutes a suitable model for the physical system. In this section we study the conditions on the parameters of ( 4) that guarantee nonnegativeness and boundedness of the solutions. Of course, existence and uniqueness of solutions must be guaranteed. To this aim we rewrite (4) as
$$\dot{x} = a_1 x(t - \tau_1) + c_1 u(t - \varsigma)\, x(t - \bar{\tau}_1) - a_2 x(t - \tau_2) - c_2 u(t - \varsigma)\, x(t - \bar{\tau}_2) + b\, u(t - \varsigma)\,, \qquad (5)$$
that can be seen as a linear delayed system with timevarying coefficients. The term bu(t -ς) is considered as the input. In a first time, we will consider the general case u : R ≥0 → R and, then, restrict it to u : R ≥0 → {0, 1}.
A locally absolutely continuous function that satisfies (5) for almost all t ∈ [0, ∞), and its initial conditions for all t ≤ 0, is called a solution of (5) [START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF]. Hence, if in addition to the assumptions in the previous section we assume that u : [0, ∞) → R is a Lebesgue-measurable locally essentially bounded function, then the solution of (5) exists and is unique, see Appendix A. Such a definition of solution is adequate for the analysis made in this section; however, for the closed-loop behaviour analysis, we will also consider another framework, see Remark 1 in Section V.
A. Nonnegative solutions
We have said that the model has to be guaranteed to provide nonnegative solutions. Thus, we first search for some conditions that guarantee that the solutions of (5) are nonoscillatory 2 . Consider (5) and define
$P(t) = a_1 + c_1 u(t - \varsigma)$ and $N(t) = a_2 + c_2 u(t - \varsigma)$.
Lemma 1 ([2], Corollary 3.13): Consider (5) with $b = 0$. If $\min(\tau_2, \bar{\tau}_2) \geq \max(\tau_1, \bar{\tau}_1)$, $N(t) \geq P(t)$ for all $t \geq t_0$, and there exists $\lambda \in (0, 1)$ such that
$$\limsup_{t \to \infty} \int_{t - \max(\tau_2, \bar{\tau}_2)}^{t - \min(\tau_1, \bar{\tau}_1)} \left(N(s) - \lambda P(s)\right) ds < \frac{\ln(1/\lambda)}{e}\,, \qquad \limsup_{t \to \infty} \int_{t - \max(\tau_2, \bar{\tau}_2)}^{t} \left(N(s) - \lambda P(s)\right) ds < \frac{1}{e}\,,$$
then, the fundamental solution of ( 5) is such that X(t, s) > 0, t ≥ s ≥ t 0 , and ( 5) has an eventually positive solution with an eventually nonpositive derivative. Now, having nonoscillation conditions for (5) we can state the following.
Corollary 1: Consider (5) with b ≥ 0. Suppose that the assumptions of Lemma 1 hold. Assume that x(t) = 0, u(t) = 0 for all t < 0 and x(0) = x 0 for some x 0 ≥ 0. If u(t) ≥ 0 for all t ≥ 0, then x(t) ≥ 0 for all t ≥ 0. The proof is straightforward through the solution representation by using the fundamental function, see Lemma 3. Note that, in particular, the integral conditions of Lemma 1 are satisfied if
$$\left[a_2 - \tfrac{1}{e}(a_1 + c_1)\right] \max(\tau_2, \bar{\tau}_2) < \tfrac{1}{e}\,.$$
Although this is only sufficient, it constitutes a simple formula to verify the integral conditions of Lemma 1.
B. Boundedness of solutions
Observe that the nonoscillation conditions of Lemma 1 also guarantee the boundedness of the system's trajectories for b = 0. For the case b ≠ 0 we have the following result.
Lemma 2: Consider (4) with its parameters satisfying Lemma 1, and with the initial conditions $x(t) = 0$, $u(t) = 0$ for all $t \leq 0$. If $b \neq 0$,
$$N(t) - P(t) \geq \alpha\,, \quad \forall\, t \geq 0\,, \qquad (6)$$
for a strictly positive $\alpha$, and $u(t) = 1$ for all $t \geq 0$, then the solution of (5) is such that $x(t) \leq \bar{x}$ for all $t \geq 0$ and
$$\lim_{t \to \infty} x(t) = \bar{x}\,, \qquad \bar{x} = \frac{b}{a_2 + c_2 - a_1 - c_1}\,. \qquad (7)$$
Proof: According to Lemma 1, if b = 0, then we can ensure that there exists $t_1$ such that $x(t) > 0$ and $\dot{x}(t) \leq 0$ for all $t \geq t_1$. Hence, there exists $t_2 \geq t_1$ such that for all $t \geq t_2$
$$\dot{x} \leq a_1 x(t - \max(\tau_1, \bar{\tau}_1)) + c_1 u(t - \varsigma)\, x(t - \max(\tau_1, \bar{\tau}_1)) - a_2 x(t - \min(\tau_2, \bar{\tau}_2)) - c_2 u(t - \varsigma)\, x(t - \min(\tau_2, \bar{\tau}_2)) \leq P(t)\, x(t - \max(\tau_1, \bar{\tau}_1)) - N(t)\, x(t - \min(\tau_2, \bar{\tau}_2))\,,$$
thus, since $N(t) - P(t) \geq \alpha$, we can ensure that $\lim_{t \to \infty} x(t) = 0$, see e.g. [START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF], Theorem 3.4. Now, for the particular case $u(t) = 1$ and $b = 0$, (4) is time-invariant and the asymptotic behaviour of $x(t)$ guarantees that $x = 0$ is asymptotically stable; therefore, it is exponentially stable and its fundamental solution $X(t, s)$ is exponentially bounded (see e.g. [START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF], [START_REF] Fridman | Introduction to Time-Delay Systems[END_REF]). Hence, for the case $b \neq 0$, $u(t) = 1$, the solution of (4) can be expressed as (see Lemma 3 in Appendix A)
$$x(t) = X(t, t_0)\, x(0) + \int_{t_0}^{t} X(t, s)\, b\; ds\,.$$
Since $X(t, s)$ decreases exponentially in $t$, $x(t)$ is bounded; moreover, $x(t)$ increases monotonically due to the input term. Thus $\lim_{t \to \infty} x(t)$ exists and is some constant $\bar{x}$; therefore, $\lim_{t \to \infty} \dot{x}(t) = 0 = -(a_2 + c_2 - a_1 - c_1)\bar{x} + b$. This equality gives the limit value (7).
V. SLIDING MODE CONTROLLER
In this section we present the Sliding Mode Controller for (4), but first, define k : R → R given by
$$k(r) = k_{a_1}(r) - k_{a_2}(r) + k_{c_1}(r)\,, \qquad (8)$$
where
$$k_{a_1}(r) = \begin{cases} a_1, & r \in [\min(\varsigma, \tau_1), \max(\varsigma, \tau_1)], \\ 0, & \text{otherwise}, \end{cases} \qquad k_{a_2}(r) = \begin{cases} a_2, & r \in [\varsigma, \tau_2], \\ 0, & \text{otherwise}, \end{cases} \qquad k_{c_1}(r) = \begin{cases} c_1, & r \in [\varsigma, \bar{\tau}_1], \\ 0, & \text{otherwise}. \end{cases}$$
Theorem 1: If system (4) satisfies the conditions of Lemma 1, the condition (6), $\varsigma \leq \bar{\tau}_1$, and
$$\int_{\min(\varsigma, \tau_1)}^{\tau_2} |k(r)|\, dr < 1\,, \qquad (9)$$
then, for any $x^* \in (0, \bar{x})$ (where $\bar{x}$ is given by (7)), the solution of the closed loop of (4) with the controller
$$u(t) = \tfrac{1}{2}\left(1 - \operatorname{sign}(\sigma_0(t) - \sigma^*)\right)\,, \qquad (10)$$
$$\sigma_0(t) = x(t) + a_1 \int_{t - \tau_1}^{t} x(s)\, ds - a_2 \int_{t - \tau_2}^{t} x(s)\, ds + c_1 \int_{t - \bar{\tau}_1 + \varsigma}^{t} x(s)\, ds + \int_{t - \varsigma}^{t} \left[c_1 x(s - \bar{\tau}_1 + \varsigma) - c_2 x(s - \bar{\tau}_2 + \varsigma) + b\right] u(s)\, ds\,, \qquad (11)$$
where
$$\sigma^* = x^*\left[1 - a_2(\tau_2 - \varsigma) + a_1(\tau_1 - \varsigma) + c_1(\bar{\tau}_1 - \varsigma)\right],$$
establishes a sliding motion in finite-time on the surface $\sigma_0(t) = \sigma^*$, and the sliding motion converges exponentially to $x^*$. The design procedure is explained through the proof of the theorem given in the following sections. Note that for implementation, the following equivalent formula can also be used:
$$\sigma_0(t) = x(t) + \int_{0}^{t} \Big\{ (a_1 + c_1 - a_2)\, x(s) - a_1 x(s - \tau_1) + a_2 x(s - \tau_2) - c_1 x(s - \bar{\tau}_1 + \varsigma)\left(1 - u(s)\right) + \left[-c_2 x(s - \bar{\tau}_2 + \varsigma) + b\right] u(s) - \left[c_1 x(s - \bar{\tau}_1) - c_2 x(s - \bar{\tau}_2) + b\right] u(s - \varsigma) \Big\}\, ds\,.$$
A. Sliding variable
Let us, from (10), define the sliding variable as $\sigma(t) = \sigma_0(t) - \sigma^*$. The time derivative of $\sigma$ is
$$\dot{\sigma}(t) = -(a_2 - a_1 - c_1)\, x(t) - c_1 x(t - \bar{\tau}_1 + \varsigma) + \left[c_1 x(t - \bar{\tau}_1 + \varsigma) - c_2 x(t - \bar{\tau}_2 + \varsigma) + b\right] u(t)\,. \qquad (12)$$
Observe that $\sigma_0$ is acting as a kind of predictor since it allows us to have $u$ without delay in (12). Now, let us verify that the trajectories of (4) in closed loop with (10) reach and remain on the sliding surface $\sigma = 0$ in finite-time. To this end, we substitute (10) in (12) to obtain the differential equation
$$\dot{\sigma}(t) = -\tfrac{1}{2}\, g_1(t)\, \operatorname{sign}(\sigma(t)) + g_2(t)\,, \qquad (13)$$
where
$$g_1(t) = c_1 x(t - \bar{\tau}_1 + \varsigma) - c_2 x(t - \bar{\tau}_2 + \varsigma) + b\,, \qquad g_2(t) = \tfrac{1}{2}\, g_1(t) + (a_1 + c_1 - a_2)\, x(t) - c_1 x(t - \bar{\tau}_1 + \varsigma)\,. \qquad (14)$$
Before we proceed to prove the establishment of a sliding regime on σ = 0, we have to guarantee the existence of solutions of (13).
Remark 1: Note that (13) can be seen as a nonautonomous differential equation with discontinuous right-hand side; therefore, we can use the definition of solutions given by Filippov in [8, p. 50]3 . But g 1 and g 2 in (13) depend on x, which is the solution of the functional differential equation (4), which in turn depends on σ through the input u. However, if we study recursively the system (4), (13), on the intervals [nς, (n + 1)ς), n = 0, 1, 2, ..., we can see that the Filippov approach still works. Indeed, from the assumptions on the initial conditions for (4), u(t) = 0 for t ∈ [0, ς); hence, in such an interval the solutions of (4) do not depend on σ. Therefore, in the same interval, (13) can be seen as a simple differential equation with discontinuous right-hand side. Now, the solutions of (4) are not affected by the values of σ(t) for any t ∈ [ς, 2ς); thus, in such an interval, (13) is a simple differential equation with discontinuous right-hand side, and so forth.
Consider the Lyapunov function candidate $V(\sigma) = \tfrac{1}{2}\sigma^2$, whose derivative along (13) is given by
$$\dot{V} = \sigma\left(g_2(t) - \tfrac{1}{2}\, g_1(t)\, \operatorname{sign}(\sigma)\right) = -\tfrac{1}{2}\left(g_1(t) - 2 g_2(t)\, \operatorname{sign}(\sigma)\right)|\sigma|\,.$$
Hence, V is a Lyapunov function for (13) if g 1 (t) -2g 2 (t) sign(σ) ≥ 0. Let us start with the case σ > 0. In this case we have g 1 (t)-2g 2 (t) = (a 2 -a 1 -c 1 )x(t)+c 1 x(t-τ 1 + ς). Therefore, since the solutions of (4) are guaranteed to be nonnegative, g 1 (t) -2g 2 (t) ≥ 0. For the case σ < 0 we have
g 1 (t) -2g 2 (t) = 2(b -(a 2 -a 1 -c 1 )x(t) -c 2 x(t -τ2 + ς)).
Note that since σ(0) < σ * then x(0) < σ * , and we know for this case that x(t) is bounded from above by x. This clearly implies that b -(a 2 -a 1 -c 1 )x(t) -c 2 x(t -τ2 + ς) ≥ 0.
Up to now, we have proven that σ = 0 is Lyapunovstable, however, to guarantee finite-time convergence of σ(t) to the origin, we have to verify that g 1 (t) -2g 2 (t) sign(σ) is bounded from below by a strictly positive constant. The condition σ > 0 implies that x 0 > σ * . If x(t) is increasing it is convenient for the analysis, however, the critic situation is when x(t) is decreasing and x(t) < x. Note that, in such a case, u = 0 necessarily. Now, suppose that for some t 1 we have x(t 1 ) = x * , then
σ(t 1 ) = -a 2 t t-τ2 x(s) ds + a 1 t t-τ1 x(s) ds+ c 1 t t-τ1+ς x(s) ds -a 2 (τ 2 -ς)+ a 1 (τ 1 -ς) + c 1 (τ 1 -ς) ,
which is clearly negative. Hence, we can guarantee that, for the case σ > 0, x(t) is bounded from below by x * , and therefore, g 1 (t) -2g 2 (t) ≥ (a 2 -a 1 )x * . Now, for practical purposes, let us define
$$S(t) = x(t) - a_2 \int_{t - \tau_2}^{t - \varsigma} x(s)\, ds + a_1 \int_{t - \tau_1}^{t - \varsigma} x(s)\, ds + c_1 \int_{t - \bar{\tau}_1}^{t - \varsigma} x(s)\, ds\,. \qquad (15)$$
Observe that the sliding variable $\sigma$ can be rewritten as $\sigma(t) = S(t) - \sigma^* + R(t)$, where
$$R(t) = -\int_{t - \varsigma}^{t} \left[(a_2 - a_1 - c_1)\, x(s) + c_1 x(s - \bar{\tau}_1 + \varsigma)\right] ds + \int_{t - \varsigma}^{t} \left[c_1 x(s - \bar{\tau}_1 + \varsigma) - c_2 x(s - \bar{\tau}_2 + \varsigma) + b\right] u(s)\, ds\,.$$
Now, we want to prove that b -(a 2 -a 1 -c 1 )x(t)c 2 x(t -τ2 + ς) is strictly positive when σ < 0. In this case the critic situation is when x is monotonically increasing. This happens only if u = 1. Note that in such situation
σ(t) = S(t) -σ * + t t-ς [b -(a 2 -a 1 -c 1 )x(s)- c 2 x(s -τ2 + ς)] ds ,
where the term
t t-ς [b-(a 2 -a 1 -c 1 )x(s)-c 2 x(s-τ 2 +ς)] ds is strictly positive. Note also that S(t) -σ * ≥ x(t) -x * .
Hence, if for some t 1 we have that x(t 1 ) = x * then σ(t 1 ) ≥ 0. Thus, we can conclude that b-(a 2 -a 1 -c 1 )x(t)-c 2 x(t-τ2 + ς) is bounded from below by b -(a 2 + c 2 -a 1 -c 1 )x * when σ < 0. Therefore, we have proven that the sliding mode is established in finite-time.
B. Sliding dynamics
To obtain the dynamics on the sliding surface $\sigma = 0$, we use the Equivalent Control method [START_REF] Utkin | Sliding Modes in Control and Optimization[END_REF], see also [START_REF] Utkin | Sliding Mode Control in Electro-Mechanical Systems[END_REF], [START_REF] Fedorovich | Differential equations with discontinuous right-hand side[END_REF], [START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF]. To compute the equivalent control, we set $\dot{\sigma}(t) = 0$ and obtain that
$$\left[c_1 x(t - \bar{\tau}_1 + \varsigma) - c_2 x(t - \bar{\tau}_2 + \varsigma) + b\right] u(t) = -\left[(a_1 + c_1 - a_2)\, x(t) - c_1 x(t - \bar{\tau}_1 + \varsigma)\right].$$
By substituting this expression in the equation for $\sigma(t) = 0$ we obtain that the sliding dynamics is given by the integral equation
$$S(t) - \sigma^* = 0\,, \qquad (16)$$
where $S$ is given by (15). Hence, our objective is to prove that the solution $x(t)$ of (16) converges exponentially to $x^*$.
Here we are going to use the results provided in Appendix B. First, let us rewrite ( 16) in a more suitable way. Define the change of variable z(t) = x(t) -x * , thus, from the dynamics on the sliding surface, we obtain the following integral equation
$$z(t) - a_2 \int_{t - \tau_2}^{t - \varsigma} z(s)\, ds + a_1 \int_{t - \tau_1}^{t - \varsigma} z(s)\, ds + c_1 \int_{t - \bar{\tau}_1}^{t - \varsigma} z(s)\, ds = 0\,.$$
Note that this equation can be rewritten as follows
$$z(t) + \int_{t^*}^{t} k(t - s)\, z(s)\, ds = f(t)\,, \quad t \geq t^*\,, \qquad (17)$$
where $t^*$ is the reaching time to the sliding surface (i.e. the minimum $t$ such that $\sigma(t) = 0$), $k$ is given by (8) replacing the parameter $r$ by $t - s$, and
$$f(t) = -\int_{t^* - \tau_2}^{t^*} k(t - s)\, \varphi(s)\, ds\,, \qquad \varphi(t) = z(t)\,, \ \forall\, t \leq t^*\,.$$
Observe that (17) is a Volterra integral equation of the second type and the kernel k of the integral is a convolution kernel. Now, we can state directly the following result.
Theorem 2: If k : R ≥t * × R ≥t * → R is a measurable kernel with ||k|| L p (R ≥t * ) < 1, then for any f ∈ L 1 (R ≥t * )
there exists a unique solution of (17) and it is such that z ∈ L 1 (R ≥t * ). Moreover, z(t) → 0 exponentially as t → ∞.
Proof: First we claim that f is in L 1 (R ≥t * ), see the verification in Appendix C. According to Lemma 5, the assumptions in Theorem 1 guarantee that ||k|| L p (R ≥t * ) < 1. Thus, we can use Lemma 4 to guarantee existence and uniqueness of solutions of (17). Now, Lemmas 6 and 7 guarantee the exponential stability of z = 0.
Since z = 0 is exponentially stable, x(t) → x * exponentially on the sliding surface.
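As a quick numerical illustration of condition (9), the following Python sketch integrates |k(r)| for parameter values of the same order as those used in the numerical example of Section VII. Since that example does not list a value for the input delay ς, the value used below is an assumption made only for this check.

    # Numerical check of condition (9): L1 norm of the kernel k in (8).
    import numpy as np

    a1, a2, c1 = 0.2, 1.0, 0.1
    tau1, tau2, tau1b = 0.05, 0.11, 0.07   # tau1b stands for the "barred" delay
    sig = 0.03                              # assumed value of ς (must satisfy ς <= tau1b)

    def k(r):
        ka1 = a1 if min(sig, tau1) <= r <= max(sig, tau1) else 0.0
        ka2 = a2 if sig <= r <= tau2 else 0.0
        kc1 = c1 if sig <= r <= tau1b else 0.0
        return ka1 - ka2 + kc1

    r = np.linspace(0.0, tau2, 100001)
    vals = np.array([abs(k(ri)) for ri in r])
    l1_norm = np.trapz(vals, r)
    print(f"||k||_L1 ~= {l1_norm:.4f}  (condition (9) requires < 1)")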
VI. ROBUSTNESS
In this section we analyse the robustness of the closed loop of (4) with (10), (11). For this, consider the system
$$\dot{x}(t) = a_1 x(t - \tau_1) - a_2 x(t - \tau_2) + \left[c_1 x(t - \bar{\tau}_1) - c_2 x(t - \bar{\tau}_2) + b\right] u(t - \varsigma) + \delta(t)\,, \qquad (18)$$
where $\delta : \mathbb{R} \to \mathbb{R}$ is an external disturbance. We assume that $\|\delta\|_{L^\infty(\mathbb{R}_{\geq 0})} = \Delta$ for some finite $\Delta \in \mathbb{R}_{\geq 0}$. Considering (18), the time derivative of the sliding variable $\sigma$ is
$$\dot{\sigma}(t) = -\tfrac{1}{2}\, g_1(t)\, \operatorname{sign}(\sigma(t)) + g_2(t) + \delta(t)\,, \qquad (19)$$
where $g_1$ and $g_2$ are given by (14). Consider $V(\sigma) = \tfrac{1}{2}\sigma^2$ as a Lyapunov function candidate for (19). The derivative of $V$ along (19) is given by
$$\dot{V} \leq -\tfrac{1}{2}\left[g_1(t) - 2 g_2(t)\, \operatorname{sign}(\sigma) - |\delta(t)|\right]|\sigma|\,.$$
In Section V we proved that there exists a strictly positive $\varepsilon$ such that $g_1(t) - 2 g_2(t)\, \operatorname{sign}(\sigma) \geq \varepsilon$ for all $t$ along the reaching phase; thus, if $\Delta < \varepsilon$, then $\dot{V} \leq -\tfrac{1}{2}\left[\varepsilon - \Delta\right]|\sigma|$ and the sliding regime is established in finite-time. Nevertheless, since the sliding variable contains delayed terms of the control, the establishment of the sliding mode does not guarantee the complete disturbance rejection, see e.g. [START_REF] Kiong | Comments on "robust stabilization of uncertain input-delay systems by sliding mode control with delay compensation[END_REF], [START_REF] Polyakov | Minimization of disturbances effects in time delay predictor-based sliding mode control systems[END_REF]. Thus, let us analyse the behaviour of the sliding motion in the presence of the disturbance δ. By using again the equivalent control method (by taking into account the disturbance) we obtain the sliding dynamics $S(t) - \sigma^* - \delta(t) = 0$. If we use again the change of variable $z(t) = x(t) - x^*$, then the sliding dynamics can be rewritten as
$$z(t) + \int_{t^*}^{t} k(t - s)\, z(s)\, ds = f(t) + \delta(t)\,, \quad t \geq t^*\,, \qquad (20)$$
or equivalently $z(t) + (k \star z)(t) = f(t) + \delta(t)$. We have proven that the solution of (20) is given by
$$z(t) = [f + \delta](t) - (r \star [f + \delta])(t)\,,$$
where r is a convolution operator of type L 1 (R ≥t * ), see Theorem 2 and Appendix B. Now, since f is bounded (see
Appendix C) and δ ∈ L ∞ (R ≥0 ) then [f + δ] ∈ L ∞ (R ≥t * ).
Hence, according to Lemma 8, we can ensure that z ∈ L ∞ (R ≥0 ).
VII. NUMERICAL EXAMPLE
Consider (4) with the parameters $a_1 = 0.2$, $a_2 = 1$, $c_1 = 0.1$, $c_2 = 0.4$, $b = 1$, $\tau_1 = 0.05$, $\tau_2 = 0.11$, $\bar{\tau}_1 = 0.07$, $\bar{\tau}_2 = 0.09$. The values of these parameters were chosen to be of the same order as those obtained in [START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF]. Of course, they satisfy all the conditions of Theorem 1. The simulations were made with Matlab by using an explicit Euler integration method with a step of 1 ms. In Fig. 1 we can observe the system's state for a simulation with initial condition $x_0 = 0$ in the nominal case. Fig. 2 shows the control signal. In Fig. 3 we can see a simulation considering a disturbance $\delta(t) = \sin(10t)/10$. Note in Fig. 4 that, for this example, the amplitude in steady state is less than the amplitude of the disturbance.
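For reference, a closed-loop simulation along the lines described above can be sketched in Python as follows (the paper used Matlab). The input delay ς and the reference x* are not listed among the example parameters, so the values sig = 0.03 and x_star = 0.5 below are assumptions chosen only for illustration; the integrals in (11) are approximated by Riemann sums over the stored history.

    # Explicit-Euler sketch of system (4) with controller (10)-(11).
    import numpy as np

    a1, a2, c1, c2, b = 0.2, 1.0, 0.1, 0.4, 1.0
    tau1, tau2, tau1b, tau2b = 0.05, 0.11, 0.07, 0.09
    sig, x_star, dt, T = 0.03, 0.5, 1e-3, 3.0      # assumed ς and x*

    n = int(T / dt)
    d = lambda tau: int(round(tau / dt))            # delay expressed in steps
    x = np.zeros(n + 1)                             # x(t) = 0 for t <= 0
    u = np.zeros(n + 1)                             # u(t) = 0 for t < 0
    sigma_star = x_star * (1 - a2 * (tau2 - sig) + a1 * (tau1 - sig) + c1 * (tau1b - sig))

    def past(arr, k, steps):
        i = k - steps
        return arr[i] if i >= 0 else 0.0

    def integral(arr, k, steps_back):
        # left Riemann sum of arr over [t - steps_back*dt, t]
        return dt * sum(arr[i] if i >= 0 else 0.0 for i in range(k - steps_back, k))

    for k in range(n):
        s0 = (x[k]
              + a1 * integral(x, k, d(tau1))
              - a2 * integral(x, k, d(tau2))
              + c1 * integral(x, k, d(tau1b) - d(sig))
              + dt * sum((c1 * past(x, i, d(tau1b) - d(sig))
                          - c2 * past(x, i, d(tau2b) - d(sig)) + b) * u[i]
                         for i in range(k - d(sig), k) if i >= 0))
        u[k] = 0.5 * (1.0 - np.sign(s0 - sigma_star))
        dx = (a1 * past(x, k, d(tau1)) - a2 * past(x, k, d(tau2))
              + (c1 * past(x, k, d(tau1b)) - c2 * past(x, k, d(tau2b)) + b)
              * past(u, k, d(sig)))
        x[k + 1] = x[k] + dt * dx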
VIII. CONCLUSIONS
We proposed a Sliding Mode Controller for a class of scalar bilinear systems with delays. We have shown that the combination of Lyapunov function and Volterra operator theory provides a very useful tool to study the stability and robustness properties of the proposed control scheme. Naturally, a future direction in this research is to try to extend the control scheme to higher order systems.
APPENDIX
A. Solutions of delayed differential equations
The theory recalled in this section was taken from [START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF], see also [START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF] and [START_REF] Fridman | Introduction to Time-Delay Systems[END_REF]. Consider the system
$$\dot{x} = \sum_{i=1}^{N} a_i(t)\, x(t - \tau_i)\,, \qquad (21)$$
where each $\tau_i \in \mathbb{R}_{\geq 0}$, and each $a_i$ is a Lebesgue-measurable locally essentially bounded function. Definition 1: The function $X(t, s)$ that satisfies, for each $s \geq 0$, the problem
$$\dot{x} = \sum_{i=1}^{N} a_i(t)\, x(t - \tau_i)\,, \qquad x(t) = 0 \text{ for } t < s\,, \quad x(s) = 1\,,$$
is called the fundamental function of (21).
It is assumed that $X(t, s) = 0$ when $0 \leq t < s$. Now consider the system
$$\dot{x} = \sum_{i=1}^{N} a_i(t)\, x(t - \tau_i) + f(t)\,, \qquad (22)$$
with initial conditions $x(t) = 0$ for all $t < 0$ and $x(0) = x_0$ for some $x_0 \in \mathbb{R}$. Lemma 3: Assume that $a_i$, $\tau_i$ are as above and $f$ is a Lebesgue-measurable locally essentially bounded function; then there exists a unique solution of (22) and it can be written as
$$x(t) = X(t, 0)\, x_0 + \int_{0}^{t} X(t, s)\, f(s)\, ds\,.$$
B. Volterra equations
Most of the results recalled in this section were taken from [START_REF] Gripenberg | Volterra Integral and Functional Equations[END_REF]. For $z : \mathbb{R} \to \mathbb{R}$ consider the integral equation
$$z(t) + \int_{t^*}^{t} k(t, s)\, z(s)\, ds = f(t)\,, \quad t \geq t^*\,. \qquad (23)$$
Define the map $t \mapsto \int_{t^*}^{t} k(t, s)\, z(s)\, ds$ as $k \star z$. Hence, we rewrite (23) as
$$z(t) + (k \star z)(t) = f(t)\,. \qquad (24)$$
A function $r$ is called a resolvent of (24) if $z(t) = f(t) - (r \star f)(t)$. We say that the kernel $k : J \times J \to \mathbb{R}$ is of type $L^p$ on the interval $J$ if $\|k\|_{L^p(J)} < \infty$, where
$$\|k\|_{L^p(J)} = \sup_{\|g_1\|_{L^p(J)} \leq 1,\ \|g_2\|_{L^p(J)} \leq 1} \int_J \int_J |g_1(t)\, k(t, s)\, g_2(s)|^p\, ds\, dt\,.$$
The question about the existence and uniqueness of solutions of (24) is answered by the following lemma.
Lemma 4 ([10], Theorem 9-3.6): If $k$ is a kernel of type $L^p$ on $J$ that has a resolvent $r$ of type $L^p$ on $J$, and if $f \in L^p(J)$, then (24) has a unique solution $z \in L^p(J)$, and such solution is given by $z(t) = f(t) - (r \star f)(t)$.
Now we have two problems: to verify whether $k$ is a kernel of type $L^p$ and whether it has a resolvent $r$ of type $L^p$. For the particular case of $p = 1$ we have the following lemma.
Lemma 5 ([10], Proposition 9-2.7): Let $k : J \times J \to \mathbb{R}$ be a measurable kernel. $k$ is of type $L^1$ on $J$ if and only if $N(k) = \operatorname{ess\,sup}_{s \in J} \int_J |k(t, s)|\, dt < \infty$. Moreover $N(k) = \|k\|_{L^p(J)}$. And finally:
Lemma 6 ([10], Corollary 9-3.10): If $k$ is a kernel of type $L^p$ on $J$ and $\|k\|_{L^p(J)} < 1$, then $k$ has a resolvent $r$ of type $L^p$ on $J$.
Now we can guarantee some asymptotic behaviour of $z(t)$ according to the asymptotic behaviour of $f(t)$ for a Volterra kernel $k$. Nonetheless, if such a kernel is of convolution kind, i.e. $k(t, s) = k(t - s)$, we can say something else. For the following lemma let us denote the Laplace transform of $k(t)$ as $K(z)$, $z \in \mathbb{C}$.
Lemma 7 ([10], Theorem 2-4.1): Let $k$ be a Volterra kernel of convolution kind and of type $L^1$ on $\mathbb{R}_{\geq 0}$. Then the resolvent $r$ is of type $L^1$ on $\mathbb{R}_{\geq 0}$ if and only if $\det(I + K(z)) \neq 0$ for all $z \in \mathbb{C}$ such that $\operatorname{Re}\{z\} \geq 0$.
To finalise this section we recall the following lemma, which is useful for the robustness analysis.
Here we verify that $f$ is in $L^1$. First note that the integral in $f$ restricts to $t^* - \tau_2 \leq s \leq t^*$; therefore, the argument of $k(t - s)$ is restricted to $t - t^* \leq t - s \leq t - t^* + \tau_2$. Recall that $k(t - s)$ is different from zero only in the interval $[\min(\varsigma, \tau_1), \tau_2]$. Hence, under the integral in $f$, $k(t - s)$ can be different from zero only for $t^* + \min(\varsigma, \tau_1) - \tau_2 \leq t \leq t^* + \tau_2$. Thus
$$\|f\|_{L^1(\mathbb{R}_{\geq 0})} = \int_0^\infty |f(t)|\, dt = \int_{t^* + \min(\varsigma, \tau_1) - \tau_2}^{t^* + \tau_2} |f(t)|\, dt\,.$$
Now, since $\varphi(t) = x(t) - x^*$ and $x(t)$ was guaranteed to be bounded, there exists a finite $\varphi^* \in \mathbb{R}_{\geq 0}$ such that $|\varphi(t)| \leq \varphi^*$ for all $t \in [t^* - \tau_2, t^*]$. Note that $k$ is also bounded by some finite $k^*$; thus
$$\|f\|_{L^1(\mathbb{R}_{\geq 0})} \leq \int_{t^* + \min(\varsigma, \tau_1) - \tau_2}^{t^* + \tau_2} \int_{t^* - \tau_2}^{t^*} k^* \varphi^*\, ds\, dt\,,$$
therefore $\|f\|_{L^1(\mathbb{R}_{\geq 0})} \leq k^* \varphi^* \tau_2 \left(2\tau_2 - \min(\varsigma, \tau_1)\right)$.
Fig. 1. State of the system in the nominal case.
Fig. 3. State of the system in presence of disturbance.
Lemma 8 ([10], Theorem 2-2.2): Let $r$ be a convolution Volterra kernel of type $L^1(\mathbb{R}_{\geq 0})$, and let $b \in L^p(\mathbb{R}_{\geq 0})$ for some $p \in [1, \infty]$. Then $r \star b \in L^p(\mathbb{R}_{\geq 0})$, and $\|r \star b\|_{L^p(\mathbb{R}_{\geq 0})} \leq \|r\|_{L^1(\mathbb{R}_{\geq 0})}\, \|b\|_{L^p(\mathbb{R}_{\geq 0})}$.
C. Function $f$ is in $L^1$
A video with some experiments, reported in[START_REF] Feingesicht | SISO model-based control of separated flows: Sliding mode and optimal control approaches[END_REF], can be seen at https://www.youtube.com/watch?v=b5NnAV2qeno.
For the definition of a nonoscillatory solution see e.g.[START_REF] Györi | Oscillation Theory of Delay Differential Equations With Applications[END_REF],[START_REF] Agarwal | Nonoscillation Theory of Functional Differential Equations with Applications[END_REF].
For the particular case of (13), the three methods given in[8, p. 50-56] to construct the differential inclusion coincide, see also[START_REF] Polyakov | Stability notions and lyapunov functions for sliding mode control systems[END_REF]. |
01763410 | en | [
"info",
"info.info-se",
"info.info-lo",
"info.info-pf"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01763410/file/sttt2018.pdf | Zheng Cheng
email: zheng.cheng@inria.fr
Massimo Tisi
email: massimo.tisi@imt-atlantique.fr
Slicing ATL Model Transformations for Scalable Deductive Verification and Fault Localization
Model-driven engineering (MDE) is increasingly accepted in industry as an effective approach for managing the full life cycle of software development. In MDE, software models are manipulated, evolved and translated by model transformations (MT), up to code generation. Automatic deductive verification techniques have been proposed to guarantee that transformations satisfy correctness requirements (encoded as transformation contracts). However, to be transferable to industry, these techniques need to be scalable and provide the user with easily accessible feedback.
In MT-specific languages like ATL, we are able to infer static trace information (i.e. mappings among types of generated elements and rules that potentially generate these types). In this paper we show that this information can be used to decompose the MT contract and, for each sub-contract, slice the MT to the only rules that may be responsible for fulfilling it. Based on this contribution, we design a fault localization approach for MT, and a technique to significantly enhance scalability when verifying large MTs against a large number of contracts. We implement both these algorithms as extensions of the VeriATL verification system, and we show by experimentation that they increase its industry-readiness.
1 A deductive approach for fault localization in ATL MTs (Online). https://github.com/veriatl/ VeriATL/tree/FaultLoc.
2 On scalability of deductive verification for ATL MTs (Online).
Introduction
Model-Driven Engineering (MDE), i.e. software engineering centered on software models, is widely recognized as an effective way to manage the complexity of software development. In MDE, software models are manipulated, evolved and translated by model transformation (MTs), up to code generation. An incorrect MT would generate faulty models, whose effect could be unpredictably propagated into subsequent MDE steps (e.g. code generation), and compromise the reliability of the whole software development process.
Deductive verification emphasizes the use of logic (e.g. Hoare logic [START_REF] Hoare | An axiomatic basis for computer programming[END_REF]) to formally specify and prove program correctness. Due to the advancements in the last couple of decades in the performance of constraint solvers (especially satisfiability modulo theory -SMT), many researchers are interested in developing techniques that can partially or fully automate the deductive verification for the correctness of MTs (we refer the reader to [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF] for an overview).
While industrial MTs are increasing in size and complexity (e.g. automotive industry [START_REF] Selim | Model transformations for migrating legacy models: An industrial case study[END_REF], medical data processing [START_REF] Wagelaar | Using ATL/EMFTVM for import/export of medical data[END_REF], aviation [START_REF] Berry | Synchronous design and verification of critical embedded systems using SCADE and Esterel[END_REF]), existing deductive verification approaches and tools show limitations that hinder their practical application.
Scalability is one of the major limitations. Current deductive verification tools do not provide clear evidence of their efficiency for largescale MTs with a large number of rules and contracts [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF]. Consequently, users may suffer from unbearably slow response when verification tasks scale. For example, as we show in our evaluation, the verification of a realistic refactoring MT with about 200 rules against 50 invariants takes hours (Section 6). In [START_REF] Briand | Making model-driven verification practical and scalable -experiences and lessons learned[END_REF], the author argues that this lack of scalable techniques becomes one of the major reasons hampering the usage of verification in industrial MDE.
Another key issue is that, when the verification fails, the output of verification tools is often not easily exploitable for identifying and fixing the fault. In particular, industrial MDE users do not have the necessary background to be able to exploit the verifier feedback. Ideally, one of the most user-friendly solutions would be the introduction of fault localization techniques [START_REF] Roychoudhury | Formulabased software debugging[END_REF][START_REF] Wong | A survey on software fault localization[END_REF], in order to directly point to the part of MT code that is responsible for the fault. Current deductive verification systems for MT have no support for fault localization. Consequently, manually examining the full MT and its contracts, and reasoning on the implicit rule interactions remains a complex and time-consuming routine to debug MTs.
In [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], we developed the VeriATL verification system to deductively verify the correctness of MTs written in the ATL language [START_REF] Jouault | ATL: A model transformation tool[END_REF], w.r.t. given contracts (in terms of pre-/postconditions). Like several other MT languages, ATL has a relational nature, i.e. its core aspect is a set of so-called matched rules, that describe the mappings between the elements in the source and target model. VeriATL automatically translates the axiomatic semantics of a given ATL transformation in the Boogie intermediate verification language [START_REF] Barnett | Boogie: A modular reusable verifier for object-oriented programs[END_REF], combined with a formal encoding of EMF metamodels [START_REF] Steinberg | EMF: Eclipse modeling framework[END_REF] and OCL contracts. The Z3 au-tomatic theorem prover [START_REF] De Moura | Z3: An efficient SMT solver[END_REF] is then used by Boogie to verify the correctness of the ATL transformation. While the usefulness of Veri-ATL has been shown by experimentation [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], its original design suffers from the two mentioned limitations, i.e. it does not scale well, and does not provide accessible feedback to identify and fix the fault.
In this article, we argue that the relational nature of ATL can be exploited to address both the identified limitations. Thanks to the relational structure, we are able to deduce static trace information (i.e. inferred information among types of generated target elements and the rules that potentially generate these types) from ATL MTs. Then, we use this information to propose a slicing approach that first decomposes the postcondition of the MT into subgoals, and for each sub-goal, slices out of the MT all the rules that do not impact the subgoal. Specifically, -First, we propose a set of sound natural deduction rules. The set includes 4 rules that are specific to the ATL language (based on the concept of static trace information), and 16 ordinary natural deduction rules for propositional and predicate logic [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]. Then, we propose an automated proof strategy that applies these deduction rules on the input OCL postcondition to generate sub-goals. Each sub-goal contains a list of newly deduced hypotheses, and aims to prove a sub-case of the input postcondition. -Second, we exploit the hypotheses of each sub-goal to slice the ATL MT into a simpler transformation context that is specific to each sub-goal.
Finally we propose two solutions that apply our MT slicing technique to the tasks of enabling fault localization and enhancing scalability:
-Fault Localization. We apply our natural deduction rules to decompose each unverified postcondition in sub-goals, and generate several verification conditions (VCs), i.e. one for each generated sub-goal and corresponding MT slice. Then, we verify these new VCs, and present the user with the unverified ones. The unverified sub-goals help the user pinpoint the fault in two ways: (a) the failing slice is underlined in the original MT code to help localizing the bug; (b) a set of debugging clues, deduced from the input postcondition are presented to alleviate the cognitive load for dealing with unverified sub-cases. The approach is evaluated by mutation analysis. -Scalability. Before verifying each postcondition, we apply our slicing approach to slice the ATL MT into a simpler transformation context, thereby reducing the verification complexity/time of each postcondition (Section 5.1). We prove the correctness of the approach. Then we design and prove a grouping algorithm, to identify the postconditions that have high probability of sharing proofs when verified in a single verification task (Section 5.2). Our evaluation confirms that our approach improves verification performance up to an order of magnitude (79% in our use case) when the verification tasks of a MT are scaling up (Section 6).
These two solutions are implemented by extending VeriATL. The source code of our implementations and complete artifacts used in our evaluation are publicly available 12 .
This paper extends an article contributed to the FASE 2017 conference [START_REF] Cheng | A deductive approach for fault localization in ATL model transformations[END_REF] by the same authors. While the conference article introduced the fault localization approach, this paper recognizes that the applicability of our slicing approach is more general, and can benefit other requirements for industry transfer, such as scalability. Paper organization. Section 2 motivates by example the need for fault localization and scalability in MT verification. Section 3 presents our solution for fault localization in the deductive verification of MT. Section 4 illustrates in detail the key component of this first application, the deductive decomposition and slicing approach. Section 5 applies this slicing approach to our second task, i.e. enhancing general scalability in deductive MT verification. The practical applicability and performance of our solutions are shown by evaluation in Section 6. Finally, Section 7 compares our work with related research, and Section 8 presents our conclusions and proposed future work.
Fig. 1. The hierarchical and flattened state machine metamodel
Motivating Example
We consider as a running case a MT that transforms hierarchical state machine (HSM) models to flattened state machine (FSM) models, namely the HSM2FSM transformation. Both models conform to the same simplified state machine metamodel (Fig. 1). For clarity, classifiers in the two metamodels are distinguished by the HSM and FSM prefix. In detail, a named StateMachine contains a set of labelled Transitions and named AbstractStates. Each AbstractState has a concrete type, which is either RegularState, InitialState or CompositeState. A Transition links a source to a target AbstractState. Moreover, CompositeStates are only allowed in the models of HSM, and optionally contain a set of AbstractStates.
Fig. 2 depicts a HSM model that includes a composite state3 . Fig. 3 demonstrates how the HSM2FSM transformation is expected to flatten it: (a) composite states need to be removed, the initial state within needs to become a regular state, and all the other states need to be preserved; (b) transitions targeting a composite state need to redirect to the initial state of such composite state, transitions outgoing from a composite state need to be duplicated for the states within such composite state, and all the other transitions need to be preserved.
Specifying OCL contracts. We consider a contract-based development scenario where the developer first specifies correctness conditions for the to-be-developed ATL transformation by using OCL contracts. For example, let us consider the contract shown in Listing 1. The precondition Pre1 specifies that in the input model, each Transition has at least one source. The postcondition Post1 specifies that in the output model, each Transition has at least one source.
While pre-/post-conditions in Listing 1 are generic well-formedness properties for state machines, the user could specify transformationspecific properties in the same way. For instance, the complete version of this use case also contains the following transformation-specific Developing the ATL transformation. Then, the developer implements the ATL transformation HSM2FSM (a snippet is shown in Listing 2 4 ). The transformation is defined via a list of ATL matched rules in a mapping style. The first rule maps each StateMachine element to the output model (SM2SM ). Then, we have two rules to transform AbstractStates: regular states are preserved (RS2RS ), initial states are transformed into regular states when they are within a composite state (IS2RS ). Notice here that initial states are deliberately transformed partially to demonstrate our problem, i.e. we miss a rule that specifies how to transform initial states when they are not within a composite state. The remaining three rules are responsible for mapping the Transitions of the input state machine.
1 context HSM!Transition inv Pre1:
2   HSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())
3 --------------------------------
4 context FSM!Transition inv Post1:
5   FSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())
Each ATL matched rule has a from section where the source pattern to be matched in the source model is specified. An optional OCL constraint may be added as the guard, and a rule is applicable only if the guard evaluates to true on the source pattern. Each rule also has a to section which specifies the elements to be created in the target model. The rule initializes the attributes/associations of a generated target element via the binding operator (<-). An important feature of ATL is the use of an implicit resolution algorithm during the target property initialization. Here we illustrate the algorithm by an example: 1) considering the binding stateMachine <-rs1.stateMachine in the RS2RS rule (line 13 of Listing 2), its righthand side is evaluated to be a source element of type HSM!StateMachine; 2) the resolution algorithm then resolves such source element to its corresponding target element of type FSM!StateMachine (generated by the SM2SM rule); 3) the resolved result is assigned to the left-hand side of the binding. While not strictly needed for understanding this paper, we refer the reader to [START_REF] Jouault | ATL: A model transformation tool[END_REF] for a full description of the ATL language.
Formally verifying the ATL transformation by VeriATL. The source and target EMF metamodels and OCL contracts combined with the developed ATL transformation form a VC which can be used to verify the correctness of the ATL transformation for all possible inputs, i.e. MM, Pre, Exec Post. The VC semantically means that, assuming the axiomatic semantics of the involved EMF metamodels (MM ) and OCL preconditions (Pre), by executing the developed ATL transformation (Exec), the specified OCL postcondition has to hold (Post).
In previous work, Cheng et al. have developed the VeriATL verification system that allows such VCs to be soundly verified [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Specifically, the VeriATL system describes in Boogie what correctness means for the ATL language in terms of structural VCs. Then, Ve-riATL delegates the task of interacting with Z3 for proving these VCs to Boogie.In particular, VeriATL encodes: 1) MM using axiomatized Boogie constants to capture the semantics of metamodel classifiers and structural features, 2) Pre and Post using first order logic Boogie assumption and assertion statements respectively to capture the pre-/post-conditions of MTs, 3) Exec using Boogie procedures to capture the matching and applying semantics of ATL MTs. We refer our previous work [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF] for the technical description of how to map a VC to its corresponding Boogie program.
Problem 1: Debugging. In our example, VeriATL successfully reports that the OCL postcondition Post1 is not verified by the MT in Listing 2. This means that the transformation does not guarantee that each Transition has at least one source in the output model. Without any capability of fault localization, the developer needs to manually inspect the full transformation and contracts to understand that the transformation is incorrect because of the absence of an ATL rule to transform InitialStates that are not within a CompositeState.
To address problem 1, our aim is to design a fault localization approach that automatically presents users with the information in Listing 3 (described in detail in the following Section 3.1). The output includes: (a) the slice of the MT code containing the bug (which in this case involves only three rules), (b) a set of debugging clues, deduced from the original postcondition (in this case pointing to the fact that T2TC can generate transitions without a source). We argue that this information is a valuable help in identifying the cause of the bug.
Problem 2: Scalability. While for illustrative purposes in this paper we consider a very small transformation, it is not difficult to extend it to a realistically sized scenario. For instance we can imagine Listing 2 to be part (up to renaming) of a refactoring transformation for the full UML (e.g. including statecharts, but also class diagrams, sequence diagrams, activity diagrams, etc.). Since the UML v2.5 [START_REF]Object Management Group: Unified modeling language (ver. 2.5)[END_REF] metamodel contains 194 concrete classifiers (plus 70 abstract classifiers), even the basic task of simply copying all the elements not involved in the refactoring of Listing 2 would require at least 194 rules. Such a large transformation would need to be verified against the full set of UML invariants, which describe the well-formedness of UML artifacts according to the specification 5 . While standard VeriATL is successfully used for contract-based development of smaller transformations [START_REF] Cheng | A deductive approach for fault localization in ATL model transformations[END_REF], in our experimentation we show that it needs hours to verify a refactoring on the full UML against 50 invariants.
To address problem 2, we design a scalable verification approach aiming at 1) reducing the verification complexity/time of each postcondition (Section 5.1) and 2) grouping postconditions that have a high probability of sharing proofs when verified in a single verification task (Section 5.2). Thanks to these techniques, the verification time of our use case in UML refactoring is reduced by about 79%.

5 OCL invariants for UML: http://bit.ly/UMLContracts
Fault localization in the running case
We propose a fault localization approach that, in our running example, presents the user with two problematic transformation scenarios. One of them is shown in Listing 3. The scenario consists of the input preconditions (abbreviated at line 1), a slice of the transformation (abbreviated at lines 3 -5), and a sub-goal derived from the input postcondition. The subgoal contains a list of hypotheses (lines 7 -12) with a conclusion (line 13).
The scenario in Listing 3 contains the following information, that we believe to be valuable in identifying and fixing the fault:
-Transformation slice. The only relevant rules for the fault captured by this problematic transformation scenario are RS2RS, IS2RS and T2TC (lines 3 -5). They can be directly highlighted in the source code editor. -Debugging clues. The error occurs when a transition t0 is generated by the rule T2TC (lines 8 -10), and when the source state of the transition is not generated (line 11). In addition, the absence of the source for t0 is due to the fact that none of the RS2RS and IS2RS rules is invoked to generate it (line 12).
From this information, the user can find a counter-example in the source models that falsifies Post1 (shown in the top of Fig. 4): a transition t_c between an initial state i_c (which is not within a composite state) and a composite state c_c, where c_c contains another initial state. This counter-example matches the source pattern of the T2TC rule (as shown in the bottom of Fig. 4). However, when the T2TC rule tries to initialize the source of the generated transition t2 (line 41 in Listing 2), i_c cannot be resolved because there is no rule to match it. In this case, i_c (of type HSM!InitialState) is directly used to initialize the source of t2 (t2.source is expected to be a sub-type of FSM!AbstractState). This causes a type-mismatch exception, thus falsifying Post1. The other problematic transformation scenario pinpoints the same fault, showing that Post1 is also not verified by the MT when t0 is generated by T2TA.
In the next sections, we describe in detail how we automatically generate problematic transformation scenarios like the one shown in Listing 3.
Solution overview
The flowchart in Fig. 5 shows a bird's eye view of our approach to enable fault localization for VeriATL. The process takes the involved metamodels, all the OCL preconditions, the ATL transformation and one of the OCL postconditions as inputs. We require all inputs to be syntactically correct. If VeriATL successfully verifies the input ATL transformation, we directly report a confirmation message to indicate its correctness (w.r.t. the given postcondition) and the process ends. Otherwise, we generate a set of problematic transformation scenarios and a proof tree for the transformation developer.
To generate problematic transformation scenarios, we first perform a systematic approach to generate sub-goals for the input OCL postcondition. Our approach is based on a set of sound natural deduction rules (Section 4.1). The set contains 16 rules for propositional and predicate logic (such as introduction/elimination rules for ∧ and ∨ [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]), as well as 4 rules specifically designed for ATL expressions (e.g. for rewriting single-valued navigation expressions).
Then, we design an automated proof strategy that applies these natural deduction rules on the input OCL postcondition (Section 4.2). Executing our proof strategy generates a proof tree. The non-leaf nodes are intermediate results of deduction rule applications. The leafs in the tree are the sub-goals to prove. Each sub-goal consists of a list of hypotheses and a conclusion to be verified. The aim of our automated proof strategy is to simplify the original postcondition as much as possible to obtain a set of sub-conclusions to prove. As a byproduct, we also deduce new hypotheses from the input postcondition and the transformation, as debugging clues.
Next, we use the trace information in the hypotheses of each sub-goal to slice the input MT into simpler transformation contexts (Section 4.3). We then form a new VC for each subgoal consisting of the semantics of metamodels, input OCL preconditions, sliced transformation context, its hypotheses and its conclusion.
We send these new VCs to the VeriATL verification system to check. Notice that successfully proving these new VCs implies the satisfaction of the input OCL postcondition. If any of these new VCs is not verified by VeriATL, the input OCL preconditions, the corresponding sliced transformation context, hypotheses and conclusion of the VC are presented to the user as a problematic transformation scenario for fault localization. The VCs that were automatically proved by VeriATL are pruned away, and are not presented to the transformation developer. This deductive verification step by VeriATL makes the whole process practical, since the user is presented with a limited number of meaningful scenarios. Then, the transformation developer consults the generated problematic transformation scenarios and the proof tree to debug the ATL transformation. If modifications are made to the inputs to fix the bug, the generation of sub-goals needs to start over. The whole process keeps iterating until the input ATL transformation is correct w.r.t. the input OCL postcondition.
A Deductive Approach to Transformation Slicing
The key step in the solution for fault localization that we described in the previous section is a general technique for: 1) decomposing the postcondition into sub-goals by applying MT-specific natural deduction rules, and 2) slicing the MT, for each sub-goal, to the only rules that may be responsible for fulfilling that sub-goal.
In this section we describe this algorithm in detail, and in the next section we show that its usefulness goes beyond fault localization, by applying it for enhancing the general scalability of VeriATL.
Natural Deduction Rules for ATL
Our approach relies on 20 natural deduction rules (7 introduction rules and 13 elimination rules). The 4 elimination rules (abbreviated by Xe) that specifically involve ATL are shown in Fig. 6. The other rules are common natural deduction rules for propositional and predicate logic [START_REF] Huth | Logic in Computer Science: Modelling and Reasoning About Systems[END_REF]. Regarding the notations in our natural deduction rules:
-Each rule has a list of hypotheses and a conclusion, separated by a line. We use standard notation for typing (:) and set operations.
-Some special notations in the rules are T for a type, MM_T for the target metamodel, R_n for a rule n in the input ATL transformation, x.a for a navigation expression, and i for a fresh variable / model element.
In addition, we introduce the following auxiliary functions: cl returns the classifier types of the given metamodel, trace returns the ATL rules that generate the input type (i.e. the static trace information) 6 , genBy(i,R) is a predicate indicating that a model element i is generated by the rule R, unDef(i) abbreviates i.oclIsUndefined(), and All(T) abbreviates T.allInstances().
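As an illustration, the static trace information can be read directly off the rule signatures. The following Java sketch (illustrative only; the rule and type names follow the running example where they appear in the text, and T2TB is an assumed name for the third transition-mapping rule) derives trace(T) as a map from target types to the rules that can generate them.

import java.util.*;

// Sketch: computing the static trace information trace(T) from rule signatures.
public class StaticTrace {

    // For each ATL rule, the target types it declares in its 'to' section.
    static final Map<String, List<String>> RULE_OUTPUT_TYPES = Map.of(
        "SM2SM", List.of("FSM!StateMachine"),
        "RS2RS", List.of("FSM!RegularState"),
        "IS2RS", List.of("FSM!RegularState"),
        "T2TA",  List.of("FSM!Transition"),
        "T2TB",  List.of("FSM!Transition"),
        "T2TC",  List.of("FSM!Transition"));

    // trace(T): the set of rules that may generate elements of type T.
    static Map<String, Set<String>> computeTrace() {
        Map<String, Set<String>> trace = new HashMap<>();
        RULE_OUTPUT_TYPES.forEach((rule, types) ->
            types.forEach(t ->
                trace.computeIfAbsent(t, k -> new HashSet<>()).add(rule)));
        return trace;
    }

    public static void main(String[] args) {
        // e.g. trace(FSM!Transition) = {T2TA, T2TB, T2TC}
        System.out.println(computeTrace().get("FSM!Transition"));
    }
}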
Some explanation is in order for the natural deduction rules that are specific to ATL:
- First, we have two type elimination rules (TPe1, TPe2). TPe1 states that every single-valued navigation expression of type T in the target metamodel is either a member of all generated instances of type T or undefined. TPe2 states that the cardinality of every multi-valued navigation expression of type T in the target metamodel is either greater than zero (and every element i in the multi-valued navigation expression is either a member of all generated instances of type T or undefined) or equal to zero.
- Second, we have two trace elimination rules (TRe1, TRe2), which relate the generated elements of a target type to the rules that can generate them; their intuition is detailed in the soundness discussion below.

The set of natural deduction rules is sound, as we show in the rest of this section. However, it is not complete, and we expect to extend it in future work. As detailed in Section 6.3, when the bug affects a postcondition that we do not support because of this incompleteness, we report to the user our inability to perform fault localization for that postcondition.
Fig. 6 The elimination rules specific to ATL:

TPe1:   x.a : T     T ∈ cl(MM_T)
        ---------------------------------------------------------------
        x.a ∈ All(T) ∨ unDef(x.a)

TPe2:   x.a : Seq T     T ∈ cl(MM_T)
        ---------------------------------------------------------------
        (|x.a| > 0 ∧ ∀i • (i ∈ x.a ⇒ i ∈ All(T) ∨ unDef(i))) ∨ |x.a| = 0

TRe1:   T ∈ cl(MM_T)     trace(T) = {R_1, ..., R_n}     i ∈ All(T)
        ---------------------------------------------------------------
        genBy(i, R_1) ∨ ... ∨ genBy(i, R_n)

TRe2:   T ∈ cl(MM_T)     trace(T) = {R_1, ..., R_n}     i : T     unDef(i)
        ---------------------------------------------------------------
        ¬(genBy(i, R_1) ∨ ... ∨ genBy(i, R_n))
Soundness of natural deduction rules. The soundness of our natural deduction rules is based on the operational semantics of the ATL language. The soundness of the type elimination rules TPe1 and TPe2 is straightforward: we prove it by enumerating the possible states of initialized navigation expressions for target elements. Assume that the state of a navigation expression x.a is initialized in the form x.a <- exp, where x.a is of a non-primitive type T:
- If exp is not of a collection type and cannot be resolved (i.e. exp cannot match the source pattern of any ATL rule), then x.a is undefined 7 .
- If exp is not of a collection type and can be resolved, then the target element generated by the ATL rule that matches exp is assigned to x.a. Consequently, x.a could be either a member of All(T) (when the resolution result is of type T) or undefined (when it is not).
- If exp is of a collection type, then all the elements in exp are resolved individually, the resolved results are put together into a pre-allocated collection col, and col is assigned to x.a.
The first two cases explain the two possible states of every single-valued navigation expression (TPe1). The third case explains the two possible states of every multi-valued navigation expression (TPe2). The soundness of the trace elimination rule TRe1 is based on the surjectivity between each ATL rule and the type of its created target elements [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]: elements in the target metamodel exist only if they have been created by an ATL rule, since standard ATL transformations are always executed on an initially empty target model. When a type can be generated by executing more than one rule, a disjunction considering all these possibilities is made for every generated element of this type.
About the soundness of the TRe2 rule, we observe that if a target element of type T is undefined, then clearly it does not belong to All(T). In addition, the operational semantics of the ATL language specifies that if a rule R is specified to generate elements of type T, then every target element of type T generated by that rule belongs to All(T) (i.e. R ∈ trace(T) ⇒ ∀i • (genBy(i, R) ⇒ i ∈ All(T))) [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Thus, TRe2 is sound as a logical consequence of the operational semantics of the ATL language (i.e. R ∈ trace(T) ⇒ ∀i • (i ∉ All(T) ⇒ ¬genBy(i, R))).
Automated Proof Strategy
A proof strategy is a sequence of proof steps. Each step defines the consequences of applying a natural deduction rule on a proof tree. A proof tree consists of a set of nodes. Each node consists of a set of OCL expressions as hypotheses, an OCL expression as the conclusion, and another node as its parent node.
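For concreteness, the proof tree could be represented by a data structure along the lines of the following Java sketch; the names mirror those used in Algorithm 1 but are otherwise our own illustrative choices, not VeriATL's actual implementation.

import java.util.*;

// Sketch of the proof-tree data structure used by the proof strategy.
public class ProofTree {

    // A node: a set of hypotheses, a conclusion, and a link to its parent node.
    public record Node(Set<String> hypotheses, String conclusion, Node parent) {}

    private final List<Node> nodes = new ArrayList<>();

    public Node createNode(Set<String> hypotheses, String conclusion, Node parent) {
        Node n = new Node(hypotheses, conclusion, parent);
        nodes.add(n);
        return n;
    }

    // A leaf is a node that no other node references as its parent.
    public List<Node> getLeafs() {
        Set<Node> parents = new HashSet<>();
        for (Node n : nodes) {
            if (n.parent() != null) parents.add(n.parent());
        }
        List<Node> leafs = new ArrayList<>();
        for (Node n : nodes) {
            if (!parents.contains(n)) leafs.add(n);
        }
        return leafs;
    }

    public static void main(String[] args) {
        ProofTree tree = new ProofTree();
        Node root = tree.createNode(Set.of(), "Post1", null);
        tree.createNode(Set.of("genBy(t0, T2TC)"), "not t0.source.oclIsUndefined()", root);
        System.out.println(tree.getLeafs().size()); // 1: only the sub-goal node is a leaf
    }
}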
Next, we illustrate a proof strategy (Algorithm 1) that automatically applies our natural deduction rules on the input OCL postcondition. The goal is to automate the derivation of information from the postcondition as hypotheses, and simplify the postcondition as much as possible.
Algorithm 1 An automated proof strategy for VeriATL
1:  Tree ← {createNode({}, Post, null)}
2:  do
3:    leafs ← size(getLeafs(Tree))
4:    for each node leaf ∈ getLeafs(Tree) do
5:      apply the introduction rules to leaf and attach any resulting child nodes to Tree
6:    end for
7:  while leafs ≠ size(getLeafs(Tree))
8:  do
9:    leafs ← size(getLeafs(Tree))
10:   for each node leaf ∈ getLeafs(Tree) do
11:     apply the elimination rules to leaf and attach any resulting child nodes to Tree
12:   end for
13: while leafs ≠ size(getLeafs(Tree))
Our proof strategy takes one argument, which is one of the input postconditions. It initializes the proof tree by constructing a new root node with the input postcondition as conclusion, no hypotheses and no parent node (line 1). Next, our proof strategy performs two sequences of proof steps. The first sequence applies the introduction rules on the leaf nodes of the proof tree to generate new leafs (lines 2-7). It terminates when no new leafs are yielded (line 7). The second sequence of steps applies the elimination rules on the leaf nodes of the proof tree (lines 8-13). We only apply type elimination rules on a leaf when: (a) a free variable is in its hypotheses, and (b) a navigation expression of the free variable is referred to by its hypotheses. Furthermore, to ensure termination, we enforce that if applying a rule on a node does not yield new descendants (i.e. nodes whose hypotheses or conclusion differ from their parent's), then we do not attach new nodes to the proof tree.
Transformation Slicing
Executing our proof strategy generates a proof tree. The leafs of the tree are the sub-goals to be proved by VeriATL. Next, we use the rules referred to by the genBy predicates in the hypotheses of each sub-goal to slice the input MT into a simpler transformation context. We then form a new VC for each sub-goal consisting of the axiomatic semantics of the metamodels, the input OCL preconditions, the sliced transformation context (Exec_sliced), its hypotheses and its conclusion, i.e. MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion.
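The slicing step itself is simple once the hypotheses are available. The following Java sketch (hypotheses represented as plain strings for illustration; not the actual implementation) collects the rules mentioned by genBy predicates.

import java.util.*;
import java.util.regex.*;

// Sketch: slicing the transformation to the rules referenced by genBy
// predicates in the hypotheses of a sub-goal.
public class SubGoalSlicer {

    private static final Pattern GEN_BY =
        Pattern.compile("genBy\\(\\s*\\w+\\s*,\\s*(\\w+)\\s*\\)");

    // Returns the names of the rules that the sub-goal depends on.
    public static Set<String> slice(List<String> hypotheses) {
        Set<String> rules = new LinkedHashSet<>();
        for (String hyp : hypotheses) {
            Matcher m = GEN_BY.matcher(hyp);
            while (m.find()) {
                rules.add(m.group(1));
            }
        }
        return rules;
    }

    public static void main(String[] args) {
        List<String> hypotheses = List.of(
            "genBy(t0, T2TC)",
            "not (genBy(s0, RS2RS) or genBy(s0, IS2RS))");
        // The sliced context for this sub-goal: [T2TC, RS2RS, IS2RS]
        System.out.println(slice(hypotheses));
    }
}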
If any of these new VCs is not verified by VeriATL, the input OCL preconditions, the corresponding sliced transformation context, hypotheses and conclusion of the VC are constructed as a problematic transformation scenario to report back to the user for fault localization (as shown in Listing 3).
Correctness. The correctness of our transformation slicing is based on the concept of rule irrelevance (Theorem 1): the axiomatic semantics of the rule(s) being sliced away (Exec_irrelevant) has no effect on the verification outcome of the corresponding sub-goal.

Theorem 1 (Rule Irrelevance - Sub-goals). MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion ⇐⇒ MM, Pre, Exec_sliced ∪ irrelevant, Hypotheses ⊢ Conclusion 8

Proof. Each ATL rule is exclusively responsible for the generation of its output elements (i.e. no aliasing) [START_REF] Hidaka | On additivity in transformation languages[END_REF][START_REF] Tisi | Parallel execution of ATL transformation rules[END_REF]. Hence, when a sub-goal specifies a condition that a set of target elements should satisfy, the rules that do not generate these elements have no effect on the verification outcome of that sub-goal. These rules can hence be safely sliced away.

8 Exec_sliced ∪ irrelevant ⇐⇒ Exec_sliced ∧ Exec_irrelevant
Scalability by Transformation Slicing
The ability to decompose contracts and slice the transformation, as described in the previous section, can also be exploited internally to enhance the scalability of the verification process.
Typically, verification tools like VeriATL will first formulate VCs to pass to the theorem prover. Then, they may try to enhance performance by decomposing and/or composing these VCs:
- VCs can be decomposed, creating smaller VCs that may be more manageable for the theorem prover. For instance, Leino et al. introduce a VC optimization in Boogie (hereby referred to as VC splitting) to automatically split VCs based on the control-flow information of programs [START_REF] Leino | Verification condition splitting[END_REF]. The idea is to align each postcondition to its corresponding path(s) in the control flow, and then to form smaller VCs to be verified in parallel.
- VCs can be composed, e.g. by constructing a single VC to prove the conjunction of all postconditions (hereby referred to as VC conjunction). This has the benefit of enabling the sharing of parts of the proofs of different postconditions (i.e. the theorem prover might discover that lemmas for proving one conjunct are also useful for proving other terms).
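In sequent notation, and assuming just two postconditions for illustration, the two options amount to choosing between the following proof obligations:

VC splitting:     MM, Pre, Exec ⊢ Post_1     and     MM, Pre, Exec ⊢ Post_2
VC conjunction:   MM, Pre, Exec ⊢ Post_1 ∧ Post_2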
However, domain-agnostic composition or decomposition does not provide significant speedups in our running case. For instance, the Boogie-level VC splitting has no measurable effect. Once the transformation is translated into an imperative Boogie program, transformation rules, even if independent from each other, become part of a single path in the control flow [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF]. Hence, each postcondition is always aligned to the whole set of transformation rules. We argue that a similar behavior would also be observed if the transformation was directly developed in an imperative language (Boogie or a general-purpose language): a domain-agnostic VC optimization does not have enough information to identify the independent computation units within the transformation (i.e. the rules).
In what follows, we propose a two-step method to construct more efficient VCs for verifying large MTs. In the first step, we apply our MT-specific slicing technique (Section 4) on top of the Boogie-level VC splitting (Section 5.1): thanks to the abstraction level of the ATL language, we can align each postcondition to the ATL rules it depends on, thereby greatly reducing the size of each constructed VC. In the second step, we propose an ATL-specific algorithm to decide when to conjoin or split VCs (Section 5.2), improving on domain-agnostic VC conjunction.
Applying the Slicing Approach
Our first ATL-level optimization aims to verify each postcondition only against the rules that may impact it (instead of verifying it against the full MT), thus reducing the burden on the SMT solver. This is achieved by a transformation slicing approach for postconditions: we first apply the decomposition into sub-goals and the slicing technique from Section 4, and then merge the slices of the generated sub-goals. The MT rules that lie outside the union are sliced away, and the VC for each postcondition becomes: MM, Pre, Exec_slice ⊢ Post, where Exec_slice stands for the axiomatic semantics of the sliced transformation, and the sliced transformation is the union of the rules that affect the sub-goals of the postcondition.
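Stated as a formula (with subgoals(Post) denoting the sub-goals produced by the proof strategy of Section 4; the notation is ours, introduced here only for compactness):

Exec_slice(Post) = ⋃_{g ∈ subgoals(Post)} Exec_sliced(g),     and the VC becomes     MM, Pre, Exec_slice(Post) ⊢ Post.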
Correctness. We first define a complete application of the automated proof strategy in Definition 1.
Definition 1 (Complete application of the automated proof strategy). The automated proof strategy is completely applied to a postcondition if it correctly identifies every element of target types referred by each sub-goal and every rule that may generate them.
Clearly, if not detected, an incomplete application of our automated proof strategy could cause our transformation slicing to erroneously slice away rules that a postcondition might depend on, and invalidate our slicing approach to verify postconditions. We discuss how we currently handle, and can improve, the completeness of the automated proof strategy in Section 6.3. One of the keys to handling incomplete cases is that we defensively construct the slice to be the full MT. Thus, the VCs of the incomplete cases become MM, Pre, Exec ⊢ Post. This key point is used to establish the correctness of our slicing approach to verify postconditions (Theorem 2).
Theorem 2 (Rule Irrelevance - Postconditions). MM, Pre, Exec_sliced ⊢ Post ⇐⇒ MM, Pre, Exec_sliced ∪ irrelevant ⊢ Post

Proof. We prove this theorem by a case analysis on whether the application of our automated proof strategy is complete:
- Assuming our automated proof strategy is completely applied. First, because of the soundness of our natural deduction rules, the generated sub-goals are a sound abstraction of their corresponding original postcondition. Second, based on the assumption that our automated proof strategy is completely applied, we can ensure that the union of the static trace information over the sub-goals of a postcondition contains all the rules that might affect the verification result of that postcondition. Based on these two points, we can conclude that slicing away its irrelevant rules has no effect on the verification outcome of a postcondition, following the same proof strategy as in Theorem 1.
- Assuming our automated proof strategy is not completely applied. In this case, we defensively use the full transformation as the slice, and our theorem becomes MM, Pre, Exec ⊢ Post ⇐⇒ MM, Pre, Exec ⊢ Post, which is trivially proved.
For example, Listing 4 shows the VC constructed for Post1 of Listing 1 by using our program slicing technique. It concisely aligns Post1 to the 4 responsible rules in the UML refactoring transformation. Note that the same slice is obtained when the rules in Listing 2 are part of a full UML refactoring. Its verification in our experimental setup (Section 6) requires less than 15 seconds, whereas verifying the same postcondition on the full transformation would exceed the 180 s timeout.
Grouping VCs for Proof Sharing
After transformation slicing, we obtain a simpler VC for each postcondition. We now aim to group the VCs obtained from the previous step in order to further improve performance. In particular, by detecting VCs that are related and grouping them in a conjunction, we encourage the underlying SMT solver to reuse sub-proofs while solving them. We propose a heuristic to identify the postconditions that should be compositionally verified, by leveraging again the results from our deductive slicing approach.
In our context, grouping two VCs A and B means MM, Pre, Exec_A∪B ⊢ Post_A ∧ Post_B. That is, taking into account the axiomatic semantics of the metamodels, the preconditions, and the rules impacting A or B, the VC proves the conjunction of postconditions A and B.
It is difficult to precisely identify the cases in which grouping two VCs will improve efficiency. Our main idea is to prioritize groups that have a high probability of sharing sub-proofs. Conservatively, we also want to avoid grouping an already complex VC with any other one, but this requires being able to estimate verification complexity. Moreover, we want to base our algorithm exclusively on static information from VCs, because obtaining dynamic information is usually expensive in large-scale MT settings.
We propose an algorithm based on two properties that are obtained by applying the natural deduction rules of our slicing approach: the number of static traces and the number of sub-goals of each postcondition. Intuitively, each of the two properties is representative of a different cause of complexity: 1) when a postcondition is associated with a large number of static traces, its verification is challenging because it needs to consider a large part of the transformation, i.e. a large set of semantic axioms generated in Boogie by VeriATL; 2) a postcondition that results in a large number of sub-goals indicates a large number of combinations that the theorem prover will have to consider in a case-analysis step.
We present our grouping approach in Algorithm 2. Its inputs are a set of postconditions P and two further parameters: the maximum number of traces per group (MAX_t) and the maximum number of sub-goals per group (MAX_s). The result is the set of grouped VCs (G).
The algorithm starts by sorting the input postconditions according to their trace set size (in ascending order). Then, for each postcondition p, it tries to pick from G the candidate groups (C) that may be grouped with p (lines 5 to 10). A group is considered a candidate to host the given postcondition if the inclusion of the postcondition in the candidate group (trail) does not yield a group whose trace set size and total number of sub-goals exceed MAX_t and MAX_s.
If there are no candidate groups to host the given postcondition, a new group is created (lines 11 to 12). Otherwise, we rank the suitability of the candidate groups to host the postcondition by using the auxiliary function rank (lines 13 to 15). A group A has a higher rank than another group B to host a given postcondition p if grouping A and p yields a smaller trace set than grouping B and p. When two groups have the same ranking in terms of traces, we give a higher rank to the group that yields the smaller total number of sub-goals when including the input postcondition.
This ranking is a key aspect of the grouping approach: (a) postconditions with overlapping trace sets are prioritized (since the union of their trace sets will be smaller); this raises the probability of proof sharing, since overlapping trace sets indicate that the proofs of the two postconditions have to consider the logic of some common transformation rules; (b) postconditions with shared sub-goals are prioritized (since the union of the total number of sub-goals will be smaller); this also raises the probability of proof sharing, since a case analysis on the same sub-goals does not need to be repeated.
Finally, after each postcondition has found a group in G that can host it, we generate the VCs for each group in G and return them. Note that the verification of a group of VCs yields a single result for the whole group. If the user wants to know exactly which postconditions have failed, the postconditions in the failed group need to be verified separately.
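The heuristic can be sketched in Java as follows; the types, the example thresholds and the simplified ranking are our own illustrative choices (not the actual Algorithm 2 implementation), with trace sets and sub-goal counts assumed to be precomputed by the slicing step.

import java.util.*;

// Sketch of the grouping heuristic: postconditions are greedily assigned to
// the candidate group that minimizes the merged trace set, then the total
// number of sub-goals; otherwise a new group is opened.
public class GroupingSketch {

    record Post(String name, Set<String> traces, int subGoals) {}

    static class Group {
        final List<Post> members = new ArrayList<>();
        final Set<String> traces = new HashSet<>();
        int subGoals = 0;

        int tracesIfAdded(Post p) {
            Set<String> union = new HashSet<>(traces);
            union.addAll(p.traces());
            return union.size();
        }

        boolean canHost(Post p, int maxTraces, int maxSubGoals) {
            return tracesIfAdded(p) <= maxTraces && subGoals + p.subGoals() <= maxSubGoals;
        }

        void add(Post p) {
            members.add(p);
            traces.addAll(p.traces());
            subGoals += p.subGoals();
        }
    }

    static List<Group> group(List<Post> posts, int maxTraces, int maxSubGoals) {
        List<Post> sorted = new ArrayList<>(posts);
        sorted.sort(Comparator.comparingInt(p -> p.traces().size())); // ascending trace size
        List<Group> groups = new ArrayList<>();
        for (Post p : sorted) {
            Group best = null;
            for (Group g : groups) {
                if (!g.canHost(p, maxTraces, maxSubGoals)) continue;
                if (best == null
                        || g.tracesIfAdded(p) < best.tracesIfAdded(p)
                        || (g.tracesIfAdded(p) == best.tracesIfAdded(p)
                            && g.subGoals < best.subGoals)) {
                    best = g;
                }
            }
            if (best == null) {          // no candidate group: open a new one
                best = new Group();
                groups.add(best);
            }
            best.add(p);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Post> posts = List.of(
            new Post("P1", Set.of("R1", "R2"), 3),
            new Post("P2", Set.of("R2", "R3"), 4),
            new Post("P3", Set.of("R7", "R8", "R9"), 12));
        // With MAX_t = 6 and MAX_s = 18, P1 and P2 share a group and P3 is alone.
        System.out.println(group(posts, 6, 18).size()); // prints 2
    }
}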
Correctness. The correctness of our grouping algorithm is shown by its soundness as stated in Theorem 3.
Theorem 3 (Soundness of Grouping). MM, Pre, Exec_A∪B ⊢ Post_A ∧ Post_B =⇒ (MM, Pre, Exec_A ⊢ Post_A) ∧ (MM, Pre, Exec_B ⊢ Post_B)
Proof. Follows from the properties of logical conjunction and Theorem 2.
Evaluation
In this section, we first evaluate the practical applicability of our fault localization approach (Section 6.1), then we assess the scalability of our performance optimizations (Section 6.2). Last, we conclude this section with a discussion of the obtained results and lessons learned (Section 6.3).
Our evaluation uses the VeriATL verification system [START_REF] Cheng | A sound execution semantics for ATL via translation validation[END_REF], which is based on the Boogie verifier (version 2.3) and Z3 (version 4.5). The evaluation is performed on an Intel 3 GHz machine with 16 GB of memory running the Windows operating system. VeriATL encodes the axiomatic semantics of the ATL language (version 3.7). The automated proof strategy and its corresponding natural deduction rules are currently implemented in Java. We configure Boogie with the following arguments for fine-grained performance metrics: timeout:180 (using a verification timeout of 180 seconds) and -traceTimes (using the internal Boogie API to calculate verification times).
Fault Localization Evaluation
Before diving into the details of evaluation results and analysis, we first formulate our research questions and describe the evaluation setup.
Research questions
We formulate two research questions to evaluate our fault localization approach: (RQ1) Can our approach correctly pinpoint the faults in the given MT? (RQ2) Can our approach efficiently pinpoint the faults in the given MT?
Evaluation Setup
To answer our research questions, we use the HSM2FSM transformation as our case study, and apply mutation analysis [START_REF] Jia | An analysis and survey of the development of mutation testing[END_REF] to systematically inject faults. In particular, we specify 14 preconditions and 5 postconditions on the original HSM2FSM transformation from [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. Then, we inject faults by applying a list of mutation operators defined in [START_REF] Burgueño | Static fault localization in model transformations[END_REF] to the transformation. We apply mutations only to the transformation because we focus on contract-based development, where the contract guides the development of the transformation. Our mutants are proved against the specified postconditions, and we apply our fault localization approach in case of unverified postconditions. We refer to our online repository for the complete artifacts used in our evaluation 9 .
Evaluation Results
Table 1 summarizes the evaluation results for our fault localization approach on the chosen case study. The first column lists the identity of the mutants 10 . The second and third columns record the unverified OCL postconditions and their corresponding verification time.
The fourth, fifth, sixth and seventh columns record information on the verification of the sub-goals, i.e. the number of unverified sub-goals / total number of sub-goals (4th), the average verification time of the sub-goals (5th), the maximum verification time among the sub-goals (6th), and the total verification time of the sub-goals (7th), respectively. The last column records whether the faulty lines (L_faulty, i.e. the lines that the mutation operators operated on) are presented in the problematic transformation scenarios (PTS) of the unverified sub-goals.

9 A deductive approach for fault localization in ATL MTs (online): https://github.com/veriatl/VeriATL/tree/FaultLoc
10 The naming convention for mutants is: mutation operator Add(A) / Del(D) / Modify(M), followed by the mutation operand Rule(R) / Filter(F) / TargetElement(T) / Binding(B), followed by the position of the operand in the original transformation. For example, MB1 stands for the mutant which modifies the binding in the first rule.

First, we confirm that there are no inconclusive verification results among the generated sub-goals, i.e. if VeriATL reports that the verification result of a sub-goal is unverified, then it reveals a fault in the transformation. Our confirmation is based on the manual inspection of each unverified sub-goal to check whether there is a counter-example that falsifies the sub-goal. This supports the correctness of our fault localization approach. We find that the deduced hypotheses of the sub-goals are useful for the elaboration of a counter-example (e.g. when they imply that the fault is caused by missing code, as in the case of Listing 3).
Second, as we inject faults by mutation, identifying whether the faulty line is presented in the problematic transformation scenarios of unverified sub-goals is also a strong indication of the correctness of our approach. As shown by the last column, all cases satisfy the faulty-line inclusion criterion. 3 out of 10 cases are special cases (dashed cells) where the faulty lines are deleted by the mutation operator (thus there are no faulty lines). In the case of MF6#2, no problematic transformation scenarios are generated since all the sub-goals are verified. By inspection, we report that our approach improves the completeness of VeriATL: the postcondition (#2) is correct under MF6 but cannot be verified by VeriATL, whereas all its generated sub-goals are verified.
Third, as shown by the fourth column, in 5 out of 10 cases the developer is presented with at most one problematic transformation scenario to pinpoint the fault. This positively supports the efficiency of our approach. The other 5 cases produce more sub-goals to examine. However, we find that in these cases each unverified sub-goal gives a unique phenomenon of the fault, which we believe is valuable for fixing the bug. We also report that in rare cases more than one sub-goal could point to the same phenomenon of the fault. This is because the hypotheses of these sub-goals contain a semantically equivalent set of genBy predicates. Although they are easy to identify, we would like to investigate how to systematically filter these cases out in the future.
Fourth, from the third and fifth columns, we can see that each of the sub-goals is faster to verify than its corresponding postcondition by a factor of about 2. This is because each sub-goal is a simpler verification task than the input postcondition: e.g. thanks to our transformation slicing, the VC for each sub-goal encodes a simpler interaction of transformation rules than the VC for its corresponding postcondition. From the third and sixth columns, we can further report that all sub-goals are verified in less time than their corresponding postcondition.
Scalability Evaluation
To evaluate the two steps we proposed for scalable MT verification, we first describe our research questions and the evaluation setup. Then, we detail the results of our evaluation.
Research questions
We formulate two research questions to evaluate the scalability of our verification approach:
(RQ1) Can a MT-specific slicing approach significantly increase verification efficiency w.r.t. domain-agnostic Boogie-level optimization when a MT is scaling up? (RQ2) Can our proposed grouping algorithm improve over the slicing approach for large-scale MT verifications?
Evaluation Setup
To answer our research questions, we first focus on verifying a perfect UML copier transformation w.r.t. the full set of 50 invariants (naturally, we expect the copier to satisfy all the invariants). These invariants specify the well-formedness of UML constructs, similar to the ones defined in Listing 1. We implement the copier as an ATL MT that copies each classifier of the source metamodel into the target and preserves their structural features (i.e. 194 copy rules). Note that while the copier MT has little usefulness in practice, it shares a clear structural similarity with real-world refactoring transformations. Hence, in terms of scalability analysis for deductive verification, we consider it a representative example of the class of refactoring transformations. We support this statement in Section 6.3, where we discuss the generalizability of our scalable approach by extending the experimentation to a set of real-world refactoring transformations.
Our evaluation consists of two settings, one for each research question. In the first setting, we investigate RQ1 by simulating a monotonically growing verification problem. We first sort the set of postconditions according to their verification time (obtained by verifying each postcondition separately before the experimentation). Then we construct an initial problem by taking the first (simplest) postcondition and the set of rules (extracted from the UML copier) that copy all the elements affecting that postcondition. We then expand the problem by adding the next simplest postcondition and its corresponding rules, arriving after 50 steps at the full set of postconditions and the full UML copier transformation.
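The construction of the 50 growing problems can be sketched as follows (illustrative Java only; verification times and per-invariant rule sets are assumed to be available from a preliminary run).

import java.util.*;

// Sketch: building monotonically growing verification problems for RQ1.
public class GrowingProblemsSketch {

    record Invariant(String name, double verifTimeSec, Set<String> copyRules) {}
    record Problem(List<Invariant> invariants, Set<String> rules) {}

    // Step i of the experiment verifies the i simplest invariants against the
    // union of the copy rules they depend on.
    static List<Problem> buildSteps(List<Invariant> invariants) {
        List<Invariant> sorted = new ArrayList<>(invariants);
        sorted.sort(Comparator.comparingDouble(Invariant::verifTimeSec)); // simplest first
        List<Problem> steps = new ArrayList<>();
        List<Invariant> acc = new ArrayList<>();
        Set<String> rules = new LinkedHashSet<>();
        for (Invariant inv : sorted) {
            acc.add(inv);                  // next simplest invariant
            rules.addAll(inv.copyRules()); // and its corresponding copy rules
            steps.add(new Problem(List.copyOf(acc), Set.copyOf(rules)));
        }
        return steps;
    }

    public static void main(String[] args) {
        List<Invariant> invs = List.of(
            new Invariant("inv1", 1.2, Set.of("CopyState")),
            new Invariant("inv2", 3.4, Set.of("CopyState", "CopyTransition")));
        System.out.println(buildSteps(invs).size()); // prints 2
    }
}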
At each of the 50 steps, we evaluate the performance of 2 verification approaches:
- ORG_b. The original VeriATL verification system: each postcondition is separately verified using Boogie-level VC splitting.
- SLICE. Our MT slicing technique applied on top of the ORG_b approach: each postcondition is separately verified over the transformation slice impacting that specific postcondition (as described in Section 5.1).
Furthermore, we also applied our SLICE approach to a set of real-world transformations, to assess to which extent the previous results on the UML copier transformation are generalizable: we replaced the UML copier transformation in the previous experiment with 12 UML refactoring transformations from the ATL transformation zoo 11 , and verified them against the same 50 OCL invariants. When the original UML refactorings contain currently non-supported constructs (see the discussion of language support in Section 6.3), we use our result on rule irrelevance (Theorem 2) to determine whether each invariant would produce the same VCs when applied to the copier transformation and to the refactorings. If not, we automatically issue a timeout verification result for such an invariant on the refactoring under study, which corresponds to the worst-case situation for our approach. By doing so, we ensure the fairness of the performance analysis over the whole corpus.
For answering RQ2, we focus on the verification problem for the UML copier transformation and compare two verification approaches, i.e. SLICE and GROUP, where GROUP applies the grouping algorithm on top of SLICE (as described in Section 5.2). In particular, we vary the pair of arguments MAX_t and MAX_s (i.e. the maximum number of traces and sub-goals per group) to investigate their correlation with the algorithm's performance.
Our scalability evaluation is performed on an Intel 3 GHz machine with 16 GB of memory running the Linux operating system. We refer to our online repository for the complete artifacts used in our evaluation 12
Evaluation Result
The two charts in Fig. 7 summarize the evaluation results of the first setting. In Fig. 7-(a) we record for each step the longest time taken for verifying a single postcondition at that step. In Fig. 7-(b) we record the total time taken to verify all the postconditions for each step. The two figures bear the same format. Their x-axis shows each of the steps in the first setting and the y-axis is the recorded time (in seconds) to verify each step by using the ORG b and SLICE approaches. The grey horizontal line in Fig. 7-(a) shows the verifier timeout (180s).
We learn from Fig. 7-(a) that the SLICE approach is more resilient to the increasing complexity of the problem than the ORG b approach. The figure shows that already at the 18th step the ORG b approach is not able to verify the most complex postcondition (the highest verification time reaches the timeout). The SLICE technique is able to verify all postconditions in much bigger problems, and only at the 46th step one VC exceeds the timeout.
Moreover, the results in Fig. 7-(b) support a positive answer to RQ1. The SLICE approach consistently verifies postconditions more efficiently than the ORG_b approach. In our scenario the difference is significant. Up to step 18, both approaches verify within the timeout, but the verification time for ORG_b shows exponential growth while SLICE is quasi-linear. At step 18, SLICE takes 11.8% of the time of ORG_b for the same verification result (171 s against 1445 s). For the rest of the experimentation ORG_b hits the timeout for most postconditions, while SLICE loses linearity only when the most complex postconditions are taken into account (step 30).
In our opinion, the major reason for the difference in shape in Fig. 7 is that the ORG_b approach always aligns postconditions to the whole set of transformation rules, whereas the SLICE approach aligns each postcondition only to the ATL rules it depends on, thereby greatly reducing the size of each constructed VC.
Table 2 shows to which extent the previous results on the UML copier transformation are generalizable to other MTs. For each transformation the table shows the verification time (in seconds) spent by the ORG b and SLICE approaches respectively. The fourth column shows the improvement of the SLICE approach over ORG b .
From Table 2, we learn that when using the SLICE approach on the corpus, on average 43 (50 − 7) out of 50 postconditions can be expected to show a verification performance similar to the one observed when verifying the UML copier transformation. The reason is that our SLICE approach does not depend on the degree of supported features to align postconditions to the corresponding ATL rules. This gives us more confidence that our approach can efficiently perform large-scale verification tasks, as shown in the previous experimentation, once support for the currently unsupported features is added.
We report that for the 12 transformations studied, the SLICE approach 1) is consistently faster than the ORG_b approach and 2) is consistently able to verify more postconditions than the ORG_b approach within the given timeout. On the full verification, SLICE saves on average 71% of the verification time w.r.t. ORG_b. The largest gain is in the UML2Profiles case, where we observe a 78% speed-up over ORG_b. The smallest gain is in the UML2Java case (68% speed-up w.r.t. ORG_b), caused by 9 timeouts issued because of currently non-supported constructs (e.g. imperative calls to helpers and certain iterators on OCL sequences). All in all, these results confirm the behavior observed in verifying the UML copier transformation.
Table 3 shows the evaluation results of the second setting. The first two columns record the two arguments passed to our grouping algorithm. In the 3rd column, we calculate the group ratio (GR), which measures how many of the 50 postconditions under verification are grouped with at least one other postcondition by our algorithm. In the 4th column (success rate), we calculate how many of the grouped VCs are in groups that decrease the global verification time. Precisely, if a VC P is the grouping result of VCs P_1 to P_n, T_1 is the verification time of P using the GROUP approach, and T_2 is the sum of the verification times of P_1 to P_n using the SLICE approach, then we consider P_1 to P_n successfully grouped if T_1 does not reach the timeout and T_1 is less than T_2. In the 5th column, we record the speedup ratio (SR), i.e. the difference in global verification time between the two approaches divided by the global verification time of the SLICE approach. In the 6th column, we record the time saved (TS) by the GROUP approach, calculated as the difference in global verification time (in seconds) between the two approaches.
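In other words, writing T_SLICE and T_GROUP for the global verification times of the two approaches, the last two columns report:

SR = (T_SLICE − T_GROUP) / T_SLICE          TS = T_SLICE − T_GROUP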
The second setting indicates that our grouping algorithm can contribute to performance on top of the slicing approach, when the parameters are correctly identified. In our evaluation, the highest gain in verification time (134 seconds) is achieved when limiting groups to a maximum of 6 traces and 18 sub-goals. In this case, 25 VCs participate in grouping, all of them successfully grouped. Moreover, we report that these 25 VCs would take 265 seconds to verify using the SLICE approach, more than twice the time taken by the GROUP approach. Consequently, the GROUP approach takes 1931 seconds to verify all the 50 VCs, 10% faster than the SLICE approach (2065 seconds), and 79% faster than the ORG_b approach (9047 seconds).
Table 3 also shows that the two parameters chosen as arguments have a clear correlation with the grouping ratio and success rate of grouping. When the input arguments are gradually increased, the grouping ratio increases (more groups can be formed), whereas the success rate of grouping generally decreases (as the grouped VCs tend to become more and more complex). The effect on verification time is the combination of these two opposite behaviors, resulting in a global maximum gain point (MAX t =6, MAX s =18).
Finally, Table 3 shows that the best case for grouping is obtained by parameter values that extend the group ratio as much as possible without incurring a loss of success rate. However, the optimal arguments for the grouping algorithm may depend on the structure of the transformation and constraints. Their precise estimation from statically derived information is an open problem that we consider for future work. Table 3 and our experience show that small values for the parameters (as in the first 5 rows) are safe pragmatic choices.
Discussions
In summary, our evaluations give a positive answer to all four of our research questions. They confirm that our fault localization approach can correctly and efficiently pinpoint the faults in the given MT: (a) faulty constructs are presented in the sliced transformation; (b) deduced clues assist developers in various debugging tasks (e.g. the elaboration of a counter-example); (c) the number of sub-goals that need to be examined to pinpoint a fault is usually small. Moreover, our scalability evaluation shows that our slicing and algorithmic VC grouping approaches improve verification performance by up to 79% when a MT is scaling up. However, there are also lessons we learned from the two evaluations.
Completeness. We identify three sources of incompleteness w.r.t. our proposed approaches.
First, incomplete application of the automated proof strategy (defined in Definition 1). Clearly, if not detected, an incomplete application of our automated proof strategy could cause our transformation slicing to erroneously slice away rules that a postcondition might depend on. In our current solution we are able to detect incomplete cases, report them to the user, and defensively verify them. We detect incomplete cases by checking whether every element of a target type referred to by each postcondition is accompanied by a genBy predicate (this indicates full derivation). While this situation was not observed during our experimentation, we plan to improve the completeness of the automated proof strategy in the future by extending the set of natural deduction rules for ATL and designing smarter proof strategies. By defensive verification, we mean that we construct the slice to be the full MT for the incomplete cases. Thus, the VCs of the incomplete cases become MM, Pre, Exec ⊢ Post, and fault localization is automatically disabled in these cases.
Second, incomplete verification. The Boogie verifier may in general report inconclusive results due to the underlying SMT solver. We hope the simplicity offered by our fault localization approach helps the user to distinguish between incorrect and inconclusive results. In addition, if the verification result is inconclusive, our fault localization approach can help users eliminate verified cases and find the source of the inconclusiveness. In the long run, we plan to improve the completeness of verification by integrating our approaches with interactive theorem provers such as Coq [START_REF] Bertot | Interactive Theorem Proving and Program Development: Coq'Art The Calculus of Inductive Constructions[END_REF] and Rodin [START_REF] Abrial | Rodin: An open toolset for modelling and reasoning in Event-B[END_REF] (e.g. drawing on recursive inductive reasoning). One of the easiest paths is to exploit the Why3 language [START_REF] Filliâtre | Why3 -where programs meet provers[END_REF], which targets multiple theorem provers as its back-ends.
Third, incomplete grouping. The major limitation of our grouping algorithm is that we currently do not propose any reliable deductive estimation of the optimal parameters MAX_t and MAX_s for a given transformation. Our evaluation suggests that conservatively choosing these parameters is a safe pragmatic choice. Our future work will aim at a more precise estimation by integrating more statically derived information.
Generalization of the experimentation. While evaluating our fault localization approach, we take the popular assumption in the fault localization community that multiple faults behave independently [START_REF] Wong | A survey on software fault localization[END_REF]. This assumption allows us to evaluate our fault localization approach in a one-postcondition-at-a-time manner. However, we cannot guarantee that this generalizes to realistic and industrial MTs. We think classifying contracts into related groups could improve these situations.
To further improve the generalization of our proposed approaches, we also plan to use synthesis techniques to automatically create more comprehensive contract-based MT settings. For example, using metamodels or OCL constraints to synthesize consistency-preserving MT rules [START_REF] Kehrer | Automatically deriving the specification of model editing operations from meta-models[END_REF][START_REF] Radke | Translating essential OCL invariants to nested graph constraints focusing on set operations[END_REF], or using a MT with OCL postconditions to synthesize OCL preconditions [START_REF] Cuadrado | Translating target to source constraints in model-to-model transformations[END_REF].
Language Support. Our implementation supports a core subset of the ATL and OCL languages: (a) declarative ATL (matched rules) in non-refining mode, many-to-many mappings of (possibly abstract) classifiers with the default resolution algorithm of ATL; (b) first-order OCL contracts, i.e. OCLType, OCLAny, primitives (OCLBool, OCLInteger, OCLString), collection data types (i.e. Set, OrderedSet, Sequence, Bag), and 78 OCL operations on data types, including the forAll, collect, select, and reject iterators on collections. Refining mode (which uses in-place scheduling) is supported by integrating our previous work [START_REF] Cheng | Formalised EMFTVM bytecode language for sound verification of model transformations[END_REF]. The imperative and recursive aspects of ATL are currently not considered.
Usability. Currently, our fault localization approach relies on the experience of the transformation developer to interpret the deduced debugging clues. We think that counter-example generation would make this process more user-friendly, e.g. like QuickCheck in Haskell [START_REF] Claessen | QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs[END_REF], or random testing in Isabelle/HOL [START_REF] Berghofer | Random Testing in Isabelle/HOL[END_REF]. In [START_REF] Cuadrado | Uncovering errors in ATL model transformations using static analysis and constraint solving[END_REF], the authors show how to combine derived constraints with a model finder to generate counter-examples that uncover type errors in MTs. In the future, we plan to investigate how to use this idea to combine our debugging clues with model finders to ease counter-example generation in our context. Finally, in case of large slices, we plan to automatically prioritize which unverified sub-goals the user needs to examine first (e.g. by giving higher priority to groups of unverified sub-goals within the same branch of the proof tree). We are also working on eliminating sub-goals that are logically equivalent (as discussed in Section 6.1.3).
Related Work
Scalable Verification of MT. There is a large body of work on the topic of ensuring MT correctness [START_REF] Ab | A survey of approaches for verifying model transformations[END_REF], or program correctness in general [START_REF] Hatcliff | Behavioral interface specification languages[END_REF][START_REF] Prasad | A survey of recent advances in SAT-based formal verification[END_REF].
Poernomo outlines a general proofs-as-model-transformations methodology to develop correct MTs [START_REF] Poernomo | Proofs-as-modeltransformations[END_REF]. The MT and its contracts are first encoded in a theorem prover. Then, upon proving them, a functional program can be extracted to represent the MT, based on the Curry-Howard correspondence [START_REF] Howard | The formulae-as-types notion of construction[END_REF].
UML-RSDS is a tool-set for developing correct MTs by construction [START_REF] Lano | A framework for model transformation verification[END_REF]. It chooses well-accepted concepts in MDE to make the approach more accessible to developers, i.e. it uses a combination of UML and OCL to create the MT design and contracts.
Calegari et al. encode the ATL MT and its metamodels into inductive types [START_REF] Calegari | A type-theoretic framework for certi-fied model transformations[END_REF]. The contracts for semantic correctness are given in OCL and translated into logical predicates. As a result, they can use the Coq proof assistant to interactively verify that the MT is able to produce target models that satisfy the given contracts. Büttner et al. use Z3 to verify a declarative subset of ATL and OCL contracts [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. Their approach aims at providing minimal axioms that can verify the given OCL contracts.
Our work complements these works by focusing on scalability to make the verification more practical. To our knowledge, our proposal is the first to apply transformation slicing to increase the scalability of MT verification. Our work is close to that of Leino et al. [START_REF] Leino | Verification condition splitting[END_REF], who introduce a Boogie-level VC splitting approach based on control-flow information. For example, the then and else blocks of an if statement branch the execution path and can be hints for splitting VCs. This optimization does not have significant results in our context because the control flow of ATL transformations is simple, yielding a single execution path with no potential to be split. This motivated us to investigate language-specific VC optimizations based on static information of ATL transformations. Our evaluation shows that the integration of the two approaches is successful.
Fault Localization. Being one of the most user-friendly solutions to provide users with easily accessible feedback, partially or fully automated fault localization has drawn great attention from researchers in recent years [START_REF] Roychoudhury | Formulabased software debugging[END_REF][START_REF] Wong | A survey on software fault localization[END_REF]. Program slicing refers to the identification of a set of program statements which could affect the values of interest [START_REF] Tip | A survey of program slicing techniques[END_REF][START_REF] Weiser | Program slicing[END_REF], and is often used for fault localization in general programming languages. W.r.t. other program slicing techniques, our work is more akin to traditional statement-deletion-style slicing techniques than to the family of amorphous slicing [START_REF] Harman | Amorphous program slicing[END_REF], since our approach does not alter the syntax of the MT to obtain smaller slices. While amorphous slicing could potentially produce thinner slices for large MTs (which is important for the practicability of verification), we do not consider it in this work because: (a) the syntax-preserving slices constructed by the traditional approach are more intuitive information for debugging the original MT; (b) the construction of an amorphous slice is more difficult, since, to ensure correctness, each altered part has to preserve the semantics of its counterpart.
Few works have adapted the idea of program slicing to localize faults in MTs. Aranega et al. define a framework to record the runtime traces between rules and the target elements these rules generated [START_REF] Aranega | Traceability mechanism for error localization in model transformation[END_REF]. When a target element is generated with an unexpected value, the transformation slices generated from the run-time traces are used for fault localization. While Aranega et al. focus on dynamic slicing, our work focuses on static slicing which does not require test suites to exercise the transformation.
To find the root of unverified contracts, Büttner et al. demonstrate the UML2Alloy tool, which draws on the Alloy model finder to generate counter-examples [START_REF] Büttner | Verification of ATL transformations using transformation models and model finders[END_REF]. However, their tool does not guarantee that a newly generated counter-example gives more information than the previous ones. Oakes et al. statically verify ATL MTs by symbolic execution using DSLTrans [START_REF] Oakes | Fully verifying transformation contracts for declarative ATL[END_REF]. This approach enumerates all the possible states of the ATL transformation. If a rule is the root of a fault, all the states that involve the rule are reported.
Sánchez Cuadrado et al. present a static approach to uncover various typing errors in ATL MTs [START_REF] Cuadrado | Uncovering errors in ATL model transformations using static analysis and constraint solving[END_REF], and use the USE constraint solver to compute an input model as a witness for each error. Compared to their work, we focus on contract errors, and provide the user with sliced MTs and modularized contracts to debug the incorrect MTs.
The most similar approach to ours is the work of Burgueño et al. on syntactically computing the intersection of the constructs used by the rules and by the contracts [START_REF] Burgueño | Static fault localization in model transformations[END_REF]. To our knowledge, our proposal is the first to apply natural deduction combined with program slicing to increase the precision of fault localization in MT. W.r.t. the approach of Burgueño et al., we aim at improving the localization precision by also considering semantic relations between rules and contracts. This allows us to produce smaller slices by semantically eliminating unrelated rules from each scenario. Moreover, we provide debugging clues to help the user better understand why the sliced transformation causes the fault. However, their work considers a larger subset of ATL. We believe that the two approaches complement each other and that integrating them is useful and necessary.
Conclusion and Future Work
In summary, in this work we addressed the fault localization and scalability problems for deductive verification of MT. In terms of the fault localization problem, we developed an automated proof strategy that applies a set of designed natural deduction rules to the input OCL postcondition in order to generate sub-goals. Each unverified sub-goal yields a sliced transformation context and debugging clues to help the transformation developer pinpoint the fault in the input MT. Our evaluation with mutation analysis positively supports the correctness and efficiency of our fault localization approach. The results showed that: (a) faulty constructs are present in the sliced transformation, (b) the deduced clues assist developers in various debugging tasks (e.g. to derive counter-examples), (c) the number of sub-goals that need to be examined to pinpoint a fault is usually small.
In terms of scalability, we lift our slicing approach to postconditions to manage large-scale MTs by aligning each postcondition with the ATL rules it depends on, thereby reducing the verification complexity and time of each individual postcondition. Moreover, we propose and prove a grouping algorithm to identify the postconditions that should be compositionally verified to improve the global verification performance. Our evaluation confirms that our approach improves verification performance by up to 79% as the MT scales up.
Our future work includes addressing the limitations identified during the evaluation (Section 6.3). We also plan to extend our slicing approach to metamodels and preconditions, i.e. slicing away metamodel constraints or preconditions that are irrelevant to each sub-goal. This would allow us to further reduce the size of the problematic transformation scenarios that the users have to debug in faulty MTs.
In addition, we plan to investigate how our decomposition can help us reuse proof efforts. Specifically, due to requirements evolution, the MT and contracts undergo unpredictable changes during development. These changes can invalidate all of the previous proof efforts and cause long proofs to be recomputed. We think that our decomposition into sub-goals would increase the chances of reusing verification results, i.e. of reusing the sub-goals that are not affected by the changes.
Fig. 2. Example of HSM: abstract (top) and concrete graphical syntax (bottom).
Listing 2. Snippet of the HSM2FSM MT in ATL

module HSM2FSM;
create OUT : FSM from IN : HSM;

rule SM2SM {
  from sm1 : HSM!StateMachine
  to   sm2 : FSM!StateMachine ( name <- sm1.name )
}

rule RS2RS {
  from rs1 : HSM!RegularState
  to   rs2 : FSM!RegularState (
         stateMachine <- rs1.stateMachine,
         name <- rs1.name )
}

rule IS2RS {
  from is1 : HSM!InitialState (not is1.compositeState.oclIsUndefined())
  to   rs2 : FSM!RegularState (
         stateMachine <- is1.stateMachine,
         name <- is1.name )
}

-- mapping each transition between two non-composite states
rule T2TA { ... }

-- mapping each transition whose source is a composite state
rule T2TB { ... }

-- mapping each transition whose target is a composite state
rule T2TC {
  from t1 : HSM!Transition, src : HSM!AbstractState,
       trg : HSM!CompositeState, c : HSM!InitialState (
         t1.source = src and t1.target = trg and c.compositeState = trg
         and not src.oclIsTypeOf(HSM!CompositeState))
  to   t2 : FSM!Transition (
         label <- t1.label,
         stateMachine <- t1.stateMachine,
         source <- src,
         target <- c )
}
Fig. 4. Counter-example derived from Listing 3 that falsifies Post1.
Fig. 5. Overview of providing fault localization for VeriATL.
Algorithm (fragment): the proof tree is initialized by applying the introduction rules (intro) to each leaf, then the elimination rules (elimin) are repeatedly applied to the leaves until the set of leaves stops growing.
Algorithm 2. Algorithm for grouping VCs (P, MAX_t, MAX_s): P is sorted by verification time (sort_t), postconditions are greedily accumulated into groups G while the running totals trail_t < MAX_t and trail_s < MAX_s, and generate(G) returns the grouped VCs.
Fig. 7. The evaluation result of the first setting.
Transition inv Post1:
  FSM!Transition.allInstances()->forAll(t | not t.source.oclIsUndefined())

Listing 1. The OCL contracts for HSM and FSM
contract: if states have unique names within any source model, states will have unique names also in the generated target model. In general, there are no restrictions on what kind of correctness conditions could be expressed, as long as they are expressed in the subset of OCL we considered in this work (see language support in Section 6.3 for more details).
Theorem 1 (Rule Irrelevance - Sub-goals). MM, Pre, Exec_sliced, Hypotheses ⊢ Conclusion ⟺ MM, Pre, Exec_{sliced ∪ irrelevant}, Hypotheses ⊢ Conclusion
Table 1. Evaluation metrics for the HSM2FSM case study

ID  | Unveri. Post. | Veri. Time (ms) | Unveri./Total Sub-goals | Max Time (ms) | Avg. Time (ms) | Total Time (ms) | L_faulty ∈ TS
MT2 | #5 | 3116 | 3/4  | 1616 | 1644 | 6464  | True
DB1 | #5 | 2934 | 1/1  | 1546 | 1546 | 1546  | -
MB6 | #4 | 3239 | 1/12 | 1764 | 2550 | 21168 | True
AF2 | #4 | 3409 | 2/12 | 1793 | 2552 | 21516 | True
MF6 | #2 | 3779 | 0/6  | 1777 | 2093 | 10662 | N/A
MF6 | #4 | 3790 | 1/12 | 1774 | 2549 | 21288 | True
DR1 | #1 | 2161 | 3/6  | 1547 | 1589 | 9282  | -
DR1 | #2 | 2230 | 3/6  | 1642 | 1780 | 9852  | -
AR  | #1 | 3890 | 1/8  | 1612 | 1812 | 12896 | True
AR  | #3 | 4057 | 6/16 | 1769 | 1920 | 28304 | True
Table 2. The generalization evaluation of the first setting

ID | Time_ORG | Time_SLICE | Time Gained
UMLCopier | 9047 | 2065 | 77%
UML2Accessors | 9094 | 2610 | 71%
UML2MIDlet | 9084 | 2755 | 70%
UML2Profiles | 9047 | 2118 | 77%
UML2Observer | 9084 | 2755 | 70%
UML2Singleton | 9094 | 2610 | 71%
UML2AsyncMethods | 9084 | 2755 | 70%
UML2SWTApplication | 9084 | 2755 | 70%
UML2Java | 9076 | 2923 | 68%
UML2Applet | 9094 | 2610 | 71%
UML2DataTypes | 9014 | 2581 | 71%
UML2JavaObserver | 9084 | 2755 | 70%
UML2AbstractFactory | 9094 | 2610 | 71%
Average | 9078 | 2653 | 71%
Table 3. The evaluation result of the second setting

Max_t | Max_s | GR | Succ. Rate | SR | TS
3  | 10 | 8%  | 100% | 48%   | 16
4  | 13 | 22% | 100% | 49%   | 51
5  | 15 | 44% | 100% | 47%   | 108
6  | 18 | 50% | 100% | 51%   | 134
7  | 20 | 56% | 93%  | 23%   | 73
8  | 23 | 62% | 81%  | 11%   | 41
9  | 25 | 64% | 72%  | -108% | -400
10 | 28 | 62% | 55%  | -158% | -565
11 | 30 | 64% | 31%  | -213% | -789
12 | 33 | 68% | 0%   | -212% | -1119
13 | 35 | 68% | 18%  | -274% | -1445
14 | 38 | 72% | 17%  | -433% | -3211
15 | 40 | 72% | 0%   | -438% | -3251
16 | 43 | 76% | 0%   | -547% | -4400
17 | 45 | 76% | 0%   | -620% | -4988
We name the initial states in the concrete syntax of HSM and FSM models for readability.
Our HSM2FSM transformation is adapted from [START_REF] Büttner | On verifying ATL transformations using 'off-the-shelf' SMT solvers[END_REF]. The full version can be accessed at: https://goo.gl/MbwiJC.
In practice, we fill in the trace function by examining the output element types of each ATL rule, i.e. the to section of each rule.
In fact, the value of exp is assigned to x.a because of resolution failure. This causes a type mismatch exception and results in the value of x.a becoming undefined (we consider ATL transformations in non-refinement mode where the source and target metamodels are different).
The ATL transformations zoo. http://www.eclipse.org/atl/atlTransformations/
On scalability of deductive verification for ATL MTs (Online). https://github.com/veriatl/VeriATL/tree/Scalability
01763422 | en | [
"math.math-oc",
"info.info-ro"
] | 2024/03/05 22:32:13 | 2020 | https://hal.science/hal-01763422/file/2017_LegrainOmerRosat_DynamicNurseRostering.pdf | Antoine Legrain
email: antoine.legrain@polymtl.ca
Jérémy Omer
email: jeremy.omer@insa-rennes.fr
Samuel Rosat
email: samuel.rosat@polymtl.ca
An Online Stochastic Algorithm for a Dynamic Nurse Scheduling Problem
Keywords: Stochastic Programming, Nurse Rostering, Dynamic problem, Sample Average Approximation, Primal-dual Algorithm, Scheduling
Introduction
In western countries, hospitals are facing a major shortage of nurses that is mainly due to the overall aging of the population. In the United Kingdom, nurses went on strike for the first time in history in May 2017. Nagesh [START_REF] Nagesh | Nurses could go on strike for the first time in british history[END_REF] says that "It's a message to all parties that the crisis in nursing recruitment must be put center stage in this election". In the United States, "Inadequate staffing is a nationwide problem, and with the exception of California, not a single state sets a minimum standard for hospital-wide nurse-to-patient ratios." [START_REF] Robbins | We need more nurses[END_REF]. In this context, the attrition rate of nurses is extremely high, and hospitals are now desperate to retain them. Furthermore, nurses tend to change positions often, because of the tough work conditions and because newly hired nurses are often awarded undesired schedules (mostly due to seniority-based priority in collective agreements). Consequently, providing high-quality schedules for all the nurses is a major challenge for hospitals, which are also bound to provide expected levels of service.
The nurse scheduling problem (NSP) has been widely studied for more than two decades (refer to [START_REF] Burke | The state of the art of nurse rostering[END_REF] for a literature review). The NSP aims at building a schedule for a set of nurses over a certain period of time (typically two weeks or one month) while ensuring a certain level of service and respecting collective agreements. However, in practice, nurses often know their wishes of days-off no more than one week ahead of time. Managers therefore often update already-computed monthly schedules to maximize the number of granted wishes. If they were able to compute the schedules on a weekly basis while ensuring the respect of monthly constraints (e.g., individual monthly workload), the wishes could be taken into account when building the schedules. It would increase the number of wishes awarded, improve the quality of the schedules proposed to the nurses, and thus augment the retention rate.
The version of the NSP that we tackle here is that of the second International Nurse Rostering Competition of 2015 (INRC-II) [START_REF] Ceschia | The second international nurse rostering competition[END_REF], where it is stated in a dynamic fashion. The problem features a wide variety of constraints that are close to the ones faced by nursing services in most hospitals. In this paper, we present the work that we submitted to the competition and which was awarded second prize.
Literature review
Dynamic problems are solved iteratively without comprehensive knowledge of the future. At each stage, new information is revealed and one needs to compute a solution based on the solutions of the previous stages that are irrevocably fixed. The optimal solution of the problem is the same as that of its static (i.e., offline) counterpart, where all the information is known beforehand, and the challenge is to approach this solution although information is revealed dynamically (i.e., online).
Four main techniques have been developed to do this: computing an offline policy (Markov decision processes [START_REF] Puterman | Markov Decision Processes: Discrete Stochastic Dynamic Programming[END_REF] are mainly used), following a simple online policy (Online optimization [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] studies these algorithms), optimizing the current and future decisions (Stochastic optimization [START_REF] Birge | Introduction to Stochastic Programming[END_REF] handles the remaining uncertainty), or reoptimizing the system at each stage (Online stochastic optimization [START_REF] Van Hentenryck | Online Stochastic Combinatorial Optimization[END_REF] provides a general framework for designing these algorithms).
Markov decision processes decompose the problem into two different sets (states and actions) and two functions (transition and reward). A static policy is pre-computed for each state and used dynamically at each stage depending on the current state. Such techniques are overwhelmed by the combinatorial explosion of problems such as the NSP, and approximate dynamic programming [START_REF] Powell | Approximate Dynamic Programming: Solving the Curses of Dimensionality[END_REF] provides ways to deal with the exponential growth of the size of the state space. This technique has been successfully applied to financial optimization [START_REF] Bäuerle | Markov Decision Processes with Applications to Finance[END_REF], booking [START_REF] Patrick | Dynamic multipriority patient scheduling for a diagnostic resource[END_REF], and routing [START_REF] Novoa | An approximate dynamic programming approach for the vehicle routing problem with stochastic demands[END_REF] problems. In Markov decision processes, most computations are performed before the stage solution process; this technique therefore relies essentially on the probability model that infers the future events.
Online algorithms aim at solving problems where decisions are made in real-time, such as online advertisement, revenue management or online routing. As nearly no computation time is available, researchers have studied these algorithms to ensure a worst case or expected bound on the final solution compared to the static optimal one. For instance, Buchbinder [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] designs a primal-dual algorithm for a wide range of problems such as set covering, routing, and resource allocation problems, and provides a competitive-ratio (i.e., a bound on the worst-case scenario) for each of these applications. Although these techniques can solve very large instances, they cannot solve rich scheduling problems as they do not provide the tools for handling complex constraints.
Stochastic optimization [START_REF] Birge | Introduction to Stochastic Programming[END_REF] tackles various optimization problems from the scheduling of operating rooms [START_REF] Denton | Optimal allocation of surgery blocks to operating rooms under uncertainty[END_REF] to the optimization of electricity production [START_REF] Fleten | Short-term hydropower production planning by stochastic programming[END_REF]. This field studies the minimization of a statistical function (e.g., the expected value), assuming that the probability distribution of the uncertain data is given. This framework typically handles multi-stage problems with recourse, where first-level decisions must be taken right away and recourse actions can be executed when uncertain data is revealed. The value of the recourse function is often approximated with cuts that are dynamically computed from the dual solutions of some subproblems obtained with Benders' decomposition. However these Benders-based decomposition methods converge slowly for combinatorial problems. Namely, the dual solutions do not always provide the needed information and the solution process therefore may require more computational time than is available. To overcome this difficulty, one can use the sample average approximation (SAA) [START_REF] Kleywegt | The sample average approximation method for stochastic discrete optimization[END_REF] to approximate the uncertainty (using a small set of sample scenarios) during the solution and also to evaluate the solution (using a larger number of scenarios).
Finally, online stochastic optimization [START_REF] Van Hentenryck | Online Stochastic Combinatorial Optimization[END_REF] is a framework oriented towards the solution of industrial problems. The idea is to decompose the solution process in three steps: sampling scenarios of the future, solving each one of them, and finally computing the decisions of the current stage based on the solution of each scenario. Such techniques have been successfully applied to solve large scale problems as on-demand transportation system design [START_REF] Bent | Scenario-based planning for partially dynamic vehicle routing with stochastic customers[END_REF] or online scheduling of radiotherapy centers [START_REF] Legrain | Online stochastic optimization of radiotherapy patient scheduling[END_REF]. Their main strength is that any algorithm can be used to solve the scenarios.
Contributions
The INRC-II challenges the candidates to compute a weekly schedule in a very limited computational time (less than 5 minutes), with a wide variety of rich constraints, and with important correlations between the stages. Due to the complexity of this dynamic NSP, none of the tools presented in the literature review can solve this problem directly. We therefore introduce an online stochastic algorithm that draws inspiration from primal-dual algorithms and the SAA. In that method,
• the online stochastic algorithm offers a framework to solve rich combinatorial problems;
• the primal-dual algorithm speeds up the solution by inferring quickly the impact of some decisions;
• the SAA efficiently handles the important correlations between weeks without tremendously increasing the computational time.
Finally, the algorithm uses a free and open-source software as a subroutine to solve static versions of the NSP. It is described in details in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] and summarized in Section 3.
We emphasize that the algorithm described in this article has been developed in a time-constrained environment, thus forcing the authors to balance their efforts between the different modules of the software.
The resulting code is shared in a public Git repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF] for reproduction of the results, future comparisons, improvements and extensions. The remainder of the article is organized as follows. In Section 2, we give a detailed description of the NSP as well as the dynamic features of the competition. In Section 3, we state a static formulation and summarize the algorithm that we use to solve it. In Section 4, we present the dynamic formulation of the NSP, the design of the algorithm, and the articulation of its components.
In Section 5, we give some details on the implementation of the algorithm, study the performance of our method on the instances of the competition, and compare them to those obtained by the other finalist teams.
Our concluding remarks appear in Section 6.
The Nurse Scheduling Problem
The formulation of the NSP that we consider is the one proposed by Ceschia et al. [START_REF] Ceschia | The second international nurse rostering competition[END_REF] in the INRC-II, and the description that we recall here is similar to theirs. First, we describe the constraints and the objective of the scheduling problem. Then, we discuss the challenges brought in by the uncertainty over future stages.
The NSP aims at computing the schedule of a group of nurses over a given horizon while respecting a set of soft and hard constraints. The soft constraints may be violated at the expense of a penalty in the objective, whereas hard constraints cannot be violated in a feasible solution. The dynamic version of the problem considers that the planning horizon is divided into one-week-long stages and that the demand for nurses at each stage is known only after the solution of the previous stage is computed. The solution of each stage must therefore be computed without knowledge of the future demand.
The schedule of a nurse is decomposed into work and rest periods and the complete schedules of all the nurses must satisfy the set of constraints presented in Table 1. Each nurse can perform different skills (e.g., Head Nurse, Nurse) and each day is divided into shifts (e.g., Day, Night). Furthermore, each nurse has signed a contract with their employers that determines their work status (e.g., Full-time, Part-time) and work agreements regulate the number of days and weekends worked within a month as well as the minimum and maximum duration of work and rest periods. For the sake of nurses' health and personal life and to ensure a sufficient level of awareness, some successions of shifts are forbidden. For instance, a night shift cannot be followed by a day shift without being separated by at least one resting day. The employers also need to ensure a certain quality of service by scheduling a minimum number of nurses with the right skills for each shift and day. Finally, the length of the schedules (i.e., the planning horizon) can be four or eight weeks.
Hard constraints
H1 Single assignment per day: A nurse can be assigned at most one shift per day.
H2 Under-staffing: The number of nurses performing a skill on a shift must be at least equal to the minimum demand for this shift.
H3 Shift type successions: A nurse cannot work certain successions of shifts on two consecutive days.
H4 Missing required skill: A nurse can only cover the demand of a skill that he/she can perform.
Soft constraints
S1 Insufficient staffing for optimal coverage: The number of nurses performing a skill on a shift must be at least equal to an optimal demand. Each missing nurse is penalized according to a unit weight but extra nurses above the optimal value are not considered in the cost.
S2 Consecutive assignments: For each nurse, the number of consecutive assignments should be within a certain range and the number of consecutive assignments to the same shift should also be within another certain range. Each extra or missing assignment is penalized by a unit weight.
S3 Consecutive resting days: For each nurse, the number of consecutive resting days should be within a certain range. Each extra or missing resting day is penalized by a unit weight.
S4 Preferences: Each assignment of a nurse to an undesired shift is penalized by a unit weight.
S5 Complete week-end: A given subset of nurses must work both days of the week-end or none of them. If one of them works only one of the two days Saturday or Sunday, it is penalized by a unit weight.
S6 Total assignments: For each nurse, the total number of assignments (worked days) scheduled in the planning horizon must be within a given range. Each extra or missing assignment is penalized by a unit weight.
S7 Total working week-ends: For each nurse, the number of week-ends with at least one assignment must be less than or equal to a given limit. Each worked weekend over that limit is penalized by a unit weight.

The hard constraints (Table 1, H1-H4) are typical for workforce scheduling problems: each worker is assigned an assignment or day-off every day, the demand in terms of number of employees is fulfilled, particular shift successions are forbidden, and a minimum level of qualification of the workers is guaranteed.
Soft constraints S1-S7 translate into a cost function that enhances the quality of service and helps retain the nurses within the unit. The quality of the schedules (alternation of work and rest periods, numbers of worked days and weekends, respect of nurses' preferences) is indeed paramount in order to retain the most qualified employees. These specificities make the NSP one of the most difficult workforce scheduling problems in the literature, because a personalized roster must be computed for each nurse. The fact that most constraints are soft eases the search for a feasible solution but makes the pursuit of optimality more difficult.
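To make the penalty structure more concrete, the short sketch below scores a single nurse's week against simplified versions of S2-S5. The unit weights, shift names and bounds are illustrative assumptions (not the official INRC-II values), and the border effects with the neighbouring weeks are deliberately ignored.

# Illustrative scoring of one nurse's week against simplified S2-S5.
# WEIGHTS and the consecutive-work/rest bounds are hypothetical placeholders.
WEIGHTS = {"S2": 15, "S3": 30, "S4": 10, "S5": 30}

def runs(week, predicate):
    """Lengths of maximal runs of days satisfying `predicate` (week: 7 shifts or None)."""
    lengths, count = [], 0
    for day in week:
        if predicate(day):
            count += 1
        elif count:
            lengths.append(count)
            count = 0
    if count:
        lengths.append(count)
    return lengths

def week_penalty(week, undesired, cons_work=(2, 5), cons_rest=(2, 3)):
    """week: 7 entries, a shift name or None (rest); undesired: set of (day, shift)."""
    cost = 0
    # S2: each missing/extra day in a work stretch outside [min, max]
    for length in runs(week, lambda d: d is not None):
        cost += WEIGHTS["S2"] * max(cons_work[0] - length, length - cons_work[1], 0)
    # S3: same principle for rest stretches
    for length in runs(week, lambda d: d is None):
        cost += WEIGHTS["S3"] * max(cons_rest[0] - length, length - cons_rest[1], 0)
    # S4: undesired assignments
    cost += WEIGHTS["S4"] * sum(1 for k, s in enumerate(week) if s and (k, s) in undesired)
    # S5: the week-end (days 5 and 6) must be fully worked or fully rested
    if (week[5] is None) != (week[6] is None):
        cost += WEIGHTS["S5"]
    return cost

if __name__ == "__main__":
    week = ["Day", "Day", None, "Night", "Night", "Day", None]
    print(week_penalty(week, undesired={(3, "Night")}))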
The goal of the dynamic NSP is to sequentially build weekly schedules so as to minimize the total cost of the aggregated schedule and ensure feasibility over the complete planning horizon. The main difficulty is to reach a feasible (i.e., managing the global hard constraint H3) and near-optimal (i.e., managing the global soft constraints S6-S7 as well as the consecutive constraints S2-S3) schedule without knowing the future demands and nurses' preferences. Indeed, the hard constraints H1, H2, and H4 handle local features that do not impact the following days. Each of these constraints concerns either one single day (i.e., one assignment per day, H1) or one single shift (i.e., the demand for a shift, H2, and the requirement that a nurse must possess a required skill, H4). In the same way, soft constraints S1 and S4-S5 are included in the objective with local costs that depend on one shift, day or weekend. To summarize, the proposed algorithm must simultaneously handle global requirements and border effects between weeks that are induced by the dynamic process. These effects are propagated to the following week/stage through the initial state or the number of worked days and weekends in the current stage.
The static nurse scheduling problem
We describe here the algorithm introduced in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] to solve the static version of the NSP. This description is important for the purpose of this paper since parts of the dynamic method described in the subsequent sections make use of certain of its specificities. This method solves the NSP with a branch-and-price algorithm [START_REF] Desaulniers | Column generation[END_REF]. The main idea is to generate a roster for each nurse, i.e., a sequence of work and rest periods covering the planning horizon. Each individual roster satisfies constraints H1, H3 and H4, and the rosters of all the nurses satisfy H2. A rotation is a list of shifts from the roster that are performed on consecutive days, and preceded and followed by a resting day; it does not contain any information about the skills performed on its shifts. A rotation is called feasible (or legal) if it respects the single assignment and succession constraints H1 and H3. A roster is therefore a sequence of rotations, separated by nonempty rest periods, to which skills are added (see Example 1). The MIP described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF] is based on the enumeration of possible rotations by column generation. As in most column-generation algorithms, a restricted master problem is solved to find the best fractional roster using a small set of rotations, and subproblems output rotations that could be added to improve the current solution or prove optimality. These subproblems are modeled as shortest path problems with resource constraints whose underlying networks are described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. To obtain an integer solution, this process is embedded within a branch-and-bound scheme. The remainder of the section focuses on the master problem. For the sake of clarity, we assume that, for every nurse, the set of all legal rotations is available, which conceals the role of the subproblem. It is also worth mentioning that the software is based only on open-source libraries from the COIN-OR project (BCP framework for branch-and-cut-and-price and the linear solver CLP), and is thus both free and open-source.
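As a rough illustration of this column-generation loop (and not of the actual implementation described above), the following sketch shows how a restricted master problem and a pricing subproblem could interact. Both routines are stubs standing in for the linear solver and for the resource-constrained shortest-path pricing, so the numbers are purely illustrative.

# Schematic column-generation loop: solve the restricted master, price a new
# rotation, add it if its reduced cost is negative, and stop otherwise.
def solve_restricted_master(rotations):
    """Stub for the restricted master LP: returns (objective, dual prices per (day, shift))."""
    duals = {(k, s): 1.0 for k in range(7) for s in ("Day", "Night")}  # fake duals
    objective = sum(cost for _, cost in rotations)
    return objective, duals

def price_rotation(duals):
    """Stub for the pricing subproblem: returns (rotation, reduced_cost) or None."""
    rotation = [(0, "Day"), (1, "Day")]            # a legal work stretch (H1, H3 assumed checked)
    reduced_cost = 5.0 - sum(duals[a] for a in rotation)
    return (rotation, reduced_cost) if reduced_cost < -1e-6 else None

def column_generation(initial_rotations, max_iters=50):
    rotations = list(initial_rotations)
    objective = float("inf")
    for _ in range(max_iters):
        objective, duals = solve_restricted_master(rotations)
        priced = price_rotation(duals)
        if priced is None:                          # no negative reduced cost: LP optimum reached
            return objective, rotations
        rotation, _ = priced
        rotations.append((rotation, 5.0))           # add the new column and iterate
    return objective, rotations

if __name__ == "__main__":
    print(column_generation([([(2, "Night")], 3.0)]))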
Example 1. Consider the following single-week roster:
We consider a set N of nurses over a planning horizon of M weeks (or K = 7M days). The sets of all shifts and skills are respectively denoted as S and Σ. The nurse's type corresponds to the set of skills he or she can use. For instance, most head nurses can fill Head Nurse demand, but they can also fill Nurse demand in most cases. All nurses of type t ∈ T (e.g., nurse or head nurse) are gathered within the subset N_t. For the sake of readability, indices are standardized in the following way: nurses are denoted as i ∈ N, weeks as m ∈ {1 . . . M}, days as k ∈ {1 . . . K}, shifts as s ∈ S and skills as σ ∈ Σ. We use (k, s) to denote the shift s of day k. All other data is summarized in Table 2.
Table 2 (fragment). Nurses: L^-_i, L^+_i - minimum/maximum total number of assignments of nurse i over the planning horizon.
Remark (Initial state). Obviously, if CR^0_i > 0, then CD^0_i = CS^0_i = 0, and vice versa, because the nurse was either working or resting on the last day before the planning horizon. Moreover, s^0_i only matters if the nurse was working on that day. The total numbers of worked days and worked week-ends of a nurse are set to zero at the beginning of the planning horizon.
The master problem described in Formulation (1) assigns a set of rotations to each nurse while ensuring at the same time that the rotations are compatible and the demand is filled. The cost function is shaped by the penalties of the soft constraints as no other cost is taken into account in the problem proposed by the competition. For any soft constraint SX, its associated unit weight in the objective function is denoted as c X .
Let R_i be the set of all feasible rotations for nurse i. Rotation j of nurse i has a cost c_ij (i.e., the sum of the soft penalties S2, S4 and S5) and is described by the following parameters: a^{sk}_{ij} and a^k_{ij}, which indicate whether the rotation assigns nurse i to shift s of day k and whether nurse i works on day k, respectively; b^m_{ij}, which indicates whether at least one day of the week-end of week m is worked; and f^-_{ij} and f^+_{ij}, the first and last days of the rotation.
\min \sum_{i\in\mathcal{N}} \Bigg[ \underbrace{\sum_{j\in R_i} c_{ij} x_{ij}}_{S2,S4,S5} + \underbrace{\sum_{k=1}^{CR_i^+} \Big( c_3\, r_{ik} + \sum_{l=k+1}^{\min(K+1,\,k+CR_i^+)} c_3^{ikl}\, r_{ikl} \Big)}_{S3} + \underbrace{c_6 (w_i^+ + w_i^-)}_{S6} + \underbrace{c_7 v_i}_{S7} \Bigg] + \underbrace{c_1 \sum_{k=1}^{K} \sum_{s\in S} \sum_{\sigma\in\Sigma} z_\sigma^{sk}}_{S1}   (1a)
subject to:
[H1, H3]: \sum_{l=k+1}^{\min(K+1,\,k+CR_i^+)} r_{ikl} - \sum_{j\in R_i:\, f_{ij}^+ = k-1} x_{ij} = 0, \quad \forall i \in \mathcal{N},\ \forall k = 2 \ldots K   (1b)

[H1, H3]: r_{ik} - r_{i(k-1)} + \sum_{j\in R_i:\, f_{ij}^- = k} x_{ij} - \sum_{l=\max(1,\,k-CR_i^+)}^{k-1} r_{ilk} = 0, \quad \forall i \in \mathcal{N},\ \forall k = 2 \ldots K   (1c)

[H1, H3]: \sum_{l=\max(1,\,K+1-CR_i^+)}^{K} r_{ilK} + r_{iK} + \sum_{j:\, f_{ij}^+ = K} x_{ij} = 1, \quad \forall i \in \mathcal{N}   (1d)

[S6]: \sum_{j\in R_i} \sum_{k=1}^{K} a_{ij}^k x_{ij} + w_i^- \ge L_i^-, \quad \forall i \in \mathcal{N}   (1e)

[S6]: \sum_{j\in R_i} \sum_{k=1}^{K} a_{ij}^k x_{ij} - w_i^+ \le L_i^+, \quad \forall i \in \mathcal{N}   (1f)

[S7]: \sum_{j\in R_i} \sum_{m=1}^{M} b_{ij}^m x_{ij} - v_i \le B_i, \quad \forall i \in \mathcal{N}   (1g)

[H2]: \sum_{t\in T_\sigma} n_{t\sigma}^{sk} \ge D_\sigma^{sk}, \quad \forall s \in S,\ k \in \{1 \ldots K\},\ \sigma \in \Sigma   (1h)

[S1]: \sum_{t\in T_\sigma} n_{t\sigma}^{sk} + z_\sigma^{sk} \ge O_\sigma^{sk}, \quad \forall s \in S,\ k \in \{1 \ldots K\},\ \sigma \in \Sigma   (1i)

[H4]: \sum_{i\in \mathcal{N}_t,\, j} a_{ij}^{sk} x_{ij} - \sum_{\sigma\in\Sigma_t} n_{t\sigma}^{sk} = 0, \quad \forall s \in S,\ k \in \{1 \ldots K\},\ t \in \mathcal{T}   (1j)

x_{ij} \in \mathbb{N}, \quad z_\sigma^{sk}, n_{t\sigma}^{sk} \in \mathbb{R}, \quad \forall i \in \mathcal{N},\ j \in R_i,\ s \in S,\ k \in \{1 \ldots K\},\ t \in \mathcal{T},\ \sigma \in \Sigma   (1k)

r_{ikl}, r_{ik}, w_i^+, w_i^-, v_i \ge 0, \quad \forall i \in \mathcal{N},\ k \in \{1 \ldots K\},\ l = k+1 \ldots \min(K+1,\, k+CR_i^+)   (1l)
where Σ_t is the set of skills mastered by a nurse of type t (e.g., head nurses have the skills Head Nurse and Nurse), and T_σ is the set of nurse types that master skill σ (e.g., the Head Nurse skill can only be provided by head nurses).
The objective function (1a) is composed of five parts: the cost of the chosen rotations in terms of consecutive assignments and preferences (S2, S4, S5), the violations of the minimum and maximum numbers of consecutive resting days (S3), the violation of the total number of working days (S6), the violation of the total number of worked week-ends (S7), and the insufficient staffing for optimal coverage (S1). Constraints (1b)-(1d) are the flow constraints of the rostering graph (presented in Figure 1) of each nurse i ∈ N. Constraints (1e) and (1f) measure the distance between the number of worked days and the authorized number of assignments: variable w^-_i counts the number of missing assignments when the minimum number of assignments, L^-_i, is not reached, and w^+_i is the number of assignments over the maximum allowed when the total number of assignments exceeds L^+_i. Constraints (1g) measure the number of worked weekends exceeding the maximum B_i. Constraints (1h) ensure that enough nurses with the right skill are scheduled on each shift to meet the minimal demand. Constraints (1i) measure the number of missing nurses to reach the optimal demand. Constraints (1j) ensure a valid allocation of the skills among nurses of the same type for each shift. Constraints (1k) and (1l) ensure the integrality and the nonnegativity of the decision variables.
A valid sequence of rotations and rest periods can also be represented in a rostering graph whose arcs correspond to rotations and rest periods and whose vertices correspond to the starting days of these rotations and rest periods. Figure 1 shows an illustration of a rostering graph for some nurse i, and highlights the border effects. Nurse i has been resting for one day in her/his initial state, so the binary variable r_{i14} has a cost c_3 instead of zero, but the binary variable r_{i67} has a zero cost, because nurse i could continue to rest on the first days of the following week. If variable r_{i67} is set to one, nurse i will then start the following week with one resting day in her/his initial state. Finally, if nurse i was working in her/his initial state, the penalties associated with this border effect would be included in the cost of either the first rotation if the nurse continues to work, or the first resting arcs r_{i1k} if the nurse starts to rest.
Figure 1. Rostering graph of nurse i over one week (nodes R_{i1}, . . . , R_{i7} and W_{i1}, . . . , W_{i7}).
Handling the uncertain demand
This section concentrates on the dynamic model used for the NSP, and on the design of an efficient algorithm to compute near-optimal schedules in a very limited amount of computational time. We propose a dynamic math-heuristic based on a primal-dual algorithm [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] and embedded into a SAA [START_REF] Kleywegt | The sample average approximation method for stochastic discrete optimization[END_REF]. As previously stated, the dynamic algorithm should focus on the global constraints (i.e., H3, S6, and S7) to reach a feasible and near-optimal global solution.
The dynamic NSP
For the sake of clarity and because we want to focus on border effects, we introduce another model for the NSP, equivalent to Formulation (1). In this new formulation, weekly decisions and individual constraints are aggregated and border conditions are highlighted. The resulting weekly Formulation (2) clusters together all individual local constraints in a weekly schedule j for each week and enumerates all possible schedules.
The constraints of that model describe border effects. Although this formulation is not solved in practice, it is better-suited to lay out our online stochastic algorithm.
Binary variable y^m_j takes value 1 if schedule j is chosen for week m, and 0 otherwise. As for rotations, a global weekly schedule j ∈ R is described by a weekly cost c_j and by parameters a_ij and b_ij that respectively count the number of days and weekends worked by nurse i. The variables w^+_i, w^-_i, and v_i are defined as in Formulation (1).
\min \underbrace{\sum_{j\in\mathcal{R}} \sum_{m=1}^{M} c_j y_j^m}_{S1-S5} + \underbrace{c_6 \sum_{i\in\mathcal{N}} (w_i^+ + w_i^-)}_{S6} + \underbrace{c_7 \sum_{i\in\mathcal{N}} v_i}_{S7}   (2a)

subject to:

[H1-H4, S1-S5]: \sum_{j\in\mathcal{R}} y_j^m = 1, \quad \forall m \in \{1 \ldots M\} \quad [\alpha^m]   (2b)

[H3, S2, S3]: \sum_{j'\in C_j} y_{j'}^{m+1} \ge y_j^m, \quad \forall j \in \mathcal{R},\ m = 1 \ldots M-1 \quad [\delta_j^m]   (2c)

[S6]: \sum_{j\in\mathcal{R}} \sum_{m=1}^{M} a_{ij} y_j^m + w_i^- \ge L_i^-, \quad \forall i \in \mathcal{N} \quad [\beta_i^-]   (2d)

[S6]: \sum_{j\in\mathcal{R}} \sum_{m=1}^{M} a_{ij} y_j^m - w_i^+ \le L_i^+, \quad \forall i \in \mathcal{N} \quad [\beta_i^+]   (2e)

[S7]: \sum_{j\in\mathcal{R}} \sum_{m=1}^{M} b_{ij} y_j^m - v_i \le B_i, \quad \forall i \in \mathcal{N} \quad [\gamma_i]   (2f)

y_j^m \in \{0,1\}, \quad \forall j \in \mathcal{R},\ m \in \{1 \ldots M\}   (2g)

w_i^+, w_i^-, v_i \ge 0, \quad \forall i \in \mathcal{N}   (2h)
The objective (2a) is decomposed into the weekly cost of the schedule and global penalties. Constraints (2b) ensure that exactly one schedule is chosen for each week. Constraints (2c) hide the succession constraints by summarizing them into a filtering constraint between consecutive schedules. These constraints simplify the resulting formulation, but will not be used in practice as their number is not tractable (see below). Constraints (2d)-(2f) measure the penalties associated with the number of worked days and weekends.
Constraints (2g)-(2h) are respectively integrality and nonnegativity constraints. The Greek letters indicated between brackets (α, β, δ and γ) denote the dual variables associated with these constraints.
Constraints (2c) model the sequential aspect of the problem. This formulation is indeed solved stage by stage in practice, and thus the solution of stage m is fixed when solving stage m+1. Therefore, when computing the schedule of stage m+1, the binary variables y^m_j all take value zero except one of them, denoted as y^m_{j_m}, which corresponds to the chosen schedule for week m and takes value 1. All constraints (2c) corresponding to y^m_j = 0 can be removed, and only one is kept: \sum_{j' \in C_{j_m}} y^{m+1}_{j'} \ge 1, where C_{j_m} is the set of all schedules compatible with j_m, i.e., those that are feasible and correctly priced when schedule j_m is used for setting the initial state of stage m+1. Constraints (2c) can thus be seen as filtering constraints that hide the difficulties associated with the border effects induced by constraints H3, S2, and S3.
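As an illustration of how such a filter could be implemented, the sketch below keeps only the candidate week-(m+1) schedules whose first shifts are compatible with the last shifts of the chosen schedule j_m. The forbidden-successions table is a hypothetical example, and the soft border costs (S2, S3) are not modeled here.

# Minimal sketch of the border check behind the filtering constraint (2c):
# a candidate is kept only if, for every nurse, the succession from the last
# shift of week m to the first shift of week m+1 is allowed (H3).
FORBIDDEN_SUCCESSIONS = {("Night", "Early"), ("Night", "Day"), ("Late", "Early")}  # assumed table

def compatible(last_shift_prev_week, first_shift_next_week):
    """Both arguments are shift names or None (rest day)."""
    if last_shift_prev_week is None or first_shift_next_week is None:
        return True  # a rest day on either side breaks the succession
    return (last_shift_prev_week, first_shift_next_week) not in FORBIDDEN_SUCCESSIONS

def filter_candidates(prev_schedule, candidates):
    """prev_schedule, candidates: dict nurse -> weekly list of shifts/None."""
    kept = []
    for cand in candidates:
        if all(compatible(prev_schedule[i][-1], cand[i][0]) for i in prev_schedule):
            kept.append(cand)
    return kept

if __name__ == "__main__":
    prev = {"n1": [None, "Day", "Day", "Night", "Night", "Night", "Night"]}
    cands = [{"n1": ["Early", None, None, "Day", "Day", "Day", None]},
             {"n1": [None, "Day", "Day", "Day", "Day", None, None]}]
    print(len(filter_candidates(prev, cands)))  # 1: the first candidate violates Night -> Early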
The main challenge of the dynamic NSP is to correctly handle constraints (2c)-(2h) so as to maximize the chance of building a feasible and near-optimal solution at the end of the horizon. Our dynamic procedure for generating and evaluating the computed schedules at each stage is based on the SAA. Algorithm 1 summarizes the whole iterative process over all stages. Each candidate schedule is evaluated before generating a new one: the available amount of time being short, we should not take the risk of generating several schedules without having evaluated them. This generation-evaluation step is repeated until the time limit is reached. Note that the last stage M is solved by an offline algorithm (e.g., the one described in Section 3), because the demand is totally known at this time.

Algorithm 1: Generation and evaluation of the candidate schedules
for each stage m = 1 . . . M-1 do
    Sample a set Ω^m of future demands for the evaluation
    while there is enough computational time do
        Generate a candidate weekly schedule j for stage m
        Initialize the evaluation algorithm with schedule j (i.e., set the initial state)
        for each scenario ω ∈ Ω^m do
            Evaluate schedule j over scenario ω
        end for
        Store the schedule (S^m := S^m ∪ {j}) and its score (e.g., its average evaluation cost)
    end while
    Choose the schedule j_m ∈ S^m with the best score
end for
Compute the best schedule for the last stage M with the given computational time
The two following subsections describe each one of the main steps:
1. The generation of a schedule with an offline procedure that takes into account a rough approximation of the uncertainty; 2. The evaluation of that schedule for a demand scenario that measures the impact on the remaining weeks. (This step also computes an evaluation score of a schedule based on the sampled scenarios.)
Generating a candidate schedule
In a first attempt to generate a schedule, a primal-dual algorithm inspired from [START_REF] Buchbinder | Designing Competitive Online Algorithms via a Primal-Dual Approach[END_REF] is proposed. However, this procedure does not handle all correlations between the weekly schedules (i.e., Constraints (2c)). This primal-dual algorithm is then adapted to better take into account the border effects between weeks and make use of every available insight on the following weeks.
A primal-dual algorithm
Primal-dual algorithms for online optimization aim at building pairs of primal and dual solutions dynamically. At each stage, primal decisions are irrevocably made and the dual solution is updated so as to remain feasible. The current dual solution drives the algorithm to better primal decisions by using those dual values as multipliers in a Lagrangian relaxation. The goal is to obtain a pair of feasible primal and dual solutions that satisfy the complementary slackness property at the end of the process.
We use a similar primal-dual algorithm to solve the online problem associated with Formulation (2). In this dynamic process, we wish to sequentially solve a restriction of Formulation (2) to week m for all stages m ∈ {1, . . . , M} with a view to reaching an optimal solution of the complete formulation. This process raises an issue though: how can constraints (4c)-(4e) be taken into account in a restriction to a single week? To achieve that goal, the primal-dual algorithm uses dual information from stage m to compute the schedule of stage m+1 by solving the following Lagrangian relaxation of Formulation (2):
\min \sum_{j\in\mathcal{R}} \Big[ \underbrace{c_j}_{S1,S2,S3,S4,S5} + \underbrace{\sum_{i\in\mathcal{N}} (\beta_i^+ - \beta_i^-) a_{ij}}_{S6} + \underbrace{\sum_{i\in\mathcal{N}} \gamma_i b_{ij}}_{S7} \Big] y_j^{m+1}   (3a)

s.t.: [H1-H3]: \sum_{j\in C_{j_m}} y_j^{m+1} = 1   (3b)

y_j^{m+1} \in \{0,1\}, \quad \forall j \in \mathcal{R}   (3c)
where β^-_i, β^+_i, γ_i ≥ 0 are multipliers respectively associated with constraints (2d)-(2f), and both constraints (2b) and (2c), which guarantee the feasibility of the weekly schedules, are aggregated under Constraint (3b). More specifically, any new assignment for nurse i is weighted by β^+_i - β^-_i in the objective and each worked week-end costs an additional γ_i. It is thus essential to set these multipliers to values that will drive the computation of weekly schedules towards efficient schedules over the complete horizon. For this, we consider the dual of the linear relaxation of Formulation (2):
\max \sum_{m=1}^{M} \alpha^m + \sum_{i\in\mathcal{N}} (L_i^- \beta_i^- - L_i^+ \beta_i^+ - B_i \gamma_i)   (4a)

s.t.: \alpha^m + \sum_{i\in\mathcal{N}} (a_{ij}\beta_i^- - a_{ij}\beta_i^+ - b_{ij}\gamma_i) - \delta_j^m + \sum_{j'\in C_j^{-1}} \delta_{j'}^{m-1} \le c_j, \quad \forall j \in \mathcal{R},\ \forall m \quad [y_j^m]   (4b)

\beta_i^- \le c_6, \quad \forall i \in \mathcal{N} \quad [w_i^-]   (4c)

\beta_i^+ \le c_6, \quad \forall i \in \mathcal{N} \quad [w_i^+]   (4d)

\gamma_i \le c_7, \quad \forall i \in \mathcal{N} \quad [v_i]   (4e)

\beta_i^+, \beta_i^-, \gamma_i, \delta_j^m \ge 0, \quad \forall j \in \mathcal{R},\ m \in \{1 \ldots M\},\ \forall i \in \mathcal{N}   (4f)
where the set C^{-1}_j contains all the schedules with which schedule j is compatible. Dual variables α^m, δ^m_j, β^-_i, β^+_i and γ_i are respectively associated with Constraints (2b), (2c), (2d), (2e), and (2f), and the variables δ^0_j are set to zero to obtain a unified formulation. The variables in brackets denote the primal variables associated with these dual constraints. At each stage, the primal-dual algorithm sets the values of the multipliers so that they correspond to a feasible and locally optimal dual solution, and uses this solution as the Lagrangian multipliers of Formulation (2). Another point of view is to consider the current primal solution at stage m as a basis of the simplex algorithm for the linear relaxation of Formulation (2). The resolution of stage m+1 then corresponds to the creation of a new basis: Formulation (3) seeks a candidate pivot with a minimum reduced cost according to the associated dual solution.
Not only does the choice of dual variables drive the solution towards dual feasibility, but it also guarantees that the complementarity conditions between the current primal solution at stage m and the dual solution computed for stage m+1 are satisfied. In the computation of a dual solution, the variables α^m and δ^m_j do not need to be explicitly considered, because they will not be used in Formulation (3). What is more, focusing on stage m, the only dual constraints that involve α^m and δ^m_j, namely (4b), can be satisfied for any value of β^-_i, β^+_i and γ_i by setting δ^m_j = 0, ∀j ∈ R, and

\alpha^m = \min_{j\in\mathcal{R}} \Big\{ c_j - \sum_{i\in\mathcal{N}} (a_{ij}\beta_i^- - a_{ij}\beta_i^+ - b_{ij}\gamma_i) \Big\}.
Observe that the expression of the objective function of Formulation (2) ensures that the only schedule variable satisfying y^m_{j_m} > 0 will be such that j_m \in \arg\min_{j\in\mathcal{R}} \{ c_j - \sum_{i} (a_{ij}\beta_i^- - a_{ij}\beta_i^+ - b_{ij}\gamma_i) \}, so complementarity is achieved.
To set the values of β^-_i, β^+_i and γ_i, we first observe that the complementarity conditions are satisfied if

\beta_i^- = c_6 if \sum_{j,m} a_{ij} y_j^m < L_i^-, and 0 otherwise;
\beta_i^+ = c_6 if \sum_{j,m} a_{ij} y_j^m \ge L_i^+, and 0 otherwise;
\gamma_i = c_7 if \sum_{j,m} b_{ij} y_j^m \ge B_i, and 0 otherwise.
Since the history of the nurses is initialized with zero assignments and zero worked week-ends, we initially set β^-_i = c_6 and β^+_i = γ_i = 0 to satisfy complementarity. We then perform linear updates at each stage m, using the characteristics of the schedule j_m chosen for the corresponding week:
\beta_i^- = \max\Big(0,\ \beta_i^- - c_6 \frac{a^m_{i j_m}}{L_i^-}\Big), \quad \beta_i^+ = \min\Big(c_6,\ \beta_i^+ + c_6 \frac{a^m_{i j_m}}{L_i^+}\Big), \quad \gamma_i = \min\Big(c_7,\ \gamma_i + c_7 \frac{b^m_{i j_m}}{B_i}\Big).
These updates do not maintain complementarity at each stage but allow for a more balanced penalization of the number of assignments and worked week-ends. The variations of β^-_i, β^+_i and γ_i ensure that constraints (4b) remain feasible for the previous stage, even though complementarity may be lost. In the online primal-dual literature, the updates are usually non-linear so as to be able to derive a competitive-ratio. However, no competitive-ratio is sought by this approach and linear updates are easier to design. Non-linear updates could be investigated in the future.
Algorithm 2 summarizes the primal-dual algorithm. It estimates the impact of a chosen schedule on the global soft constraints through their dual variables. As it is, it gives mixed results in practice. The reason is that the information obtained through the dual variables does not describe the real problem precisely. At the beginning of the algorithm, the values of the dual variables drive the nurses to work as much as possible. Consequently, the nurses work too much at the beginning and cannot cover all the necessary shifts at the end of the horizon. Furthermore, the expected impact of the filtering constraints (2c) is totally ignored in that version. Namely, the shift type succession constraints H3 imply many feasibility issues at the border between two weeks as Formulation (2) is solved sequentially with this primal-dual algorithm. The following two sections describe how this initial implementation is adapted to cope with these issues.
Algorithm 2: Primal-dual algorithm
\beta_i^- = c_6, \beta_i^+ = \gamma_i = 0, ∀i ∈ N
for each stage m do
    Solve Formulation (3) with a deterministic algorithm
    Update \beta_i^- = \max(0,\ \beta_i^- - c_6\, a^m_{i j_m} / L_i^-), ∀i ∈ N
    Update \beta_i^+ = \min(c_6,\ \beta_i^+ + c_6\, a^m_{i j_m} / L_i^+), ∀i ∈ N
    Update \gamma_i = \min(c_7,\ \gamma_i + c_7\, b^m_{i j_m} / B_i), ∀i ∈ N
end for
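A minimal sketch of these linear updates is given below, assuming illustrative values for the unit weights c_6 and c_7; the schedule of week m is summarized by the number of days and week-ends worked by each nurse.

# Sketch of the multiplier updates of Algorithm 2 (weights are assumptions).
c6, c7 = 20, 30  # assumed unit weights of S6 and S7, not the official values

def init_multipliers(nurses):
    return {i: {"beta_minus": float(c6), "beta_plus": 0.0, "gamma": 0.0} for i in nurses}

def update_multipliers(mult, bounds, chosen):
    """bounds[i] = (L_minus, L_plus, B); chosen[i] = (days_worked, weekends_worked) in week m."""
    for i, (l_minus, l_plus, b_max) in bounds.items():
        a_m, b_m = chosen[i]
        m = mult[i]
        m["beta_minus"] = max(0.0, m["beta_minus"] - c6 * a_m / l_minus)
        m["beta_plus"] = min(float(c6), m["beta_plus"] + c6 * a_m / l_plus)
        m["gamma"] = min(float(c7), m["gamma"] + c7 * b_m / b_max)
    return mult

if __name__ == "__main__":
    mult = init_multipliers(["n1"])
    mult = update_multipliers(mult, {"n1": (15, 20, 4)}, {"n1": (5, 1)})
    print(mult["n1"])  # beta_minus decreases, beta_plus and gamma increase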
Sampling a second week demand for feasibility issues
Preliminary results have shown that Algorithm 2 raises feasibility issues due to constraints H3 on forbidden shift successions between the last day of one week and the first day of the following one. In other words, there should be some way to capture border effects during the computation of a weekly schedule. Instead of solving each stage over one week, we solve Formulation (3) over two weeks and keep only the first week as a solution of the current stage. The compatibility constraints (2c) between stages m and m + 1 are now included in this two-weeks model. In this approach, the data of the first week is available but no data of future stages is available. The demand relative to the next week is thus sampled as described in Section 4.4.
The fact that the schedule is generated for stages m and m + 1 ensures that the restriction to stage m ends with assignments that are at least compatible in this scenario, thus increasing the probability of building a feasible schedule over the complete horizon.
Furthermore, for two different samples of the following week's demand, the two-week version of Formulation (3) should lead to two different solutions for the current week. As a consequence, we can solve the model several times to generate different candidate schedules for stage m. As described in Algorithm 1, we use this property to generate new candidates until the time limit is reached.
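The sketch below illustrates this generation loop under a time budget; solve_two_week_model is a placeholder for the two-week branch-and-price solver and simply returns a fixed dummy schedule.

# Generation loop: while time remains, sample a second-week demand, solve the
# two-week model, and keep the first week as a candidate schedule.
import random
import time

def solve_two_week_model(week_m_demand, sampled_next_week):
    # Placeholder: returns a (first_week_schedule, second_week_schedule) pair.
    return ({"n1": ["Day"] * 5 + [None, None]}, {"n1": [None] * 7})

def generate_candidates(week_m_demand, sample_next_week, budget_seconds=10.0):
    candidates, start = [], time.monotonic()
    while time.monotonic() - start < budget_seconds:
        sampled = sample_next_week()                       # new second-week scenario
        first_week, _ = solve_two_week_model(week_m_demand, sampled)
        candidates.append(first_week)
        if len(candidates) >= 20:                          # keep the illustration finite
            break
    return candidates

if __name__ == "__main__":
    demand = {("Mon", "Day"): 3}
    cands = generate_candidates(demand,
                                lambda: {("Mon", "Day"): 3 + random.randint(-1, 1)},
                                budget_seconds=0.01)
    print(len(cands))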
Global bounds to reduce staff shortages
Preliminary results have also shown that Algorithm 2 creates many staff shortages in the last weeks.
Our intent is thus to bound the number of assignments and worked weekends in the early stages to avoid the later shortages. The naive approach is to resize constraints (2d)-(2f) proportionally to the length of the demand considered in Formulation (3) (i.e., two weeks in our case). However, it can be desirable to allow for important variations in the number of assignments to a given nurse from one week to another, and even from one pair of weeks to another. Stated otherwise, it is not optimal to build a schedule that can only draw one- or two-week-long patterns, as would be the case in less constrained environments. A simple illustration arises by considering the constraints on the maximum number of worked weekends. To comply with these constraints, no nurse should be working every weekend and, because of restricted staff availability, it is unlikely that a nurse is off every weekend. Coupled with the other constraints, this results necessarily in complex and irregular schedules. Consequently, bounding the number of assignments individually would discard valuable schedules.
Instead, we propose to bound the number of assignments and worked weekends for sets of similar nurses in order to both stabilize the total number of worked days within each set and allow irregularities in the individual schedules. We choose to cluster nurses working under the same work contract, because they share the same minimum and maximum bounds on their soft constraints. Hence, for each stage m, we add one set of constraints similar to (2d)-(2f) for each contract. In the constraints associated with contract κ ∈ Γ, the left-hand sides are resized proportionally to the number of nurses with contract κ and to the number of weeks in the demand horizon. Let L^{m-}_κ, L^{m+}_κ, and B^m_κ be respectively the minimum and maximum total numbers of assignments, and the maximum total number of worked weekends, over the two-week demand horizon for the nurses with contract κ. We define these global bounds as:
• L_\kappa^{m-} = \frac{7 \times 2}{7(M-m+1)} \sum_{i:\kappa_i=\kappa} \max\Big(0,\ L_{\kappa_i}^- - \sum_{m'=1}^{m-1} \sum_{j} a_{ij} y_j^{m'}\Big),

• L_\kappa^{m+} = \frac{7 \times 2}{7(M-m+1)} \sum_{i:\kappa_i=\kappa} \max\Big(0,\ L_{\kappa_i}^+ - \sum_{m'=1}^{m-1} \sum_{j} a_{ij} y_j^{m'}\Big),

• B_\kappa^m = \frac{2}{M-m+1} \sum_{i:\kappa_i=\kappa} \max\Big(0,\ B_{\kappa_i} - \sum_{m'=1}^{m-1} \sum_{j} b_{ij} y_j^{m'}\Big),
where κ_i denotes the contract of nurse i.
Finally, the objective (3a) is modified to take into account the new slack variables w^{m-}_κ, w^{m+}_κ, v^m_κ associated with the new soft constraints. The costs of these slack variables are set to make sure that violations of the soft constraints are not penalized more than once for an individual nurse. For instance, instead of counting the full cost c_6 for variable w^{m+}_κ, we compute its cost as (c_6 - \max_{i:κ_i=κ}(β^+_i)). This guarantees that an extra assignment is never penalized with more than c_6 for any individual nurse. The costs of the variables w^{m-}_κ and v^m_κ are modified in the same way for analogous reasons. Formulation (5) summarizes the final model used for the generation of the schedules. We recall that the variables y^m_j now select a schedule j that covers a two-week demand, and that this formulation is in fact solved by a branch-and-price algorithm that selects rotations instead of weekly schedules.
\min \sum_{j\in\mathcal{R}} \Big[ c_j + \sum_{i\in\mathcal{N}} \big( (\beta_i^+ - \beta_i^-) a_{ij} + \gamma_i b_{ij}^m \big) \Big] y_j^m + \sum_{\kappa\in\Gamma} \Big[ \big(c_6 - \max_{i:\kappa_i=\kappa}(\beta_i^-)\big) w_\kappa^{m-} + \big(c_6 - \max_{i:\kappa_i=\kappa}(\beta_i^+)\big) w_\kappa^{m+} + \big(c_7 - \max_{i:\kappa_i=\kappa}(\gamma_i)\big) v_\kappa^m \Big]   (5a)

s.t.:

[H1, H2, H3, H4]: \sum_{j\in\mathcal{R}} y_j^m = 1,   (5b)

[S6]: \sum_{i:\kappa_i=\kappa} \sum_{j\in\mathcal{R}} a_{ij} y_j^m + w_\kappa^{m-} \ge L_\kappa^{m-}, \quad \forall \kappa \in \Gamma   (5c)

[S6]: \sum_{i:\kappa_i=\kappa} \sum_{j\in\mathcal{R}} a_{ij} y_j^m - w_\kappa^{m+} \le L_\kappa^{m+}, \quad \forall \kappa \in \Gamma   (5d)

[S7]: \sum_{i:\kappa_i=\kappa} \sum_{j\in\mathcal{R}} b_{ij} y_j^m - v_\kappa^m \le B_\kappa^m, \quad \forall \kappa \in \Gamma   (5e)

y_j^m \in \{0,1\}, \quad \forall j \in \mathcal{R}   (5f)

w_\kappa^{m-}, w_\kappa^{m+}, v_\kappa^m \ge 0, \quad \forall \kappa \in \Gamma   (5g)
To conclude, Formulation (5) makes it possible to anticipate the impact of a schedule on the future through two mechanisms: the problem is solved over two weeks to diminish the border effects that may lead to infeasibility, and the costs are modified to globally limit the penalties due to constraints S6 and S7. Furthermore, this formulation can generate different schedules for the first week by considering different samples of the second-week demand.
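The following sketch illustrates how the contract-level bounds defined above could be computed from the residual individual allowances; the input dictionaries are illustrative assumptions and the rescaling follows the resizing rule as reconstructed above.

# Residual individual allowances are accumulated per contract and rescaled to
# the two-week generation horizon (share of the remaining weeks/week-ends).
def contract_bounds(m, M, nurses, worked_days, worked_weekends):
    """nurses[i] = (contract, L_minus, L_plus, B); worked_* = totals over weeks 1..m-1."""
    scale_days = 7 * 2 / (7 * (M - m + 1))   # two-week share of the remaining days
    scale_wkds = 2 / (M - m + 1)             # two-week share of the remaining week-ends
    bounds = {}
    for i, (kappa, l_minus, l_plus, b_max) in nurses.items():
        acc = bounds.setdefault(kappa, {"L_minus": 0.0, "L_plus": 0.0, "B": 0.0})
        acc["L_minus"] += scale_days * max(0, l_minus - worked_days[i])
        acc["L_plus"] += scale_days * max(0, l_plus - worked_days[i])
        acc["B"] += scale_wkds * max(0, b_max - worked_weekends[i])
    return bounds

if __name__ == "__main__":
    nurses = {"n1": ("FullTime", 15, 20, 4), "n2": ("FullTime", 15, 20, 4)}
    print(contract_bounds(m=2, M=4, nurses=nurses,
                          worked_days={"n1": 5, "n2": 6},
                          worked_weekends={"n1": 1, "n2": 1}))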
Evaluating candidate schedules
In the spirit of the SAA, the first-week schedules generated by Formulation (5) are evaluated in order to be ranked. The evaluation should measure the expected impact of each schedule on the global solution (i.e., over M weeks). This impact can be measured by solving an NSP several times over different sampled demands for the remaining weeks.
Let Ω^m be the set of scenarios of future demands for weeks m+1, . . . , M, and assume that a schedule j has been computed for week m. To evaluate schedule j, we wish to solve the NSP for each sample of future demand ω ∈ Ω^m by using j to set the initial history of the NSP. Denoting V^m_{jω} the value of the solution, we can infer that the future cost c^m_{jω} of schedule j in scenario ω is equal to c_j + V^m_{jω}: the actual cost of the schedule plus the resulting cost for scenario ω. Then, a score that takes into account all the future costs (c^m_{jω})_{ω∈Ω^m} of a given schedule j is computed. Several functions have been tested and preliminary results have shown that the expected value produced the best results. Finally, the schedule j_m with the best score is retained.
However, computing the value V^m_{jω} raises two main issues. First, the NSP is an integer program for which it can be time-consuming to even find a feasible solution. We thus use the linear relaxation of this problem as an estimation of the future cost. This simplification drastically decreases the computational time, but can still detect feasibility issues at the border between weeks m and m+1. The second issue is that, over a long time horizon, even the linear relaxation of the NSP cannot be solved in a sufficiently small computational time. We thus restrict the evaluation to scenarios of future demands that are at most two weeks long. More specifically, the scenarios are one week long for the penultimate stage (M-1) and two weeks long for the previous stages. We observed that this restriction keeps the solution time short enough while giving a good measure of the impact of the schedule j on the future.
To summarize, the value V^m_{jω} is computed by solving the linear relaxation of Formulation (1) for a two-week demand ω, and the initial state is set by using the schedule j. Finally, the parameters L^-_i, L^+_i, and B_i are proportionally resized over two weeks, as follows.
• L_i^{(m+1)-} = \frac{7 \times 2}{7(M-m)} \max\Big(0,\ L_i^- - \sum_{m'=1}^{m} \sum_{j} a_{ij} y_j^{m'}\Big) ;

• L_i^{(m+1)+} = \frac{7 \times 2}{7(M-m)} \max\Big(0,\ L_i^+ - \sum_{m'=1}^{m} \sum_{j} a_{ij} y_j^{m'}\Big) ;

• B_i^{m+1} = \frac{2}{M-m} \max\Big(0,\ B_i - \sum_{m'=1}^{m} \sum_{j} b_{ij} y_j^{m'}\Big) .
As already stated, the number of evaluation scenarios included in Ω^m is kept low (e.g., |Ω^m| = 5) to meet the requirements in computational time. These scenarios are sampled as described in the next section.
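The sketch below summarizes this evaluation step; relaxed_nsp_value is a placeholder for the linear relaxation of Formulation (1) and returns a dummy value, while the scoring and selection logic follows the expected-value criterion described above.

# Score each candidate by its own cost plus the mean relaxed future cost, and
# keep the candidate with the best (smallest) score.
import statistics

def relaxed_nsp_value(candidate, scenario):
    # Placeholder: would solve the LP relaxation initialized with `candidate`
    # over the sampled demand `scenario` (inf if the border is infeasible).
    return float(len(scenario))

def score(candidate, candidate_cost, scenarios):
    future = [relaxed_nsp_value(candidate, w) for w in scenarios]
    return candidate_cost + statistics.mean(future)

def choose_best(candidates, scenarios):
    # candidates: list of (schedule, weekly_cost)
    return min(candidates, key=lambda c: score(c[0], c[1], scenarios))

if __name__ == "__main__":
    scenarios = [{"week": k} for k in range(5)]        # |Omega^m| = 5 sampled demands
    candidates = [({"id": "A"}, 120.0), ({"id": "B"}, 110.0)]
    print(choose_best(candidates, scenarios)[0])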
Sampling of the scenarios
The competition data does not provide any knowledge about past demands, potential probability distributions of the demand, nor any other type of information that could help for sampling scenarios of demand.
It is thus impossible to build complex and accurate prediction models for the future demand. At a given stage m, the algorithm has absolutely no knowledge about the future realizations of the demand, so the sampling can only be based on the current and past observations of the weekly demands on stages 1 to m.
To build scenarios of future demand, we simply perturb these observations with some noise that is uniformly distributed within a small range (typically one or two nurses) and randomly mix these observations (e.g., pick the Monday of one observation and the Tuesday from another one). The future preferences are not sampled in the scenarios, because they cannot lead to an infeasible solution, they do not induce border effects, and they have small costs when compared to the other soft constraints. The goal of the sampling method is only to obtain some diversity in the scenarios used to generate different candidate schedules and in those used to evaluate the candidate schedules. Assuming that the demands will not change dramatically from one week to another, this allows for additional robustness and efficiency in many situations.
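A minimal sketch of this sampling procedure is given below; the demand encoding and the plus/minus one nurse noise range are assumptions consistent with the description above.

# Each day of the sampled week is drawn from a past observed week, then the
# demand is perturbed by a small uniform noise and clipped at zero.
import random

def sample_week(observed_weeks, noise=1):
    """observed_weeks: list of weekly demands, each {(day, shift, skill): n_nurses}."""
    sampled = {}
    for day in range(7):
        source = random.choice(observed_weeks)          # mix days across observations
        for (d, shift, skill), demand in source.items():
            if d == day:
                perturbed = demand + random.randint(-noise, noise)
                sampled[(d, shift, skill)] = max(0, perturbed)
    return sampled

if __name__ == "__main__":
    week1 = {(d, "Day", "Nurse"): 3 for d in range(7)}
    week2 = {(d, "Day", "Nurse"): 4 for d in range(7)}
    print(sample_week([week1, week2]))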
Summary of the primal-dual-based sample average approximation
Algorithm 3 provides a detailed description of the overall algorithm we submitted to the INRC-II.
It generates several schedules with a primal-dual algorithm and evaluates them over a set Ω^m of future demands. The evaluation step increases the probability of selecting a globally feasible schedule, i.e., one that has already been feasible for several resolutions of the linear relaxation of Formulation (1). The performance of this algorithm is discussed in Section 5.
Algorithm 3: A primal-dual-based sample average approximation
Initialize the dual variables: β^-_i = c_6, β^+_i = γ_i = 0, ∀i ∈ N
for each stage m = 1 ... M - 1 do
    Initialize the set of candidate schedules of stage m: S^m = ∅
    Initialize the generation model using the chosen schedule of the previous stage m - 1 (i.e., set the initial state)
    Sample a set Ω^m of future demands for the evaluation
    while there is enough computational time do
        Sample a second-week demand for the generation
        Solve Formulation (5) with a deterministic algorithm to build a two-week schedule (j_1, j_2)
        Store the schedule of the first week: S^m := S^m ∪ {j_1}
        Initialize the evaluation model with schedule j_1 (i.e., set the initial state)
        for each demand ω ∈ Ω^m do
            Compute the value V^m_{jω} as the optimal value of the linear relaxation of Formulation (1) over two weeks if m < M - 1, and one week otherwise
        end for
    end while
    Choose the schedule j^m with the best average evaluation cost
    Update the dual variables as in Algorithm 2
end for
Compute the best schedule for the last stage M with the given computational time
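Reduced to its control flow, the per-stage logic of Algorithm 3 can also be sketched as follows; `generate_two_week_schedule`, `evaluate` and `sample_demand` are placeholders standing in for the column-generation and linear-relaxation components described above.

```python
import time
import statistics

def run_stage(initial_state, eval_scenarios, time_budget,
              generate_two_week_schedule, evaluate, sample_demand):
    """Generate candidate first-week schedules while time remains and keep the
    one with the best average evaluation cost (placeholder callables)."""
    candidates = []                       # the set S^m of candidate schedules
    deadline = time.time() + time_budget
    while time.time() < deadline:
        second_week = sample_demand()     # demand used for the generation step
        j1, _j2 = generate_two_week_schedule(initial_state, second_week)
        costs = [evaluate(j1, omega) for omega in eval_scenarios]
        candidates.append((statistics.mean(costs), j1))
    # schedule with the lowest average evaluation cost
    return min(candidates, key=lambda c: c[0])[1]
```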
Experimentations
This section presents the results obtained at the INRC-II. The competition was organized in two rounds.
In the selection round, each team had to submit their best results on a benchmark of 28 instances that were available to the participants before submitting the codes. The organizers then retained the best eight teams for the final round where they tested the algorithms against a new set of 60 instances. The algorithm described above ranked second in both rounds.
The instances used during each round are summarized in Tables 3 and 4. They range from relatively small instances (35 nurses over 4 weeks) to very large ones (120 nurses over 8 weeks) that are very difficult to solve even in a static setting [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. The algorithms of the participants all had the same limited computational time to solve each stage (depending on the number of nurses and on the computer used for the tests 1 ). The solution obtained at each stage was used as an initial state for the schedule of the following week. If an algorithm reached an infeasible schedule, the iterative process was stopped. The final rank of each team was computed as the average rank over all the instances. In this section, we focus our discussion on the quality of the results obtained with our algorithm.
More details about the competition and the results can be found on the competition website: http://mobiz.vives.be/inrc2/.
Algorithm implementation
Algorithm 3 depends on how future demands are sampled, on the number of scenarios used for the evaluation, and last but not least, on the scheduling software.
The algorithm uses only five scenarios of future demands for the evaluation. It must indeed divide the short available computational time between the generation and the evaluation of the schedules. The first step aims at computing the best schedule according to the current demand, while the second step seeks a robust plan that yields promising results for the following stages (a high probability of remaining feasible and near-optimal). In order to generate several schedules (at least two) within the granted computational time, the number of demand scenarios must remain small. Moreover, since the demand scenarios we generate are not based on accurate data, but only on a learned distribution, there is no guarantee that a larger number of scenarios would provide a better evaluation. In fact, we tested different configurations (3 to 10 scenarios used for the evaluation), and they all gave approximately the same results (the best results were obtained for 4 to 6 scenarios).
The code is publicly shared on a Git repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF]. The scheduling software is implemented in C++ and is based on the branch-cut-and-price framework of the COIN-OR project, BCP. The choice of this framework is motivated by the competition requirement of using free and open-source libraries. The pricing problems are modeled as shortest paths with resource constraints and solved with the Boost library. The solution algorithm is not parallelized and it uses the branching strategy 'two dives' described in [START_REF] Legrain | A rotation-based branch-and-price approach for the nurse scheduling problem[END_REF]. This strategy 'dives' two times in the branching tree and keeps the best feasible solution. If no solution is found after two dives (which was never the case), the algorithm continues until it reaches either the time limit or a feasible solution.
Selection instances
The winning team's algorithm ranks first on every instance (but for one third position). Their algorithm is also based on a mixed integer programming approach that computes weekly schedules, but they directly model the problem using a large flow network where a state-expansion is used to consider the soft constraints. For the time being, only a brief description of the algorithm is available in [START_REF] Römer | A direct MILP approach based on state-expanded network flows and anticipation for multistage nurse rostering under uncertainty[END_REF]. Algorithm 3 obtains a fair second position and competes with the best algorithm, since it also ranks first or second on every instance but two, for which it ranks third. Finally, the third team is significantly behind the first two. As highlighted by Figure 2, the solutions found by their algorithms exhibit at least a 9% relative gap with respect to the best solution.
It is also important to note that these algorithms are randomized, because they are all based on random sampling of future demands. For the first phase of the competition, the participants had to provide the random seeds they used to obtain their results. During the second phase, the organizers executed the code of each team ten times with different arbitrary random seeds on each instance. Because of these variations, we ran the algorithm many times on each of the instances used for the selection and submitted only the best solutions, to increase our chances of qualification. Figure 3 shows the distribution of the objective value for 180 observations of the solution of an instance with 80 nurses over 8 weeks using Algorithm 3; the values of the solution are within a [-6%, +7%] range. Most teams must have used the same technique, since the ranking between the selection and the final rounds did not really change.
Final instances
The final results respect the same ranking as the one obtained after the selection. However, these comparisons are fairer, since the results were computed by the organizers, so the teams were not able to select the solutions they submitted. This configuration gives a better evaluation of the proposed algorithms, and especially of their robustness. The organizers even ran the algorithms 10 times on each of the 60 final instances, and thus compared the proposed software over 600 tests. Figure 4 presents the relative gaps obtained on the final instances. The first two teams are really close, and their algorithms highlight two distinct features of the competition. The winning team's algorithm builds the best schedules for about 65% of the instances, but our algorithm appears to be more robust. We indeed produced a feasible schedule in every test but one, while the winners could not build a feasible schedule in 5% of the cases (i.e., 34 tests). This comparison also highlights the balance that needs to be found between the time spent in the generation of the best possible schedules and their evaluation, since this second phase provides a measure of their robustness to future demands.
Figure 5 shows the cumulative distribution of the relative gap of the winning team's solutions from ours as a function of the number of nurses. It is clear that once the instances exceed a certain size (i.e., 110 nurses), the quality of the solution of the winning team decreases. Indeed, in [START_REF] Römer | A direct MILP approach based on state-expanded network flows and anticipation for multistage nurse rostering under uncertainty[END_REF], the winning team comments that their algorithm was simply unable to find feasible integer solutions for some week demands of these instances, showing that the method has difficulties in scaling up. Furthermore, this algorithm was also not able to find feasible solutions for a significant number of the small instances (i.e., 35 nurses). As a possible explanation, we have observed that it is more difficult to find feasible solutions for these instances, because they leave less flexibility for creating the schedules, i.e., because the hard constraints of the MIP are tighter. Stated otherwise, the proportion of feasible schedules that meet the minimum demand is much smaller for the smallest instances used in the competition.
Conclusions
This article deals with the nurse scheduling problem as described in the context of the international competition INRC-II. The objective is to sequentially build, week by week, the schedule of a set of nurses over a planning horizon of several weeks. In this dynamic process, the schedule computed for a given week is irrevocably fixed before the demand and the preferences for the next week are revealed. The main difficulty is to correctly handle the border effects between weeks and the global soft constraints to compute a feasible and near-optimal schedule for the whole horizon.
Our main contribution is the design of a robust online stochastic algorithm that performs very well over a wide range of instances (from 30 to 120 nurses over a four- or eight-week horizon). The proposed algorithm embeds a primal-dual algorithm within a sample average approximation. The primal-dual procedure generates candidate schedules for the current week, and the sample average approximation allows us to evaluate each of them and retain the best one. The resulting implementation is shared on a public repository [START_REF] Legrain | Dynamic nurse scheduler[END_REF] and builds upon an open source static nurse scheduling software.
The designed algorithm won the second prize in the INRC-II competition. The results show that, although this procedure does not compute the best schedules for a majority of instances, it is the most robust one.
Indeed, it finds feasible solutions for almost every instance of the competition while providing high-quality schedules.
Figure 1: Example of a rostering graph for nurse i ∈ N over a horizon of K = 7 days, where the minimum and maximum numbers of consecutive resting days are respectively CR^-_i = 2 and CR^+_i = 3, and the initial number of consecutive resting days is CR^0_i = 1. The rotation arcs (x_ij) are the plain arrows, the rest arcs (r_ikl and r_ik) are the dotted arcs, and the artificial flow arcs are the dashed arrows. The bold rest arcs have a cost of c_3 and the others are free.
Algorithm 1: A sample average approximation based algorithm. For each stage m = 1 ... M - 1, initialize the set of candidate schedules of stage m, S^m = ∅, and initialize the generation algorithm with the chosen schedule j^{m-1} of the previous stage m - 1 (i.e., set the initial state).
Figure 2: Cumulative distribution of the relative gap on the selection instances.
Figure 3: Distribution of the objective value for the selection instances.
Figure 4: Cumulative distribution of the relative gap on the final instances.
Figure 5: Cumulative distribution of the relative gap as a function of the number of nurses.
Table 1: Constraints handled by the software.
Table 2: Summary of the input data.
Demand: D^{sk}_σ, minimum demand in nurses performing skill σ on shift (k, s); O^{sk}_σ, optimal demand in nurses performing skill σ on shift (k, s).
Initial state: CD^0_i, initial number of ongoing consecutive worked days for nurse i; CS^0_i, initial number of ongoing consecutive worked days on the same shift for nurse i; s^0_i, shift worked on the last day before the planning horizon for nurse i; CR^0_i, initial number of ongoing consecutive resting days for nurse i.
Nurse parameters: L^-_i, L^+_i, min/max total number of worked days over the planning horizon for nurse i; CR^-_i, CR^+_i, min/max number of consecutive days-off for nurse i; B_i, max number of worked week-ends over the planning horizon for nurse i.
Decision variables: f^-_ij and f^+_ij represent the first and last worked days of rotation j. Let x_ij be a binary decision variable which takes value 1 if rotation j is part of the schedule of nurse i and zero otherwise. The binary variables r_ikl and r_ik measure if constraint S3 is violated: they are respectively equal to 1 if nurse i has a rest period from day k to l - 1 including at most CR^+_i consecutive days (cost: c^{ikl}_3), and if nurse i rests on day k and has already rested for at least CR^+_i consecutive days before k, and to zero otherwise. The integer variables w^+_i and w^-_i count the number of days worked respectively above L^+_i and below L^-_i by nurse i. The integer variable v_i counts the number of weekends worked above B_i by nurse i. Finally, the integer variables n^{sk}_σ, n^{sk}_{tσ}, and z^{sk}_σ respectively measure the number of nurses performing skill σ, the number of nurses of type t performing skill σ, and the undercoverage of skill σ on shift (k, s).
Table 3: Instances used for the selection. Number of nurses: 35, 70, 110.
Table 4: Instances used for the final.
The computational times are given for a Linux computer with an Intel(R) Xeon(R) X5675 @ 3.07GHz and 8 GB of available memory.
Despite the limits of this algorithm, our intent with this article is to present the exact implementation that was submitted to the competition. There is room for improvement that could be explored in the future. For instance, the primal-dual algorithm could be enhanced with non-linear updates, new features recently developed in the static nurse scheduling software could be tested, or the bounding constraints added in the primal-dual algorithm could be refined. |
01761384 | en | [
"phys.mphy",
"spi.mat",
"spi.signal",
"spi.opti",
"spi.fluid",
"stat.me",
"stat.co"
] | 2024/03/05 22:32:13 | 2018 | https://imt-mines-albi.hal.science/hal-01761384/file/Mondiere-Controlling.pdf | A Mondiere
V Déneux
N Binot
D Delagnes
Controlling the MC and M 2 C carbide precipitation in Ferrium® M54® steel to achieve optimum ultimate tensile strength/fracture toughness balance
Aurélien Mondière, Valentine Déneux, Nicolas Binot, Denis Delagnes
Introduction
Aircraft applications, particularly for landing gear, require steels with high mechanical resistance, fracture toughness and stress corrosion cracking resistance [START_REF] Flower | High Performance Materials in Aerospace[END_REF]. Additionally, the aerospace industry is looking for different ways to reduce the weight of landing gear parts, as the landing gear assembly can represent up to 7% of the total weight of the aircraft [START_REF] Kundu | Aircraft Design[END_REF]. The search for metal alloys with a better balance of mechanical properties while maintaining a constant production cost is stimulating research activities. For several decades, 300 M steel has been widely used for landing gear applications. However, its fracture toughness and stress corrosion cracking resistance need to be improved and aeronautical equipment suppliers are searching for new grades. As shown in Fig. 1, AerMet® 100 and Ferrium® M54® (M54®) grades are excellent candidates to replace the 300 M steels without any reduction in strength or increase in weight. Other grades do not present a high enough fracture toughness, or are not resistant enough.
The recent development of M54® steel since 2010 [START_REF] Jou | Lower-Cost, Ultra-High-Strength, High-Toughness Steel[END_REF] has led to a higher stress corrosion cracking resistance and a lower cost, due to its lower cobalt content (see Table 1), for properties otherwise equivalent to those of the AerMet® 100 grade. These two steels belong to the UHS Co-Ni steel family.
UHS Co-Ni steels were developed at the end of the 1960s with the HP9-4-X [START_REF] Garrison | Ultrahigh-strength steels for aerospace applications[END_REF] and HY-180 [START_REF] Dabkowski | Nickel, Cobalt, Chromium Steel[END_REF] grades, with the main goal of achieving a higher fracture toughness than 300M or 4340 steels. The main idea was first to replace cementite by M 2 C alloy carbide precipitation during tempering, to avoid brittle fracture without too large a reduction in mechanical strength. A better UTS/K 1C balance was achieved with AF1410 [START_REF] Little | High Strength Fracture Resistant Weldable Steels[END_REF] by increasing the content of carbide-forming elements. In addition, an improvement in fracture toughness was also requested and finally achieved by the accurate control of reverted austenite precipitation during tempering [START_REF] Haidemenopoulos | Dispersed-Phase Transformation Toughening in UltraHigh-Strength Steels[END_REF] and the addition of rare earth elements to change the sulfide type [START_REF] Handerhan | A comparison of the fracture behavior of two heats of the secondary hardening steel AF1410[END_REF][START_REF] Handerhan | Effects of rare earth additions on the mechanical properties of the secondary hardening steel AF1410[END_REF], resulting in an increase in inclusion spacing [START_REF] Garrison | Lanthanum additions and the toughness of ultra-high strength steels and the determination of appropriate lanthanum additions[END_REF]. Thus, AerMet® 100 was patented in 1993 [START_REF] Hemphill | High Strength, High Fracture Toughness Alloy[END_REF], incorporating these scientific advances to achieve the same strength level as 300M but with a higher fracture toughness. Then, from the 1990s to the 2000s, scientists sought to improve grain boundary cohesion, and thereby further increase the fracture toughness, through W, Re and B additions [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF]. Thus, Ferrium® S53® steel, developed in 2007 [START_REF] Kuehmann | Nanocarbide Precipitation Strengthened Ultrahigh-Strength[END_REF], was the first steel of the family containing W. Seven years ago, Ferrium® M54® steel was designed, offering roughly the same mechanical properties as AerMet® 100, but at a lower price thanks to a lower cobalt content.
UHS Co-Ni steels all exhibit an excellent UTS/K 1C balance due to M 2 C carbide precipitation during tempering in a highly dislocated lath-martensitic matrix [START_REF] Jou | Lower-Cost, Ultra-High-Strength, High-Toughness Steel[END_REF][START_REF] Little | High Strength Fracture Resistant Weldable Steels[END_REF][START_REF] Hemphill | High Strength, High Fracture Toughness Alloy[END_REF][START_REF] Ayer | Transmission electron microscopy examination of hardening and toughening phenomena in Aermet 100[END_REF][START_REF] Olson | APFIM study of multicomponent M2C carbide precipitation in AF1410 steel[END_REF][START_REF] Machmeier | Development of a strong (1650MNm -2 tensile strength) martensitic steel having good fracture toughness[END_REF]. However, there is limited literature on the recently developed M54® steel [START_REF] Wang | Austenite layer and precipitation in high Co-Ni maraging steel[END_REF][START_REF] Wang | Analysis of fracture toughness in high Co-Ni secondary hardening steel using FEM[END_REF][START_REF] Lee | Ferrium M54 Steel[END_REF][START_REF] Pioszak | Hydrogen Assisted Cracking of Ultra-High Strength Steels[END_REF].
The addition of alloying elements in UHS Co-Ni steels also forms stable carbides like M 6 C or M 23 C 6 during the heat treatment process.
The size of these stable carbides can easily reach several hundred nanometers, resulting in a significant decrease in fracture toughness, since they act as microvoid nucleation sites under mechanical loading [START_REF] Schmidt | Solution treatment effects in AF1410 steel[END_REF]. These particles can be dissolved by increasing the austenitizing temperature, but the prior austenite grain size then rapidly increases, with a detrimental effect on the mechanical properties [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF]. The new challenge for these steels is thus to dissolve the coarse stable carbides without excessive grain growth.
This challenge is also well known in other kinds of martensitic steels for other applications, such as hot work tool steels. Michaud [START_REF] Michaud | The effect of the addition of alloying elements on carbide precipitation and mechanical properties in 5% chromium martensitic steels[END_REF] showed that V-rich carbide precipitation during tempering achieves high mechanical properties at room temperature as well as at high temperature. However, the precipitation remained heterogeneously distributed in the matrix, regardless of the austenitizing and tempering conditions, so fracture toughness and Charpy impact energy were limited. Indeed, the same V-rich precipitation (MC type) was found to control both the austenitic grain size during austenitizing and the strength during tempering. The incomplete solutionizing of V-rich carbides during austenitizing does not permit a homogeneous concentration of alloying elements in the martensitic matrix after quenching, which explains why the strength/fracture toughness balance is limited. The generic idea would be to introduce two distinct precipitate populations, each with a single and precise role: to control the austenitic grain size OR to control the mechanical strength. In H11-type tool steels, the addition of Mo slightly improved the balance of properties [START_REF] Michaud | The effect of the addition of alloying elements on carbide precipitation and mechanical properties in 5% chromium martensitic steels[END_REF].
In steels for aircraft applications, Olson [START_REF] Olson | Overview: Science of Steel[END_REF] and Gore et al. [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF] succeeded in introducing another type of homogeneously distributed small particles, (Ti,Mo)(C,N), which pin the grain boundaries in AF1410 even at elevated austenitization temperatures (T = 1200 °C). These carbides prevent grain coarsening between 815 °C and 885 °C during austenitization, leading to an increase in fracture toughness due to the dissolution of coarse carbides [START_REF] Schmidt | Solution treatment effects in AF1410 steel[END_REF]. The patent of Ferrium® S53® steel also describes a nanoscale MC precipitation which pins the grain boundaries and prevents grain coarsening upon dissolution of the coarse carbides [START_REF] Kuehmann | Nanocarbide Precipitation Strengthened Ultrahigh-Strength[END_REF].
Stable carbide dissolution in Ferrium® M54® seems to be particularly challenging due to the formation of both M 2 C and M 6 C Mo-rich carbides during the heat treatment process (see Fig. 2). Indeed, as Mo-rich M 2 C carbides precipitate during tempering, the full dissolution of Mo-rich carbides is needed to achieve a homogeneous distribution of Mo within the matrix.
More specifically, the particles that control the austenitic grain size need to remain stable at temperatures high enough to dissolve the whole population of M 2 C and M 6 C carbides, without grain coarsening. The aim of this article is to investigate carbide precipitation in M54® after a cryogenic treatment following the quench as well as after tempering. Carbide distribution, size and composition are carefully described for both states.
Experiments
Materials and Heat Treatment
Specimens were taken at mid-radius of a single bar of diameter 10.25 cm in the longitudinal direction.
The performed heat treatments were in agreement with the QuesTek recommendations [27] and consisted of a preheating treatment at 315 °C/1 h, a solutionizing at 1060 °C/1 h, followed by an oil quench, cold treatment at -76 °C/2 h and tempering at 516 °C/10 h.
Experimental Techniques
Austenite grain size was measured after the quench. Precipitation in the quenched state, after cryogenic treatment, was observed to identify undissolved carbides. Secondary carbides were characterized after tempering, at the end of the whole heat treatment process.
Chemical composition of the alloy was measured with a Q4 Tasman Spark Optical Emission Spectrometer from Bruker.
Dilatometry was performed using a Netzsch apparatus, DIL402C. Samples for dilatometry were cylinders of 3.7 mm diameter and 25 mm length. Samples were heated at 7 °C/min and cooled at 5 °C/min under argon atmosphere.
For the as-quenched state, carbides were extracted by chemical dissolution of the matrix with a modified Berzelius solution at room temperature [START_REF] Burke | Chemical extraction of refractory inclusions from iron-and nickel-base alloys[END_REF] as already developed by Cabrol et al. [START_REF] Cabrol | Experimental investigation and thermodynamic modeling of molybdenum and vanadium-containing carbide hardened iron-based alloys[END_REF]. At the end of the dissolution, the solution was centrifuged to collect nanoscale precipitates. A Beckman Coulter Avanti J-30I centrifugal machine equipped with a JA-30.50Ti rotor was used to centrifuge the solution. The experimental method is described precisely in [START_REF] Cabrol | Experimental investigation and thermodynamic modeling of molybdenum and vanadium-containing carbide hardened iron-based alloys[END_REF].
XRD characterizations of the powder obtained after the chemical dissolution and of the bulk sample were performed using a Panalytical X'Pert PRO diffractometer equipped respectively with a Cu or Co radiation source. Phase identification was achieved by comparing the diffraction pattern of the experimental samples with reference JCPDS patterns.
Prior austenite grain size measurement is difficult because of the very low impurity content of the M54® grade. An oxidation etching was therefore conducted by heating polished samples in a furnace at 900 °C and 1100 °C under room atmosphere for 1 h, then slightly polishing them after quenching to remove the oxide layer inside the grains while keeping the oxide only at the grain boundaries.
Transmission Electron Microscopy (TEM) observations were performed using a JEOL JEM 2100F. Thin foils for TEM were cut from the specimens and the thickness was reduced to approximately 150 μm. Then, they were cut into disks and polished to a thickness of about 60 μm. The thin foils were then electropolished in a perchloric acid-methanol solution at -15 °C with a TenuPol device.
Chemical composition at the nanometer scale was determined using atom probe tomography (APT) at the Northwestern University Center for Atom-Probe Tomography (NUCAPT). Samples were prepared as rods with a cross section of 1 × 1 mm² and electro-polished using a two-step process at room temperature [START_REF] Krakauer | Systematic procedures for atom-probe field-ion microscopy studies of grain boundary segregation[END_REF][START_REF] Krakauer | A system for systematically preparing atom-probe field-ion-microscope specimens for the study of internal interfaces[END_REF]. The APT analyses were conducted with a LEAP 4000X-Si from Cameca at a base temperature of -220 °C, a pulse energy of 30 pJ, a pulse repetition rate of 250 kHz, and an ion detection rate of 0.3% to 2%. This instrument uses a local-electrode and laser pulsing with a picosecond 355 nm wavelength ultraviolet laser, which minimizes specimen fracture [START_REF] Bunton | Advances in pulsed-laser atom probe: instrument and specimen design for optimum performance[END_REF].
For the prediction of the types and molar fractions of the phases as a function of temperature, thermodynamic calculations were performed using the ThermoCalc® software. This software and its databases were developed at the Royal Institute of Technology (KTH) in Stockholm [START_REF] Sundman | The Thermo-Calc databank system[END_REF]. ThermoCalc® calculations were performed using the TCFE3 database.
Results and Discussions
Discussion of Optimized Mechanical Properties With Finely Dispersed Nanometer Size Precipitation
Research activities on UHS steels for aircraft applications focus on maximizing mechanical strength without decreasing the fracture toughness and stress corrosion cracking resistance. To improve strength, dislocation mobility must be reduced. Consequently, increasing the number density of secondary particles (Np) is a well-known method, and the resulting hardening is given by the following equation [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF]:
$\Delta\sigma_P \approx Gb\left(\frac{f}{d}\right)^{0.5} \qquad (1)$
where Δσ P is the particle contribution to the yield strength, G is the shear modulus, d is the particle diameter, f is the volume fraction of particles and b is the Burgers vector of the dislocations. Indeed, for the same volume fraction, a finer particle distribution leads to a higher yield strength, due to the decrease in dislocation mobility.
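As a purely illustrative check of this scaling, the snippet below evaluates Eq. (1) for two particle diameters at a fixed volume fraction; the values of G, b and f are arbitrary placeholders, and the point is only the relative effect (halving d multiplies the strengthening by about √2 ≈ 1.4).

```python
# Illustrative only: sensitivity of the particle strengthening of Eq. (1)
# to the particle diameter at fixed volume fraction. G, b and f are
# arbitrary placeholder values.
G = 80e9        # shear modulus (Pa), assumed
b = 0.25e-9     # Burgers vector (m), assumed
f = 0.02        # particle volume fraction, assumed

def delta_sigma(d):
    return G * b * (f / d) ** 0.5

ref = delta_sigma(20e-9)
for d in (20e-9, 10e-9):
    print(f"d = {d*1e9:.0f} nm -> strengthening ratio {delta_sigma(d)/ref:.2f}")
```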
To obtain this fine and dispersed precipitation, two different types of nucleation are generally observed to occur in UHS steels:
-Numerous preferential nucleation sites leading to heterogeneous nucleation; -Homogeneous supersaturation of carbide-forming elements.
For the first condition, the heterogeneous nucleation of M 2 C carbides on dislocations has already been observed in previous works [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF][START_REF] Kuehmann | Thermal Processing Optimization of Nickel-Cobalt Ultrahigh-Strength Steels[END_REF]. Indeed, dislocation sites are energetically favorable due to atom segregation and the short diffusion path offered to the diffusing element (pipe diffusion). It is therefore important to maintain a high dislocation density during tempering. Consequently, cobalt is added to these alloys to keep a high dislocation density during tempering. As previously described in the literature [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF][START_REF] Olson | Overview: Science of Steel[END_REF], Co delays the dislocation recovery through the creation of short-range ordering (SRO) in the matrix. Co also decreases the solubility of Mo in ferrite and increases the carbon activity inside ferrite [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF][START_REF] Rhoads | High strength, high fatigue structural steel[END_REF][START_REF] Honeycombe | Steels microstructure and properties[END_REF][START_REF] Speich | Tempering of steel[END_REF][START_REF] Delagnes | Cementite-free martensitic steels: a new route to develop high strength/high toughness grades by modifying the conventional precipitation sequence during tempering[END_REF], leading to a more intensive precipitation of M 2 C carbides.
The main criterion for achieving the second condition is related to the dissolution of carbides during austenitizing. If carbides are not totally solutionized, the precipitation during tempering will be heterogeneously dispersed, with a higher density of clusters in the areas of high concentration of the carbide-forming elements. To avoid a heterogeneous concentration, the carbides remaining from the previous stage of heat treatment should be totally dissolved, and enough time should be spent at a temperature above the carbide solvus to obtain a homogeneous composition of the carbide-forming elements in austenite. Moreover, in order to obtain a fine and dispersed precipitation during tempering, the driving force must be increased by increasing the supersaturation, resulting in a higher nucleation rate [START_REF] Kuehmann | Thermal Processing Optimization of Nickel-Cobalt Ultrahigh-Strength Steels[END_REF]. Furthermore, undissolved carbides also reduce the potential volume fraction of particles that may precipitate during tempering [START_REF] Sato | Improving the Toughness of Ultrahigh Strength Steel[END_REF], so an almost total dissolution is needed. Thus, the austenitizing conditions should be rationalized based on the carbide dissolution kinetics and the diffusion coefficients of the alloying elements in the matrix, so as to obtain a homogeneous chemical composition of the carbide-forming elements in the martensitic matrix in the as-quenched state.
Identification of Carbide Solutionizing Temperature
The temperatures of phase transformation were determined by dilatometry experiments. According to the relative length change shown in Fig. 3(a), Ac 1 , Ac 3 and M s temperatures are clearly detected. To detect the solutionizing of carbides, the derivative of the relative length change was calculated. Carbide dissolution takes place at a temperature ranging from 970 °C to 1020 °C, as shown in Fig. 3(b).
If the austenitizing temperature is not high enough, undissolved carbides are clearly observed (see Fig. 4) and slightly decrease UTS from 1997 MPa at 1060 °C to 1982 MPa at 1020 °C, which is probably due to the carbon trapped inside those undissolved particles.
These coarse carbides can also be observed after polishing and a Nital 2% etch using SEM (Fig. 5). The volume fraction seems to be particularly high.
According to ThermoCalc® calculations, these undissolved carbides obtained after 1 h at 1020 °C are M 6 C carbides (see Fig. 2) containing a significant amount of W (see Table 2).
The high solutionizing temperature of the M54® steel compared to other steels of the same family (free of W, see Table 3) is due to the tungsten addition, which stabilizes the M 6 C carbides. If the austenitizing temperature is not high enough, undissolved carbides still remain (see Fig. 4 and Fig. 5) and the tensile properties (yield strength, UTS, elongation at rupture), as well as the fatigue resistance, are reduced. However, if the austenitizing temperature is too high and no carbides remain, severe grain coarsening is observed, also leading to a decrease in the usual mechanical properties.
According to Naylor and Blondeau [START_REF] Naylor | The respective roles of the packet size and the lath width on toughness[END_REF], thinner laths and lath packets, directly dependent on austenite grain size [START_REF] Sankaran | Metallurgy and Design of Alloys with Hierarchical Microstructures[END_REF], can improve fracture toughness by giving a long and winding route to the crack during rupture. Białobrzeska et al. [START_REF] Białobrzeska | The influence of austenite grain size on the mechanical properties of low-alloy steel with boron[END_REF] have clearly shown that at room temperature, strength, yield strength, fatigue resistance and impact energy increase when the average austenite grain size decreases. Thus, any coarsening of austenite grains should be avoided.
Pinning of the Grain Boundary and Chemical Homogenization of the Austenitic Matrix at 1060 °C
As previously mentioned in the introduction, to control the grain size during austenitizing without any impact on precipitation during tempering, two types of particles are needed: one type that controls the grain size during solutionizing, and a second type that precipitates during tempering.
To achieve this goal, one way is to add an MC-type precipitation to avoid rapid coarsening of the austenitic grains. However, according to ThermoCalc® calculations, the MC solvus temperature is not sufficiently high to allow the total dissolution of M 6 C carbides (see Fig. 2). Thus, Olson [START_REF] Olson | Overview: Science of Steel[END_REF] and Gore [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF] added some Ti to form more stable MC carbides and dissolve the other coarse stable carbides. A small addition of titanium is sufficient to obtain a significant effect on the grain size, as described by Kantner, who added 0.04%mass [START_REF] Kantner | Designing Strength, Toughness, and Hydrogen Resistance: Quantum Steel[END_REF] of titanium in Fe-15Co-6Ni-3Cr-1.7Mo-2W-0.25C and Fe-15Co-5Ni-3Cr-2.7Re-1.2W-0.18C steels, or Lippard, who added only 0.01%mass [START_REF] Lippard | Microanalytical Investigations of Transformation[END_REF] in alloys AF1410, AerMet® 100, MTL2 and MTL3. A low volume fraction of fine particles seems to be efficient in preventing austenitic grain growth [START_REF] Gore | Grain-refining dispersions and properties in ultrahigh-strength steels[END_REF]. Indeed, according to ThermoCalc® calculations, an addition of 0.01%mass of Ti in the M54® grade is enough to shift the MC solvus temperature approximately 100 °C above the MC solvus temperature of the Ti-free M54® grade (see Fig. 6).
Moreover, MC carbides contain a large amount of Ti (see Fig. 7), which is not the case for the M 2 C precipitation formed during tempering. Consequently, Ti-rich MC carbides appear to be a relevant solution to control the grain size without any impact on precipitation during tempering. The purpose of the following paragraph is to compare the experimental results with the above-mentioned theoretical prediction.
After austenitizing for 1 h at 1060 °C, fine undissolved carbides were found in M54® in the as-quenched state after cryogenic treatment. These carbides are finer than the undissolved carbides observed at lower austenitizing temperatures. In addition, a lower volume fraction is measured after an austenitization at 1060 °C than after a 1020 °C or 920 °C austenitization (see Fig. 8). The average size of these carbides is around 70 nm, measured on a sample of 23 carbides. Moreover, no coarse undissolved carbides are observed, indicating that the optimal austenitization conditions are nearly reached.
Chemical extraction of carbides in the as-quenched state was performed to determine the type of the undissolved carbides still remaining after a 1060 °C austenitizing. As predicted by the ThermoCalc® calculations, an FCC structure (MC type) was clearly identified from the XRD patterns (see Fig. 9). Moreover, the chemical composition measured by EDX (Energy Dispersive X-ray spectroscopy) is (Ti 0.44 Mo 0.27 W 0.13 V 0.16 )C. This composition is in quite good agreement with the ThermoCalc® calculated composition (Ti 0.55 V 0.25 Mo 0.17 W 0.08 )C 0.95 .
According to Spark Optical Emission Spectrometer measurements, the average Ti concentration measured is about 0.013 wt% in M54® steel. Considering that all the Ti atoms precipitate and taking into account the chemical composition of the MC measured by EDX, the volume fraction of Ti-rich MC carbide is found to be nearly 0.06%.
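The order of magnitude of this volume fraction can be checked with a simple mass balance, as sketched below; the carbide and matrix densities are assumed round values, not measured ones, and all the Ti is assumed to sit in the MC carbides, as in the text.

```python
# Rough mass-balance check of the MC volume fraction (order of magnitude only).
M = {"Ti": 47.87, "Mo": 95.95, "W": 183.84, "V": 50.94, "C": 12.01}   # g/mol
x = {"Ti": 0.44, "Mo": 0.27, "W": 0.13, "V": 0.16}                    # EDX metal fractions

carbide_molar_mass = sum(x[e] * M[e] for e in x) + M["C"]             # per MC formula unit
ti_mass_fraction_in_carbide = x["Ti"] * M["Ti"] / carbide_molar_mass  # ~0.23

wt_Ti = 0.013e-2                     # 0.013 wt% Ti, all assumed to precipitate as MC
wt_MC = wt_Ti / ti_mass_fraction_in_carbide

rho_steel, rho_carbide = 7.8, 7.0    # g/cm3, assumed round values
vol_fraction = wt_MC * rho_steel / rho_carbide
print(f"MC volume fraction ~ {100 * vol_fraction:.3f} %")             # ~0.06 %
```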
The intercarbide distance can be estimated using the equation given by Daigne et al. [START_REF] Daigne | The influence of lath boundaries and carbide distribution on the yield strength of 0.4% C tempered martensitic steels[END_REF]:
$d = 1.18\, r_{particle} \sqrt{\frac{2\pi}{3 f_v}} \qquad (2)$
where d is the distance between particles, r is the radius of the particle and f v the volume fraction of particles. According to Eq. ( 2), the distance between the MC carbides with an addition of 0.013 wt% of Ti is about a micrometer. This value is in very good agreement with SEM observations (see Fig. 8) indicating that most of the titanium carbides remain undissolved after the austenitization at 1060 °C.
Furthermore, a relation has been developed to describe grain refinement by a particle dispersion in tool steels. Bate [START_REF] Bate | The effect of deformation on grain growth in Zener pinned systems[END_REF] suggested the following equation relating the limiting grain size diameter D, the mean radius r, and the volume fraction F v of the pinning particles:
$D = \frac{4r}{3F_v} \qquad (3)$
According to Bate's Eq. (3), the calculated average grain size diameter in M54® is 78 μm.
This value is in very good agreement with the measured average grain size of 81 ± 39 μm at 900 °C and 79 ± 38 μm at 1100 °C (see Fig. 10). Approximately 300 grains were measured for each austenitizing temperature. According to Bate's work, the estimated 0.06% volume fraction of undissolved MC carbides is therefore sufficient to control the grain size of austenite.
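The 78 μm value can be reproduced directly from Eq. (3) with the numbers given in the text, taking the mean carbide radius as half of the ~70 nm average size:

```python
# Reproducing the limiting grain size of Eq. (3) from the values in the text.
r = 70e-9 / 2      # mean radius of the MC carbides (m), half of the ~70 nm size
F_v = 0.0006       # ~0.06 % volume fraction of undissolved MC carbides

D = 4 * r / (3 * F_v)
print(f"Limiting austenite grain size D ~ {D * 1e6:.0f} um")   # ~78 um
```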
Consequently, the MC particles consume only a very small quantity of the carbide-forming elements required for M 2 C precipitation during tempering. In addition, the calculated diffusion lengths of the different carbide-forming elements, Mo, Cr and W, are significantly larger than the distances between first neighbors of Mo, Cr and W, respectively, in the austenitic matrix at the end of austenitization (1060 °C/1 h) (see Table 4). As a consequence, a homogeneous composition of the austenite is quickly obtained before quenching.
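The diffusion distances of Table 4 can be estimated from the Arrhenius diffusivities listed there; taking the diffusion distance as 2√(Dt) is an assumption about the convention used, but it recovers the few-micrometer values quoted in the table.

```python
import math

# Estimated diffusion distances in austenite after 1 h at 1060 C, from the
# Arrhenius diffusivities of Table 4. The 2*sqrt(D*t) convention is assumed.
R = 8.314              # J/(mol K)
T = 1060 + 273.15      # K
t = 3600.0             # s (1 h)

diffusivities = {      # D0 in cm2/s, Q in kJ/mol
    "Mo": (0.036, 239.8),
    "Cr": (0.063, 252.3),
    "W":  (0.13, 267.4),
}

for element, (D0, Q) in diffusivities.items():
    D = D0 * math.exp(-Q * 1e3 / (R * T))     # cm2/s
    x_um = 2 * math.sqrt(D * t) * 1e4         # cm -> micrometers
    print(f"{element}: diffusion distance ~ {x_um:.1f} um")
```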
By way of conclusion, a small amount of Ti-rich MC carbides controls the austenitic grain size and, above all, the complete dissolution of the Mo-rich M 6 C carbides leads to a homogeneous distribution of the M 2 C carbide-forming elements before quenching.
Precipitation During Tempering
The particles that precipitate during tempering are totally different from the carbides controlling the austenitic grain size. According to the XRD results, M 2 C-type carbides are identified after tempering for 500 h at 516 °C (see Fig. 11). This long tempering duration is necessary to detect the diffraction peaks of the M 2 C carbides. For the standard tempering of 10 h, the volume fraction and the size of the carbides might be too low to be detected by XRD, or the long-range ordering of the M 2 C carbides (hexagonal structure) might not be achieved, as already suggested by Machmeier et al. [START_REF] Ayer | On the characteristics of M2C carbides in the peak hardening regime of AerMet 100 steel[END_REF].
Consequently, the same carbide type is identified in the M54®, AerMet® 100 and AF1410 steels [START_REF] Ayer | Transmission electron microscopy examination of hardening and toughening phenomena in Aermet 100[END_REF][START_REF] Ayer | Microstructural basis for the effect of chromium on the strength and toughness of AF1410-based high performance steels[END_REF]. Atom probe analyses were performed to determine the distribution of the M 2 C carbides within the martensitic matrix and to estimate their chemical composition. To define the particle/matrix interface in the analyzed box, the adopted criterion is an isoconcentration of 36 at% of molybdenum + carbon. The carbides seem to be homogeneously distributed within the matrix according to the (limited) volume analyzed by APT (see Fig. 12). According to TEM observations, the precipitation of M 2 C carbides during tempering is very fine, with an average size of 9.6 × 1.2 nm measured on 130 carbides (see Fig. 13), and seems to be homogeneously distributed within the matrix, as already shown by APT. The shape of the M 2 C particles is very elongated, with an aspect ratio near 10. The main conclusion can be summarized as follows: the 1060 °C austenitizing temperature contributes to a fine and dispersed precipitation of M 2 C carbides after tempering, thanks to a high supersaturation as well as a homogeneous distribution of the carbide-forming elements.
The average chemical composition of the M 2 C carbides measured by atom probe is Mo-rich with a significant content of Cr, W and V (see Fig. 14).
The chemical composition of M 2 C measured by atom probe is in quite good agreement with the ThermoCalc® calculations (see Fig. 15). The M 2 C carbides contain mainly Mo and Cr with approx. 10% W and a small amount of Fe and V, as shown in Fig. 15 and Table 5.
However, the chemical composition of the carbides in M54® is quite different from the composition measured in AerMet® 100 and AF1410 steels (see Table 5). Indeed, the main difference comes from the W content in the M 2 C carbides of the M54® steel. W has a slower diffusivity than the other carbide-forming elements and stabilizes the M 2 C carbides during long-duration tempering [START_REF] Lee | Stability and coarsening resistance of M2C carbides in secondary hardening steels[END_REF], which guarantees the mechanical properties over a wide range of tempering conditions. Moreover, very few cementite precipitates are observed in the M54® steel. This fact also contributes to the high fracture toughness value measured after tempering. Indeed, cementite is well known to strongly reduce the fracture toughness of high strength steels [START_REF] Speich | Strength and toughness of Fe-10ni alloys containing C, Cr[END_REF], particularly if the iron carbide is located at interlath sites. The W in the M 2 C carbides allows a long tempering treatment, resulting in the total dissolution of cementite without coarsening of the M 2 C carbides.
Conclusion
Ferrium® M54® steel was developed by QuesTek using intensive thermodynamic calculations [START_REF] Olson | Materials genomics: from CALPHAD to flight[END_REF]. An excellent strength/fracture toughness balance is achieved, with a UTS reaching 1965 MPa and K 1C values up to 110 MPa√m. The main goal of this work is to provide experimental evidence and arguments explaining this outstanding UTS/K 1C balance; the work focuses on identifying the precipitation occurring during the heat treatment, through a multi-scale microstructural study using advanced experimental tools (XRD, TEM, APT). To this end, the optimization of the austenitizing conditions is of primary importance, in conjunction with the solutionizing of the alloying elements needed for precipitation during tempering. The main results can be summarized as follows:
▪ The microstructure in the as-quenched state (after cryogenic treatment) can be described as a precipitation of Ti-rich MC carbides, 50 nm to 120 nm in size, in a martensitic matrix that is highly supersaturated in carbide-forming elements. In addition, those elements are homogeneously distributed within the matrix, according to diffusion-length calculations.
▪ The addition of a small amount of titanium has led to the full dissolution of the Mo- and W-rich carbides. The precipitates that control the grain size during austenitization and those that strengthen the steel during tempering are therefore totally different.
▪ This final microstructure is obtained thanks to the proper solutionizing of the alloying elements during austenitizing at high temperature (1060 °C), which results in:
o a high supersaturation before tempering;
o a homogeneously distributed nucleation of carbides.
▪ The microstructure in the tempered state (516 °C/10 h) is characterized by a homogeneously distributed precipitation of nanometer-sized M 2 C carbides. These carbides contain W, which reduces their coarsening rate.
Table 5. Comparison of the carbide composition of different UHS steels hardened by M 2 C carbide precipitation, according to ThermoCalc® calculations and experimental measurements.
Fig. 1. Comparison of different grades of steel according to their fracture toughness, ultimate tensile strength and stress corrosion cracking resistance (adapted from [3]).
Fig. 3. Relative length change curve (a) and derivative of the relative length change curve (b) obtained from dilatometer heating experiments.
Fig. 4. SEM image of a fracture surface of a tensile specimen (austenitization performed at 1020 °C).
Fig. 5. SEM image of an as-quenched sample austenitized at 920 °C, after Nital etch.
Fig. 6. Mole fraction of phases according to austenitizing temperature in M54® with and without 0.01%mass Ti, calculated with the TCFE3 ThermoCalc® database.
Fig. 7. Composition of the MC carbide according to temperature, calculated with the TCFE3 ThermoCalc® database.
Fig. 8. SEM observations of undissolved carbides after 1060 °C austenitizing and Nital etch (as-quenched structure).
Fig. 9. Pattern and experimental XRD profiles (relative intensities) of precipitates extracted from the as-quenched M54® steel.
Fig. 10. Prior austenitic grain size in the as-quenched state after 1 h austenitizing at 900 °C (a) and 1100 °C (b).
Fig. 11. Reference JCPDS pattern and experimental XRD profiles (relative intensities) of samples tempered at 516 °C for 10 h and 500 h.
Fig. 12. Three-dimensional APT reconstruction of Ni atoms (green) of a sample tempered at 516 °C for 10 h. Carbides are represented as violet isoconcentration surfaces (total concentration of Mo and C is 36 at. pct). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 14. Proximity histogram of the 98 precipitate/matrix interfaces.
Fig. 15. Chemical composition of M 2 C in M54® calculated with the TCFE3 ThermoCalc® database.
Table 1. Chemical composition (wt%) of UHS Co-Ni steels.
              C     Cr    Ni    Co    Mo   W    V     Ti       Mn   Si
M54®          0.3   1     10    7     2    1.3  0.1   0.02max  /    /
AerMet® 100   0.23  3.1   11.1  13.4  1.2  /    /     0.05max  /    /
AF1410        0.15  2     10    14    1    /    /     0.015    0.1  0.1
HP9-4-20      0.2   0.8   9     4     1    /    0.08  /        0.2  0.2
HY-180        0.13  2     10    8     1    /    /     /        0.1  0.05
S53®          0.21  10    5.5   14    2    1    0.3   0.2max   /    /
Fig. 2. Mole fraction of phases according to austenitizing temperature in M54® calculated with the TCFE3 ThermoCalc® database (Ti-free).
Table 2. Composition of the M 6 C carbides predicted by ThermoCalc® calculations: at 860 °C, the M 6 C composition is (Fe 2.8 Mo 2.05 W 0.96 Cr 0.12 V 0.07 )C.
Table 3. Austenitization temperature of different UHS steels hardened by M 2 C carbide precipitation: M54®, 1060 °C; AerMet® 100, 885 °C; AF1410, 843 °C.
Table 4. Diffusivity in γ-iron and diffusion distance during solutionizing of the carbide-forming elements [46].
Element                                    Mo                      Cr                      W
Diffusivity in γ-iron (D, cm²/s)           0.036 exp(-239.8/RT)    0.063 exp(-252.3/RT)    0.13 exp(-267.4/RT)
Diffusion distance during austenitization
(1 h at 1060 °C) (μm)                      ~4                      ~3                      ~2
Acknowledgements Atom-probe tomography was performed at the Northwestern University Center for Atom-Probe Tomography (NUCAPT). The LEAP tomograph at NUCAPT was purchased and upgraded with grants from the NSF-MRI (DMR-0420532) and ONR-DURIP (N00014-0400798, N00014-0610539, N00014-0910781, N00014-1712870) programs. NUCAPT received support from the MRSEC program (NSF DMR-1121262) at the Materials Research Center, the SHyNE Resource (NSF ECCS-1542205), and the Initiative for Sustainability and Energy (ISEN) at Northwestern University. Special thanks to Dr. Dieter Isheim for his analyses and invaluable help.
Assistance provided by QuesTek Innovations LLC through Chris Kern and Ricardo K. Komai.
Data availability
The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. |
01762573 | en | [
"info.info-ai",
"info.info-cv",
"info.info-lg",
"info.info-ts",
"scco",
"scco.neur",
"scco.psyc"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01762573v2/file/chapterBCISigProcHumans.pdf | Fabien Lotte
Camille Jeunet
Jelena Mladenovic
Bernard N'kaoua
Léa Pillette
A BCI challenge for the signal processing community: considering the user in the loop
Introduction
ElectroEncephaloGraphy (EEG)-based Brain-Computer Interfaces (BCIs) have proven promising for a wide range of applications, from communication and control for severely motor-impaired users, to gaming targeted at the general public, real-time mental state monitoring and stroke rehabilitation, to name a few [START_REF] Clerc | Brain-Computer Interfaces 2: Technology and Applications[END_REF][START_REF] Lotte | Electroencephalography Brain-Computer Interfaces[END_REF]. Despite this promising potential, BCIs are still scarcely used outside laboratories for practical applications. The main reason preventing EEG-based BCIs from being widely used is arguably their poor usability, which is notably due to their low robustness and reliability. To operate a BCI, the user has to encode commands in his/her EEG signals, typically using mental imagery tasks, such as imagining hand movement or mental calculations. The execution of these tasks leads to specific EEG patterns, which the machine has to decode by using signal processing and machine learning. So far, to address the reliability issue of BCI, most research efforts have been focused on command decoding only. The present book contains numerous examples of advanced machine learning and signal processing techniques to robustly decode EEG signals, despite their low spatial resolution and their noisy, non-stationary nature. Such algorithms have contributed greatly to making BCI systems more efficient and effective, and thus more usable.
However, if users are unable to encode commands in their EEG patterns, no signal processing or machine learning algorithm would be able to decode them. Therefore, we argue in this chapter that BCI design is not only a decoding challenge (i.e., translating EEG signals into control commands), but also a human-computer interaction challenge, which aims at ensuring the user can control the BCI. Indeed, BCI control has been shown to be a skill that needs to be learned and mastered [START_REF] Neuper | Neurofeedback Training for BCI Control[END_REF][START_REF] Jeunet | Human Learning for Brain-Computer Interfaces[END_REF]. Recent research results have actually shown that the way BCI users are currently trained is suboptimal, both theoretically [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF][START_REF] Lotte | Towards Improved BCI based on Human Learning Principles[END_REF] and practically [START_REF] Jeunet | Why Standard Brain-Computer Interface (BCI) Training Protocols Should be Changed: An Experimental Study[END_REF]. Moreover, the user is known to be one of the main causes of EEG signal variability in BCI, due to his/her change in mood, fatigue, attention, etc. [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF][START_REF] Shenoy | Towards adaptive classification for BCI[END_REF].
Therefore, there are a number of open challenges to take the user into account during BCI design and training, for which signal processing and machine learning methods could provide solutions. These challenges notably concern 1) the modeling of the user and 2) understanding and improving how and what the user is learning.
More precisely, the BCI community should first work on user modeling, i.e., modeling and updating the user's mental states and skills over time from their EEG signals, behavior, BCI performances, and possibly other sensors. This would enable us to design individualized BCIs, tailored to, and thus maximally efficient for, each user. The community should also identify new performance metrics, beyond classification accuracy, that could better describe users' skills at BCI control.
Second, the BCI community has to understand how and what the user learns to control the BCI. This includes thoroughly identifying the features to be extracted and the classifier to be used to ensure the user's understanding of the resulting feedback, as well as how to present this feedback. Being able to update machine learning parameters in a specific manner and at a precise moment, to favor learning without confusing the user with ever-changing feedback, is another challenge. Finally, it is necessary to gain a clearer understanding of the reasons why mental commands are sometimes correctly decoded and sometimes not, and of what makes people sometimes fail at BCI control, in order to be able to guide them to do better.
Altogether, solving these challenges could have a substantial impact in improving BCI efficiency, effectiveness and user-experience, i.e., BCI usability. Therefore, this chapter aims at identifying and describing these various open and important challenges for the BCI community, at the user level, to which experts in machine learning and signal processing could contribute. It is organized as follows: Section 1.2 addresses challenges in BCI user modeling, while Section 1.3 targets the understanding and improvement of BCI user learning. For each section, we identify the corresponding challenges, the possible impact of solving them, and first research directions to do so. Finally the chapter summarizes these open challenges and possible solutions in Section 1.4.
Modeling the User
In order to be fully able to take the user into account in BCI design and training, the ideal solution would be to have a full model of the users, and in particular of the users' traits, e.g., cognitive abilities or personality, and states, e.g., current attention level or BCI skills at that stage of training. Signal processing and machine learning tools and research can contribute to these aspects by developing algorithms to estimate the users' mental states (e.g., workload) from EEG and other physiological signals, by estimating how well users can self-modulate their EEG signals, i.e., their BCI skills, and by dynamically modeling, using machine learning, all these aspects together. We detail these points below.
Estimating and tracking the user's mental states from multimodal sensors
The increase in the number of available low-cost sensors [START_REF] Swan | Sensor mania! the internet of things, wearable computing, objective metrics, and the quantified self 2.0[END_REF] and the development of machine learning enable real-time assessment of some of the cognitive, affective and motivational processes influencing learning, such as attention. Numerous types of applications already take advantage of this information, such as health [START_REF] Jovanov | A wireless body area network intelligent motion sensors for computer assisted physical rehabilitation[END_REF], sport [START_REF] Baca | Rapid feedback systems for elite sports training[END_REF] or intelligent tutoring systems [START_REF] Woolf | Affective tutors: Automatic detection of and response to student emotion[END_REF]. Such states could thus be relevant to improve BCI learning as well. Among the cognitive states influencing learning, attention deserves particular care since it is necessary for memorization to occur [START_REF] Fisk | Memory as a function of attention, level of processing, and automatization[END_REF]. It is a key factor in several models of instructional design, e.g., in the ARCS model where A stands for Attention [START_REF] Keller | The Arcs model of motivational design[END_REF]. Attention levels can be estimated in several ways. According to the resource theory of Wickens, task performance is linked to the amount of attentional resources needed [START_REF] Wickens | Multiple resources and performance prediction[END_REF]. Therefore, performance can provide a first estimation of the level of attentional resources the user dedicates to the task. However, this metric also reflects several other mental processes, and should thus be considered with care. Moreover, attention is a broad term that encompasses several types of concepts [START_REF] Posner | Components of attention[END_REF][START_REF] Cohen | The neuropsychology of attention[END_REF]. For example, focused attention refers to the amount of information that can be processed at a given time, whereas vigilance refers to the ability to pay attention to the appearance of an infrequent stimulus over a long period of time. Each type of attention can be monitored in particular ways: for example, vigilance can be detected using blood flow velocity measured by transcranial Doppler sonography (TCD) [START_REF] Shaw | Effects of sensory modality on cerebral blood flow velocity during vigilance[END_REF]. Focused visual attention, which refers to the selection of visual information to process, can be assessed by measuring eye movements [START_REF] Glaholt | Eye tracking in the cockpit: a review of the relationships between eye movements and the aviators cognitive state[END_REF]. While physiological sensors provide information about the physiological reactions associated with processes taking place in the central nervous system, neuroimaging has the advantage of recording information directly from the source [START_REF] Frey | Review of the use of electroencephalography as an evaluation method for human-computer interaction[END_REF]. EEG recordings enable the discrimination of some types of attention, with varying levels of reliability depending on the method used.
For instance, the alpha band (7.5 to 12.5 Hz) can be used to discriminate several levels of attention [START_REF] Klimesch | Induced alpha band power changes in the human EEG and attention[END_REF], while the amplitude of event-related potentials (ERPs) is modulated by visual selective attention [START_REF] Saavedra | Processing stages of visual stimuli and event-related potentials[END_REF]. While specific experiments need to be carried out to specify the exact nature of the type(s) of attention involved in BCI training, a relationship between gamma power (30 to 70 Hz) in the attentional network and mu rhythm-based BCI performance has already been shown by Grosse-Wentrup et al. [START_REF] Grosse-Wentrup | Causal influence of gamma oscillations on the sensorimotor rhythm[END_REF][START_REF] Grosse-Wentrup | High gamma-power predicts performance in sensorimotor-rhythm braincomputer interfaces[END_REF]. Such a linear correlation suggests the implication of focused attention and working memory [START_REF] Grosse-Wentrup | High gamma-power predicts performance in sensorimotor-rhythm braincomputer interfaces[END_REF] in BCI learning.
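As a purely illustrative companion to the EEG markers of attention discussed above, the following minimal Python sketch estimates a rough attention index as the relative alpha-band power of a single EEG epoch. It is not taken from the cited studies; the sampling rate, band limits and the use of Welch's method are assumptions made for the example.

```python
# Minimal sketch (illustrative assumptions): relative alpha power as a crude attention index.
import numpy as np
from scipy.signal import welch

FS = 250                     # sampling frequency in Hz (assumed)
ALPHA = (7.5, 12.5)          # alpha band limits used above

def band_power(epoch, fs, band):
    """Mean power of one EEG epoch (channels x samples) within a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=2 * fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # integrate the PSD over the band, then average over channels
    return np.trapz(psd[..., mask], freqs[mask], axis=-1).mean()

def attention_index(epoch, fs=FS):
    """Relative alpha power: alpha-band power divided by broadband (1-40 Hz) power."""
    total = band_power(epoch, fs, (1.0, 40.0))
    return band_power(epoch, fs, ALPHA) / total if total > 0 else 0.0

# usage on one simulated 4-channel, 2-second epoch
rng = np.random.default_rng(0)
print(attention_index(rng.standard_normal((4, 2 * FS))))
```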
The working memory (WM) load, or workload, is another cognitive factor influencing learning [START_REF] Baddeley | Working memory. Psychology of learning and motivation[END_REF][START_REF] Mayer | Multimedia learning (2nd)[END_REF]. It is related to the difficulty of the task and to the quantity of information given to the user, and depends on the user's available resources.
An optimal amount of load is reached when the user is challenged enough not to get bored, but not too much compared with his/her abilities [START_REF] Gerjets | Cognitive state monitoring and the design of adaptive instruction in digital environments: lessons learned from cognitive workload assessment using a passive braincomputer interface approach[END_REF]. Behavioral measures of workload include accuracy and response time, while physiological measures comprise eye movements [START_REF] Sr | Analytical techniques of pilot scanning behavior and their application[END_REF], eye blinks [START_REF] Ahlstrom | Using eye movement activity as a correlate of cognitive workload[END_REF], pupil dilation [START_REF] De Greef | Eye movement as indicators of mental workload to trigger adaptive automation[END_REF] or galvanic skin response [START_REF] Verwey | Detecting short periods of elevated workload: A comparison of nine workload assessment techniques[END_REF]. However, as with most behavioral measures, these measures change due to WM load, but not only, making them unreliable for measuring WM load specifically. EEG is a more reliable measure of workload [START_REF] Wobrock | Continuous Mental Effort Evaluation during 3D Object Manipulation Tasks based on Brain and Physiological Signals[END_REF]. Gevins et al. [START_REF] Gevins | Monitoring working memory load during computer-based tasks with EEG pattern recognition methods[END_REF] showed that WM load could be monitored using the theta (4 to 7 Hz), alpha (8 to 12 Hz) and beta (13 to 30 Hz) bands from EEG data. A low amount of workload could be discriminated from a high amount in 27s-long epochs of EEG with 98% accuracy using the Joseph-Viglione neural network algorithm [START_REF] Joseph | Contributions to perceptron theory[END_REF][START_REF] Viglione | Applications of pattern recognition technology[END_REF]. Interestingly, they also obtained significant classification accuracies when training their network using data from another day (i.e., 95%), another person (i.e., 83%) and another task (i.e., 94%) than the data used for classification. Several experiments have since reported online (i.e., real-time) classification rates ranging from 70 to 99% to distinguish between two types of workload [START_REF] Blankertz | The Berlin braincomputer interface: non-medical uses of BCI technology[END_REF][START_REF] Grimes | Feasibility and pragmatics of classifying working memory load with an electroencephalograph[END_REF]. Results depend greatly on the length of the signal epoch used: the longer the epoch, the better the performance [START_REF] Grimes | Feasibility and pragmatics of classifying working memory load with an electroencephalograph[END_REF][START_REF] Mühl | EEG-based Workload Estimation Across Affective Contexts[END_REF]. Monitoring working memory in BCI applications is all the more important because BCI illiteracy is associated with high theta waves [START_REF] Ahn | High theta and low alpha powers may be indicative of BCI-illiteracy in motor imagery[END_REF], which are an indicator of cognitive overload [START_REF] Yamamoto | Topographic EEG study of visual display terminal (VDT) performance with special reference to frontal midline theta waves[END_REF]. Finally, another brain imaging modality can be used to estimate mental workload: functional Near Infrared Spectroscopy (fNIRS).
Indeed, it was shown that hemodynamic activity in the prefrontal cortex, as measured using fNIRS, could be used to discriminate various workload levels [START_REF] Herff | Mental workload during N-back taskquantified in the prefrontal cortex using fNIRS[END_REF][START_REF] Peck | Using fNIRS to measure mental workload in the real world[END_REF][START_REF] Durantin | Using near infrared spectroscopy and heart rate variability to detect mental overload[END_REF].
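To make the workload-classification pipeline described above more concrete, here is a hedged sketch that discriminates two simulated workload levels from theta/alpha/beta band powers with an LDA classifier. All data and dimensions are simulated placeholders, and the feature choice is only one of many used in the studies cited above.

```python
# Hedged sketch: two-class workload discrimination from band-power features (simulated data).
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250
BANDS = [(4, 7), (8, 12), (13, 30)]          # theta, alpha, beta (Hz)

def band_powers(epoch, fs=FS):
    """One epoch (channels x samples) -> mean power per band, averaged over channels."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    return [psd[:, (freqs >= lo) & (freqs <= hi)].mean() for lo, hi in BANDS]

# simulated dataset: 100 two-second, 8-channel epochs with binary workload labels
rng = np.random.default_rng(1)
epochs = rng.standard_normal((100, 8, 2 * FS))
labels = rng.integers(0, 2, size=100)

X = np.array([band_powers(ep) for ep in epochs])
print(cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean())  # ~chance on random data
```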
Learners' state assessment has mostly focused on cognitive components, such as the ones presented above, because learning has often been considered as information processing. However, affects also play a central role in learning [START_REF] Philippot | Emotion and memory[END_REF]. For example, Isen [START_REF] Isen | Positive Affect and Decision Making, Handbook of emotions[END_REF] has shown that positive affective states facilitate problem solving. Emotions are often inferred using contextual data, performances and models describing the succession of affective states the learner goes through while learning. The model of Kort et al. [START_REF] Kort | An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning companion[END_REF] is an example of such a model. Physiological signals can also be used, such as the electromyogram, electrocardiogram, skin conductance and blood volume pressure [START_REF] Picard | Affective wearables[END_REF][START_REF] Picard | Toward computers that recognize and respond to user emotion[END_REF]. Arroyo et al. [START_REF] Arroyo | Emotion Sensors Go To School[END_REF] developed a system composed of four different types of physiological sensors. Their results show that the facial recognition system was the most efficient and could predict more than 60% of the variance of the four emotional states. Several classification methods have been tried to classify EEG data and deduce the emotional state of the subject. Methods such as the multilayer perceptron [START_REF] Lin | Multilayer perceptron for EEG signal classification during listening to emotional music[END_REF], K Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Fuzzy K-Means (FKM) or Fuzzy C-Means (FCM) were explored [START_REF] Murugappan | Timefrequency analysis of EEG signals for human emotion detection[END_REF][START_REF] Murugappan | Classification of human emotion from EEG using discrete wavelet transform[END_REF], using alpha, beta and gamma frequency band powers as input. Results are promising and vary around 75% accuracy for two to five types of emotions. Note, however, that the use of gamma band power features probably means that the classifiers were also using EMG activity due to different facial expressions. For emotion monitoring as well, fNIRS can prove useful. For instance, in [START_REF] Heger | Continuous affective states recognition using functional near infrared spectroscopy[END_REF], fNIRS was shown to be able to distinguish two classes of affective stimuli with different valence levels, with average classification accuracies around 65%. Recognizing emotion represents a challenge because most studies rely on the assumption that people are accurate in recognizing their emotional state and that the emotional cues used have a similar and the intended effect on subjects. Moreover, many brain structures involved in emotion, e.g., the amygdala, lie deep in the brain, and as such the activity from these areas is often very weak or even invisible in EEG and fNIRS.
Motivation is interrelated with emotions [START_REF] Harter | A new self-report scale of intrinsic versus extrinsic orientation in the classroom: Motivational and informational components[END_REF][START_REF] Stipek | Motivation to learn: From theory to practice[END_REF]. It is often approximated using performance [START_REF] Blankertz | The Berlin braincomputer interface: non-medical uses of BCI technology[END_REF]. Several EEG characteristics are modulated by the level of motivation. For example, this is the case for the delta rhythm (0.5 to 4 Hz), which could originate from the brain reward system [START_REF] Knyazev | EEG delta oscillations as a correlate of basic homeostatic and motivational processes[END_REF]. Motivation is also known to modulate the amplitude of the P300 event-related potential (ERP) and therefore to increase performance with ERP-based BCIs [START_REF] Kleih | Motivation modulates the P300 amplitude during braincomputer interface use[END_REF]. Both motivation and emotions play a major role in biofeedback learning [START_REF] Miller | Some directions for clinical and experimental research on biofeedback. Clinical biofeedback: Efficacy and mechanisms[END_REF][START_REF] Yates | Biofeedback and the modification of behavior[END_REF][START_REF] Kübler | Braincomputer communication: Unlocking the locked in[END_REF][START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF][START_REF] Hernandez | Low motivational incongruence predicts successful EEG resting-state neurofeedback performance in healthy adults[END_REF] and in BCI performance [START_REF] Hammer | Psychological predictors of SMR-BCI performance[END_REF][START_REF] Neumann | Predictors of successful self control during braincomputer communication[END_REF].
Cognitive, affective and motivational states have a great impact on learning outcomes, and machine learning plays a key role in monitoring them. However, challenges remain to be overcome, such as detecting and removing artifacts in real time. For example, facial expressions often occur due to changes in mental states and may create artifacts polluting EEG data, whose real-time removal still represents an issue. Limitations also arise from the number of different states we are able to differentiate, since the quantity of data needed to train the classifier increases with the number of classes to differentiate. Future studies should also focus on the reliability and stability of the classification within and across individuals [START_REF] Christensen | The effects of day-today variability of physiological data on operator functional state classification[END_REF]. Indeed, classification accuracy, particularly online, still needs to be improved. Furthermore, calibration of classifiers is often needed for each new subject or session, which is time-consuming and might impede the use of such technology on a larger scale. Finally, while several emotional states can be recognized from the user's behavior, there is usually very limited overt behavior, e.g., movements or speech, during BCI use. Thus, future studies should try to differentiate more diverse emotional states, e.g., frustration, directly from EEG and physiological data.
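One of the open issues raised above, the stability of such classifiers across individuals, can be quantified with a leave-one-subject-out evaluation, as in the hedged sketch below. The features and labels are random placeholders; only the evaluation scheme is the point of the example.

```python
# Hedged sketch: leave-one-subject-out evaluation to assess cross-individual stability.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_subjects, trials_per_subject, n_features = 10, 40, 6
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))   # placeholder features
y = rng.integers(0, 2, size=len(X))                                      # mental-state labels
groups = np.repeat(np.arange(n_subjects), trials_per_subject)            # subject identifiers

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(scores.mean(), scores.std())   # mean/spread of accuracy across held-out subjects
```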
Quantifying users' skills
As mentioned above, part of the user modeling consists in measuring the users' skills at BCI control. Performance measurement in BCI is an active research topic, and various metrics have been proposed [START_REF] Thompson | Performance measurement for brain-computer or brain-machine interfaces: a tutorial[END_REF][START_REF] Hill | A general method for assessing braincomputer interface performance and its limitations[END_REF]. However, so far, the performance considered and measured was that of the whole BCI system. Therefore, such performance metrics reflected the combined performances of the signal processing pipeline, the sensors, the user, the BCI interface and application, etc. The standard performance metrics used cannot quantify specifically and uniquely the BCI users' skills, i.e., how well the user can self-modulate their brain activity to control the BCI. This would be necessary to estimate how well the user is doing and what their strengths and weaknesses are, in order to provide optimal instructions, feedback, application interface and training exercises.
We recently proposed some new metrics to go in that direction, i.e., to estimate specifically users' skills at BCI control, independently of a given classifier [START_REF] Lotte | Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics[END_REF]. In particular, we proposed to quantify the users' skills at BCI control by estimating the distinctiveness of their EEG patterns between commands, and their stability. We notably used Riemannian geometry to quantify how far apart from each other the EEG patterns of each command are, as represented using EEG spatial covariance matrices, and how variable over trials these patterns are. We showed that such metrics could reveal clear user learning effects, i.e., improvements of the metrics over training runs, whereas classical metrics such as online classification accuracy often failed to do so [START_REF] Lotte | Online classification accuracy is a poor metric to study mental imagery-based BCI user learning: an experimental demonstration and new metrics[END_REF].
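The sketch below illustrates the general idea, not the exact metric of the cited work: the distinctiveness of two mental commands is approximated as the affine-invariant Riemannian distance between their mean spatial covariance matrices, normalized by the within-class dispersion. The arithmetic mean used here is a cheap stand-in for the true Riemannian mean, and the trial dimensions are assumptions.

```python
# Simplified sketch of a Riemannian class-distinctiveness measure (not the exact published metric).
import numpy as np
from scipy.linalg import eigvalsh

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between two SPD matrices."""
    ev = eigvalsh(A, B)                       # generalized eigenvalues of (A, B)
    return np.sqrt(np.sum(np.log(ev) ** 2))

def spatial_cov(trial):
    """Regularized spatial covariance of one trial (channels x samples)."""
    C = np.cov(trial)
    return C + 1e-6 * np.eye(C.shape[0])

def distinctiveness(trials_a, trials_b):
    """Between-class distance divided by mean within-class dispersion (higher = more distinct)."""
    covs_a = [spatial_cov(t) for t in trials_a]
    covs_b = [spatial_cov(t) for t in trials_b]
    mean_a = np.mean(covs_a, axis=0)          # arithmetic mean as a stand-in
    mean_b = np.mean(covs_b, axis=0)          # for the Riemannian mean
    between = riemann_dist(mean_a, mean_b)
    within = np.mean([riemann_dist(c, mean_a) for c in covs_a] +
                     [riemann_dist(c, mean_b) for c in covs_b])
    return between / within if within > 0 else np.inf

# usage with simulated trials of shape (n_trials, n_channels, n_samples)
rng = np.random.default_rng(3)
left, right = rng.standard_normal((20, 8, 500)), 1.5 * rng.standard_normal((20, 8, 500))
print(distinctiveness(left, right))
```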
This work thus stressed the need for new and dedicated measures of user skills and learning. The metrics we proposed are, however, only a first attempt at doing so, and more refined and specific metrics are still needed. For instance, our metrics can mostly quantify control over spatial EEG activity (EEG being represented using spatial covariance matrices). We also need metrics to quantify how much control the user has over their spectral EEG activity, as well as over their EEG temporal dynamics. Notably, it would seem useful to be able to quantify how fast, how long and how precisely a user can self-modulate their EEG activity, i.e., produce a specific EEG pattern at a given time and for a given duration. Moreover, such new metrics should be able to estimate successful voluntary self-regulation of EEG signals amidst noise and natural EEG variabilities, and independently of a given EEG classifier. We also need metrics that are specific to a given mental task, to quantify how well the user can master this mental command, but also a single holistic measure summarizing their control abilities over multiple mental tasks (i.e., multiclass metrics), to easily compare users and give them adapted training and BCI systems. The signal processing and machine learning community should thus address all these open and difficult research problems by developing new tools to quantify the multiple aspects of BCI control skills.
Creating a dynamic model of the users' states and skills
A conceptual model of Mental Imagery BCI performance
In order to reach a better understanding of the user-training process, a model of the factors impacting Mental Imagery (MI)-BCI skill acquisition is required. In other words, we need to understand which user traits and states impact BCI performance, how these factors interact and how to influence them through the experimental design or specific cognitive training procedures. We call such a model a Cognitive Model. Busemeyer and Diederich describe cognitive models as models which aim to scientifically explain one or more cognitive processes or how these processes interact [START_REF] Busemeyer | Cognitive modeling[END_REF]. Three main features characterize cognitive models: (1) their goal: they aim at explaining cognitive processes scientifically, (2) their format: they are described in a formal language, (3) their background: they are derived from basic principles of cognition [START_REF] Busemeyer | Cognitive modeling[END_REF]. Cognitive models guarantee the production of logically valid predictions, they allow precise quantitative predictions to be made and they enable generalization [START_REF] Busemeyer | Cognitive modeling[END_REF].
In the context of BCIs, developing a cognitive model is a huge challenge due to the complexity and imperfection of BCI systems. Indeed, BCIs suffer from many limitations, independent from human learning aspects, that could explain users' modest performance. For instance, the sensors are often very sensitive to noise and do not enable the recording of high-quality brain signals, while the signal processing algorithms sometimes fail to recognize the encoded mental command. But it is also a huge challenge due to the lack of literature on the topic and to the complexity and cost associated with the BCI experiments that would be necessary to gather the quantity of experimental data required to implement a complete and precise model [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF].
Still, a cognitive model would enable us to reach a better understanding of the MI-BCI user-training process, and consequently to design adapted and adaptive training protocols. Additionally, it would enable BCI scientists to guide neurophysiological analyses by targeting the cognitive and neurophysiological processes involved in the task. Finally, it would make it possible to design classifiers robust to variabilities, i.e., able to adapt to the neurophysiological correlates of the factors included in the model. To summarize, building such a model, by gathering the research done by the whole BCI community, could potentially lead to substantial improvements in MI-BCI reliability and acceptability.
Different steps are required to build a cognitive model [START_REF] Busemeyer | Cognitive modeling[END_REF]. First, it requires a formal description of the cognitive process(es)/factors to be described, based on conceptual theories. Next, since the conceptual theories are most likely incomplete, ad hoc assumptions should be made to complete the formal description of the targeted factors. Third, the parameters of the model, e.g., the probabilities associated with each factor included in the model, should be determined. Then, the predictions made by the model should be compared to empirical data. Finally, this process should be iterated to constrain and improve the relevance of the model.
By gathering the results of our experimental studies and of a review of the literature, we proposed a first formal description of the factors influencing MI-BCI performance [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF]. We grouped these factors into 3 categories [START_REF] Jeunet | Advances in user-training for mental-imagerybased BCI control: Psychological and cognitive factors and their neural correlates[END_REF]. The first category is "task-specific", i.e., it includes factors related to the BCI paradigm considered. Here, as we focused on Mental-Imagery based BCIs, this category gathers factors related to Spatial Abilities (SA), i.e., the ability to produce, transform and manipulate mental images [START_REF] Poltrock | Individual differences in visual imagery and spatial ability[END_REF]. Both the second and third categories include "task-unspecific" factors, or, in other words, factors that could potentially impact performance whatever the paradigm considered. More precisely, the second category includes motivational and cognitive factors, such as attention (state and trait) or engagement. These factors are likely to be modulated by the factors of the third category, which are related to technology acceptance, i.e., to the way users perceive the BCI system. This last category includes different states such as the level of anxiety, self-efficacy, mastery confidence, perceived difficulty or the sense of agency.
The challenge is thus to modulate these factors to optimize the user's states and traits and thus increase the probability of a good BCI performance and/or of efficient learning. In order to modulate these factors, which can be either states (e.g., motivation) or malleable traits (e.g., spatial abilities), one can act on specific effectors: design artefacts or cognitive activities/training.
The effectors we will introduce hereafter are mainly based on theoretical hypotheses. Their impact on the users' states, traits and performance is yet to be quantified. Thus, although these links make sense from a theoretical point of view, they should still be considered with caution. We determined three types of links between the factors and effectors. "Direct influence on user state": these effectors are suggested to influence the user's state and, consequently, are likely to have a direct impact on performance. For instance, proposing a positively biased feedback -making users believe they are doing better than they really are -has been suggested to improve (novice) users' sense of agency (i.e., the feeling of being in control, see Section 1.3.3.1 for more details) [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF]. "Help for users with a specific profile": these effectors could help users who have a specific profile and consequently improve their performance. For instance, proposing emotional support has been suggested to benefit highly tense/anxious users [START_REF] N'kambou | Advances in intelligent tutoring systems[END_REF] (see Section 1.3.3.2 for more details). "Improved abilities": this link connects effectors of the cognitive activities/exercises type to abilities (malleable traits) that could be improved thanks to these activities. For instance, attentional neurofeedback has been suggested to improve attentional abilities [START_REF] Zander | Towards neurofeedback for improving visual attention[END_REF]. For more details, see [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF].
This model has been built based on the literature related to mental-imagery based BCIs (and mainly to motor-imagery based BCIs). It would be interesting to investigate the relevance of this model for other BCI paradigms, such as BCIs based on Steady-State Visual Evoked Potentials (SSVEP) or BCIs based on the P300. It is noteworthy that, for instance, motivation has already been shown to modulate P300 amplitude and performance [START_REF] Kleih | Motivation modulates the P300 amplitude during braincomputer interface use[END_REF]. The effect of mastery confidence (which is included in the "technology-acceptance factors" in our model) on P300-based BCI performance has also been investigated [START_REF] Kleih | Does mastery confidence influence P300 based braincomputer interface (BCI) performance? In: Systems, Man, and Cybernetics[END_REF]. The results of this study were not conclusive, which led the authors to hypothesize that either this variable had no effect on performance or that they may not have succeeded in manipulating participants' mastery confidence. Further investigation is now required. Besides, the same authors proposed a model of BCI performance [START_REF] Kleih | Psychological factors influencing brain-computer interface (BCI) performance[END_REF]. This model gathers physiological, anatomical and psychological factors. Once again, it is interesting to see that, while organized differently, similar factors were included in the model. To summarize, it would be relevant to further investigate the factors influencing performance in different BCI paradigms, and then investigate to which extent some of these factors are common to all paradigms (i.e., task-unspecific), while determining which factors are specific to the paradigm/task. Then, the next step would be to propose a full and adaptive model of BCI performance.

Now, from a signal processing and machine learning point of view, many challenges remain. We should aim at determining physiological or neurophysiological correlates of the factors included in this model in order to be able to estimate, in real time, the state of the BCI user. Therefore, the signal processing community should design tools to recognize these neural correlates in real time, from noisy signals. Besides, the model itself requires machine learning expertise to be implemented, as detailed in the next Section, i.e., Section 1.2.3.2. Then, one of the main challenges will be to determine, for each user, based on the recorded signals and performance, when the training procedure should be adapted in order to optimize the performance and learning process. Machine learning techniques could be used to determine, based on a pool of previous data (e.g., using case-based reasoning) and on theoretical knowledge (e.g., using rule-based reasoning), when to make the training procedure evolve. In the field of Intelligent Tutoring Systems (ITS), where the objective is to adapt the training protocol dynamically to the state (e.g., level of skills) of the learner, a popular approach is to use multi-arm bandit algorithms [START_REF] Clement | Multi-arm bandits for intelligent tutoring systems[END_REF]. Such an approach could be adapted for BCI training.
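As a purely illustrative sketch of how such a bandit could drive BCI training, the code below uses Bernoulli Thompson sampling to choose the next mental-imagery task, rewarding a task whenever the user's trial performance exceeds some target. The task names, reward definition and simulated success rates are assumptions, not a validated training protocol.

```python
# Hedged sketch: Thompson-sampling bandit selecting the next training task (simulated user).
import numpy as np

class TaskBandit:
    def __init__(self, tasks, seed=0):
        self.tasks = tasks
        self.successes = np.ones(len(tasks))   # Beta(1, 1) priors on per-task success rate
        self.failures = np.ones(len(tasks))
        self.rng = np.random.default_rng(seed)

    def select(self):
        """Sample a plausible success rate per task and pick the best (exploration/exploitation)."""
        return int(np.argmax(self.rng.beta(self.successes, self.failures)))

    def update(self, task_idx, reward):
        """reward = 1 if the user's performance on this task exceeded the target, else 0."""
        self.successes[task_idx] += reward
        self.failures[task_idx] += 1 - reward

# usage: simulate a user who succeeds more often with tongue MI than with hand MI
bandit = TaskBandit(["left hand MI", "right hand MI", "tongue MI"])
true_rates = [0.45, 0.50, 0.70]                # hypothetical per-task success probabilities
rng = np.random.default_rng(1)
for _ in range(200):
    t = bandit.select()
    bandit.update(t, int(rng.random() < true_rates[t]))
print(bandit.tasks[bandit.select()])           # most often "tongue MI"
```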
The evolution of the training procedure could be either continuous or divided into different steps, in which case it would be necessary to determine relevant thresholds on the users' state values, from which the training procedures should evolve, e.g., to become more complex, to change the context and propose a variation of the training tasks, to go back to a previous step that may not have been assimilated correctly, etc.
A computational model for BCI adaptation
As discussed in previous sections, it is necessary to identify the psychological factors, user skills and traits which will determine a successful BCI performance. Co-adaptive BCIs, i.e., dynamically adaptive systems which adjust to signal variabilities during a BCI task and in such a way adapt to the user, while the user adapts to the machine via learning, have shown tremendous improvements in system performance ([START_REF] Schwarz | A co-adaptive sensory motor rhythms Brain-Computer Interface based on common spatial patterns and Random Forest[END_REF] for MI; [START_REF] Thomas | CoAdapt P300 speller: optimized flashing sequences and online learning[END_REF] for P300). However, these techniques dwell mostly within the signal variabilities, by only adjusting to them, without acknowledging and possibly influencing the causes of such variabilities: human factors. These factors, once acknowledged, should be structured in a conceptual framework as in [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF] in order to be properly influenced or to be adapted upon. In this framework for adaptive BCI methods, the human psychological factors are grouped by their degree of stability or changeability in time, e.g., skills could take multiple sessions (months) to change, while attention drops operate within short time periods. All these changes might have certain EEG signatures; thus, considering the time necessary for these factors to change, the machine could be notified to adapt accordingly, and could predict and prevent negative behavior. To influence user behavior, the framework contains a BCI task model, arranged within the same time scales as the user's factors. Consequently, if the user does not reach a certain minimal threshold of performance for one BCI task, the system would switch to another, e.g., if kinesthetic imagination of hand movements works worse than tongue imagination, then it would switch to tongue MI. Additionally, if the user shows MI illiteracy after a session, then the system would switch to other paradigms, and so on. Hence, the task model represents the possible BCI tasks, managed by the exploration/exploitation ratio, to adapt to the users and optimally influence them within the corresponding time scales. Once these factors are identified and modeled theoretically, we need to search for computational models generic enough to encompass such complex and unstable behavior, and to enable us to design adaptive BCIs whose signal processing, training tasks and feedback are dynamically adapted to these factors.
Several behavioral sciences and neuroscience theories strive to explain the brain's cognitive abilities based on statistical principles. They assume that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using Bayesian probability methods. Kenneth Craik suggested in 1943 that the mind constructs "small-scale models" of reality -later named Mental Models [START_REF] Pn | Mental models: Towards a cognitive science of language, inference, and consciousness[END_REF] -that it uses to anticipate events. Using a similar principle, Active Inference, a generic framework based on Bayesian inference, models any adaptive system, such as the brain, in a perception/action context [START_REF] Friston | The anatomy of choice: active inference and agency[END_REF]. Active Inference describes the world as being in a true state which can never be completely revealed, as the only information the adaptive system has are observations obtained through sensory input. The true state of the world is in fact hidden to the observers, and as such is set in their internal, generative model of the world, as hidden states. The true state of the world is inferred through sensory input or observations and is updated in a generative model, i.e., an internal representation of the world containing empirical priors and prior beliefs. The empirical priors are the hidden states and the possible actions to be made when an observation occurs. Event anticipation and action selection are defined with the free energy minimization principle, or minimization of surprise, and a utility function, i.e., a measure of preferences over some set of outcomes. In other words, a set of possible actions which were previously generated in the internal model as empirical priors are favored in order to get a desired outcome.

For instance, if a stranger -A -asks a person -B -to lend him a phone in the street, the outcome of this event, or the decision of B, would depend on his model of the world, or empirical priors that relate to such an event. B's decision will also depend on his prior beliefs; for instance, a religious man would have principles such that one should always help those in need. B can never reveal the absolute truth about A's intentions. So, if B's experience, i.e., his empirical priors, was negative, and no prior beliefs or higher values govern his actions, he will be likely to refuse. However, if it was positive, B will be likely to accept to help A. Additionally, B's reaction time will depend on a specific prior which encodes the exploration/exploitation ratio. Hence, B anticipates an outcome, and acts in such a way and in a certain time to reach that event imagined in the future. He inferred the true state -the stranger's intentions -with his empirical priors and he acted to achieve a desired outcome or comply with prior beliefs. The promotion of certain outcomes is encoded in the utility function and set as prior beliefs. The free energy minimization principle relies on minimizing the Kullback-Leibler divergence, or relative entropy, between two probability distributions: the current state and the desired state. It can be thought of as a prediction error that reports the difference between what can be attained from the current state and the goals encoded by prior beliefs. So, by favoring a certain action, one can reduce the prediction error, and in this way the action becomes the cause of future sensory input.
This computational framework enables us to model the causes of sensory input in order to better anticipate and favor certain outcomes, which is indeed what we are looking for in BCI systems.
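The toy sketch below captures only the utility/KL part of this idea, not a full Active Inference implementation: an agent holds beliefs over hidden states, predicts the outcome distribution of each available action, and selects the action whose predicted outcomes are closest, in Kullback-Leibler divergence, to its prior preferences. All matrices and dimensions are made-up assumptions.

```python
# Minimal sketch: KL-to-preferences action selection (only one ingredient of Active Inference).
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

beliefs = np.array([0.6, 0.4])                  # current beliefs over 2 hidden states
likelihood = {                                  # per action: state -> outcome distribution
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.5, 0.5]]),
}
preferences = np.array([0.95, 0.05])            # prior beliefs over desired outcomes (utility)

def select_action(beliefs):
    scores = {}
    for action, A in likelihood.items():
        predicted = beliefs @ A                 # predicted outcome distribution for this action
        scores[action] = kl(predicted, preferences)   # "surprise" w.r.t. preferred outcomes
    return min(scores, key=scores.get), scores

print(select_action(beliefs))                   # action 0 better matches the preferences
```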
A P300-speller is a communication BCI device which relies on a neurophysiological phenomenon, called the oddball effect, that triggers a peak in the EEG signal around 300 ms after a rare and unexpected event: a P300. This is why this type of BCI is also called a reactive BCI, as the machine elicits and detects event-related potentials (ERPs), i.e., the brain's reactions to stimuli. In the case of the P300-speller, a set of letters is randomly flashed and the users need to focus their visual attention on the letter they wish to spell. Once the target letter is flashed (as an unexpected and rare event), the brain reacts, enabling the machine to detect the ERP and spell the desired letter.
Bayesian inference has been successfully used, for instance, in designing adaptive P300-spellers [START_REF] Mattout | Improving BCI performance through co-adaptation: applications to the P300-speller[END_REF]. In this example, the outcome of a probabilistic classifier (a mixture of two multivariate Gaussians) is updated online. In such a way, the machine spells a letter once it attains a certain confidence level, i.e., the decision speed or reaction time depends on the reliability of the accumulated evidence. This permits the machine to stop at an optimal moment, while maximizing both speed and accuracy. However, as we mentioned earlier, this example is user-dependent and adaptive, but does not go further by considering the cause of such EEG variability in order to reduce or anticipate it. To achieve this, we could endow the machine with a certain intelligence, using Active Inference [START_REF] Mladenović | Endowing the Machine with Active Inference: A Generic Framework to Implement Adaptive BCI[END_REF]. As we explained, Active Inference is used to model cognitive behavior and decision making processes. However, in our case, we wish to equip the machine with such generative models, in order to achieve a full symbiotic user-machine co-adaptation. The true states, in this case, belong to the user's characteristics and intentions, and are in fact hidden to the machine. Concretely, the hidden states are the letters or words the user intends to spell with the BCI. In the beginning all the letters have an equal probability of being spelled, but the more the machine flashes letters, the more it accumulates empirical priors and becomes confident about the target letter. In this way, the user's intentions are represented as empirical priors (hidden states) which the machine has to update through the accumulation of observations: the classifier output. Furthermore, the machine will act (flash) in such a way as to achieve the desired outcome: to reveal the target letter in minimal time. Hence, by using these principles, we could not only achieve optimal stopping [START_REF] Mattout | Improving BCI performance through co-adaptation: applications to the P300-speller[END_REF] but also optimal flashing [START_REF] Mladenović | Endowing the Machine with Active Inference: A Generic Framework to Implement Adaptive BCI[END_REF], i.e., flashing the group of letters that maximizes the P300 effect. The flashing would be in an intelligent order yet appear to the user to be in a random order, so that the oddball effect stays uncompromised.
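The following hedged sketch illustrates the evidence-accumulation and optimal-stopping idea on a toy alphabet: after each flash, the posterior over candidate letters is updated from a probabilistic classifier output, and spelling stops once one letter exceeds a confidence threshold. The alphabet size, likelihood values and threshold are assumptions, and the flashing order here is purely random rather than optimized.

```python
# Hedged sketch: Bayesian evidence accumulation with optimal stopping for a toy P300 speller.
import numpy as np

LETTERS = list("ABCDEF")                         # toy alphabet
posterior = np.full(len(LETTERS), 1 / len(LETTERS))
THRESHOLD = 0.9

def update(posterior, flashed_idx, p_target):
    """Bayesian update after one flash; p_target = classifier probability that a P300 occurred."""
    likelihood = np.where(np.isin(np.arange(len(posterior)), flashed_idx),
                          p_target, 1 - p_target)
    post = posterior * likelihood
    return post / post.sum()

# usage: the user wants "C" (index 2); flashes containing C yield high P300 probabilities
rng = np.random.default_rng(4)
for step in range(50):
    flashed = rng.choice(len(LETTERS), size=2, replace=False)
    p_true = 0.75 if 2 in flashed else 0.25      # simulated classifier output
    p = float(np.clip(p_true + 0.05 * rng.standard_normal(), 0.01, 0.99))
    posterior = update(posterior, flashed, p)
    if posterior.max() > THRESHOLD:
        print(f"spelled {LETTERS[posterior.argmax()]} after {step + 1} flashes")
        break
```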
The criterion of optimization, i.e., whether one would favor the user's subjective experience over the system performance, depends on the purpose of the BCI system [START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. For example, for entertainment or rehabilitation purposes, it is important to motivate the user to keep playing or keep making an effort. This can be achieved by using positively biased feedback. On the other hand, when controlling a wheelchair using a BCI, the system's accuracy is of essential importance. Active Inference could provide such adaptive power, setting the BCI goals within an intelligent artificial agent which would encode the utility function and would manipulate the exploration/exploitation factor, see Fig. 1.
The remaining challenges comprise using Active Inference to adapt the tasks of other BCI paradigms such as Motor Imagery. The ultimate goal would be to use Active Inference to create a fully adaptive and user-customizable BCI. In this case, the hidden states which the machine needs to infer and learn would be more than trial-wise user intentions, but also the user's states, skills and traits (measured with a passive BCI for instance) and provided to the machine as additional (neuro)physiological observations. The convenience of Active Inference is that it is applicable to any adaptive system. So, we can use any information as input (higher level user observations) and tune the parameters (priors) to each user, in order to provide them with optimal tasks.

Fig. 1 (caption, beginning truncated): [...]tive BCI. The machine "observes" one or several (neuro)physiological measurements which serve to infer the user's immediate intentions, or states, skills and traits over longer periods of time. Depending on the purpose of the BCI, its paradigm and exercise, and considering the information the machine has learned about the user, it will provide the optimal action (feedback or instructions in different modalities or difficulty). An intelligent agent will encode the priors (utility and exploration/exploitation ratio) that are regulated for each user and specific context, favoring the optimal paradigm, exercise and action within specific time-scales of adaptation.
The optimal tasks would be governed by the BCI purpose (control, communication, neuro-rehabilitation, etc.), paradigm (P300, MI, SSEP) and exercise (MI of hands or feet, counting the number of flashes, etc.).
Regarding signal processing, the adaptive processes which arise, such as adapting spatial or temporal filters, should not only adjust to the signal variabilities, but also be guided by the context and purpose of the BCI. This way, the signal processing techniques could extend their adaptive power and be more applicable and flexible across contexts and users. Furthermore, the signal processing pipeline would need to expand and include other possible (neuro)physiological measurements in order to measure high-level user factors. The machine learning techniques will have to accommodate more dimensions: not only the features extracted from EEG, but also the variable states of the user, should be taken into account. Active Inference would fit this landscape and add such a layer through an internal model of the various causes of signal variability and through its single cost function: free energy.
Improving BCI user training
Machine learning and signal processing tools can also be used to deepen our understanding of BCI user learning as well as to improve this learning. Notably, such tools can be used to design features and classifiers that are not only good at discriminating the different BCI commands, but also good at ensuring that the user can understand and learn from the feedback resulting from these features and classifiers. This feedback can also be further improved by using signal processing tools to preprocess it, in order to design an optimal display for this feedback, maximizing learning. Finally, rather than designing adaptive BCI algorithms solely to increase BCI command decoding accuracy, it seems also promising to adapt BCI algorithms in a way and at a rate that favor user learning. Altogether, current signal processing and machine learning algorithms should not be designed solely for the machine, but also with the user in mind, to ensure that the resulting feedback and training enable the user to learn efficiently. We detail these aspects below.
Designing features and classifiers that the user can understand and learn from
So far, the features, e.g., the power in some frequency bands and channels, and the classifiers, e.g., LDA or Support Vector Machine (SVM), used to design EEG-based BCIs are optimized solely on the basis of their discriminative power [START_REF] Lotte | A Review of classification algorithms for EEG-based Brain-Computer Interfaces[END_REF][START_REF] Blankertz | Optimizing spatial filters for robust EEG single-trial analysis[END_REF][START_REF]A Tutorial on EEG Signal-processing Techniques for Mental-state Recognition in Brain-Computer Interfaces[END_REF]. In other words, features and classifiers are built solely to maximize the separation between the classes/mental imagery tasks used to control the BCI, e.g., left versus right hand imagined movement. Thus, a purely machine-oriented criterion -namely data separation -is used to optimize features and classifiers, without any consideration for whether such features and classifiers lead to a feedback that 1) is understandable by the user and 2) can enable the user to learn to self-regulate those features. In the algorithms used so far, while the features are by design as separable as possible, there is no guarantee that they can become more separable with training. Actually, it is theoretically possible that some features with an initially lower discriminative power are easier to learn to self-regulate. As such, while on the short term selecting features that are initially as discriminant as possible makes sense, on the longer term, if the user can learn EEG self-regulation successfully, then it may make more sense to select features that will lead to a possibly even better discrimination after user learning. Similarly, while the classifier output, e.g., the distance between the input feature vector and the LDA/SVM discriminant hyperplane [START_REF] Pfurtscheller | Motor Imagery and Direct Brain-Computer Communication[END_REF], is typically used as feedback to the user, it is also unknown whether such feedback signal variations can be understood by or make sense for the user. Maybe a different feedback signal, possibly less discriminant, would be easier for the user to understand and learn to control.

Interestingly enough, there are very relevant research results from neuroscience, psychology and human-computer interaction that suggest that there are some constraints and principles that need to be respected so as to favor user learning of self-regulation, or to enable users to understand some visualization and feedback as well as possible. In particular, it was shown with motor-related invasive BCIs on monkeys that using features that lie in the natural subspace of their motor-related activity, i.e., in their natural motor repertoire, leads to much more efficient learning of BCI control than using features that lie outside this subspace/natural repertoire [START_REF] Sadtler | Neural constraints on learning[END_REF][START_REF] Hwang | Volitional control of neural activity relies on the natural motor repertoire[END_REF]. This suggests that not all features have the same user-learning potential, and thus that features should be designed with such considerations in mind. Similarly, regarding feedback and visualization, humans perceive variations of a visual stimulus with more or less ease, depending on the spatial and temporal characteristics of these variations, e.g., how fast the stimulus changes, and what the amplitude of this change is, see, e.g., [START_REF] Ware | Information visualization: perception for design[END_REF] for an overview.
For instance, it is recommended to provide visualizations that are consistent over time, i.e., whose meaning should be interpreted in the same way from one trial to the next, and that vary smoothly over time [START_REF] Ware | Information visualization: perception for design[END_REF]. This also suggests that the feedback should ideally be designed while taking such principles into consideration. There are also many other human learning principles that are in general not respected by current BCI designs, see notably [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF] as well as Section 1.3.3. There is thus a lot of room for improvement.
The learning and feedback principles mentioned above could be used as constraints in the objective functions of the machine learning and signal processing algorithms used in BCI. For instance, to respect human perception principles [START_REF] Ware | Information visualization: perception for design[END_REF], we could add these perception properties as regularization terms in regularized machine learning algorithms such as regularized spatial filters [START_REF] Lotte | Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms[END_REF] or classifiers [START_REF] Lotte | A Review of classification algorithms for EEG-based Brain-Computer Interfaces[END_REF][START_REF] Lotte | A Review of Classification Algorithms for EEG-based Brain-Computer Interfaces: A 10-year Update[END_REF]. Similarly, regularization terms could be added to ensure that the features/classifier lie in the natural motor repertoire of the user, to promote efficient learning with motor-related BCIs. This could be achieved, for instance, by transferring data between users [START_REF] Lotte | Learning from other Subjects Helps Reducing Brain-Computer Interface Calibration Time[END_REF][START_REF]Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain-Computer Interfaces[END_REF], to promote features that were shown to lead to efficient learning in other users. In other words, rather than designing features/classifiers using objective functions that reflect only discrimination, such objective functions should consider both discrimination and human learning/perception principles. This would ensure the design of both discriminative and learnable/understandable features.
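As a concrete, hedged illustration of this idea, the sketch below computes CSP-like spatial filters with an additional penalty matrix K in the denominator of the objective, in the spirit of the regularized spatial filters cited above. The identity penalty used here (a simple shrinkage) is only a placeholder for a perception-based or cross-user prior, and the covariance matrices are simulated.

```python
# Hedged sketch: filters maximizing w'C1w / (w'C2w + lam * w'Kw), a regularized-CSP-like objective.
import numpy as np
from scipy.linalg import eigh

def regularized_csp(cov1, cov2, K, lam=0.1, n_filters=2):
    """Return the n_filters generalized eigenvectors with the largest variance ratios."""
    evals, evecs = eigh(cov1, cov2 + lam * K)     # generalized eigendecomposition
    order = np.argsort(evals)[::-1]               # largest ratios first
    return evecs[:, order[:n_filters]]            # shape (n_channels, n_filters)

# usage with simulated class covariance matrices (8 channels)
rng = np.random.default_rng(5)
A1, A2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
C1, C2 = A1 @ A1.T + np.eye(8), A2 @ A2.T + np.eye(8)
K = np.eye(8)                                     # placeholder penalty matrix (assumption)
W = regularized_csp(C1, C2, K)
features = np.log(np.diag(W.T @ C1 @ W))          # log-variance features after spatial filtering
print(W.shape, features)
```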
It could also be interesting to explore the extraction and possibly simultaneous use of two types of features: features that will be used for visualization and feedback only (and thus that may not be optimal from a classification point of view), and features that will be used by the machine to recognize the EEG patterns produced by the user (but not used as user training feedback). To ensure that such features are related, and thus that learning to modulate them is also relevant to send mental commands, they could be optimized and extracted jointly, e.g., using multi-task learning [START_REF] Caruana | Multitask learning[END_REF].
Identifying when to update classifiers to enhance learning
It is already well accepted that, in order to obtain better performances, adaptive BCI systems should be used [START_REF] Millán | Asynchronous BCI and Local Neural Classifiers: An Overview of the Adaptive Brain Interface Project[END_REF][START_REF] Shenoy | Towards adaptive classification for BCI[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. Due to the inherent variability of EEG signals, as well as to changes in users' states, e.g., fatigue or attention, it was indeed shown that adaptive classifiers and features generally give higher classification accuracy than fixed ones [START_REF] Shenoy | Towards adaptive classification for BCI[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF]. Typically, this adaptation consists in re-estimating the parameters of the classifiers/features during online BCI use, in order to keep track of the changing feature distribution. However, again, such adaptation is typically performed only from a machine perspective to maximize data discriminability, without considering the user in the loop. The user is using the classifier output as feedback to learn and to use the BCI. If the classifier is continuously adapted, this means the feedback is changing continuously, which can be very confusing for the user, or even prevent them from learning properly. Indeed, both the user and the machine need to adapt to each other -the so-called co-adaptation in BCI [START_REF] Vidaurre | Machine-learning-based coadaptive calibration for brain-computer interfaces[END_REF]. A very recent and interesting work proposed a simple computational model to represent this interplay between the user learning and the machine learning, and how this co-adaptation takes place [START_REF] Müller | A mathematical model for the two-learners problem[END_REF]. While such work is only a simulation, it has nonetheless suggested that an adaptation speed that is either too fast or too slow prevents this co-adaptation from converging, and leads to decreased learning and performance.
Therefore, decisions about when to adapt, e.g., how often, and how to adapt, e.g., how much, should be made with the user in mind. Ideally, the adaptation should be performed at a rate and strength that suit each specific user, to ensure that it does not confuse users but rather helps them to learn. Doing so seems to stress once more the need for a model of the user (discussed in Section 1.2). Such a model would infer from the data, among other things, how much change the user can deal with, in order to adapt the classifier accordingly. In this model, being able to measure the users' BCI skills (see also Section 1.2.2) would also help in that regard. It would indeed make it possible to know when the classifier should be updated because the user has improved and thus their EEG patterns have changed. It would also be interesting to quantify which variations in the EEG feature distribution would require an adaptation that may be confusing to the user -e.g., those changing the EEG source used -and which should not be -e.g., those just tracking changes in feature amplitude. This would enable performing only adaptations that are as harmless as possible for the user. A point that would need to be explored is whether classifiers and features should only be adapted when the user actually changes strategy, e.g., when the user has learned a better mental imagery task. This indeed requires the classifier to be able to recognize such new or improved mental tasks, whereas other adaptations may just add some feedback noise and be confusing to the user.
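A minimal sketch of this idea is given below: an LDA-like classifier whose class means are updated with an explicit adaptation rate, so that how much and how fast the feedback changes can be controlled with the user in mind. The update rule, the fixed shared covariance and the drift simulation are simplifying assumptions, not a complete adaptive BCI.

```python
# Hedged sketch: adaptive LDA with an explicit, user-oriented adaptation rate.
import numpy as np

class AdaptiveLDA:
    def __init__(self, mean0, mean1, cov, eta=0.05):
        self.means = [np.asarray(mean0, float), np.asarray(mean1, float)]
        self.cov_inv = np.linalg.inv(cov)     # shared covariance, kept fixed in this sketch
        self.eta = eta                        # adaptation rate: 0 = frozen, 1 = jumpy feedback

    def score(self, x):
        """Signed output usable as feedback (positive -> class 1), assuming equal priors."""
        w = self.cov_inv @ (self.means[1] - self.means[0])
        b = -0.5 * w @ (self.means[0] + self.means[1])
        return float(w @ x + b)

    def update(self, x, label):
        """Exponentially weighted update of the labelled class mean."""
        self.means[label] = (1 - self.eta) * self.means[label] + self.eta * np.asarray(x, float)

# usage: slowly drifting features; a small eta tracks the drift while keeping feedback stable
rng = np.random.default_rng(6)
clf = AdaptiveLDA(mean0=[-1, -1], mean1=[1, 1], cov=np.eye(2), eta=0.05)
for t in range(100):
    label = t % 2
    x = rng.normal(loc=(1 if label else -1) + 0.01 * t, scale=0.5, size=2)  # simulated drift
    feedback = clf.score(x)               # what the user would see
    clf.update(x, label)
print(clf.means)
```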
Designing BCI feedbacks ensuring learning
Feedback is generally considered as an important facilitator of learning and skill acquisition [START_REF] Azevedo | A meta-analysis of the effects of feedback in computer-based instruction[END_REF][START_REF] Bangert-Drowns | The instructional effect of feedback in test-like events[END_REF] with a specific effect on the motivation to learn, see e.g., [START_REF] Narciss | How to design informative tutoring feedback for multimedia learning. Instructional design for multimedia learning[END_REF].
Black and William [START_REF] Black | Assessment and classroom learning. Assessment in Education: principles, policy & practice[END_REF] proposed that, to be effective, feedback must be directive (indicate what needs to be revised) and facilitative (provide suggestions to guide learners). In the same way, Kulhavy and Stock proposed that effective feedback must allow verification (i.e., specify if the answer is correct or incorrect) and elaboration (i.e., provide relevant cues to guide the learner) [START_REF] Kulhavy | Feedback in written instruction: The place of response certitude[END_REF].
In addition to guidance, informative feedback has to be goal-directed, by providing learners with information about their progress toward the goal to be achieved. The feeling that the goal can be met is an important way to enhance the motivation and engagement of learners [START_REF] Fisher | Differential effects of learner effort and goal orientation on two learning outcomes[END_REF].
Feedback should also be specific to avoid being considered as useless or frustrating [START_REF] Williams | Teachers' Written Comments and Students' Responses: A Socially Constructed Interaction[END_REF]. It needs to be clear, purposeful, meaningful [START_REF] Hattie | The Power of Feedback[END_REF] and to lead to a feeling of competence in order to increase motivation [START_REF] Ryan | Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being[END_REF].
Another consideration that requires much deeper exploration is that feedback must be adapted to the characteristics of the learners. For example, [START_REF] Hanna | Effects of total and partial feedback in multiple-choice testing upon learning[END_REF] showed that elaborated feedback enhances the performance of low-ability students, while the verification condition enhances the performance of high-ability students. In the BCI field, Kübler et al. showed that positive feedback (provided only for a correct response) was beneficial for new or inexperienced BCI users, but harmful for advanced BCI users [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF].
As underlined in [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF], classical BCI feedback satisfies few of these requirements. Feedback typically specifies if the answer is correct or not -i.e., the feedback is corrective -but does not aim at providing suggestions to guide the learner -i.e., it is not explanatory. Feedback is also usually not goal-directed and does not provide details about how to improve the answer. Moreover, the feedback may often be unclear and meaningless, since it is based on a classifier built using calibration data recorded at the beginning of the session, during which the user does not yet master the mental imagery tasks they must perform.
In [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF], we discussed the limits of the feedback used in BCI, and proposed solutions, some of which have already yielded positive results [START_REF] Hwang | Neurofeedback-based motor imagery training for brain-computer interface (BCI)[END_REF][START_REF] Pfurtscheller | Motor Imagery and Direct Brain-Computer Communication[END_REF]. A possibility would be to provide the user with richer and more informative feedback by using, for example, a global picture of his/her brain activity, e.g., a 2D or 3D topography of cortical activation obtained by inverse solutions. Another proposal is to collect better information on the mental task achieved by the subject (for example by recording Event-Related Desynchronisation/Synchronisation activity) to evaluate users' progress and give them relevant insights about how to perform the mental task. Finally, it would be relevant to use more attractive feedback, e.g., game-like, 3D or Virtual Reality environments, thus increasing user engagement and motivation [START_REF] Lécuyer | Brain-Computer Interfaces, Virtual Reality and Videogames[END_REF][START_REF] Leeb | Brain-Computer Communication: Motivation, aim and impact of exploring a virtual apartment[END_REF].
In a recent study, [START_REF] Jeunet | Continuous Tactile Feedback for Motor-Imagery based Brain-Computer Interaction in a Multitasking Context[END_REF] tested a continuous tactile feedback by comparing it to an equivalent visual feedback. Performance was higher with the tactile feedback, indicating that this modality can be a promising way to enhance BCI performance.
To conclude, the feedback used in BCI is simple and often poorly informative, which may explain some of the learning difficulties encountered by many users. Based on the literature identifying the parameters that maximize the effectiveness of feedback in general, BCI studies have already identified possible theoretical improvements. However, further investigations will be necessary to explore new research directions in order to make BCI accessible to a greater number of people. In particular, the machine learning and signal processing communities have the skills and tools necessary to design BCI feedback that is clearer, adaptive and adapted to the user, more informative and explanatory. In the following we provide more details on some of these aspects. In particular, we discuss the importance of designing adaptive biased feedback, emotional and explanatory feedback, and provide related research directions in which the machine learning and signal processing communities can contribute.
Designing adaptive biased feedback
As stated earlier in this chapter, it is essential to compute and understand the user's emotional, motivational and cognitive states in order to provide them with an appropriate, adapted and adaptive feedback that will favor the acquisition of skills especially during the primary training phases of the user [START_REF] Mcfarland | EEG-based communication and control: short-term role of feedback[END_REF]. Indeed, in the first stages, the fact that the technology and the interaction paradigm (through MI tasks) are both new for the users is likely to induce a pronounced computer anxiety associated with a low sense of agency. Yet, given the strong impact that the sense of agency (i.e., the feeling of being in control) has on performance -see Section 1.2.3.1 -it seems important to increase it as far as possible. Providing the users with a sensory feedback informing them about the outcome of their action (MI task) seems to be necessary in order to trigger a certain sense of agency at the beginning of their training. This sense of agency will in turn unconsciously encourage users to persevere, increase their motivation, and thus promote the acquisition of MI-BCI related skills, which is likely to lead to better performances [START_REF] Achim | Computer usage: the impact of computer anxiety and computer self-efficacy[END_REF][START_REF] Saadé | Computer anxiety in e-learning: The effect of computer self-efficacy[END_REF][START_REF] Simsek | The relationship between computer anxiety and computer selfefficacy[END_REF]. This process could underlie the (experimentally proven) efficiency of positively biased feedback for MI-BCI user-training.
Positively biased feedback consists in leading users to believe that their performance was better than it actually was. Literature [START_REF] Barbero | Biased feedback in brain-computer interfaces[END_REF][START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF] reports that providing MI-BCI users with a biased (only positive) feedback is associated with improved performances while they are novices. However, that is no longer the case once they have progressed to the level of expert users. This result could be due to the fact that positive feedback provides users with an illusion of control which increases their motivation and will to succeed. As explained by [START_REF] Achim | Computer usage: the impact of computer anxiety and computer self-efficacy[END_REF], once users reach a higher level of performance, they also experience a high level of self-efficacy which leads them to consider failure no longer as a threat [START_REF] Kleih | Motivation and SMR-BCI: fear of failure affects BCI performance[END_REF] but as a challenge. And facing these challenges leads to improvement. Another explanation is the fact that experts develop the ability to generate a precise predicted outcome that usually matches the actual outcome (when the feedback is not biased). This could explain why when the feedback is biased, and therefore the predicted and actual outcomes do not match, expert users attribute the discrepancy to external causes more easily. In other words, it can be hypothesized that experts might be disturbed by a biased feedback because they can perceive that it does not truly reflect their actions, thus decreasing their sense of being in control.
To summarize, it is noteworthy that the experience level of the user needs to be taken into account when designing the optimal feedback system, and more specifically the bias level. As discussed before, the user experience is nonetheless difficult to assess (see also Section 1.2.2). For instance, when using LDA to discriminate 2 classes, the LDA will typically always output a class, even if it is uncertain about it. This might lead to a class seemingly always being recognized, even if the user does not do much. Hence, if both classes are equally biased, the user would most likely not gain motivation for the class that is always recognized, which already seems to perform well, but could feel bored. Note that even if one class is always recognized (seemingly giving higher performances than the other class), that does not mean that the user is actually performing well when imagining that class; it can be due to the classifier being unbalanced and outputting this class more often (e.g., due to a faulty electrode). On the other hand, if the biased feedback is applied to the class which is not well recognized, the user would probably gain motivation. Thus, in [START_REF] Mladenović | The Impact of Flow in an EEG-based Brain Computer Interface[END_REF] the task was adaptively biased, depending on the user's performance in real time, e.g., positively for the class which was recognized less often, and negatively for the one recognized more often, in order to keep the user engaged. This idea came from Flow theory [START_REF] Csikszentmihalyi | Toward a psychology of optimal experience[END_REF], which explains that intrinsic motivation, full immersion in the task and concentration can be attained if the task is adapted to the user's skills. Following the requirements of Flow theory, in [START_REF] Mladenović | The Impact of Flow in an EEG-based Brain Computer Interface[END_REF] the environment is designed to be engaging and entertaining, the goals are clear with immediate visual and audio feedback, and task difficulty is adapted to user performance in real time. It is shown that the users feel more in control, and more in flow, when the task is adapted. Additionally, the offline performance and flow level correlated. This suggests that adapting the task may create a virtuous loop, potentially increasing flow with performance.
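A minimal sketch of such adaptive biasing is given below (in Python; the update rule, the target recognition rate and the variable names are illustrative assumptions, not the exact procedure of the cited study): the class that is currently recognised less often receives a positive bias on the displayed output, and the over-recognised one a negative bias.

```python
import numpy as np

class AdaptiveBias:
    """Keeps a running recognition rate per class and shifts the classifier's
    continuous output: the poorly recognised class gets positively biased
    feedback, the over-recognised one gets negatively biased feedback."""

    def __init__(self, n_classes=2, target=0.7, gain=0.5, momentum=0.95):
        self.rate = np.full(n_classes, 0.5)   # running per-class accuracy
        self.target = target                  # perceived accuracy we aim for
        self.gain = gain
        self.momentum = momentum

    def update(self, true_class, correct):
        r = self.rate[true_class]
        self.rate[true_class] = self.momentum * r + (1 - self.momentum) * float(correct)

    def biased_output(self, raw_score, true_class):
        """raw_score > 0 means the cued class is recognised; the bias nudges
        the displayed value towards the target recognition rate."""
        bias = self.gain * (self.target - self.rate[true_class])
        return raw_score + bias

# toy usage over a block of trials
rng = np.random.default_rng(1)
fb = AdaptiveBias()
for trial in range(20):
    cued = trial % 2
    raw = rng.normal(loc=0.3 if cued == 0 else -0.1)  # class 1 is poorly recognised
    shown = fb.biased_output(raw, cued)               # value actually displayed
    fb.update(cued, correct=raw > 0)
print("per-class recognition rates:", fb.rate.round(2))
```

In practice, the strength of this bias would also need to be modulated by the user's skill level and profile, as discussed next.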
The approach of providing an adapted and adaptive feedback, obtained by modulating the bias level, sounds very promising in order to maintain BCI users in a flow state, with a high sense of agency. Nonetheless, many challenges remain in order to optimize the efficiency of this approach. First, once more, it is necessary to be able to infer the state of the user, and especially their skill level, from their performance and physiological data. Second, we will have to determine the bias to be applied to the BCI output as a function of the evolution of the users' skills, but also as a function of their profile. Indeed, the basic level of sense of agency is not the same for everybody. Also, as shown in our models [START_REF] Jeunet | Towards a cognitive model of MI-BCI user training[END_REF][START_REF] Mladenovic | A generic framework for adaptive EEGbased BCI training and operation[END_REF], both the sense of agency and the flow are influenced by several factors: they do not depend only upon the performance. Thus, many parameters -related to users' states and traits -should be taken into account to know how to adapt the bias.
Designing adaptive emotional feedback
The functioning of the brain has often been compared to that of a computer, which is probably why the social and emotional components of learning have long been ignored. However, emotional and social contexts play an important role in learning [START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF][START_REF] Salancik | A social information processing approach to job attitudes and task design[END_REF]. The learner's affective state has an influence on problem solving strategies [START_REF] Isen | Positive Affect and Decision Making, Handbook of emotions[END_REF] and motivational outcome [START_REF] Stipek | Motivation to learn: From theory to practice[END_REF]. Expert teachers can detect such emotional states and react accordingly to positively impact learning [START_REF] Goleman | Emotional Intelligence[END_REF]. However, most BCI training protocols do not provide users with adaptive social and emotional feedback. In [START_REF] Bonnet | Two brains, one game: design and evaluation of a multiuser BCI video game based on motor imagery[END_REF], we added some social context to BCI training by creating a game where BCI users had to compete against or collaborate with each other, which resulted in improved motivation and better BCI performances for some of the participants. Other studies tried to provide a non-adaptive emotional feedback, in the form of smileys indicating whether the mental command was successfully recognized [START_REF] Kübler | Brain-computer communication: self-regulation of slow cortical potentials for verbal communication[END_REF][START_REF] Leeb | Brain-Computer Communication: Motivation, aim and impact of exploring a virtual apartment[END_REF]. No formal comparison with and without such emotional feedback was performed though, so its efficiency remains unknown. Intelligent Tutoring Systems (ITS) providing emotional and motivational support can be considered as a substitute and have been used in distance learning protocols where such feedback components were also missing. Indeed, they have proven to be successful in improving learning, self-confidence and affective outcome [START_REF] Woolf | Affective tutors: Automatic detection of and response to student emotion[END_REF]. We tested such a method for BCI in [START_REF] Pillette | PEANUT: Personalised Emotional Agent for Neurotechnology User-Training[END_REF], where we implemented a learning companion for BCI training purposes. The companion provided both an adapted emotional support and social presence. Its interventions were composed of spoken sentences and facial expressions adapted based on the performance and progress of the user. Results show that emotional support and social presence have a beneficial impact on users' experience. Indeed, users who trained with the learning companion felt it was easier to learn and memorize than the group that only trained with the usual training protocol (i.e., with no emotional support or social presence). This learning companion did not lead to any significant increase in online classification performance so far, though, which suggests that it should be further improved.
It could for example consider the user's profile, which influences BCI performances [START_REF] Jeunet | Predicting Mental Imagery-Based BCI Performance from Personality, Cognitive Profile and Neurophysiological Patterns[END_REF], and monitor the user's emotional state and learning phase [START_REF] Kort | An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning companion[END_REF]. Indeed, both social and emotional feedback can have a positive, neutral or negative influence on learning depending on the task design, the type of feedback provided and the variables taken into account to provide the feedback [START_REF] Kennedy | The robot who tried too hard: Social behaviour of a robot tutor can negatively affect child learning[END_REF][START_REF] Johnson | An educational psychology success story: Social interdependence theory and cooperative learning[END_REF]. In this context, machine learning could have a substantial impact on future applications of ITS in BCI training. In particular, it seems promising to use machine learning to learn, from the student's EEG, reactions and the system's previous experience, which emotional feedback is the most appropriate to provide to the user.
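One simple way to frame this selection problem is as a multi-armed bandit in which each arm is a type of emotional or social intervention and the reward is any available proxy of its benefit (e.g., subsequent performance or self-reported experience). The sketch below (Python; the intervention names and the simulated rewards are illustrative assumptions, not part of the PEANUT companion) uses the standard UCB1 rule:

```python
import math
import random

class FeedbackBandit:
    """UCB1 bandit choosing among candidate emotional/social interventions."""

    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward

    def select(self):
        for a in self.arms:                 # play each arm once first
            if self.counts[a] == 0:
                return a
        total = sum(self.counts.values())
        def ucb(a):
            return self.values[a] + math.sqrt(2 * math.log(total) / self.counts[a])
        return max(self.arms, key=ucb)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# illustrative arms and simulated rewards (e.g., improvement on the next run)
bandit = FeedbackBandit(["encouragement", "progress_reminder", "neutral_hint"])
true_benefit = {"encouragement": 0.6, "progress_reminder": 0.4, "neutral_hint": 0.3}
for _ in range(100):
    arm = bandit.select()
    bandit.update(arm, reward=float(random.random() < true_benefit[arm]))
print({a: round(v, 2) for a, v in bandit.values.items()})
```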
Designing explanatory feedback
As mentioned above, in many learning tasks, BCI included, the role of feedback has been found to be essential in supporting learning and making this learning efficient [START_REF] Shute | Focus on Formative Feedback[END_REF][START_REF] Hattie | The Power of Feedback[END_REF]. While feedback can be of several types, for BCI training this feedback is almost always corrective only [START_REF] Lotte | Flaws in current human training protocols for spontaneous Brain-Computer Interfaces: lessons learned from instructional design[END_REF]. Corrective feedback tells the user whether the task they just performed was correct or incorrect. Indeed, in most BCIs, the feedback is typically a bar or a cursor indicating whether the mental task performed by the user was correctly recognized. Unfortunately, human learning theories and instructional design principles all recommend providing feedback that is explanatory, i.e., which does not only indicate correctness, but also why it was correct or not. Indeed, across many learning tasks, explanatory feedback, which explains the reasons behind the outcome, was shown to be superior to corrective feedback [START_REF] Shute | Focus on Formative Feedback[END_REF][START_REF] Hattie | The Power of Feedback[END_REF].
Consequently, it would be promising to try to design explanatory feedback for BCI. This is nonetheless a substantial challenge. Indeed, being able to provide explanatory feedback means being able to understand the causes of success or failure of a given mental command. So far, the BCI community has very little knowledge about these possible causes. Some works did identify some predictors of BCI performances [START_REF] Jeunet | Advances in user-training for mental-imagerybased BCI control: Psychological and cognitive factors and their neural correlates[END_REF][START_REF] Ahn | Performance variation in motor imagery brain-computer interface: A brief review[END_REF][START_REF] Grosse-Wentrup | What are the Causes of Performance Variation in Brain-Computer Interfacing[END_REF][START_REF] Blankertz | Neurophysiological predictor of SMR-based BCI performance[END_REF]. However, most of these works identified predictors of performance variations across many trials and possibly many runs or sessions. Exceptions are [START_REF] Grosse-Wentrup | Causal Influence of Gamma Oscillations on the Sensorimotor Rhythm[END_REF] and [START_REF] Schumacher | Towards explanatory feedback for user training in brain-computer interfaces[END_REF], who showed that cortical gamma activity in attentional networks and tension in forehead and neck muscles, respectively, were correlated with single-trial performance. In [START_REF] Schumacher | Towards explanatory feedback for user training in brain-computer interfaces[END_REF] we designed a first explanatory feedback for BCI, informing users of their forehead and neck muscle tension, identifying when it was too strong, to guide them towards being relaxed. Unfortunately this did not lead to a significant increase in online BCI performance. Such work was however only a preliminary attempt that should be explored further, to identify new predictors of single-trial performance and use them as feedback.
We denote features measuring causes of success or failure of a trial or group of trials as feedback features. We thus encourage the community to design and explore new feedback features. This is another machine learning and signal processing problem, in which, rather than classifying EEG signals as corresponding to one mental command or another, we should classify them as predicting a successful or a failed trial. Thus, with different labels than before, machine learners can explore and design various tools to identify the most predictive feedback features; a minimal sketch of this reformulation is given below. Such features could then be used as additional feedback during online BCI experiments, possibly supporting efficient BCI skill learning.
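The sketch below (Python with scikit-learn; the data and candidate features are placeholders, not measurements from any cited study) labels each trial as successful or failed and ranks candidate feedback features by how well each one alone predicts that outcome.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# placeholder data: one row per trial, columns are candidate feedback features
# (e.g., gamma power in attentional networks, forehead/neck EMG amplitude, ...)
rng = np.random.default_rng(42)
n_trials, n_features = 200, 6
X = rng.normal(size=(n_trials, n_features))
success = (X[:, 1] - 0.8 * X[:, 4] + rng.normal(scale=0.8, size=n_trials)) > 0

# how well does each feature alone predict a successful vs failed trial?
for j in range(n_features):
    auc = cross_val_score(LogisticRegression(), X[:, [j]], success,
                          cv=5, scoring="roc_auc").mean()
    print(f"feature {j}: cross-validated AUC = {auc:.2f}")
```

Features with a consistently high AUC would be natural candidates to display as explanatory feedback during online use.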
Conclusion
In this chapter, we tried to highlight to our readers that when designing Brain-Computer Interfaces, both the machine (EEG signal decoding) and the user (BCI skill learning and performance) should be taken into account. Actually, in order to really enable BCIs to reach their full potential, both aspects should be explored and improved. So far, the vast majority of the machine learning community has worked on improving and robustifying the EEG signal decoding, without considering the human in the loop. Here, we hope we convinced our readers that considering the human user is necessary, notably to guide and boost BCI user training and performance, and that machine learning and signal processing can bring useful and innovative solutions to do so. In particular, throughout the chapter we identified 9 challenges that would need to be solved to enable users to use, and to learn to use, BCI efficiently, and for each we suggested potential machine learning and signal processing research directions to address it. These various challenges and solutions are summarized in Table 1.1.
We hope this summary of open research problems in BCI will inspire the machine learning and signal processing communities, and will motivate their scientists to explore these less traveled but essential research directions. In the end, BCI research does need contributions from these communities to improve the user experience and learnability of BCIs, and enable them to finally become usable and useful in practice, outside laboratories.
Figure 1.1: A concept of how Active Inference could be used to implement a fully adaptive BCI.
Table 1.1: Summary of signal processing and machine learning challenges to BCI user training and experience, and potential solutions to be explored.

Modelling the BCI user:
- Challenge: Robust recognition of users' mental states from physiological signals. Potential solutions: exploring features, denoising and classification algorithms for each mental state.
- Challenge: Quantifying the many aspects of users' BCI skills. Potential solutions: Riemannian geometry to go beyond classification accuracy.
- Challenge: Determining when to adapt the training procedure, based on the user's state, to optimise performance and learning. Potential solutions: case-based/rule-based reasoning algorithms; multi-arm bandits to adapt the training procedure automatically.
- Challenge: Computationally modeling the user's states and traits and adaptation tools. Potential solutions: exploiting Active Inference tools.

Understanding and improving BCI user learning:
- Challenge: Designing features and classifiers resulting in feedback favoring learning. Potential solutions: regularizers incorporating human learning and perception principles.
- Challenge: Adapting classifiers in a way and with a timing favoring learning. Potential solutions: triggering adaptation based on a user model.
- Challenge: Adapting the bias based on the user's level of skills to maintain their flow and agency. Potential solutions: triggering adaptation based on a model of the bias*skill relationship.
- Challenge: Adapting feedback to include emotional support and social presence. Potential solutions: building on the existing work of the ITS field.
- Challenge: Identifying/designing explanatory feedback features. Potential solutions: designing features to classify correct vs incorrect commands.
Acknowledgments
This work was supported by the French National Research Agency with the REBEL project (grant ANR-15-CE23-0013-01), the European Research Council with the BrainConquest project (grant ERC-2016-STG-714567), the Inria Project-Lab BCI-LIFT and the EPFL/Inria International lab.
Stéphane Hourdez (email: hourdez@sb-roscoff.fr), 2018. Source: https://hal.sorbonne-universite.fr/hal-01763807/file/SegonzaciaSubmission_DSR_Revised4HAL.pdf
Cardiac response of the hydrothermal vent crab Segonzacia mesatlantica to variable temperature and oxygen levels
Keywords: Hypoxia, Oxyregulation, Critical temperature, Critical oxygen concentration
Segonzacia mesatlantica inhabits different hydrothermal vent sites of the Mid-Atlantic Ridge where it experiences chronic environmental hypoxia, and highly variable temperatures. Experimental animals in aquaria at in situ pressure were exposed to varying oxygen concentrations and temperature, and their cardiac response was studied. S. mesatlantica is well adapted to these challenging conditions and capable of regulating its oxygen uptake down to very low concentrations (7.3-14.2 µmol.l-1). In S. mesatlantica, this capacity most likely relies on an increased ventilation rate, while the heart rate remains stable down to this critical oxygen tension. When not exposed to temperature increase, hypoxia corresponds to metabolic hypoxia and the response likely only involves ventilation modulation, as in shallow-water relatives. For S. mesatlantica however, an environmental temperature increase is usually correlated with more pronounced hypoxia. Although the response to hypoxia is similar at 10 and 20˚C, temperature itself has a strong effect on the heart rate and EKG signal amplitude. As in shallow water species, the heart rate increases with temperature. Our study revealed that the range of thermal tolerance for S. mesatlantica ranges from 6 through 21˚C for specimens from the shallow site Menez Gwen (800 m), and from 3 through 19˚C for specimens from the deeper sites explored (2700-3000 m).
Introduction
Environmental exposure to hypoxia in aquatic habitats can be common [START_REF] Hourdez | Hypoxic environments[END_REF]. Near hydrothermal vents, oxygen levels are often low and highly variable both in space and in time. These conditions result from the chaotic mixing of the hydrothermal vent fluid, which is hot, anoxic, and often rich in sulfide, with the deepsea water, cold and usually slightly hypoxic. Oxygen and sulfide spontaneously react, decreasing further the amount of available oxygen in the resulting sea water. The presence of reduced compounds in the hydrothermal fluid is paramount to the local primary production by autotrophic bacteria at the base of the food chain in these environments. To reap the benefits of this high local production in an otherwise seemingly barren deep-sea at similar depths, metazoans must possess specific adaptations to deal with the challenging conditions, among which chronic hypoxia is probably one of the most limiting. All metazoans that have been studied to date indeed exhibit oxygen requirements comparable to those of close relatives that live in well-oxygenated environments [START_REF] Childress | Metabolic rates of animals from hydrothermal vents and other deep-sea habitats[END_REF][START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF].
A study of morphological adaptations in decapodan crustaceans revealed that, contrary to annelid polychaetes [START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF], there is usually no increase in gill surface areas in vent decapods compared to their shallow-water relatives [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. In the vent decapods however, the scaphognathite is greatly enlarged, suggesting an increased ventilatory capacity. In situ observations of vent shrimp in settings typified by different oxygen concentrations also indicated that these animals increased their ventilation rates under lower oxygen conditions [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. This behavioral change is consistent with other decapods in which both the frequency and amplitude of scaphognathite beating are increased in response to hypoxia (see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF][START_REF] Whiteley | Responses to environmental stresses: oxygen, temperature, and pH. Chapter 10. In: The Natural History of Crustaceans[END_REF][START_REF] Whiteley | Responses to environmental stresses: oxygen, temperature, and pH. Chapter 10. In: The Natural History of Crustaceans[END_REF] for reviews).
The vent crab Bythograea thermydron Williams 1980 is able to maintain its oxygen consumption relatively constant over a wide range of oxygen concentrations (i.e. oxyregulation capability), down to much lower concentrations than the shallow water species for which this ability was studied [START_REF] Gorodezky | Effects of sulfide exposure history and hemolymph thiosulfate on oxygen-consumption rates and regulation in the hydrothermal vent crab Bythograea thermydron[END_REF]. The capacity to oxyregulate can involve different levels of regulation. At the molecular level, the functional properties of the hemocyanins (in particular their oxygen affinity) play a central role. Hemocyanins from decapods that inhabit deep-sea hydrothermal vents exhibit very high oxygen affinities, allowing the extraction of oxygen from hypoxic conditions (see [START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF] for a review). The properties of these blood oxygen carriers can also be affected by allosteric effectors contained in the hemolymph. [START_REF] Gorodezky | Effects of sulfide exposure history and hemolymph thiosulfate on oxygen-consumption rates and regulation in the hydrothermal vent crab Bythograea thermydron[END_REF] showed that animals injected with thiosulfate, a byproduct of sulfide detoxification in the animals that increases hemocyanin affinity, allowed the crab to oxyregulate down to ever lower environmental oxygen concentrations. At the physiological level, adaptation to lower oxygen concentrations can involve ventilatory and cardio-circulatory responses (for a review, see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]. In the shallow water species studied to date, the circulatory response can be quite complex, involving modifications of the heart rate, stroke volume and peripheral resistance. Typically, decapods increase their ventilation (scaphognathite beating frequency and power), decrease their heart rate (bradycardia), and adjust the circulation of their hemolymph, decreasing its flow to digestive organs in favor of the ventral structures [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF].
The responses to variable oxygen levels involving modifications of heart parameters (contraction rate) and ventilation have however so far not experimentally been studied in hydrothermal vent species of crabs. We studied the cardiac response of the Mid-Atlantic Ridge (MAR) vent crab Segonzacia mesatlantica Williams 1988. This species has been collected at different sites from the MAR, with depths ranging from 850 m (Menez Gwen site) to 4080 m (Ashadze 1 site). To study the cardiac response to varying levels of environmental oxygen in S. mesatlantica, experimental animals were equipped with electrodes and their electrocardiograms (EKG) under different oxygen concentrations were recorded. As temperature affects oxygen demand as well, its effect on the EKG of the experimental crabs was also investigated.
Materials and methods
Animal collection
Specimens of the crab Segonzacia mesatlantica were collected on the hydrothermal vent sites Menez Gwen (37˚50'N, 31˚31'W, 855 m water depth), Logatchev (14˚45.12'N, 44˚58.71'W, 3050 m water depth), and Irinovskoe (14˚20.01'N, 44˚55.36'W, 2700 m water depth), on the Mid-Atlantic Ridge. They were captured with the remotely operated vehicle (ROV) MARUM-Quest, deployed from the Research Vessel Meteor (Menez MAR M82/3 and M126 cruises). The animals were brought back to the surface in a thermally insulated box attached to the ROV and quickly transferred to a cold room (5-8˚C) before they were used for experiments. The few specimens from the shallow site (Menez Gwen) that were not used in the experimental system and were maintained at atmospheric pressure all survived for at least two weeks.
Experimental system
The experimental animals were fitted with three stainless-steel thin wire electrodes: two inserted on either side of the heart and the third in the general body cavity as a reference (Fig. 1A). The flexible leads were then glued directly onto the carapace to prevent the movements of the crabs from affecting the position of the implanted electrodes. Attempts to use the less-invasive infrared sensor simply attached to the shell ([START_REF] Depledge | A computer-aided physiological monitoring system for continuous, long-term recording of cardiac activity in selected invertebrates[END_REF]Andersen, 1990;Robinson et al., 2009) proved unsuccessful on this species and on another vent crab, Bythograea thermydron (Hourdez, unpublished). The same sensor type also proved unsuccessful for measuring scaphognathite beating frequency (possible penetration of the seawater into the sensor). The equipped animals were maintained in an anodized aluminum, custom-built, 500-ml pressure vessel (inside diameter 10 cm and 6.4 cm height) with a Plexiglas window that allowed regular visual inspection of the animals. The smaller specimens (LI-2 through LI-6 and LI-10, see Table 1) were maintained in a smaller pressure vessel (inside diameter 3.5 cm and 5 cm height). The crabs were free to move inside the pressure vessel. During the experiment, the crabs usually remained calm, with some short activity periods. The water flow was provided by an HPLC pump (Waters 515), set to 3-5 ml min-1, depending on the size of the specimen, to yield an oxygen concentration decrease (compared to the inlet concentration) that was measurable with confidence. The pressure was regulated by a pressure-relief valve (Swagelok SS-4R3A) (Fig. 1B). Oxygen concentration in the inlet water was modulated by bubbling nitrogen and/or air with various flows.
Oxygen concentrations are reported in µmol.l -1 rather than partial pressures as these latter depend on the total pressure and are therefore difficult to compare between experiments run at different pressures. Oxygen concentration was measured directly after the pressure relief valve with an oxygen optode (Neofox, Ocean Optics). Three-way valves allowed water to flow either through the vessel containing the animal or through a bypass without affecting the pressure in the system. Oxygen consumption rate (in µmol.h -1 ) was then simply calculated as the difference between these two values, taking the flow rate into consideration: Oxygen consumption rate= (O2 in-O2 out) * WFR where 'O2 in' is the inlet oxygen concentration (in µmol.l -1 ) measured with the bypass in place, 'O2 out' the oxygen concentration (in µmol.l -1 ) measured when the water was flowing through the pressure vessel, and WFR the water flow rate (in l.h -1 ) controlled by the HPLC pump.
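For clarity, the computation reduces to a few lines; the following Python snippet only illustrates the arithmetic with made-up numbers and is not the processing script used in this study:

```python
# oxygen consumption rate = (inlet - outlet concentration) * water flow rate
o2_in = 95.0     # µmol / l, measured with the bypass in place
o2_out = 82.0    # µmol / l, measured with water flowing through the vessel
flow_ml_min = 4.0                      # HPLC pump setting, ml / min
wfr = flow_ml_min * 60.0 / 1000.0      # convert to l / h

consumption = (o2_in - o2_out) * wfr   # µmol O2 / h
print(f"oxygen consumption rate: {consumption:.1f} µmol/h")
```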
Temperature in the pressure vessel was controlled by immersion in a temperature-controlled water bath (10 or 20 ± 0.2 ˚C). Temperature ramping to study the effect of temperature on the heart rate was obtained by progressively increasing temperature in the water bath. We were interested in the crabs' response to rapid temperature variation and therefore chose a rate of about 1˚C every 15 minutes. All experiments were run at a pressure equivalent to in situ pressure for the two sites (80 bars (8 MPa), equivalent to 800 m water depth for Menez Gwen and 270 bars (27 MPa) for the Logatchev and Irinovskoe sites).
Recording of electrocardiograms (EKG)
An electrical feed-through in the pressure vessel wall allowed the recording of the EKG of animals under pressure. We worked on a total of twelve specimens, including ten from Semyenov/Irinovskoe, and two from Menez Gwen (Table 1).
Out of these twelve specimens, six were equipped with electrodes to monitor the electrical activity of their heart, including both Menez Gwen specimens. The voltage variations were recorded with a LabPro (Vernier) interface equipped with an EKG sensor (Vernier) for 30 s for each of the conditions. Voltage values were recorded every 1/100 s. Recordings were made every 2-15 minutes, depending on the rate of change of the studied parameters. Specifically, temperature change was fast and recordings were made every 2-3 minutes, while changes in oxygen concentration were slower and recordings were made every 10-15 minutes. As the animals live in a highly variable environment, we were interested in the response to rapidly-changing conditions and did not give animals time to acclimate to various oxygen levels or temperature values.
The crabs sometimes went through transient cardiac arrests (some as long as 20 seconds), a phenomenon also reported in shallow-water crabs, in particular in response to tactile and visual stimuli (e.g. [START_REF] Stiffler | A comparison of in situ and in vitro responses of crustacean hearts to hypoxia[END_REF][START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF][START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF]. Recordings comprising such arrests were not used for the calculation of heart rate. There was no apparent correlation between the conditions and the occurrence of the arrests, although they seemed to occur less at lower oxygen tensions (pers. obs.).
Changes in the parameters of the EKG (amplitude, shape) could reflect important modifications of cardiac output. We studied the effect of both temperature and oxygen concentration on the shape and amplitude of the EKG.
Electrode implantation, recovery and effect of pressure
Shortly after electrode implantation, the EKG was directly recordable at atmospheric pressure, although a bit erratic while manipulating the animal. For one of the shallower site crabs, the EKG was recorded for 4 hours at atmospheric pressure and 10˚C in the closed experimental vessel. Once under stable conditions, the EKG quickly became regular, and its shape resembled that under pressure (data not shown). From an initial heart rate oscillating between 30 and 45 beats per minute (b.p.m.), the heart rate increased to 55 b.p.m. between 3.5 and 4 hours after implantation. After pressure was applied, a bradycardia appeared (heart rate down to 35 b.p.m.), which lasted for about 1.5 hours before the heart rate returned to typical values for 10˚C at 80 bars (8 MPa; 60-70 b.p.m., see below). Similarly, animals from the Irinovskoe or Logatchev sites (2700-3050 m depth) and acclimated to their in situ pressure exhibit transient bradycardia when exposed to lowered pressure. Within a few minutes, the animals stabilized their heart rate to values greater than simulated in situ pressure values. At pressure values lower than 150 bars (15 MPa), the heart rate remained relatively stable (see supplementary material S1). Upon return to the in situ pressure value, the heart rate rapidly returned to the initial value.
Determination of curve parameters
We used curve fitting to determine key values for the heart rate and oxygen consumption as a function of oxygen concentration. In particular, the critical oxygen concentration at which oxygen consumption or heart rate drops can be relatively subjective or its determination strongly dependent on the relatively small number of data points below that value. When the critical oxygen concentration is small (as it is the case for S. mesatlantica, see results), very few data points can be obtained below that value. Instead, we used the equation:
Y = \frac{a\,(X - c)}{1 + b\,(X - c)}
where X is the oxygen concentration, Y is the physiological parameter (heart rate or oxygen consumption rate), b is a steepness coefficient, the ratio a/b is the value at plateau (X infinite), and c is the intercept of the curve with the x-axis.
The curve fitting parameters a, b, and c were obtained with the software JMP11, based on an exploration of possible values for the parameters a, b, and c, and the best values were determined by minimizing the difference between the observed (experimental) and expected values (based on the curve equation). Because of the very steep drop of both heart rate and oxygen consumption rate (see Fig. 3), the intercept c is hereafter referred to as the critical oxygen concentration.
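The same type of fit can be reproduced with standard non-linear least squares; the sketch below (Python with SciPy, on synthetic data, and not the JMP11 procedure used here) shows how a, b and c, and hence the plateau a/b and the critical concentration c, can be estimated:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(x, a, b, c):
    """Y = a*(X - c) / (1 + b*(X - c)); plateau = a/b, x-intercept = c."""
    return a * (x - c) / (1.0 + b * (x - c))

# synthetic heart-rate data: plateau ~80 b.p.m., critical concentration ~9 µmol/l
rng = np.random.default_rng(3)
o2 = np.linspace(10, 200, 40)
hr = saturating(o2, a=40.0, b=0.5, c=9.0) + rng.normal(scale=2.0, size=o2.size)

popt, _ = curve_fit(saturating, o2, hr, p0=[10.0, 0.1, 5.0])
a, b, c = popt
print(f"plateau a/b = {a / b:.1f} b.p.m., critical O2 concentration c = {c:.1f} µmol/l")
```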
Results
Oxygen consumption rates
The oxygen consumption rates (in µmole O2 per hour) were measured for all 12 specimens (Table 1, Fig. 2). With a size range of 0.4-41.5 g wet weight, the oxygen consumption rate increases with an allometry coefficient of 0.48 (p = 2.8 × 10-8). The sex of the animals does not have a significant effect on the regression (ANCOVA, p = 0.1784). The oxygen consumption rates for the two specimens from Menez Gwen (800 m depth) do not differ markedly from those of the specimens from the other sites (2700-3050 m depth), and fall within the 95% confidence interval established for the ten specimens from the deeper sites.
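The allometry coefficient is simply the slope of the log-log regression of consumption rate against wet weight. As an illustration (Python, using only the ten specimens listed in Table 1, whereas the published value of 0.48 is based on all twelve), the calculation is:

```python
import numpy as np

# wet weights (g) and oxygen consumption rates (µmol/h) of the ten specimens in Table 1
ww = np.array([41.5, 24.2, 7.7, 0.9, 3.0, 1.7, 0.2, 0.6, 3.6, 39.1])
rate = np.array([41.1, 35.4, 16.6, 5.3, 11.8, 8.8, 3.3, 5.5, 12.3, 47.2])

slope, intercept = np.polyfit(np.log10(ww), np.log10(rate), 1)
print(f"allometry coefficient = {slope:.2f}, intercept = {intercept:.2f}")
```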
Effect of oxygen concentration on heart rate and oxygen consumption
In all investigated specimens, both the oxygen consumption rates and heart rates follow the same pattern. For all specimens equipped with electrodes (n=6), oxygen concentration does not affect the heart rate over most of the range of concentrations, until a critical low concentration is reached (Fig. 3). Below that concentration, the heart rate and the oxygen consumption both drop sharply.
The oxygen consumption reaches zero at oxygen concentrations ranging from 7.3 to 9.9 µmole.l -1 for the deeper sites and 11.3-14.2 µmole.l -1 for the shallower site (Table 1). At 10˚C, the heart rate typically oscillates between 61 and 68 beats per minute (b.p.m.) while it usually varies between 90 and 108 b.p.m. at 20˚C for the specimens from the shallower site (Supplementary data Fig. S2). For the specimens from the deeper sites, the heart rate at 10˚C is higher (72.3-81.5 b.p.m.; Table 1) than that of the specimens from the shallower site (62.5 and 69.0; Table 1). Although the heart rate tends to decrease with increasing wet weight, the correlation is not significant (log/log transform linear correlation p=0.15).
Effect of temperature
Temperature affects the beating frequency of the heart for the three specimens tested (Fig. 4). The Arrhenius plot for the two LI individuals shows a biphasic curve between 3˚C and the Arrhenius break point at 19˚C. At a temperature higher than 19˚C or lower than 3˚C, the heart rate is more variable and drops sharply in warmer water, indicating that 19˚C is the upper temperature limit for this species. Below 3˚C (normal deep-sea temperature in the area), the heartbeat is also irregular, possibly indicating a lower temperature limit for this species. This phenomenon is also observed at 6˚C for the specimen from the shallower site (normal deep-sea water temperature 8˚C for this area). In addition to the upper and lower breakpoints, there is an inflection point for the two deeper specimens at 10.7˚C (Fig. 4). The colder part of the curve has a slope of ca. 4, while the upper part of the curve has a slope of 1.7-2. This inflection point could also be present for the shallower specimen but the temperature range below that value is too short to allow a proper estimate of the slope.
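The slopes on either side of the inflection point can be obtained from a two-segment linear regression of ln(heart rate) against reciprocal temperature. The sketch below (Python, on synthetic data shaped like Figure 4; the breakpoint-scanning approach is our illustrative choice, not necessarily the procedure used here) locates the breakpoint by minimising the residuals of two independent fits:

```python
import numpy as np

def two_segment_fit(x, y):
    """Scan candidate breakpoints and keep the one minimising the total
    residual sum of squares of two independent linear fits."""
    best = None
    for i in range(3, len(x) - 3):            # keep a few points on each side
        res, slopes = 0.0, []
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef, rss, *_ = np.polyfit(xs, ys, 1, full=True)
            slopes.append(coef[0])
            res += rss[0] if rss.size else 0.0
        if best is None or res < best[0]:
            best = (res, x[i], slopes)
    return best[1], best[2]

# synthetic Arrhenius data: ln(HR) vs 1000/T(K), slope change near 10.7 degrees C
temp_c = np.linspace(3, 19, 20)
inv_t = 1000.0 / (temp_c + 273.15)
inv_break = 1000.0 / (10.7 + 273.15)
ln_hr = np.where(inv_t > inv_break,
                 4.3 - 4.0 * (inv_t - inv_break),   # colder limb, steeper
                 4.3 - 1.8 * (inv_t - inv_break))   # warmer limb, shallower
ln_hr = ln_hr + np.random.default_rng(7).normal(scale=0.02, size=temp_c.size)

bp, (slope_cold, slope_warm) = two_segment_fit(inv_t, ln_hr)
print(f"breakpoint at 1000/T = {bp:.2f} (about {1000.0 / bp - 273.15:.1f} C), "
      f"slopes = {slope_cold:.1f} / {slope_warm:.1f}")
```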
Modifications of the EKG characteristics
In addition to the beating frequency, temperature also affects the overall shape of the EKG (Fig. 5). It is characterized by two large peaks at 12˚C, in addition to a smaller one preceding the large peaks. The second large peak increases in height in respect to the first one as temperature increases. At 16˚C, the two peaks have approximately the same amplitude, and fuse completely at higher temperatures.
The height of the second peak increases with temperature while that of the first peak remains unchanged up to 16˚C. Beyond that temperature, the height of the fused peaks keeps increasing at a rate similar to that of the second peak, suggesting it is the contribution of that second peak that is responsible for the changes in amplitude. Beyond 20˚C, the amplitude tends to level off or decrease slightly.
Over most of the range tested, oxygen concentration does not seem to affect the amplitude of the EKG (Fig. 6). Below 25 µmole.l -1 of oxygen, however, the amplitude of the EKG drops sharply. This phenomenon occurs at lower oxygen concentrations than the drop in heart rate (32 µmole.l -1 at this temperature for the same animal). At values below 25 µmole.l -1 of oxygen, the shape of the EKG is also significantly affected (Fig. 7), with a drastic decrease of the second peak height, to the point it may completely disappear (Fig. 7, 17 µmole.l -1 oxygen inset). Upon return to oxygen concentrations greater than 25 µmole.l -1 , the EKG returns to its pre-hypoxia characteristics (Fig. 7, 62 µmole.l -1 oxygen inset).
Discussion
Oxygen consumption rate
As expected, the oxygen consumption rates increase with increasing size, and this increase follows an allometry with coefficient 0.48. This coefficient is at the low end of the range reported for other marine crustaceans [START_REF] Vidal | Rates of metabolism of planktonic crustaceans as related to body weight and temperature of habitat[END_REF]. This value is lower than that reported for the shore crab Carcinus maenas (0.598; [START_REF] Wallace | Activity and metabolic rate in the shore crab Carcinus maenas (L.)[END_REF]). There is evidence that the allometry of metabolism is linked to activity, metabolic rate, and habitat [START_REF] Carey | Economies of scaling: More evidence that allometry of metabolism is linked to activity, metabolic rate and habitat[END_REF].
Compared to the other hydrothermal vent species studied to date, Bythograea thermydron, the rate is very similar for the large animals (Mickel and Childress, 1982b). Mickel and Childress (1982) however report an allometry coefficient not significantly different from 1.0, although the total size range in their study was reportedly not sufficient (20.0-111.4 g wet weight) to obtain reliable data. The wet weights in our study cover two orders of magnitude (0.4-41.5 g wet weight), yielding a more reliable allometry coefficient. The oxygen consumption rates are also similar to other deep-sea and shallow-water crustaceans [START_REF] Childress | Metabolic rates of animals from hydrothermal vents and other deep-sea habitats[END_REF]. Deciphering the meaning of the small allometry coefficient will require a comparative study of crabs closely related to Segonzacia and inhabiting different habitats.
Changes in the EKG shape characteristics
In decapod crustaceans, the heartbeat is initiated within the cardiac ganglion, where a small number of pacemaker neurons control this heartbeat (for a review, see [START_REF] Mcmahon | Intrinsic and extrinsic influences on cardiac rhythms in crustaceans[END_REF]. [START_REF] Wilkens | Re-evaluation of the stretch sensitivity hypothesis of crustacean hearts: hypoxia, not lack of stretch, causes reduction in the heart rate of isolated[END_REF] reports that it is maintained in isolated hearts, provided that the partial pressure of oxygen is sufficient. The overall shape of the EKG resembles that recorded for Bythograea thermydron, another hydrothermal vent crab studied by Mickel and Childress (1982) and other, shallow-water, crabs [START_REF] Burnovicz | The cardiac response of the crab Chasmagnathus granulatus as an index of sensory perception[END_REF]. For B. thermydron, the authors did not consider changes in amplitude as a function of pressure because the amplitude seemed to be affected by the time elapsed since electrode implantation and pressure affected the electrical connectors in the vessel. In the present study however, pressure remained unchanged and changes in amplitude of the EKG were accompanied by modifications of the shape of the EKG, suggesting the amplitude changes were not artifacts.
In our recordings, the EKG pattern clearly comprises two major peaks that fuse at temperatures greater than 16˚C. The relationship between each of the peaks and its potential physiological role (pacemaker, cardiac output control) would be an interesting avenue to explore, but the need to work under pressure for S. mesatlantica renders this line of study difficult in this species.
Effect of temperature
For the Menez Gwen animals, the Arrhenius plot of the heart rate revealed that the normal range of functioning for this species lies between 6˚C and 21˚C at 80 bars (8 MPa). The temperature of the deep-sea water at these depths is close to 8˚C, and the animals are then unlikely to encounter limiting conditions at the colder end of the range. Hydrothermal fluid, mixing with the deep-sea water, can however yield temperatures far greater than the upper end of the range, likely limiting the distribution of the crabs in their natural environment, in combination with other limiting factors (e.g. oxygen, sulfide). The upper temperature is however lower than that reported for B. thermydron, the eastern Pacific relative of S. mesatlantica (Mickel and Childress, 1982), although pressure is very likely to affect the physiology of the animals. These authors report that at 238 atm (23.8 MPa, corresponding to their typical environmental pressure) B. thermydron is capable of surviving 1 h at 35˚C but died when exposed for the same duration at 37.5 or 40˚C. Animals maintained at 238 atm and 30˚C however exhibited a very disrupted EKG, and three of the five experimental animals died within 2 hours. Contrary to the East Pacific Rise species B. thermydron, the Mid-Atlantic Ridge species S. mesatlantica is found living as shallow as 800 m.
Animals collected from this shallow site can survive at least 2 weeks at atmospheric pressure (provided they are kept in a cold room at 5-8˚C), suggesting that they do not experience disruptions of the heart function as severe as those observed in B. thermydron at 1 atm (Mickel and Childress, 1982). This hypothesis is supported by the observations performed at 1 atm on freshly collected animals that showed a normal aspect of the EKG (see 'Materials and Methods' section). S. mesatlantica is also found at greater depths on other sites of the Mid-Atlantic Ridge (down to at least 3000 m depth). Our work on specimens from these deeper sites did not reveal an extended upper temperature tolerance; on the contrary, the Arrhenius breakpoint is 2˚C lower for these animals (19˚C for the two specimens tested instead of 21˚C for the shallower site specimen). The same shift is observed for the low temperatures: the lower thermal tolerance is about 6˚C for the shallow water specimen and about 3˚C for the deeper ones.
Overall, it seems the total thermal range is about 16˚C, with a shift towards colder temperature in deeper specimens. The absolute heart rates recorded for our species do not differ greatly from other crabs of similar sizes for comparable temperatures. They are very similar to those obtained for the shore crab Carcinus maenas [START_REF] Ahsanullah | Factors affecting the heart rate of the shore crab Carcinus maenas (L.)[END_REF][START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF], the mud crab Panopeus herbsti, and the blue crab Callinectes sapidus [START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF]. Some other species have much greater heart rates (235 b.p.m. for Hemigrapsus nudus at 18˚C) or much lower values (82 b.p.m. for Libinia emarginata at 25˚C; [START_REF] Defur | The effects of environmental variables on the heart rates of invertebrates[END_REF].
The characteristics of the EKG pattern also changed with temperature. Although the amplitude of the signal for the first large peak remained unchanged, that of the second large peak increased linearly with temperature up to the maximal temperature. This could reflect modifications of the cardiac output in S. mesatlantica. In Cancer magister and C. productus, the cardiac output declines but an increased oxygen delivery to the organs is possible through a concomitant decrease of hemocyanin oxygen affinity [START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF]. There is however no study linking EKG parameters to cardiac output in crustaceans.
Oxygen and capacity limited thermal tolerance (OCLTT)
The concept of oxygen and capacity limited thermal tolerance (OCLTT) was developed to explain the observations on temperature tolerance [START_REF] Frederich | Oxygen limitation of thermal tolerance defined by cardiac and ventilatory performance in spider crab, Maja squinado[END_REF][START_REF] Pörtner | Climate change and temperature-dependent biogeography: oxygen limitation of thermal tolerance in animals[END_REF]. The authors hypothesized that a mismatch between oxygen demand and oxygen supply results from limited capacity of the ventilatory and the circulatory systems at temperature extremes. They argue that limitations in aerobic performance are the first parameters that will affect thermal tolerance. In Segonzacia mesatlantica, the Arrhenius plot of the heart rate exhibits a biphasic profile for the animals from the deeper sites, with an inflection point at 10.7˚C. In the temperate eurythermal crab Carcinus maenas, [START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF] report a similar observation. The authors interpret this inflection as the pejus temperature, beyond which hypoxemia sets in until the critical temperature (onset of anaerobic metabolism). This would then indicate an optimal temperature range of 3-10.7˚C for S. mesatlantica, beyond which the exploitation of the hemocyanin-bound oxygen reserve delays the onset of hypoxemia. Contrary to the C. maenas hemocyanin, however, the hemocyanin from S. mesatlantica does not release oxygen in response to increased temperature, a lack of temperature sensitivity that is found in other hydrothermal vent crustacean hemocyanins (Chausson et al., 2004;[START_REF] Hourdez | Adaptations to hypoxia in hydrothermal vent and cold-seep invertebrates[END_REF]).
Effect of oxygen concentration
At hydrothermal vents, temperature and oxygen concentration are negatively correlated (Johnson et al., 1986). The animals therefore need to extract even more oxygen to meet their metabolic demand when it is less abundant in their environment. The conditions also fluctuate rapidly and animals need to respond quickly to the chronic hypoxia they experience.
In most crustaceans, exposure to hypoxia below the critical oxygen tension induces a bradycardia, coupled with a redirection of the hemolymph from the digestive organs towards ventral structures [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]. In our species, the heart rate remained relatively stable over a very wide range of oxygen concentrations and only dropped below a critical oxygen concentration similar to that of the oxygen consumption. This critical oxygen concentration ranges from 7.3 to 14.2 µmol.l -1 . These values are greater than the half-saturation oxygen tension of the hemocyanin (P50=3.7 µmol.l -1 at 15˚C, [START_REF] Chausson | Respiratory adaptations to the deep-sea hydrothermal vent environment: the case of Segonzacia mesatlantica, a crab from the Mid-Atlantic Ridge[END_REF], suggesting that hemocyanin is not the limiting factor in the failure of oxyregulation at lesser environmental oxygen concentrations. Diffusive and convective (ventilation) processes are likely to limit oxygen uptake. However, the ability to maintain a stable heart rate down to low environmental tensions, along with the high affinity hemocyanin, likely accounts for the very low critical oxygen concentration observed for the vent crabs (this study; Mickel and Childress, 1982) compared to their shallow water relatives (e.g. 100-130 µmol.l -1 in Carcinus maenas; [START_REF] Taylor | The respiratory responses of Carcinus maenas to declining oxygen tension[END_REF].
The EKG amplitude and shape does not change in response to oxygen concentration variations over most of the tested range. Below the critical concentration however, both the amplitude and the presence of the second large peak are affected. This suggests that, although the pacemaker activity remains, the heart either contracts less strongly or does not contract at all. As a result, the hemolymph does not circulate and the animal is unable to regulate its oxygen uptake. As for shallow-water species, the animals are able to survive anoxia. In the crabs Cancer magister and C. productus, this can be tolerated for up to 1 hr [START_REF] Florey | The effects of temperature, anoxia and sensory stimulation on the heart rate of unrestrained crabs[END_REF]. Our experimental crabs were also maintained for the same duration below their ability to oxyregulate, a time during which, once oxygen reserves were depleted, they had to rely on anaerobiosis. During that time, the heart rate varied greatly (Fig. 7), possibly indicating attempts to reestablish oxygen uptake.
Measuring the modifications of blood flow to different parts of the body was unfortunately not feasible inside the pressure vessels. Similarly, we were not able to measure ventilation rate or ventilation flow in our set-up. However, the ability to oxyregulate while maintaining a stable heart rate (neither tachycardia nor bradycardia) strongly suggests that ventilation increases under hypoxic conditions as it does, for example, in C. maenas [START_REF] Taylor | The respiratory responses of Carcinus maenas to declining oxygen tension[END_REF][START_REF] Giomi | A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs[END_REF]. In the hydrothermal vent shrimp Alvinocaris komaii, animals observed in situ in environments characterized by lower oxygen tensions exhibited a higher ventilation rate [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF], as is typical of other decapods (see [START_REF] Mcmahon | Respiratory and circulatory compensation to hypoxia in crustaceans[END_REF]). Although not directly observed in S. mesatlantica, a similar response is very likely.
Conclusion
As all marine invertebrates, Segonzacia mesatlantica can experience hypoxia through both environmental exposure and as a consequence of increased metabolic consumption during exercise. Unlike most marine invertebrates however, environmental hypoxia is chronic, and possibly continuous, for this species. S. mesatlantica, like its East Pacific Rise congener Bythograea thermydron, is well adapted to these challenging conditions and capable of regulating its oxygen uptake down to very low environmental tensions. In S. mesatlantica, this capacity most likely relies on an increased ventilation rate, while the heart rate remains stable. This is probably helped by the increased ventilatory capacity found in the vent species compared to their shallow water relatives [START_REF] Decelle | Morphological adaptations to chronic hypoxia in deep-sea decapod crustaceans from hydrothermal vents and cold-seeps[END_REF]. When not exposed to temperature increase, hypoxia corresponds to metabolic hypoxia and the response only involves ventilation modulation (and possibly circulatory adjustments). For this species, however, a temperature increase is usually correlated with more pronounced hypoxia. Although the response to hypoxia is similar at 10 and 20˚C, temperature itself has a strong effect on the heart rate and the characteristics of the EKG. It would be interesting to investigate whether the lack of temperature sensitivity [START_REF] Chausson | Respiratory adaptations to the deep-sea hydrothermal vent environment: the case of Segonzacia mesatlantica, a crab from the Mid-Atlantic Ridge[END_REF] impacts the cardiac output response to temperature in comparison to non-hydrothermal vent endemic species.
45 beats per minute (b.p.m.), the heart rate increased to 55 b.p.m. between 3.5 and 4 hours after implantation. After pressure was applied, a bradycardia appeared (heart rate down to 35 b.p.m.), which lasted for about 1.5 hours before the heart rate returned to typical values for 10˚C at 80 bars (8 MPa; 60-70 b.p.m., see below). Similarly, animals from the Irinovskoe or Logatchev sites (2700-3050 m depth), acclimated to their in situ pressure, exhibited transient bradycardia when exposed to lowered pressure. Within a few minutes, the animals stabilized their heart rate at values greater than those at simulated in situ pressure. At pressure values lower than 150 bars (15 MPa), the heart rate remained relatively stable (see supplementary material S1). Upon return to the in situ pressure value, the heart rate rapidly returned to the initial value.
Figure 1: Experimental set-up. (A) Electrode implantation on an experimental animal (cephalothorax width about 50 mm). (B) Flow-through pressure vessel, control of oxygen concentration, and position of oxygen optode. The bypass line allows the isolation of the vessel and the measurement of oxygen concentration in the inlet water.

Figure 2: Oxygen consumption rates (oxygen cons.; in µmol.l-1.h-1) as a function of wet weight in grams (WW) for all specimens (n=12, see Table 1 for specimen characteristics). The linear regression has the equation log(oxygen cons.) = 0.48 * log(WW) + 0.87, and a correlation coefficient r2 = 0.959 (p < 0.001).

Figure 3: Oxygen consumption rate (open diamonds) and heart rate (black diamonds) in response to oxygen levels in the pressure vessel. Conditions: 270 bars (27 MPa) of pressure at 10˚C for specimen LI-1 (see Table 1 for specimen characteristics). The curves were fit to the datapoints as described in the Materials and methods section.

Figure 4: Arrhenius plot of temperature-induced changes of the heart rate (HR) of S. mesatlantica for three specimens under in situ pressure. 1000/K: reciprocal temperature in Kelvin (multiplied by 1000 for ease of reading). See Table 1 for specimen characteristics. The Arrhenius breakpoint, inflection point in the relationship, and slope values on either side of this point are also indicated.

Figure 5: Modifications of EKG characteristics in response to temperature under 80 bars (8 MPa) of pressure and at non-limiting oxygen concentrations (50-100 µmol.l-1). Open diamonds: amplitude of the first peak in the EKG; black squares: amplitude of the second peak. Note that at temperature values greater than 16˚C, the two peaks fuse, and only the black symbols are used. Each datapoint corresponds to a mean of 30-40 measurements (depending on the temperature) and its standard deviation.
Table 1: Collection, morphological and physiological characteristics of the experimental animals at 10˚C and under in situ pressure. Animals LI-2 through LI-6 and LI-10 were too small for adequate electrode implantation. MG: individuals from the Menez Gwen site; LI: individuals from the Logatchev or Irinovskoe sites; a: cephalothorax width; b: average value for the plateau area of the graph; nd: not determined.
Specimen ID | Depth of capture (m) | Sex | Size a (mm) | Wet weight (g) | Heart rate b (b.p.m.) | Resp. rate (mmol.h-1) | Critical O2 conc. (µmol.l-1)
MG-1 | 800 | M | 51 | 41.5 | 69.0 | 41.1 | 14.2
MG-2 | 800 | F | 34 | 24.2 | 62.5 | 35.4 | 11.3
LI-1 | 3050 | M | 27 | 7.7 | 81.1 | 16.6 | 9.8
LI-2 | 3050 | M | 13 | 0.9 | No data | 5.3 | 8.8
LI-3 | 3050 | M | 21 | 3.0 | No data | 11.8 | 9.9
LI-4 | 3050 | M | 17 | 1.7 | No data | 8.8 | 8.9
LI-5 | 3050 | M | 7.5 | 0.2 | No data | 3.3 | 8.9
LI-6 | 3050 | M | 12 | 0.6 | No data | 5.5 | 9.2
LI-7 | 3050 | F | 23 | 3.6 | 81.5 | 12.3 | 8.9
LI-8 | 2700 | F | 53 | 39.1 | 74.8 | 47.2 | nd
LI-9 | 2700 | F | 31 | 10.1 | 72.3 | 31.4 | 7.9
LI-10 | 3050 | M | 12 | 0.4 | No data | 6.3 | 7.3
Acknowledgements
All the work described here would not have been possible without the skills and help of the ROV MARUM-Quest crew, not only for animal collections but also for fixing a broken fiber optics cable used in my system: many thanks to a great crew. The crew of the RV Meteor has also been very helpful on board. I would also like to thank Nicole Dubilier, chief scientist, for inviting me on this cruise and for exciting scientific discussions. The modular pressure vessels used in this study were based with permission on a design by Raymond Lee. I would like to thank Jim Childress for very insightful discussions. This research was supported in part by the European Union FP7 Hermione programme (Hotspot Ecosystem Research and Man's Impact on European Seas; grant agreement no. 226354), and by the Region Bretagne HYPOXEVO grant. The German Research Foundation (DFG) and the DFG Cluster of Excellence "The Ocean in the Earth System" at MARUM, Bremen (Germany) are acknowledged for funding and support of the research cruise with the RV Meteor (MenezMar M82/3 and M126) and ROV MARUM-Quest.
The author declares that he has no competing interests.
Dan Radu (dan.radu@aut.utcluj.ro), Adrian Cretu (adriancr.ro@gmail.com), Benoît Parrein (benoit.parrein@polytech.univ-nantes.fr), Jiazi Yi (yi@lix.polytechnique.fr), Camelia Avram (camelia.avram@aut.utcluj.ro), Adina Astilean (adina.astilean@aut.utcluj.ro)
Flying Ad Hoc Network for Emergency Applications connected to a Fog System
The main objective of this paper is to improve the efficiency of vegetation fire emergency interventions by using the MP-OLSR routing protocol for data transmission in Flying Ad Hoc NETwork (FANET) applications. The presented conceptual system design could potentially increase the chances of rescuing people caught up in natural disaster environments, the final goal being to provide public safety services to interested parties. The proposed system architecture model relies on emerging technologies (Internet of Things & Fog, Smart Cities, Mobile Ad Hoc Networks) and current concepts available in the scientific literature. The two main components of the system are a FANET, capable of collecting fire detection data from GPS and video enabled drones, and a Fog/Edge node that allows data collection and analysis, but also provides public safety services for interested parties. The sensing nodes forward data packets through multiple mobile hops until they reach the central management system. A proof of concept based on the MP-OLSR routing protocol for efficient data transmission in FANET scenarios and possible public safety rescuing services is given.
Introduction
The main objective of this paper is to introduce the MP-OLSR routing protocol, which has already proved to be efficient in MANET and VANET scenarios Yi et al (2011a), [START_REF] Radu | Acoustic noise pollution monitoring in an urban environment using a vanet network[END_REF], into FANET applications. Furthermore, as a proof of concept, this work presents a promising smart system architecture that can improve the chances of saving people caught in wildfires by providing real-time rescuing services and a temporary communication infrastructure. The proposed system could locate the wildfire and track the dynamics of its boundaries by deploying a FANET composed of GPS and video enabled drones to monitor the target areas. The video data collected from the FANET is sent to a central management system that processes the information, localizes the wildfire and provides rescuing services to the people (fire fighters) trapped inside wildfires. The QoS (Quality of Service) achieved for data transmission in the proposed FANET network scenario is evaluated to prove the efficiency of MP-OLSR, a multipath routing protocol based on OLSR, in these types of applications.
Wildfires are unplanned events that usually occur in natural areas (forests, prairies) but they could also reach urban areas (buildings, homes). Many such events occurred in recent years (e.g. Portugal and Spain 2017, Australia 2011). The forest fire in the north of Portugal and Spain killed more than 60 people. During the Kimberley Ultra marathon held in Australia in 2011, multiple persons were trapped in a bush fire that started during a sports competition.
The rest of this paper is structured as follows. Section 2 presents the related works in the research field. Section 3 introduces the proposed system design. Section 4 shows and discusses the QoS performance evaluation results. Finally, Section 5 concludes the paper.
Related works
Currently there is a well-known and increasing interest in providing Public Safety services in case of emergency/disaster situations. The US Geospatial Multi Agency Coordination1 provides a web service that displays fire dynamics on a map by using data gathered from different sources (GPS, infrared imagery from satellites). A new method for detecting forest fires based on the color index was proposed in [START_REF] Cruz | Efficient forest fire detection index for application in unmanned aerial systems (uass)[END_REF]. The authors suggest the benefits of a video surveillance system installed on drones. Another system, composed of unmanned aerial vehicles and used for dynamic wildfire tracking, is discussed in [START_REF] Pham | A distributed control framework for a team of unmanned aerial vehicles for dynamic wildfire tracking[END_REF].
This section presents the state of the art of the concepts and technologies used for the proposed system design, current trends, applications and open issues.
Internet of Things and Fog Computing
Internet of Things (IoT), Fog Computing, Smart Cities, Unmanned Aerial Vehicle Networks, Mobile Ad Hoc Networks, Image Processing Techniques and Algorithms, and Web Services are only some of the most promising actual, emerging technologies. These all share a great potential to be used together in a large variety of practical applications that could improve, sustain and support peoples life.
There are many comprehensive surveys in the literature that analyse the challenges of IoT and provide insights over the enabling technologies, protocols and possible applications [START_REF] Al-Fuqaha | Internet of things: A survey on enabling technologies, protocols, and applications[END_REF]. In the near future, traditional cloud computing based architectures will not be able to sustain the IoT exponential growth leading to latency, bandwidth and inconsistent network challenges. Fog computing could unlock the potential of such IoT systems.
Fog computing refers to a computing infrastructure that allows data, computational and business logic resources and storage to be distributed between the data source and the cloud services in the most efficient way. The architecture could have a great impact in the emerging IoT context, in which billions of devices will transmit data to remote servers, because its main purpose is to extend cloud infrastructure by bringing the advantages of the cloud closer to the edge of the network where the data is collected and pre-processed. In other words, fog computing is a paradigm that aims to efficiently distribute computational and networking resources between the IoT devices and the cloud by:
• allowing resources and services to be located closer or anywhere in between the cloud and the IoT devices;
• supporting and delivering services to users, possibly in an offline mode when the network is partitioned, for example;
• extending the connectivity between devices and the cloud across multiple protocol layers.
Currently the use cases and the challenges of the edge computing paradigm are discussed in various scientific works [START_REF] Lin | A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications[END_REF][START_REF] Al-Fuqaha | Internet of things: A survey on enabling technologies, protocols, and applications[END_REF], [START_REF] Ang | Big sensor data systems for smart cities[END_REF]. Some of the well known application domains are: energy, logistics, transportation, healthcare, industrial automation, education and emergency services in case of natural or man made disasters. Some of the challenging Fog computing research topics are: crowd based network measurement and interference, client side network control and configuration, over the top content management, distributed data centers and local storage/computing, physical layer resource pooling among clients, Fog architecture for IoT, edge analytics sensing, stream mining and augmented reality, security and privacy.
There are numerous studies that connect video cameras to Fog & IoT applications. The authors of Shi et al (2016) discuss a couple of practical usages for Fog computing: cloud offloading, video analytics, smart home and city, and collaborative edge. Also, some of the research concepts and opportunities are introduced: computing stream, naming schemes, data abstraction, service management, privacy and security, and optimization metrics. The authors of [START_REF] Shi | The promise of edge computing[END_REF] present a practical use case in which video cameras are deployed in public areas or on vehicles and could be used to identify a missing person's image. In this case, the data processing and identification could be done at the edge without the need of uploading all the video sources to the cloud. A method that distributes the computing workload between the edge nodes and the cloud was introduced in Zhang et al (2016). The authors try to optimize data transmission and ultimately increase the life of edge devices such as video cameras.
Fog computing could be the solution to some of the most challenging problems that arise in the Public Safety domain. Based on the most recent research studies and previous works concerning public safety Radu et al (2012), [START_REF] Yi | Multipath routing protocol for manet: Application to h.264/svc video content delivery[END_REF], it can be stated that real-time image & video analysis at the edge of a FANET network could be successfully implemented in the public safety domain, more specifically for fire detection and for the provisioning of rescuing emergency services.
One of the most important advantages of Fog computing is the distributed architecture that promises better Quality of Experience and Quality of Service in terms of response, network delays and fault tolerance. This aspect is crucial in many Public Safety applications where data processing should be done at the edge of the system and the response times have hard real-time constraints.
Flying Ad Hoc Networks
Unmanned Aerial Vehicles (UAVs, commonly known as drones) are becoming more and more present in our daily lives thanks to their ease of deployment in areas of interest. The high mobility of the drones, with their enhanced hardware and software capabilities, makes them suitable for a large variety of applications including transportation, farming and disaster management services. FANETs are considered a subtype of Mobile Ad Hoc Networks that have a greater degree of mobility, and usually the distance between nodes is greater, as stated in [START_REF] Bekmezci | Flying ad-hoc networks (fanets): A survey[END_REF].
A practical FANET testbed, built on top of Raspberry Pi, that uses two WiFi connections on each drone (one for ad hoc network forwarding and the other for broadcasted control instructions) is described in [START_REF] Bekmezci | Flying ad hoc networks (fanet) test bed implementation[END_REF]. Another FANET implementation that consists of quadcopters for disaster assistance, search and rescue and aerial monitoring, as well as its design challenges, is presented in [START_REF] Yanmaz | Drone networks: Communications, coordination, and sensing[END_REF].

Routing protocols

The OLSR (Optimized Link State Routing) protocol proposed in Jacquet et al (2001) is an optimization of the link state protocol. This single path routing approach presents the advantage of having shortest path routes immediately available when needed (proactive routing). The OLSR protocol has low latency and performs best in large and dense networks.
In [START_REF] Haerri | Performance comparison of aodv and olsr in vanets urban environments under realistic mobility patterns[END_REF] OLSR and AODV are tested against node density and data traffic rate. Results show that OLSR outperforms AODV in VANETs, providing smaller overhead, end-to-end delay and route lengths. Furthermore there are extensive studies in the literature regarding packets routing in FANET's. Authors of [START_REF] Oubbati | A survey on position-based routing protocols for flying ad hoc networks (fanets)[END_REF] give a classification and taxonomy of existing protocols as well as a complete description of the routing mechanisms for each considered protocol. An example of a FANET specific routing protocol is an adaptation of OLSR protocol that uses GPS information and computes routes based on the direction and relative speed between the UAV's is proposed in [START_REF] Rosati | Dynamic routing for flying ad hoc networks[END_REF].
In this paper, the authors use MP-OLSR (Multiple Paths OLSR), a routing protocol based on OLSR and proposed in Yi et al (2011a), which allows packet forwarding in FANET and MANET networks through spatially separated multiple paths. MP-OLSR simultaneously exploits all the available and valuable multiple paths between a source and a destination to balance traffic load and to reduce congestion and packet loss. It also provides a flexible degree of spatial separation between the multiple paths by penalizing the edges of previously computed paths in successive executions of a modified Dijkstra algorithm.
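To make this route computation idea concrete, the following minimal Python sketch illustrates the general principle (it is not the actual MP-OLSR implementation): up to k paths are obtained by re-running a Dijkstra search on a copy of the topology in which the cost of edges already used by previous paths is multiplied by a penalty factor, which encourages, without forcing, spatial separation between paths. The graph representation, the penalty value and the function names are assumptions made for illustration only.

import heapq

def dijkstra(graph, src, dst):
    # graph: {node: {neighbor: cost}}; returns the cheapest path src -> dst as a list of nodes
    queue = [(0.0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # destination unreachable

def multipath_dijkstra(graph, src, dst, k=3, penalty=3.0):
    # MP-OLSR-like multipath computation: after each path is found, the edges it
    # uses are penalized so the next Dijkstra run tends to avoid them.
    work = {u: dict(nbrs) for u, nbrs in graph.items()}  # local copy we can modify
    paths = []
    for _ in range(k):
        path = dijkstra(work, src, dst)
        if path is None:
            break
        paths.append(path)
        for u, v in zip(path, path[1:]):      # penalize both directions of used edges
            work[u][v] *= penalty
            if u in work.get(v, {}):
                work[v][u] *= penalty
    return paths

# Toy example: drone 'a' reaching the sink 's' over two spatially separated routes
topology = {
    'a': {'b': 1, 'c': 1},
    'b': {'a': 1, 's': 1, 'c': 1},
    'c': {'a': 1, 'b': 1, 's': 2},
    's': {'b': 1, 'c': 2},
}
print(multipath_dijkstra(topology, 'a', 's', k=2))  # [['a', 'b', 's'], ['a', 'c', 's']]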
Based on the above considerations, a system architecture that can improve the saving chances of people caught in wildfires by providing real-time rescuing services and a temporary communication infrastructure is proposed.
System Design
One of the main objectives of this work is to design and develop a smart system architecture, based on FANET networks, which integrates with the numerous emergent applications offered by the Internet of Things, that is:
• extensible: the system architecture should allow any new modules to be easily plugged in;
• reliable: the system should support different levels of priority and quality of service for the modules that will be plugged in. For example, the public safety and emergency services that usually have real-time hard constraints should have a higher priority than other services that are not critical;
• scalable: the architecture should support the connection of additional new Fog components, features and high node density scenarios;
• resilient: the system will be able to provide and maintain an acceptable level of service whenever there are faults in normal operation.
The overview of the proposed model, in the context of Internet of Things & Fog Computing, is given in Figure 1. The system could locate the wildfire and track the dynamics of its boundaries by deploying a flying ad hoc network composed of GPS and video enabled drones to monitor the target areas. The fire identification data collected from the FANET is sent to a central management system that processes the data, localizes the wildfire and provides rescuing services to the people (fire fighters) trapped inside the wildfires. The proposed system intends to support and improve emergency intervention services by integrating, based on the real-time data collected from the Fog network, multiple practical services and modules such as:
• affected area surveillance;
• establishing the communication network between the disaster survivors and rescue teams; • person in danger identification and broadcasting of urgent notifications;
• supporting the mobility of the first responders through escape directions;
• rescuing vehicle navigation.
Our FEA (FANET Emergency Application) network topology is presented in Figure 2 and it is composed of three main components:
• A MANET of mobile users phones;
• FANET -video and GPS equipped drones that also provide sufficient computational power capabilities for fire pattern recognition;
• Fog infrastructure that supports FANET data collection at the sink node located at the edge of the network. This provides data storage, computational power and supports different communication technologies for the interconnection with other edge systems.
This last component can be implemented through an object store as proposed in Confais et al (2017b), where a traditional BitTorrent P2P network can be used for storage purposes. Combined with a scale-out NAS as in Confais et al (2017a), the Fog system avoids costly metadata management (even for local accesses) and gains computing capacity thanks to an I/O-intensive distributed file system. Moreover, the global Fog system allows operation in a disconnected mode in case of network partitioning from the backbone.
FEA uses a FANET network to collect fire identification data from drones (GPS and video enabled), and a MANET network composed of users' smartphones. Sensing nodes periodically transmit data to the central management system, where the fire dynamics is determined for monitoring purposes. If a fire has been detected by a sensing drone, rescuing information will be computed based on the dynamics of the fire and broadcasted back into the FANET and MANET, so that the people trapped in the fire can receive the safety information on their smartphones in real time. We make the following assumptions, which will be taken into account for the simulation scenario modelling, regarding the FEA message forwarding (a minimal sketch of the corresponding report message is given after the list):
• when a fire is detected by the sensing drones, they start to periodically forward data packets with the information regarding fire dynamics over multiple hops in the mesh network towards the sink node;
• the central management system processes the fire detection data received from FANET nodes and computes the fire dynamics using the GPS coordinates that are included in the received data messages;
• the central management system sends back into the mobile network (FANET and MANET) rescuing information that will be received by people in danger on their smartphones.
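As an illustration of these assumptions, the sketch below shows one possible shape for the periodic fire reports and the sensing loop run by a drone. The field names, the reporting period and the read_gps, detect_fire and send_towards_sink routines are hypothetical placeholders (standing for the GPS driver, the on-board video analysis and the MP-OLSR forwarding layer); they are not taken from the paper.

import json
import time

REPORT_PERIOD_S = 5.0  # assumed reporting interval, not specified in the paper

def build_fire_report(drone_id, gps_fix, fire_detected, confidence):
    # Hypothetical payload carried by the CBR packets towards the Fog sink node.
    return json.dumps({
        "drone_id": drone_id,
        "timestamp": time.time(),
        "lat": gps_fix[0],
        "lon": gps_fix[1],
        "fire_detected": fire_detected,   # result of on-board video analysis
        "confidence": confidence,         # e.g. score returned by the detector
    }).encode()

def sensing_loop(drone_id, read_gps, detect_fire, send_towards_sink):
    # read_gps, detect_fire and send_towards_sink are injected callables; the last
    # one hands the packet to the routing layer for multi-hop forwarding.
    while True:
        detected, confidence = detect_fire()
        if detected:
            packet = build_fire_report(drone_id, read_gps(), detected, confidence)
            send_towards_sink(packet)     # forwarded hop by hop to the Fog edge node
        time.sleep(REPORT_PERIOD_S)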
In the FEA system, FANET nodes are responsible for fire identification based on video recording, forwarding the processed information (along with GPS coordinates) towards the collector node, and forwarding rescuing information to the MANET nodes. The proposed network architecture could also serve as a temporary communication infrastructure between rescuing teams and people in danger.
One of the many advantages of FEA is its ease of deployment: all the technologies and components of the system are widely available, inexpensive and easy to provide. Also, the Quality of Service in the FANET network, which is essential in emergency services where delays and packet delivery ratios are very important, is enhanced by using the MP-OLSR routing protocol, which chooses the best multiple paths available between source and destination.
System Evaluation
The simulations are performed to evaluate MP-OLSR in the proposed FANET scenario. This section is organized as follows. The simulation environment configuration and scenario assumptions are given in Section 4.1, and then the Quality of Service performances of OLSR and MP-OLSR are compared in Section 4.2.
Simulation Scenario
For the simulations we designed an 81-node FANET & MANET hybrid topology placed on a 1480 m x 1480 m grid. The Random Waypoint mobility model was used with different maximal speeds suitable for the high mobility of drones: 1-15 m/s (3.6-54 km/h). We make the assumption that only a subset of nodes (possibly the ones that detect fire or the smartphones of people in danger) need to communicate with the Fog edge node through the mesh network, so the data traffic is provided by 4 Constant Bit Rate (CBR) sources. Qualnet 5 was used as a discrete event network simulator. The detailed parameters for the Qualnet network scenario and the routing protocol configuration are listed in Table 1. The terrain altitude profile is shown in Figure 3.
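For illustration, the following sketch shows how Random Waypoint traces of the kind used in this scenario can be generated; the area side and the speed range follow the parameters above, while the pause time, the sampling step and the function name are assumptions made for this example.

import random

AREA_M = 1480.0            # side of the simulation area (from the scenario above)
SPEED_RANGE = (1.0, 15.0)  # m/s, as in the scenario
PAUSE_S = 0.0              # assumed: no pause at waypoints
TIME_STEP = 1.0            # s, assumed sampling step

def random_waypoint_trace(duration_s, seed=None):
    # Returns a list of (t, x, y) samples for one node moving under the
    # Random Waypoint model inside a square area of side AREA_M.
    rng = random.Random(seed)
    t = 0.0
    x, y = rng.uniform(0, AREA_M), rng.uniform(0, AREA_M)
    trace = [(t, x, y)]
    while t < duration_s:
        tx, ty = rng.uniform(0, AREA_M), rng.uniform(0, AREA_M)  # pick next waypoint
        speed = rng.uniform(*SPEED_RANGE)                        # pick travel speed
        dist = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5
        steps = max(1, int(dist / speed / TIME_STEP))
        for i in range(1, steps + 1):
            t += dist / speed / steps
            trace.append((t, x + (tx - x) * i / steps, y + (ty - y) * i / steps))
            if t >= duration_s:
                break
        x, y = tx, ty
        t += PAUSE_S
    return trace

trace = random_waypoint_trace(duration_s=100, seed=1)  # 100 s, as in Table 1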
Simulation Results
For each routing protocol, 80 simulations were executed (10 different seeds for each speed value). To compare the performances of the protocols, the following metrics are used:
• Packet delivery ratio (PDR): the ratio of data packets successfully delivered at all destinations.
• Average end-to-end delay: averaged over all received data packets from sources to destinations, as defined in [START_REF] Schulzrinne | Rfc 1889: Rtp: A transport protocol for real-time applications[END_REF].
• Jitter: average jitter is computed as the variation in the time between packets received at the destination, caused by network congestion and topology changes.
A minimal sketch of how these metrics can be computed from per-packet records is given below.
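The sketch below shows one way these three metrics can be computed from per-packet send and receive timestamps. The record format and the jitter formula (mean absolute difference between consecutive end-to-end delays, in the spirit of RFC 1889) are assumptions rather than the exact definitions used by the simulator.

def qos_metrics(sent, received):
    # sent: {packet_id: t_sent}, received: {packet_id: t_received}, times in seconds
    delivered = [pid for pid in received if pid in sent]
    pdr = len(delivered) / len(sent) if sent else 0.0

    # End-to-end delay of each delivered packet, ordered by arrival time
    delivered.sort(key=lambda pid: received[pid])
    delays = [received[pid] - sent[pid] for pid in delivered]
    avg_delay = sum(delays) / len(delays) if delays else 0.0

    # Jitter taken here as the mean absolute variation between consecutive delays
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return pdr, avg_delay, jitter

# Example: 4 packets sent by the CBR sources, 3 delivered at the sink
sent = {1: 0.00, 2: 0.05, 3: 0.10, 4: 0.15}
received = {1: 0.03, 2: 0.09, 4: 0.21}
print(qos_metrics(sent, received))  # -> (0.75, ~0.043, 0.015)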
Figures 4, 5 and 6 show the QoS performance of MP-OLSR and OLSR in terms of PDR, end-to-end delay and Jitter results with standard deviation for each point.
From the obtained results it can be seen that the PDR decreases slightly with mobility, as expected. For the proposed FANET scenario, MP-OLSR delivers on average a 10% higher PDR than the OLSR protocol. As expected, when the speed increases to values closer to the high mobility of FANET scenarios, the links become more unstable, so OLSR performance decreases while MP-OLSR provides a much better overall delivery ratio than OLSR (around 9% on average at higher speeds). MP-OLSR also performs much better than OLSR in terms of end-to-end delay and jitter. The delay of OLSR is around 2 times higher at the highest speed, while its jitter is 50% higher. This aspect is very important for the proposed emergency application, where the response must be provided as quickly as possible. Furthermore, the MP-OLSR standard deviation for all the results is smaller than for OLSR.
Conclusion and Future Work
We described the FEA system as a possible emergency application of the MP-OLSR routing protocol, which uses a FANET network to collect fire dynamics data from drones and, through a central management system, provides safety instructions back to the people in danger. The performance evaluation results show that MP-OLSR is suitable for FANET scenarios, most specifically emergency applications, where the mobility is high and response times have hard real-time constraints.
The following are some of our future works: system deployment on a real testbed, analysis of the cooperation between MANET & FANET, and data analysis based on thermal cameras.
Fig. 1: System overview
Fig. 2: Emergency system architecture
Fig. 3: Qualnet altitude profile pattern for 100 m²
Fig. 4: Delivery ratio
Fig. 5: End-to-end delay
Fig. 6: Jitter
Table 1: Simulation parameters.

Simulation Parameter | Value | Routing Parameter | Value
Simulator | Qualnet 5 | TC Interval | 5 s
Routing protocols | OLSRv2 and MP-OLSR | HELLO Interval | 2 s
Area | 1480 x 1480 x 34.85 m3 | Refresh Timeout Interval | 2 s
Number of nodes | 81 | Neighbor hold time | 6 s
Initial nodes placement | Grid | Topology hold time | 15 s
Mobility model | Random Waypoint | Duplicate hold time | 30 s
Speeds | 1-15 m/s | Link Layer Notification | Yes
Number of seeds | 10 | No. of paths in MP-OLSR | 3
Transport protocol | UDP | |
IP | IPv4 | |
IP fragmentation unit | 2048 bytes | |
Physical layer model | PHY 802.11b | |
Link layer data rate | 11 Mbits/s | |
Number of CBR sources | 4 | |
Sim duration | 100 s | |
CBR start-end | 15-95 s | |
Transmission interval | 0.05 s | |
Application packet size | 512 bytes | |
1 https://www.geomac.gov
Jean-Sébastien Lenfant
Early Debates on Quality, Market Coordination and Welfare in the U.S. in the 1930s
Introduction
The concept of quality in economics, as a relevant aspect of economic coordination, has gone through ups and downs since it was identified as a decision variable of the producer in a competitive monopolist environment in Chamberlin's The Theory of Monopolistic Competition (1933, first ed.; [START_REF] O'brien | Sound Buying Methods for Consumers[END_REF] second ed.). To my knowledge, the concept of quality-how it has been defined and integrated into economic thinking-has not retained the attention of historians of economic thought so far. Anyone looking for milestones in the history of quality in economics will find some accounts in works done in the fields of economic sociology [START_REF] Karpik | L'économie des singularités[END_REF], management (Garvin, 1984) and economics of conventions [START_REF] Eymard-Duvernay | Conventions de qualité et formes de coordination[END_REF][START_REF] Eymard-Duvernay | La qualification des biens[END_REF]. Through the rare articles providing some historical sketch on the concept, the reader will get the idea that it was lurking in the theory of monopolistic competition (Chamberlin, 1933), that it was implicitly accounted for in Lancaster's "new approach to consumer behavior" [START_REF] Lancaster | A New Aproach to Consumer Behavior[END_REF], that it was instrumental to Akerlof's "lemons" market story [START_REF] Akerlof | The Market for 'Lemons'. Quality Uncertainty and the Market Mechanism[END_REF] and, eventually, that it became a relevant variable for the study of macroeconomic issues [START_REF] Stiglitz | The Causes and Consequences of the Dependence of Quality on Price[END_REF]. The purpose of the present article is to provide the starting point for a systematic history of quality in economics, going back some years before Chamberlin's staging of the concept. Indeed, it is our view that quality deserves more attention than it has received so far in economics and that it should not be relegated to marketing or socio-economic studies. A history of economics perspective on this concept is a requisite to help us understand the fundamental difficulties that accompany any attempt at discussing quality in standard economic thought, as well as the fruitfulness of the concept for our thinking about market failures and about welfare and regulatory issues in economics. The way it has been addressed is likely to tell us a lot about how standard views on rational behavior and market coordination have overshadowed cognitive and informational aspects of consumers' and producers' behavior, which were central in many discussions about branding, grading, and labeling of goods as well as about educating the consumer. Those cognitive and informational aspects are now resurfacing within behavioral perspectives on consumption and decision (nudge being one among many possible offspring of this) and consequently reflections on quality in economics may be a subject for renewed enquiry. Actually, many aspects of economic life related to the quality of goods-information, grading, standardizing, consumers' representations of quality-have been at the core of much research and discussion, mainly in the 1930s in the U.S.
Those researches did not proceed at the outset from broad theoretical views on competition or price coordination, instead they stemmed from practical and sectoral accounts of specific impediments experienced by producers of agricultural products on the one hand and, on the other hand, by the progressive setting up of institutions and organizations devoted to consumers' protection, with a particular intensity during Roosevelt's New Deal program. The present article will focus on this body of literature, with a view of enhancing the arguments pro and con the need for institutions protecting the consumer and improving the marketing of goods. First, contributions to this literature are to be replaced within a historical account of the development of institutions in charge of doing research or of implementing standards to improve the functioning of markets and to protect consumers in this period. Research on the subject of quality was undoubtedly driven by a specific historical set of events, notably the development of mass-production, marketing and branding in the 1920s and later on by the decrease of prices and losses of quality that accompanied the Great Depression. The first section deals with the institutional context. It focuses on the prominent role of the Bureau of Agricultural Economics and discusses the motivations and successes of the New Deal institutions linked with consumers' protection. Section 2 deals with the way agricultural economists discussed the issue of quality in relation with coordination failures and their consequences on welfare. As will be shown, early debates on quality in the 20th century lead to two opposite ways of thinking about quality in relation with market coordination. Two points are in order before to embark.
First, a noteworthy aspect of the literature under study here-notably the one linked with farm economics-is that it follows its own agenda independently of theoretical developments in academia. For this very reason, the research on quality and market coordination barely echoes the publication of Chamberlin's book and the follow-up literature, and it is ignored as well in Chamberlin's book. 1 Most of the research dealing with quality in the field of farm economics was done during the years 1928-1939 and continued slowly after WWII until around 1950 or so, then almost vanished.
Second, in those years, the concept of quality not having been introduced into standard economics, we cannot expect to confront ideas of the 1930s with even an idealized theoretical account of quality. It has to be approached as something that is in need of being defined and constructed against the marginalist school of economics.2

1 Promoting the interest of producers ... and consumers
The development of grades, brands, informative labeling, and more broadly of quality indicators as a means to improving the marketing of goods and the allocation of resources was addressed slowly in the 1910s and gained momentum in the 1930s. Our goal in this section is to present the main institutions and organizations that have been involved in the promotion of standards and grades as a means for improving the functioning of markets and the welfare of producers and/or consumers.
The main features of this overview are first, that standards and grading practices have been promoted slowly due to pressures on the part of industrials against it and second, that the structuring of a policy devoted to protecting consumers has been addressed late compared with policies focused on producers protection on wholesale markets. We shall first highlight the role of the Bureau of Agricultural Economics as one of the agencies in charge of promoting quality standards. We then present the role of consumer associations and the development of Home Economics as a field of research and teaching linked with the promotion of the consumer as a legitimate figure whose Welfare should be protected and promoted by government policies. We then move to a general view of the development of standards in the 1930s.
The Bureau of Agricultural Economics and the Grading of Agricultural Products
The history of the Bureau of Agricultural Economics is inseparable from the history of farm economics in the U.S. (1928), it soon appeared that growers and shippers could not know whether price variations resulted from variations in supply and demand or if they reflected differences in the quality of products. Thus, the need for standardizing the goods exchanged was established as a condition for market expansion. 3 It was expected that standards be based on scientific inquiry regarding the factors influencing quality (and then price), in relation with the use of the goods. As Tenny (1946, 1020), a former researcher at the Bureau of Markets, puts it, "other problems seemed to be getting themselves always to the front but none more so than cotton standards and grain standards." It is most interesting that the issue of quality and standards should be in a sense a foundational issue for the BAE. The BAE was then progressively involved in establishing standards for many agricultural products. The issue of standards concerned first and foremost cotton and grain, but "in fact, there was scarcely an agricultural product for which grades were not established, and these all were based largely not only on trade practices but also on scientific studies." [START_REF] Tenny | The Bureau of Agricultural Economics. The Early Years[END_REF] (Tenny, 1946, 1022). In the 1920s, one major contribution of the BAE was the Outlook program, whose aim was to provide forecasts about supply, demand and prices of several agricultural products to farmers and to educate them to deal with the information contained in the outlook and take decisions [START_REF] Kunze | The Bureau of Agricultural Economics' Outlook Program in the 1920s as Pedagogical Device[END_REF]. For certain categories of products, it was conceived that forecasts could be published in time to allow growers to adapt their choice of plantations. As an elite institution producing research and policy recommendations, the BAE has been associated with a progressive view on economic policy-whose purpose was to devise policies that would secure fair prices to growers [START_REF] Mcdean | Professionalism, policy, and farm economists in the early Bureau of Agricultural Economics[END_REF]-and that would deliver well organized and intelligible statistical information about prices. Contrary to farmers' Congressmen and group leaders, most economists at the BAE favored equality of opportunity for farmers and a government policymaking role to improve on market outcomes [START_REF] Hardin | The Bureau of Agricultural Economics under fire: A study in valuation conflicts[END_REF]. Historically, it has been considered convenient to construct official standards for the quality of cotton. Under the Cotton Futures Act, the US Department of Agriculture (USDA) controlled the standardization of cotton offered on exchange contracts. Until then, cotton grown and delivered in the US was classified according to US standards, while it was subjected to Liverpool standards on international markets. There was then a move towards acceptance of American standards throughout the world in 1924. Since its creation in 1922, the BAE's activities have often been criticized for interfering with market outcomes (through market information given to farmers and through predictions regarding futures). However, the BAE's power was limited. In 1938, the BAE was given more influence as a policy-making agency at the Department of Agriculture (rather than being centered on research).
This increase of power was short-lived due to power struggles within the Department between different action agencies, and it would be even more reduced in 1947, after publication of a report on the effects of the return of veterans to agriculture (see [START_REF] Hardin | The Bureau of Agricultural Economics under fire: A study in valuation conflicts[END_REF]).4 Progressively, other kinds of products benefited from grading. This could be on a voluntary basis, as for instance with potatoes. For instance, [START_REF] Hopper | Discussion of [Urgent Needs for Research in Marketing Fruits and Vegetables[END_REF] notes that graded potatoes from other states tended to replace non-graded products. Then, compulsory grading was beneficial to Ontario growers, demonstrating to potato buyers in Ontario that the locally grown product, when properly graded, was equal to that produced in other areas (Hopper, 1936, 418). The BAE was at the forefront of developing standards for many farm products (fruits, vegetables, meat). Its main concern, however, was standards for wholesale markets. Concerns about consumers' welfare and protection through standards and grades of quality would develop later. For instance, the Food and Drug Administration (FDA) had already passed laws protecting consumers against dangerous ingredients and material in food and in pharmaceutical products, but still the protection of the consumer was minimal and most often incidental. Also, the Federal Trade Commission regulations against unfair advertising were conceived of as a protection to producers and not to consumers. Overall, until the late 1920s, the BAE could appear as the main agency able to provide careful analysis and reflections on quality. Things would change to some extent in the 1930s, under Roosevelt's administration.
A Progressive Move Towards Consumers' Protection
The following gives an overview of institutions (private or governmental) involved in promoting standards for consumer goods and protecting consumers. It points to the lack of coordination of the different institutions and their inability to develop an analytically sound basis for action. It is beyond the scope of this article to provide an overview of the associations, agencies, groups, clubs, laboratories and publications involved in consumers' protection. Rather, it is to illustrate that their helplessness in promoting significant changes in legislation reflect disagreements as to the abilities of consumers and efficiency of markets in promoting the appropriate level of qualities to consumers. This period, mainly associated with Roosevelt's New Deal programs, shows the limits and shortcomings of its accomplishment as regards consumer's protection. According to [START_REF] Cohen | A Consumers' Republic. The Politics of Mass Consumption in Postwar America[END_REF][START_REF] Cohen | Is it Time for Another Round of Consumer Protection? The Lessons of Twentieth-Century U.S. History[END_REF], one can distinguish two waves of consumer mobilization before WWII, the first one during the Progressive Era and the second one during the New Deal (1930s-1940s), two periods when reformers have sought to organize movements to obtain more responsible and socially equitable legislations, notably regarding the protection and safety of consumers. While the reformers of the Progressive Era focused on safety laws and better prices (Pure Food and Drug Act, Meat Inspection Act, anti-trust Federal Trade Commission Act) reformers of the 1930s would insist on the promotion of consumers' welfare in general in the context of the Great Depression. The rapid development of private initiatives aimed at defending and educating consumers has been associated to the violent upheavals of the 1920s and 1930s. The 1920s witnessed an acceleration of mass-production and the development of modern ways of marketing goods and advertizing brands, while the Great Depression induced sudden variations in price and quality of goods. Movements active in promoting the protection of the consumer were very diverse in their motivations and philosophy. Consumer associations and clubs were active in organizing consumer education and lots of buying cooperatives over the country were created with a view of rebalancing the bargaining power and offering goods that suited consumers' needs. Incidentally, consumer movements were prominently organized and activated by women-e.g. The American Association of University Women-and many consumers education programs were targeted towards women, who most often were in charge of the budget of the family [START_REF] Cohen | Is it Time for Another Round of Consumer Protection? The Lessons of Twentieth-Century U.S. History[END_REF]. Among private initiatives, let's mention the Consumer Research, the American Home Economics Association, the National Consumers League (founded 1891), the General Federation of Women's Clubs, the League of Women Voters, and the American Association of University Women. Several different associations have been engaged in consumers' education and protection during the 1920s-1930s. This can be looked at within a context of the "Battle of Brands", that is an overwhelming number of brands selling similar products, with important quality differences. 
As Ruth O'Brien, a researcher at the Bureau of Home Economics, would sum up: "Never before have we had so many consumer organizations, so much written and spoken about consumers' problems." (O'Brien, 1935, 104). Numerous college-trained homemakers did lead consumers' study groups on advertising, informative labels and price and quality variations among branded goods [START_REF] O'brien | Sound Buying Methods for Consumers[END_REF]. They were trained through programs of professional organizations such as the American Association of University Women, the American Home Economics Association, the League of Women Voters and the National Congress of Parents and Teachers. Education is important above all for durable goods for which it is not possible to benefit from experience. As bodies active in the promotion of standards and labels, they were asking for facts, that is characteristics of goods: "If consumer education is to be really effective as ammunition, it must consist of carefully directed shrapnel made of good hard facts." (O'Brien, 1935, 105) Before the New Deal, there were already some governmental institutions in charge of consumers' protection, at least indirectly. This is the case of the BAE as well as of the Bureau of Standards and the Food and Drug Administration. Special mention has to be made of the Office of Home Economics (1915) that became a fully-fledged Bureau of Home Economics in 1923 (BHE); it was an agency from the Department of Agriculture whose function was to help households adopt good practices and routines of everyday life (cooking, nutrition, clothing, efficient time-use). Among other things the BHE studied the consumption value of many kinds of goods: the nutritive value of various foods, the utility of different fabrics, and the performance results of houshold equipment. The New Deal period appeared as a favorable period for the expression of consumers' interest. In the framework of the National Recovery Act, the Roosevelt administration sat up two important agencies in this respect: The Consumers' Advisory Board, in the industrial administration, and the Consumers' Counsel in the agricultural administration. Both were supposedly in charge of voicing the consumers' interests. Several authors in the 1930s have expressed hopes and doubts that those institutions would be given enough power to sustain a balanced development of the economy (e.g. [START_REF] Means | The Consumer and the New Deal[END_REF][START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF]Walker, 1934;Blaisdell, 1935, Sussman and[START_REF] Sussman | Standards and Grades of Quality for Foods and Drugs[END_REF]. Gardiner C. [START_REF] Means | The Consumer and the New Deal[END_REF] offers the more dramatic account of such hopes:5
The Consumers' Advisory Board, The Consumers' Counsel-these are names which point to a new development in American economic policy-a development which offers tremendous opportunities for social well-being. Whether these opportunities will be realized and developed to the full or will be allowed to lapse is a matter of crucial importance to every member of the community. It may well be the key that will open the way to a truly American solution of the problem which is leading other countries in the direction of either fascism or communism. (Means, 1934, 7) The move towards recognizing the importance of the consumer in the economic process and its full representation in institutions is thus a crucial stake of the 1930s. To Means and other progressives, laissez-faire doctrines, entrusting the consumer as the rational arbitrator of the economy, are based on an ideal of strong and transparent market coordination, which, it is said, was more satisfactorily met in small enterprise capitalism. Modern capitalism, on the contrary, is characterized by a shift of the coordination from markets to big administrative units, and prices of most commodities are now administered instead of being bargained. The consumer has lost much of its bargaining power and he is no longer in a position to know as much as the seller about the quality of goods. To avoid the hazards of an institutionalization of administered prices, the consumer is essential.6 To Means, as to other commentators, it is high time that consumers be given a proper institutional recognition in government administration. 7 During the New Deal, some associations and prominent figures have been active in urging for a true recognition of the consumers interests through the creation of a Department of the Consumer (asked for by Consumers' Research) or at least a consumers' standard board. This project was notably supported by the report of the Consumers' Advisory Board known as the "Lynd Report" after the name of Robert Lynd, head of the Consumer Advisory Board.8 . The Consumers' Advisory Board was a much weaker organization than the Industrial Advisory Board or the Labor Advisor Board (Agnew, 1934). Despite efforts to promote some kind of independent recognition of the interests of consumers, achieving parity with Departments of Labor, Commerce and Agriculture, it was never adopted. 9 The literature offers some testimonies and reflections pointing out the relative inefficiency and limited power of agencies in charge of standards. On the one hand, there seems to be "general agreement that the interests of the consumer are not adequately represented in the process of government", while on the other hand "there is wide diversity of opinion as to the most effective method of remedying this deficiency. Concepts of the appropriate functions to be assigned to a Consumers' Bureau vary widely" (Nelson, 1939, 151). 10This state of institutional blockage stems from a fundamental difficulty to recognize the Consumer as a powerful and pivotal entity in the functioning of a market economy.11 . Clearly, the consumer represents a specific function (not a subgroup in the economy) and its interest "is diffuse compared with the other interests mentioned; it is harder to segregate and far more difficult to organize for its own protection" (Nelson, 1939, 152). More or less, similar ideas are expressed by [START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF] and [START_REF] Means | The Consumer and the New Deal[END_REF]. To Paul H. 
Douglas, chief of the Bureau of Economic Education of the National Recovery Administration, a genuine balanced organization of powers in a modern economy, searching for harmony of interests would imply the creation of a Department of the Consumer, a necessary complement to the Department of Commerce and the Department of Labor [START_REF] Douglas | The Rôle of the Consumer in the New Deal[END_REF].12
Helplessness of institutions regarding quality standards
If we now look more precisely at the issue of standards and grades, it turns out that the different agencies in charge of setting standards (Bureau of Standards, Bureau of Agricultural Economics, Federal Trade Commission) acted in a disorganized fashion and were hampered in providing specific protection to consumers through the dilution of their power ([START_REF] Nelson | Representation of the Consumer Interest in the Federal Government[END_REF]; see also Auerbach, 1949). By the end of the 1930s, the prevailing comments pointed out that the development of standards and grades for different kinds of consumer goods was not satisfactorily enforced. Ardent advocates for a new Food and Drug Act by which a system of standards and grades of quality would be established, Sussman and Gamer (1935, 581) paint a dark picture of the situation, with only very few products being controlled by standards of quality (tea, butter), whose aim is essentially to define what the product is. The situation is one where government administrations are entitled to set standards of quality in a very loose way. Actually, Sussman and Gamer point out that committees in charge of standards have in general but an advisory status, and that the setting of standards may be purely formal, making the enforcement of laws costly and uncertain:
The Food and Drugs Act, which merely prohibits the sale of adulterated products, presents insuperable obstacles to proper enforcement because it contains no indication of the standard, a deviation from which constitutes adulteration. . . . In consequence, no standards are provided by which a court may judge whether a product is in fact adulterated or misbranded. The result is that each case must stand upon its own facts and the government is obliged to use numerous experts and scientific data to indicate the proper standard and to prove that there was a departure therefrom. (Sussman and Gamer, 1935, 583) Lack of coordination, power struggles and pressures from industry explain that the outcome of the New Deal was eventually seen as unsatisfactory in terms of consumers protection. 13 At the same time, the NRA has imposed informative labeling and NRA tags on many products. It has extended the power to establish standards of quality to cover all food products. It has proscribed unfair or deceptive advertising practices. But only on rare occasions rules or standards have been promulgated that went beyond the proposals made by industrials. 14 What comes out of debates on the proper scope of consumers interests in governmental policy is that it leads to identify the terms in which the status of quality can be handled in economics from a theoretical standpoint. Many issues will be set out on this occasion. First, there is the objective vs subjective account of quality, second, there is the impossibility to grade goods adequately according to a single scale, third, there is the scientific vs subjective way of grading, fourth, there is the allocative effect of the absence of grades or standards on the market vs the disturbing effect of standards on business and innovation, fifth, there is the effect of standards and grades on branding (and incidentally on the range of qualities available to consumers). The general trend towards more standardization and labeling was forcefully opposed by many industries and pro-market advocates. One good illustration of such views is George Burton [START_REF] Hotchkiss | Milestones in the History of Standardizing Consumers' Goods[END_REF]. The main fear of Hotchkiss is that standardization will eradicate branding and will lower the average quality of goods. 15 He brushes aside criticisms that the number of trade-marks bewilders the consumers or that trade marks achieve monopoly through advertising (Hotchkiss, 1936, 74). This is not to deny the usefulness of grades on some products when a scientific 13 Nelson (1939 160) notes that statutory provisions prohibited executive department from lobbying. "Their only permissible activity, which some are exploiting fully, is to assist independent consumer organizations to present their points of view by furnishing them with information and advice. It has proved impossible, however, to maintain any coordinated consumer lobby to offset activities of business pressure groups."
14 One such example is the promulgation of rules regarding rayon industry by the FTC 15 "Trade-marks have developed when consumers came to accept some marks of makers 'as better guides' in purchasing than the hall mark of the Gild, or the seal of the town or crown officer. ... The trade-mark acquired value only through the experience of satisfied consumers, and when consumers found a mark they could trust, they did not care particularly whether it was the mark of a manufacturer or of a merchant. Even though the modern factory system has made it possible for manufacturers in many fields to dispose of their whole output under their own trade-mark, many of them still supply merchants, wholesalers, and large-unit retailers with equivalent merchandise to be marketed under their private trade-marks. Only a small proportion of consumers know or care that one trade-mark is a mark of origin and the other of sponsorship." (Hotchkiss, 1936, 73-74) grading is possible. 16 However, Hotchkiss questions the use of standards imposed by official regulation, because the benefits are counterbalanced by more disadvantages and because limits to the sellers initiative in the end limit the freedom of buyers. Also, buying by specifications (intermediate goods) is not perfect and cannot satisfy all departments within an organization. Or else, there is no absolute uniformity in testing articles and fallible human judgment leads to approximations. Hotchkiss's assessment is typical of a pro-market bias by which in the end consumers, on an equal footing with producers, participate through their choice in fostering the adequate production of an adequate range of qualities for different products. Administrative intervention would but corrupt such a mechanism:
The whole history of official regulation of quality can be summed up as follows. No form of it (that I have been able to discover) has over any long period, been honestly or efficiently administered. No form of consumers' standards has continued to represent the wants and desires of consumers. No form of regulation has ever succeeded in protecting the consumers against fraud. Nor form of it has failed to prove oppressive and irksome to consumers themselves. Few business men have any confidence that a trial of official regulation of quality in America now would work out any more successfully. (Hotchkiss, 1936, 77) Besides, it would prevent the sound regulation through consumers' sovereign judgment:
The marketing of consumer goods is still accompanied by many abuses, but they cannot be ascribed to helplessness on the part of buyers. On the contrary, the marketing system in twentieth-century America puts greater power in the hands of consumers than any similar group has ever known. The power they exercise in daily over-the-counter buying can dictate standards of quality far better than can be done by delegated authority. They can force the use of more informative labels and advertising. (Hotchkiss, 1936, 77-78)
This process can be achieved through private initiatives to diffuse information to consumers. The modern housewife has to search for information by herself, with the help of domestic science experts, dietitians and testing laboratories, always keeping the power of final decision.17 Hotchkiss' stand is in line with a tradition of strong opposition to establishing standards and grades of quality.18 This running idea of a well-informed consumer, having at his disposal, if he wants to, sufficient information to make his choice, is precisely what is challenged by consumer protection movements. Ruth O'Brien, a researcher at the Bureau of Home Economics, notes that consumers' organizations precisely feel resentment "at the fog which baffles and bewilders anyone trying to compare the myriad of brands on the present market." (O'Brien, 1935, 104) What, then, should be the right extent of legislation on standards? Going beyond traditional oppositions, the question was addressed from different standpoints, which laid the foundations for making quality a subject of inquiry for economists. A minimal conception would merely call for the provision of definitions of identity, eliminating the problem for courts of determining whether a product is or is not what it is supposed to be. 19 This preliminary step is notably insufficient if the aim is to protect consumers, who have no way of ascertaining the quality of a product and its ability to satisfy their needs. To this aim, minimum standards of quality are required. Even those would not be enough most of the time if legislation is to be oriented towards protecting consumers; therefore "a comprehensive scheme of consumer protection must embrace definitions of identity, minimum standards of quality, and grades" (Sussman and Gamer, 1935, 587). Beyond that, there are questions left to scientists and technicians, to economists and psychologists, regarding the adequate basis, factors, attributes, properties and characteristics to grade a product. Here, we arrive at the difficulty of deciding with reference to what standards should be determined and of putting such standards in an Act, because they may prove out-of-date or faulty, and because revisions have to be made in due time owing to constant innovations in products and new uses. Manufacturers must innovate and develop their business within a context of constancy of standards. Consequently, "the legislators' function is limited to providing that mechanism which will best serve the purpose of the scientist or technician. Any attempt to set out standards in the act itself might seriously limit the effectiveness of a system of standards and grades."
(Sussman and Gamer, 1935, 589) Fundamentally, even authors who are definitely aware of the necessity of protecting consumers tend to recognize the complexity of erecting such a set of standards if it is to be neither purely formal nor in conflict with some principles of market mechanism and individual liberty: "it is doubtful if government can do more than establish certain minimum standards of physical quality for that limited class of products, the use of which is intimately related to public health and safety. It is also doubtful if its sphere can be much extended without public regulation and control on a scale incompatible with our ideals of economic liberty." (Walker, 1934, 105) This overview of the motives and debates on the protection of consumers, and of farmers as a specific category of agents, in the 1930s shows that there is no consensus as to the proper scope of government intervention and as regards the kind of standards and grades to be promoted. At the end of this descriptive overview of the stakes of introducing quality indicators on goods, it turns out that quality is identified as a complex subject for economics which has definite consequences on market outcomes, and which, for that very reason, deserves to be analyzed in a scientific way (through statistical evaluations, through experiments, through theoretical modeling). Even though the absence of standards or grades is identified as a source of coordination failure and of net losses to producers and consumers, the best way of intervening, through purely decentralized and private certifying bodies or through governmental agencies, is open to debate and in need of economic inquiry. However, in the 1930s, the stage was set for a serious discussion of quality issues in economics. The economists involved would certainly share the view that the subject of quality is an essential element of market coordination, with potentially important welfare effects on both consumers and producers. Thirty-five years before Akerlof's (1970) "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism", Ruth O'Brien could summarize the situation in a clear-cut manner: Grade labeling will affect the brand which has been selling a C grade for an A price. It should. It will affect unethical advertising. It should. But it will help rather than hinder the reputable manufacturer and distributor who now are obliged to meet such kinds of competition. We are all familiar with instances in which a very poor quality of a commodity has completely forced a higher quality off the market because there was no grading or definite means of informing the public of the differences between the products. Only superlatives were available to describe both. It is not to the consumer's interest that all low quality be taken off the market. But it is to her interest that she know what she is buying, that she pay a price which corresponds to this quality, and that she have a basis for comparing different qualities. (O'Brien, 1935, 108, emphasis mine).
Quality as a supplementary datum for economics
This section aims at understanding how the concept of quality could make its way into economic analysis proper. What has been seen in the first section is that there have been debates about quality and standards in the 1920s and 1930s that were driven by the identification of inefficiencies on markets, specifically on farm products markets. The view that what is being sold on markets is as important as the price at which it is sold was firmly established. From a history of economic thought perspective, what needs to be addressed is how economists would engage in giving this idea a scientific content, that is, how they would endeavor to adapt the framework of the marginalist theory of value in order to make room for quality as a supplementary datum of economic analysis. In the following, I regard work in the field of farm economics as a set of seminal contributions growing out of the context of inefficient market coordination on agricultural products described above. One cannot expect to see a whole new body of theory coming out well packed from those reflections. On the contrary, the best that we can expect is a set of ideas about analytical and theoretical issues linked with quality that suggest the intricacies of the subject and the methods to be followed to analyze quality issues. As a first step, I present Frederick Waugh's 1928 seminal statistical work and the follow-up literature, which serve as a starting point to identify themes and the lineaments of their theoretical treatment.
Price-quality relationships for farm products
Frederick Waugh pioneered work on quality and its influence on market equilibrium. "Quality Factors Influencing Vegetable Prices" appeared in 1928 in the Journal of Farm Economics, and was to be mentioned quite often as a reference article on this topic, giving an impetus to much research on the quality of farm products. Waugh's contribution is of first importance, and it probably went unsurpassed in terms of method, setting up a standard of analysis for the next decade. His goal is to focus on quality as a factor influencing price differentials among goods at a microeconomic level, and this is, as he puts it, "an important difference" (Waugh, 1928, 185). The originality of Waugh's study is to concentrate exclusively on the "causes of variation in prices received for individual lots of a commodity at a given time" (Waugh, 1928, 185). The motivation behind it is not purely theoretical; it is that variations between the prices of different lots affect the returns to individual producers. One central point that needs considering is how Waugh and his followers would define quality. Quality, in the end, is any physical characteristic likely to affect the relative price of two lots of goods at the same time on a market. For instance, regarding farm products, it can be shape, color, maturity, uniformity, length, diameter, etc.; and it is the task of the economist to discover those that are relevant from a market value perspective. One can note that quality factors are those that are to play a significant role on the market, at the aggregate level; consequently, it is assumed that even though some consumers may not be sensitive to such or such characteristic, quality factors are those that are commonly accepted as relevant to construct a hierarchy of values on the market and to affect the relative prices of goods. Here, a first comment is worth making. Even though individuals may give some relative importance to different sets of characteristics, only those characteristics that are relevant enough in the aggregate shall be kept in the list of quality characteristics. In some sense, we can say that from this perspective, the market prices observed on different lots reveal quality characteristics. The aim of Waugh is not to question quality per se, but rather to identify that each market for an agricultural product, say cotton, is actually the aggregate of different sub-markets on which different qualities of cotton are supplied and demanded. But at the same time, Waugh is also aware that the information and marketing processes can be misleading and ineffective if they do not correspond to the quality differentials that make sense to market participants. We are thus here, at the very beginning, at the crossing between two potentially different ways of analyzing quality and the coordination aspects linked to it: one that relies on the forces of the market (a balance between producers' and buyers' behavior) to construct quality scales and reveal what counts as a determinant of quality; another one that recognizes that participants in the market are active (perhaps in an asymmetric way) in constructing quality differentials (through marketing and signaling devices) and in making prices reflect those differences. The motivation for the whole analysis is clearly to help farmers adapt their production plan and marketing behavior so as to take advantage of as much as the market can offer them:
The farmer must adopt a production program which will not only result in a crop of the size most suited to market conditions, but he must produce varieties and types of each commodity which the market wants and for which it is willing to pay. His marketing methods, also, should be based on an understanding of the market demand for particular qualities. Especially, his grading and packaging policies should be based on demand if they are to be successful. Such terms as 'No.1,' 'A grade,' or 'Fancy' are meaningless unless they represent grades which reflect in their requirements those qualities which are important in the market. (Waugh, 1928, 186)
On the one hand, it is assumed that we can be confident that the market will deliver information both on what counts as quality and on how much it counts (as explaining a price differential). Let us note that this may be demanding too much of markets, which are first supposed to indicate the equilibrium conditions for each well-identified product, for given conditions on supply and demand. 20 On the other hand, if a relevant quality differential is assumed to exist on a market and this quality differential is not reflected enough (or not at all) in price differentials, then the market is deemed inefficient and it is interpreted as the result of an inability of participants to discriminate between the qualities of different lots. The fundamental tension is here, in the fact that too much is expected from market data: first, to indicate the price premium paid to the best quality; second, to identify the relevant characteristics that explain it; and third, to reveal market failures to value quality. From the last quotation, we would tend to understand that to Waugh, objectivity about quality is structured on the demand side and that farmers should adapt their production and the information associated with each lot to the kind of information that is relevant for buyers/consumers, to the exclusion of other kinds of information. 21 Waugh (1928) reports the results of a study of different products at the Boston wholesale market, recording the price and quality of lots sold and analyzing the influence on price of various factors through multiple correlation methods. We shall retain his analysis of the markets for asparagus and cucumbers. The asparagus market reveals that green color is the most important factor in Boston, explaining 41% of the price variation, while the size factor explains only 15% of the variation of the price. As regards cucumbers, two factors were measured, length and diameter (expressed as a percentage of length), and length explains 59% of the price. 22
20 We will not digress on this issue in a purely theoretical fashion, which would lead us much too far.
21 Recall that Waugh focuses mainly on wholesale markets, where buyers are middlemen. Waugh relies on some statistical studies already done or being done with the aim of eliciting a quantitative measurement of the effect of quality on price. One is on the influence of the protein content of wheat on prices, another one is on egg quality and prices in Wilmington. The goal of those studies is quite practical; it is to identify a possible discrepancy between the structure of demand in terms of quality requirements and the actual structure of supply, and to discuss the possibility for farmers to adjust production in terms of quality. Of course, most of the reasoning can apply to markets with ultimate consumers: "If it can be demonstrated that there is premium for certain qualities and types of products, and if that premium is more than large enough to pay the increased cost of growing a superior product, the individual can and will adapt his production and marketing policies to the market demand." (Waugh, 1928, 187). Occasionally, Waugh criticizes existing surveys that aim at discovering desirable qualities to consumers, because they give no idea of their relative importance, because they are often biased by the methods used, and eventually because the choice of consumption will depend not only on quality but also on price (see Waugh, 'Urgent Needs for Research in Marketing Fruits and Vegetables').
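To make the nature of Waugh's exercise concrete, the kind of price-quality regression he estimated can be sketched in a few lines of Python. The lots, factor values and prices below are invented for illustration only; they are not Waugh's Boston data, and the factor names merely stand in for the characteristics he measured.

# Minimal sketch of a Waugh-style "quality factors" analysis (hedonic pricing).
# All figures are hypothetical; they are not taken from Waugh (1928).
import numpy as np

# Each row describes one lot of asparagus: [inches of green color, stalk size score]
quality = np.array([
    [4.0, 2.0],
    [5.5, 3.0],
    [6.0, 2.5],
    [3.0, 1.5],
    [7.0, 3.5],
    [5.0, 2.0],
])
price = np.array([2.10, 2.80, 2.95, 1.70, 3.40, 2.45])  # price per crate, hypothetical

# Ordinary least squares: price = a + b1*green + b2*size + error
X = np.column_stack([np.ones(len(price)), quality])
coef, _, _, _ = np.linalg.lstsq(X, price, rcond=None)
fitted = X @ coef

# Share of price variation accounted for by the quality factors (R squared),
# the kind of figure Waugh reports as "41% of the price variation".
r2 = 1 - np.sum((price - fitted) ** 2) / np.sum((price - price.mean()) ** 2)
print("implicit premium per unit of each quality factor:", coef[1:])
print("share of price variation explained:", round(r2, 2))

The estimated coefficients play the role of the implicit market premium attached to each quality factor, and the R squared is the counterpart of the shares of price variation Waugh reports for green color, size or length.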
The main conclusion is that there is a discrepancy between what counts for consumers and what serves as official characteristics used for grading the goods:
This type of study gives a practical and much-needed check on official grades and on market reports. It is interesting to note that U.S. grade No1 for asparagus does not require any green color, and the U.S. grades for cucumbers do not specify any particular length nor diameter. It is true that the length of green color on asparagus and length of cucumbers may be marked on the box in addition to the statement of grade, but if these factors are the most important ones, should not some minimum be required before the use of the name? (Waugh, 1928, 195)
To sum up, Waugh considers that on a specific market (here the Boston market) there is an objective hierarchy of the lots sold according to some qualities that buyers consider the most relevant, thus ignoring others. It is not clear, however, to what extent producers are aware of those relevant qualities (in their sorting of lots). This hierarchy is manifestly at odds with the characteristics that define the official grades used by sellers on the market. There is thus a likely discrepancy between the required characteristics of official grades (which are constructed outside the market) and the ones that are important on the market. Waugh calls for a better overlap between official grades and the preferences of consumers (or retailers) on the market. 23 One year later, in 1929, the Journal of Farm Economics would publish a symposium on this very same topic of price-quality relationships, later followed by many studies on different farm products. Clearly, the analysis leads to challenging the theory of value, which relies on scarcity and utility, because at best it does not deal with the influence of quality on utility (Youngblood, 1929, 525). 24 Farm economists working on quality agree that producers, especially farmers, do not care about producing better quality and improving their revenue. They care predominantly about increasing the yield. Moreover, marketing practices on certain markets show that cotton is sold on the basis of an average price corresponding to an average quality. Here, we touch on the issue of a performative effect of the use or non-use of standards on markets. Because high-quality cotton is not rewarded on local markets, producers are not inclined to plant high-quality cotton and are creating the conditions for driving even more of the better qualities out of the market: "While the individual farmer may feel that he is profiting by the production of lowgrade or short-staple cotton, he is obviously lowering the average of the quality of cotton in his market and, therefore, the average price level not only for himself but for all his neighbors. From a community standpoint, therefore, the higher the quality of the cotton, the higher the price level." (Youngblood, 1929, 531) Clearly, Youngblood anticipates very important issues: "It need not be expected that the cotton growers will appreciate the importance of quality so long as they have no adequate incentive to grow better cotton" (Youngblood, 1929, 531). Actually, this is often the case on unorganized markets (mainly local markets), contrary to big regional or national markets, where trading is based on quality. 25 If it is recognized that quality differentials are not systematically accounted for, how can we expect to provide some objectivity to quality as a relevant economic variable? Implicitly, the answer is that some markets, particularly the biggest and best organized, can be taken as a yardstick, as providing an objective scale for quality and price differentials.
What we would like to point out is that, in this body of literature, markets are given the power to make positive valuations of quality differentials appear, provided that participants receive adequate incentives. Clear-cut facts are assumed to be enough to make things function and to increase the wealth of growers. This contention is backed by the principle that a market failure can be established by comparing the outcome of that market with the outcome of another market on which similar goods are exchanged.
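Youngblood's "average price" argument, like O'Brien's remark that poor quality can force better quality off the market, can be restated as a stylized calculation. The figures below are invented purely for illustration and appear in none of the texts discussed here.

# Stylized illustration of pricing on an ungraded market: every lot fetches the
# average value of the pool, so producing the better quality stops paying.
# All numbers are hypothetical.
value_high, value_low = 18.0, 12.0   # cents/lb buyers would pay if quality were recognized
extra_cost_high = 4.0                # extra cost of producing the better quality
share_high = 0.5                     # share of high-quality lots in the pool

# With grading, quality pays:
print("graded market, net to high-quality grower:", value_high - extra_cost_high)  # 14.0
print("graded market, net to low-quality grower:", value_low)                      # 12.0

# Without grading, every lot sells at the pooled (average) price:
pooled = share_high * value_high + (1 - share_high) * value_low
print("ungraded market, pooled price:", pooled)                                    # 15.0
print("ungraded market, net to high-quality grower:", pooled - extra_cost_high)    # 11.0
print("ungraded market, net to low-quality grower:", pooled)                       # 15.0

# High quality is now the less profitable choice; as its share falls, the pooled
# price drifts toward value_low, squeezing out the remaining high-quality growers.

The sketch only makes explicit the incentive problem that the grading proposals were meant to address: once every lot fetches the pooled price, the grower of the better quality bears the extra cost without receiving the corresponding premium.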
An important point in our story is that, more or less, all the economists involved in those years, working in the field of agricultural economics, seem to say that the system of grading is recognized by some participants in the markets, but not by all. This lack of information or knowledge then prevents the working of the markets, from central markets down to local ones. In the case of cotton, the impossibility of assessing quality correctly and of valuing it leads to careless harvesting and to the breeding of high-yielding short-staple varieties, thus leading to a sub-optimal equilibrium on markets (Cox, 1929). 26 The remedies to this situation are to concentrate markets by eliminating smaller ones so as to create enough business, and to develop community production and cooperative marketing. Within a very short time span, a great number of studies on quality as related to price were conducted along the methodological lines set out by Waugh (1928) (see Tolley, 'Recent Developments in Research Method and Procedure in Agricultural Economics'; Waite, 1934; Norton, 1939). Again, the common view is that those studies should serve as guides to production and marketing methods and to establishing standard grades representing variations in quality corresponding with market price differentials. Most of those studies concern the wholesale markets and are supposed to help improve coordination between producers and middlemen (shippers, merchants). Some studies point to the fact that markets can be particularly biased, giving no reward to quality differentials. For instance, Allred, in 'Farm Price of Cotton in Relation to Quality', shows that on spot markets, growers are not rewarded for better quality above Middling. For those qualities, the hierarchy is not reflected in prices.
25 A number of experiments carried out by the BAE on cotton have explored the link between price and quality. They confirm "that staple length is of greater significance than grade" (Crawford, 1929) on organized markets, but not on unorganized ones: 'The unorganized local cotton markets rather effectively kill all incentive that a farmer may have to produce cotton of superior spinning utility. The question of the proper recognition of quality in our local markets is one of the fundamental problems with which we have to deal in cotton production and marketing.' (Crawford, 1929, 541). From statistics on trading on those markets, it turns out that staple length rather than grade is important for price differentials. The need is to provide clear-cut facts about the respective values of different grades on the markets (notably for exporting) and to adapt cotton growing to international demand.
26 "the farmers are not able to class their cotton accurately and a large percentage of the local buyers are not able to do so. Bargaining is done in horse-trading fashion on price and not on quality." (Cox, 1929, 548) 27 To improve the coordination on those markets, it is thus necessary to improve the bargaining power of sellers (growers), to improve their knowledge of quality, and to develop a good system of classing. 28 From this overview of studies done by farm economists, a first blind spot can be identified. In some cases, it is said, sellers and buyers are able to bypass the usual grades, and prices are established according to some quality characteristics that seem to be reasonably shared by both parties on the market. In other cases, notably that of cotton, it is deplored that even though some quality characteristics could be identified as relevant for price differentials, it can happen that some participants do not make efforts to improve the quality of their crop or to sort it out in a proper way, thus creating the conditions for a market on which high quality will not be rewarded, on which traders expect that the relevant variable for dealing will be price, and on which no one expects much from quality differentials. Everything happens as if, because growers fear that the quality differentials will not be rewarded enough to cover the extra cost, they see no need to grow high-quality cotton. What needs to be understood, then, is how far markets are deemed efficient enough by themselves to make quality differentials valued; and if not, what is the proper scope of government intervention.
2.2 Making quality objective: markets do not lie, but implementing a common language on quality is no easy task
Following Waugh's and other farm economists' contributions, we can identify a first set of works whose aim is to discuss the discrepancy between actual systems of grades and the factors that explain price differentials. The economist's point of view on grades, as we have seen, is that consumer grades are a means of securing competitive conditions on the market. Grades are systems of classification of goods aimed at facilitating economic processes, providing information to market participants (producers, growers, middlemen, cooperatives, wholesale buyers, consumers) and improving the formation of prices and consumers' choices. Grades were sometimes adopted on organized markets, but not so much regarding the sale of commodities to consumers. Thus, if grades are expected to play a coordination function on markets, it is necessary that the meaning of each grade be relevant to market participants, notably to buyers. It has often been remarked, after Waugh (1928), that the construction of standards does not necessarily fit with the consumer/buyer view of quality. This is in itself a subject of passionate debates, which has many dimensions. It has to do with measurement issues, with consumers' preferences, with the multidimensionality of quality, and with the variety of uses that a given good can serve.
The data of quality measurement The most common explanation is that factors affecting quality are not easily measured. As Tenny (1928) puts it, "It must not be supposed that Federal standards for farm products necessarily reflect the true market value of the product. There are several reasons why they may not. For instance, there are frequently certain factors which strongly influence market quality for which no practical method of measurement has been devised for use in commercial operations. Until comparatively recently the important factor of protein determination in wheat was ignored in commercial operations although it was given indirect recognition by paying a premium for wheat from sections where the average protein content was high." (Tenny, 1928, 207) Here lies a tension, arising from the tendency to privilege those characteristics of goods that can be measured in a scientific way without relying on subjective judgment, like moisture or protein content for different kinds of cereals. The question is to what extent those characteristics are likely to allow the establishment of grades adapted to the functioning of markets. 29 However, grading cannot always result from scientific measurement. It is often a matter of judgment, appealing to the senses of sight, taste and smell (for butter).
Dealing with multidimensionality Another difficulty with grading is that the multidimensionality of quality makes it unfit for measurement along a single dimension.
Lots of examples are discussed in the literature. Probably the agricultural product most studied in the 1930s is cotton. For the case of cotton, the usual grades used for standards are color and freedom from trash. 30 It is known that the length of the staple is an important factor too, but it is dealt with separately (Tenny, 1928).
Apart from standards on the quality of the bale, there are seven basic grades of cotton linters, based on the length of the fiber. Grades, if they are to synthesize a set of properties not correlated in the good, must be constructed on the basis of an idealized good. For instance, quality grades of cotton have been constructed on the basis of a cotton having perfectly uniform fibers, characterized by its strength and brightness. According to Youngblood there has been a development of "the art of classing" (Youngblood, 1929, 527; see also Palmer, 1934), first through private standards built by spinners, and later through official standards. However, "within reasonable limits, adjacent grades, staple lengths, and characters of cotton may substitute for each other in a given use." (Youngblood, 1929, 528) and no synthetic indicator has been devised. The simplest way of establishing a one-dimensional synthetic grade is to calculate a weighted average of the scores obtained for the different grade factors, as has been done for canned products (Hauck, 1936). If it seems impossible to merge two or more quality properties into one, then it may be enough to grade the different characteristics independently and to let the consumer choose the combination he prefers. 31 It may be difficult to obtain a useful grading system if it is based on too many factors, because those factors can be met independently. If a given lot rates high on some factors and off on another, this can lead to ranking it low. This can lead participants in the market to trade without taking account of the official grade (Jesness, 1933, 710). Also, the factors relevant for grading can change according to the final use of the goods. The color of apples is important for eating but not for cider purposes.
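The weighted-average procedure mentioned for canned products can be written out explicitly. The factors, weights, scores and grade labels below are hypothetical; they are not taken from Hauck (1936) or from any official standard.

# Hypothetical composite grade computed as a weighted average of factor scores.
def composite_grade(scores, weights):
    """Weighted average of factor scores, each score on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(scores[factor] * weights[factor] for factor in weights) / total_weight

weights = {"color": 30, "uniformity": 20, "absence_of_defects": 30, "character": 20}
lot = {"color": 85, "uniformity": 70, "absence_of_defects": 90, "character": 60}

score = composite_grade(lot, weights)
label = "Fancy" if score >= 85 else "Choice" if score >= 70 else "Standard"
print(score, label)  # 78.5 Choice

Collapsing several factors into one number is exactly the step the text identifies as problematic: a lot that is high on most factors but "off" on one can end up with a middling composite score, which is one reason traders sometimes ignored the official grade.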
Grades and consumers' preferences There seems to be a large consensus that grades should be implemented as much as possible with reference to consumers' preferences:
No useful purpose is served in attempting to judge the flavor of butter unless flavor affects the demand for butter. If color has no influence on demand, why be concerned with an attempt to measure it? The problem of defining market grades in reality is one of determining the considerations which are of economic importance in influencing demand and then to find technical factors which are susceptible of measurement as a means of assigning proper weights to each of them. The economic basis of grades is found in factors affecting the utility of goods and hence the demand for them. Grades are concerned with the want-satisfying qualities of products. (Jesness, 1933, 708-709).
Actually, this leads to recognizing that very little is known about consumers' preferences and their willingness to pay a premium for such and such quality differential (Hauck, 1936, 397). In dealing with this aspect of the construction of official grades and standards, we arrive at a limit point. Shall we assume that consumers' preferences are given and that all, or at least the majority, agree with the list of relevant characteristics that make quality? For instance, Jesness, chief of the Federal Bureau of Agricultural Economics, recognizes that in case consumers are not aware of the meaning of grades, they will soon learn to adopt the grading system provided to them by experts: That the consumer may not always appear to exercise the best judgment in his preference is beside the point. The primary purpose in grades is to recognize preferences as they are, not as the developer of grades may think they should be. This, however, is not a denial of the possibility of using established grades as a means of educating consumers in their preferences. (Jesness, 1933, 709) Thus, in the end, if it turns out that no grading system is self-evident and easily recognized as useful to consumers, at least it can function as a focal point and become common knowledge. There must be some expert way of indicating a hierarchy of product qualities. At the same time, there is much to be known about consumers' behavior. Certainly, there may be different demand schedules according to grades, and it would be useful to obtain information about the influence of grades over one another and to know precisely what consumers use the product for (Jesness, 1933, 716; Waite, 1934). Even though economists are well aware of this, they can but acknowledge that no satisfactory methods for eliciting preferences have been devised. Eliciting preferences from observation does not deliver information about what the consumer is aware of and what, if ever he is, he takes as relevant for his choice. 32 Waite, for instance, advocates giving consumers information that they may not at first consider relevant for their choice and for the quality of the goods they consume:
It is a valuable thing to indicate to consumers specifications of essential qualities even though these do not become reflected in price, but this is more or less of a social problem since it involves the should aspects of the problem. Economics demands that we proceed somewhat differently and endeavor to indicate groups that are significantly price different both from demand and supply aspects. (Waite, 1934, 253)
Articulating preferences and income Agricultural economists did not engage very far in the study of preferences. The main idea is that there should be some representative preferences and buying practices for different strata of income. Better knowledge of preferences by income group would help to determine what percentage of a top grade in a crop is necessary to make it pay to sort lots out and sell them separately (Hauck, 1936). To Norton, for instance, different markets are actually to be related to different strata of income: "What is needed for accurate analysis of retail price differentiation is an accurate measure of how different strata of demand respond to different price policies. Certainly the theoretical reactions of groups will vary. At high-income levels a minor change in the price of a food item will not affect purchases; at low-income levels, it may have a decided effect." (Norton, 1939, 590) 33 As Froker (1939) would point out, most preference studies merge preferences and demand behavior. Notably, providing incentives to farmers to increase quality should not lead them to think that the best quality can be sold without limit. 34 As Rasmussen (1939) would make clear, only a small proportion of American families have sufficient incomes to buy the best quality of food. In the end, only a comprehensive study mixing knowledge on preferences, incomes and uses of products would allow the development of a system of grades that improves market coordination: "If grades are to be the means both of increasing net farm income and of consumer satisfaction, it seems obvious that such grades must bear definite and clear-cut relationships to the economic desires of both dealers and consumers, and must recognize (first) differences in levels of consumer purchasing power; (second) differences in preferences of individuals; and (third) differences in the purposes for which products may be used and the qualities needed for each purpose." (Rasmussen, 1939, 149)
32 The method of questionnaires or statistical studies on choice is criticized. They will not disclose what the consumer would do if granted the opportunity to buy the good. Besides the usual difficulties, one must get an idea of whether "failure [of a characteristic] to be price significant is due to inability of consumers under present marketing methods to differentiate these qualities, or consumer ignorance of their importance, or simply indifference of consumers" (Waite, 1934, 252).
33 Norton (1939) is probably one of the first to link his analysis with Chamberlin's theory of monopolistic competition. To Norton, the different factors used to differentiate food products to reach income groups might be classified as follows: A. Service differentials: delivery vs carrying; cash vs credit; packages vs bulk; B. Product differentials: "quality: a range of choices", size, price of cut; "style: up-to-the-minute or out-of-date"; C. Advertising differentials: branded vs unbranded; featured characteristics vs standard grades; presumed uniformity or necessity for expert knowledge. The factors put to the fore differ according to the farm products. He illustrates this with the milk market of New York City and with the automobile industry (proposing different lines of cars at different prices, including second-hand cars).
34 Also, Secretary Henry Wallace would assess in 1938 in his annual report: "We need to avoid too much insistence on only first-quality foods. All foods should meet basic health requirements; but thousands of families would rather have grade C food at a low price than grade A food at a high price, and thousands of farmers have grade C food to sell. Our marketing system must efficiently meet the needs of the poor as well as of the rich." (quoted in Rasmussen, 1939, 154)
The above considerations lead us to consider the idea that markets may not be enough for understanding coordination outcomes once it is recognized that information (or the absence of information) about grading is influential on market outcomes.
Toward a cognitive theory of quality: protecting consumers from market failures
If grades are not used on markets, or if they do not play a role in establishing price differentials, it does not prove that grade specifications are wrong: "It does indicate either that consumers don't recognize quality (at any rate they [do not] reward it by paying a premium to get it) or that retailers base their prices upon factors other than quality, or that our standards of quality differ substantially from those which consumers consider important." (Hauck, 1936, 399) Here, we come to a quite different view of the use of government intervention through quality standards. The goal is not to help sellers and buyers to share a common language based on quality characteristics that all recognize as relevant for improving their coordination on markets. It is more ambitious and contains a normative account of quality and of the nature of government intervention. It is contemplated that grading necessarily contains an educational dimension, and that grading contributes to the formation of preferences instead of just revealing them and making them expressible on markets. We mean here that some authors have pointed out that, contrary to the market-driven coordination point of view, market failures indicate a need to develop grades as a means of repairing the failures of market coordination and the causes of those failures, which are made possible by the exploitation of consumers' cognitive deficiencies. According to Waite, "Moreover, the grades tend to protect consumers from certain obvious abuses arising from the profit making motive of the economic order. For example, there is a tendency for businessmen in a competitive society to secure protection for their sales by building around their product thru [sic] brands or other distinguishing devices semi-monopolistic situations. Grades break down these protective devices by expanding similarity of essential characteristics to a broader group." (Waite, 1934, 248) This is clearly pointing to a normative role of grades, understood as a protective device to help consumers improve their bargaining power, not merely to improve coordination. There is a counterbalancing effect of grades. Gilbert Sussman and Saul Richard Gamer, two members of the Agricultural Adjustment Administration, take as a starting point "that the consumer has no practical way of knowing or discovering at present the quality of any food and drug he buys, much less whether any particular brand of a product he purchases is good, bad or indifferent as compared to any other particular brand which he might have chosen." (Sussman and Gamer, 1935, 578). This fact is well recognized, and consumers are frequently misled or deceived by such a situation, notably since "it has been indisputably established that the price at which a particular article may sell is not a satisfactory, if any, index to the quality of the product. Nor does the use of brand or trade names supply an adequate guide." (Sussman and Gamer, 1935, 578). The great number of brands for particular articles forbids rational buying. This view is radically at odds with the starting point of Waugh's reflections on the quality-price relationship, and it thus rejects the market point of view on quality. If markets are to work as coordination devices, buyers must be helped to make well-informed decisions. The cognitive limits of the consumer are recognized as a basis for producers' resistance to mandatory grades.
It may be readily admitted that the fact that consumers frequently are not rational in their decisions is a limitation which is encountered in this field. . . . The irrationality of the consumer itself may well be worth studying in connection with determining upon the economic basis of market grades. (Jesness, 1933, 716) Quality per se is not something given to consumers, not something evident. On the contrary, the grading of goods or the rating of commodities is said to acquaint consumers with the characteristics of a good that the expert deems essential (Waite, 1934). Here we touch on what is probably the most delicate issue from a theoretical and policy point of view. There is clearly the idea that grades, and more specifically any system of rating of consumers' goods, influence the preferences of consumers by constructing their system of preferences and the way they assess goods, highlighting some characteristics over others. Waite identifies that producers are usually reluctant to adopt standards for consumer goods. The adoption of grades stems from a necessity to bypass the cognitive limitations of consumers: "Where such grades have been accepted by the industry it has been usually because qualities were indistinguishable by consumers and misrepresentation so rampant that consumers were utterly bewildered and hesitated to purchase with a consequent great decline in sales and individual profits." (Waite, 1934, 249) Otherwise stated, producers are willing to accept standards when doing so reduces an information asymmetry that is the cause of a low level of transactions. This analysis does not make clear what comes from a pure absence of knowledge on the part of the consumer and what comes from difficulties in coping with too much information. Hence, in the absence of government intervention to constrain producers, grade labeling is often permissive (and not compulsory). "But where the market is not demoralized there is strong opposition to the adoption of consumer grades. Here those with a reputation for consistently superior products may secure enhanced prices because of that reputation, and those with shoddy products may secure higher prices than they could with labeling. It is unlikely that many products will find their markets sufficiently demoralized by bad trade practices to accept readily mandatory grades. This has forced us to make grades largely permissive in character. We have had sufficient experience with these permissive grades to demonstrate that in the majority of cases opposition of important trade groups will preclude their widespread adoption. With permissive grades the only hope is to educate consumers to purchase products so labeled, but with the inertia of consumers and determined resistance of a considerable part of the trade practical results are remote." (Waite, 1934, 249) In any case, there is consumer inertia, and permissive labels have little effect on behavior. Clearly, Waite identifies an opportunity for strengthening rules about standards and labels, which may allow a reduction of the overall exploitation of consumers' ignorance about qualities: "The participation of the government as a party in these agreements [about codes and marketing between producers in many industries], charges it with the duty of a broad social viewpoint, which includes among other things insistence of protection of consumers from the exploitation which is widespread under competitive system.
This opportunity is passing rapidly and it is pathetic that we are failing in the use of it." (Waite, 1934, 249) The interpretation of the need for grades also determines the kind of policy recommendation. To Waite, it is a mix of given consumers' preferences and expert analysis that should be constitutive of the definition of grades:
The grades may specify simply the characteristics which are now judged important with respect to products by consumers themselves as reflected for example in the price they are willing to pay. The grades may specify, however, characteristics which consumers would judge important and for which they would be willing to pay if they were able to distinguish them or were provided with the opportunity. These characteristics may be unassociated with present easily observable external characteristics known to consumers, or may be observable but due to other associated undesirable characteristics from which they have not been separated the consumer may be unable to register a preference. Finally the grades may specify characteristics which are judged important by expert opinion. They may designate qualities which should be important to consumers. Grades in this sense contain an element of propaganda in the direction of consumption in desirable channels, the full force of which we do not know, as yet. Consumers may feel the higher grades more valuable, particularly in the cases where the specifications are not readily distinguishable and in those cases they will probably react with a willingness to pay somewhat higher prices, thus widening the spread between the better and lower qualities. This is a form of the time honored device now used by business men to differentiate their product and sell to consumers at a higher price because the consumer is made to think the product superior and it may be turned to the advantage of consumers by designation by disinterested agencies. The second advantage of consumer grades is that the designation of these qualities may assist early subdivision of the product into groups possessing these characteristics. Early subdivision will facilitate economical handling of the product and will tend to reflect back to producers characteristics desired by consumers. This should lead to higher prices for these types of products possessing these characteristics and a subsequent larger production wherever these qualities are subject to control. This, in turn, should result in greater consumer satisfaction and enhanced incomes to the more effective producers. (Waite, 1934, 250)
The rise of the Office takes place in a context of rapid expansion of markets from a local to a national and international scale. Under the direction of Taylor, the Office of Farm Management took control over the Bureau of Markets and the Bureau of Crop and Livestock Estimates. The BAE was officially established as an agency of the U.S. Department of Agriculture (headed by Secretary Wallace) by Congress on July 1st, 1921. The Bureau of Markets, created in 1915, was in charge of helping farmers to market their crops. Notably, it organized a telegraphic market news service for fruits and vegetables. According to Lloyd
S.A. It gathered pioneers of farm economics soon after World War I who developed sophisticated methods to estimate demand and supply functions on different agricultural markets. The BAE emerged little by little, as an extension of the Office of Farm Management, under the leadership of Henry C. Taylor from 1919 onwards. He recruited new personnel to high training standards in economics and organized the Office into different committees, each focused on one aspect of farm economics. His goal was to promote new methods of management and a reorganization of farms adapted to market conditions
(McDean, 'Professionalism, Policy, and Farm Economists in the Early Bureau of Agricultural Economics').
Of course, the effects of monopolistic competition on the concept of quality in economics shall be a subject for future study. Suffice it to mention that reflections on quality based on monopolistic competition tools did not actually blossom until after WWII. However, this does not contradict the fact that many arguments used by agricultural economists do have a monopolistic-competition flavor.
This is not to deny that quality has been a relevant issue in economics since the 18th century, and that it has been part of the legal-economic nexus since the Middle Ages (Lupton, 'Quality Uncertainty in Early Economic Thought').
Among other justifications for standardization is the need for credit. Farm products being used as collateral for loans, lenders need to appraise the quality of the products. More generally, standardization reduces transaction costs.
Notably, appropriations for economic investigations were severely reduced between 1941 and 1947, while more appropriations were given to crop and livestock estimates, thus reducing the ability of the BAE to sustain research and policy recommendations (see Hardin, 1946, 641).
Gardiner C. Means, a member of the Consumers' Advisory Board in the National Recovery Administration, was called to act as Economic Adviser on Finance to the Secretary of Agriculture.
The main thesis in Means (1934), that administered prices end with control over production and increases in prices that are detrimental to overall welfare, needs no further comment here.
To Means, "First in importance among such organizations would come those which are in no way committed to the producer point of view-teachers' societies, organizations of Government employees, churches, women's organizations, engineering societies, and, of course, the consumer cooperatives. These organizations could in a clear-cut manner carry the banner of the consumer and act as channels through which consumers' action could be taken. They are in a position not only to educate their members but also through their representatives to exert definite pressure to counterbalance moves on the part of producer interests which would otherwise jeopardize the operations of the economy."(Means, 1934, 16)
On Lynd's personal record as a theologian and social scientist, see McKellar (1983).
Such a Department would be entrusted with the development of commodity grades and standards, acquainting consumers with established rules and standards, crystallizing consumer sentiment and urging business and government agencies to cooperate in the effort. To Nelson, this is a stillborn project: "Conceivably this proposal may constitute an ultimate goal; it is not an immediate practical possibility." (Nelson, 1939, 162)
Regarding consumer education, various federal agencies made available to the consumer information which would permit him to buy more efficiently. But official publications had too limited a circulation. The best known was the Consumers' Guide, published for five years by the Consumers' Counsel of the Agricultural Adjustment Administration, with a maximum permissible circulation of 135,000.
11 The original and "natural" organization of powers was first done along functional lines in the U.S. (there are departments of State, of War, of Navy, of Treasury, of Justice), each representing the citizens as a whole. When new Departments were established, they represented specific interests of major economic groups (Department of Agriculture, Commerce, Labor). "Thus far, the consumer has not been accorded similar recognition. This is not at all surprising. It is only recently that the distinctive nature of the consumer interest has come to be clearly understood and that its representatives have become articulate." (Nelson, 1939, 151)
12 Even under the NRA, when the Consumers' Advisory Board was accorded parity with the Advisory Boards representing Industry and Labor, it faced constant opposition and seldom succeeded in "achieving any effective voice in NRA Policy." (Nelson, 1939, 156)
In a few fields, such as dairy products, considerable progress has been made toward setting up grades that are useful to the ultimate consumer. This is easier with milk, where the degree of freedom from bacteria may be the basis for distinguishing between Grades A, B, C, than with butter or cheese, where relative desirability rests on a composite of characteristics. 'Scored' butter has been available for some time, but only a small percentage of housewives have shown a disposition to use the 'score' of butter as their guide in purchasing. Even in buying milk the Grade is only one of many factors that determine the consumers' choice. (Hotchkiss, 1936, 76)
The only thing that could induce consumers to partly forego their freedom of choice is the offer of a financial saving, through buying cooperatives (like book clubs), thus abiding by the choice of books made by their committee. (Hotchkiss, 1936, 78)
The opposition to giving the government authority on standards, for instance, was successful in the Food and Drug Act, senators arguing that "each case should stand upon its own facts" (quoted by Sussman and Gamer, 1935, 585).
"No longer will a court, in a prosecution for adulteration or misbranding, be compelled in the first instance to determine whether a particular article is or is not a macaroon"(Sussman and Gamer, 1935, 585)
Waugh also identifies factors affecting the price of tomatoes: the main ones are firmness (30%) and absence of cracks.
But Waugh also wonders whether this hierarchy on the Boston market is the same on other markets.
The standard theory assumes that goods on a market are homogeneous and that there is not the slightest difference in quality between two units of the same good consumed. Otherwise, this would cause a change in the preference for goods and a consequential change in the ratio of exchange (Jevons, 1871; Clark, 1899).
Regarding lower grades (inferior to Middling), the prices paid and the discounts more or less reflect the discounts observed on spot markets. More or less, most studies confirm those results, confirming the weak relationship between quality and price (see Cox, 'Factors Influencing Corn Prices'; Kapadia, 'A Statistical Study of Cotton Prices in Relation to Quality and Yield'; Hauck, 1936, 399; Garver, 1937, 807). Among the factors explaining this situation is the fact that those local markets are not as liquid as spot markets.
This may imply promoting the use of single-variety communities and of good practices for harvesting and ginning, and introducing licensed classing of samples offered by associations of growers. There were some cotton classing schools. On the art of classing cotton, see Palmer (1933).
A connected issue is that some standards that can be adapted to the wholesale market will be useless on the retail market.
Grading cotton consists in appraising the cotton by observation of the color, the bloom, and the amount of waste appearing in a sample of cotton taken from the bale, while stapling is the method of valuing the cotton by measuring the length, strength and fineness properties of the fibers. These estimates are subject to considerable errors of judgment.
e.g. a blanket can be graded according to warmth and according to durability. |
email: celineramdani@hotmail.fr
Franck Vidal
Alain Dagher
Laurence Carbonnell
Thierry Hasbroucq
Dopamine and response selection: an Acute Phenylalanine/Tyrosine Depletion study
Keywords: Dopamine, Supplementary motor areas, Simon task, Electroencephalography, Response selection, Acute phenylalanine/tyrosine depletion: APTD
The role of dopaminergic system in decision-making is well documented, and evidence suggests that it could play a significant role in response selection processes. The N-40 is a fronto-central event-related potential, generated by the supplementary motor areas (SMAs) and a physiological index of response selection processes. The aim of the present study was to determine whether infraclinical effects of dopamine depletion on response selection processes could be evidenced via alterations of the N-40. We obtained a dopamine depletion in healthy volunteers with the acute phenylalanine and tyrosine depletion (APTD) method which consists in decreasing the availability of dopamine precursors. Subjects realized a Simon task in the APTD condition and in the control condition. When the stimulus was presented on the same side as the required response, the stimulus-response association was congruent and when the stimulus was presented on the opposite side of the required response, the stimulus-response association was incongruent. The N-40 was smaller for congruent associations than for incongruent associations. Moreover, the N-40 was sensitive to the level of dopaminergic activity with a decrease in APTD condition compared to control condition. This modulation of the N-40 by dopaminergic level could not be explained by a global decrease of cerebral electrogenesis, since negativities and positivities indexing the recruitment of the primary motor cortex (anatomically adjacent to the SMA) were unaffected by APTD. The specific sensitivity of N-40 to ATPD supports the model of Keeler et al. (Neuroscience 282:156-175, 2014) according to which the dopaminergic system is involved in response selection.
Introduction
Decision-making can be regarded as a set of cognitive processes that contribute to the production of the optimal alternative among a set of concurrently possible actions. The role of the dopaminergic system in human decision-making is well documented (e.g., [START_REF] Montague | A framework for mesencephalic dopamine systems based on predictive Hebbian learning[END_REF][START_REF] Montague | Computational roles for dopamine in behavioural control[END_REF][START_REF] Rogers | The roles of dopamine and serotonin in decision making: evidence from pharmacological experiments in humans[END_REF].
Response selection (i.e., the association of a specific action with a specific sensation) can be considered as the core process of decision-making [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], and one can wonder whether the dopaminergic system is directly involved in this process. According to [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the striatal direct pathway (D1 receptor subtype) would allow preparation for response selection, while the striatal indirect pathway (D2 receptor subtype) would allow selection of the appropriate response within the prepared set of all possible responses. [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF] called this system a "prepare and select" architecture.
The implication of the dopaminergic system in preparatory processes has been widely acknowledged in animals. Response preparation is impaired in rats after dopamine depletion [START_REF] Brown | Simple and choice reaction time performance following unilateral striatal dopamine depletion in the rat[END_REF]. Now, [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] went a step further showing that, during preparation, the activation of the dopaminergic system adjusts to the difficulty of the response selection to be performed after this preparation.
In humans, taking advantage of the high iron concentration in the substantia nigra (SN), which reveals this structure as a relatively hypodense zone on T2*-weighted images (including EPI volumes), [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] accurately examined the activation of the SN during the 7.5-s preparatory period of a between-hand choice reaction time (RT) task. At the beginning of the preparatory period, a precue indicated which one of two (easy or difficult) stimulus-response associations should be applied when the response signal (RS) would be delivered, at the end of the preparatory period. The SN BOLD signal increased after the precue in both cases. However, whereas the BOLD signal returned to baseline towards the end of the preparatory period in the easiest of the two possible response selection conditions, this signal remained at high levels until the end of the preparatory period in the most difficult condition. Interestingly, no such interaction could be evidenced in the neighboring subthalamic nucleus (STN); given the close functional relationships between STN and SN pars reticulata, the authors convincingly argued that the BOLD signal sensitivity to the difficulty of the selection process resulted from a sensitivity of the dopaminergic neurons of SN pars compacta to this manipulation. This interpretation is highly consistent with Keeler et al.'s (2014) model, which assumes that the dopaminergic system plays a prominent role in response selection processes. Now, after the RS, that is, in the period when response selection itself occurs, no differential effect could be evidenced, but this might easily be explained by the poor temporal resolution of the fMRI method (RTs were about 550 and 600 ms only, in the easy and difficult conditions, respectively).
Evidencing a direct effect of the dopaminergic system on response selection processes themselves (which take place during the RT period) would lend additional direct support to Keeler et al.'s view that the dopaminergic system plays an essential role in response selection, not only in preparing for its difficulty [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] but also in carrying out response selection processes themselves; this was the aim of the present study, which is made explicit below.
Among the main targets of the basal ganglia (via the thalamus) are the supplementary motor areas (SMAs). Recent fMRI data did demonstrate that acute diet-induced dopamine depletion (APTD) impairs timing in humans by decreasing activity not only in the putamen but also in the SMAs [START_REF] Coull | Dopamine precursor depletion impairs timing in healthy volunteers by attenuating activity in putamen and supplementary motor area[END_REF], whose role in motor as well as sensory timing is well documented (e.g., [START_REF] Coull | Neuroanatomical and neurochemical substrates of timing[END_REF]).
SMAs are often assumed to play a prominent role not only in timing but also in response selection (e.g., Mostofsky and Simmonds 2008, for a review). Taking into account the sensitivity of the SMAs to dopamine depletion [START_REF] Coull | Dopamine precursor depletion impairs timing in healthy volunteers by attenuating activity in putamen and supplementary motor area[END_REF], one might therefore wonder, within the framework of Keeler et al.'s (2014) model, whether their activities would also be impaired by APTD during the reaction period of an RT task in which a response selection is required. Given the short time range of RTs, a high temporal resolution method is needed to address this question. EEG seems particularly well suited, since it is classically considered to have an excellent temporal resolution [START_REF] Sejnowski | Brain and cognition[END_REF].
During the reaction time of a between-hand choice RT task, an electroencephalographic (EEG) component has been evidenced in humans (the N-40; [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF]) right over the SMAs. Given that the N-40 peaks about 50 ms before the peak activation of the (contralateral) primary motor cortex involved in the response, it has been proposed that this component is an index of response selection which might arise from the SMAs. In accordance with this view, it has been shown (1) that the N-40, although present in choice conditions, was absent in a go/no-go task, a task in which no response selection is required [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF], and (2) that the amplitude of the N-40 was modulated by the difficulty of the selection process, being smaller for easier selections [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF]. Finally, tentative source localization performed with two independent methods (sLORETA and BESA) pointed to quite superficial medio-frontal generators corresponding to the SMAs [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF].
If we admit that the N-40 is generated by the SMAs and indexes response selection, a convenient way to address the question of the involvement of the dopaminergic system in response selection processes consists of examining the sensitivity of the N-40 to APTD in a between-hand choice RT task quite similar to the one used by [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF]: we chose a Simon task (see Fig. 1; see [START_REF] Simon | The effects of an irrelevant directional cue on human information processing[END_REF] for a review). Now, in between-hand choice RT tasks, the N-40 is followed by a transient (negative) motor potential [START_REF] Deecke | Voluntary finger movement in man: cerebral potentials and theory[END_REF] revealing the build-up of the motor command in the (contralateral) primary motor areas (M1) controlling the responding hand [START_REF] Arezzo | Intracortical sources and surface topography of the motor potential and somatosensory evoked potential in the monkey[END_REF]. Concurrently, a transient positive wave, reflecting motor inhibition and related to error prevention [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF], develops over (ipsilateral) M1 controlling the non-responding hand [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF]. If one admits that these activities are not directly related to response selection, examining their (in)sensitivity to dopamine depletion makes it possible to assess the selectivity of the effects of this depletion (if present) on response selection processes.
In a previous study, we submitted subjects to APTD. Although we did evidence subtle behavioral effects that can be attributed to action monitoring impairments [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF], we did not find any clear behavioral evidence of APTD-induced response selection impairment.
To evidence the role of the dopaminergic system in response selection, the present study was aimed at assessing whether infraclinical effects of dopamine depletion on response selection processes can be evidenced via selective alterations of the N-40.
Material and method
The experimental procedure has been described in detail elsewhere [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF], and only essential information is provided here.
Twelve healthy subjects participated in this experiment.
Dopamine depletion
Dopamine availability was decreased using the APTD method [START_REF] Mctavish | Effect of a tyrosine-free amino acid mixture on regional brain catecholamine synthesis and release[END_REF][START_REF] Leyton | Effects on mood of acute phenylalanine/tyrosine depletion in healthy women[END_REF][START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Nagano-Saito | Dopamine depletion impairs frontostriatal functional connectivity during a set-shifting task[END_REF], 2012). The present experiment comprised two experimental sessions differing by the level of tyrosine and phenylalanine in the amino acid mixture: (i) the "placebo session", in which the subject performed the task after ingestion of a mixture containing 16 essential amino acids (including tyrosine and phenylalanine), and (ii) the "depleted session", in which the subject performed the task after ingestion of the mixture without tyrosine and phenylalanine. Plasma concentrations of phenylalanine, tyrosine, and other large neutral amino acids (LNAAs; leucine, isoleucine, methionine, valine, and tryptophan) were measured by HPLC with fluorometric detection on an Ultrasphere ODS reverse-phase column (Beckman Coulter) with o-phthalaldehyde precolumn derivatization and aminoadipic acid as an internal standard. Plasma concentrations of tryptophan were measured by HPLC-FD on a Bondpak reverse-phase column (Phenomenex).
Task and design
Each subject performed both sessions, on separate days, at least 3 days apart. Subjects were not taking any medication at the time of the experiment. None of them had a history of mental or neurologic illness. They were asked not to take stimulating substances (e.g., caffeine or stimulant drugs) or alcohol the day and the night before both sessions. The day before each session, subjects ate a low-protein diet provided by the investigators and fasted after midnight. On the test days, subjects arrived at 8:30 a.m. at the laboratory and had a blood sample drawn to measure plasma amino acid concentrations. They ingested one of the two amino acid mixtures at 9:00 a.m. in a randomized, double-blind manner. Peak dopamine reduction occurs during a period 4-6 h after ingestion of the amino acid mixtures [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF]. They were tested from 1:30 p.m. At 3:00 p.m., subjects had a second blood sample drawn to measure plasma amino acid concentrations.
The order of depleted and placebo sessions was counterbalanced between subjects.
Subjects performed a between-hand choice reaction time (RT) task.
A trial began with the presentation of a stimulus. The subjects' responses turned off the stimulus, and 500 ms later, the next stimulus was presented. If subjects had not responded within 800 ms after stimulus onset, the stimulus was turned off and the next stimulus was displayed 500 ms later.
At the beginning of an experimental session, subjects had one training block of 129 trials. Then, they were required to complete 16 blocks of 129 trials each. A block lasted about 2 min. There was a 1-min break between blocks and a 5-min break every four blocks. The training block was discarded from statistical analyses.
The structure of this between-hand choice RT task realized a Simon task [START_REF] Simon | The effects of an irrelevant directional cue on human information processing[END_REF]. The stimuli of this Simon task were the digits three, four, six, and seven presented either to the right or the left of a central fixation point. Half of the subjects responded with the right thumb on the right force sensor for even digits and with the left thumb on the left force sensor for odd digits; the other half performed the reverse mapping. When the stimulus was presented on the same side as the required response, the stimulus-response association was congruent. When the stimulus was presented on the side opposite to the required response, the stimulus-response association was incongruent. A block contained 50% of the congruent trials and 50% of the incongruent ones. [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] (see Fig. 1) manipulated the complexity of the stimulus-response association by varying the spatial correspondence between the direction (right or left) indicated by a centrally presented arrow and the position (right or left) of the required response: on congruent associations, responses had to be given on the side indicated by the arrow, while on incongruent associations, responses had to be given on the opposite side. In the present Simon task, congruency was manipulated by varying the spatial correspondence between the position of the stimulus and the position of the required response, given that (1) as in the [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] study, congruency affects response selection processes [START_REF] Hommel | A feature-integration account of sequential effects in the Simon task[END_REF][START_REF] Kornblum | The effects of irrelevant stimuli: the time course of S-S and S-R consistency effects with Stroop-like stimuli, Simon-like tasks, and their factorial combinations[END_REF]Proctor and Reeve 1990) and (2) congruency affects the amplitude of the N-40 in a Simon task [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF]. Subjects were asked to respond as fast and as accurately as possible.
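For illustration purposes, the trial structure described above can be made explicit with a short script; the following R sketch builds one block of trials in which the required hand is derived from digit parity and congruency from the match between stimulus side and response side. It is not the original experiment code, and the seed, variable names and the random 50/50 draw are illustrative assumptions.

```r
# Minimal sketch of one block of the present Simon task (not the original experiment code).
# Assumption: even digits -> right thumb, odd digits -> left thumb (one of the two mappings used).
set.seed(1)                                    # only for reproducibility of the example
n_trials <- 129                                # block length reported above

digit <- sample(c(3, 4, 6, 7), n_trials, replace = TRUE)
stimulus_side <- sample(c("left", "right"), n_trials, replace = TRUE)

required_response <- ifelse(digit %% 2 == 0, "right", "left")   # parity mapping
congruency <- ifelse(stimulus_side == required_response, "congruent", "incongruent")

block <- data.frame(trial = seq_len(n_trials), digit, stimulus_side,
                    required_response, congruency)
table(block$congruency)   # approximately 50% congruent / 50% incongruent trials
```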
RT was defined as the time interval between stimulus onset and the mechanical response.
Electrophysiological recordings and processing
The electromyographic (EMG) activity of the flexor pollicis brevis (thenar eminence, inside the base of the thumb) was recorded bipolarly by means of surface Ag-AgCl electrodes, 6 mm in diameter, fixed about 20 mm apart on the skin of each thenar eminence. The recorded EMG signals were digitized online (bandwidth 0-268 Hz, 3 dB/octave, sampling rate 1024 Hz), filtered off-line (high pass = 10 Hz), and then inspected visually [START_REF] Van Boxtel | Detection of EMG onset in ERP research[END_REF]. The EMG onsets were hand scored because human pattern recognition processes are superior to automated algorithms (see [START_REF] Staude | Precise onset detection of human motor responses using a whitening filter and the log-likelihood-ratio test[END_REF]). To overcome subjective influence on the scoring, the experimenter who processed the signals was unaware of the type of associations (congruent, incongruent) or session (placebo, APTD) to which the traces corresponded.
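As an illustration of the off-line 10-Hz high-pass filtering step applied to the EMG before visual onset scoring, a possible implementation with the R signal package is sketched below. The filter order and the zero-phase (filtfilt) choice are assumptions, since only the cut-off frequency is specified above, and the signal vector is a placeholder.

```r
library(signal)                 # Butterworth design and zero-phase filtering

fs <- 1024                      # sampling rate (Hz), as reported
fc <- 10                        # high-pass cut-off (Hz), as reported
order <- 4                      # assumed filter order (not stated in the text)

# Design a high-pass Butterworth filter; the cut-off is normalized to the Nyquist frequency.
hp <- butter(order, fc / (fs / 2), type = "high")

emg_raw <- rnorm(2 * fs)          # placeholder standing in for one bipolar EMG trace
emg_hp  <- filtfilt(hp, emg_raw)  # zero-phase filtering avoids shifting the EMG onset latency
```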
Electroencephalogram (EEG) and electro-oculogram (EOG) were recorded continuously from preamplified Ag/AgCl electrodes (BIOSEMI Active-Two electrodes, Amsterdam). For EEG, 64 recording electrodes were positioned according to the 10/20 system, with CMS-DRL as reference and ground (specific to the Biosemi acquisition system). A 65th electrode on the left mastoid served to reference the signal offline. Electrodes for the vertical EOG were placed at Fp1 and below the left eye, and electrodes for the horizontal EOG at the outer canthi of the left and right eyes. The signal was filtered and digitized online (bandwidth 0-268 Hz, 3 dB/octave, sampling rate 1024 Hz). EEG and EOG data were numerically filtered offline (high pass = 0.02 Hz). No additional filtering was performed. Bipolar derivations were calculated offline for the vertical and horizontal EOGs. Then, ocular artifacts were subtracted [START_REF] Gratton | A new method for off-line removal of ocular artifact[END_REF]. A trial-by-trial visual inspection of monopolar recordings allowed us to reject unsatisfactory subtractions and other artifacts.
The scalp potential data were segmented from -500 to +500 ms with the EMG onset as the zero of time. Afterward, for each individual, scalp potential data were averaged time-locked to the EMG onset. However, on scalp potential data, due to volume conduction effects, the N-40 is overlapped by large components generated by remote generators and by closer ones in the primary motor areas. Therefore, it hardly shows up on scalp potential recordings [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF].
The surface Laplacian (SL) transformation (see [START_REF] Carvalhaes | The surface Laplacian technique in EEG: theory and methods[END_REF] for theory and methods), acting as a high-pass spatial filter [START_REF] Nuñez | Estimation of large scale neocortical source activity with EEG surface Laplacians[END_REF], is very efficient in attenuating volume conduction effects [START_REF] Giard | Scalp current density mapping in the analysis of mismatch negativity paradigms[END_REF][START_REF] Kayser | On the benefits of using surface Laplacian (current source density) methodology in electrophysiology[END_REF]. Because of this property, the SL transformation allows unmasking of the N-40 by removing the spatial overlap between this component and other ones. Therefore, the Laplacian transformation was applied to each individual scalp potential average obtained for each subject in each condition (congruent and incongruent) and each session (placebo and APTD); the surface Laplacian was estimated after spherical spline interpolation with a spline degree of 4 and a maximum degree of 10 for the Legendre polynomial, according to the method of [START_REF] Perrin | Scalp current density mapping: value and estimation from potential data[END_REF].
N-40 At the FCz electrode, the N-40 begins to develop about 70 ms and peaks about 20 ms before EMG onset. The slopes of the linear regression of this wave were computed for each subject from -70 to -20 ms, that is, within a 50-ms time window. The slopes of the N-40 were determined for "pure" correct trials only. These values were then submitted to repeated-measures analyses of variance (ANOVA). The ANOVA involved two within-subjects factors: session (placebo, APTD) and congruency (congruent, incongruent) for mean results. Contrary to stimulus-locked data, when studying response-locked averages, the choice of an appropriate baseline is always problematic. To circumvent the problem, peak-to-peak measures are often used, as they are baseline-free.
They may be conceived of as a crude slope measure. However, mean slopes can be estimated by computing the linear regression line by the least-squares method in an interval of interest. In this case, slope analysis has certain advantages over amplitude analysis: (i) slopes are also independent of the baseline and (ii) they give morphological information on the polarity of the curves in delimited time windows and are less variable than amplitude measures [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF].
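As an illustration, the slope measure described above amounts to fitting a least-squares regression line to the response-locked, Laplacian-transformed FCz trace within the -70 to -20 ms window. The R sketch below assumes the averaged trace is a numeric vector spanning -500 to +500 ms at 1024 Hz (the trace used here is a placeholder, not real data).

```r
# Minimal sketch: slope of the N-40 on a response-locked, Laplacian-transformed FCz average.
fs  <- 1024
fcz <- rnorm(1025)                                   # placeholder for one subject-level average
time_ms <- seq(-500, 500, length.out = length(fcz))  # time base in ms, EMG onset at 0

win <- time_ms >= -70 & time_ms <= -20               # 50-ms window used for the N-40
fit <- lm(fcz[win] ~ time_ms[win])                   # least-squares regression line
n40_slope <- unname(coef(fit)[2])                    # slope (in µV/cm^2/ms when the trace is a Laplacian)
```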
Activation/inhibition pattern For correct trials, we analyzed EEG activities over the primary sensory-motor area (SM1) contralateral and ipsilateral to the response (over C3 and C4 electrodes): we measured the slopes computed by linear regression in a specific 50-ms time window (-30 ms to +20 ms) (e.g., [START_REF] Meckler | Motor inhibition and response expectancy: a Laplacian ERP study[END_REF]). These mean slope values were submitted to an ANOVA involving two within-subjects factors: session (placebo, APTD) and congruency (congruent, incongruent).
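The 2 × 2 within-subjects design (session × congruency) described above can then be analyzed with a standard repeated-measures ANOVA; the R sketch below assumes a long-format data frame with one slope value per subject, session and congruency level (column names and values are illustrative).

```r
# Minimal sketch of the repeated-measures ANOVA on slope values (one row per subject x condition).
slopes <- expand.grid(subject    = factor(1:12),
                      session    = factor(c("placebo", "APTD")),
                      congruency = factor(c("congruent", "incongruent")))
slopes$slope <- rnorm(nrow(slopes))   # placeholder values standing in for measured slopes

fit <- aov(slope ~ session * congruency + Error(subject / (session * congruency)),
           data = slopes)
summary(fit)                          # F and p values for the two factors and their interaction
```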
Results
Amino acid plasmatic concentrations
For phenylalanine (see Table 1), the ANOVA revealed an effect of the session (F (1, 11) = 37.49; p = 0.000075) and no effect of the time of the blood draw (F (1, 11) = 0.13; p = 0.73). These two factors interacted (F (1, 11) = 119.81; p = 0.000000), signaling that the session affected the samples drawn at the end of the testing session, 6 h after ingestion (F (1, 11) = 65.84; p = 0.000006), but not the samples drawn before ingestion (F (1, 11) = 0.016; p = 0.902). For tyrosine (see Table 1), the ANOVA revealed a main effect of the session (F (1, 11) = 68.91; p = 0.000005) and a main effect of time (F (1, 11) = 17.95; p = 0.0014). These two factors interacted (F (1, 11) = 68.83; p = 0.000005), indicating that the session affected tyrosine levels at the end of the session (F (1, 11) = 71.16; p = 0.000004) but not prior to ingestion (F (1, 11) = 0.24; p = 0.631).
In sum, plasma concentrations of tyrosine and phenylalanine were significantly lower for the depleted session than for the placebo session.
Reaction time of correct responses
There was a main effect of congruency of 13 ms (congruent 408 ms, incongruent 421 ms, F (1, 11) = 57.58; p = 0.00001) but no effect of session (placebo 414 ms; depleted 416 ms, F (1, 11) = 0.056; p = 0.817). These two factors did not interact on mean RT (F (1, 11) = 0.116; p = 0.739).
Error rate
There was a non-significant trend for an increase of error rate on incongruent stimulus-response associations (7.28%) compared to congruent stimulus-response associations (5.79%) (F = 3.70; p = 0.081). The error rate was not statistically different for the placebo (6.62%) and depleted (6.45%) sessions (F (1, 11) = 0.123; p = 0.732). There was no interaction between congruency and session (F (1, 11) = 0.026; p = 0.874).
N-40 (Fig. 2)
Only data obtained on correct trials have been analyzed.
As expected from [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], the slope of the N-40 was steeper in the incongruent (-57.42 μV/cm²/ms) than in the congruent condition (-24.50 μV/cm²/ms) (effect size on steepness: -32.92 μV/cm²/ms, F (1, 11) = 5.41, p = 0.040). The congruency effect observed on the slopes of the N-40 was attributed to the more demanding selection on incongruent stimulus-response associations as compared to congruent stimulus-response associations. Moreover, the slope of the N-40 was also steeper in the placebo (-59.39 μV/cm²/ms) than in the APTD session (-22.54 μV/cm²/ms) (effect size on steepness: -36.85 μV/cm²/ms, F (1, 11) = 5.50, p = 0.039).
These two factors (congruency and sessions) did not interact (F (1, 11) = 0.030; p = 0.865).
Activation/inhibition pattern (Fig. 3) Inspection of the Laplacian traces reveals that, for all conditions, a negativity/positivity pattern developed before EMG onset over the contralateral and ipsilateral M1s, respectively.
As expected from [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF], over the contralateral M1 (contralateral electrode), we observed a negative wave peaking at about EMG onset, while over the ipsilateral M1 (ipsilateral electrode), we observed a positive wave.
Regarding the contralateral negativity, there was neither an effect of congruency (congruent condition -210.37 μV/cm²/ms and incongruent condition -224.19 μV/cm²/ms; F (1, 11) = 0.67; p = 0.429) nor a main effect of session (placebo session -213.10 μV/cm²/ms, APTD session -221.47 μV/cm²/ms; F (1, 11) = 0.066; p = 0.802). These two factors did not interact (F (1, 11) = 1.18, p = 0.299).
Regarding ipsilateral positivity, there was neither an effect of congruency (congruent condition 102.51 μV/ms and incongruent condition 110.26 μV/ms; F (1, 11) = 0.349; p = 0.566) nor an effect of session (placebo session 109.53 μV/ms, APTD session 103.25 μV/ms; F (1, 11) = 0.182; p = 0.677). These two factors did not interact (F (1, 11) = 0.098; p = 0.759).
Discussion
The present study reproduces already available data: (1) RTs were longer on incongruent than on congruent trials, revealing the existence of a Simon effect; (2) an activation/inhibition pattern developed over M1s before the response [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Van De Laar | Lifespan changes in motor activation and inhibition during choice reactions: a Laplacian ERP study[END_REF][START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF]; (3) this activation/inhibition pattern was preceded by an N-40 [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF], which is in line with the notion that the N-40 indexes response selection processes, upstream of response execution as manifested by contralateral M1 activation; (4) the amplitude of the N-40 depended on congruency, being larger on incongruent than on congruent associations [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], which is also in line with the notion that the N-40 indexes response selection processes; (5) the procedure used here was efficient in inducing a clear APTD, known to induce a secondary dopamine depletion (McTavish et al. [START_REF] Leyton | Effects on mood of acute phenylalanine/tyrosine depletion in healthy women[END_REF][START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Nagano-Saito | Dopamine depletion impairs frontostriatal functional connectivity during a set-shifting task[END_REF], 2012); (6) acute dopamine depletion had no effect on RT, error rate, or the size of the congruency effect, in line with the results of Larson et al. (2015), who did not evidence any APTD effect either on RT, error rates, or the size of the congruency effect in another conflict task. Therefore, given that the procedure used here induced a clear APTD, and that subjects exhibited behavioral and EEG patterns as expected from the previous literature, we can be quite confident that subjects were submitted to classical, appropriate experimental conditions.
In these conditions, (1) over M1s, neither contralateral activation nor ipsilateral inhibition were sensitive to congruency, suggesting that congruency has little or no effect on execution processes (contralateral M1) or error prevention (ipsilateral M1); (2) over M1s, neither contralateral activation nor ipsilateral inhibition were sensitive to APTD, suggesting that the dopamine depletion has little or no effect on execution processes (contralateral M1) or error prevention (ipsilateral M1);
(3) over the SMAs, the N-40 was reduced after APTD, suggesting that the dopamine depletion affects response selection processes.
A first comment is in order. The sensitivity of the N-40 to APTD cannot be attributed to a general effect on cerebral electrogenesis. First, because ERPs recorded over M1s were unaffected by APTD; secondly, because Larson and his colleagues (2015) examined extensively the sensitivity to APTD of several ERPs assumed to reveal action monitoring processes, namely the N450 [START_REF] West | Effects of task context and fluctuations of attention on neural activity supporting performance of the Stroop task[END_REF], the conflict slow potential [START_REF] West | Effects of task context and fluctuations of attention on neural activity supporting performance of the Stroop task[END_REF][START_REF] Mcneely | Neurophysiological evidence for disturbances of conflict processing in patients with schizophrenia[END_REF], the Error Negativity [START_REF] Falkenstein | Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks[END_REF][START_REF] Gehring | A neural system for error detection and compensation[END_REF], or the Error Positivity [START_REF] Falkenstein | Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks[END_REF], and none of these activities were sensitive to APTD, thus confirming that the present effects observed on the N-40 cannot result from a general decrease of electrogenesis. Therefore, one can conclude that the sensitivity of the N-40 to APTD is specific.
Second, because dopamine depletion has little or no effect on execution processes (contralateral M1) or proactive control of errors (ipsilateral M1), one can conclude that the effect of APTD observed on the N-40 over the SMAs reflects a selective influence of dopamine depletion on response selection processes (as proposed in the "Introduction" section), without noticeable effects on processes occurring downstream, i.e., response execution. Note that the selective influence of APTD, but also of congruency, on upstream processes, both leaving contingent downstream activities unaffected, suggests the existence of separate modules in information processing operations [START_REF] Sternberg | Separate modifiability, mental modules, and the use of pure and composite measures to reveal them[END_REF][START_REF] Sternberg | Modular processes in mind and brain[END_REF] and the existence of "functionally specialized [neural] processing modules" (Sternberg 2011, page 158). Now, because APTD had no influence on RT, error rate, or the size of the congruency effect [START_REF] Ramdani | Dopamine precursors depletion impairs impulse control in healthy volunteers[END_REF][START_REF] Larson | The effects of acute dopamine precursor depletion on the cognitive control functions of performance monitoring and conflict processing: an event-related potential (ERP) study[END_REF], we must conclude that the effect of the approximately 30% dopamine depletion [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Montgomery | Reduction of brain dopamine concentration with dietary tyrosine plus phenylalanine depletion: an [11C] raclopride PET study[END_REF] observed here on the N-40 reveals a weak infraclinical functional deficit in response selection operations. One can imagine that a stronger depletion would have a behavioral expression on RTs.

Fig. 2 N-40: in black, the placebo session and, in gray, the APTD session; in dashed lines, the congruent condition and, in solid lines, the incongruent condition. Maps have the same scale and are dated at -30 ms. The zero of time corresponds to the onset of the EMG burst.
According to the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], fMRI data reported by [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF] indicate that during the preparatory period of a choice RT task, the dopaminergic system is involved in preparing for response selection; the SN pars compacta BOLD signal increased after the precue but returned to baseline before the end of the preparatory period in the easiest of the two possible response selection conditions. However due to the low temporal resolution of fMRI, no evidence could be provided regarding response selection per se.
If one admits that the N-40 is a physiological index of response selection processes [START_REF] Vidal | An ERP study of cognitive architecture and the insertion of mental processes: Donders revisited[END_REF][START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], the selective sensitivity of the N-40 to APTD, with a spared M1 activation/inhibition pattern, lends support to the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], which assumes that the dopaminergic system is involved in response selection per se. In the motor loop between the basal ganglia and the cortex [START_REF] Alexander | Linking motor-related brain potentials and velocity profiles in multi-joint arm reaching movements[END_REF], the SMAs constitute a major cortical target of the basal ganglia, via the thalamus. If we assume that the N-40 is generated by the SMAs [START_REF] Carbonnell | The N-40: an electrophysiological marker of response selection[END_REF], it is likely that the final effect of dopaminergic depletion on response selection takes place in the SMAs because of a decrease in thalamic glutamatergic output to this area, due to dopamine depletion-induced striatal impairment. Of course, it cannot be excluded that APTD influenced SMA activity through direct dopaminergic projections to the cortex; however, this seems unlikely since PET studies show that most of the APTD-induced dopamine depletion involves striatal structures [START_REF] Leyton | Decreasing amphetamine-induced dopamine release by acute phenylalanine/tyrosine depletion: a PET/ [ 11 C] raclopride study in healthy men[END_REF][START_REF] Montgomery | Reduction of brain dopamine concentration with dietary tyrosine plus phenylalanine depletion: an [11C] raclopride PET study[END_REF].
Fig. 3 Activation/inhibition pattern: in green, the placebo session; light green: activation of the (contralateral) primary motor cortex involved in the response; dark green: inhibition of the (ipsilateral) primary motor cortex not involved in the response. In red, the APTD session; dark red: activation of the (contralateral) primary motor cortex involved in the response; light red: inhibition of the (ipsilateral) primary motor cortex not involved in the response. The zero of time corresponds to the onset of the EMG burst. Maps have the same scale and are dated at 0 ms. On the left side of the maps, in blue, the activity of the contralateral primary motor cortex involved in the response and, on the right side of the maps, in red, the activity of the ipsilateral primary motor cortex not involved in the response.

According to Grillner and his colleagues (2005, 2013), the basal ganglia are strongly involved in the selection of basic motor programs (e.g., locomotion, chewing, swallowing, eye movements…), through a basic organization that has been conserved throughout vertebrate phylogeny, from lamprey to primates. The model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the data of [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF], and the present results suggest that the implication of the basal ganglia in action selection might also concern more flexible, experience-dependent motor programs.
A final comment is in order. Although the Laplacian transformation largely increases the spatial resolution of EEG data, it is not possible to spatially separate the subregions of the SMAs, i.e., the pre-SMA and SMA proper [START_REF] Luppino | Multiple representations of body movements in mesial area 6 and the adjacent cingulate cortex: an intracortical microstimulation study in the macaque monkey[END_REF][START_REF] Matsuzaka | A motor area rostral to the supplementary motor area (presupplementary motor area) in the monkey: neuronal activity during a learned motor task[END_REF] Picard and Strick 1996, 2001). [START_REF] Larson | The effects of acute dopamine precursor depletion on the cognitive control functions of performance monitoring and conflict processing: an event-related potential (ERP) study[END_REF] reported that APTD does not affect the Error Negativity (note that we did not evidence any effect of APTD on the Error Negativity in the present experiment either [data not shown]). Now, it has been demonstrated with intracerebral electroencephalography in human subjects that the Error Negativity is primarily generated in SMA proper but not in pre-SMA [START_REF] Bonini | Action monitoring and medial frontal cortex: leading role of supplementary motor area[END_REF]. This suggests that SMA proper activity is not noticeably impaired by APTD. As a consequence, it seems likely that the N-40 is generated in the pre-SMA. Although both areas receive disynaptic inputs from the basal ganglia via the thalamus, a differential sensitivity of SMA proper and pre-SMA to dopamine depletion is not necessarily surprising if one considers that pre-SMA and SMA proper are targeted by neurons located in neurochemically and spatially distinct regions of the internal segment of the globus pallidus [START_REF] Akkal | Supplementary motor area and presupplementary motor area: targets of basal ganglia and cerebellar output[END_REF], a major output structure of the basal ganglia.
Two limitations of the present study must be noticed. First, considering that the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF] not only assumes that the dopaminergic system is involved in response selection but also supposes that selection of the appropriate response is achieved via the D2 system, the present study is unable to determine whether the effects of dopamine depletion are due to D1, D2 subreceptor types, or both; the same remark would also hold for the results of [START_REF] Yoon | Delay period activity of the substantia nigra during proactive control of response selection as determined by a novel fMRI localization method[END_REF]. Secondly, our APTD manipulation is an all-or-none one, with no possibility of evidencing "dose-dependent" effects. Future pharmacological manipulations would allow dose-dependent manipulations, with possibly no behavioral effects at lower doses and behavioral impairments at higher doses. Such manipulations would also allow separating, at least in part, D2 from D1 effects to test further the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF].
To conclude, our results extend those of Yoon et al. (2015), who showed in an fMRI study that the dopaminergic system was finely sensitive to the complexity/simplicity of response selection during preparation. Thanks to the N-40, and in accordance with the model of [START_REF] Keeler | Functional implications of dopamine D1 vs. D2 receptors: a 'prepare and select' model of the striatal direct vs. indirect pathways[END_REF], the present results directly indicate that the dopaminergic system is selectively involved in response selection per se, with little or no effect on response execution processes or proactive control of errors.
Fig. 1 Schematic representation of the task used by Yoon et al. (2015) and of the present task. Congruency in the Yoon et al. (2015) task was between the arrow direction and the response side, and in the present task between the stimulus position and the response side.
Table 1 Amino acid plasmatic concentrations of phenylalanine and tyrosine

Amino acid                          Before the absorption of the mixture    After the absorption of the mixture
Phenylalanine (μmol/l), placebo     60.3 ± 3.2                              106.8 ± 6.1*
Phenylalanine (μmol/l), depleted    59.8 ± 3.5                              17.4 ± 3.6*
Tyrosine (μmol/l), placebo          74.4 ± 4.1                              272.7 ± 10.8*
Tyrosine (μmol/l), depleted         71.7 ± 4.4                              20.5 ± 4.4*
Laboratoire de Neurosciences Cognitives, Aix-Marseille Univ/CNRS, Marseille, France
Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
Institut de Médecine Navale du Service de Santé des Armées, Toulon, France
Acknowledgements We thank Dominique Reybaud, Bruno Schmid, and the pharmacy personnel of the hospital Sainte Anne for their helpful technical contribution.
Funding information
The authors also gratefully acknowledge the financial support from the Institut de Recherches Biomédicales des Armées, France.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest. |
01760614 | en | [
"sde.mcg",
"sde.be"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01760614/file/art_10.1007_s11306-017-1169-z.pdf | Stephane Greff
email: stephane.greff@imbe.fr
Mayalen Zubia
email: mayalen.zubia@upf.pf
Claude Payri
email: claude.payri@ird.fr
Olivier P Thomas
email: olivier.thomas@nuigalway.ie
Thierry Perez
email: thierry.perez@imbe.fr
Stéphane Greff
Chemogeography of the red macroalgae Asparagopsis: metabolomics, bioactivity, and relation to invasiveness
Keywords: Asparagopsis taxiformis, Macroalgal proliferations, Metabolomics, Transoceanic comparisons, Microtox®, UHPLC-HRMS
Introduction
Ecologists generally assume that biotic interactions are prominent in the tropics [START_REF] Schemske | Is there a latitudinal gradient in the importance of biotic interactions?[END_REF], where species richness and biomass are considered higher [START_REF] Brown | Why are there so many species in the tropics?[END_REF][START_REF] Mannion | The latitudinal biodiversity gradient through deep time[END_REF]. The latitudinal gradient hypothesis (LGH) states that tropical plants inherit more defensive traits from higher pressures of competition, herbivory and parasitism than their temperate counterparts [START_REF] Coley | Comparison of herbivory and plant defenses in temperate and tropical broad-leaved forests[END_REF][START_REF] Coley | Herbivory and plant defenses in tropical forests[END_REF][START_REF] Schemske | Is there a latitudinal gradient in the importance of biotic interactions?[END_REF]. The same trend exists in marine ecosystems, as temperate macroalgae are consumed overall twice as much as the better-defended tropical ones [START_REF] Bolser | Are tropical plants better defended? Palatability and defenses of temperate vs. tropical seaweeds[END_REF]. However, the assumption that both biotic interactions and defense metabolism are strongly related to the latitudinal gradient and result from co-evolutionary processes still requires additional evidence [START_REF] Moles | Dogmatic is problematic: Interpreting evidence for latitudinal gradients in herbivory and defense[END_REF][START_REF] Moles | Is the notion that species interactions are stronger and more specialized in the tropics a zombie idea?[END_REF]. Some studies did not show any relationship between herbivory pressure and latitude [START_REF] Adams | A test of the latitudinal defense hypothesis: Herbivory, tannins and total phenolics in four North American tree species[END_REF][START_REF] Andrew | Herbivore damage along a latitudinal gradient: relative impacts of different feeding guilds[END_REF], and an opposite trend has also been demonstrated in some cases [START_REF] Del-Val | Seedling mortality and herbivory damage in subtropical and temperate populations: Testing the hypothesis of higher herbivore pressure toward the tropics[END_REF]. For instance, phenolic compounds in terrestrial [START_REF] Adams | A test of the latitudinal defense hypothesis: Herbivory, tannins and total phenolics in four North American tree species[END_REF] and marine ecosystems seem to be equally present at low and high latitudes [START_REF] Targett | Biogeographic comparisons of marine algal polyphenolics: evidence against a latitudinal trend[END_REF][START_REF] Van Alstyne | The biogeography of polyphenolic compounds in marine macroalgae: Temperate brown algal defenses deter feeding by tropical herbivorous fishes[END_REF].
Chemical traits may also be related to changes in environmental conditions and/or biotic interactions [START_REF] Nylund | Metabolomic assessment of induced and activated chemical defence in the invasive red alga Gracilaria vermiculophylla[END_REF]. Several ecosystems are affected by the introduction of non-indigenous species (NIS), which may disrupt biotic interactions [START_REF] Schaffelke | Impacts of introduced seaweeds[END_REF][START_REF] Simberloff | Impacts of biological invasions: What's what and the way forward[END_REF]. After the loss of specific competitors, NIS may reallocate the energy originally dedicated to defenses (specialized metabolism) into reproduction and growth (primary metabolism), and succeed in colonized environments [START_REF] Keane | Exotic plant invasions and the enemy release hypothesis[END_REF]. Interactions between NIS and native species may also modify chemical traits, as argued by the novel weapon hypothesis (NWH) [START_REF] Callaway | Novel weapons: invasive success and the evolution of increased competitive ability[END_REF]. In addition, the production of defensive compounds may also be influenced by several abiotic factors such as temperature [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF][START_REF] Reverter | Secondary metabolome variability and inducible chemical defenses in the Mediterranean sponge Aplysina cavernicola[END_REF], light [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF][START_REF] Deneb | Chemical defenses of marine organisms against solar radiation exposure Marine Chemical Ecology[END_REF][START_REF] Paul | The ecology of chemical defence in a filamentous marine red alga[END_REF], and nutrient availability [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF]. Moreover, internal factors such as reproductive stage (Ivanisevic et al. 2011a; [START_REF] Vergés | Sex and life-history stage alter herbivore responses to a chemically defended red alga[END_REF]) and ontogeny [START_REF] Paul | Simple growth patterns can create complex trajectories for the ontogeny of constitutive chemical defences in seaweeds[END_REF] are globally subject to seasonal variation and may consequently affect the specialized metabolism (Ivanisevic et al. 2011a).
The genus Asparagopsis (Rhodophyta, Bonnemaisoniaceae) is currently represented by two species, A. taxiformis (Delile) Trévisan de Saint-Léon and A. armata (Harvey) [START_REF] Andreakis | Asparagopsis taxiformis and Asparagopsis armata (Bonnemaisoniales, Rhodophyta): Genetic and morphological identification of Mediterranean populations[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]. Asparagopsis taxiformis is widespread in temperate, subtropical and tropical areas and, so far, six cryptic lineages with distinct geographic distributions have been described for this species [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | Endemic or introduced? Phylogeography of Asparagopsis (Florideophyceae) in Australia reveals multiple introductions and a new mitochondrial lineage[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]). Among them, the worldwide fragmented distribution pattern of A. taxiformis lineage two is explained by multiple introduction events, and in some places of the Southwestern Mediterranean Sea for instance, it is clearly invasive and outcompeting indigenous benthic organisms [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Zanolla | The seasonal cycle of Asparagopsis taxiformis (Rhodophyta, Bonnemeaisoniaceae): key aspects of the ecology and physiology of the most invasive macroalga in Southern Spain[END_REF].
The genus Asparagopsis is known to biosynthesize about one hundred halogenated volatile hydrocarbons containing one to four carbons, with antimicrobial, antifeedant and cytotoxic properties [START_REF] Genovese | In vitro evaluation of antibacterial activity of Asparagopsis taxiformis from the Straits of Messina against pathogens relevant in aquaculture[END_REF][START_REF] Kladi | Volatile halogenated metabolites from marine red algae[END_REF]Paul et al. 2006b). Assessment of the resources allocated to defense traits can be obtained through analysis of the specialized metabolism using metabolomics. Another way to evaluate the resources allocated to defense traits is to measure the bioactivity of an organismal extract as a proxy for the biosynthesis of defense-related compounds. The Microtox® assay is a simple, efficient and rapid method that correlates highly with other biological tests [START_REF] Botsford | A comparison of ecotoxicological tests[END_REF]. The trade-off between the specialized metabolism and the primary metabolism dedicated to essential biochemical processes such as growth and reproduction can be assessed by this approach [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF]. Bioactivities of extracts can be directly correlated to the expression level of targeted metabolites [START_REF] Cachet | Metabolomic profiling reveals deep chemical divergence between two morphotypes of the zoanthid Parazoanthus axinellae[END_REF][START_REF] Martí | Quantitative assessment of natural toxicity in sponges: toxicity bioassay versus compound quantification[END_REF][START_REF] Reverter | Secondary metabolome variability and inducible chemical defenses in the Mediterranean sponge Aplysina cavernicola[END_REF], and metabotypes were shown to explain bioactivity patterns [START_REF] Ivanisevic | Biochemical trade-offs: evidence for ecologically linked secondary metabolism of the sponge Oscarella balibaloi[END_REF]. However, metabolomics does not necessarily match bioactivity assessment. Indeed, metabolomics provides an overall picture of the chemical complexity of a biological matrix, this picture being dependent on the selected technique (MS or NMR), but in any case without any indication of putative synergistic or antagonistic effects of the detected compounds. On the other hand, an assay such as the Microtox® integrates all putative synergistic or antagonistic effects of the extracted compounds, but the obtained value is only a proxy depending on the specificity of the response of the model bacterial strain.
The first objective of our study was to assess macroalgal investment in defensive traits using two non-equivalent approaches: UHPLC-HRMS metabolic fingerprinting and the assessment of biogeographic variations of macroalgal bioactivities with the Microtox® assay. The second objective was to understand how environmental factors (temperature, light) may influence macroalgal defensive traits. Finally, we also evaluated the relationship between bioactivities and the status of the macroalga, considering its origin (introduced vs. native) and its cover together as an indicator of invasiveness, in order to assess the involvement of macroalgal chemical defenses in its proliferation.
Methods
Biological Material
Among the six different lineages of A. taxiformis (Delile) Trevisan de Saint-Léon (Rhodophyta, Bonnemaisoniaceae) [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | Endemic or introduced? Phylogeography of Asparagopsis (Florideophyceae) in Australia reveals multiple introductions and a new mitochondrial lineage[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], only five were considered in this study. This alga can cover hard and soft substrates from 0 to 45 m depth, in both temperate and tropical waters. Asparagopsis armata (Harvey), a species distributed worldwide and currently composed of two distinct genetic clades [START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], was also considered in this study. Only one lineage, mostly growing on hard substrates at shallow depth, was investigated. The genus is dioecious: the gametophyte stage presents distinct male and female individuals and alternates with a heteromorphic tetrasporophyte ("Falkenbergia" stage). In this study, we focused on the gametophyte stage of the macroalgae.
Sampling
A total of 289 individuals of the A. taxiformis gametophytic stage were collected at 21 stations selected in 10 sites from two zones (temperate and tropical), from October 2012 to April 2015 (Table 1). The sampled stations presented highly variable A. taxiformis covers. Three classes of Asparagopsis cover were determined by visual assessment: low (0-35%), medium (35-65%) and high (65-100%). Asparagopsis armata was sampled in the south of Spain, where it lives in sympatry with A. taxiformis. Two temporal samplings of A. taxiformis were performed in Réunion (Saint Leu, four dates from October 2012 to July 2013) and in France (La Ciotat, six dates from November 2013 to April 2015).
Metabolite extraction
After collection, samples were transported in a cooler and stored at -20 °C before freeze-drying. Dried samples were preserved in silica gel and sent to Marseille (France). Each sample was then individually ground into a fine powder using a blender (Retsch® MM400, 30 Hz for 30 s). One hundred milligrams of each sample were extracted three times with 2 mL of MeOH/CH 2 Cl 2 1:1 (v/v) in an ultrasonic bath (1 min) at room temperature. The filtrates (PTFE, 0.22 µm, Restek®) were pooled and concentrated to dryness, adsorbing the extracts on C18 silica particles (100 mg, non-end-capped C18 Polygoprep 60-50, Macherey-Nagel®). The extracts were then subjected to SPE (Strata C18-E, 500 mg, 6 mL, Phenomenex®), eluting with H 2 O, MeOH, and CH 2 Cl 2 (5 mL of each) after cartridge cleaning (10 mL MeOH) and conditioning (10 mL H 2 O). MeOH fractions were evaporated to dryness and resuspended in 2 mL of MeOH prior to metabolomic analyses by UHPLC-QqTOF. After this first analysis, the same macroalgal extracts were concentrated to dryness, ready to be used for bioactivity assessment using the Microtox® assay.
Metabolomic analyses
Chemicals
Methanol, dichloromethane and acetonitrile of analytical quality were purchased from Sigma-Aldrich (Chroma-solv®, gradient grade). Formic acid and ammonium formate (LC-MS additives, Ultra grade) were provided by Fluka.
LC-MS analyses
Analyses were performed on an UHPLC-QqToF instrument: UHPLC is equipped with RS Pump, autosampler and thermostated column compartment and UV diode array (Dionex Ultimate 3000, Thermo Scientific®) and coupled to a high resolution mass spectrometer (MS) equipped with an ESI source (Impact II, Bruker Daltonics®). Mass spectra were acquired in positive and negative mode consecutively. Elution rate was set to 0.5 mL min -1 at a constant temperature of 40 °C. Injection volume was set to 10 µL. Chromatographic solvents were composed of A: water with 0.1% formic acid (positive mode), or 10 mM ammonium formate (negative mode), and B: acetonitrile/water (95:5) with the same respective additives. UHPLC separation occurs on an Acclaim RSLC C18 column (2.1 × 100 mm, 2.2 µm, Thermo Scientific®). According to the study of spatial and temporal patterns, chromatographic elution gradients were adjusted to improve peak resolution using a pooled sample. Two chromatographic elution gradients were thus applied. For the study of spatial patterns, the program was set up at 40% B during 2 min, followed by a linear gradient up to 100% B during 8 min, then maintained 4 min in isocratic mode. The analysis was followed by a return to initial conditions for column equilibration during 3 min for a total runtime of 17 min. For the study of temporal patterns, the program was set up at 2% B during 1 min, followed by a linear gradient up to 80% B during 5 min, then maintained 6 min in isocratic mode at 80% B. The analysis was followed by a phase of 100% B during 4 min and equilibration during 4 min for a total runtime of 20 min. Analyses were processed in separate batches for the study of spatial and temporal variation of the metabotypes. Macroalgal extracts were randomly injected according to sampling sites or dates, alternating the pooled sample injected every 6 samples to realize inter and intra-batch calibration due to MS shift over time. MS parameters were set as follows for positive mode (and negative mode): nebulizer gas, N 2 at 31 psi (51 psi), dry gas N 2 at 8 L min -1 (12 L min -1 ), capillary temperature at 200 °C and voltage at 2500 V (3000 V). Data were acquired at 2 Hz in full scan mode from 50 to 1200 amu. Mass spectrometer was systematically calibrated with formate/acetate solution forming clusters on the studied mass range before a full set of analysis. The same calibration solution was automatically injected before each sample for internal mass calibration. Data-dependent acquisition MS 2 experiments were also conducted (renewed every three major precursors) on some samples of each location.
Data analyses
Raw data files were automatically calibrated using the internal calibrant before exporting the data to netCDF files (centroid mode) using Bruker Compass DataAnalysis 4.3. All converted analyses were then processed with the XCMS software [START_REF] Smith | XCMS: processing mass spectrometry data for metabolite profiling using nonlinear peak alignment, matching, and identification[END_REF] under R (R_Core_Team 2013), following the steps necessary to generate the final data matrix: (1) peak picking (peakwidth = c(…, 20), ppm = 2) without threshold prefilter [START_REF] Patti | Meta-analysis of untargeted metabolomic data from multiple profiling experiments[END_REF]), (2) retention time correction (method = "obiwarp"), (3) grouping (bw = 10, minfrac = 0.3, minsamp = 1), (4) fillPeaks, and finally (5) report and data matrix generation, transferred to Excel. Each individual ion was finally normalized, when necessary, according to the drift of the equivalent ion in the pooled samples and to the injection order (van der [START_REF] Van Der Kloet | Analytical error reduction using single point calibration for accurate and precise metabolomic phenotyping[END_REF]). Data were calibrated between batches (inter-batch calibration for the study of spatial patterns) by dividing each ion by the intra-batch mean value of the corresponding pooled-sample ion and multiplying by the total mean value over all batches [START_REF] Ejigu | Evaluation of normalization methods to pave the way towards large-scale LC-MS-based metabolomics profiling experiments[END_REF]. Metabolites were annotated with the constructor software (Bruker Compass DataAnalysis 4.3).
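The inter-batch calibration described above can be illustrated by the minimal Python sketch below. It is only an illustration (the study relied on XCMS under R and spreadsheet calculations); the table layout, the function name normalize_ions and the toy values are assumptions.

```python
import numpy as np
import pandas as pd

def normalize_ions(intensities: pd.DataFrame, is_pool: pd.Series, batch: pd.Series) -> pd.DataFrame:
    """Inter-batch normalization of an ion table (rows = injections, columns = ions).

    Each ion is divided by the mean intensity of that ion in the pooled (QC)
    injections of its batch, then multiplied by the mean intensity of the ion
    in the pooled injections of all batches, as described in the text.
    """
    corrected = intensities.copy().astype(float)
    global_pool_mean = intensities[is_pool].mean()  # per-ion mean over all pooled injections
    for b in batch.unique():
        in_batch = batch == b
        batch_pool_mean = intensities[in_batch & is_pool].mean()  # per-ion mean of pooled injections in batch b
        factor = global_pool_mean / batch_pool_mean.replace(0, np.nan)
        corrected.loc[in_batch] = intensities.loc[in_batch] * factor
    return corrected

# Toy example: 2 batches, one pooled (QC) injection each.
df = pd.DataFrame({"ion_A": [100, 110, 90, 200, 210, 190],
                   "ion_B": [50, 55, 45, 40, 42, 38]})
pools = pd.Series([True, False, False, True, False, False])
batches = pd.Series(["b1", "b1", "b1", "b2", "b2", "b2"])
print(normalize_ions(df, pools, batches))
```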
Bioactivity assays
Bioactivities of macroalgal extracts were assessed using the standardized Microtox® assay (Microbics, USA). This ecotoxicological method measures the effect of compounds on the respiratory metabolism of Aliivibrio fischeri, which is correlated with the intensity of its natural bioluminescence [START_REF] Johnson | Microtox® acute toxicity test[END_REF]. Extracts were initially prepared at 2 mg mL -1 in artificial seawater with 2% acetone to facilitate dissolution, and then diluted (twofold) three times in order to test their effect on the bacteria and to draw EC 50 curves. EC 50 , expressed in µg mL -1 , represents the concentration that decreases the initial luminescence by half after 5 min of exposure to the extracts.
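A minimal sketch of how an EC 50 value can be estimated from such a dilution series is given below (Python, with hypothetical concentrations and inhibition values; the study itself used the standard Microtox® procedure). It fits a two-parameter log-logistic curve to the measured luminescence inhibition and reads off the concentration giving 50% inhibition.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Fraction of luminescence inhibited after exposure (0 = no effect, 1 = full inhibition)."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

# Hypothetical twofold dilution series (µg/mL) and measured inhibition fractions.
conc = np.array([250.0, 125.0, 62.5, 31.25])
inhibition = np.array([0.92, 0.78, 0.55, 0.30])

params, _ = curve_fit(log_logistic, conc, inhibition, p0=[60.0, 1.0])
ec50, slope = params
print(f"EC50 ≈ {ec50:.1f} µg/mL (Hill slope {slope:.2f})")
```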
Environmental factors
Sea surface temperature (SST, in °C) and photosynthetically available radiation (PAR, in mol m -2 day -1 ) were obtained from NASA GES DISC for all sites (http://giovanni.gsfc.nasa.gov). In France, supplementary abiotic factors related to water chemistry, such as ammonium (NH 4 + ), nitrate (NO 3 - ) and phosphate (PO 4 3- ) concentrations (in µmol L -1 ), were provided by SOMLIT (http://somlit.epoc.u-bordeaux1.fr/fr/).
Statistical analyses
Principal component analyses (PCA) were performed using the "ade4" package [START_REF] Dray | The ade4 package: implementing the duality diagram for ecologists[END_REF]. PCA were centered (the mean ion intensity across all samples was subtracted from each sample ion intensity) and normalized (then divided by the standard deviation of the ion intensity across all samples). PERMANOVA (adonis function, 1e 5 permutations) and ANalysis Of SIMilarity (ANOSIM function using Euclidean distances) were performed with the "vegan" package [START_REF] Oksanen | vegan: community ecology package. R package version 2[END_REF]. PLS-DA were performed using the "RVAideMemoire" package [START_REF] Hervé | RVAideMemoire: Diverse Basic Statistical and Graphical Functions[END_REF] on scaled raw data according to zones and on unscaled log-transformed data according to sites. Permutational tests based on cross model validation procedures (MVA.test and pairwise.MVA.test) were used to test differences between groups: outer loop fivefold cross-validation, inner loop fourfold cross-validation according to zones and sites [START_REF] Szymanska | Double-check: validation of diagnostic statistics for PLS-DA models in metabolomics studies[END_REF]. Very important peaks (VIPs) were determined from the PLS-DA loading plots. Non-parametric analyses (Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post hoc test, and Mann-Whitney test) were performed with XLSTAT version 2015.4.01.20575 to test differences in macroalgal bioactivities according to zones and sites. The relationships of macroalgal bioactivities with A. taxiformis cover, latitude or environmental factors were assessed using Spearman's rank correlation test (Rs) in XLSTAT.
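The sketch below (Python, with made-up matrices; the original analyses were run in R and XLSTAT as described above) illustrates two core steps of this workflow: a centered and scaled PCA of the ion table, and a Spearman rank correlation between bioactivity and an environmental variable.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ions = rng.lognormal(mean=2.0, sigma=1.0, size=(30, 200))  # 30 samples x 200 ions (toy data)
sst = rng.uniform(18, 28, size=30)                         # toy sea-surface temperatures
ec50 = 150 - 4 * sst + rng.normal(0, 10, size=30)          # toy bioactivity (EC50) values

# Centered/scaled PCA of the metabotypes, then correlation of bioactivity with SST.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(ions))
rho, pval = spearmanr(ec50, sst)
print("PCA scores shape:", scores.shape)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")
```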
Results
Spatial variation of the macroalgal chemical profiles and bioactivities
All the macroalgal metabotypes were plotted on a principal component analysis (PCA) taking into account 2683 negative and 2182 positive ions. The PCA shows that the global inertia remains low, with only 11.9% of the variability explained (Fig. 1). The variances explained on axes 1-3 and 2-3 of the PCA show similar values, 11.2% and 9.3% respectively, but the divergence between groups (zones and sites) is more evident along axes 2-3. Whereas the difference between macroalgal metabotypes from tropical and temperate zones is not statistically supported (PERMANOVA, F = 2.7, p = 0.064), a significant difference between sites is recorded (PERMANOVA, F = 2.9, p = 0.003), with a weak Pearson correlation factor (R 2 = 0.13). A similarity test between sites confirmed that macroalgae sampled in the Azores and in France are significantly different from all other sites, as are macroalgae sampled in Martinique and New Caledonia (ANOSIM, R = 0.281, p < 0.001) (Fig S1
and Table S1 in Supporting Information). Eight metabolite features were selected as chemomarkers with regard to the congruence of the ions detected in both negative and positive modes in the PPLS-DA loading plots. They partly explain the dispersion of the groups that differentiates temperate metabotypes from tropical ones (PPLS-DA, NMC = 2.5%, p = 0.001) (Fig S2). Differentiation of groups was also effective for metabotypes of algae sampled from the different sites (PPLS-DA, NMC = 31.8%, p = 0.001) (Fig S3
and Table S2). The most probable raw formulae of these chemomarkers did not match any known compound from the genus Asparagopsis (Table S3).
Macroalgal bioactivities are negatively correlated with latitude (Rs = -0.148, R 2 = 0.02, p = 0.041). Overall, A. taxiformis from temperate zones shows higher bioactivities (EC 50 = 32 ± 3 µg mL -1 , mean ± SE) than that from tropical zones (85 ± 5 µg mL -1 , Mann-Whitney, U = 4256, p < 0.001, Figs. 2, 3a). Macroalgal bioactivities also differ significantly according to sampling sites (Kruskal-Wallis, K = 90, p < 0.001, Fig. 2b). EC 50 values ranged from 20 ± 4 µg mL -1 (Azores) to 117 ± 15 µg mL -1 (French Polynesia). Similarly to the macroalgal bioactivities found in the Azores, high values were recorded in France (31 ± 4 µg mL -1 ), Algeria (33 ± 5 µg mL -1 ) and Spain (52 ± 13 µg mL -1 ). In comparison, A. armata sampled in the south of Spain did not show significantly different EC 50 values from A. taxiformis sampled in France, Algeria and Spain (ESP arm , 24 ± 4 µg mL -1 , Steel-Dwass-Critchlow-Fligner post hoc test, p > 0.05).
Macroalgae from Martinique (114 ± 24 µg mL -1 ) and French Polynesia (117 ± 15 µg mL -1 ) showed the lowest bioactivities, whereas macroalgae from Mayotte showed the highest values among the tropical macroalgae (57 ± 8 µg mL -1 ). Asparagopsis taxiformis from other tropical sites (New Caledonia, Réunion and Guadeloupe) exhibited intermediate bioactivities (respectively 72 ± 7 µg mL -1 , 80 ± 10 µg mL -1 , 80 ± 22 µg mL -1 ).
No relationship was established between the spatial pattern of variability in macroalgal bioactivity and macroalgal cover (Rs = 0.061, R 2 = 0.004, p = 0.336). This spatial pattern in macroalgal bioactivities is, however, negatively correlated with SST (Rs = -0.428, R 2 = 0.18, p < 0.001) and PAR (Rs = -0.37, R 2 = 0.14, p < 0.001), which explain 18 and 14% of the overall variability, respectively (Fig. 1; Table 2).
Temporal variation of the macroalgal chemical profiles and bioactivities
Macroalgae from France and Réunion displayed distinct metabotypes that varied in time, with overall a much higher variability recorded in Réunion and no clear pattern of seasonal variation in France (Fig. 3a, b). The PCAs show that the explained inertia is globally higher than in the spatial analysis, with about 19% explained in France and 24% in Réunion. Although we were able to distinguish several metabotypes in these time series, no clear chemomarkers were identified to explain this variability; the divergence likely relies on several minor ions.
The EC 50 values for A. taxiformis from France range from 13 ± 3 to 37 ± 5 µg mL -1 (mean ± SE), revealing a high bioactivity throughout the year (Fig. 4a), the individuals sampled in January 2015 exhibiting the lowest bioactivity recorded for this site (EC 50 = 37 ± 5 µg mL -1 ). This temporal pattern of variability is positively correlated with SST variations (Rs = 0.287, R 2 = 0.08, p = 0.02; PAR, p > 0.05), and negatively correlated with variations in ammonium and nitrate concentrations (Rs = -0.373, R 2 = 0.14, p = 0.003 for NH 4 + ; Rs = -0.435, R 2 = 0.19, p < 0.001 for NO 3 - ; Table 2). In Réunion, macroalgal bioactivities show a higher variability than those recorded for the temperate site (Kruskal-Wallis, K = 29, p < 0.001; Fig. 4b). The lowest values (EC 50 = 204 ± 13 µg mL -1 ) were recorded in January, when the seawater temperature is the highest (monthly mean SST of 27.1 °C, Table S5), whereas the highest values (EC 50 = 14 ± 3 µg mL -1 ) were recorded in July, when the seawater temperature is lower (monthly mean SST of 23.5 °C). There is thus a negative correlation between macroalgal bioactivity and the SST and PAR variability (Rs = -0.729, R 2 = 0.53, p < 0.001 and Rs = -0.532, R 2 = 0.28, p < 0.001, for SST and PAR respectively).
Discussion
Applying LC-MS-based metabolomics to halogenated metabolites
The genus Asparagopsis is known to biosynthesize about one hundred halogenated volatile organic compounds [START_REF] Kladi | Volatile halogenated metabolites from marine red algae[END_REF]. Whereas the major metabolites are assumed to be low molecular weight brominated compounds (Paul et al. 2006a), the metabolomic approach used in this study mostly detected non-halogenated metabolites. This might be explained by the volatility of these small compounds, which are mainly detected using GC-MS analysis. Higher molecular weight metabolites with six carbons, named mahorones, were reported from this species [START_REF] Greff | Mahorones, highly brominated cyclopentenones from the red alga Asparagopsis taxiformis[END_REF]. A targeted search for the mahorones in the collected gametophytes revealed the presence of 5-bromomahorone in almost all samples, without any clear pattern of distribution between samples. The second mahorone was not detected, possibly because of difficulties in the ionization process of these molecules, as described by [START_REF] Greff | Mahorones, highly brominated cyclopentenones from the red alga Asparagopsis taxiformis[END_REF].
In this study, brominated and iodinated metabolites were only evidenced by the release of bromide and iodide in the negative mode. Electrospray ionization is strongly dependent on the physical and chemical properties of the metabolites. In negative mode, the detection of halogenated metabolites is not favored, as electrons may be trapped by halogens, rendering halogenated metabolites unstable and undetectable by the mass spectrometer (except for halide ions). A metabolomic approach using HRMS is thus suitable for the detection of the easily ionizable metabolites present in the macroalgae, but it reaches a limitation when the major specialized metabolites are highly halogenated.
Relationship between metabotypes and bioactivities
Although various metabotypes were clearly discriminated, the divergence is due to a high number of minor ions. In this study, the phenotypes of temperate macroalgae, especially A. armata sampled in Spain and A. taxiformis sampled in Spain and Algeria, are mostly distinguished by the presence of some metabolite features, named MF1-MF8, not previously described for these species. Macroalgae sampled in temperate environments evidenced higher bioactivities than those sampled in tropical environments, indicating that macroalgal investment in defense was greater at higher latitudes. So far, temperate A. taxiformis (France, Azores, Spain and Algeria) is mainly represented by the introduced lineage 2 (L2) [START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], suggesting that this NIS can modify its investment in chemical traits. The two species A. taxiformis and A. armata sampled in southern Spain showed phenotypes and bioactivities closer to each other than to those of A. taxiformis sampled at a larger geographic scale. This outcome suggests that macroalgal phenotypes are driven more by environmental factors, at least partly related to microbial communities, than by genetic factors. This phenotypic variability related to the exposome was already known at the morphological level, as a given genetic lineage or population can include various morphotypes [START_REF] Dijoux | La diversité des algues rouges du genre Asparagopsis en Nouvelle-Calédonie : approches in situ et moléculaires[END_REF][START_REF] Monro | The evolvability of growth form in a clonal seaweed[END_REF], and morphotype variability could never be explained by genetics [START_REF] Dijoux | La diversité des algues rouges du genre Asparagopsis en Nouvelle-Calédonie : approches in situ et moléculaires[END_REF]. Although MS metabolomics has been applied with success to a rather large number of chemotaxonomic or chemosystematic works, this study did not allow discrimination of the macroalgal lineages. The unusual ionization behavior of the major, highly halogenated specialized metabolites produced by these species might be one of the main explanations, thus calling for other technical approaches. Besides metabolomics, bioactivity assessment using the Microtox® assay appeared as a relevant complement to our MS approach in order to detect putative shifts in macroalgal chemical diversity and its related bioactivity.
In addition, the results of the Microtox® analyses, used as a proxy of the production of chemical defenses, are not in accordance with the Latitudinal Gradient Hypothesis (LGH) described on land, where plants allocate more to defensive traits at lower latitudes. They also show that environmental factors are driving forces that can strongly influence the specialized metabolism and its related bioactivity or putative ecological function [START_REF] Pelletreau | New perspectives for addressing patterns of secondary metabolites in marine macroalgae[END_REF][START_REF] Puglisi | Marine chemical ecology in benthic environments[END_REF][START_REF] Putz | Chemical defence in marine ecosystems[END_REF]. A higher herbivory pressure in tropical ecosystems than in temperate ones may be related to the species richness and biomass of tropical ecosystems [START_REF] Brown | Why are there so many species in the tropics?[END_REF][START_REF] González-Bergonzoni | Meta-analysis shows a consistent and strong latitudinal pattern in fish omnivory across ecosystems[END_REF], but also to a stronger resistance of herbivores to plant metabolites [START_REF] Craft | Biogeographic and phylogenetic effects on feeding resistance of generalist herbivores toward plant chemical defenses[END_REF]. A previous study conducted with A. armata demonstrated that an increase in toxicity towards bacteria was related to the amount of bioactive halogenated compounds (Paul et al. 2006a). The halogenation process was also shown to be determinant for the deterrence of non-specialized herbivores (Paul et al. 2006b). However, only few grazers are recognized to feed on Asparagopsis: the sea hare Aplysia parvula (Paul et al. 2006b;[START_REF] Rogers | Ecology of the sea hare Aplysia parvula (Opisthobranchia) in New South Wales, Australia[END_REF][START_REF] Vergés | Sex and life-history stage alter herbivore responses to a chemically defended red alga[END_REF]) and the abalone Haliotis rubra (Paul et al. 2006b;Shepherd and Steinberg 1992 in Paul 2006) are known to graze A. armata, and only the sea hare Aplysia fasciata was reported to feed on A. taxiformis [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF].
Competition for space might also promote defensive traits, as macroalgae can be abundant in temperate infralittoral zones [START_REF] Mineur | European seaweeds under pressure: Consequences for communities and ecosystem functioning[END_REF][START_REF] Vermeij | Biogeography and adaptation: Patterns of marine life[END_REF]. For A. taxiformis in the Mediterranean Sea, the pressure of competition might be expected to be rather high in spring, when productivity is the highest [START_REF] Pinedo | Seasonal dynamics of upper sublittoral assemblages on Mediterranean rocky shores along a eutrophication gradient[END_REF], but our temporal survey showed rather similar bioactivities throughout the year. Even if macroalga-macroalga interactions can induce the biosynthesis of defensive metabolites, it remains difficult to explain why A. taxiformis maintains such a high level of defensive traits when these interactions are supposed to decrease. Competition is closely related to light availability. At temperate latitudes, the photophilic community is generally more bioactive than hemisciaphilic communities, indicating that light plays a key role in bioactivity and in the biosynthesis of defense-related metabolites [START_REF] Martí | Seasonal and spatial variation of species toxicity in Mediterranean seaweed communities: correlation to biotic and abiotic factors[END_REF][START_REF] Mtolera | Stress-induced production of volatile halogenated organic compounds in Eucheuma denticulatum (Rhodophyta) caused by elevated pH and high light intensities[END_REF]. [START_REF] Paul | The ecology of chemical defence in a filamentous marine red alga[END_REF] demonstrated that the production of specialized metabolites was not costly for A. armata when light is not limited, as biosynthesis was positively correlated with growth. Yet, light is scarcely limiting except when competition with fleshy macroalgae reaches a maximum. Under the tropics, high irradiance should lead to the synthesis of defense metabolites, as revealed for the Rhodophyta Eucheuma denticulatum [START_REF] Mtolera | Stress-induced production of volatile halogenated organic compounds in Eucheuma denticulatum (Rhodophyta) caused by elevated pH and high light intensities[END_REF], but excessive irradiance may also stress macroalgae, leading to a shift in biosynthesis [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF].
Taking all these factors together, the seasonal variation of bioactivity can give some clues. Surprisingly, the highest variation of macroalgal bioactivities was displayed by A. taxiformis in the tropical region (Réunion), although seasonality there is quite weak, with a narrow temperature range (5 °C) and high irradiance throughout the year. During austral winter (SST of 23-24 °C), A. taxiformis from Réunion showed bioactivities equivalent to those of temperate zones, whereas the lowest bioactivities were displayed in austral summer, when the water temperature was higher (26-27 °C). Asparagopsis taxiformis thermal tolerance was tested up to 30 °C (Padilla-Gamino and Carpenter 2007). However, the high temperatures coupled to high irradiance and low nutrient levels that characterize tropical environments [START_REF] Vermeij | Biogeography and adaptation: Patterns of marine life[END_REF] may lead to metabolic alterations, as suggested by [START_REF] Cronin | Effects of light and nutrient availability on the growth, secondary chemistry, and resistance to herbivory of two brown seaweeds[END_REF]. Thus, a way to explain the higher defensive traits in temperate environments is to consider that maintaining a high level of defensive traits may not be so costly as long as light and nutrients are available and temperature remains physiologically adequate.
Relationship between macroalgal bioactivities and invasiveness
In temperate regions, A. taxiformis has been reported as recently introduced in many places. In the Azores, A. taxiformis spread all around the islands until the late 1990s and is now well established [START_REF] Cardigos | Non-indigenous marine species of the Azores[END_REF][START_REF] Chainho | Non-indigenous species in Portuguese coastal areas, coastal lagoons, estuaries and islands[END_REF][START_REF] Micael | Tracking macroalgae introductions in North Atlantic oceanic islands[END_REF]. The last report on the worldwide distribution of A. taxiformis genetic lineages confirmed the presence of the introduced L2 on two Azorean islands [START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF]. In the Western Mediterranean Sea, only L2 has been recorded so far [START_REF] Andreakis | Asparagopsis taxiformis and Asparagopsis armata (Bonnemaisoniales, Rhodophyta): Genetic and morphological identification of Mediterranean populations[END_REF][START_REF] Andreakis | Phylogeography of the invasive seaweed Asparagopsis (Bonnemaisoniales, Rhodophyta) reveals cryptic diversity[END_REF][START_REF] Andreakis | High genetic diversity and connectivity in the polyploid invasive seaweed Asparagopsis taxiformis (Bonnemaisoniales) in the Mediterranean, explored with microsatellite alleles and multilocus genotypes[END_REF][START_REF] Dijoux | The more we search, the more we find: discovery of a new lineage and a new species complex in the genus Asparagopsis[END_REF], but an invasive trait is not recorded everywhere [START_REF] Zenetos | Alien species in the Mediterranean Sea by 2012. A contribution to the application of European Union's Marine Strategy Framework Directive (MSFD). Part 2[END_REF]. In the Alboran Sea, where this species has spread quickly since the late twentieth century, A. taxiformis can form monospecific stands in several places along the Iberian coasts [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Altamirano | New records for the benthic marine flora of Chafarinas Islands (Alboran Sea, Western Mediterranean)[END_REF]. In the same biogeographic region, a high cover of A. taxiformis was also recorded off the Algerian coast, while the macroalga is poorly distributed at Ceuta and the Strait of Gibraltar. Thus, for the widespread lineage 2, which is considered as invasive in some regions of the Mediterranean Sea and North Atlantic [START_REF] Altamirano | The invasive species Asparagopsis taxiformis (Bonnemaisoniales, Rhodophyta) on Andalusian coasts (Southern Spain): Reproductive stages, new records and invaded communities[END_REF][START_REF] Micael | Tracking macroalgae introductions in North Atlantic oceanic islands[END_REF][START_REF] Streftaris | Alien marine species in the Mediterranean-the 100 'Worst Invasives' and their impact[END_REF], we observed highly variable fates within the indigenous benthic communities, the alga proliferating in some places and remaining rather discreet in others. This observation can be extended to the different lineages present in other geographic contexts, which tends to indicate no link between macroalgal bioactivities, their metabotypes and their invasiveness.
It is likely that other physiological traits, compared with those of indigenous sessile organisms, may explain its success in certain habitats, such as a higher efficiency in nutrient uptake, in the dispersal of a specific life-cycle stage, or in resistance to environmental stress owing to the polyploid status of the thalli.
Fig. 1 Principal component analysis (PCA) of methanolic macroalgal extracts analyzed by UHPLC-QqToF (positive and negative modes) according to zones (temperate versus tropical) and sites. PYF: French Polynesia, MTQ: Martinique, GUA: Guadeloupe, MYT: Mayotte, REU: Réunion, NCL: New Caledonia, AZO: Azores, FRA: France, DZA: Algeria; ESP: Spain with A. taxiformis (ESP tax ) and A. armata (ESP arm )
Fig. 2 Mean bioactivities (± SE) of methanolic macroalgal extracts measured with the Microtox® ecotoxicological assay according to (a) sampling zones (tropical vs. temperate) and (b) sites: PYF: French Polynesia, NCL: New Caledonia, REU: Réunion, MYT: Mayotte, MTQ: Martinique, GUA: Guadeloupe, AZO: Azores, FRA: France, DZA: Algeria, ESP: Spain. Numbers of samples tested are written in the bars. Comparisons between zones were achieved with the Mann-Whitney test; comparisons between sites were achieved with the Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post hoc test. Letters indicate differences between groups
Fig. 3 Principal component analysis (PCA) of methanolic macroalgal extracts analyzed by UHPLC-QqToF according to temporal variation for two sites/zones: (a) France for the temperate zone and (b) Réunion for the tropical zone. EC 50 (in µg mL -1 ) of A. taxiformis opposite to
Fig. 4 Mean bioactivities (± SE) of methanolic macroalgal extracts measured with the Microtox® assay according to season at (a) La Ciotat (France) and (b) Saint Leu (Réunion). Numbers of samples tested are written in the bars. Comparisons were achieved using the Kruskal-Wallis test followed by the Steel-Dwass-Critchlow-Fligner post hoc test. Letters indicate differences between groups
Table 1 Sampling sites of Asparagopsis spp. around the world for the study of spatial and temporal variation of macroalgal bioactivities and chemical phenotypes. Columns: zone, site, station, most probable lineage, probability of introduction (a), cover (b), sampling date, latitude, longitude, depth and sampling effort for A. taxiformis and A. armata (289 individuals in total).
Spatial variation, temperate zone: Azores (L2); Algeria, Ilôt de Bounettah (L2); Spain, La Herradura (L2), Ceuta Ciclon de Tierra (L2) and Ceuta Ciclon de Fuera (L2); France, La Ciotat (L2).
Spatial variation, tropical zone: Guadeloupe, Caye a Dupont (L3); Martinique, Anses d'Arlet (L3); Mayotte, Aéroport and Kani kéli (L4); Réunion, Saint Leu Ravine (L4); New Caledonia, Ilot Canard, Dumbéa, Touho, Koumac Kendec and Bourail (L5); French Polynesia, Paea, Faaa1, Faaa2, Taapuna (L4) and Mangareva (L4 (L5)).
Temporal variation: France, La Ciotat-Mugel (L2, repeated sampling between November 2013 and April 2015); Réunion, Saint Leu Ravine (L4, four dates between October 2012 and July 2013).
a According to the definition of an "introduction" by [START_REF] Boudouresque | Les espèces introduites et invasives en milieu marin, 3 edn[END_REF]: transportation favored directly or indirectly by humans, biogeographical discontinuity with the native range, and establishment (self-sustaining population).
b Cover: low (up to 35%), medium (from 35 to 65%), high (over 65%). In summary, a high probability of introduction together with a high cover is considered equivalent to high invasiveness.
Table 2 Spearman's matrix of correlations between the dependent variable (bioactivity) and the independent variables (SST: sea surface temperature, PAR: photosynthetically active radiation, NH 4 + : ammonium concentration, NO 3 - : nitrate concentration, PO 4 3- : phosphate concentration). Bold numbers show significant values at the level α ≤ 0.05; coefficients of determination (Spearman R 2 ) are given in brackets.
Correlations with bioactivity:
Spatial pattern: SST -0.428 (0.18); PAR -0.370 (0.14)
Temporal pattern, France: SST 0.287 (0.08); PAR 0.175 (0.03); NH 4 + -0.373 (0.14); NO 3 - -0.435 (0.19); PO 4 3- 0.079 (0.06)
Temporal pattern, Réunion: SST -0.729 (0.53); PAR -0.532 (0.28)
|
01764115 | en | [
"shs.eco"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764115/file/substituability_EJHET_7March2018_pour%20HAL.pdf | Jean-Sébastien Lenfant
Substitutability and the Quest for Stability; Some Reflexions on the Methodology of General Equilibrium in Historical Perspective
Keywords: stability, general equilibrium, gross substitutability, substitutability, complementarity, Hicks (John Richard), law of demand, Sonnenschein-Mantel-Debreu, methodology B21, B23, B41, C62
niveau recherche, publiés ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
1 The heuristic value of the substitutability assumption
It is common view that the aims of general equilibrium theory have been seriously disrupted and reoriented after the famous Sonnenschein-Mantel-Debreu theorems.
The hopes for finding general sufficient conditions under which a tâtonnement process is stable for a competitive economy have turned into dark pessimism, and even to disinterest. The story of the stability issue as a specific research program within GET is rather well known. [START_REF] Ingrao | The invisible hand: economic equilibrium in the history of science[END_REF] have identified the steps and boundaries (see also [START_REF] Kirman | Demand theory and general equilibrium: from explanation to introspection, a journey down the wrong road[END_REF]). Some scholars have dealt more specifically with the issue of dynamics, establishing connections with the history of general equilibrium theory [START_REF] Weintraub | Appraising general equilibrium analysis[END_REF], [START_REF] Weintraub | Stabilizing Dynamics: constructing economic knowledge[END_REF], [START_REF] Hands | Restabilizing dynamics: Construction and constraint in the history of walrasian stability theory[END_REF]).
In contrast, the methodological appraisal of this story has not been pushed very far. Hands (2010 and[START_REF] Hands | Derivational robustness, credible substitute systems and mathematical economic models: the case of stability analysis in walrasian general equilibrium theory[END_REF] provides insights regarding the notion of stability of consumer's choice and revealed preference theory in relation with the stability of general equilibrium, but his aim is not to provide a comprehensive analysis of the stability issue. Hence, as a shortcut to the history of stability, the most common opinion on the subject [START_REF] Guerrien | Concurrence, flexibilité et stabilité: des fondements théoriques de la notion de flexibilité[END_REF][START_REF] Rizvi | Responses to arbitrariness in contemporary economics[END_REF][START_REF] Bliss | Hicks on general equilibrium and stability[END_REF] credits the famous Sonnenschein-Mantel-Debreu theorems for having discarded any serious reference to the invisible hand mechanism to reach a competitive market equilibrium. However, one can find a slightly different position regarding the stability literature in [START_REF] Ingrao | The invisible hand: economic equilibrium in the history of science[END_REF]. According to them, from the very beginning of the 1960s, mathematical knowledge on dynamical systems and some well known instability results [START_REF] Scarf | Some examples of global instability of the competitive equilibrium[END_REF][START_REF] Gale | A note on global instability of competitive equilibrium[END_REF] made stability researches already a vain task.
The gap between those two positions is not anecdotal. Firstly, according to the stance adopted, the place of the SMD results is not the same, both theoretically and from a symbolic point of view. Secondly, there are methodological consequences at stake on the way we can represent the development of general equilibrium theory, and more specifically, on the kind of methodological principles at work in a field of research characterized first of all by strong mathematical standards.
The aim of this paper is to identify some methodological principles at stake in the history of the stability of a competitive general equilibrium. More precisely, I would like to identify some criteria, other than mere analytical rigor, that were in use to direct research strategies and to evaluate and interpret theorems obtained in this field. This methodological look at the stability literature may lead to a more progressive view of the history, where results are modifying step by step the feelings of mathematical economists on the successes and failures of a research program.
My aim in this article is to provide a first step into the history of stability of a Walrasian exchange economy, taking Hicks's Value and capital (1939) as a starting point. To this end, I will put in the foreground the concept of substitutability. Indeed, substitutability has been a structuring concept for thinking about stability. It is my contention here that the concept of substitutability helps to provide some methodological thickness to the history of general equilibrium theory, not captured by purely mathematical considerations. Hence, I uphold that it allows identifying some methodological and heuristic constraints that were framing the interpretation of the successes and failures in this field.
It is well known that a sufficient condition for local and global stability of the Walrasian tâtonnement in a pure exchange economy is the gross-substitutability assumption (GS) (i.e., that all market excess demands increase when the price of other goods increases). By reconstructing the history of stability analysis through the concept of substitutability, I uphold that the representations attached to substitutability constituted a positive heuristic for the research program on stability. Therefore, tracking the ups and downs of this concept within general equilibrium provides some clues to appraise the methodology of general equilibrium in historical perspective and to account for the rise and fall of stability analysis in general equilibrium theory.
The research progam on stability of a competitive general equilibrium is by itself rather specific within GET, and bears on other subfields of GET such as uniqueness and comparative statics. It is also grounded on some views about the meaning of the price adjustment process. Stability theorems are for the most part of them not systematically microfounded: they are formulated at first as properties on the aggregate excess demands (such as gross substitutability, diagonal dominance, weak axiom of revealed preference), and their theoretical value is then assessed against their descriptive likelyhood and heuristic potential, and not against their compatibility with the most general hypotheses of individual rationality. The paper upholds that the concept of substitutability, as a tool for expressing market interdependencies, was seen as a common language to mathematical economists, rich enough to develop a research program on stability and to appraise its progress and failures.
An ever recurring question behind different narratives on GET revolves around the principles explaining the logic of its development, the fundamental reasons that explain GET was a developing area of research in the 1950s-1960s while it became depreciated in the 1970s. Adopting a Lakatosian perspective, one would say that the research program on GET was progressive in the 1960s and became regressive in the 1970s. Even this question assumes that we (methodologists, theorists, historians) agree upon the idea that GET is functioning as a research program and that it went trough two different periods, one during which new knowledge accumulated and one that made new "positive" results hopeless, even devaluating older results in view of new ones.
Recent trends in economic methodology have left behind the search for such normative and comprehensive systems of interpretation of the developments of economic theories. They focus instead on economics as a complex or intricate system of theories, models, fields of research, each (potentially) using a variety of methods as rationalizing and exploratory tools (econometrics and statistical methods applied to various data, simulations, experiments). The first goal of methodological inquiries is then to bring some order into the ways those various tools and methods are applied in practice, how they are connected (or not) through specific discourses, what are the rationales of the practitioners themselves when they apply them.
As far as we are concerned here, the question lends itself how GET can be grasped as an object of inquiry in itself, and more precisely how a field of questionings and research within this field-the stability of a competitive system-can be analyzed both as an autonomous field and in connection with other parts of GET. The present contribution does not claim to provide the structuring principle that explains the ups and downs of the researches on the stability of general competitive equilibrium. It is too evident that various aspects of this research are connected with what is taking place in other parts of the field. First, the kind of mathematical object which is likely to serve as a support for discussing about stability is not independent of the choice of the price-adjustment process that is used to describe the dynamics of the system when it is out-of-equilibrium. Hence, the explanatory power of a set of assumptions (about demand properties) is not disconnected from the explanatory power of another set of assumptions (the price adjustment process), which himself has to be connected with the assumptions about agents behavior, motivations and perception of their institutional environment (e.g. price taking behaviors, utility maximizing and profit maximizing assumptions). In a sense, while it is useful to analyze the proper historical path of researches on the stability of a Walrasian tâtonnement with a methodological questioning in mind, the historian-methodologist should be aware that various rationales are likely to play a role in its valuation as a relevant or anecdotal result. Second, the kind of assumptions made on a system of interdependent markets will have simultaneous consequences on different subfields of GET. An all too obvious example is that Gross Substitutability is both sufficient for uniqueness and global stability of a competitive equilibrium and allows for some comparative statics theorems. Third, the simple fact of identifying an autonomous subfield of research and to claim that it is stable enough through time to be analyzed independently of some internal issues that surface here and there, is something that needs questioning. I have in mind the fact that it is not something quite justified to take the stability of a competitive equilibrium as a historically stable object on which we may apply confidently various methodological hypotheses. There is first the question of delimiting the kind of tools used to describe such a competitive process. Certainly the Walrasian Tâtonnement (WT) has been acclaimed as the main tool for this, but the methodological rationale for it needs be considered in detail to account for the way theorists interpret the theorems of stability. One set of question could be: What about similar theorems when non-tâtonnement processes are considered? Why discard processes with exchanges out of equilibrium? Another set would be: Why not considering that the auctioneer takes into account some interdependencies on the market to calculate new prices? Should we search for stability theorems that are independent of the speed of adjustment on markets? Should stability be independent of the choice of the numéraire?
It is my contention here that the methodology of economics cannot hope to find out one regulatory principle adapted for describing and rationalizing the evolutions of a field of research when the studied object is by itself under a set various forces from inside and outside that make it rather unstable. If I do not claim for an explanatory principle of the research on stability theorems, what does this historical piece of research pretend to add to existing litterature? It provides a principle that is in tune with most recent research on the methodology of GET, as exemplified in [START_REF] Hands | Derivational robustness, credible substitute systems and mathematical economic models: the case of stability analysis in walrasian general equilibrium theory[END_REF]. It argues that the mathematical economists involved in the search for stability theorems adopted a strategy that focused on the ability to provide an interpretative content to their theorems, which by itself was necessary to formulate ways of improvement and generalization. In this respect, the concept of substitutability offered a way to connect the properties of individual behaviors with system-wide assumptions (such as GS) and to appraise those assumptions as more or less satisfactory or promising, in consideration of the kind of interpretable modifications that can be elaborated upon, using the language of substitutability. In so doing, using economically interpretable and comparable sets of assumptions is presented as a criteria for valuating theorems, confronting them and fostering new research strategies; while at the same time it does not pretend to exhaust the reasons for interpreting those results with respect to the developments in other fields of GET. Even if the language of substitutability would lead to some new results (in the 1970s-1990s), their valuation would become too weak in comparison with what was expected as a satisfactory assumption after the critical results obtained by Sonnenschein, Mantel and Debreu. The paper aims at putting some historical perspective on how the concept of substitutability failed to convey enough economically interpretable and fruitful content.
The paper is organised as follows. Section 2 deals with Hicks' Value and capital (1939) and its subsequent influence on stability issues until the middle of the 1950s. During this time span, stability is linked intimately with the search for comparative static results. It is a founding time for the heuristic of substitutability, and more precisely for the idea of a relation between substitutability and stability. (2. Stability and Substitutability: A Hicksian Tradition). With the axiomatic turn of GET, there are hopes for finding relevant conditions of stability. On the one hand, substitutability remains a good guiding principle, while on the other hand, the first examples of instability are presented, making findings of reasonable conditions of stability more urgent. (3. From Gross Substitutability to instability examples). The last time period in this story is much more uneasy and agitated. It is characterised with hidden pessimism and with difficulties in making substitutability a fruitful concept for stability theorems. One among other results, the SMD theorems come as a confirmation that the search for stability of the Walrasian tâtonnement is a dead-end. But as we will see, it is not the only result that played a role in the neglect of stability analysis (4. The end of a research program). As a conclusion, I provide an evaluation of the SMD results and of their consequences within the context of many other results (5. Concluding comments).
Stability and substitutability: a Hicksian tradition
In Value and Capital (1939), Hicks makes a systematic use of substitutes and complements to express stability conditions. He upholds a narrow link between stability and substitutability, giving to the concept of substitutability an explanatory value of the stability of market systems and praising its qualities to describe the main features of market interdependencies. This view would imprint the future of the search for stability conditions. I will first present Hicks' ideas on stability and substitutability (2.1 Stability and substitutability according to Hicks). Then, I show how a Hicksian tradition in GET was established in the 1940s and 1950s (2.2 A Hicksian tradition).
Stability and substitutability according to Hicks
Let's remind first some technical definitions. In Value and Capital, Hicks provides a definition of substitutes and complements on the basis of the [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF] fundamental equation of value. The Hicks-Slutsky decomposition of the derivative of the demand for good i, x i with respect to the price of a good j, p j is:
\frac{\partial x_i(p, r)}{\partial p_j} = \frac{\partial h_i(p, u)}{\partial p_j} - x_j \frac{\partial x_i(p, r)}{\partial r} \qquad (1)
with r the income of an agent, x i (p, r) the Marshallian demand for i, h i (p, u) the compensated demand (or Hicksian demand) for i where u is the (indirect) level of utility attainable with (p, r), noted v(p, r).
From (1) we say that i and j are net substitutes, independent or net complements if the change in the compensated demand of i due to a change in p j is positive, null, or negative:
\frac{\partial h_i(p, v(p, r))}{\partial p_j} \gtreqless 0 \qquad (2)
From the equation (1) we say that i and j are gross substitutes, independent or gross complements if the change in the Marshallian demand of i due to a change in p j verifies
\frac{\partial x_i(p, r)}{\partial p_j} \gtreqless 0 \qquad (3)
At an aggregated level, definitions (2) and ( 3) are usable for a general description of substitution between different markets, and equation (1) can serve to discuss the direction and the strength of revenue effects.
As can be inferred from (1), two goods may be locally net substitutes (resp. net complements) and gross complements (resp. gross substitutes), depending on the direction and magnitude of income effects in the Slutsky-Hicks decomposition. What is true at the individual level is also true at the aggregate. Hence, as is well known, the symmetry property of Hicksian demand functions,
\frac{\partial h_i(p, r)}{\partial p_j} = \frac{\partial h_j(p, r)}{\partial p_i},
is not true for Marshallian demands, except of course when income effects can be neglected.
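To make these definitions concrete, the following numerical sketch (Python) is offered purely as an illustration and is not part of the original argument: it computes, for a simple Cobb-Douglas demand system chosen here as an assumption, the Marshallian cross-price derivative and the Slutsky substitution term, showing that the goods are net substitutes but gross independent.

```python
import numpy as np

def marshallian(p, r, a):
    """Cobb-Douglas demands x_i = a_i * r / p_i (budget shares a sum to 1)."""
    return a * r / p

def slutsky_term(p, r, a, i, j, eps=1e-6):
    """Return (dx_i/dp_j, S_ij) with S_ij = dx_i/dp_j + x_j * dx_i/dr."""
    x = marshallian(p, r, a)
    dp = p.copy(); dp[j] += eps
    dxi_dpj = (marshallian(dp, r, a)[i] - x[i]) / eps      # Marshallian cross-price derivative
    dxi_dr = (marshallian(p, r + eps, a)[i] - x[i]) / eps  # income derivative
    return dxi_dpj, dxi_dpj + x[j] * dxi_dr

p = np.array([1.0, 2.0, 4.0])
a = np.array([0.5, 0.3, 0.2])
r = 100.0
gross, net = slutsky_term(p, r, a, i=0, j=1)
print(f"dx_0/dp_1 = {gross:.4f}  (zero: gross independent)")
print(f"S_01      = {net:.4f}  (positive: net substitutes)")
```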
In 1874, Walras had launched the idea of a sequential and iterative process -a tâtonnement-to model the price dynamics on competitive markets and to establish the possibility for such idealized markets to "discover" by groping the equilibrium-whose existence was theoretically assumed by the equality of equations and unknowns in the model. Walras would also connect the tâtonnement with some comparative static results. 1In Value and Capital, Hicks reinstates Walrasian general equilibrium analysis, which had been deemed fruitless by Marshall. 2 This renewal of interest for general equilibrium, it is worth noting, arises precisely from the availability of new tools to analyze choice and demand, notably the Slutsky equation, hence also the distinction between income and substitution effects and the new definition of substitutes and complements build from it. 3 In Hicks's view, even more certainly than for Walras, there is no doubt that the law of supply and demand leads the economy to an equilibrium. Hicks will follow Walras's reasoning on stability, with the aim of providing a precise mathematical account for it and to discuss with much more attention the effect of interdependencies between markets. Since the first part of the analysis proceeds in an exchange economy, the Slutsky equation then becomes:
\frac{\partial z_i(p, r)}{\partial p_j} = \frac{\partial h_i(p, u)}{\partial p_j} - z_j \frac{\partial x_i(p, r)}{\partial r} \qquad (4)
This leads to the well-known distinction between perfect and imperfect stability and its mathematical treatment in the Appendix of Value and capital. Consider the Jacobian matrix of the normalized system of n goods, that is, the matrix JZ containing all the cross derivatives of excess demand functions relative to all prices [z ij (p )], i, j ∈ [1, ...n] (the price of good n + 1 being set equal to 1). Stability will be perfect if the principal minors of JZ, calculated at equilibrium p , alternate in sign, the first one being negative. The system is imperfectly stable if only the last of these determinants respects the sign condition.
Hicks's analysis proceeds from the generalization of the results obtained in a two-good economy. He thinks that, except for particular cases, income effects to buyers and sellers on each market should tend to compensate each others: Therefore, when dealing with problems of the stability of exchange, it is a reasonable method of approach to begin by assuming that income effects do cancel out, and then to inquire what difference it makes if there is a net income effect in one direction or the other. (Hicks, 1939, 64-65) Thus, actually, through this thought experiment, the Jacobian of the system is identical to the matrix of substitution effects (the Slutsky matrix) since income effects on each market-i.e. associated to each price variation-are assumed to cancel out. And after a rather clumsy discussion about introducing income effects in the reasoning, Hicks comes to the following conclusion:
To sum up the negative but reassuring conclusions which we have derived from our discussion of stability. There is no doubt that the existence of stable systems of multiple exchange is entirely consistent with the laws of demand. It cannot, indeed, be proved a priori that a system of multiple exchange is necessarily stable. But the conditions of stability are quite easy conditions, so that it is quite reasonable to assume that they will be satisfied in almost any system with which we are likely to be concerned. The only possible ultimate source of instability is strong asymmetry in the income effects. A moderate degree of substitutability among the bulk of commodities will be sufficient to prevent this cause being effective. (Hicks, 1939, 72-73, emphasis mine) them as a relationship between two goods as regards a third one (or money). [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF] did not provide a new definition, which, he suggested, would have been disconnected from human feelings What kind of substitutability is refereed to here? That the goods are net substitutes to one another. Consequently, the argument goes, symmetrical revenue effects at the aggregate level will have only a weak effect compared with the aggregate substitution effect, so that the Jacobian matrix is approximately symmetric. In so doing, Hicks develops a descriptive and explicative point of view on stability, and substitutability is given a prominent role. Substitutability is entrusted to produce a stylised representation of the interdependencies between markets, likely to receive a validation a priori. Thus, the idea that substitutes are dominating over the system is regarded as a natural and virtuous property of the economic system.
In the wake of Samuelson's discarding of Hicks' mathematical treatment of stability, there has been a tendency to evaluate Hick's analysis of stability exclusively from the standpoint of the mathematical apparatus of Value and capital, i.e. from the perfect/imperfect stability distinction, with a view of pinpointing its wrong mathematical conclusions. Instead, to our story, it is worth insisting that Hicks' reasoning in the text provides insights about the importance of substitutability as a structuring device. The heart of Hicks' reasoning, actually, is a discussion of interdependencies in a three-good case.
The gist of the discursive argument about stability of multiple markets in Value and Capital is in Chapter V, § §4-5, after Hick's introduction of the distinction between perfect and imperfect stability. It is to be noted also that the perfect/imperfect stability distinction is aimed at being powerful to consider cases of intertemporal non clearing Keynesian equilibria.
Here, the main question is whether an intrinsically stable market (say of good X) can be made unstable through reactions of price adjustments on other markets (themselves being out of equilibrium following the initial variation of the price p x and the subsequent reallocations of budgets). The interactions between markets for X and Y (T being the third composite commodity) are first studied under the assumption that net income effects can be neglected. Hicks discusses the case of price elasticities of excess demand for Y on the excess demand for X and T . He ends with the reassuring idea that in the three-good case, and even more when the number of goods widens, cases of strong complementarity are seldom and X will most of the time be "mildly substitutable" with most of the goods constitutive of the composite commodity T . 4Hence, the whole discussion of the mathematical apparatus is conducted through the idea of neglecting asymmetric income effects, focusing on complementarity relations to deal with instability and on a reasonable spread of substitutability to ensure stability. This latter argument would be corrected in the second edition of Value and Capital. Our point, here, is that from a heuristic or interpretative standpoint, Hicks' overall discussion is biased not specifically by its mathematical treatment, which is constrained by focusing on symmetrical systems; it is also biased by the strong separation between discussion of net complementarity and substitution on the one side and discussion of income effects on the other side. Hicks' discursive focus on complementarity has two opposite effects. First, it introduces the language of substitutability as the prominent device to discuss stability issues (as will be the case again when discussing intertemporal equilibrium). Second, it isolates the analysis of substitutability from the one on income effects, thus introducing a strong separation between arguments in terms of income effects and arguments in terms of substitutability to deal with stability analysis.
The establishment of a Hicksian tradition on stability analysis
Hicks's analysis of system stability was first challenged by Samuelson (1941, 1942, 1944), 5 who rejected his method and results. Samuelson's criticism was the starting point for a series of restatements of Hicks's intuitions by Lange (1940, 1944), Mosak (1944), Smithies (1942) and Metzler (1945). Hicks's views and intuitions were partially rescued, these authors pointing out their usefulness for thinking about stability issues. This led to establishing the language of substitutability as a heuristically fruitful way of thinking about the stability of general equilibrium. However, Hicks's narrow view, which concentrated on net substitutability, was abandoned, and the analysis would now be conducted in terms of gross substitutes and complements.

According to Samuelson, Hicksian stability is a static approach to stability. It consists mainly in an attempt "to generalize to any number of markets the stability conditions of a single market" (Samuelson, 1947, 270). Instead, Samuelson (1941) proposes the first truly dynamic expression of the Walrasian tâtonnement as a simultaneous process of price adjustment on all markets. It takes the form of a system of differential equations, $\dot{p}_i = H_i(z_i(p))$, $H_i$ being a sign-preserving, monotonically increasing function (actually, Samuelson considers the simple case $\dot{p}_i = z_i(p)$). Such a system is stable if and only if the real parts of the eigenvalues of the Jacobian $[z_{ij}]$ of the system are all negative. More precisely, Samuelson introduces a distinction between local stability and global stability of an equilibrium price vector $p^*$. The former obtains when the differential equation, applied to a system of demand functions and started in a neighborhood of $p^*$, generates a price path that tends to $p^*$. Global stability obtains if the price path tends to $p^*$ from any initial positive price vector. In the local stability case, the Jacobian can be approximated by linearization around $p^*$, and the tâtonnement process is stable if and only if the real parts of all the characteristic roots of the matrix are strictly negative. Samuelson would show that the conditions of Hicksian perfect stability are neither necessary (Samuelson, 1941) nor sufficient (Samuelson, 1944) for local linear stability (the same holds, a fortiori, for Hicksian imperfect stability). Hence, to Samuelson, Hicksian matrices (i.e. matrices with principal minors alternating in sign, starting with a negative sign) are not a sound starting point for thinking about stability. The main lesson to be drawn from Samuelson's analysis is that taking the income effect seriously is indispensable to a serious analysis of stability. However, one weakness of Samuelson's analysis of stability is that he does not strive to provide interpretable conditions of stability. In the period immediately following Samuelson's reformulation of stability conditions, Metzler, Lange, Mosak and Smithies would rework the Hicksian intuition in line with Samuelson's mathematical apparatus.
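Samuelson's point is easy to illustrate numerically. The sketch below is my own illustration, not anything drawn from Samuelson: it checks, for a given Jacobian of excess demands evaluated at equilibrium, both the Hicksian sign conditions on principal minors and the eigenvalue criterion for local stability of $\dot{p} = z(p)$. The example matrix is purely stylized (it is not derived from underlying preferences); it is dynamically stable and yet fails the Hicksian conditions, which illustrates one half of Samuelson's claim (the conditions are not necessary).

```python
import numpy as np
from itertools import combinations

def is_hicksian(A):
    """Hicksian perfect stability: every principal minor of order k
    of the Jacobian [z_ij] has sign (-1)**k."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(A[np.ix_(idx, idx)]) * (-1) ** k <= 0:
                return False
    return True

def is_dynamically_stable(A):
    """Samuelson's local criterion for dp/dt = z(p): all eigenvalues
    of the Jacobian at p* have strictly negative real parts."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# A stylized Jacobian: dynamically stable, yet not Hicksian because one
# own-price effect is positive (a Giffen-like own effect on market 1).
A = np.array([[ 0.5, -2.0,  0.0],
              [ 2.0, -3.0,  0.0],
              [ 0.0,  0.0, -1.0]])
print(is_hicksian(A), is_dynamically_stable(A))   # False True
```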
Smithies (1942), apparently independently of Samuelson, discussed the stability of an economy under monopolistic price competition and arrived at different necessary and sufficient conditions on the roots of the characteristic polynomial of his system (which, incidentally, followed a sequential process of adjustment): those roots should be less than unity in absolute value. Interestingly, he noted that the advantage of his method over Samuelson's is that his result "leads more readily to general economic interpretation than Mr. Samuelson's method" (Smithies, 1942, 266).

As for Mosak, Lange and Metzler, their investigations on the stability of economic equilibrium circa 1942-1945 can be read as a series of works aiming at developing a Hicksian method for the analysis of Keynesian economic ideas, focusing on the general equilibrium framework and promoting the idea that unemployment can result from durable intertemporal underemployment of some resources. These various contributions deal with international trade, imperfect competition, monetary theory and financial behaviors, building on Hicks's intuitions. Passages dealing with the stability of general static equilibrium are occasions to adapt Hicks's results to the modern treatment of the price adjustment process provided by Samuelson. Lange's (1944) Price Flexibility and Employment is probably the most representative account of these various attempts. He had already identified the theory of complementarity as a topic of debate (Lange, 1940). The main point here is that the interplay of markets depends partly on individuals' behaviors towards money. Indeed, Lange (1940) provided a systematic analysis of complementarity relationships at the market level. He refrains from Hicks's tendency to identify complementarity as a possible cause of instability. Lange also introduces a notion of partial stability of order m, expressing the fact that a system can be stable for a given subset of m prices that are adjusted (m < n). He discusses Hicks's dynamic stability conditions and notes that since Samuelson leaves the derivative of the function H out of the characteristic determinant, this is tantamount to assuming that the flexibility of all prices is the same. He highlights that Hicksian stability makes sense in the case when the Jacobian (the characteristic determinant) is symmetric: then all roots are real, and the Hicksian conditions are necessary and sufficient for perfect stability. 6 The meaning of symmetry, he goes on, is that "the marginal effect of a change in the price $p_s$ upon the speed of adjustment of the price $p_r$ equals the marginal effect of a change in the price $p_r$ upon the speed of adjustment of the price $p_s$" (Lange, 1944, 98). Thus, stability analysis does not require an equal speed of adjustment on each market, but only that the effects of a price change $dp_r$ upon the speed of adjustment $dp_s/dt$ on another market be symmetric. Mosak's exposition of the theory of stability of equilibrium in General-Equilibrium Theory in International Trade also discusses the flaws of Hicks's stability analysis. Its main merit, in this respect, is to operate a shift in the interpretation of stability.
Instead of focusing on the symmetry properties of the Jacobian, the analysis of stability now revolves around the properties of excess demands, which can be discussed either in terms of gross substitutability vs gross complementarity or in terms of asymmetrical vs symmetrical income effects: 7

... If the rate of change of consumption with respect to income is the same for all individuals then this net income effect will be zero. In order that the net income effect should be at all large, $\partial x_s / \partial r$ must be considerably different for buyers of $x_s$ from what it is for sellers. It is not too unreasonable to assume therefore that ordinarily the income effects will not be so large as to render the system unstable. (Mosak, 1944, 42)
Mosak would also mention the assumption that, usually, goods that are net substitutes are also gross substitutes (Mosak, 1944, 45). Metzler (1945) established that under gross substitutability (GS), the conditions of Hicksian stability are the same as the conditions for true dynamic stability. Metzler insists that Hicks's analysis of stability aims at giving some ground to comparative statics results by providing a theory of price dynamics when a system is out of equilibrium. The conclusion that follows from Samuelson's results is that "Hicksian stability is only remotely connected with true dynamic stability" (Metzler, 1945, 279). However, Metzler argues, "the Hicks conditions are highly useful despite their lack of generality" (Metzler, 1945, 279):
In the first place ... Hicks conditions of perfect stability are necessary if stability is to be independent of ... price responsiveness. Second, and more important, in a certain class of market systems Hicksian perfect stability is both necessary and sufficient for true dynamic stability. In particular, if all commodities are gross substitutes, the conditions of true dynamic stability are identical with the Hicks condition of perfect stability. (Metzler, 1945, 279-280).
The idea of taking into account speeds of adjustment on each market is congenial to Samuelson's dynamic stability conditions and was further identified as a defect of Hicks's analysis by Lange (1944). This point is quite interesting since it illustrates how some properties of a mathematical tool can be entrusted with important descriptive qualities. Clearly, requiring that stability conditions not be independent of the speeds of adjustment may be taken as a gain in generality in some sense, but to others it clearly was not, and it appeared as an unnecessary constraint. 8 Given that knowledge of the speeds of adjustment is likely to depend upon specific institutional properties of an economic system, it is desirable to formulate stability conditions in terms that are independent of such speeds. For all that, the fact that the Hicks conditions of perfect stability are necessary in this case does not make them sufficient for stability. At least, when the assumption of gross substitutability is made, Hicksian stability is necessary and sufficient for true dynamic stability. Metzler agrees that this property may not be useful since "almost all markets have some degree of complementarity" (Metzler, 1945, 291). Hence ignoring gross complementarity in the system can lead to "serious errors" (Metzler, 1945, 284). However, Metzler's feeling is in tune with that of other researchers interested in stability analysis, and it upholds the interest of Hicks's fundamental intuitions:

7 Stability can be destroyed only if the market income effects are sufficiently large to overcome the relationships which prevail between the substitution terms. It cannot be destroyed by any possible degree of complementarity.

8 Smithies (1942) analyses the stability of a monopolistic competition framework, starting from the profit-maximization conditions of n producers, each with their own market demand expectations. Each producer changes its price according to a continuous adjustment process proposed by Lange (1935), taking into account the difference between the last period's price expectation and the last period's price.
It is natural to speculate about the usefulness of these conditions for other classes of markets as well. The analysis presented above does not preclude the possibility that the Hicks conditions may be identical with the true dynamic conditions for certain classes of markets in which some goods are complementary. Indeed, Samuelson has previously demonstrated one such case [Samuelson, 1941, 111]; ... Further investigation may reveal other cases of a similar nature. In any event, an investigation which relates the true stability conditions to the minors of the static system will be highly useful, whether or not the final results are in accord with the Hicks conditions. (Metzler, 1945, 292)

In a few words, the main contribution of Metzler's analysis (together with those mentioned above) is to introduce once more the concept of substitutability into the analysis of stability, and to focus on the interpretative content of the analysis. The Hicksian tradition is reformulated around the gross substitutability hypothesis, and this hypothesis is taken as a fruitful point of departure.
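Metzler's equivalence between Hicksian and true dynamic stability under gross substitutes can be checked numerically - a simulation is of course only an illustration, not a proof. The sketch below (my own) draws random Jacobians with negative own-price effects and positive cross-price effects, i.e. the gross-substitute sign pattern, and verifies that the Hicksian sign conditions and the eigenvalue test always deliver the same verdict.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def is_hicksian(A):
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(idx, idx)]) * (-1) ** k > 0
               for k in range(1, n + 1) for idx in combinations(range(n), k))

def is_stable(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Random 'gross substitute' Jacobians: negative own, positive cross effects.
agree = 0
for _ in range(2_000):
    A = rng.uniform(0.0, 1.0, (4, 4))               # positive cross-price effects
    np.fill_diagonal(A, -rng.uniform(0.5, 3.0, 4))  # negative own-price effects
    agree += is_hicksian(A) == is_stable(A)
print(agree)   # 2000: under the GS sign pattern the two criteria never disagree
```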
To conclude on this first group of works on stability, one can say that the concept of substitutability was worked out in the 1930s and 1940s so that it could be used to describe and to interpret the main stability properties of an economic system. So, beyond the mathematics of stability, there is an interpretative content and a "positive heuristic" attached to substitutability. It is carried first by the idea that substitutes are good for stability (Hicks), and then, following Samuelson's criticism, by the idea that income effects should not be so distributed as to disturb the stabilizing properties of symmetric systems, which implies in turn considering that net substitution will dominate over income effects. Hence, even if a theory of aggregate income effects is needed, most of the arguments and the dynamics of research will take gross substitutability as the starting point for further results.
This view was so widely shared by the researchers involved in GET at the time that Newman could write, more than a decade later:
A good deal of the work on the analysis of stability has been directed towards establishing intuitively reasonable - or at least readily comprehensible - conditions on the elements of A, that will ensure stability. (Newman, 1959, 3)

Research in this direction had been under way since the beginning of the 1950s. It was notably explored by Morishima (1952), who formulated a stability theorem for a model into which some complementarity is introduced. A Morishima system is characterized by a complementarity-substitutability chain hypothesis (CS): substitutes of substitutes (and complements of complements) are substitutes, and substitutes of complements (and complements of substitutes) are complements.
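The (CS) hypothesis amounts to a restriction on the sign pattern of the cross-price terms of the Jacobian: goods can be split into two groups, with substitutes within each group and complements across groups, or equivalently there exist signs $s_i = \pm 1$ such that $s_i s_j z_{ij} \ge 0$ for all $i \neq j$. The sketch below is my own illustration, assuming this standard characterization of a Morishima sign pattern; it merely checks the pattern by propagating signs along chains of non-zero cross effects and says nothing about stability itself.

```python
import numpy as np
from collections import deque

def is_morishima(Z, tol=1e-12):
    """Check the (CS) chain condition on the off-diagonal sign pattern:
    there exist signs s_i = +/-1 with s_i * s_j * Z[i, j] >= 0 for i != j."""
    n = Z.shape[0]
    sign = [0] * n                           # 0 = unassigned, else +1 / -1
    for start in range(n):
        if sign[start]:
            continue
        sign[start] = 1
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if i == j or abs(Z[i, j]) < tol:
                    continue
                want = sign[i] * (1 if Z[i, j] > 0 else -1)
                if sign[j] == 0:
                    sign[j] = want
                    queue.append(j)
                elif sign[j] != want:
                    return False             # inconsistent chain of signs
    return True

# Goods {0,1} and {2,3}: substitutes within each group, complements across.
Z = np.array([[-1.0,  0.3, -0.2, -0.4],
              [ 0.5, -1.0, -0.1, -0.3],
              [-0.2, -0.6, -1.0,  0.7],
              [-0.1, -0.2,  0.4, -1.0]])
print(is_morishima(Z))   # True
```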
Morishima derives a number of theorems from such a system, notably that the dynamic stability conditions are equivalent to the Hicksian conditions for perfect stability. This result, in turn, was important for establishing the interest of Hicksian stability conditions despite Samuelson's criticism. However, Morishima's analysis introduced new, unexpected constraints regarding the choice of a numéraire and would lead to further comments in the following decades. By the end of the 1950s, the idea that stability should be analyzed through the properties of the matrix of derivatives of the excess demand functions with respect to prices was well established. Moreover, research focused on systems implying gross substitutability or on systems whose structure could be described through properties of $[z_{ij}]$ involving substitutability. The existence and optimality theorems of the 1950s put stability issues in the background for a few years, only for them to resurface at the end of the 1950s in a more serious form. Now, the question is: what would be left of this analysis of stability after the axiomatic turn in general equilibrium theory, once the issue of global stability became central?
3 From "gross-substitutability" to instability examples (1958)(1959)(1960)(1961)(1962)(1963) The turn of the 1960s represents the heyday of the research on the stability of a system of interdependent markets connected through a simple dynamics of price adjustments, the Walrasian tâtonnement. At this moment in time, the use of the langage of substitutablity is structuring research towards theorems of stability. Morishima even proclaimed that "Professor Hicks is the pioneer who prepared the way to a new economic territory-a system in which all goods are gross substitutes for each other" (Morishima, 1960, 195). Indeed, work on stability of general equilibrium was rather limited in the 1950s, researchers being more focussed on existence and welfare theorems, and there was a sudden boom by the very end of the 1950s. The time span between 1958 and 1963 is fundamental both for the structuring of the research on stability and the importance of the language of subtitutability as the main interpretative device to think about stability. In this section, I would like to put to the foreground the mode of development of general equilibrium analysis after Arrow-Debreu-McKenzie theorems of existence. I will begin by enhancing the intuitive privilege that was attributed to the stability hypothesis, and as a consequence, the interest for the gross-substitutability assumption (2.1. "Gross substitutability" as a reference assumption). Then, I will focus on instability examples and the way the results have been received by theoreticians (2.2 Scarf and Gale's counter-examples). The discussion of alternative sufficient conditions of stability are then discussed (2.3 Gross Substitutability, Diagonal Dominance and WARP)
"Gross substitutability" as a reference assumption
Most of the work on stability in the fifties and sixties is centered on the hypothesis of gross substitutability. It is a sufficient hypothesis for uniqueness of equilibrium (Arrow and Hurwicz, 1958). It is also the hypothesis with which Arrow, Block and Hurwicz established the global stability of the tâtonnement in 1959. This result was presented as a confirmation of the importance of substitution among goods for stability. Let us make a short digression on the status of concepts and hypotheses within the axiomatic phase of general equilibrium theory. Axiomatization is usually at odds with the interpretative content of concepts and hypotheses (Debreu, 1986; Ingrao and Israel, 1990). The question is thus whether the heuristic properties of substitutability should remain relevant in this context. The answer is yes. Leaving aside the relevance of the tâtonnement as a descriptive tool, the fact is that most of the theoreticians - I mean those who were interested in the work on stability - tended to consider that the concepts and assumptions used should have some heuristic properties and descriptive qualities. This aspect of the work on stability, compared with other fields of general equilibrium theory, is seldom underlined (Hands, 2016). In any case, it is certainly a key to studying the development of stability analysis and to understanding the reactions of the main protagonists. Otherwise stated, everything happens as if the descriptive content of general equilibrium lay not only at the level of the dynamic process but also at the level of the properties of the excess demand functions delivering stability. In this sense, substitutability plays a heuristic role in stability analysis, in conformity with Hicks's ideas. It should also be mentioned that some theoreticians have always privileged a use of axiomatics bounded by the constraint of providing interpretable theorems. This is exemplified in Arrow and Hahn (1971), and it can be traced back to Abraham Wald (1936). The assumption of gross substitutability, as such, could appear to any economist with a solid background in mathematics as the most natural assumption for obtaining global or local stability of the price adjustment process. Indeed, GS appears in many studies on dynamic stability in the late 1950s. Let us mention Hahn (1958), Negishi (1958), McKenzie (1958), Arrow's "Some remarks on the equilibria of economic systems", and the now classical Arrow and Hurwicz (1958) and Arrow, Block and Hurwicz (1959) articles. 9 Arrow and Hurwicz (1958) testifies to the optimistic flavor of the time.
Under GS, homogeneity of the demand functions and Walras's Law, they show that the tâtonnement process is globally stable. The proof makes use of Lyapunov's second method for the study of dynamical systems. In this article, they also show that certain kinds of complementarity relations are logically impossible within the framework of a Walrasian economy. This is taken to reduce the scope of possible instability:
[The] theorem ... suggests the possibility that complementarity situations which might upset stability may be incompatible with the basic assumptions of the competitive process. (Arrow and Hurwicz, 1958, 550)

At the same time, the gross substitutability assumption is seen as unrealistic. But gross substitutability is, after all, nothing more than a sufficient condition for stability, and the field of investigation seems open for less stringent hypotheses introducing complementarity. So, during the axiomatic turn, there is a slight epistemological shift in stability analysis. On the one hand, there is still the Hicksian idea that substitutes are good for stability, but it is quite clear that substitutes, as opposed to complementary goods, will not do all the work, and that the task will not be so easy to achieve. The fact is that a generalization of the gross substitutability assumption (weak gross substitutability) was not that easy to obtain. On the other hand, it is also clear that substitutability is still regarded as the most important concept for expressing stability conditions and describing the structural properties of an economy. Neither diagonal dominance nor the weak axiom of revealed preference in the aggregate attracted that much interest (see below).
Through this theorem, Arrow, Block and Hurwicz were confirming the importance of GS for global stability after other results of local stability (Hahn, 1958; Negishi, 1958). Arrow and Hurwicz (1958) had already provided a proof of this theorem in a three-good economy. Even if GS could appear as an ad hoc assumption, given its strong mathematical implications for the price path, it was nevertheless regarded as a central assumption and a relevant and promising starting point for further inquiries. Other aspects of the Arrow, Block and Hurwicz contribution were regarded as strong results, notably the fact that global stability was obtained both in an economy with and without a numéraire (the normalized and non-normalized cases). Moreover, global stability did not require the price adjustment process to be linear: any sign-preserving adjustment was accepted.
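The flavor of the result can be conveyed by a small simulation - mine, for illustration only. In a pure exchange economy with Cobb-Douglas consumers, aggregate excess demands satisfy gross substitutability, and the tâtonnement converges from an arbitrary positive starting price vector; the sketch below integrates $\dot{p} = z(p)$ with a crude Euler step.

```python
import numpy as np

rng = np.random.default_rng(1)
n_goods, n_agents = 4, 5
alpha = rng.dirichlet(np.ones(n_goods), size=n_agents)   # Cobb-Douglas shares
w = rng.uniform(0.5, 2.0, (n_agents, n_goods))            # endowments

def excess_demand(p):
    income = w @ p                                  # p . w_h for each agent
    demand = alpha * income[:, None] / p            # x_hi = a_hi * m_h / p_i
    return demand.sum(axis=0) - w.sum(axis=0)

# Walrasian tatonnement dp/dt = z(p), integrated with a small Euler step.
p = rng.uniform(0.2, 5.0, n_goods)                  # arbitrary starting prices
for _ in range(100_000):
    p = p + 1e-3 * excess_demand(p)

print("relative prices :", p / p[0])
print("residual excess demands:", excess_demand(p))   # all roughly zero
```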
On this occasion, we see that various constraints are likely to operate on the judgment about the quality or importance of a result. It turns out that the superiority of having a theorem independent of the choice of a numéraire could appear justified by the search for the greatest generality, while its genuine significance in terms of interpretative content was not discussed or debated.
As a consequence of this optimism and of the heuristic content of substitutability, one can understand the situation of the work on stability at the end of the 1950s. The idea that it would be necessary to find stable systems with complementarity was clearly identified. Enthoven and Arrow (1956) show that if A is stable, then DA is stable if and only if the diagonal elements of D are all positive. They address the limits of such a model:
In any actual economy, however, we must be prepared to find substantial, asymmetrical income effects and a goodly sprinkling of gross complementarity. It is desirable, therefore, to try to find other classes of matrices about which useful statements about stability can be made. (Enthoven and Arrow, 1956, 453)

For all that, due to its structuring role, there is a kind of benevolence towards the gross substitutability assumption. The burden falls on the unsatisfied to prove that gross substitutability is not appealing, and that the concept of substitutability may not be enough to study stability. What makes this story interesting is that counter-examples of unstable economies would arrive a few months later.
Scarf and Gale's counter-examples
The two important contributions of Scarf (1960) and Gale (1963) would shift the debate on stability. I will not go into the details of their constructions here. To come straight to the point of my analysis, they construct general equilibrium models with three goods, based on individually rational agents, such that the tâtonnement process of the economy does not converge to the unique equilibrium. Scarf's example involves complementarities between pairs of goods and asymmetrical income effects. Scarf comments on his results, underlining that instability comes from pathological excess demand functions. Scarf's attitude towards this result is ambiguous. On the one hand, he asserts that "Though it is difficult to characterise precisely those markets which are unstable, it seems clear that instability is a relatively common phenomenon" (Scarf, 1960, 160). On the other hand, he gives some possible objections to the empirical relevance of his model:

As a final interpretation, it might be argued that the types and diversities of complementarities exhibited in this paper do not appear in reality, and that only relatively similar utility functions should be postulated, and also that some restrictions should be placed on the distribution of initial holdings. This view may be substantiated by the known fact that if all the individuals are identical in both their utility functions and initial holdings, then global stability obtains. (Scarf, 1960, 160-161)

Scarf's comment shows negatively how the language of substitutability makes sense for interpreting results, be they positive or negative. The presence of complementarity in the system is taken as a guarantee of the descriptive relevance of the model. And Scarf goes even further in suggesting that complementarity may be a cause of instability while a sufficient degree of substitutability may ensure stability.
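Scarf's first example is simple enough to simulate directly. In that economy, agent $i$ owns one unit of good $i$ and consumes goods $i$ and $i+1$ in fixed proportions (Leontief preferences), so that the aggregate excess demand for good $i$ is $z_i(p) = p_i/(p_i+p_{i+1}) + p_{i-1}/(p_{i-1}+p_i) - 1$, indices taken mod 3. The sketch below is my own rendering of that example: it integrates the tâtonnement and shows that the price path keeps circling the unique (normalized) equilibrium $p = (1, 1, 1)$ instead of approaching it.

```python
import numpy as np

def excess_demand(p):
    """Scarf's three-good, three-agent economy: agent i owns one unit of
    good i and has Leontief preferences min(x_i, x_{i+1})."""
    z = np.empty(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i - 1) % 3
        z[i] = p[i] / (p[i] + p[j]) + p[k] / (p[k] + p[i]) - 1.0
    return z

p = np.array([1.0, 0.8, 1.3])          # start away from the equilibrium (1,1,1)
dist = []
for step in range(400_000):
    p = p + 1e-4 * excess_demand(p)    # tatonnement, small Euler step
    if step % 100 == 0:
        dist.append(np.linalg.norm(p / p.sum() - 1.0 / 3.0))

# The normalized distance to the unique equilibrium never goes to zero:
print(f"start {dist[0]:.3f}   min {min(dist):.3f}   end {dist[-1]:.3f}")
```

The minimum distance over the whole run stays bounded away from zero: the tâtonnement orbits around the equilibrium (both $\sum_i p_i^2$ and $p_1 p_2 p_3$ are conserved along the exact flow), which is precisely Scarf's global instability result.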
As for Gale (1963, 8), he would insist on Giffen goods to explain the instability examples obtained. In line with this tendency to entrust substitutability with explanatory power, the same kind of interpretation can be found in Negishi (1962) and also in Quirk and Saposnik (1968, 191), who are of the opinion that the stability of a tâtonnement "is closely tied up with the absence of strongly inferior goods".
The different reasons invoked to comment on the instability examples should not be overplayed. They also stem from the tendency to disconnect analytical cases. The appearance of a Giffen effect is linked to situations in which substitution is difficult, and it is not independent of the specific situation of some agents, in terms of initial endowments, compared with other agents in the economy.
Nevertheless, the Scarf and Gale examples are received with a kind of perplexity. Everything happens as if their models were singular models, and thus as if they did not affect the general idea that systems including enough substitutability may be stable. At the same time, it is now felt urgent to find less stringent conditions, including complementarity, that guarantee stability. At this moment in time, the interpretative content of substitutability is at stake. With Scarf's and Gale's examples, the situation is reversed. The suspicion is now clearly on stable systems, and it is the task of all those who have a positive a priori in favor of stability to produce examples of stable systems including complementarity relations. In fact, Scarf's results make it possible to question the heuristic content of substitutability.
Actually, by identifying many possible sources of instability - relating to the spread of initial endowments, to the variety of preferences, and to their implications for demand - the interpretative and descriptive content of substitutability loses some ground. It no longer seems possible to express the characteristics of an economy, and its stability properties, with substitutes and complements alone. Nevertheless, substitutability remains the main concept with which it is thinkable to search for stability conditions. As a proof of this, it is remarkable that neither the diagonal dominance hypothesis nor the weak axiom of revealed preference would be serious candidates for serving as a starting point for thinking about stability, at least in those years.
Discussing Diagonal Dominance and WARP in the aggregate
The general idea that we uphold here is of a methodological nature. Whereas some authors would tend to apply an external set of criteria for the success or failure of a research program, we would like to rely on some complementary criteria to appraise the history of the research done on stability. Research on stability was structured around a set of soft constraints in terms of the methods and tools to be used, constraints regarded as more fundamentally in tune with the spirit of the general equilibrium research program. To name just a few, such soft constraints concern the choice of the price adjustment process, the interpretative and descriptive potential of conditions for stability, the relative importance of global vs local stability results, the search for results that are independent of the choice of a numéraire, and a tendency to prefer models implying uniqueness of equilibrium. All these constraints structure the expectations and valuations of the results obtained. So far, we have seen that until the beginning of the 1970s, substitutability was able to meet a number of constraints and to offer a good starting point for a descriptive interpretation of GET. We now have to discuss in more depth why substitutability was privileged compared with the assumptions of diagonal dominance and the weak axiom of revealed preference in the aggregate.
The research programme on the stability of general equilibrium is very specific within GET - with consequences also for uniqueness and comparative statics - because it is disconnected from direct microfoundations of its statements. Sets of conditions proposed as sufficient for stability are related to market properties, and the study of the microeconomic foundations of those properties is postponed. Meanwhile, stability conditions are valued according to their heuristic content or plausibility. Actually, no alternative condition on the properties of excess demands appeared as a promising alternative before the end of the 1950s. One such alternative is the Diagonal Dominance condition. It states that the terms of the Jacobian matrix JZ are such that (DD):
$$z_{ii} < 0 \quad \text{and} \quad |z_{jj}| > \sum_{\substack{i=1 \\ i \neq j}}^{n} |z_{ij}|, \qquad j = 1, 2, \ldots, n \qquad (5)$$
This condition first appeared in Newman (1959) but was independently explored by Arrow and Hahn. 10 It states that the effect of a price change of good i on the excess demand for good i must be negative and greater in absolute value than the sum of the absolute values of the indirect effects of the same variation in the price of good i on the excess demands of all other goods. 11
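Condition (DD), as written in (5), is easy to check mechanically. The sketch below is my own illustration on an arbitrary numerical matrix; it verifies negative own-price effects and the column-wise dominance of the diagonal stated above.

```python
import numpy as np

def has_dominant_diagonal(Z):
    """Condition (DD): negative own-price effects, and each diagonal entry
    dominates, in absolute value, the off-diagonal entries of its column."""
    n = Z.shape[0]
    own_negative = bool(np.all(np.diag(Z) < 0))
    dominance = all(abs(Z[j, j]) > sum(abs(Z[i, j]) for i in range(n) if i != j)
                    for j in range(n))
    return own_negative and dominance

Z = np.array([[-3.0,  0.5,  1.0],
              [ 1.0, -2.0,  0.4],
              [ 0.5,  0.8, -2.5]])
print(has_dominant_diagonal(Z))   # True
```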
In some sense, DD has much to recommend it. It is less stringent than gross substitutability, because (GS) implies (DD). But in practice, no utility function has been found that implies diagonal dominance without also implying gross substitutability. Moreover, only certain forms of diagonal dominance guarantee stability. It therefore seems easier to provide less stringent conditions by taking gross substitutability as a starting point that can be amended and weakened than by starting from diagonal dominance. Other reasons can also explain why DD was not taken as an interesting basis for research on stability in those years. Actually, one can figure out the economic content of DD, expressing that the own-price effect on a market dominates the whole set of indirect effects from other prices. On second thought, it turns out that this idea of domination involves quantitative properties that were better avoided. At least, it is in those terms that general equilibrium theorists conceived of the search for general theorems. In this respect, as long as one could hope to find satisfactory results with qualitative assumptions only, quantitative constraints, interpretable as they may be, were not favored. Moreover, it does not seem easy to use DD as a starting point in the search for less stringent assumptions. For instance, Arrow and Hahn would point out that it has a "Marshallian flavor" (Arrow and Hahn, 1971, 242) and that it does not carry enough heuristic power. Such views on DD would change later on, as the set of constraints weakened.
What about WARP and stability? It had been known since Wald that WARP in the aggregate is a sufficient condition for the uniqueness of equilibrium. Arrow and Hurwicz (1958) showed that WARP is a sufficient condition for local stability. Actually, WARP is a necessary and sufficient condition for the uniqueness of equilibrium in certain cases. GS thus implies WARP, with the advantage that the GS property is preserved through aggregation while this is not the case for WARP (with more than three goods). Actually, these relationships would not be discussed in the 1960s; hence DD appeared as the only alternative starting point for discussing stability, even though research on stability was somehow disconnected from the immediate search for microfoundations of the assumptions made on the properties of excess demand functions.
10 "Hahn has informed me that he and Kenneth Arrow have used it in some as yet unpublished work. It is common in the mathematical literature." (Newman, 1959, 4)

11 An alternative statement of (DD), say (DD'), is that the own-price effect $z_{ii}$ be greater in absolute value than the sum of all cross-price effects from the variation of the prices of the other goods:
$$|z_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |z_{ij}|, \qquad i = 1, 2, \ldots, n.$$
A Jacobian JZ satisfying both DD and DD' has a strongly dominant diagonal. Newman also mentions another set of stability conditions based on (DD) - a quasi-dominant main diagonal - that was proposed by Hahn and Solow.

4 The end of a research programme?
So far, I have indicated how a general framework for interpreting the work on the stability of an exchange economy was constructed. As was seen in the first section, the idea of searching for sufficient conditions introducing complementarity pre-existed the Arrow, Block and Hurwicz result and the Scarf and Gale counter-examples. In this section, I want to focus on two different kinds of obstacles that were put in the way. First, from an internal point of view, all the attempts made to generalize the gross substitutability assumption yielded few results. What is clear from Scarf and Gale is that it was no longer possible to introduce complementarity arbitrarily (4.1 The impossible generalization). From an external point of view, then, some work in the seventies and eighties radically questioned the research programme as it had been formulated by Walras (4.2 Through the looking-glass, and what Sonnenschein, Mantel, Debreu, Smale and others found there).
The impossible generalization
This is an important point for my thesis. The period immediately following the Scarf results shows that researchers had little hope of obtaining much better than the Arrow, Block and Hurwicz (1959) theorem. It is the moment when the heuristic of substitutability failed and faded away within a decade. It is striking, for instance, how this set of conditions is treated in Arrow and Hahn's (1971) General Competitive Analysis, a book which was (and in many respects still is) the state of the art of GET.
Actually, as mentioned before, the hope of finding stability results with complementarity relations had been on the agenda from 1945 on (Metzler), and it had just been confirmed by the results on global stability. From the beginning of the fifties, Morishima was working on this agenda (Morishima, 1952, 1954, 1960, 1970). Morishima's idea was to introduce complementarity relations between certain goods. He thus proposed an economy whose excess demands are such that goods can be grouped together so that all substitutes of substitutes are substitutes to each other, and so that complements of complements are likewise substitutes, in line with the (CS) chain hypothesis. In the same spirit, McKenzie (1958) established the dynamic stability of an exchange economy in which certain sums of the partial derivatives of the excess demands with respect to prices are positive, which allows a certain amount of complementarity to be introduced into the system. 12

The above discussion of the relative merits of GS, DD and WARP is thus conditional on the kind of constraints that the theoretician takes for granted. This would certainly have an impact on further research (section 4). It was already apparent here and there in the 1950s and 1960s. For instance, McKenzie (1958) offers one of the first studies in which GS implies global stability of the unique equilibrium when there is no numéraire in the model. He also provides a model with a numéraire which makes it possible to consider the stability of a system in which certain weighted sums of the partial derivatives of excess demands with respect to prices are positive, a "natural generalization". McKenzie's comment shows that the descriptive potential of an assumption is linked with the choice of constraints. Indeed, the case in which some complementarity is introduced into the model leads only to a local stability result and acknowledges the multiplicity of equilibria, a situation which he finds descriptively adequate, i.e. in accordance with his own ideas about the stylized facts: "In this case, one must be content with a local stability theorem, but one hardly need apologize for that. Global stability is not to be expected in general" (McKenzie, 1958, 606). Finally, Nikaidô proposed the generalized gross substitutability assumption, i.e. that the sum of each pair of terms symmetric about the diagonal of the Jacobian be positive. In such a system, if tea is a gross complement to sugar, then sugar must be a gross substitute for tea. Some remarks on all these developments are in order. First, all the results obtained have strong limitations relative to the programme of general equilibrium theory. They are not independent of the choice of the numéraire good and they are valid only locally. For example, the Morishima case was shown to be incompatible with a Walrasian economy, because of the properties of the numéraire commodity. That stability should be invariant under a change of numéraire seemed "reasonable" (Newman, 1959, 4).
Ohyama (1972) added a condition on the substitution properties of the numéraire with respect to the other goods to ensure stability. Second, most of the results I have mentioned rely on quantitative constraints on excess demand functions, in the sense that they require comparing the relative strength of partial derivatives. From this point of view, the diagonal dominance hypothesis goes in the same direction.
All these limitations illustrate the doubts that arose regarding the hope of finding a true generalization of the gross substitutability hypothesis. Indeed, this kind of quantitative constraint is something general equilibrium theorists would prefer to dispense with - at least, this general view was not debated. Meanwhile, the heuristic of substitutability is shrinking. The question now is to discuss the relevance of quantitative and structural restrictions on excess demand functions. The change in the spirit and state of mind of the theoreticians can be clearly felt. To give just one quotation, from Quirk (1970), who focuses specifically on the limits of a purely qualitative approach to GET: "In contrast to the Arrow-Hurwicz results, here we do not prove instability but instead show that stability cannot be proved from the qualitative properties of the competitive model alone, ... except in the gross substitute case" (Quirk, 1970, 358). It is a very clear way of renouncing the establishment of general properties compatible with stability. This quite naturally reduces the analytical appeal of substitutability as a single comprehensive tool for dealing with stability issues (see also Ohyama, 1972, 202). Finally, from this moment on, theoreticians had realized to what degree the gross substitutability assumption was specific, as the only qualitative hypothesis on excess demand functions guaranteeing stability of a tâtonnement process. Of course, it may be that some mathematical economists were perfectly aware that GS is too much of an ad hoc mathematical assumption for obtaining stability, but still its specificity as a qualitative assumption was much better acknowledged by the end of the 1960s and the beginning of the 1970s.
A word is in order regarding the presentation of the sufficient conditions for stability in Arrow and Hahn's (1971) General Competitive Analysis. The presentation of stability results in surveys such as Newman (1959) and Negishi (1962) clearly transmitted the view of a progressive research program, with a need to understand the links between different sets of conditions. Yet Negishi introduced a more temperate view, both presenting the GS assumption as concentrating the essence of the knowledge on stability and pointing out that, owing to the instability examples, theoreticians would do better to concentrate on alternative adjustment processes (such as non-tâtonnement processes). 13

To sum up, at the beginning of the seventies, the work on stability gives a very pessimistic, even negative, answer to the agenda originally formulated by Metzler and then by Arrow, Block and Hurwicz. Two kinds of results would come and further erode any interest in this kind of work: the well-known Sonnenschein-Mantel-Debreu theorem on the one hand, and the Smale-Saari-Simon results on the other. Already in the thirties it was known that some properties of individual demand behavior would not, in general, be preserved at the aggregate level (see for example Schultz, 1938, and Hicks, 1939). Clearly, there was a gap between weak restrictions on the demand side and stringent sufficient conditions for stability. So, while the work on stability was progressing only very slowly, and not with the results that were expected, a group of theoreticians was engaged in taking the problem from the other side, that is, from the hypothesis of individual maximising behaviour:
"Beyond Walras' Identity and Continuity, that literature makes no use of the fact that community demand is derived by summing the maximizing actions of agents" (Sonnenschein, 1973, 353) If it is not possible to demonstrate that an economic system with complementarity relations among markets is stable, is it not possible to show that any general equilibrium system based on rational agents exhibits some properties regarding the excess demand functions. This would be at least a way to "measure" the gap between what the logic of GET gives us and what we expect from it in order to arrive at stability theorems. The answer to this question is well known. It is a series of negative results known as Sonnenschein-Mantel-Debreu theorems or results 14 . Market excess demand generated by an arbitrary spread of preferences and initial endowments will exhibit no other properties than Walras Law and the Homogeneity of 13 Negishi's reaction to Scarf examples is interesting in its way to put the emphasis on the choice of the "computing device" and not on the interpretative content of stability conditions. This is another instance of the fact that the proper balance between different attitudes regarding the research program was not discussed and can only be grasped here and there from passing remarks: "We must admit that the tâtonnement process is not perfectly reliable as a computing device to solve the system of equations for general economic equilibrium. It is possible to interprete these instability examples as showing that the difficulty is essentially due to the assumption of tâtonnement (no trade out of equilibrium) and to conclude that the tâtonnement process does not provide a correct representation of the dynamics of markets." (Negishi (1962, 658-9))
14 SMD theorems are named after a series of articles published in 1972-1974. degree zero of excess demands relative to prices. Otherwise stated, given an arbitrary set of excess demands, one can always construct an economy that will produce those excess demands. The question raised by Sonnenschein, Mantel, Debreu and others goes against the usual stream of investigation concerning stability. But it is the most natural stream in terms of the individualistic methodological foundations of the general equilibrium program. Nevertheless, this result raised some perplexity from the field of econometrics. After all, the distribution of endowments and of preferences allowing for such arbitrary excess demand in the aggregate may well be as much (or even more) unrealistic as the ones generating a representative agent [START_REF] Deaton | Models and projections of demand in post-war Britain[END_REF]. [START_REF] Kirman | Market excess demand in exchange economies with identical preferences and collinear endowments[END_REF] showed that the class of excess demands would not be restrained even if the agents had the same preference relations and co-linear endowments. To improve further on the constraints would mean to construct a representative agent. So, the SMD result would imply that Giffen goods are quite "normal" goods in a general equilibrium framework, and following Scarf and Gale conclusions, "instability" would be a common feature of economic systems.
The SMD theorem thus further reduces the relevance of quantitative restrictions that would yield stability. The change in the spirit of the economists has been portrayed by Mantel:
"Another field in which new answers are obtained is that of stability of multimarket equilibrium. It is not so long ago that the optimistic view that the usual price adjustment process for competitive economies is, as a rule, stable, could be found-an outstanding representative is that of [START_REF] Arrow | On the stability of the competitive equilibrium[END_REF]. Counterexamples with economies with a single unstable equilibrium by [START_REF] Scarf | Some examples of global instability of the competitive equilibrium[END_REF] and [START_REF] Gale | A note on global instability of competitive equilibrium[END_REF] had a sobering effect, without destroying the impression that the competitive pricing processes show some kind of inherent stability. Here the question arises whether such counterexamples are likely, or whether they are just unlikely exceptions" (Mantel et al., 1977, 112) After the SMD results, Scarf and Gale counterexamples could no longer be regarded as improbable, if the excess demand should have arbitrary properties. But from a historical point of view, one must keep in mind that there was a twelve years gap between the reception of the SMD results and the strengthening of thsee results by [START_REF] Kirman | Market excess demand in exchange economies with identical preferences and collinear endowments[END_REF]. What happened during that time span is also very fruitful for our inquiry. For all those who were discouraged by the turn of events, for those who had only a poor faith in the possibility to find satisfactory theorems, the Scarf counterexamples were a starting point for something else. We have seen that Scarf himself felt uncomfortable with the instability result, and that he felt that some disturbing cause of instability may have been arbitrarily introduced in the model. This was the starting point for an inquiry into dynamic systems and algorithmic computation of equilibrium [START_REF] Scarf | The computation of economic equilibria[END_REF]. In this field of research, Steve Smale endeavored to cope with the question of stability. His purely mathematical look at the subject kept the interpretative content outside, and he readily understood that in general equilibrium "complexity keeps us from analysing very far ahead" (Smale,976c,290). Rather than concentrating his reproaches on the descriptive content of the tâtonnement process and on the stability conditions that were found, Smale tackles another question, quite different from that of Sonnenschein, Mantel and Debreu. If equilibrium exists, how is this equilibrium reached? After [START_REF] Scarf | The computation of economic equilibria[END_REF], [START_REF] Smale | A convergent process of price adjustment and global newton methods[END_REF] will found a dynamic process much more complex than Walrasian tâtonnement which allows finding the equilibrium, for any arbitrary structure of the excess demand functions. This process, the Global Newton method, is a generalisation of a classical algorithm of computation of equilibrium. In this process, the variations in the prices on each market dp i dt will not depend solely on the sign of the excess demand z i (p) on this market, but also on the excess demands on other markets. This dynamic process is Dz(p) dp dt = -λz(p) with λ having the same sign as the determinant of the Jacobian. 
Smale's shift in the way of attacking the issue of stability is of interest when confronted with the constraints that general equilibrium theorists had put on the research program, focusing on the Walrasian tâtonnement as a neutral dynamics whose advantage came essentially from the mathematical simplicity of handling it. Hahn's (1982) reaction to this kind of process is embarrassed. Indeed, Smale's process shifts the general equilibrium program. What can be the meaning of a dynamic process in which the behaviour of the price on each market depends on the situation on every other market? Hahn has no answer to give. The fundamental problem is that this process is very demanding in terms of information. While the Walrasian auctioneer needs to know nothing other than excess demands at a given price vector, the fictional auctioneer of the Global Newton method has to know the qualitative properties of each excess demand. Saari and Simon (1978) established that this amount of information is the price to be paid for a computational method independent of the sign of the excess demands. And it is precisely this kind of information that the use of a Walrasian tâtonnement dynamics aimed at ignoring. With the Sonnenschein-Mantel-Debreu and Kirman-Koch theorems on the one hand and with the Smale, Saari and Simon results on the other, the stability research program, in its original form, had collapsed.
It is not the purpose of the present study to recount all the escapes from SMD in detail (see Lenfant's survey, "L'équilibre général depuis Sonnenschein, Mantel et Debreu"). As far as substitutability is concerned, it is worth noting that it still appears here and there.
A first consequence of the Scarf-Smale escape from stability issues is that the concept of substitutability becomes at best too weak to serve as a descriptive basis for the properties of stable systems, at least as long as the search for global stability theorems is at stake. Slowly, the condensed structure of constraints pertaining to the research program on stability disaggregated. For instance, the SMD results destroyed the prospect of finding reasonable conditions for uniqueness and global stability. Hence, research was reoriented towards different perspectives: either concentrating on the algorithms that permit the calculation of equilibria (this is the Scarf perspective) or concentrating on local stability results in various frameworks. It is not the purpose of this article to discuss the various ways of reacting to SMD. Work on the stability of a Walrasian-type price adjustment process has led to some new results regarding WARP and DD in relation to GS. According to Hahn (1982), even though GS implies DD, there is practically no example of utility functions satisfying DD but not GS. Keenan shows that DD is a sufficient condition for stability of an unnormalized tâtonnement process.
The effect of the SMD results on the research program on stability cannot be examined independently of their broader impact on the theory of general equilibrium. Once the prospect of obtaining uniqueness vanishes, the interest in local stability comes back to the forefront. Once the idea that money is a specific input into GET which cannot be dealt with endogenously is well accepted, the idea of appraising results that are not independent of the choice of the numéraire may attract more attention. Whatever the global effect of SMD, one still finds research focusing on substitutability as a stabilizing phenomenon. For instance, Keenan (2001) has established that the standard conditions for global stability of the WT (either GS, DD, or that the Jacobian is negative semidefinite) "can be translated into ones that need be imposed only on the aggregate substitution matrix" (Keenan, 2001, 317), that is, into conditions that depend exclusively on substitution effects: "Thus for each condition on the matrix of total price effects implying global stability, there is a corresponding one on only the matrix of compensated price effects which also implies global stability" (Keenan, 2001, 317). Keenan's agenda may seem dubious in the way it treats substitutability as a concept sufficient to carry all the relevant information for understanding stability. At least, it can be taken as a remnant of the heyday of stability theory. 15 To us, it is revealing of the still lively importance of the concept of substitutability as a heuristic device for discussing stability. Following a quite different agenda, Grandmont (1992) established conditions on the interdependence of preferences within the economic system (increasing heterogeneity), with the result that sufficient heterogeneity leads to GS for a growing set of initial values of the price vector and to WARP in the limit, thus guaranteeing stability of the WT. But a different view could be held on the basis of a more tractable and applied approach to GET, in line with Scarf's agenda, such as the one upheld by Kehoe (1992). Kehoe argues that in production economies the GS assumption loses much of its interest because there are cases of multiple equilibria. He focuses on the number of equilibria rather than on stability issues. In this framework, it is possible to construct economies with Cobb-Douglas consumers (hence well-behaved GS behaviors) and yet a production technology that generates several equilibria. In contrast, WARP (in the aggregate) implies uniqueness even in production economies (Wald, 1936). Hence, in production economies, GS does not imply WARP.
Final remarks
In this paper, my aim was to put to the foreground the uses of the concept of substitutability in general equilibrium theory. Substitutability, as the main concept used to describe the qualitative properties of an economic system, was expected to provide good interpretative properties as well: it was hoped that substitutability would be a sufficient way to express general conditions under which the stability of the tâtonnement would be guaranteed. I have interpreted this very general idea as a guiding principle for the research on stability. It was thought that substitutes and complements should carry enough information to formulate "reasonable" or "hardly credible" stability conditions. The point was then to see how this guiding principle, this positive heuristic, has been affected by the mathematical results that were found, and how it came to be deprived of its interpretative content. Of course, I do not pretend that substitutability was the only concept implied in the elaboration of the research programme. It is quite clear from my presentation that the formalisation of the Walrasian tâtonnement and the reflection on quantitative constraints have also played a role in this story. They formed a complex system of rules to be followed and were themselves embedded in different representations of the purpose of GET. The present article does not pretend to have identified the one single way of interpreting the history of this research program and its connections with other aspects of GET. It has highlighted that within the development of GET, it is possible to identify descriptive heuristics that seem to have played a role in structuring the research agenda and the interpretation of the results. But in the final analysis, the concept of substitutability has served as a criterion for evaluating the relevance of most of the results and for appraising their theoretical consequences on the research programme. It has been a tool for rationalising the path followed by stability analysis. From a methodological point of view, a conclusion that can be drawn from this study of stability is that the weakening of a research program and its reformulation within the framework of a purely mathematical theory do not depend on a unique result, be it a negative result. The matter depends more pragmatically on the accumulation of many negative or weak results that come to be interpreted as a bundle of results indicating that something else must be done and that the programme must be amended. And it might be that the Sonnenschein-Mantel-Debreu result was not the most important result with regard to this amendment. In this respect, no single result was the most important simply because it was the most unsettling; the SMD result makes sense to us when connected to other results and to the general principles that dominated research on GET in the 1960s and 1970s. This overview, it is hoped, opens up more fundamentally a new representation of the development of GET based on simulations. Thus, a number of disruptive shifts from the original research program have changed the whole understanding of the tenets of GET and of the role played by different assumptions. The evolution of theoreticians' involvement with the concept of substitutability offers, we think, a fruitful perspective on the transformations of a complex and intricate research topic such as GET.
4.2 Through the looking glass and what Sonnenschein, Mantel, Debreu, Smale and others found there.
[START_REF] Walras | Eléments d'économie politique pure ou théorie de la richesse sociale (elements of pure economics, or the theory of social wealth)[END_REF], after discussing the tâtonnement when the price vector is not at equilibrium, applies the same technique to discussing the effects of a simple change in the parameters of the model, e.g. a change in the initial endowment of one good for one agent. The whole set of results (a mix of stability and comparative statics) is precisely what Walras calls the "Law of supply and demand"
Actually, the mathematical analysis of stability was already published in 1937 in the booklet presenting the appendix of Value and capital, Théorie mathématique de la valeur en régime de libre concurrence[START_REF] Hicks | Théorie mathématique de la valeur en régime de libre concurrence[END_REF].
[START_REF] Hicks | A reconsideration of the theory of value. part i[END_REF] provided the state of the art of the ordinalist theory of choice and demand, obtaining independently of [START_REF] Slutsky | Sulla teoria del bilancio del consumatore[END_REF] a decomposition of the single price effect on demand into two effects [START_REF] Chipman | Slutsky's 1915 article: How it came to be found and interpreted[END_REF]. They also corrected Pareto's insufficiencies [START_REF] Pareto | Manual of political economy[END_REF] as regards the definition of complements and substitutes, which implied recognizing
If the market for X is unstable taken by itself, price reactions will tend to increase market disequilibrium; hence it will not be made stable through reactions with other markets (Hicks (1939), 71-72, §5). Again, this result would probably be different if a number of market interactions were taken into account.
see also[START_REF] Samuelson | Foundations of economic analysis[END_REF] which contains in substance the three articles mentioned
Symmetry of the characteristic determinant of order m implies (and requires) symmetry of all its principal minors
We know from Negishi (1958, 445, fn) that his contribution and those of [START_REF] Hahn | Gross substitutes and the dynamic stability of general equilibrium[END_REF] and [START_REF] Arrow | On the stability of the competitive equilibrium, i[END_REF] were prepared independently and submitted to Econometrica between April and July 1957.
"This condition apears to be new in the literature on general equilibrium, although Dr. Frank
For any partition of the set of goods $J = \{1, \dots, n\}$ into two subsets $J_1$ and $J_2$, we have $\sum_{i \in J_1} z_{i j_2} + \sum_{i \in J_2} z_{i j_1} > 0$ for all $j_1 \in J_1$, $j_2 \in J_2$
Note that on this occasion, Keenan favors the discussion of conditions on the Jacobian to the use of a Lyapunov function |
01764151 | en | [ "info" ] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764151/file/462132_1_En_13_Chapter.pdf Hussein Khlifi
email: hussein.khlifi@ensam.eu
Abhro Choudhury
Siddharth Sharma
Frédéric Segonds
Nicolas Maranzana
Damien Chasset
Vincent Frerebeau
Towards cloud in a PLM context: A proposal of Cloud Based Design and Manufacturing methodology
Keywords: Cloud, Collaborative Design, PLM, Additive Manufacturing, Manufacturing
Product Lifecycle Management (PLM) integrates all the phases a product goes through from inception to its disposal but generally, the entire process of the product development and manufacturing is time-consuming even with the advent of Cloud-Based Design and Manufacturing (CBDM). With enormous growth in Information Technology (IT) and extensive growth in cloud infrastructure the option of design and manufacturing within a cloud service is a viable option for future. This paper proposes a cloud based collaborative atmosphere with real-time interaction between the product development and the realization phases making the experience of design and manufacturing more efficient. A much-optimized data flow among various stages of a Product Lifecycle has also been proposed reducing the complexity of the overall cycle. A case study using Additive Manufacturing (AM) has also been demonstrated which proves the feasibility of the proposed methodology. The findings of this paper will aid the adoption of CBDM in PLM industrial activities with reduced overall cost. It also aims at providing a paradigm shift to the present design and manufacturing methodology through a real-time collaborative space
Introduction
With the emergence of new advanced technologies and rapidly increasing competition for efficient product development, researchers and industry professionals are constantly looking for new innovations in the field of design and manufacturing. It has become a challenge to meet the dynamics of today's marketplace in the manufacturing field, as product development processes are geographically spread out. In the Cloud-Based Design and Manufacturing research community, there is an ongoing debate on key characteristics such as cloud-based design, communication among users, data safety, data storage, and data management, among others. Such discussions have now largely been answered by the developments of cloud-based design and manufacturing. Efforts are now directed towards making advancements in the field of design and manufacturing by using IT tools and PLM concepts. Some researchers are advancing the development of a PLM paradigm for linking modular products between suppliers and product developers [START_REF] Belkadi | Linking modular product structure to suppliers' selection through PLM approach: A Frugal innovation perspective[END_REF]; others have extended their PLM research to the domain of Building Information Modeling by taking motivation and best practices from PLM and emphasizing an information-centric management approach in construction projects [START_REF] Boton | Comparing PLM and BIM from the Product Structure Standpoint[END_REF]. The revolutionary advancement of cloud services, which now offer distributed network access, flexibility, availability on demand and pay-per-use services, has certainly given a push to applying cloud computing technology in the field of manufacturing. The idea of performing manufacturing on the cloud has reached such an extent that industries are driven to carry out operations in the cloud rather than using traditional methods. Today's world is moving faster and is more connected than ever before due to globalization, which has created new opportunities and risks. Traditional methods lack the ability to allow users who are geographically spread out to work in a collaborative environment to perform design and manufacturing operations. Traditional design follows a one-way process that consists of four main phases (customer, market analysis, designer and manufacturing engineers) carried out in that order, where each phase is a standalone centralized system with minimal cross-functional interaction. Over time, technologies like CAD, internet services and the client-server model evolved drastically, but overall the advantages provided by these systems were limited in nature as they still followed the same one-way methodology [START_REF] Abadi | Data Management in the Cloud: Limitations and Opportunities[END_REF]. Moreover, the supply chain has so far remained rigid and costly, whereas a cloud-based supply chain is customer-centric: users with specific needs are linked with service providers while meeting the user's cost, time and quality requirements. This is where the adoption of Cloud-Based Design and Manufacturing (CBDM) becomes essential, as it is based on a cloud platform that allows users to collaborate and use resources on demand and on a self-service basis. This provides the flexibility and agility required to reconfigure resources and minimize down-time, also called rapid scalability.
CBDM is designed to allow collaboration and communication between the various actors involved from the design to the delivery phase, so that cross-disciplinary teams can work collaboratively in real time from anywhere in the world with access to the internet. Cloud manufacturing allows the production of a variety of products of varying complexity and helps in mass customization. Using the CBDM system, prototypes of a part can be manufactured without buying costly manufacturing equipment. Users can pay a subscription fee to acquire software licenses and use manufacturing equipment instead of purchasing them. Finally, the use of a cloud-based environment creates opportunities, as tasks that were not economically viable earlier can now be done using cloud services.
2 State of the Art
Cloud Based Collaborative Atmosphere
With the arrival and advancement of Web 2.0, social collaborative platforms provided a powerful way to exchange information and data [START_REF] Wu | Cloud-based design and manufacturing systems: A social network analysis[END_REF]. Internet-based information and communication technologies now allow information to be exchanged in real time and provide the means to put into practice the concepts of mass collaboration and distributed design and manufacturing processes [START_REF] Schaefer | Distributed Collaborative Design and Manufacture in the Cloud-Motivation, Infrastructure, and Education[END_REF]. Collaboration-based design and manufacturing comprises all the activities that revolve around the manufacture of a product and leads to significant economies of scale, reduced time to market, improvement in quality, reduced costs, etc. In a cloud manufacturing system, manufacturing resources and capabilities, software, etc. are interconnected to provide a pool of shared resources and services such as Design as a Service, Simulation as a Service, and Fabrication as a Service to consumers [START_REF] Ren | Cloud manufacturing: From concept to practice[END_REF]. Current research has put considerable emphasis on the connectivity of products, in other words smart connected products, via the cloud environment for better collaboration across the various manufacturing operations carried out on a product [START_REF] Goto | Multi-party Interactive Visioneering Workshop for Smart Connected Products in Global Manufacturing Industry Considering PLM[END_REF]; this was a first motivation for moving into the cloud domain for design and manufacturing. Many large-scale enterprises have also formed decentralized and complex networks for their design and manufacturing operations, where constant interaction with small-scale enterprises is becoming a challenge. However, with the emergence of cloud computing, more and more enterprises have shifted their work into the cloud domain and have saved millions of dollars [START_REF] Wu | Cloud-based design and manufacturing: Status and promise," a service-oriented product development paradigm for the 21st century[END_REF][START_REF] Wu | Cloud-based manufacturing: old wine in new bottles?[END_REF]; this forms our second motivation for implementing manufacturing, in our case AM, on the cloud. It is backed by the fact that automobile and aeronautics giants are currently shifting a wide portion of their work onto cloud platforms by implementing cloud computing technology in many engineering business lines. This also reaffirms our belief that cloud computing will transform enterprises, both small and big, enabling them to profit from moving their design and manufacturing tasks into the cloud. Hence this forms the first pillar of the proposed CBDM.
Rapid manufacturing scalability
The idea of providing manufacturing services on the internet was in fact developed a long time ago, when researchers envisaged the propagation of the IoT (Internet of Things) in production. Recent research has showcased the importance of a continuous process flow in lean product development, which gave rise to the idea of scalability in the manufacturing process so as to make it more fluid.
In a world of rapid competition, scalability of rapid manufacturing is more important than ever. In alignment with the statement made by Koren et al. [START_REF] Yoram | Design of Reconfigurable Manufacturing Systems[END_REF] regarding the importance of reconfigurable manufacturing systems (RMSs) for quick adjustment of production capacity and functionality, CBDM allows users to purchase services such as manufacturing equipment and software licences with a reconfiguration module, which in turn allows scalability of the manufacturing process and prevents over-purchasing of computing and manufacturing capacities. This digital manufacturing productivity greatly enhances the scalability of manufacturing capacity in comparison with the traditional manufacturing paradigm, as is evident from the recent work of Lechevalier et al. [START_REF] Lechevalier | Model-based Engineering for the Integration of Manufacturing Systems with Advanced Analytics[END_REF] and Moones et al. [START_REF] Moones | Interoperability improvement in a collaborative Dynamic Manufacturing Network[END_REF], who have showcased efficient interoperability in a collaborative and dynamic manufacturing framework. As stated by Wua et al. [START_REF] Wua | Cloud-based design and manufacturing: A new paradigm in digital manufacturing and design innovation[END_REF], from the perspective of manufacturing scalability, CBDM allows the product development team to leverage more cost-effective manufacturing services from global suppliers to rapidly scale the manufacturing capacity up and down during production. Hence, rapid manufacturing scalability forms the second pillar of the proposed methodology.
Design and additive manufacturing methodology model
In this section, the flow of information in the digital chain has been studied to optimize the quality of AM, which remains our focus in the experiment used to test the proposed methodology. This information management system interacts with the support infrastructure [START_REF] Kim | Streamlining the additive manufacturing digital spectrum: A systems approach[END_REF] (the standards, methods, techniques and software). The table, whose phases 3 to 6 are represented in Fig. 1, provides an overview of the eight distinct stages and transitions. With a clear understanding of the various phases of additive manufacturing and the transitions of information between each phase, we were able to identify optimization opportunities for additive manufacturing and establish mechanisms and tools to exploit them. In the current research, phases 3 and 4 have been considered, as represented by the dotted-line region in Fig. 1. This transition is an important preparedness activity for AM that is essential to the achievement of the final product [START_REF] Fenves | A core product model for representing design information[END_REF]. It includes activities such as the journaling of the 3D model, the generation of the support around the 3D model, the decomposition of the 3D model into successive layers, and the generation of the code that contains the manufacturing instructions for the machine. It is this transition stage, "Activities for the AM process", which is dealt with later in this research project, where the AM process is optimized in the proposed methodology, making this model another pillar of the methodology.
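As an illustration of this phase 3-to-4 transition, the sketch below strings together the preparation activities (slicing the model into layers, then generating machine instructions) as a minimal pipeline. It is a hypothetical Python sketch, not the workflow of [17]: the function names, the Layer structure and the G-code-style output lines are our own simplifications.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Layer:
    z: float                                         # build height of this slice (mm)
    paths: List[str] = field(default_factory=list)   # tool paths (omitted in this sketch)

def slice_model(height_mm: float, layer_thickness_mm: float) -> List[Layer]:
    """Decompose the 3D model into successive layers (geometry handling omitted)."""
    n_layers = round(height_mm / layer_thickness_mm)
    return [Layer(z=round((i + 1) * layer_thickness_mm, 3)) for i in range(n_layers)]

def generate_instructions(layers: List[Layer]) -> List[str]:
    """Emit simplified, G-code-like machine instructions, one move per layer."""
    program = ["; AM build program (illustrative)"]
    for layer in layers:
        program.append(f"G1 Z{layer.z} ; move to layer height")
    program.append("; end of program")
    return program

layers = slice_model(height_mm=30.0, layer_thickness_mm=0.2)   # a 30 mm part, 0.2 mm layers
print(len(layers), "layers;", generate_instructions(layers)[1])
```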
Real-Time Business Model
One of the major advantages of using CBDM is that we are always linked to the outer world, which gives us the real-time picture. As one of the pillars of our methodology, we therefore propose a real-time business model to execute the entire process in the most efficient way in terms of quality and cost. The Real-Time Request for Quotation (RT-RFQ) is an interesting feature which increases the utility of the system. It basically utilizes the Knowledge Management Systems (KMS) which are an integral part of cloud-based design and manufacturing systems [START_REF] Li | Cost, sustainability and surface roughness quality-A comprehensive analysis of products made with personal 3D printers[END_REF]. The selection of candidate Key Service Providers (KSPs) is done based on the abilities and capacities of each KSP to produce the product within the stipulated time, cost and quality. The entire process of generating a request for quotation, finalising the service provider and delivering the final product runs in real time, thus creating collaboration between the sellers and the buyers, which we name the "Marketplace". The entire material management and supply chain of the product in a collaborative platform is an integral part of our proposed methodology, thus forming another of its pillars.
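A minimal sketch of how such a real-time selection could rank candidate KSPs against stipulated time, cost and quality constraints is given below; the weighting scheme, field names and numbers are assumptions made for illustration, not part of the cited KMS.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    provider: str
    cost: float           # offered price
    lead_time_days: int   # promised delivery time
    quality_score: float  # e.g. past-performance rating in [0, 1]

def rank_quotes(quotes, max_cost, max_days, w_cost=0.4, w_time=0.3, w_quality=0.3):
    """Keep feasible quotes and rank them: lower cost and lead time, higher quality first."""
    feasible = [q for q in quotes if q.cost <= max_cost and q.lead_time_days <= max_days]
    def score(q):
        return (w_cost * (1 - q.cost / max_cost)
                + w_time * (1 - q.lead_time_days / max_days)
                + w_quality * q.quality_score)
    return sorted(feasible, key=score, reverse=True)

quotes = [Quote("KSP-A", 900.0, 12, 0.8), Quote("KSP-B", 1200.0, 6, 0.9)]
best = rank_quotes(quotes, max_cost=1000.0, max_days=15)
print(best[0].provider if best else "no feasible provider")
```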
Proposal of a methodology
Synthesis
The synthesis of the proposed methodology is supported by four foundation pillars: the cloud environment, rapid manufacturing scalability, the design and additive manufacturing methodology model and the real-time business model. As discussed in section 2.3, the optimization process involved in the AM workflow is rich in research opportunities, and it is thus important to reduce the number of phases involved in the manufacturing process. In the construction of the methodology, a centralized system has been considered which controls all the processes, i.e. the cloud domain, and forms the platform where all actions take place. The "cloud" atmosphere thus forms the heart of the methodology, which starts with inputs decided during the RFQ and award acknowledgement process of a project. 3D design (phase 1) is followed by two new functionalities, Preparation for manufacturing (phase 2) and the Marketplace (phase 3). Then comes the generic manufacturing process (phase 4), which in combination with phases 1, 2 and 3 gives the power of rapid manufacturing scalability as discussed in section 2.2. The last two phases, packaging (phase 5) and delivery (phase 6), constitute the supply chain network of the process and are interconnected to phase 3, the "Marketplace", in the cloud by means of interactions. Phases 3, 5 and 6, along with the inputs given to the process, are inspired from the real-time business model as discussed in section 2.4. In this way, the four pillars form the backbone of the methodology. Collaboration at each phase, in the form of propagation of designs, consultation, evaluation and notification, happens in parallel during the process, which forms a distributed and connected network in the methodology.
In addition to defining the pillars, the existing methodology workflow was simplified. The methodology process has been scaled down to six phases, instead of the eight mentioned by Kim D [START_REF] Kim | Streamlining the additive manufacturing digital spectrum: A systems approach[END_REF]. For that, some sub-stages were regrouped into phases to optimize the process and simplify the methodology. Indeed, we noticed that by reducing phases and regrouping linked sub-stages into a single phase, we could minimize the interactions that occur during transitions between the different phases; this helped us achieve a 6-phase methodology process with multiple parallel interactions. By grouping sub-steps into main steps, we proposed a 6-phase methodology. This approach of grouping sub-steps represents our idea of moving from a "task to do" vision to a "defined role" vision. Instead of thinking of 3D scanning, 3D modelling or triangulation as separate tasks, it is better to think of the task of a function such as a 3D designer or a mechanical engineer. Following this approach, we group several tasks under a specific role. That is how we simplified our methodology, which is checked and validated in the case study applied to AM.
Methodology
From 3D design to product delivery, this methodology describes six phases including five transitions, with traceability on the cloud, as outlined in Fig. 2.
As shown in Fig. 2, the methodology process starts with a 3D design phase (1), which involves designing the product in a 3D environment, producing a 3D CAD file and saving it on the cloud, allowing collaborative work with anyone who has access. This file is then sent to be prepared for manufacturing (2). The preparation of the 3D model before manufacturing basically consists in deciding the manufacturing process that will be used to produce the designed part. Sub-steps such as geometry repair, meshing, weight optimization and finite element simulations are grouped in a single manufacturing preparation phase (2). Once the file is prepared for manufacturing, it is uploaded to a Marketplace (3) platform where the product is evaluated and reviewed by service providers. It is an online collaborative platform which brings together buyers (designers, engineers and product developers) and sellers, the key service providers (KSPs) who manufacture and bring the design and the concept to realization. Here phases 2 and 3 work in parallel to double-check whether the 3D file is ready for manufacturing or requires further preparation or optimization for the manufacturing process to be used. At this stage, the product design has been optimized and prepared for manufacturing, and the most efficient service provider has been awarded the order by the designer. That service provider will lead the customer to the appropriate manufacturing process and start the manufacturing phase (4). A product validation and evaluation loop occurs after manufacturing to make sure the product matches the requirement specifications. Once the product is manufactured and meets the requirements, the service provider proceeds to the packaging (5) and then the delivery (6). The service provider selected in the Marketplace also has the responsibility of providing the packaging and delivery service. The methodology proposed here is the result of conceptual and theoretical work. However, it must be applied at a practical level to evaluate its efficiency. We have implemented the proposed theoretical model on a case study to highlight the benefits of this model in a real-world scenario. The following section describes a case study of the proposed methodology, applied to Additive Manufacturing.
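The six phases and their transitions can be read as a simple sequential workflow with a validation loop on manufacturing; the sketch below is our own simplification of Fig. 2, with the loop reduced to a boolean check named requirements_met.

```python
PHASES = ["3D design", "Preparation for manufacturing", "Marketplace",
          "Manufacturing", "Packaging", "Delivery"]

def run_order(requirements_met) -> list:
    """Walk the six phases; loop on manufacturing until the product matches the specification."""
    log = []
    for phase in PHASES:
        log.append(phase)
        if phase == "Manufacturing":
            while not requirements_met():
                log.append("Manufacturing (rework after validation loop)")
    return log

attempts = iter([False, True])            # first build rejected by the validation loop, second accepted
trace = run_order(lambda: next(attempts))
print(" -> ".join(trace))
```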
Additive manufacturing (AM) has become a new way to realize objects from a 3D model [START_REF] Thompson | Design for Additive Manufacturing: Trends, opportunities, considerations, and constraints[END_REF], as it provides a cost-effective and time-efficient way to produce low-volume, customized products with complex geometries and advanced material properties and functionalities.
From 3D design to product delivery, the proposed methodology discussed in section 3 has been applied step by step in the AM context, thus changing step (2) from "Preparation for manufacturing" to "Preparation for Additive Manufacturing", the rest remaining the same. As the project was conducted in partnership with Dassault Systèmes, and because we wanted to use a unique platform for the whole CBDM process, the "3DEXPERIENCE" solution of the company was used to test the proposed methodology. The focus was on optimizing the methodology dataflow, which directly impacts the product quality.
Step 1: 3D design
In the first phase, the user will use a 3D design app on the cloud and work collaboratively. Once the product is designed and converted into an appropriate format, we proceed to the preparation process for manufacturing.
Step 2: Preparation for manufacturing
We have a 3D model file at this stage which requires preparation for 3D printing. Fig. 3 describes the fundamental AM processes and operations followed during the preparation of the CAD model for manufacturing in an AM environment. During the process, pre-context setting and meshing were also carried out.
Step 3: The 3DMarketplace
The 3DMarketplace is a platform for additive manufacturing. It addresses the end-to-end process of upstream material design, downstream manufacturing processes and testing to provide a single flow of data for engineering parameters. The objective here, as a buyer, is to select the most efficient key service provider possessing the required capabilities and skill sets on the Marketplace to proceed to the manufacturing phase (Fig. 4). The Marketplace shows a list of service providers that can process the product manufacturing. A printing request was sent to the laboratory, where back-and-forth exchanges between the buyer and the service provider are necessary to ensure the printability of the 3D model and the use of the right manufacturing technology. This phase ends with the confirmation of the order and the start of the AM process.
Step 4-5-6: Manufacturing, Packaging and Delivery
As defined in the proposed methodology section, the service provider from the Marketplace takes care of the manufacturing, packaging and delivery service. For the delivery, we chose to pick up the part. The customer can rate their experience and raise complaints on the 3DMarketplace if required, thus allowing improvement of the services provided.
Conclusion and Future work
The successful implementation of cloud-based additive manufacturing demonstrated that collaborative and distributed design and manufacturing tasks as complex as AM can be performed with ease by using a cloud-based service. This research points towards a centralized user interface, i.e. the cloud platform, which forms the heart of the proposed methodology, allowing its users to aggregate data and facilitating coordination, communication and collaboration among the various players of the design, development, delivery and business segments. We optimized the digital workflow while applying the proposed methodology, which helped in obtaining better quality products, shorter machining time, less material use and reduced AM costs. One of the main gains from the study was the use of the 3DMarketplace in the methodology, which offers a collaborative atmosphere for discussing subjects such as 3D model design, geometry preparation and the appropriate manufacturing process, and also aids in the evaluation and validation of the two previous phases of the proposed methodology, which is valuable for optimization and accuracy in product development and delivery. The prototype of the CBDM system presented in this work will help to develop confidence in the functioning of a CBDM system, especially in the domain of AM, and will serve as an ideal framework for further development in the near future.
Future work can consist of an adapted version of the proposed methodology, CBAM (Cloud-Based Additive Manufacturing), with a process further optimized for AM. Overall, the proposed methodology, based on the work performed in the case study, offers a simplified, optimized, collaborative and AM-oriented solution that could be used in industrial and academic contexts, and it further strengthens the case for the adoption of cloud-based services in the manufacturing sector in the near future.
Fig. 1. Extract of digital channel information flow for AM as proposed by Kim D [17]
Fig. 2. Proposal of a CBDM methodology
Fig. 3. Preparation for manufacturing steps
Fig. 4. The 3D Marketplace process with the service providers used during the experiment
Case study: Additive Manufacturing
This research is conducted in a partnership between the LCPI, a research lab of the engineering school Arts et Métiers ParisTech, and the Dassault Systèmes company. Collaborating to unite an academic research entity and an industrial leader is one of our main purposes, so as to point out the merits of CBDM, such as its distributed and collaborative network, as a solution to today's design and manufacturing activities. The model proposed in this paper is tested by carrying out the design, manufacturing, trading on the Marketplace and finally packaging of a very common industrial product called a "joiner", in a collaborative and distributed environment on a cloud platform, to demonstrate the feasibility of the proposed solutions by experimental tests.
01764153 | en | [ "info" ] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764153/file/462132_1_En_41_Chapter.pdf Farouk Belkadi
Ravi Kumar Gupta
Stéphane Natalizio
email: snatalizio@audros.fr
Alain Bernard
Modular architectures management with PLM for the adaptation of frugal products to regional markets
Keywords: PLM, Modular Architecture, Product Features, Co-evolution
Nowadays, companies are challenged by high competitiveness and the saturation of markets, leading to a permanent need for innovative products that ensure the leadership of these companies in existing markets and help them reach new potential markets (i.e. emerging and mature markets). The requirements of emerging markets are different in terms of geography, economy, culture, governance policies and standards. Thus, adapting existing European products to develop new products tailored to emerging markets is one possible strategy that can help companies cope with such a challenge. To do so, a large variety of products and options has to be created, managed and classified according to the requirements and constraints of a target regional market. This paper discusses the potential of the PLM approach to implement the proposed modular product design approach for the adaptation of European products and production facilities to emerging markets. Using the modular approach, the product design evolves iteratively, coupling the configuration of various alternatives of product architectures and the connection of functional structures to their contexts of use. This enables the customization of the adapted product to specific customers' needs.
Introduction
Customers' requirements vary across geographical regions, standards, and contexts of use of the product of interest, whereas the global production facilities to address such requirements are constrained by local governing policies, standards, and local resource availability. In order to address an emerging market's needs and adapt existing product development facilities, it is important to analyze and evaluate different possible product solutions against the specific requirements of that regional market.
An emerging market is generally characterized as a market under development, with less presence of standards and policies compared to the mature markets of developed countries [START_REF]MSCI Market Classification Framework[END_REF]. To respond to the competition from these emerging countries, frugal innovation is considered a solution for producing customized products in a shorter time and improving the attractiveness of western companies [START_REF] Khanna | Emerging Giants: Building World-Class Companies in Developing Countries[END_REF]. Frugal innovation or frugal engineering is the process of reducing the complexity and cost of goods and their production. A frugal product is defined in most industries in terms of the following attributes: functional, robust, user-friendly, growing, affordable and local. The details of these attributes are given in [START_REF] Bhatti | Frugal Innovation: Globalization, Change and Learning in South Asia[END_REF][START_REF] Berger | Frugal products, Study results[END_REF].
As per the study [START_REF] Gupta | Adaptation of european product to emerging markets: modular product development[END_REF], these frugal attributes are not always sufficient for adapting existing product development facilities in European countries to emerging markets. Several additional factors can influence consumer behavior as well, such as cultural, social, personal and psychological factors. To answer this demand, companies have to provide tangible goods and intangible services that result from several processes involving human and material resources and that deliver added value to the customer.
However, given the large variety of markets, customer categories, needs and characteristics, companies have to create and manage a huge variety of products and services, under ever more complex constraints of delivery time reduction and cost saving. To do so, the optimization strategy should concern all steps of the development process, including design, production, packaging and transportation [START_REF] Ferrell | Marketing: Concepts and Strategies[END_REF].
Generally, three categories of product are distinguished depending on the level of customization and the consideration of customer preferences, namely: (i) standard products that do not propose any customization facility; (ii) mass-customized products offering customization on some parts of the product; and (iii) unique products developed to answer a specific customer demand. Despite this variety, every product is defined through a bundle of elements and attributes capable of exchange and use. It has often been shown that modular architectures offer great advantages for the creation and management of various product architectures within the same family. Taking advantage of this concept, this paper proposes the use of a modular approach to address emerging market requirements through the adaptation of original products. The key issue is the use of the PLM (Product Lifecycle Management) framework as a kernel tool to support both the management of product architectures and the connection of these architectures with production strategies. The specific use case of product configuration of a mass-customized product is considered as the application context.
The next section discusses the main foundations of the modular approach and its use for the configuration of product architectures. Section 3 discusses the implementation of the proposed approach in the Audros software. Audros is a French PLM system providing a set of flexible tools adaptable to many functional domains through an intelligent merging of the business process model, the data model generator and the user interface design. Finally, section 4 gives the conclusion and future works.
2 Product configuration strategies within modular approach
Product modular architectures
Product architecture is the way by which the functional elements (or functions) of a product are arranged into physical units (components) and the way in which these units interact [START_REF] Eppinger | Product Design and Development[END_REF]. The choice of product architecture has broad implications for product performance, product change, product variety, and manufacturability [START_REF] Ulrich | The role of product architecture in the manufacturing firm[END_REF]. Product architecture is thought of in terms of its modules. It is also strongly coupled to the firm's development capability, manufacturing specialties, and production strategy [START_REF] Pimmler | Integration analysis of product decompositions[END_REF].
A product module is a physical or conceptual grouping of product components to form a consistent unit that can be easily identified and replaced in the product architecture. Alternative modules are a group of modules of the same type and satisfy several reasoning criteria/features for a product function. Modularity is the concept of decomposing a system into independent parts or modules that can be treated as logical units [START_REF] Pimmler | Integration analysis of product decompositions[END_REF][START_REF] Jiao | Fundamentals of product family architecture[END_REF]. Modular product architecture, sets of modules that are shared among a product family, can bring cost savings and enable the introduction of multiple product variants quicker than without architecture. Several companies have adopted modular thinking or modularity in various industries such as Boeing, Chrysler, Ford, Motorola, Swatch, Microsoft, Conti Tires, etc. [START_REF] O'grady | The age of modularity: Using the new world of modular products to revolutionize your corporation[END_REF]. Hubka and Eder [START_REF] Hubka | Theory of technical systems[END_REF] define a modular design as "connecting the constructional elements into suitable groups from which many variants of technical systems can be assembled". Salhieh and Kamrani [START_REF] Salhieh | Macro level product development using design for modularity[END_REF] define a module as "building block that can be grouped with other building blocks to form a variety of products". They also add that modules perform discrete functions, and modular design emphasizes minimization of interactions between components.
Generic Product Architecture (GPA) is a graph where nodes represent product modules and links represent connections among product modules according to specific interfaces (functional, physical, information and material flow) to represent a product or a set of similar products forming a product family. A GPA represents the structure of the functional elements and their mapping into different modules and specifies their interfaces. It embodies the configuration mechanism to define the rules of product variant derivation [START_REF] Elmaraghy | Product Variety Management[END_REF]. A clear definition of the potential offers of the company and the feasibility of product characteristics could be established for a set of requirements [START_REF] Forza | Application Support to Product Variety Management[END_REF]. Figure 1 shows an example of modular product architecture for the case of bobcat machine, including the internal composition of modules and the interaction between them [START_REF] Bruun | Interface diagram: Design tool for supporting the development of modularity in complex product systems[END_REF]. The similar concepts mentioned in the literature are 'building product architecture', 'design dependencies and interfaces' and 'architecture of product families', which can be used for the development of GPA. The GPA can be constructed by using different methods presented in the literature [START_REF] Jiao | Product family design and platform-based product development: a state-of-the-art review[END_REF][START_REF] Bruun | PLM support to architecture based development contribution to computer-supported architecture modelling[END_REF].
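Since a GPA is essentially a graph whose nodes are modules and whose links are typed interfaces, it can be captured with a very small data structure. The sketch below is a generic illustration with invented module names; it does not reproduce the Bobcat architecture of Fig. 1.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class GPA:
    """Generic Product Architecture: nodes are modules, edges are typed interfaces."""
    modules: Dict[str, List[str]] = field(default_factory=dict)           # module -> components
    interfaces: List[Tuple[str, str, str]] = field(default_factory=list)  # (module_a, module_b, flow type)

    def add_module(self, name: str, components: List[str]) -> None:
        self.modules[name] = components

    def connect(self, a: str, b: str, flow: str) -> None:
        self.interfaces.append((a, b, flow))

    def neighbours(self, name: str) -> List[str]:
        return [b if a == name else a for a, b, _ in self.interfaces if name in (a, b)]

gpa = GPA()
gpa.add_module("Engine", ["block", "cooling unit"])
gpa.add_module("Cab", ["frame", "controls"])
gpa.connect("Engine", "Cab", "information flow")
print(gpa.neighbours("Engine"))   # ['Cab']
```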
Construction of modular architectures
The use of the modular approach should offer the ability to work with different configurations. The concept of GPA gives interesting advantages for these issues. Indeed, by using an existing GPA to extract reusable modules, a first assessment of the interface compatibilities and performance of the selected modules can be performed against various product structures. Thus, module features are defined to support these assessments and are used to link process specifications, production capabilities, and all the other important criteria involved in the product development process. As the developed GPA is a materialization of the existing products, the adaptation of these products to the new market requirements is obtained through swapping, replacing, combining and/or modifying actions on the original product architectures.
In fact, the application of customer-driven product-service design can follow one of two processes: either collectively, through the generic product architecture, by mapping all the requested functions; or by mapping functions individually through features and then configuring product modules (cf. Figure 2). In the latter case, more flexibility is allowed in the selection of product modules and, consequently, more innovative possibilities for the final product alternatives. However, more attention is required to ensure the global consistency of the whole structure. The concept of "feature" is considered here as a generic term that includes technical characteristics used from an engineering perspective as well as inputs for decision-making criteria, useful for the deployment of a customer-driven design process in the context of adapting an existing European product and its development facilities to an emerging market.
Fig. 2. Two ways product configuration strategies for identification of modules for a product
In the first case, starting from existing solutions implies a high level of knowledge about the whole development process and considerably reduces the cost of adaptation to a new market. Using individual mapping of modules, the second way gives more possibilities to imagine new solutions (even though the design process does not start from scratch) by reusing modules that were not originally created for the identical function. The implementation scenarios detailing these two ways are the following:
Configuration 1: Mapping of Requested Functions to the GPA. The starting point in this configuration is the existing product families, actually produced to meet certain functions and sold to customers in other markets. The goal is then to adapt the definition of modules to the new requirements according to their level of correspondence with existing functions, the importance of each customer option, and the possible compatibilities between local production capabilities and those used for the realization of the original product. The modular approach is used to satisfy a set of functions collectively through the GPA by mapping all the required functions.
Configuration 2: Mapping a set of functions to modules through features. In the second configuration, the modular approach is used to satisfy functions individually through features. More attention is given to product modules separately, regardless of the final product structures involving these modules. This is also the case when the previous product structures contain only a partial correspondence with the new requirements. This configuration offers more innovation freedom for the design of a new product but requires a strong analysis of interface compatibilities across modules. In this configuration, we go from the interpretation of the functions to the identification of all the modules' features, then search whether adequate modules exist, and finally configure these modules into possible product architectures.
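This second configuration can be sketched as a two-step filter: map each requested function to candidate modules through their features, then keep only the combinations whose interfaces are mutually compatible. The sketch below is our own illustration of that logic, with feature matching reduced to a set inclusion test and an invented mini-catalogue.

```python
from itertools import product

# function -> candidate modules, each described by its features and offered interfaces
CATALOGUE = {
    "cooling": [{"name": "M-C1", "features": {"low-cost"}, "interfaces": {"bolt-on"}},
                {"name": "M-C2", "features": {"high-capacity"}, "interfaces": {"rail"}}],
    "control": [{"name": "M-K1", "features": {"low-cost"}, "interfaces": {"bolt-on"}}],
}

def candidates(function, required_features):
    """Step 1: map a function to the modules whose features cover the requirements."""
    return [m for m in CATALOGUE[function] if required_features <= m["features"]]

def compatible(architecture):
    """Step 2: keep architectures whose modules share at least one common interface type."""
    return bool(set.intersection(*(m["interfaces"] for m in architecture)))

requested = {"cooling": {"low-cost"}, "control": {"low-cost"}}
options = [candidates(f, feats) for f, feats in requested.items()]
architectures = [arch for arch in product(*options) if compatible(arch)]
print([tuple(m["name"] for m in arch) for arch in architectures])   # [('M-C1', 'M-K1')]
```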
Implementing modular approach in PLM for the configuration of customized product
By using modular architectures, different product configurations can be built as an adaptation of existing products or as the creation of new ones through the combination and connection of existing modules developed separately in previous projects. Product configuration is already used from a mass customization perspective [START_REF] Daaboul | Design for mass customization: Product variety VS process variety[END_REF]. It can also be used to increase product variety for regional adaptation and to improve the possibility for the customer to choose between different options for an easily customized product with low production cost. This is possible through the matching among product modules, process modules and production capabilities. The development of a product for a new market can then be obtained through a concurrent adjustment of the designed architecture and the production strategy, considered as a global solution. Following this approach, the involvement of the customer in the product development process is made through an easier clarification of his needs as a combination of functions and options. These functions/options have to be connected in the design stage to pre-defined modules. Customers then engage only in the modules in which they are interested and which present a high potential of adaptation. On the production side, process alternatives are defined for each alternative of product configuration, so that all the options presented in the product configurator are already validated in terms of compatibility with the whole product architecture and production feasibility. This ensures more flexibility in production planning.
Figure 3 shows a global scenario connecting a product configurator with the PLM. Following this scenario, the customer can visualize different options for one product type and submit his preferences. These options are already connected to a list of pre-defined models which are designed previously and stored in the PLM. The selection of a set of options will activate various product architectures in the PLM. Based on the selected set of options, the designer extracts the related product architectures. For every option as displayed to the customer in the configurator, a set of modules alternatives are available in the PLM and can be managed by the Designer to create the final product architecture as a combination of existing architectures.
In addition, when selecting the product family and the target market, the PLM interfaces provide a first filtering of modules respecting the target market requirements.
Fig. 3. Scenario of product configuration with PLM
The creation of the predefined models in the PLM is part of a design process which is fulfilled in the design department based on the configuration strategies presented in section 2.2. For each target market or potential category of customers, every type of product is presented with its main architecture connected to a set of alternative architectures. Each alternative implements one or more product options that are tailored to specific regional markets by means of related alternatives of production process.
The main question to be resolved in this design stage concerns the characteristics that the concept of module should adopt in order to cope with the co-evolution strategy of product architecture and production process, while respecting customization constraints. In this case, specific features are defined with the module concept as decision-making criteria to support the product configuration process within a co-evolution perspective, as given below (a code sketch encoding these features follows the list):
─ Criticality: the importance of a module in the final product architecture, reflecting the importance of the related option/function to the customer. This helps the designer to choose between solutions in the presence of parameter conflicts.
─ Interfacing: the flexibility of a module to be connected with other modules in the same architecture. This increases its utilization in various configurations.
─ Interchangeability: the capacity of a module to be replaced by one or more other modules from the same category providing the same function. Based on this feature and the previous one, the customer can select only compatible options.
─ Process Connection: information about the first time the related module is used in the production process and its dependency on other assembly operations. This is particularly important if the company aims to offer the customer more flexibility for selecting some options even after the production process has started.
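Read as decision-making attributes, these four features can be attached directly to the module objects handled during configuration. The sketch below shows one possible encoding; the attribute names and the customisation rule are our assumptions, not the exact Audros implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModuleFeatures:
    criticality: int                 # importance of the related option to the customer (e.g. 1-5)
    interfacing: List[str]           # interface types the module can connect to
    interchangeable_with: List[str] = field(default_factory=list)  # alternative modules for the same function
    first_process_step: int = 0      # first assembly operation that consumes the module

def still_customisable(features: ModuleFeatures, current_step: int) -> bool:
    """An option can still be changed if its module has not yet entered the production process."""
    return current_step < features.first_process_step

seat_trim = ModuleFeatures(criticality=2, interfacing=["clip"],
                           interchangeable_with=["trim-B"], first_process_step=7)
print(still_customisable(seat_trim, current_step=4))   # True: the customer may still switch this option
```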
To support the implementation of such a process, a data model is implemented in the Audros PLM to manage a large variety of product alternatives connected to several alternatives of production (cf. Figure 4). In this model, every function is implemented through one or several technical alternatives. The concept of "module" is used to integrate one (and only one) technical solution in one product structure. Every product is composed of several structures representing product alternatives. Each structure is composed of a set of modules and connectors that present one or more interfaces. The concept of product master represents the models of mature products that will be available for customization within the product configurator (a minimal object sketch of this model is given after Fig. 4).
Fig. 4. PLM Data model implementing modular approach
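A minimal object rendering of the data model of Fig. 4 could look as follows; the class and attribute names are ours, and the actual Audros model is richer (effectivities, documents and lifecycle states are omitted).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TechnicalAlternative:
    name: str

@dataclass
class Function:
    name: str
    alternatives: List[TechnicalAlternative] = field(default_factory=list)

@dataclass
class Module:
    name: str
    solution: TechnicalAlternative               # exactly one technical solution per module

@dataclass
class Structure:
    name: str
    modules: List[Module] = field(default_factory=list)
    connectors: List[str] = field(default_factory=list)   # interfaces between modules

@dataclass
class Product:
    master: str                                   # mature product model exposed in the configurator
    structures: List[Structure] = field(default_factory=list)  # alternative architectures

cooling = Function("cooling", [TechnicalAlternative("air"), TechnicalAlternative("liquid")])
base = Structure("base", modules=[Module("cooling-module", cooling.alternatives[0])], connectors=["bolt-on"])
product = Product(master="family-A", structures=[base])
print(product.master, len(product.structures))
```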
Based on this data model, several scenarios are defined as implementations of the construction and use processes of modular architectures (see Figure 2). These scenarios concern, for instance, the creation of original products from scratch or from the adaptation of existing ones, the connection between the PLM and the product configurator for the ordering of a new customized product, and the PLM-MPM (manufacturing process management) connection for the realization of the selected alternatives. A typical scenario of the ordering and customization of a frugal product based on the adaptation of an existing one, using the PLM, is described as follows (a code sketch of the interaction flow is given after the list). The customer or the marketing department chooses an existing product as a base and defines the customization to be applied to adapt the product by the design office.
Actors: Customer/Marketing department of the company + Design department
Goal: select the product to be customized and ordered
Pre-condition:
─ If the request comes from the Marketing department, a new product family will be developed with options.
─ If the request comes from the customer, a new customer order with customization will be considered.
Post-condition: an instance of the product master is created and a request is sent to the design office.
Events and interactions flow:
─ The user chooses the product type and target market
─ The system returns the list of suitable options
─ The user creates an order for the desired products
─ The system creates a new product, an instance of the chosen product master
─ The user selects the options
─ The system analyzes the order and identifies suitable modules for each option
─ The system filters the alternatives of modules for each function regarding the interfacing and compatibility criteria
─ The system generates potential alternatives of product architecture
─ The system sends a notification of the design request to the design office
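The interaction flow above can be prototyped as a thin ordering service on top of the PLM data. The sketch below is a hypothetical illustration: the catalogue structure, the interface-based filtering and the notification string are simplified stand-ins for the corresponding PLM operations.

```python
def place_order(product_type, market, selected_options, catalogue):
    """Instantiate a product master, filter module alternatives per option, notify the design office."""
    master = catalogue[(product_type, market)]                  # product master suited to this market
    order = {"instance_of": master["name"], "options": {}}
    for option in selected_options:
        alternatives = master["options"].get(option, [])
        # keep only module alternatives whose interface is compatible with the master
        order["options"][option] = [m["name"] for m in alternatives
                                    if m["interface"] in master["interfaces"]]
    order["notification"] = "design request sent to the design office"
    return order

catalogue = {("washer", "EU-South"): {
    "name": "washer-family-A",
    "interfaces": {"bolt-on"},
    "options": {"drum-size": [{"name": "drum-7kg", "interface": "bolt-on"},
                              {"name": "drum-9kg", "interface": "rail"}]},
}}
print(place_order("washer", "EU-South", ["drum-size"], catalogue))
```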
The Graphical User Interface (GUI) of the PLM tool has been designed to provide flexible and user-friendly manipulation of any type of product structure as well as of its different modules and features. The global template of the GUI is the same for all screens, but the content adapts itself depending on the data to be managed and the context of use (scenario and use). With this GUI, the user has a unified interface that helps the designer in the design of a frugal product and its co-evolution with the production process, as follows:
─ Create and analyze various product architectures at any level, from different points of view (functional, technical solutions, compatibility, manufacturing, etc.).
─ Promote the re-use and adaptation of existing solutions in the design of product architectures, based on search facilities that retrieve objects (functions, modules, alternatives, etc.) in a very simple and quick way.
─ Manipulate product and production data (create/modify/adapt solutions).
─ Access easily all related documents such as market surveys, customer feedback, etc.
The following figure (cf. Fig. 5) presents the main GUIs of the proposed PLM platform as used in the proposed frugal design process. The flexibility of this platform takes advantage of the "effectivity parameter" describing the link between two PLM objects. The effectivity parameters, displayed in the GUI, are used for data filtering as well as for the representation and manipulation of objects during the configuration process. There is no limit on the definition of effectivity parameters. Examples of effectivity parameters used in the case of frugal product configuration are: criticality; customization; manufacturing plant; sales country; product option/variation; and begin/end date of validity.
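Effectivity-based filtering essentially amounts to matching the parameters stored on each link against the current context. A minimal sketch is given below, using two of the parameters listed above (sales country and begin/end dates of validity); the link records are invented for the example.

```python
from datetime import date

links = [
    {"child": "module-A", "sales_country": "IN", "begin": date(2016, 1, 1), "end": date(2020, 1, 1)},
    {"child": "module-B", "sales_country": "FR", "begin": date(2016, 1, 1), "end": date(2020, 1, 1)},
]

def effective_children(links, country, on_date):
    """Return the structure links that are valid for the given market and date."""
    return [l["child"] for l in links
            if l["sales_country"] == country and l["begin"] <= on_date <= l["end"]]

print(effective_children(links, "IN", date(2017, 6, 1)))   # ['module-A']
```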
Conclusion
PLM tool configuration for the representation and the management of Product modular architectures has been introduced so as to respond to the requirements of adapting product-service design and production in a customer-driven context. The focus is the tailoring of mature product solutions to customer's needs in emerging market. Module features have been defined to help translate the regional customer requirements into product functions and product structure design. It is also used to connect the product design to production planning as well as other downstream activities.
The modular design approach for the adaptation of European products to emerging markets has been proposed for this objective. The proposed modular product design approach is currently being implemented to support the configuration and customization of aircraft in the aeronautic domain and the co-design of production systems tailored to regional markets. Another application, in the domestic appliance industry, concerns the integration of the customer in the definition of product variety through a smart organization of feedback surveys following modular structures, highlighting the preferences of potential customers in a target regional market. Software interoperability and information exchanges between the tools involved in these industrial scenarios are ensured using the PLM framework, considered as a hub.
Fig. 1. Example of generic product architecture of Bobcat machine (adapted from [16]).
Fig. 5. Several PLM GUIs as a whole process
Acknowledgement
The presented results were obtained within the project "ProRegio", entitled "customer-driven design of product-services and production networks to adapt to regional market requirements", funded by the European Union's Horizon 2020 research and innovation program, grant agreement n° 636966.
01764172 | en | [ "info" ] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764172/file/462132_1_En_59_Chapter.pdf D Morales-Palma
I Eguía
M Oliva
F Mas
C Vallellano
Managing maturity states in a collaborative platform for the iDMU of aeronautical assembly lines
Keywords: Product and process maturity, Industrial Digital Mock-Up (iDMU), Digital manufacturing, Digital factory, PLM
Collaborative Engineering aims to integrate both functional and industrial design. This goal requires integrating the design processes, the design teams and using a single common software platform to hold all the stakeholders contributions. Airbus company coined the concept of the industrial Digital Mock Up (iDMU) as the necessary unique deliverable to perform the design process with a unique team. Previous virtual manufacturing projects confirmed the potential of the iDMU to improve the industrial design process in a collaborative engineering environment. This paper presents the methodology and preliminary results for the management of the maturity states of the iDMU with all product, process and resource information associated with the assembly of an aeronautical component. The methodology aims to evaluate the suitability of a PLM platform to implement the iDMU in the creation of a control mechanism that allows a collaborative work.
Introduction
Reducing product development time, costs and quality problems can be achieved through effective collaboration across distributed and multidisciplinary design teams. This collaboration requires a computational framework which effectively enables the capture, representation, retrieval and reuse of product knowledge. Product Lifecycle Management (PLM) refers to this enabling framework to help connect, organize, control, manage, track, consolidate and centralize all the mission-critical information that affects a product and the associated processes and resources. PLM offers a process to streamline collaboration and communication between product stakeholders, engineering, design, manufacturing, quality and other key disciplines.
Collaboration between product and process design teams has the following advantages for the company: reduction of time required to perform tasks; improvement of the ability to solve complex problems; increase of the ability to generate creative alternatives; discussion of each alternative to select as viable and to make decisions; communication improvement; learning; personal satisfaction; and encouraging innovation [START_REF] Alonso | Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges[END_REF]. However, collaboration processes need to be explicitly designed and managed to maximize the positive results of such an effort.
Group interaction and cooperation requires four aspects to be considered: people have to exchange information (communication), organize the work (coordination), operate together in a collective workspace (group memory) and be informed about what is happening and get the necessary information (awareness).
Maturity models have been designed to assess the maturity of a selected domain based on a comprehensive set of criteria [START_REF] Bruin | Understanding the main phases of developing a maturity assessment model[END_REF]. These models have progressive maturity levels, allowing the organization to plan how to reach higher maturity levels and to evaluate their outcomes on achieving that.
A maturity model is a framework that describes, for a specific area of interest, a set of levels of sophistication at which activities in this area can be carried out [START_REF] Alonso | Enterprise Collaboration Maturity Model (ECMM): Preliminary Definition and Future Challenges[END_REF]. Essentially, maturity models can be used: to evaluate and compare organizations' current situation, identifying opportunities for optimization; to establish goals and recommend actions for increasing the capability of a specific area within an organization; and as an instrument for controlling and measuring the success of an action [START_REF] Hain | Developing a Situational Maturity Model for Collaboration (SiMMCo) -Measuring Organizational Readiness[END_REF].
Product lifecycle mainly comprises several phases, e.g. research, development, production and operation/product support [START_REF] Wellsandt | A survey of product lifecycle models: Towards complex products and service offers[END_REF]. The development phase comprises the sub-phases shown in Fig. 1: feasibility, concept, definition, development and series, which involve improvements and modifications. Product collaborative design encompasses all the processes before the production phase; the product information management strategy achieves internal information sharing and collaborative design by integrating data and knowledge throughout the whole product lifecycle and by managing the completeness of the information in each stage of product design. Research on product maturity mainly addresses project management maturity, which is used to evaluate and improve the project management capabilities of enterprises. Fewer studies have discussed the concept of product maturity, and the number of works devoted to the maturity of the related processes and resources is insignificant. Wang et al. [START_REF] Wang | Research on Space Product Maturity and Application[END_REF] proposed the concept of space product maturity and established a product maturity management model, but did not investigate how product maturity can drive the product development process. Tao and Fan [START_REF] Tao | Application of Maturity in Development of Aircraft Integrated Process[END_REF] discussed the concept of maturity and a management control method in the integration process, but their division of maturity levels is not intuitive and the application of product maturity in a collaborative R&D platform is barely addressed. Chen and Liu [START_REF] Chen | Maturity Management Strategy for Product Collaborative Design[END_REF] applied a product maturity strategy for collaborative design on the collaborative development platform Teamcenter to verify the effectiveness and controllability of the strategy. Wuest et al. [START_REF] Wuest | Application of the stage gate model in production supporting quality management[END_REF] adapted the stage gate model, a well-established methodology for product and software development, to the production domain and indicated that it may provide valuable support for product and process quality improvement, although success strongly depends on the right adaptation.
The main objective of this paper is to design a maturity management model for controlling the functional and industrial design phase of an aeronautical assembly line in the Airbus company (Fig. 1), and to explore the development of this model in 3DExperience, a collaborative software platform by Dassault Systèmes [9].
Antecedents and iDMU concept
The industrial Digital Mock-Up (iDMU) is the Airbus proposal to perform the design process with a unique team and a unique deliverable. The iDMU is defined by Airbus to facilitate the integration of the aircraft development processes on a common platform throughout their whole service life. It is a way to help the functional and the industrial designs evolve jointly and collaboratively. An iDMU gathers all the product, process and resource information to model and validate a virtual assembly line, and finally to generate the shopfloor documentation needed to execute the manufacturing processes [START_REF] Menéndez | Virtual verification of the AIRBUS A400M final assembly line industrialization[END_REF][START_REF] Mas | Collaborative Engineering: An Airbus Case Study[END_REF].
Airbus promoted the Collaborative Engineering in the research project "Advanced Aeronautical Solutions Using PLM Processes and Tools" (CALIPSOneo) by implementing the iDMU concept [START_REF] Mas | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF][START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF][START_REF] Mas | PLM Based Approach to the Industrialization of Aeronautical Assemblies[END_REF]. The iDMU implementation was made for the industrialization of the A320neo Fan Cowl, a mid-size aerostructure. It was built by customizing CATIA/DELMIA V5 [9] by means of the PPR model concept. The PPR model of this commercial software provided a generic data structure that had to be adapted for the products, processes and resources of each particular implementation. In this case, a specific data structure was defined to support the Airbus products, the industrial design process, the process structure nodes, the resources structure nodes and their associated technological information, 3D geometry and metadata.
The process followed by Airbus to execute a pilot implementation of the iDMU is briefly described as follows. The previously existing Product structure was used and an ad-hoc application was developed that periodically updated all the modifications released by functional design. The Process and Resources structures were populated directly in the PPR context. The Process structure comprised four levels represented by four concepts: assembly line, station, assembly operation and task. Each concept has its corresponding constraints (precedence, hierarchy), its attributes and its allocation of products to be assembled and resources to be used. Once the PPR structures were defined, the system calculated the product digital mock-up and the resources digital mock-up that relate to each process node. As a result, the designer created simulations in the 3D graphical environment to analyse and validate the defined manufacturing solution. This validation of the process, product and resource design, by means of Virtual Manufacturing utilities in a common context, is a key feature in the Collaborative Engineering deployment.
The iDMU supports the collaborative approach through 3 main elements. First, it allows sharing different design perspectives, to reveal solutions that while valid for a perspective (e.g. resources design) cause problems in other perspectives (e.g. industrialization design), and to solve such issues. Second, it enables checking and validation of a high number of alternatives, allowing improving the harmonization and optimization of the design as a whole. And third, it is possible to reuse information contained in the iDMU by other software systems used in later stages of the lifecycle, facilitating the integration and avoiding problems with translation of models into intermediate formats, and making easier the use of new technologies such as augmented reality.
The CALIPSOneo project [START_REF] Mas | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF][START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF][START_REF] Mas | PLM Based Approach to the Industrialization of Aeronautical Assemblies[END_REF], with a scope limited to the A320neo fan cowl, allowed confirming that the iDMU provides a suitable platform to develop the sociotechnical process needed by the collaborative engineering. However, the project also revealed that the general functionalities provided by the adopted PLM commercial solution required an important research and development work to implement the data structures and functions needed to support the iDMU.
An important factor in the implementation of an iDMU is the need for a PLM tool capable of coordinating the workflow of all participants by means of the definition and control of the lifecycle of the allocated elements of the PPR structure, i.e. of managing their maturity states. At present, this issue is being addressed in the research project "Value Chain: from iDMU to Lean documentation for assembly" (ARIADNE).
Methodology
As said before, one of the studies carried out within the scope of the ARIADNE project was the analysis of the capabilities that a PLM tool requires to manage the maturity states of the iDMU. Such a PLM tool must satisfy the following objectives:
- To define independent and different sets of maturity states for Product, Process and Resource revisions.
- To define precedence constraints between the maturity states of a Process revision and the maturity states of its related Products and Resources.
- To define, for each Process revision maturity state, other conditions (e.g. attribute values) that are to be met prior to evolving a Process revision to that maturity state.
- To define, for each Process revision maturity state, that some process data or relations are not modifiable from this maturity state onwards.
- To display online, in the process revision iDMU, the Products and Resources that evolved through maturities since the last time it was vaulted.
- To display online, in the process revision iDMU, the impact of the evolved Products and Resources and how easily these issues can be fixed.
In order to prove the capabilities of a new PLM tool to meet these objectives, a simple lifecycle model is proposed. The model has only three possible maturity states for every element of the PPR structure: In Work, Frozen and Released. However, the importance of the proposed model lies in a set of constraints that prevent the promotion between maturity states, as described below. This simple model aims to be a preliminary test to evaluate a new PLM tool, so that it can be improved and extended with new states, relationships, constraints, rules, etc.
The In Work state is used for a new version of a product, process or resource element in the PPR structure. In Work data are fully modifiable and can be switched to Frozen by the owner, or to Released by the project leader. Frozen is an intermediate state between In Work and Released. It can be used, for example, for data waiting for approval. Frozen data are partially modifiable (for minor version changes) and can be switched back and forth between In Work and Frozen by the owner, or to Released by the project leader. Released is the final state of a PPR element, e.g. when a product is ready for production, a process is accepted for industrialization, or a resource is fully configured for its use. Released data cannot be deleted and cannot be switched back to previous states. Fig. 2 shows a schema of the proposed model. At the beginning of the lifecycle, since Design Engineering starts the product design, Manufacturing Engineering can usually begin to plan the process, set up the layout and define the necessary resources. In this situation, all product, process and resource elements in the PPR structure are In Work. The collaborative environment must allow the different actors of the system to visualize and query information under development, based on roles and permissions, so that it helps to detect design errors and make the right decisions.
The new PLM tool must provide a set of rules or constraints that control and alert the designer about non-coherent situations. Fig. 2 schematically presents some constraints to promote a PPR element. For instance, it is not possible to assign to a process node a maturity state of Frozen until the related product node has a maturity state of Released and the allocated resource has a maturity state of Frozen. In a similar way, to promote a process to Released, the allocated resource must be in Released. On the other hand, the resource element can only reach the maturity state of Released when the process element has been Frozen previously.
In addition to defining constraints between elements of different types (product, process and resource), it is necessary to establish rules between elements of the same type to control the change of maturity states of their interconnected elements. For instance, the following constraint inside the Product structure could be established: the designer of a product consisting of several parts can change the state of the product element to Frozen/Released only when all its parts already have that same state, so that a part still unfinished (In Work) alerts the designer that the product cannot be promoted yet.
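To make the constraint logic concrete, the following minimal Python sketch encodes the three maturity states and the promotion rules of Fig. 2 for a single product-process-resource triple. It is only an illustration of the rule set described above, not the customization actually performed in a PLM tool; the element names and the rule encoding are assumptions.

```python
from enum import IntEnum

class State(IntEnum):
    IN_WORK = 0
    FROZEN = 1
    RELEASED = 2

# Minimum maturity required on related elements before a promotion is allowed
# (constraints of Fig. 2).
RULES = {
    ("process",  State.FROZEN):   {"product": State.RELEASED, "resource": State.FROZEN},
    ("process",  State.RELEASED): {"product": State.RELEASED, "resource": State.RELEASED},
    ("resource", State.RELEASED): {"process": State.FROZEN},
}

def promote(kind, target, states):
    """states: current maturity of the linked product/process/resource elements."""
    if states[kind] == State.RELEASED:
        raise ValueError("Released elements cannot be demoted or deleted")
    needed = RULES.get((kind, target), {})
    if any(states[other] < minimum for other, minimum in needed.items()):
        raise ValueError(f"{kind} cannot reach {target.name}: related elements not mature enough")
    states[kind] = target

states = {"product": State.IN_WORK, "process": State.IN_WORK, "resource": State.IN_WORK}
promote("product", State.RELEASED, states)   # allowed: no precondition on products in this sketch
promote("resource", State.FROZEN, states)
promote("process", State.FROZEN, states)     # ok: product Released, resource Frozen
# promote("resource", State.RELEASED, states) would also succeed now, since the process is Frozen
```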
Practical application
The proposed model for managing the maturity states of an iDMU was implemented and tested in commercial PLM software, within the frame of the ARIADNE project.
The implementation was carried out with the 3DExperience software solution by Dassault Systèmes. The PPR structure in 3DExperience differs slightly from CATIA/DELMIA V5, so the process of building the iDMU is different from those developed in previous projects. A significant difference is that the previous 3-element PPR structure is replaced by a 4-element structure, as represented schematically in Fig. 3:
- Product: it presents the functional zone breakdown in an engineering-oriented organization. It is modelled by Design Engineering to define the functional view for structure and system installation.
- Process: it models the process plan from a functional point of view. It is indeed a product structure composed of a cascade of components identified by part numbers that presents how the product is built and assembled. Thus, product and process elements of the PPR structure are directly correlated.
- System: it defines the work flow of operations. It contains a set of systems/operations that correspond to the steps necessary to correlate with the Process structure, and holds the information necessary to perform tasks such as balancing the assembly lines.
- Resource: it represents the layout design for a manufacturing plant. Resources can be classified as working (e.g. robot, worker, conveyor), non-working (e.g. tool device) or organizational (e.g. station, line). The required resources are attached to operations in the System structure, as shown in Fig. 3; a minimal data-structure sketch of these four interlinked structures is given below.
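A possible way of representing these four structures and their links is sketched below in Python; the class and field names are illustrative assumptions and are not the 3DExperience data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Product:                      # functional view (As-Designed), carries the geometry
    part_number: str
    children: List["Product"] = field(default_factory=list)

@dataclass
class Process:                      # manufacturing-oriented product structure
    part_number: str                # correlated with a Product node
    product: Optional[Product] = None
    children: List["Process"] = field(default_factory=list)

@dataclass
class Resource:                     # working, non-working or organizational resource
    name: str
    category: str                   # e.g. "robot", "tool device", "station"

@dataclass
class Operation:                    # step of the work flow, realizing a process node
    name: str
    realizes: Process
    resources: List[Resource] = field(default_factory=list)

@dataclass
class System:                       # assembly line / station owning the operations
    name: str
    operations: List[Operation] = field(default_factory=list)

# Toy instantiation: one operation assembling one component with one resource.
spar = Product("P-SPAR-001")
proc = Process("P-SPAR-001", product=spar)
line = System("Station-10", [Operation("Install spar", proc, [Resource("Jig-3", "tool device")])])
```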
The adopted PLM software attaches a default lifecycle model to any created object that controls the various transitions in the life of the object. This model includes elements such as user roles, permissions, states and available state changes. To facilitate the collaborative work, 3DExperience also provides a lifecycle model to manage Engineering Changes, which has links to PPR objects, and a transfer ownership functionality that can be used to pass an object along to another authorized user to promote it. Both PPR and Engineering Changes lifecycle models can be customized. These characteristics made 3DExperience an adequate collaborative platform for the purpose of this work. The objectives that a PLM tool must satisfy for managing the maturity states, described in the previous section, were analysed to fit the 4-element PPR structure of the 3DExperience software. Accordingly, the proposed model was redefined as shown in Fig. 4(a). As can be seen, the set of constraints for the System lifecycle is equivalent to the previous set of constraints for the Process lifecycle, while Process elements become the bridge between Products and Systems.
A series of roles has been defined (see Fig. 4(a)) to implement the proposed model of maturity states in 3DExperience, such as the Project Leader (PL) and a different type of user to design each of the PPR structures: a Designer Engineer (DE), a Process Planner (PP), a Manufacturing Engineer (ME) and a Resources Designer (RD). Each system user is responsible for designing and promoting/demoting each node of its structure to the three possible states, as shown in Fig. 4(a). The PL coordinates all maturity state changes: he checks that there are no inconsistencies and gives the other users permission to make the changes.
Designers have several possibilities for building the iDMU using the 3DExperience graphical interface. Briefly, the maturity state is stored as an attribute of each PPR element, so it is accessible from the query tool "Properties". The software also provides the "Team Maturity" utility to display information in the graphical environment about the maturity states. This utility displays a coloured flag in each element of the model tree that indicates its maturity state; however, it applies only to Product and Resource elements, i.e. elements that have associated geometry. Another utility allows displaying graphical information about the related elements of an allocated iDMU element. Both graphical utilities, for maturity states and related elements, were used to search and filter information before changing an object state. To promote or demote the maturity state of an iDMU element, the "Change Maturity" utility presents different fields with the available target states and related information according to the lifecycle model, roles and permissions.
The Airbus A400M empennage (about 34000 parts, see Fig. 4(b)) and its assembly processes were selected to develop the iDMU in 3DExperience. The empennage model developed in CATIA V5 was used as the Product structure. Process, System and Resource structures were modelled from scratch. Different use cases were evaluated by choosing small and more manageable parts of the iDMU to change their maturity states in the collaborative platform. The following is a summary of the implementation process carried out. An example is shown in Fig. 5. At the beginning of the lifecycle, the PL authorized all other system actors to work together in the iDMU at the same time in the collaborative platform (label a in Fig. 5). The main PPR structures were created and scope links were established between them. In this situation, all PPR nodes were In Work while the iDMU was designed in a collaborative and coordinated way.
One of the first state changes in the iDMU is made by the DE when it promotes a component or subproduct to Frozen (b). In this situation, only minor design changes could be made to the frozen component, which will have no impact on the rest of the iDMU (including other components of the product). Demoting the component to an In Work state (c) would indicate that major changes are required as a result of the current design state in other areas of the iDMU. In general, the promotion to Released of every PPR structure will be carried out in an advanced state of the whole iDMU. This means that their design has been considered as stable and that no significant changes will occur that affect other parts of the iDMU.
Maturity state changes in the Process structure are conditioned by the state of related components in the Product structure. Thus, before promoting a Process element (d), the PP must check the status of the related components with the aforementioned 3DExperience utilities to search and analyse the related elements and their states of maturity. If the related product is Frozen/Released, the PP can request authorization from the PL to promote the Process element.
Another of the first state changes of maturity that occurs in the iDMU is that of resources. Thus, the RD promotes a resource to Frozen (e) or demotes it to In Work (f) following the same guidelines as the DE with the products. Instead, the promotion of a resource to Released (g) can only be authorized by the PL when the related assembly system is Frozen, indicating that the assembly line has been designed except for possible minor changes that would not affect the definition of the resources.
The ME is the last actor to promote the state of his work in the iDMU: the design of the assembly system/line. In order to freeze his work (h), the ME needs to know in advance the final design of the product assembly process and also the definition of the necessary resources. Any changes in product or process structures, even if they are minor, could have a relevant impact on the definition of the assembly line. Therefore, the ME must previously verify that related assembly processes are Released and required resources are Frozen. Since resource nodes are linked to the System structure through operation nodes, the ME extensively uses the 3DExperience utilities to trace all affected nodes and check their maturity states. As discussed above, the promotion to Released of all PPR structures occurs in an advanced development of the iDMU, being the last two steps those relating to Resource and System structures.
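The walkthrough above can be summarized as the ordered sequence of promotions below, a Python transcription of labels (b)-(h); the constraint sets follow the reading of Fig. 4(a) given in this section and are simplifications of, not substitutes for, the actual 3DExperience customization.

```python
from enum import IntEnum

class S(IntEnum):
    IN_WORK = 0
    FROZEN = 1
    RELEASED = 2

state = {"product": S.IN_WORK, "process": S.IN_WORK, "system": S.IN_WORK, "resource": S.IN_WORK}

def promote(kind, target, requires):
    """requires: minimum maturity of the related structures, checked under PL authorization."""
    assert all(state[k] >= s for k, s in requires.items()), f"cannot promote {kind} to {target.name}"
    state[kind] = target

promote("product", S.FROZEN, {})                                            # (b) DE freezes the component
promote("product", S.RELEASED, {})                                          # stable product design
promote("resource", S.FROZEN, {})                                           # (e) RD freezes the resource
promote("process", S.FROZEN, {"product": S.FROZEN})                         # (d) PP, product Frozen/Released
promote("process", S.RELEASED, {"product": S.RELEASED})
promote("system", S.FROZEN, {"process": S.RELEASED, "resource": S.FROZEN})  # (h) ME freezes the assembly line
promote("resource", S.RELEASED, {"system": S.FROZEN})                       # (g) only once the system is Frozen
promote("system", S.RELEASED, {"resource": S.RELEASED})
print(state)
```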
Conclusions
This paper presents the methodology and preliminary results for the management in a collaborative environment of the maturity states of PPR elements with all product, process and resource information associated with the assembly of an aeronautical component. The methodology aims to evaluate the suitability of PLM tools to implement the Airbus methodology in the creation of a control mechanism that allows a collaborative work. The proposed model shows in a simple way the importance of the flow of information among the different participants of a unique team to build an iDMU as the unique deliverable in a collaborative platform. An outstanding feature of the lifecycle model is its ability to authorize or restrict the promotion of a product, process or resource element depending on the states of the related elements. Different use cases with coherent and non-coherent situations have been successfully analysed using 3DExperience to implement an iDMU for the Airbus A400M empennage.
In this work, the change management of maturity states has been coordinated by a Project Leader. The next step will be to customize 3DExperience to automate the maturity state changes, so that the system is responsible for evaluating the information of related elements, allowing or preventing the designer from promoting an iDMU element.
Fig. 1. Airbus product lifecycle and milestones development.
Fig. 2. Proposed simple model for the lifecycle of the PPR structure.
Fig. 3. Schema of implementation of Airbus iDMU concept in 3DExperience.
Fig. 4. (a) Extension of proposed model and (b) Airbus A400M empennage.
Fig. 5. An example of implementation of the proposed simple model.
Acknowledgements
The authors wish to thank the Andalusian Regional Government and the Spanish Government for their financial support through the research project "Value Chain: from iDMU to Lean documentation for assembly" (ARIADNE). The work of the master thesis students, Gonzalo Monguió and Andrés Soto, is also greatly acknowledged. |
01764176 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764176/file/462132_1_En_10_Chapter.pdf | Manuel Oliva
email: manuel.oliva@airbus.com
Jesús Racero
Domingo Morales-Palma
Carmelo Del Valle
email: carmelo@us.es
Fernando Mas
email: fernando.mas@airbus.com
Jesus Racero
Carmelo Del Valle
Value Chain: From iDMU to Shopfloor Documentation of Aeronautical Assemblies
Keywords: PLM, iDMU, interoperability, Collaborative Engineering, assembly
Introduction
PLM systems integrate all phases in the product development. The full product lifecycle, from the initial idea to the end-of-life, generates a lot of valuable information related to the product [START_REF] Ameri | Product lifecycle management: closing the knowledge loops[END_REF].
In the aerospace industry, the long lifecycle (about 50 years), the number of parts (over 700,000 on average in a short-range aircraft) and the modifications make the aircraft a highly complex product. Such complexity is drawn both from the complexity of the product and from the amount of resources and multidisciplinary work teams involved.
Multidisciplinary complexity arises during the interaction between functional and industrial designers, which brings inefficiencies in development time, errors, etc. Research studies propose the necessity to evolve from the concurrent way of working to a more efficient one, with the objective to deliver faster, better and cheaper products [START_REF] Pardessus | Concurrent Engineering Development and Practices for aircraft design at Airbus[END_REF], [START_REF] Haas | Concurrent engineering at Airbus -a case study[END_REF], [START_REF] Mas | Concurrent conceptual design of aero-structure assembly lines[END_REF]. One proposal to comply with such a challenge is the Collaborative Engineering concept [START_REF] Lu | A scientific foundation of Collaborative Engineering[END_REF], [START_REF] Morate | Collaborative Engineering Paradigm Applied to the Aerospace Industry[END_REF].
Collaborative Engineering involves a lot of changes in terms of organization, teams, relationships, skills, methods, procedures, standards, processes, tools, and interfaces: it is a business transformation process. The main deliverable of a collaborative team is the iDMU [START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF]. The iDMU concept is the approach defined by Airbus to facilitate the integration of the aircraft development on a common platform throughout all their service life. An iDMU gathers all the product, processes and resources data, both geometrical and technological to model a virtual assembly line. An iDMU provides a single environment, in which the assembly line industrial design is defined and validated.
To bridge the gap between the complexity of product information and the different PLM software tools used to manage it, interoperability has emerged as a must to improve the use of existing data stored in different formats and systems [START_REF] Penciuc | Towards a PLM interoperability for a collaborative design support system[END_REF]. The foundation of interoperability is Model-Based Engineering (MBE), as a starting point for organizing a formal way of communicating and building knowledge [START_REF] Liao | Semantic annotations for semantic interoperability in a product lifecycle management context[END_REF] from data and information.
The development of solutions to facilitate the implementation of both concurrent engineering and Collaborative Engineering in the aerospace industry has been the objective of several projects since the end of the 1990s. Two of the most relevant ones are the European projects ENHANCE [START_REF] Braudel | Overall Presentation of the ENHANCE Project[END_REF], [START_REF]VIVACE Project[END_REF] and VIVACE [START_REF] Van | Engineering Data Management for extended enterprise[END_REF].
In the last decade, different research projects have been conducted to achieve a complete integration of the iDMU and all the elements in the different stages of the life cycle (from design to manufacturing). The CALIPSOneo project [START_REF] Mas | PLM based approach to the industrialization of aeronautical assemblies[END_REF] was launched by Airbus to promote Collaborative Engineering. It implements the iDMU as a way to help the functional and the industrial designs evolve jointly and collaboratively. The project synchronizes, integrates and configures different software applications that promote the harmonization of a common set of PLM and CAD tools.
The EOLO project (Factories of the Future: Industrial Development) was developed as an initiative to achieve a better integration between the information created in the industrialization phases and the information created in the operation and maintenance phases.
The ARIADNE project emerges as an evolution of both the CALIPSOneo and EOLO projects, and incorporates the integrated management of the iDMU life cycle (product, processes and resources), Collaborative Engineering and vendor-independent interoperability between software systems. These characteristics will provide an improvement of data integration, knowledge base and quality of the final product. ARIADNE is organized in three work packages (Figure 1):
- MINOS. MINOS analyses the interoperability between CATIA v5, the PLM platform currently running in most of the aerospace companies, and 3DExperience. An analysis of the 3DExperience platform has been performed in ARIADNE with the objective of checking the main functionalities needed for industrial design that represent an improvement over CATIA v5. It is not an exhaustive analysis of all functionalities of 3DExperience, but a study of the characteristics provided by 3DExperience that cover the main requirements of manufacturing engineering activities for the industrialization of an aerospace assembly product.
- HELIOS (New shopfloor assembly documentation models). HELIOS proposes research on a solution to extract information from an iDMU independently of the software provider. The conceptual solution is based on developing the models and transformations needed to explode the iDMU for any other external system. Currently, any system that needs to exploit the iDMU would have to develop its own interfaces; if the iDMU is migrated to a different PLM, those interfaces must be changed as well. To avoid these inefficiencies and to be independent from any existing PLM, HELIOS will generate standardized software code (EXPRESS-i) that any external system can use to communicate with and obtain the required information from the iDMU.
- ORION (Laser authoring shop floor documentation). ORION aims to develop a system to exploit the assembly process information contained in the iDMU with Augmented Reality (AR) techniques using laser projection technology. This system will get any data from the iDMU needed for the assembly, verification or maintenance process. ORION is based on the SAMBAlaser project [START_REF] Serván | Augmented Reality Using Laser Projection for the Airbus A400M Wing Assembly[END_REF], an 'AR by laser' technology developed by Airbus. ORION will analyze new ways for laser programming besides numerical control and will provide a 3D simulation tool. It will also propose a data model to integrate the iDMU with the AR laser system and to facilitate laser programming and execution.
ARIADNE project functional architecture
The ARIADNE architecture is a consequence of the conclusions and the proposed future work of the CALIPSOneo project in 2013. The CALIPSOneo architecture for a collaborative environment was CATIA v5 in conjunction with DPE (DELMIA Process Engineering) to hold the process definition in a database (also called Manufacturing Hub by Dassault Systèmes). The CALIPSOneo architecture, although still in production in Airbus and available in the market, is not an architecture ready to support the requirements of Industry 4.0 and is out of step with today's technology in terms of connectivity and communication.
To develop MINOS, the tool chosen to support it was 3DExperience, a natural evolution of CATIA v5. The data used in MINOS, the Airbus military transport aircraft A400M empennage shown in Figure 3a, are in CATIA v5 format. To keep the 3DExperience infrastructure simple, and thanks to the relatively low volume of data of the A400M empennage, a single virtual machine with all the required servers was deployed for the project.
For the interoperability between CATIA v5 and 3DExperience, the CATIA v5 input data are stored in file-based folders containing the geometry in CATPart files and the product structure in CATProduct files, as shown in Figure 5. FBDI (File Based Data Import) is the process provided by Dassault Systèmes that reads and/or imports information (geometry and product structure) into 3DExperience. The option 'Import as Native', selected in FBDI, reads the CATIA v5 data as a reference, meaning that a 3D representation is created in 3DExperience as in CATIA v5 but cannot be modified. Resources and assembly processes will be designed in 3DExperience based on the previously imported product (in CATIA v5). For the interoperability analysis, the wing tip of the Airbus C295 (a medium-range military transport aircraft) was chosen.
Developments in HELIOS and ORION will be based also on CATIA v5 data availability.
ARIADNE intends to use only off-the-shelf functionalities offered natively by 3DExperience, with no additional development.
Implementation and results
Collaborative Engineering, interoperability and iDMU exploitation are the targets in the different work packages of ARIADNE. The implementation and results are described in this section.
Collaborative Engineering
Collaborative Engineering requires an integrated 3D environment where functional and industrial engineers can work together, influencing each other. The main driver for the Collaborative Engineering method is the construction of the iDMU. ARIADNE focuses on the collaboration between the functional design and industrial design teams. ARIADNE will check whether 3DExperience provides such an environment to build the iDMU, where Collaborative Engineering can be accomplished.
To analyze the 3DExperience collaborative environment, a few use cases were defined and tested with the Airbus A400M empennage product represented in Figure 2a.
One of the bases to integrate the information in a PLM is to be able to hold the different ways or views (As-Design, As-Planned, As-Prepared) [START_REF] Mas | Proposal for the conceptual design of aeronautical final assembly lines based on the Industrial Digital Mock-Up concept[END_REF], shown in Figure 2b, for defining the product in Airbus. Keeping these views connected is basic to Collaborative Engineering [START_REF] Mas | iDMU as the Collaborative Engineering engine: Research experiences in Airbus[END_REF]. In the work performed it was possible to build the As-Design view. Then, the As-Planned view was built from the As-Designed view while sharing the same 3D geometry for each of the structures. This is represented in Figure 3b. A set of additional functionalities was verified on the As-Planned view, such as the possibility of navigating through the structure as in the As-Design view, or filtering product nodes in the product tree. Reconciliation in 3DExperience has proven to be an important functionality to assure a full connection between the As-Design and As-Planned views. The third structure created in ARIADNE, which is the main one used for the industrialization of a product, is the As-Prepared. This structure is also a product structure, rearranged as a result of the different assembly processes needed to build the product. The As-Prepared tree organization shown in Figure 4a is a consequence of the network of assembly processes. To build such a network, precedence between assembly processes and operations must be assigned, as in Figure 4b. Tools like the Gantt representation in Figure 4c also help deciding the precedence based on constraints (resources and times). The functionality for balancing constraints offered by 3DExperience is too basic for the complexity of Airbus products, and an optimization tool is not offered by 3DExperience. Additional development would be needed to cover these last two functionalities [START_REF] Rios | A review of the A400m final assembly line balancing methodology[END_REF].
The iDMU was built by assigning products and resources to each operation, together with the precedence. With such information in the iDMU, the design-in-context use case was performed. The design of an assembly process or a resource requires the representation of the product and of the industrial environment resulting from the operations previously performed. It was possible to calculate and represent this context in 3DExperience.
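The design-in-context computation can be illustrated with the small sketch below: given the precedence network and the product/resource allocations of each operation, the context of an operation is the union of everything put in place by its predecessors. The operation names and allocations are invented for the example; this is not the 3DExperience implementation.

```python
# operation -> operations that must be completed before it starts
precedence = {
    "install_spar": [],
    "position_skin": ["install_spar"],
    "rivet_skin": ["position_skin"],
}
products  = {"install_spar": {"spar"}, "position_skin": {"upper skin"}, "rivet_skin": {"rivets"}}
resources = {"install_spar": {"assembly jig"}, "position_skin": {"crane"}, "rivet_skin": {"riveter"}}

def context(operation):
    """Products and resources already in place when `operation` starts."""
    done, stack = set(), list(precedence[operation])
    while stack:
        op = stack.pop()
        if op not in done:
            done.add(op)
            stack.extend(precedence[op])
    ctx_products  = set().union(*(products[o] for o in done)) if done else set()
    ctx_resources = set().union(*(resources[o] for o in done)) if done else set()
    return ctx_products, ctx_resources

print(context("rivet_skin"))   # ({'spar', 'upper skin'}, {'assembly jig', 'crane'})
```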
The reconciliation between As-Planned and As-Prepared was tested to make sure that every product was assigned to a process. This functionality is also shown in the process tree structure with color-coded nodes. ARIADNE analyzed the capabilities available to check how functional designers and industrial designers could carry out their activities influencing each other. For this, a mechanism to follow the evolution of the maturity states of the product, processes and resources was proposed [START_REF] Morales-Palma | Managing maturity states in a collaborative platform for the iDMU of aeronautical assembly lines[END_REF]. This mechanism is intended to foster the interaction between both design areas.
3.3 Interoperability CATIA v5 and 3DExperience
Recently developed aircraft at Airbus (A380, A350 and A400M) have been designed in CATIA v5. Migrating the complete product design of an aircraft requires a high effort in resources and cost. Finding a solution where the product design can be kept in CATIA v5 while the downstream lifecycle activities use a more adequate environment becomes a target for the MINOS work package.
MINOS analyzed the degree of interoperability between 3DExperience and CATIA v5. Interoperability in this use case is understood as the set of characteristics required to develop the industrialization activities performed by manufacturing engineering in 3DExperience without affecting the product design activities (functional design) performed by the design office in CATIA v5. Initially, the product design (product structure and 3D geometry) in CATIA v5 was read into 3DExperience (step 1 in Figure 5). Checking the result of this operation in 3DExperience demonstrated a successful import of the product structure and the 3D geometry. Then, a modification was introduced in the CATIA v5 product (step 2 in Figure 5). The FBDI process detected this change of the product and propagated it to 3DExperience (step 3 in Figure 5), and 3DExperience issued a warning to update the product structure with the modified product. An impact analysis of the processes and resources related to the modified product was performed based on functionalities provided by 3DExperience.
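As a purely illustrative aside, the kind of change detection exercised in steps 2-3 can be sketched as below for a file-based staging folder. This is a generic hash comparison, it has nothing to do with the internal FBDI mechanism, and the folder layout is an assumption.

```python
import hashlib
from pathlib import Path

def snapshot(folder):
    """Fingerprint every CATIA v5 file found in a (hypothetical) staging folder."""
    files = [p for ext in ("*.CATPart", "*.CATProduct") for p in Path(folder).glob(ext)]
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest() for p in files}

def changed(before, after):
    """Files added or modified since the previous import, candidates for impact analysis."""
    return sorted(name for name, digest in after.items() if before.get(name) != digest)

# before = snapshot("staging/")                 # taken at the first import (step 1)
# ... the design office modifies the product in CATIA v5 (step 2) ...
# print(changed(before, snapshot("staging/")))  # what must be re-imported / re-checked (step 3)
```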
3.4 Interoperability and iDMU exploitation
Due to the increasing added value that the iDMU provides, it becomes an important asset for a company. Once assembly processes are designed and stored in the iDMU, the information that production lines need to perform their tasks can be extracted with an automatic application system.
As the current production environment in Airbus is CATIA v5, extracting information from the iDMU is constrained to such an environment. HELIOS has developed an interoperable framework based on a set of transformations to exploit the iDMU independently from the PLM vendor, and STEP is the tool selected. The use case HELIOS is based on is the ORION UML (Unified Modeling Language) model. The ORION UML model is transformed (UML2EXPRESS) into a schema defined in a standard language such as EXPRESS [START_REF]Industrial automation systems and integration -Product data representation and exchange[END_REF]. The schema is the input to any PLM (CATIA v5 or 3DExperience) to extract the information from the iDMU with a second set of transformations (PLM2EXPRESS). This last transformation generates the instantiated code (EXPRESS-i) with the required information. This standardized code will be the same input to the different laser vendors.
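The idea of generating an instantiated, vendor-neutral output from the extracted iDMU data can be sketched as follows. The entity names and the textual format below are invented for illustration and only loosely mimic a STEP Part 21 style; they are not the actual EXPRESS schema or EXPRESS-i code produced by HELIOS.

```python
def to_neutral_instances(operations):
    """Serialize extracted operation data into a flat, numbered instance listing."""
    lines, ref = [], {}
    for i, op in enumerate(operations, start=1):
        ref[op["name"]] = f"#{i}"
        lines.append(f"#{i}=ASSEMBLY_OPERATION('{op['name']}','{op['instruction']}');")
    n = len(lines)
    for op in operations:
        for pred in op.get("precedents", []):
            n += 1
            lines.append(f"#{n}=PRECEDENCE({ref[pred]},{ref[op['name']]});")
    return "\n".join(lines)

ops = [
    {"name": "OP10", "instruction": "Project drilling pattern", "precedents": []},
    {"name": "OP20", "instruction": "Project rivet locations", "precedents": ["OP10"]},
]
print(to_neutral_instances(ops))
# #1=ASSEMBLY_OPERATION('OP10','Project drilling pattern');
# #2=ASSEMBLY_OPERATION('OP20','Project rivet locations');
# #3=PRECEDENCE(#1,#2);
```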
Currently, in Airbus, the SAMBAlaser [START_REF] Serván | Augmented Reality Using Laser Projection for the Airbus A400M Wing Assembly[END_REF] is in production for projection of work instructions. To enhance the SAMBAlaser functionalities, ORION work package has developed an integrated user interface with the laser system control, optimized the quantity of information to project without flickering and built a simulation tool to check the capabilities of projecting within an industrial environment without occlusion.
Conclusions
The main conclusion is the successful proof of concept of the existing PLM technology in an industrial environment.
As mentioned, the first test of interoperability between CATIA v5 and 3DExperience was successful. As a preliminary conclusion, it would be possible for industrialization engineers to work in a more advanced environment, 3DExperience, while functional designers keep working in CATIA v5. Additional in-depth use cases (annotations, kinematics, and tolerances) need to be performed to check the degree of interoperability.
The introduction of HELIOS as the framework that 'separates' or makes any iDMU exploitation system independent of the PLM that supports it is an important step for interoperability between different PLM systems and vendor independence, and it also reinforces the need for a model-based definition of the iDMU. Thus, once 3DExperience becomes the production environment in Airbus, ORION will not need to be modified. HELIOS will be able to support any other iDMU exploitation system just by expanding the UML model.
The three interconnected views (As-Design, As-Planned, As-Prepared), together with the capability of creating a network of processes and operations, have proven sufficient to build an iDMU that supports Collaborative Engineering and facilitates the interaction between functional and industrial engineers. 3DExperience has been shown to provide an interoperable, collaborative 3D PLM environment for the industrialization of aeronautical assemblies. However, an enterprise organizational model must be put in place to bring together functional and industrial engineering as one team with the iDMU as the unique deliverable.
Since ARIADNE is a proof of concept, no direct estimates on cost, time or other benefits are measured. However, based on previous experiences, significant benefits (time, costs, and reduction of errors) are expected after the deployment phase.
5 Future work
The current status of the ARIADNE project suggests some improvements and future work after the proof of concept of the technology. The ARIADNE project has tested some basic 3DExperience capabilities. The need to explore the 3DExperience capabilities to support the industrialization of an aircraft requires launching additional industrial use cases to cover industrialization activities. The ARIADNE project avoided developing IT interfaces; connections and interfaces to other tools that provide solutions not fully covered by 3DExperience, such as line or station balancing and optimization, might need to be analyzed. The ARIADNE objective was not to test computing performance. Performing stress tests with high volumes of data (metadata, 3D geometry) is another important point to study, mainly for the aerospace industry.
Figure 1. ARIADNE project organization.
Figure 2. a) Empennage A400M. b) Airbus product views.
Figure 3. a) As-Design view. b) As-Planned view.
Figure 4. a) As-Prepared. b) Precedence between operations. c) Operations Gantt chart.
Figure 5. Interoperability CATIA v5 (functional design) and 3DExperience (industrial design).
Acknowledgments
The authors wish to thank Andres Soto, Gonzalo Monguió and Andres Padillo for their contributions. ARIADNE is partially funded by CTA (Corporación Tecnológica Andaluza) with support from the Regional and National Government. |
01764177 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764177/file/462132_1_En_44_Chapter.pdf | Matteo Rucco
email: ruccomatteo@gmail.com
Katia Lupinetti
email: katia.lupinetti@ge.imati.cnr.it
Franca Giannini
email: giannini@ge.imati.cnr.it
Marina Monti
email: monti@ge.imati.cnr.it
Jean-Philippe Pernot
email: jean-philippe.pernot@ensam.eu
J.-P Pernot
CAD Assembly Retrieval and Browsing
Keywords: Assembly retrieval, shape matching, information visualization
Introduction
The widespread use of CAD (Computer Aided Design) and CAM (Computer Aided Manufacturing) systems in industry has generated a number of 3D databases making available a large number of 3D digital models. The reuse of these models, either single parts or assemblies, and the exploitation of the knowledge associated with them are becoming an important way to facilitate new designs. To track and organize data related to a product and its lifecycle, modern CAD systems are integrated into PDM (Product Data Management) and PLM (Product Lifecycle Management) systems. Among others, the associated data usually involve the technical specifications of the product, provisions for its manufacturing and assembling, the types of materials used for its production, costs and versioning. These systems efficiently manage searches based on textual metadata, which may not be sufficient to effectively retrieve the searched data. Indeed, standard parts, text-based annotations and naming conventions are company- or operator-specific, thus difficult to generalize as search keys. To overcome these limitations, content-based algorithms for 3D model retrieval are being developed based on shape characteristics. A wide literature is available and some commercial systems provide shape-based model retrieval. [START_REF] Biasotti | Retrieval and clas-sification methods for textured 3D models: a comparative study[END_REF][START_REF] Cardone | A survey of shape similarity assessment algorithms for product design and manufacturing applications[END_REF][START_REF] Iyer | Three-dimensional shape searching: state-of-the-art review and future trends In[END_REF] provide an overview of the 3D shape descriptors most used in the CAD domain. However, these descriptors focus solely on the shape of a single component, which is not adapted to more complex products obtained as assemblies. An effective assembly search cannot be limited to a simple shape comparison among components, but also requires information that is not always explicitly encoded in the CAD models, e.g. the relationships and the joint constraints between assembly components.
In this paper, we present methods for the retrieval of globally and/or partially similar assembly models according to different user-specified search criteria [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF] and for the inspection of the provided results. The proposed approach creates and exploits an assembly descriptor, called Enriched Assembly Model (EAM), organized in several layers that enable multi-level queries and described in section 4.1. The rest of the paper is organized as follows. Section 2 provides an overview of related works. Issues related to assembly retrieval are described in Section 3, while Section 4 presents the assembly descriptor and the comparison procedure. Section 5 reports some of the obtained results, focusing on the developed inspection capabilities. Section 6 concludes the paper discussing on current limits and future work.
Related works
Shape retrieval has been widely investigated in recent years [START_REF] Biasotti | Retrieval and clas-sification methods for textured 3D models: a comparative study[END_REF][START_REF] Cardone | A survey of shape similarity assessment algorithms for product design and manufacturing applications[END_REF][START_REF] Iyer | Three-dimensional shape searching: state-of-the-art review and future trends In[END_REF][START_REF] Tangelder | A survey of content based 3D shape retrieval methods[END_REF]. However, most of the works in the literature deal with the shape of a single component and do not consider other relevant information of the assembly, such as the relationships between the parts. One of the pioneering works dealing with assembly retrieval was presented by Deshmukh et al. [START_REF] Deshmukh | Content-based assembly search: A step towards assembly reuse In[END_REF]. They investigated the possible usage scenarios for assembly retrieval and proposed a flexible retrieval system exploiting the explicit assembly data stored in a commercial CAD system. Hu et al. [START_REF] Hu | Relaxed lightweight assembly retrieval using vector space model In[END_REF] propose a tool to retrieve assemblies represented as vectors of watertight polygon meshes. Identical parts are merged and a weight based on the number of occurrences is attached to each part in the vector. Relative positions of parts and constraints are ignored, thus the method is weak in local matching. Miura and Kanai [START_REF] Miura | 3D Shape retrieval considering assembly structure In[END_REF] extend their assembly model by including structural information and other useful data, e.g. contact and interference stages and geometric constraints. However, it does not consider high-level information, such as kinematic pairs, and some information must be made explicit by the user. A more complete system is proposed by Chen et al. [START_REF] Chen | A flexible assembly retrieval approach for model reuse In[END_REF]. It relies on the product structure and the relationships between the different parts of the assembly. The adopted assembly descriptor considers different information levels including the topological structure, the relationships between the components of the assembly, as well as the geometric information. Thus, the provided search is very flexible, accepting rough and incomplete queries. However, most of these works require user support for the insertion of the required information and weakly support the analysis and browsing of the obtained results, which for large assemblies can be very critical. To overcome these limitations, in this paper we present an assembly descriptor (i.e. the Enriched Assembly Model), which can support user requests based on different search criteria not restrained to the identification of assembly models with the same structure in terms of sub-assemblies, and tools for facilitating the inspection and browsing of the results of the retrieval process.
Assembly retrieval issues
Retrieving similar CAD assembly models can support various activities ranging from the re-use of the associated knowledge, such as production or assembly costs and operations, to part standardization and maintenance planning. For instance, knowing that a specific subassembly, which includes parts having a high consumption rate due to their part surrounding and usage, is present in various larger products may help in defining more appropriate maintenance actions and better planning of the warehouse stocks. Similarly, knowing that different products having problems share similar configurations can help in detecting critical configurations. Considering these scenarios, it is clear that simply looking for products (i.e. assemblies) that are completely similar to a given one is important but limited. It is therefore necessary to have the possibility to detect if an assembly is contained into another as well as local similarities among assemblies, i.e. assemblies that contain similar sub-assemblies. These relations can be described using the set theory. Being ≅ the symbol indicating the similarity according to given criteria, given two assemblies A and B, we say that:
A is globally similar to B iff for each component a_i ∈ A, ∃ b_h ∈ B s.t. a_i ≅ b_h, and for each relation (a_i, a_j) ∈ A, ∃ (b_h, b_k) ∈ B s.t. (a_i, a_j) ≅ (b_h, b_k), and vice versa.
Depending on the retrieval purpose, not only do the criteria change, but the interest in the similarity among the parts or in their connections can also have different priority. It is therefore important to provide flexible retrieval tools that can be adapted to the specific need and thus able to consider the various elements characterizing the assembly, regardless of how the assembly was described by the user (e.g. structural organization) or of the information available in the CAD model itself (e.g. explicit mating conditions).
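For completeness, one possible formalization of the three relations (global similarity, containment, local similarity), consistent with the graph-based matching described in Section 4, is sketched below; it is a plausible reading rather than the exact formulation of the original framework.

\begin{align*}
\text{global:} \quad & \exists\, \varphi : A \to B \ \text{bijective s.t. } a_i \cong \varphi(a_i) \ \text{and} \ (a_i,a_j) \cong (\varphi(a_i),\varphi(a_j)) \ \ \forall a_i,\ \forall (a_i,a_j) \in A, \\
\text{containment:} \quad & \exists\, \varphi : A \hookrightarrow B \ \text{injective satisfying the same two conditions}, \\
\text{local:} \quad & \exists\, A' \subseteq A,\ B' \subseteq B,\ A' \neq \emptyset \ \text{s.t. } A' \ \text{is globally similar to} \ B'.
\end{align*}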
In addition, it might be difficult to assess the effective similarity when various elements contribute to it. It is crucial to provide tools for gathering results according to the various criteria and for their inspection. This is very important in the case of large assemblies, where detecting the parts considered similar to a given assembly might be particularly difficult.
4 The proposed approach
Based on the above considerations, we propose a method for the comparison of assembly models exploiting various levels of information of the assembly. Differently from most of the work presented in literature, our method can evaluate all the three types of similarity described above. It uses a multilayer information model, the socalled Enriched Assembly Model (see section 4.1), which stores the data describing the assembly according to three different layers, in turns specified at different level of details thus allowing a refinement of the similarity investigation. Depending on the type of requested similarity, an association graph is build putting in relation the elements of the EAM of two CAD models to be compared. The similar subset of these two models are then corresponding to the maximal clique of the association graph (see section 4.2). To analyze the retrieved results, a visualization tool has been developed; it highlights the correspondences of the parts and provides statistics on the matched elements (see section 4.3).
Enriched Assembly Model (EAM)
The EAM is an attributed graph, where nodes are the components and/or composing sub-assemblies while arcs represent their adjacency relations. It uses four information layers: structure, interface, shape and statistics [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF].
The structural layer encodes the hierarchical assembly structure as specified at the design stage. In this organization, the structure is represented as a tree where the root corresponds to the entire assembly model, the intermediate nodes are associated with the sub-assemblies and the leaves characterize the parts. Attributes to specify parts arrangement (regular patterns of repeated parts) are attached to the entire assembly and to its sub-assemblies [START_REF] Lupinetti | Use of regular patterns of repeated elements in CAD assembly models retrieval In[END_REF]. The organization in sub-assemblies is not always present and may vary according to the designer's objectives.
The interface layer specifies the relationships among the parts in the assembly. It is organized in two levels: contacts and joints. The first contains the faces involved in the contact between two parts and the degree of freedom between them. The joint level describes the potentially multiple motions resulting from several contacts between two parts [START_REF] Lupinetti | Automatic Extraction of Assembly Component Relationships for Assembly Model Retrieval[END_REF].
The shape layer describes the shape of the assembly parts by several dedicated descriptors. Using several shape descriptors helps answering different assembly retrieval scenarios, which can consider different shape characteristics at different levels of detail. They include information like shape volume, bounding surface area, bounding box and spherical harmonics [START_REF] Kazhdan | Rotation invariant spherical harmonic representation of 3 d shape descriptors[END_REF].
The statistics layer contains values that roughly characterize and discern assembly models. Statistics are associated as attributes to the various elements of the EAM. For the entire assembly and for each sub-assembly, they include: the number of sub-assemblies, of principal parts, of fasteners, of patterns of a specific type, and of joints of a specific type. For each node corresponding to a component, the statistics considered are: the percentage of a specific type of surface (i.e. planar, cylindrical, conical, spherical, toroidal, free form) and the number of maximal faces of a specific type of surface. Finally, for each arc corresponding to a joint between parts, the stored statistics include the number of elements in contact for a specific contact type.
The E.A.M. is created using ad hoc developed modules [START_REF] Lupinetti | CAD assembly descriptors for knowledge capitalization and model retrieval[END_REF][START_REF] Lupinetti | Use of regular patterns of repeated elements in CAD assembly models retrieval In[END_REF][START_REF] Lupinetti | Automatic Extraction of Assembly Component Relationships for Assembly Model Retrieval[END_REF], which analyze the content of the STEP (ISO 10303-203 and ISO 10303-214) representation of the assembly and extract the required information.
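As an illustration only, a minimal Python sketch of how such a multilayer attributed graph could be represented is given below; the class and attribute names are assumptions chosen for readability and do not reflect the actual data model of the implemented modules.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ComponentNode:
    """A part (or sub-assembly) of the EAM with its shape and surface statistics."""
    name: str
    shape: Dict[str, float] = field(default_factory=dict)          # e.g. volume, area, bounding box
    surface_stats: Dict[str, float] = field(default_factory=dict)  # % of planar, cylindrical, ... faces

@dataclass
class JointArc:
    """An adjacency relation between two components (interface layer)."""
    source: str
    target: str
    joint_type: str            # e.g. "revolute", "prismatic", "fixed"
    n_contacts: int = 0

@dataclass
class EAM:
    """Enriched Assembly Model: attributed graph plus structural tree and global statistics."""
    nodes: Dict[str, ComponentNode] = field(default_factory=dict)
    arcs: List[JointArc] = field(default_factory=list)
    structure: Dict[str, List[str]] = field(default_factory=dict)  # sub-assembly -> children
    statistics: Dict[str, float] = field(default_factory=dict)     # counts of parts, fasteners, patterns...
```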
EAM comparison
Adopting this representation, if two models are similar, then their attributed graphs must have a common sub-graph. The similarity assessment between two EAMs can then be performed by matching their attributed graphs and finding their maximum common subgraph (MCS). The identification of the MCS is a well-known NP-hard problem and, among the various techniques proposed for its solution [START_REF] Bunke | A comparison of algorithms for maximum common subgraph on randomly connected graphs[END_REF], we chose the detection of the maximal cliques of the association graph, since it also allows identifying local similarities.
The association graph is a support graph that reflects the adopted high-level similarity criteria. Each node in the association graph corresponds to a pair of compatible nodes in the two attributed graphs according to the specified criteria. Associated arcs connect nodes if they have equivalent relations expressed as arcs connecting the corresponding nodes in the attribute graphs.
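The construction of the association graph can be sketched as follows, under simplifying assumptions: `compatible` and `equivalent_relation` stand for the user-selected similarity criteria (e.g. shape and joint type) and are placeholders, not part of the original implementation.

```python
from itertools import combinations

def build_association_graph(components_a, components_b, compatible, equivalent_relation):
    """Nodes of the association graph are pairs (a, b) of compatible components of the
    two models; an arc links two pairs when the relations (a1, a2) and (b1, b2) are
    equivalent in the two models (e.g. same joint type)."""
    nodes = [(a, b) for a in components_a for b in components_b if compatible(a, b)]
    arcs = set()
    for (a1, b1), (a2, b2) in combinations(nodes, 2):
        if a1 == a2 or b1 == b2:
            continue  # the same component cannot appear twice in a matching
        if equivalent_relation((a1, a2), (b1, b2)):
            arcs.add(frozenset({(a1, b1), (a2, b2)}))
    return nodes, arcs
```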
A clique is a sub-graph in which a connecting arc exists for each pair of nodes. For the clique detection we applied the Eppstein-Strash algorithm [START_REF] Eppstein | Listing all maximal cliques in large sparse real-world graphs[END_REF]. This algorithm is an improved version of the algorithm by Tomita [START_REF] Tomita | The worst-case time complexity for generating all maximal cliques and computational experiments[END_REF], which is in turn based on the Bron-Kerbosch algorithm for the detection of all maximal cliques in graphs [START_REF] Bron | Algorithm 457: finding all cliques of an undirected graph[END_REF]. As far as we know, the Eppstein-Strash algorithm is currently the best algorithm for listing all maximal cliques in undirected graphs, even in dense graphs. The performance of the algorithm is in general guaranteed by the degeneracy ordering.
The algorithm of Eppstein-Strash improves Tomita's algorithm by using the concept of degeneracy. The degeneracy of a graph G is the smallest number d such that every subgraph of G(V, E) contains a node of degree at most d. Moreover, every graph with degeneracy d has a degeneracy ordering: a linear ordering of the vertices such that each node has at most d neighbors after it in the ordering. Eppstein-Strash algorithm first computes the degeneracy ordering; then for each node v in the order, starting from the first, the algorithm of Tomita is used to compute all cliques containing v and v's later neighbors. Other improvements depend on the use of adjacency lists for data representation. For more details we refer to [START_REF] Eppstein | Listing all maximal cliques in large sparse real-world graphs[END_REF].
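A compact sketch of this scheme (degeneracy ordering followed by Bron-Kerbosch with pivoting, restricted to the later neighbors of each vertex) is shown below; `adj` is assumed to be a dictionary mapping every node of the association graph to the set of its neighbors.

```python
def degeneracy_ordering(adj):
    """Return the vertices in degeneracy order (repeatedly remove a minimum-degree vertex)."""
    remaining = {v: set(ns) for v, ns in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]
    return order

def bron_kerbosch_pivot(R, P, X, adj, out):
    """Classic Bron-Kerbosch recursion with pivoting; maximal cliques are appended to out."""
    if not P and not X:
        out.append(set(R))
        return
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch_pivot(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

def all_maximal_cliques(adj):
    """Enumerate all maximal cliques, processing vertices in degeneracy order
    as in the Eppstein-Strash variant of the Bron-Kerbosch/Tomita algorithm."""
    order = degeneracy_ordering(adj)
    pos = {v: i for i, v in enumerate(order)}
    cliques = []
    for v in order:
        later = {u for u in adj[v] if pos[u] > pos[v]}
        earlier = {u for u in adj[v] if pos[u] < pos[v]}
        bron_kerbosch_pivot({v}, later, earlier, adj, cliques)
    return cliques
```

Restricting each outer call to the later neighbors of the current vertex is what limits the depth of the recursion by the degeneracy of the graph, which is the key of the performance guarantee mentioned above.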
Among all the maximal cliques present in the association graph, we consider as interesting candidates of the similar sub-graphs only those having: 1) the majority of arcs corresponding to real joints between the corresponding components, 2) a number of nodes bigger than a specified value. In this way, priority is given to sub-graphs which contain a significant number of joined similar components, thus possibly corresponding to sub-assemblies. Then, for each selected clique, a measure vector is computed. The first element of the vector indicates the degree of the clique, while the others report the similarity of the various assembly characteristics taken into consideration for the similarity assessment. Depending on the search objectives, the set of characteristics to consider may change. The default characteristics are the shape of the components and the type of joint between them. The examples and results discussed in the next section consider the default characteristic selection.
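The selection step could then look like the following sketch, where `is_real_joint`, `shape_sim` and `joint_sim` are placeholders for the actual criteria; the measure vector here only contains the default characteristics (clique degree, shape similarity, joint similarity).

```python
from itertools import combinations

def select_candidate_cliques(cliques, is_real_joint, shape_sim, joint_sim, min_size=2):
    """Keep cliques whose arcs mostly correspond to real joints and whose size exceeds
    a threshold, and attach a measure vector to each kept clique. Clique elements are
    pairs (a_i, b_h) of matched components; the three callbacks return scores in [0, 1]."""
    candidates = []
    for clique in cliques:
        if len(clique) < min_size:
            continue
        pairs = list(combinations(clique, 2))
        joined = [pq for pq in pairs if is_real_joint(*pq)]
        if pairs and len(joined) <= len(pairs) / 2:
            continue  # the majority of arcs must correspond to real joints
        measure = [
            len(clique),                                              # clique degree
            sum(shape_sim(a, b) for (a, b) in clique) / len(clique),  # average shape similarity
            (sum(joint_sim(*pq) for pq in joined) / len(joined)) if joined else 0.0,
        ]
        candidates.append((clique, measure))
    return candidates
```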
Result visualisation
The proposed retrieval system has been implemented in a multi-module prototype system. The creation of the EAM description is developed using Microsoft Visual C# 2013 and exploiting the Application Programming Interface (API) of the commercial CAD system SolidWorks. The matching and similarity assessment module is developed in Java and is invoked during the retrieval as a jar file. Finally, to analyze the obtained results, a browser view has been implemented. It consists of multiple dynamic web pages based on HTML5, jQuery, Ajax and PHP. Moreover, MySQL is used as the database system, while the X3D library is used for the STEP model visualization.
The system has been tested on assembly models obtained from on-line repositories [17, 18, 19] and from university students' tests. Figure 2 shows an example of the developed user interface, where the designer can choose an assembly model as query and set the required similarity criteria. In this example, it is required to retrieve models similar for shape and joint. Some results of this query are shown in Fig. 3. The first model in the picture (top-left) coincides with the query model. The retrieved models are gathered together in the other views of Fig. 3. Each retrieved and displayed assembly has a clique that has been detected in the association graph and satisfies the required conditions. The assemblies are visualized in an X3D view that allows rotating, zooming and selecting the various 3D components of the retrieved assembly. Components are visualized in transparency mode to make it possible to see also the internal ones. Under each model, three bars are shown to quickly get an idea of how similar to the query the retrieved assemblies are. The first two indicate the percentage of coverage (i.e. percentage of matched elements) with respect to the query and the target model, respectively. Thanks to these bars, the user can see the type of similarity (i.e. global, partial or local). If the green bar is not complete, it means that just a subset of the query model is matched, thus the similarity is local. The global similarity is shown by the purple bar; if this bar is not complete, then the similarity is partial. The last bar shows the average shape similarity among the components associated with the displayed clique. Simply looking at the reported models and checking the purple bar, the user can notice that (except the first model, which represents the query model) no models are globally similar to the query one according to the criteria he/she has specified. The first model in the second row is partially similar to the query one, since the query is completely included in it (see the green bar). The other models are locally similar to the query model, thus just a subset of the query model is included in them.
If the user wants to further analyze the levels of similarity of the chosen characteristics or to visualize all the subsets of matched parts, he/she can select one of the retrieved assemblies and a new browser page is prompted. Once selected, a new page as in Fig. 4 is available, where the user can get the list of all the interesting cliques, using the sliders at the top of the window. With these sliders, the user can choose some thresholds that the proposed results have to satisfy. In particular, they refer to the dimension of the matched portion, the shape similarity measure and the joint similarity measure. After setting those parameters, the button "Clique finding" can be pressed to get the results displayed in a table, as visible on the left of Fig. 5. The rows of the table gather together all the matching portions that satisfy the required criteria. In this example, four pieces of information are accessible for each matching portion: an identification number, the number of matched parts, the shape similarity and the joint similarity. Selecting one of them, the corresponding clique is visualized within the assembly. It highlights the component correspondence with the query model using the same colors for corresponding components in the two objects, as shown on the right part of Fig. 4. To ease the comparison according to the several available criteria (here just the default ones are reported), a radar chart is used.
It illustrates the shape and joint similarities of the overall assembly in relation to the clique degree. This type of visualization is very useful to compare multiple data and to get a global evaluation at a glance. Moreover, radar charts are convenient to compare two or more models on various features expressed through numerical values. The larger the covered area, the more similar the two assemblies are. In the reported case, the user can observe immediately that the two models are not completely matched, even if they look very similar. This is because the gears in the two models have a significantly different shape, which prevents including those parts among the matched ones, thus decreasing the global level of similarity. On the other hand, the retrieved portion completely satisfies the requests, thus reporting an assessment of 1.
Conclusions
In this paper, methods for the identification and evaluation of similarities between CAD assemblies are presented. While almost all products are made of assembled parts, most of the works present in the literature address the problem of similarity among single parts. For assemblies, the shape of the components is not the only characteristic to be considered. Increasing the number of elements to consider augments, on the one hand, the possibility of adapting the search to specific user needs and, on the other hand, the difficulty of evaluating the results. The method described here can consider all or a subset of the various aspects of the assembly, namely the shape of the components, their arrangements (i.e. patterns), their mating contacts and joints. The evaluation of the retrieved results is supported by exploiting colour variations in the 3D visualisation of the components in correspondence between the compared assemblies. Measures and statistics quantifying the similarity of the overall assemblies and of the matched subparts are reported according to the various considered characteristics.
In future work, we plan to introduce graph databases, such as Neo4j, for speeding up the search for local similarities among big assembly models. We also intend to improve the clique-finding algorithm by allowing it to select automatically the dimension of the biggest clique. Moreover, we intend to introduce a single measure for the overall ranking of the retrieved assemblies similar to a query one. This information will be displayed in ad-hoc infographics, which will be developed to improve the user experience.
Fig. 1 .
1 Fig. 1. Example of different type of similarities
Fig. 2 .
2 Fig. 2. An assembly model and the similarity criteria used for the matching
Fig. 3 .
3 Fig. 3. A sample of the retrieved models for the proposed speed reducer query (top left)
Fig. 4 .
4 Fig. 4. Initial page for investigating the model similarity
Fig. 5 .
5 Fig. 5. Example of matching browsing |
01764180 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764180/file/462132_1_En_24_Chapter.pdf | Widad Es-Soufi
email: widad.es-soufi@ensam.eu
Esma Yahia
email: esma.yahia@ensam.eu
Lionel Roucoules
email: lionel.roucoules@ensam.eu
A Process Mining Based Approach to Support Decision Making
Keywords: Process mining, Decision mining, Process patterns, Decisionmaking, Business process
Currently, organizations tend to reuse their past knowledge to make good decisions quickly and effectively and thus, to improve their business processes performance in terms of time, quality, efficiency, etc. Process mining techniques allow organizations to achieve this objective through process discovery. This paper develops a semi-automated approach that supports decision making by discovering decision rules from the past process executions. It identifies a ranking of the process patterns that satisfy the discovered decision rules and which are the most likely to be executed by a given user in a given context. The approach is applied on a supervision process of the gas network exploitation.
Introduction
A business process is defined as a set of activities that take one or more inputs and produce a valuable output that satisfies the customer [START_REF] Hammer | Reengineering the Corporation: A Manifesto for Business Revolution[END_REF]. In [START_REF] Weske | Business Process Management: Concepts, Languages, Architectures[END_REF], authors define it as a set of activities that are performed in coordination in an organizational and technical environment and provide an output that responds to a business goal. Based on these definitions, the authors of this paper describe the business process as a set of linked activities that have zero or more inputs, one or more resources and create a high added-value output (i.e. product or service) that satisfies the industrial and customer constraints. These linked activities represent the business process flow and are controlled by different process gateways (And, Or, Xor) [START_REF]Business Process Model and Notation (BPMN) Version 2[END_REF]Sec. 8.3.9] that give rise to several patterns (patterns 1 to 9 in Fig. 1), where each one is a linear end-to-end execution. The "And" gateway, also called parallel gateway, means that all the following activities are going to be executed, in one of several possible orders. The "Or" gateway, also called inclusive gateway, means that one or all of the following activities are going to be executed, based on some attribute conditions. The "Xor" gateway, also called exclusive gateway, means that only one following activity among others is going to be executed, based on some attribute conditions.
The presence of gateways in business processes results in making several decisions based on some criteria like experience, preference, or industrial constraints [START_REF] Iino | Decision-Making in Engineering Design: Theory and Practice[END_REF].
Making the right decisions in business processes is tightly related to business success. Indeed, a study that involved more than a thousand companies shows a clear correlation between decision effectiveness and business performance [START_REF] Blenko | Decision Insights: The five steps to better decisions[END_REF]. In [START_REF] Es-Soufi | On the use of Process Mining and Machine Learning to support decision making in systems design[END_REF], authors explain that the process of decision-making can be broken down into two sub-processes: the global and the local decision making. In this research, authors focus on the global decision making and aim at developing a generic approach that assists engineers in managing the business process associated with the life of their products or services. The approach automatically proposes a predicted ranking of the business process patterns that are the most likely to be executed by a given user in a given context. This comes down to exploring these patterns and the decisions that control them in a complex business process, i.e. one where all gateways are present (Fig. 1). Authors assume that this objective can be achieved using process mining techniques.
This paper is organized as follows. In Section 2, a literature review on decision and trace variants mining are discussed. The proposed approach is presented in Section 3 and then illustrated in a case study in Section 4. Finally, the discussion of future work concludes the paper.
Literature Review on Decision and Trace Variants Mining
Process mining is a research field that supports process understanding and improvement, it helps to automatically extract the hidden useful knowledge from the recorded event logs generated by information systems. Three types of applications in process mining are distinguished: discovery, conformance, and enhancement [START_REF] Van Der Aalst | Process mining: overview and opportunities[END_REF]. In this paper, authors focus on the discovery application, namely, the decision mining and the trace variants mining. A brief summary is provided of each.
Decision mining is a data-aware form of the process discovery application since it enriches process models with meaningful data. It aims at capturing the decision rules that control how the business processes flow (e.g. conditions 1, 2, 3, 4 in Fig. 1). In [START_REF] Rozinat | Decision mining in business processes[END_REF], authors define it as the process in which data dependencies, that affect the routing of each activity in the business process, are detected. It analyses the data flow to find the rules that explain the rationale behind selecting an activity among others when the process flow splits [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF].
Fig. 1 legend: process patterns 1-A1A2A3A4A5A6A7; 2-A1A2A3A5A4A6A7; 3-A1A2A3A5A6A4A7; 4-A1A2A5A6A3A4A7; 5-A1A2A5A3A6A4A7; 6-A1A2A5A3A4A6A7; 7-A1A8A9A11; 8-A1A8A10A11; 9-A1A8A9A10A11; 10-A1A8A10A9A11
1 http://www.bpmn.org/
While executing a business process, one may adopt the same logic several times (e.g. always executing pattern 1 in Fig. 1, rather than patterns 2 to 6, if condition 1 is enabled). This results in the existence of similar traces in the recorded event log. Trace variants mining aims at identifying the trace variants and their duplicates (e.g. patterns 1 to 9 in Fig. 1). Each trace variant refers to a process pattern that is a linear end-to-end process execution where only the activities execution order is taken into account [START_REF] Es-Soufi | On the use of Process Mining and Machine Learning to support decision making in systems design[END_REF].
Decision Mining
The starting point of the most common decision mining techniques is a recorded event log (i.e. past executions traces) and its corresponding petri net2 model that describes the concurrency and synchronisation of the traces activities. To automatically generate a petri net model from an event log, different algorithms were proposed. The alpha algorithm, alpha++ algorithm, ILP miner, genetic miner, among others, are presented in [START_REF] Van Dongen | Process mining: Overview and outlook of petri net discovery algorithms[END_REF], and the inductive visual miner that was recently proposed in [START_REF] Leemans | Exploring processes and deviations[END_REF].
Many research works contribute to decision mining development. In [START_REF] Rozinat | Decision mining in business processes[END_REF], authors propose an algorithm, called Decision point analysis, which allows one to detect decision points that depict choice splits within a process. Then for each decision point, an exclusive decision rule (Xor rule) in the form "v op c", where "v" is a variable, "op" is a comparison operator and "c" is a constant, allowing one activity among others to be executed is detected. The decision point analysis is implemented as a plug-in for the ProM3 framework. In [START_REF] Leoni | Discovering branching conditions from business process execution logs[END_REF], authors propose a technique that improves the decision point analysis by allowing one to discover complex decision rules for the Xor gateway, based on invariants discovery, that takes into account more than one variable, i.e. in the form "v1 op c" or "v1 op v2", where v1 and v2 are variables. This technique is implemented as a tool named Branch Miner 4 . In [START_REF] Catalkaya | Enriching business process models with decision rules[END_REF], authors propose a technique that embeds decision rules into process models by transforming the Xor gateway into a rule-based Xor gateway that automatically determines the optimal alternative in terms of performance (cost, time) during runtime. This technique is still not yet implemented. In [START_REF] Bazhenova | Deriving decision models from process models by enhanced decision mining[END_REF], authors propose an approach to derive decision models from process models using enhanced decision mining. The decision rules are discovered using the decision point analysis algorithm [START_REF] Rozinat | Decision mining in business processes[END_REF], and then enhanced by taking into account the predictions of process performance measures (time, risk score) related to different decision outcomes. This approach is not yet implemented. In [START_REF] Dunkl | A method for analyzing time series data in process mining: application and extension of decision point analysis[END_REF], authors propose a method that extends the Decision point analysis [START_REF] Rozinat | Decision mining in business processes[END_REF] which allows only single values to be analysed. The proposed method takes into account time series data (i.e. sequence of data points listed in time order) and allows one to generate complex decision rules with more than one variable. The method is implemented but not publicly shared. In [START_REF] Ghattas | Improving business process decision making based on past experience[END_REF], authors propose a process mining based technique that allows one to identify the most performant process path by mining decision rules based on the relationships between the context (i.e. situation in which the past decisions have taken place), path decisions and process performance (i.e. time, cost, quality). The approach is not yet implemented.
In [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF], authors introduce a technique that takes the process petri net model, the process past executions log and the alignment result (indicating whether the model and the log conform to each other) as inputs, and produces a petri net model with the discovered inclusive/exclusive decision rules. It is implemented as a data flow discovery plug-in for the ProM framework. Another variant of this plug-in that needs only the event log and the related petri net as inputs is implemented as well. In [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF], authors propose a technique that aims at discovering inclusive/exclusive decision rules even if they overlap due to incomplete process execution data. This technique is implemented in the multi-perspective explorer plug-in [START_REF] Mannhardt | The Multi-perspective Process Explorer[END_REF] of the ProM framework. In [START_REF] Sarno | Decision mining for multi choice workflow patterns[END_REF], authors propose an approach to explore inclusive decision rules using the Decision point analysis [START_REF] Rozinat | Decision mining in business processes[END_REF]. The approach consists in manually modifying the petri net model by transforming the "Or" gateway into an "And" gateway followed by a "Xor" gateway in each of its outgoing arcs.
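To give a concrete flavour of what these techniques compute, the following sketch shows how a branching rule of the form "v op c" could be learned at a single decision point with a decision tree classifier, which is the core idea of classical decision point analysis; the attribute names and values are purely illustrative and are not taken from the cited tools.

```python
# pip install scikit-learn pandas
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per traversal of the decision point: case attributes + the branch that was taken.
log = pd.DataFrame({
    "pressure": [25, 18, 30, 21, 17, 27],
    "season_fall": [0, 1, 0, 1, 1, 0],               # categorical attribute, one-hot encoded
    "branch": ["A2", "A8", "A2", "A8", "A8", "A2"],  # activity chosen after the split
})

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(log[["pressure", "season_fall"]], log["branch"])

# The learned thresholds can be read back as decision rules such as "pressure <= 22".
print(export_text(clf, feature_names=["pressure", "season_fall"]))
```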
Trace Variants Mining
Different researches were interested in trace variants mining. In [START_REF] Song | Trace Clustering in Process Mining[END_REF], authors propose an approach based on trace clustering, that groups the similar traces into homogeneous subsets based on several perspectives. In [START_REF] Bose | Abstractions in process mining: A taxonomy of patterns[END_REF], authors propose a Pattern abstraction plug-in, developed in ProM, that allows one to explore the common low-level patterns of execution, in an event log. These low-level patterns can be merged to generate the process most frequent patterns which can be exported in one single CSV file. The Explore Event Log (Trace Variants/Searchable/Sortable) visualizer 5 , developed in ProM, sorts the different trace variants as well as the number and names of duplicate traces. These variants can be exported in separate CSV files, where each file contains the trace variant, i.e. process pattern, as well as the related duplicate traces.
Discussion
In this paper, authors attempt to discover the decision rules related to both exclusive (Xor) and inclusive (Or) gateways, as well as the different activities execution order. Regarding decision mining, the algorithm that generates the petri net model should be selected first. Authors reject the algorithms presented in [START_REF] Van Dongen | Process mining: Overview and outlook of petri net discovery algorithms[END_REF] and select the inductive visual miner [START_REF] Leemans | Exploring processes and deviations[END_REF] as the petri net model generator. Indeed, experience has shown that only the inductive visual miner allows the inclusive gateways to be identified by the decision mining algorithm. This latter should afterward be selected.
The research works presented in [START_REF] Rozinat | Decision mining in business processes[END_REF], [START_REF] Leoni | Discovering branching conditions from business process execution logs[END_REF]- [START_REF] Ghattas | Improving business process decision making based on past experience[END_REF] attempt to discover exclusive decision rules considering only the exclusive (Xor) gateway. The work presented in [START_REF] Sarno | Decision mining for multi choice workflow patterns[END_REF] considers the inclusive and exclusive decision rules discovery, but the technique needs a manual modification of the petri net model which is not practical when dealing with complex processes. Therefore, authors assume that these works are not relevant for the proposition and consider the works presented in [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF] and [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF] which allow the discovery of both inclusive and exclusive decision rules. Moreover, authors assume that the data flow discovery plug-in [START_REF] De Leoni | Data-aware Process Mining: Discovering Decisions in Processes Using Alignments[END_REF] is more relevant since the experience has shown that the other one [START_REF] Mannhardt | Decision mining revisited-discovering overlapping rules[END_REF] could not correctly explore the decision rule related to the variables whose values do not frequently change in the event log.
Regarding trace variants mining, authors do not consider the approach presented in [START_REF] Song | Trace Clustering in Process Mining[END_REF] as relevant for the proposition since the objective is to discover the patterns that are exactly similar, i.e. patterns with the same activities that are performed in the same order. The work presented in [START_REF] Bose | Abstractions in process mining: A taxonomy of patterns[END_REF] and the Explore Event Log visualizer are considered as relevant for the proposition. Since none of the proposed techniques allow one to export a CSV file that contains only the trace variants and their frequency, authors assume that exploring trace variants using the Explore Event Log visualizer is more relevant because the discovered patterns can be exported in separate CSV files, which facilitates the postprocessing that needs to be made.
Decision and Trace Variants Mining Based Approach
The approach presented in Fig. 2 is the global workflow of the proposal and enables the achievement of the current research objective through seven steps. The first step of the approach concerns the construction of the event log from the past process executions. These latter represent the process traces generated with respect to the trace metamodel depicted in [START_REF] Roucoules | Engineering design memory for design rationale and change management toward innovation[END_REF][START_REF] Es-Soufi | Collaborative Design and Supervision Processes Meta-Model for Rationale Capitalization[END_REF] and expressed in XMI (XML Metadata Interchange) format. These traces should be automatically merged into a single XES 7(eXtensible Event Stream) event log in order to be processed in ProM, the framework in which the selected decision mining technique is developed. This automatic merge is implemented using ATL 8 (Atlas Transformation Language).
The second step concerns the generation of the petri net model from the event log. To this end, the inductive visual miner is used. Having both the event log and its corresponding petri net model, the decision mining practically starts using the data flow discovery plug-in as discussed in Section 2.
The third step aims at deriving the decision rules related to all the variables in the event log and exporting them in a single PNML 9 (Petri Net Markup Language) file. PNML is an XML-based standardized interchange format for Petri nets that allows the decision rules to be expressed as guards; this means that the transition from a place (i.e. activity) to another can fire only if the guard, and thus the rule, evaluates to true. For instance, condition 1 in Fig. 1 is the decision rule that enables the transition from A1 to A2. Experience has shown that when all the variables in the event log are considered in the decision mining, some decision rules related to some of these variables may not be derived as expected; the origin of this problem is not yet clear. Therefore, to avoid this situation and to be sure to obtain correct decision rules, authors propose to execute the data flow discovery plug-in for each variable, which results in as many decision rules as variables (step 3 in Fig. 2).
The PNML files, that are generated in step 3, should be automatically merged into one single PNML file that contains the complete decision rules, i.e. related to all the event log's variables (step 4 in Fig. 2). This automatic merge is implemented using the Java programming language.
In parallel with decision mining (finding the Or and Xor rules), the trace variants mining can be performed in order to find the end-to-end processes (e.g. patterns 1 to 9 in Fig. 1). The Explore Event Log visualizer, as discussed in Section 2, is used to explore patterns in an event log. The detected patterns are then exported in CSV files, where each file contains one pattern and its duplicates (step 1' of Fig. 2). To fit our objective, the pattern files need to be automatically post-processed. This consists in computing the occurrence frequency of each pattern, removing its duplicates and then creating a file that contains a ranking of the different non-duplicate patterns based on their occurrence frequency (step 2' in Fig. 2). This post-processing is implemented using the Java programming language.
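A possible Python equivalent of this post-processing is sketched below (the authors implemented it in Java); the file layout, i.e. one duplicate trace per CSV row written as a sequence of activity names, is an assumption.

```python
import csv
import glob
from collections import Counter

def rank_patterns(pattern_dir, output_file):
    """Count duplicate traces across the exported CSV files and rank the distinct
    patterns by occurrence frequency into a single ranking file."""
    counts = Counter()
    for path in glob.glob(f"{pattern_dir}/*.csv"):
        with open(path, newline="") as f:
            for row in csv.reader(f):
                counts[tuple(row)] += 1          # one row = one executed trace
    with open(output_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["rank", "frequency", "pattern"])
        for rank, (pattern, freq) in enumerate(counts.most_common(), start=1):
            writer.writerow([rank, freq, ";".join(pattern)])
```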
During a new process execution, the ranked patterns file is automatically filtered to fit both the discovered decision rules and the user's context (user's name, date, process type, etc.). In other words, the patterns that do not satisfy the decision rules and those that are, for example, performed by another user than the one that is currently performing the process are removed. As a result, a ranking of suggestions (i.e. patterns that are the most suitable for the current user's context) are proposed to the user (step 5 in Fig. 2). The selected pattern is, then, captured and stored in order to enrich the event log.
4
Case Study: Supervision of Gas Network Exploitation
Systems supervision is a decision-based activity carried out by a supervisor to survey the progress of an industrial process. It is a business process that produces an action, depending on both the supervision result and the set-point (i.e. target value for the supervised system), that resolves systems malfunction. The authors of this paper present a supervision case study where the supervisor of an industrial process should take, in the shortest time, the right decision in case an alarm is received. The challenge here is to provide this supervisor with a ranking of the process patterns that are the most likely to be executed in his context. The proposed approach is verified under a specific supervision process related to gas network exploitation.
The process starts by receiving the malfunction alarm. The Chief Operating Officer (COO) has, then, to choose the process that best resolves the problem in this context. This latter can be described by the field sensors values, season, supervisor's name, etc. The first step of the proposed approach is to transform the already captured sixty traces of this supervision process into a single XES event log (step 1 in Fig. 2) and then generate its corresponding petri net model (step 2 in Fig. 2). Then, from the event log and the petri net model, generate the decision rules for each variable and export them in PNML files (step 3 in Fig. 2). In this process, the decision variables are: Pressure, season, network status, flow rate, human resource (the decision rule related to the pressure variable is depicted in Fig. 3). These PNML files are then merged into one single PNML file that contains the complete decision rules related to all the decision variables (step 4 in Fig. 2).
Fig. 3. Discovered decision rules for the pressure variable
In this process, based on both pressure value and season, the COO decides whether to send an emergency or a maintenance technician. If the emergency technician is sent (i.e. the decision rule: ((pressure>22millibars)and(season≠fall)) evaluates to true), he has then to decide which action should be performed based on the measured flow rate. If the decision rule ((pressure≤22millibars)and(season=fall)) evaluates to true, then the maintenance technician is sent. Moreover, if the rule (pressure<19millibars) evaluates to true, then in addition to sending the maintenance technician, the supervisor should extend the time scale then share it and write the problem then share it. In this last case, the inclusive logic is transformed into a parallel logic, and thus the activities may be executed in different possible orders.
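For illustration only, the rules reported above can be encoded as a small routing function; the threshold values are those given in the text, while the function name, the returned activity labels and the handling of cases not covered by the reported rules are assumptions.

```python
def route_alarm(pressure_mbar, season):
    """Return the activities implied by the discovered decision rules of the case study."""
    if pressure_mbar > 22 and season != "fall":
        return ["send emergency technician"]  # the measured flow rate then drives the next choice
    actions = ["send maintenance technician"] if (pressure_mbar <= 22 and season == "fall") else []
    if pressure_mbar < 19:
        # inclusive branch turned parallel: these activities may be executed in any order
        actions += ["extend the time scale and share it", "write the problem and share it"]
    return actions  # combinations outside the reported rules yield an empty list (assumption)

print(route_alarm(18, "fall"))
```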
In parallel with the decision rules mining, steps 1' and 2' in Fig. 2 are performed; the patterns contained in the event log are discovered (Fig. 4.a), exported in CSV files and finally post-processed by removing each pattern's duplicates and computing their occurrence frequency. If we consider all the possible process patterns and the different rules, it is possible to construct the BPMN process depicted in Fig. 5. These patterns (Fig. 4.a) are then filtered based on the current context and the decision rules that were generated (step 4 in Fig. 2). For instance, if the alarm is received in the fall by John, and the pressure of the supervised network equals 18 millibars, which is less than both 22 and 19 millibars (Fig. 5), the approach proposes two possible patterns to solve the problem (Fig. 4.b), where the first one, "P12", is the most frequently used in this context.
Conclusion and Future Work
The objective of this paper is to support engineers in their decision-making processes by proposing the most relevant process patterns to be executed given the context. Through the proposed approach, the past execution traces are first analysed and the decision rules that control the process are mined. Then, the patterns and their occurrence frequency are discovered, postprocessed and filtered based on the discovered decision rules and the user context parameters. A ranking of the most likely patterns to be executed are then proposed. This approach illustrates the feasibility of the assumption about using process mining techniques to support decision making in complex processes that are controlled by inclusive, exclusive and parallel gateways. Future work consists in fully automating the approach and integrating it in the process visualizer tool presented in [START_REF] Roucoules | Engineering design memory for design rationale and change management toward innovation[END_REF]. It also consists in evaluating this approach, using real-world design and supervision processes, with respect to some performance indicators such as execution time, quality of the proposed decisions, changes propagation, etc.
Fig. 1 .
1 Fig. 1. Example of process patterns (expressed in BPMN 1 )
Fig. 2 .
2 Fig. 2. Overview of the proposal (expressed in IDEF0 6 )
Fig. 4 .
4 Fig. 4. (a) Discovered patterns, (b) proposed patterns
Fig. 5 .
5 Fig. 5. Part of the resulting supervision process with the different rules (expressed in BPMN)
https://en.wikipedia.org/wiki/Petri_net
http://www.promtools.org/
http://sep.cs.ut.ee/Main/BranchMiner
https://fmannhardt.de/blog/process-mining-tools
https://en.wikipedia.org/wiki/IDEF0
http://www.xes-standard.org/
https://en.wikipedia.org/wiki/ATLAS_Transformation_Language
http://www.pnml.org/
Acknowledgments. This research takes part of a national collaborative project (Gontrand) that aims at supervising a smart gas grid. Authors would like to thank the companies REGAZ, GDS and GRDF for their collaboration. |
01764212 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764212/file/462132_1_En_2_Chapter.pdf | Mourad Messaadia
email: mmessaadia@cesi.fr
Fatah Benatia
email: fatahbenatia@hotmail.com
David Baudry
email: dbaudry@cesi.fr
Anne Louis
email: alouis@cesi.fr
PLM Adoption Model for SMEs
Keywords: PLM, ICT Adoption, SMEs, Data analysis
INTRODUCTION
The literature review has addressed the topic of PLM from different angles. However, the adoption aspect has only been dealt with by a few works, such as [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF], where the author proposes statistical tools to improve the organizational adoption of new PLM systems and highlights the importance of surveys early in the PLM introduction process; [START_REF] Ristova | AHP methodology and selection of an advanced information technology due to PLM software adoption[END_REF], which provides a review of the main developments in the AHP (Analytical Hierarchy Process) methodology as a tool allowing decision makers to make more informed decisions regarding their investment in PLM; and [START_REF] Rossi | Product Lifecycle Management Adoption versus Lifecycle Orientation: Evidences from Italian Companies[END_REF], which focuses on the adoption of PLM IT solutions and discusses the relationship between "PLM adopter" and "lifecycle-oriented" companies. In order to address the adoption aspect, we have considered PLM as an innovative ICT for SMEs, and we have therefore integrated works on ICT and innovation adoption. ICT technology is one of the ways at the disposal of a company to increase its productivity: ICT can reduce business costs, improve productivity and strengthen growth possibilities and the generation of competitive advantages [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF]. Despite the work done and the evolution of large companies in terms of PLM, SMEs still have difficulties understanding the full potential of such technologies [START_REF] Hollenstein | The decision to adopt information and communication technologies (ICT): firm-level evidence for Switzerland[END_REF]. Their adoption of ICT is slow and late, primarily because they find ICT adoption difficult [START_REF] Hashim | Information communication technology (ICT) adoption among SME owners in Malaysia[END_REF], and SME adoption is still lower than expected.
When implementing a PLM solution in a company, the implementation difficulties are directly dependent on the complexity of the organization, the costs and the possible opacity of the real behaviours in the field. Indeed, the implementation of a PLM solution seems to scare SMEs in terms of resource costs and deployment.
The integration of PLM solutions and their adoption by SMEs has attracted the interest of several research works. Among these, we distinguish those on improving the adoption process through statistical tools [START_REF] Bergsjö | PLM Adoption Through Statistical Analysis[END_REF]. In the same way, authors in [START_REF] Fabiani | ICT adoption in Italian manufacturing: firm-level evidence[END_REF] conducted an investigation of around 1500 enterprises and analysed the adoption process. This investigation shows that the size of the enterprise, the human capital of the workforce and the geographic proximity to large firms have an impact on ICT adoption. On the other hand, we find investigations based on empirical analysis which highlight the role of management practices, especially the manager, and of quality control in ICT adoption.
Another investigation was conducted on a thousand manufacturing firms in Brazil and India and examines the characteristics of firms adopting ICT and the consequences of adoption for performance [START_REF] Basant | ICT adoption and productivity in developing countries: new firm level evidence from Brazil and India[END_REF]. In addition to the previous results, it shows the impact of the educational system and the positive association between ICT adoption and education. Several barriers to IT adoption have been identified, including a lack of knowledge about the potential of IT, a shortage of resources such as finance and expertise, and a lack of skills [START_REF] Hashim | Information communication technology (ICT) adoption among SME owners in Malaysia[END_REF].
According to [START_REF] Forth | Information and Communication Technology (ICT) adoption and utilisation, skill constraints and firm level performance: evidence from UK benchmarking surveys[END_REF], worker skills have an impact on ICT adoption: firms with high (low) proportions of skilled workers can have a comparative advantage (disadvantage) in minimizing the costs both of ICT adoption and of learning how to make the best use of ICTs.
A review of works on ICT adoption concludes on the importance of analysing the impact of ICT system implementation and adoption processes, how they do so, and how these processes could be supported at the organizational, group and individual levels [START_REF] Korpelainen | Theories of ICT system implementation and adoption-A critical[END_REF]. Based on these previous works, we will consider PLM as an innovative ICT solution for SMEs.
The next section introduces the problem statement and the context of the study. The third section presents the proposed model of PLM adoption based on quantitative KPIs. The fourth section highlights the obtained results and their discussion. Finally, we conclude and discuss future work on how to improve and deploy our model.
STUDY CONTEXT
The first initiative of this work was conducted during the INTERREG project called "BENEFITS", where different adoption KPIs were identified [START_REF] Messaadia | PLM adoption in SMEs context[END_REF].
On the basis of an analysis of the various studies carried out with several companies, it is possible to collect different indicators. These indicators have been classified according to 4 axes identified through the analysis of PLM definitions. The 4-axis structure (Strategy, Organisation, Process and Tools) seemed clear and gave good visibility of the impact of the indicators on the different levels of the enterprise [START_REF] Messaadia | PLM adoption in SMEs context[END_REF].
For our work, the survey conducted followed different steps, from questionnaire design to data analysis [START_REF]Survey methods and practices[END_REF]. One of the problems faced during questionnaire design is deciding what questions to ask, how to best word them and how to arrange them to yield the required information. For this, the questions were built on the basis of the indicators, the wording was reviewed by experts and, finally, we reorganised the questions according to four new axes: Human Factors, Organisational Factors, Technical Factors and Economic Factors. This new decomposition does not affect the indicators but brings fluidity and easier understanding for the interviewees (SMEs).
Fig.1.PLM axis structuration
Also, the objective of the investigation is to understand the needs of SMEs with regard to the introduction of digital technology within the automotive sector, and to anticipate the increase in competences needed to help these SMEs face the change by setting up the necessary services and training. The survey was conducted on a panel of 33 companies (14 with study activities and 19 with manufacturing activities), of which 50% are small structures, as shown in Fig. 2.
Fig.2.Panel of SMEs interviewed
PLM ADOPTION INDICATORS
The concept of adoption may be defined as a process composed of a certain number of steps through which a potential adopter must pass before accepting the new product, new service or new idea [START_REF] Frambach | Organizational innovation adoption: A multilevel framework of determinants and opportunities for future research[END_REF]. Adoption can be seen as individual adoption or organizational adoption. The individual one focuses on user behaviour towards the new technology and has an impact on the investment in IT technology [START_REF] Magni | Intra-organizational relationships and technology acceptance[END_REF]. In organisational adoption, the organisation forms an opinion of the new technology and assesses it; based on this, the organisation makes the decision to purchase and use this new technology [START_REF] Magni | Intra-organizational relationships and technology acceptance[END_REF]. Based on the work done in [START_REF] Messaadia | PLM adoption in SMEs context[END_REF], we developed the questionnaire according to adoption factors (Table 1).
QUESTIONNAIRE ANALYSIS
The previous step was the construction of the questionnaire, a methodological tool with a set of questions that follow one another in a structured way (Fig. 3). It is presented in electronic form and was administered directly, face to face and by phone. The proposed multiple-regression model is:
PLM_i = a·H_i + b·O_i + c·T_i + d·E_i + ε_i (1)
For i = 1, …, n, the hypothesis related to the model (Eq. 1) is that the errors ε_i are independent and centred with constant variance: ε_i ~ N(0, σ²). In order to conclude that there is a significant relationship between the PLM level and the adoption factors, the regression (Eq. 1) is used during estimation and to improve the quality of the estimates. The first step is to calculate the four adoption factors (H, O, T and E) from the questionnaire answers; once they are calculated, the matrix form of our model becomes:
Y = X·B + E (2)
where Y = (PLM_1, …, PLM_n)ᵀ, X is the n×4 matrix whose i-th row is (H_i, O_i, T_i, E_i), B = (a, b, c, d)ᵀ and E = (ε_1, …, ε_n)ᵀ.
To solve Eq. (2), we need to calculate the estimated matrix B̂, given by:
B̂ = (XᵀX)⁻¹ Xᵀ Y (3)
Through all these equations (observations) we can give the general regression equation of PLM:
PLM = a·H + b·O + c·T + d·E + ε (4)
The methodology adopted starts by determining (estimating) the parameters a, b, c and d of the multiple-regression function; the result of the estimation is denoted â, b̂, ĉ, d̂. For this, we chose the "mean square error" method, computed with Matlab. In the second step, we calculate the dependency between the PLM level (the result of the multiple regression) and the adoption factors (H, O, T and E) through the regression coefficient (R) and, in particular, the determination coefficient (D), where:
D = R² = SSR/SST
with SSR the regression sum of squares, SST the total sum of squares and SSE the sum of squares of the errors (SST = SSR + SSE). If |R| → 1, we have a strong dependence and a good regression.
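For illustration, the estimation and the determination coefficient described above can be reproduced with a few lines of numpy; the function below is a generic sketch, not the Matlab code used by the authors, and assumes that the data matrix holds the four factor scores computed from the questionnaire.

```python
import numpy as np

def fit_plm_model(X, y):
    """Ordinary least squares for PLM = a*H + b*O + c*T + d*E (no intercept).
    X: (n, 4) array of factor scores [H, O, T, E]; y: (n,) observed PLM levels."""
    B_hat = np.linalg.solve(X.T @ X, X.T @ y)     # Eq. (3): B_hat = (X'X)^-1 X'Y
    residuals = y - X @ B_hat
    sse = float(residuals @ residuals)            # sum of squared errors (SSE)
    sst = float(((y - y.mean()) ** 2).sum())      # total sum of squares (SST)
    r2 = 1.0 - sse / sst                          # determination coefficient
    return B_hat, r2, residuals
```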
Numerical Results
After the investigation, the PLM-Eval-Tool generates a data table (Fig. 4) of evaluated responses that is used to build our adoption model. Once the data were collected, we applied our approach to obtain the estimated parameters â, b̂, ĉ, d̂ through Eq. (3).
Fig.4. Brief view of collected data
(â, b̂, ĉ, d̂) = (0.0697, 0.6053, 0.1958, 0.1137) (5)
With a determination coefficient of 0.9841, which is considered a very good regression, this validates the proposed model (Eq. 1).
The numerical result equation is:
PLM_Evaluation = 0.0697·H + 0.6053·O + 0.1958·T + 0.1137·E (6)
Result discussion
Concerning the error, we consider the highest one, which is equal to 0.0128. This means that all values of PLM_Evaluation are given with ± 0.0128. We can also determine confidence intervals for the parameters a, b, c and d using the Student law t(α, ν), where α is the confidence threshold (or tolerance error rate), chosen here as α = 0.05, ν = 4 is the degree of freedom (the number of parameters) and σ_â is the standard deviation (the square root of the variance). In our case t(0.05, 4) = 2.132 (Fig. 5). Using data from a sample, the probability that the observed values are the chance result of sampling, assuming the null hypothesis (H0) is true, is calculated. If this probability turns out to be smaller than the significance level of the test, the null hypothesis is rejected:
H0: a = 0; H1: a ≠ 0
For this we calculate:
T_c = |â| / σ_â
Then we compare it to the value t(0.05, 4) = 2.13. If T_c < t(0.05, 4) = 2.13, we accept H0: a = 0; the H factor does not influence the realization of PLM and we then recreate another regression equation without H. The same analysis was done for b, c and d.
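The significance test described above could be computed as in the following sketch, which reuses the fitted quantities from the previous snippet and the threshold t(0.05, 4) = 2.132 reported in the paper; the degrees of freedom used for the residual variance follow the usual n - p convention and are an assumption.

```python
import numpy as np

def coefficient_t_tests(X, B_hat, residuals, t_critical=2.132):
    """Compare T_c = |coefficient| / standard error with the Student threshold;
    a coefficient below the threshold is not significant (H0: coefficient = 0)."""
    n, p = X.shape
    sigma2 = float(residuals @ residuals) / (n - p)   # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)             # covariance of the estimates
    std_err = np.sqrt(np.diag(cov))
    t_stats = np.abs(B_hat) / std_err
    labels = ["a (H)", "b (O)", "c (T)", "d (E)"]
    return {name: (float(t), bool(t > t_critical)) for name, t in zip(labels, t_stats)}
```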
DISCUSSION
Once the model was developed, another aspect of the analysis was explored: the recommendations. Indeed, the PLM-Eval-Tool also offers a view (Fig. 6) of the results according to factors such as change management, structured sharing, extended enterprise, evaluation capacity and willingness to integrate. These factors are given as numerical scores, and the first findings of the SME analysis are:
• 30% of companies consider themselves to be under-equipped with regard to information technology.
• Companies recognize that information technology is very much involved in the development process, but for the majority of them organizational aspects and informal exchanges are decisive.
• They believe that they have the in-house skills to anticipate and evaluate technological opportunities.
According to the obtained results, here is a list of the first actions that we propose to implement:
• To make the players in the sector aware of the evolution of this increasingly digital environment.
• To diagnose the existing digital chaining in companies (processes, tools, skills, etc.) in order to promote the benefits of the PLM approach.
• To propose levers of competitiveness through the identification of "Mutualized Services" and "Software as a Service" solutions.
• To propose devices to gain skills and to accompany the change management, from manufacturers and equipment manufacturers to the SMEs in the region.
CONCLUSION
The statistical analysis allowed us to develop a mathematical model to evaluate the adoption of an SME in terms of PLM. Thus, SMEs will be able to carry out a first self-evaluation without calling on external consultants. However, this model will have to be improved with results from more SMEs and by taking into account the different activity sectors.
As future work, we envisage working on several case studies (deployment in France) in order to improve the mathematical model. Another work will also be carried out in order to generate recommendations automatically. The aim of this approach is to offer SMEs a tool for analysis and decision making in the upstream stage of the introduction or adoption of PLM tools.
ACKNOWLEDGEMENT
Acknowledgement is made to PFA automotive which has initiated this study around the technical information, processes and skills management system, which provides data structuring for the extended company with the support of the DIRECCTE IdF, the RAVI for the identification of companies and the CETIM to conduct the interviews.
Fig. 3 .
3 Fig. 3. PLM-Eval-Tool: questionnaire
Fig. 5 .
5 Fig.5. Student table
Fig. 6 .
6 Fig.6. Radar showing the average of the results obtained by the companies that responded to the questionnaire. Scaling from 0: Very low to 5: Very good
Table 1. Adoption factors according to the 4 axes
Axes: Questions according to adoption factors
Human factor: Ability to assess technological opportunities (FH1); Resistance to change (FH2); The learning effects of previous use of ICT technology (FH3); Relative advantage (FH4); Risk aversion (FH5); Emphasis on quality (FH6)
Organisational factor: Average workforce size of the SME between 50 and 200 (FO1); Age of SMEs (FO2); Competitive environment (FO3); Rank of SME (FO4); Geographical proximity (FO5); Number of adopters (FO6); Interdependencies / Collaboration (FO7); Existing leading firms (OEM) in your economic environment (FO8); Informal communication mode (FO9); Existing innovation process (FO10); Knowledge management (FO11); Process synchronization (FO12); Existing R&D activities (FO13); Existing certified (QM) system (FO14)
Technological factor: The position of the SME related to ICT technologies (FT1); Interoperability (FT2); Ergonomics (FT3); Compatibility with similar technology (FT4); Compatibility with needs and existing processes (FT5); How the technology is evaluated before adoption (FT6); Have you had the opportunity to test the technology before its adoption (FT7); Complexity (FT8); The frequency of new technology integration (FT9); Level of skill and knowledge (FT10); Existing software (PDM, CAD/CAM, ERP) (FT11)
Economical factor: Indirect costs (FE1) |
01764219 | en | [
"info"
] | 2024/03/05 22:32:13 | 2017 | https://inria.hal.science/hal-01764219/file/462132_1_En_25_Chapter.pdf | Joel Sauza Bedolla
Gianluca D'antonio
email: gianluca.dantonio@polito.it
Frédéric Segonds
email: frederic.segonds@ensam.eu
Paolo Chiabert
email: paolo.chiabert@polito.it
PLM in Engineering Education: a pilot study for insights on actual and future trends
Keywords: Product Lifecycle Management, Education, Survey
Universities around the world are teaching PLM following different strategies, at different degree levels and presenting this approach from different perspectives. This paper aims to provide preliminary results for a comprehensive review concerning the state of the art in PLM education. This contribution presents the design and analysis of a questionnaire that has been submitted to academics in Italy and France, and companies involved in a specific Master program on PLM. The main goal of the survey is to collect objective and quantitative data, as well as opinions and ideas gained from education expertise. The collected results enable to depict the state of the art of PLM education in Italian universities and to gain some insights concerning the French approach; the structure of the survey is validated for further worldwide submission.
Introduction
Product Lifecycle Management (PLM) is a key factor for innovation. The PLM approach to support complex goods manufacturing is now considered as one of the major technological and organizational challenges of this decade to cope with the shortening of product lifecycles [START_REF] Garetti | Organisational change and knowledge management in PLM implementation[END_REF]. Further, in a globalized world, products are often designed and manufactured in several locations worldwide, in "extreme" collaborative environments.
To deal with these challenges and maintain their competitiveness, companies and professional organizations need employees to own a basic understanding of engineering practices, and to be able to perform effectively, autonomously, in a team environment [START_REF] Chen | Web-based mechanical engineering design education environment simulating design firms[END_REF]. Traditional methodologies for design projects (i.e. with collocated teams and synchronous work) could be effective until a few decades ago, but they are insufficient nowadays. Thus, engineering education has changed in order to provide students with some experience in collaborative product development during their studies. It is essential to train students to Computer Supported Collaborative Work (CSCW) [START_REF] Pezeshki | Preparing undergraduate mechanical engineering students for the global marketplace -New demands and requirements[END_REF], and PLM is a means for students to structure their design methodology. Indeed, before starting an efficient professional collaboration, future engineers must be mindful of how this approach works, and how tasks can be split between stakeholders. Thus, from an educational point of view, the PLM approach can be considered as a sophisticated analysis and visualization tool that enables students to improve their problem solving and design skills, as well as their understanding of engineering systems behaviour [START_REF] Chen | Web-based mechanical engineering design education environment simulating design firms[END_REF]. Moreover, PLM can also be a solution to face one of the main problems in our educational system: the fragmentation of the knowledge and its lack of depth [START_REF] Pezeshki | Preparing undergraduate mechanical engineering students for the global marketplace -New demands and requirements[END_REF].
The main research question from here is: "How can we, as engineering educators, respond to global demands to make our students more productive, effective learners? And how can PLM help us to achieve this goal?". At the state of the art, the information about PLM education is fragmented. Hence, the aim of this paper is to propose a survey structure to collect quantitative data about the existing university courses in PLM, identify the most common practices and possible improvements to closer adhere to the needs of manufacturers.
The remainder of the paper is organized as follows: in section 2, an analysis of literature concerning recent changes in educational practices in engineering education is presented and the state of the art of PLM education is settled. Then, the survey structure is presented in section 3. The results are presented in section 4: data collected from Italian universities are presented, as well as the results of the test performed in France to validate the survey structure. Finally, in section 5, some conclusive remarks and hints for future work are provided.
State of the art
In literature, there is no evidence of a complete and full review of how PLM is taught in higher institutions around the world. Still, partial works can be found. Gandhi [START_REF] Gandhi | Product Lifecycle Management in Education: Key to Innovation in Engineering and Technology[END_REF] presents the educational strategy employed by three US universities. Fielding et al. [START_REF] Fielding | Product lifecycle management in design and engineering education: International perspectives[END_REF] show examples of PLM and collaborative practices implemented in higher education institutions from the United States and France. Sauza et al. [START_REF] Sauza-Bedolla | PLM in a didactic environment: the path to smart factory[END_REF] performed a two-step research. The first attempt consisted in a systematic research of keywords (i.e. PLM education, PLM certification, PLM course, PLM training) in the principal citation databases. Nevertheless, the analysis of scientific literature was limited to some specific programs of a limited quantity of countries. For this reason, the research was extended to direct research on universities' websites. The inclusion criteria for institutions was the attendance to one of the two main events in scientific and industrial use of PLM: (i) the IFIP working group 5.1 PLM Inter-national Conference, and (ii) Partners for the Advancement of Collaborative Engineering Education. The review process covered 191 universities from Europe, Asia, America and Oceania. It was found that there is a high variety in the topics that are presented to students, departments involved in the course management, the education strategy and the number of hours related to PLM.
The analysis presents useful insights. However, the research methodology based on website analysis was not sufficient and may present some gaps. In some cases, websites did not present a "search" option and this limited the accessibility of information. Moreover, during the research, some issues with languages were experienced: not all of the universities offered information in English, and for this reason, those universities were not considered. In some other cases, information was presented in curricula that can be accessed only by institution members. The specific didactic nature of this study lies precisely in that it brings researchers and professors from engineering education to explain their vision of how PLM is taught. The objective is to get real participatory innovation based on integration of PLM within a proven training curriculum in engineering education. One step further, we argue that by stimulating the desire to appropriate knowledge, innovative courses are also likely to convince a broad swath of students averse to traditional teaching methods and much more in phase with their definition as "digital natives" [START_REF] Prensky | Digital natives, digital immigrants. Part 1[END_REF]. This paper is intended to be the first step of a broader effort to map the actual situation of PLM education around the world. This contribution presents the methodology employed to scientifically collect information from universities. Before going global, a first test has been made to evaluate the robustness of the tool in the authors' countries of origin, where the knowledge of the university system structure was clear.
Methodology
In order to get insights on the state of the art in PLM education, a survey structured in three parts has been prepared.
The first part is named "Presentation": the recipient is asked to state the name of his institution and to provide an email address for possible future feedbacks. Further, he is asked to state whether he is aware about the existence of courses in PLM in his institution or not, and if he is in charge of such courses. In case of positive reply, the recipient is invited to fill the subsequent part of the survey.
The second part of the survey aims to collect objective information to describe the PLM course. In particular, the following data are required:
Finally, in the third part of the survey, subjective data are collected to measure the interest of the recipient in teaching the PLM approach and the interest of the students in this topic (both on a 1-5 Likert scale). Further, an opinion about the duration of the course is required (not enough/proper/excessive) and whether the presentation of applied case studies or the contribution of industrial experts are included in the course. A space for further free comments is also available.
The invitations to fill the survey been organized in two steps. First, a full experiment has been made in Italy. The official database owned by the Italian Ministry of Education and University has been accessed to identify the academics to be involved. In Italy, academics are grouped according to the main topic of their research. Therefore, the contacts of all the professors and researchers working in the closest topics to PLM have been downloaded, namely: (i) Design and methods of industrial engineering; (ii) Technologies and production systems; (iii) Industrial plants; (iv) Economics and management Engineering; (v) Information elaboration systems; (vi) Computer science. This research led to a database consisting of 2208 people from 64 public universities. A first invitation and a single reminder have been sent; the survey, realized through a Google Form, has been made accessible online for 2 weeks in January 2017.
The second step consisted in inviting a small set of academics from French universities through focused e-mails: 11 replies have been collected. Further, a similar survey has been submitted to French companies employing people who attended a Master in PLM in the years 2015 and 2016.
Survey data analysis
Results from the Italian sample
The overall number of replies from Italian academics is equal to 213, from 49 different institutions. Among this sample, 124 people do not have information about PLM courses in their universities; therefore, they were not asked further questions. The 89 respondents aware about a PLM course belong to 36 universities; among them, 40 professors are directly involved in teaching PLM. A synthetic overview of the results is provided in Fig. 1; the map of the Italian universities in which PLM is taught is shown in Fig. 2. Type of course. The teachers involved in teaching PLM state that this topic is mostly dealt in broader courses, such as Drawing, Industrial Plants, Management. Practical activities. Among the 40 PLM teachers, 25 of them do not use software to support their educational activity. Some courses deploy Arena, Enovia, the PLM module embedded in SAP, Windchill. Other solutions, developed by smaller software houses are also used. Among the respondents, no one uses Aras Innovator, a PLM solution that has a license model inspired by open source products. However, in the majority of the teachers (27), industrial case studies are presented to show the role of PLM in managing product information and to provide students with a practical demonstration of the possible benefits coming from its implementation. Furthermore, interventions from industrial experts, aiming to show the practical implications of the theoretical notions taught in frontal lectures, are planned by 21 teachers.
Interest in PLM. The interest of students in PLM is variable: the replies are equally distributed among "Low" or "Fair" (25 occurrences) and "High" or "Very high" (25 occurrences). The interest of respondents in PLM is variable too: 34 people replied "Strongly low", "Low" or "Fair"; 34 people replied "High" or "Very high"; the remaining sample states "I don't know". As expected, the interest in PLM of people teaching this topic is high: 29 people replied "High" or "Very high" (out of a sample of 40 teachers).
Results from the French sample
On the French side, 11 replies were collected from 7 different Universities and School of Engineering. All the respondents teach PLM courses in their Universities. Similarly to the Italian sample, PLM is mostly taught in the M.Sc. level: beside a Master course, one B.Sc. and 8 M.Sc. courses were mapped. Most of the courses (8) are devoted to Mechanical Engineers. In 6 cases, a specific course is designed for PLM; further, in the Ecole supérieure d'électricité settled in Châtenay-Malabry the so-called 'PLM week' is organized. The duration of the PLM courses mainly ranges between 32 and 64 hours, which is an appropriate duration, according to the teachers; conversely, in the broader courses, the time spent in teaching PLM is lower than 6 hours. The only Master mapped through the survey is held in Ecole Nationale Supérieure d'Arts et Métiers (Paris): the duration is equal to 350 hours, with high interest of the participants.
A reduced version of the survey was also sent to a small set of French companies to map internal courses in PLM. 7 replies have been obtained: 3 were from large companies in the fields of aeronautics, textile and consulting, and 4 were small or medium companies from the PLM and BIM sector. 57% of these companies declare they have courses dedicated to PLM. The names of the courses vary. In particular, a textile enterprise has a course structured in 11 modules following its business processes.
Conclusions
The present paper presented a methodology for a systematic overview about university education in PLM. A survey has been submitted to all the Italian academics performing research and teaching activities in fields related with PLM. The percentage of respondents in the Italian experiment was approximately 10%, which is in line with the expectations of the authors: these replies enabled to identify PLM courses in 36 different universities, mainly located in the north-central part of the country, which is characterized by a higher density of industries. However, to have a successful realization of the survey a complete database of university teachers is mandatory.
The proof-of-concept realized on the French sample led to good results: no criticalities have been found in the survey. Hence, the next steps of the work are the creation of the recipients database and the full-scale experiment. Then, the experiment can be replicated in other countries, to have a more exhaustive picture about PLM education. We plan to rely on Bloom taxonomy of objectives to sharpen the skills taught in PLM courses [START_REF] Bloom | Bloom taxonomy of educattional objectives[END_REF].
Our research question was: "How can we, as engineering educators, respond to global demands to make our students more productive, effective learners? And how can PLM help us to achieve this goal?". A first answer to this research question is the proposal, as an ultimate goal, of the creation of a network of PLM teachers, which will enable mutual exchange of expertise, teaching material, exercises and practices. To reach this goal and to widen our approach to the IFIP WG 5.1 community, a first step could be the creation of a shared storage space for documents that allows any user to teach PLM at any level.
- The level at which the course is taught (among B.Sc., M.Sc., Ph.D., Master);
- The curriculum in which the course is taught (free reply);
- At which year the course is taught, and the overall duration of the curriculum (values constrained between 1 and 5);
- The department in charge of the course (free reply);
- If PLM is taught in a devoted course (Yes/No) or as a topic in a broader course (Yes/No);
- The name of the course (free reply) and its duration;
- If software training is included (Yes/No) and which software is used.
Degree level. In the sample of 36 universities, PLM is taught at different levels. The Master of Science is the most common: 53 courses have been identified. In 22 cases, PLM is also taught at the Bachelor level. Furthermore, there are 4 courses devoted to Ph.D. candidates and 2 Masters are organized. The latter two Master courses are organized in the Polytechnic universities of Torino and Milano; however, the first one has recently moved to University of Torino. Curricula. There is a variety of curricula involved in teaching PLM. Course for Management Engineering and Mechanical Engineering are organized (23 occurrences each). The area of Computer Science is also involved (23 occurrences): topics concerning the architecture of PLM systems, or the so-called Software Lifecycle Management are taught. Moreover, PLM courses are also provided in Industrial Engineering (6 occurrences), Automotive Engineering (B.Sc. at Polytechnic University di Torino) and Building Engineering (Ph.D. course at Politecnico di Bari).
Fig. 1. Synthesis of the results obtained through both the Italian and the French PLM teachers.
Fig. 2. Map of the Italian universities in which PLM is taught.
Acknowledgments
The authors are grateful to the colleagues and industrials that replied to the survey. |
01764438 | en | [
"phys.phys.phys-atom-ph",
"phys.astr"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764438/file/UltracoldVFShortHAL.pdf | Improving the accuracy of atom interferometers with ultracold sources
R. Karcher, A. Imanaliev, S. Merlet, and F. Pereira Dos Santos
LNE-SYRTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, 61 avenue de l'Observatoire, 75014 Paris
(Dated: April 11, 2018)
We report on the implementation of ultracold atoms as a source in a state-of-the-art atom gravimeter. We perform gravity measurements with 10 nm/s² statistical uncertainties in a so-far unexplored temperature range for such a high accuracy sensor, down to 50 nK. This allows for an improved characterization of the most limiting systematic effect, related to wavefront aberrations of light beam splitters. A thorough model of the impact of this effect onto the measurement is developed and a method is proposed to correct for this bias based on the extrapolation of the measurements down to zero temperature. Finally, an uncertainty of 13 nm/s² is obtained in the evaluation of this systematic effect, which can be improved further by performing measurements at even lower temperatures. Our results clearly demonstrate the benefit brought by ultracold atoms to the metrological study of free falling atom interferometers. By tackling their main limitation, our method allows reaching record-breaking accuracies for inertial sensors based on atom interferometry.
Atom gravimeters constitute today the most mature application of cold atom inertial sensors based on atom interferometry. They reach performances better than their classical counterparts, the free fall corner cube gravimeters, both in terms of short term sensitivity [1,2] and long term stability [3]. They offer the possibility to perform high repetition rate continuous measurements over extended periods of time [3,4], which represents an operation mode inaccessible to other absolute gravimeters. These features have motivated the development of commercial cold atom gravimeters [5], addressing in particular applications in the fields of geophysics. Nevertheless, the accuracy of these sensors is today slightly worse. Best accuracies in the 30-40 nm/s² range have been reported [3,4] and validated through the participation of these instruments in international comparisons of absolute gravimeters since 2009 [6,7], to be compared with the accuracy of the best commercial state-of-the-art corner cube gravimeters, of the order of 20 nm/s² [START_REF]FG5-X specifications[END_REF].
The dominant limit in the accuracy of cold atom gravimeters is due to the wavefront distortions of the lasers beamsplitters. This effect is related to the ballistic expansion of the atomic source through its motion in the beamsplitter laser beams, as illustrated in figure 1, and cancels out at zero atomic temperature. In practice, it has been tuned by increasing the atomic temperature [4] and/or by using truncation methods, such as varying the size of the detection area [START_REF] Schkolnik | [END_REF] or of the Raman laser beam [10]. Comparing these measurements with measured or modelled wavefronts allows to gain insight on the amplitude of the effect, and estimate the uncertainty on its evaluation. It can be reduced by improving the optical quality of the optical elements of the interferometer lasers, or by operating the interferometer in a cavity [11], which filters the spatial mode of the lasers, and/or by compensating the wavefront distortions, using for instance a deformable mirror [12].
The strategy we pursue here consists in reducing the atomic temperature below the few µK limit imposed by cooling in optical molasses in order to study the temper-ature dependence of the wavefront aberration bias over a wider range, and down to the lowest possible temperature. For that, we use ultracold atoms produced by evaporative cooling as the atomic source in our interferometer. Such sources, eventually Bose-Einstein condensed, show high brightness and reduced spatial and velocity spread. These features allow for a drastic increase in the interaction time, on the ground [13] or in space [14] and for the efficient implementation of large momentum transfer beam splitters [15][16][17]. The potential gain in sensitivity has been largely demonstrated (for instance, by up to two orders of magnitude in [13]). But it is only recently that a gain was demonstrated in the measurement sensitivity of an actual inertial quantity [18], when compared to best sensors based on the more traditional approach exploiting two photon Raman transitions and laser cooled atoms. Here, implementing such a source in a state of the art absolute gravimeter, we demonstrate that ultracold atom sources also improve the accuracy of atom interferometers, by providing an ideal tool for the precise study of their most limiting systematic effect.
We briefly detail here the main features of our cold atom gravimeter. A more detailed description can be found in [4]. It is based on an atom interferometer [19] based on two-photon Raman transitions, performed on free-falling 87 Rb atoms. A measurement sequence is as follows. We start by collecting a sample of cold or ultracold atoms, which is then released in free fall. After state preparation, a sequence of three Raman pulses drives a Mach Zehnder type interferometer. These pulses, separated by free evolution times of duration T = 80 ms, respectively split, redirect and recombine the matter waves, creating a two-wave interferometer. The total duration of the interferometer is thus 2T = 160 ms. The populations in the two interferometer output ports N 1 and N 2 are finally measured by a state selective fluorescence detection method, and the transition probability P is calculated out of these populations (P = N 1 /(N 1 + N 2 )). This transition probability depends on the phase difference accumulated by the matter waves along the two arms of the interferometer that is, in our geometry, given by Φ = k. gT 2 , where k is the effective wave vector of the Raman transition and g the gravity acceleration. Gravity measurements are then repeated in a cyclic manner. Using laser cooled atoms, repetition rates of about 3 Hz are achieved which allows for a fast averaging of the interferometer phase noise dominated by parasitic vibrations. We have demonstrated a best short term sensitivity of 56 nm.s -2 at 1 s measurement time [1], which averages down to below 1 nm.s -2 . These performances are comparable to the ones of the two other best atom gravimeters developed so far [2,3]. The use of ultracold atoms reduces the cycling rate due to the increased duration of the preparation of the source. Indeed, we first load the magneto-optical trap for 1 s (instead of 80 ms only when using laser cooled atoms) before transferring the atoms in a far detuned dipole trap realized using a 30 W fibre laser at 1550 nm. It is first focused onto the atoms with a 170 µm waist (radius at 1/e 2 ), before being sent back at a 90 • angle and tightly focused with a 27 µm waist, forming a crossed dipole trap in the horizontal plane. The cooling and repumping lasers are then switched off, and we end up with about 3 × 10 8 atoms trapped at a temperature of 26 µK. Evaporative cooling is then implemented by decreasing the laser powers from 14.5 W and 8 W to 2.9 W and 100 mW typically in the two arms over a duration of 3 s. We finally end up with atomic samples in the low 100 nK range containing 10 4 atoms. Changing the powers at the end of the evaporation sequence allows to vary the temperature over a large temperature range, from 50 nK to 7 µK. The total preparation time is then 4.22 s, and the cycle time 4.49 s, which reduces the repetition rate down to 0.22 Hz. Furthermore, at the lowest temperatures, the number of atoms is reduced down to the level where detection noise becomes comparable to vibration noise. The short term sensitivity is thus significantly degraded and varies in our experiment in the 1200-3000 nm.s -2 range at 1 s, depending on the final temperature of the sample. The red line is a fit to the data with a subset of five Zernike polynomials and the filled area the corresponding 68% confidence area.
We performed differential measurements of the gravity value as a function of the temperature of the source, which we varied over more than two orders of magnitude. The results are displayed as black circles in figure 2, which reveals a non-trivial behaviour, with a fairly flat behaviour in the 2-7 µK range, consistent with previous measurements obtained with optical molasses [4], and a rapid variation of the measurements below 2 µK. This shows that a linear extrapolation to zero temperature based on high temperature data taken with laser cooled atoms would lead to a significant error. These measurements have been performed for two opposite orientations of the experiment (with respect to the direction of the Earth rotation vector) showing the same behaviour, indicating that these effects are not related to Coriolis acceleration [4]. Moreover, the measurements are performed by averaging measurements with two different orientations of the Raman wavevector, which suppresses the effect of many systematic effects, such as differential light shifts of the Raman lasers that could vary with the temperature [4].
To interpret these data, we have developed a Monte Carlo model of the experiment, which averages the contributions to the interferometer signal of atoms randomly drawn in their initial position and velocity distributions. It takes into account the selection and interferometer processes, by including the finite size and finite coupling of the Raman lasers, and the detection process, whose finite field of view cuts the contribution of the hottest atoms to the measured atomic populations [20]. This model is used to calculate the effect of wavefront aberrations onto the gravity measurement as a function of the experimental parameters. For that, we calculate for each randomly drawn atom its trajectory and positions at the three pulses in the Raman beams, and take into account the phase shifts which are imparted to the atomic wavepackets at the Raman pulses: δφ = kδz_i, where δz_i is the flatness defect at the i-th pulse. We sum the contributions of a packet of 10⁴ atoms to the measured atomic populations to evaluate a mean transition probability. The mean phase shift is finally determined from consecutive such determinations of mean transition probabilities using a numerical integrator onto the central fringe of the interferometer, analogous to the measurement protocol used in the experiment [4]. With 10⁴ such packets, we evaluate the interferometer phase shifts with relative uncertainties smaller than 10⁻³. We decompose the aberrations δz onto the basis of Zernike polynomials Z_n^m, taking as a reference radius the finite size of the Raman beam (set by a 28 mm diameter aperture in the optical system). Assuming that the atoms are initially centred on the Raman mirror and in the detection zone, the effect of polynomials with no rotation symmetry (m ≠ 0) averages to zero, due to the symmetry of the position and velocity distributions [12]. We thus consider here only Zernike polynomials with no angular dependence, which correspond to the curvature of the wavefront (or defocus) and to higher order spherical aberrations.
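A minimal Monte Carlo sketch of the ingredients described above is given below. It is not the authors' code: the cloud size, beam waist, detection radius, pulse timings, reference radius and the single defocus term Z_2^0 are illustrative assumptions, and the simple weighted average of the three-pulse phase replaces the full fringe-fitting procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
kB, m = 1.380649e-23, 1.443e-25          # J/K, mass of 87Rb (kg)
k_eff = 4 * np.pi / 780e-9               # rad/m (assumed Raman wave vector)

# Illustrative parameters (assumptions, not the paper's exact values)
T_atoms = 1.8e-6                         # cloud temperature (K)
sigma_r0 = 0.5e-3                        # initial cloud radius (m)
w_raman = 12e-3                          # Raman beam 1/e^2 radius (m)
r_det = 5e-3                             # detection field-of-view radius (m)
t_pulses = np.array([0.01, 0.09, 0.17])  # times of the three pulses (s)
a0 = 10e-9                               # defocus half amplitude: 2*a0 = 20 nm peak-to-peak
R_ref = 14e-3                            # reference radius of the Zernike expansion (m)

def defocus(x, y):                       # Z_2^0 term: a0*(1 - 2 r^2) on the reference disk
    r2 = (x**2 + y**2) / R_ref**2
    return a0 * (1.0 - 2.0 * r2)

N = 200_000
sv = np.sqrt(kB * T_atoms / m)           # thermal velocity spread (m/s)
x0, y0 = rng.normal(0, sigma_r0, (2, N))
vx, vy = rng.normal(0, sv, (2, N))

phase = np.zeros(N)
weight = np.ones(N)
signs = np.array([1.0, -2.0, 1.0])       # pi/2 - pi - pi/2 phase combination
for s, t in zip(signs, t_pulses):
    x, y = x0 + vx * t, y0 + vy * t
    phase += s * k_eff * defocus(x, y)                    # wavefront phase at this pulse
    weight *= np.exp(-2 * (x**2 + y**2) / w_raman**2)     # crude Raman coupling weight
weight *= (x**2 + y**2 < r_det**2)                        # detection field-of-view cut

bias_phase = np.average(phase, weights=weight)
bias_g = bias_phase / (k_eff * 0.08**2)                   # convert to a gravity bias (T = 80 ms)
print(f"gravity bias ~ {bias_g * 1e9:.1f} nm/s^2")
```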
To illustrate the impact of finite size effects, we display in figure 3 calculated gravity shifts corresponding to different cases, for a defocus (Z_2^0) with a peak-to-peak amplitude of 2a_0 = 20 nm across the reference radius, which corresponds to δz(r) = a_0(1 - 2r²), with r the normalized radial distance. The black squares correspond to the ideal case of infinite Raman laser radius and detection field of view and give a linear dependence versus temperature. The circles (resp. triangles) correspond to the case of finite beam waist and infinite detection field of view (resp. infinite beam waist and finite detection field of view), and finally diamonds include both finite size effects. Deviations from the linear behaviour arise from the reduction or suppression of the contribution of the hottest atoms. The effect of the finite Raman beam waist is found to be more important than the effect of the finite detection area. Finally, we calculate for this simple study case a bias of -63 nm/s² at the temperature of 1.8 µK, for a peak-to-peak amplitude of 20 nm. This implies that, at the temperature of laser cooled samples and for a pure curvature, a peak-to-peak amplitude of less than 3 nm (λ/260 PV) over a reference diameter of 28 mm is required for the bias to be smaller than 10 nm/s².
We then calculate the effect of the first 7 Z_n^0 polynomials (for even n ranging from 2 to 14) for the same peak-to-peak amplitude of 2a_0 = 20 nm as a function of the atomic temperature. Figure 4 displays the results obtained, restricted for clarity to the first five polynomials. All orders exhibit, as common features, a linear behaviour at low temperatures and a trend for saturation at high temperatures. Interestingly, we find non-monotonic behaviours in the temperature range we explore and the presence of local extrema.
Using the phase shifts calculated at the temperatures of the measurements, the data of figure 2 can now be adjusted, using a weighted least square adjustment, by a combination of the contribution of the first Zernike polynomials, which then constitute a finite basis for the decomposition of the wavefront. The adjustment was realized for increasing numbers of polynomials, so as to assess the impact of the truncation of the basis. We give in table I the values of the correlation coefficient R and the extrapolated value at zero temperature as a function of the number of polynomials. We obtain stable values for both R and the extrapolated value to zero temperature, of about -55 nm/s 2 for numbers of polynomials larger than 5. This indicates that the first 5 polynomials are enough to faithfully reconstruct a model wavefront that well reproduces the data. When increasing the number of polynomials, we indeed find that the reconstructed wavefront is dominated by the lowest polynomial orders. The results of the adjustment with 5 polynomials is displayed as a red line in figure 2 and the 68% confidence bounds as a filled area. The flatness of the reconstructed wavefront at the centre of the Raman laser beam is found to be as small as 20 nm PV (Peak Valley) over a diameter of 20 mm. The bias due to the optical aberrations at the reference temperature of 1.8 µK, which corresponds to the temperature of the laser cooled atom source, is thus 56(13) nm/s 2 . Its uncertainty is three times better than its previous evaluation [4], which in principle will improve accordingly our accuracy budget.
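A schematic version of such a weighted least-squares adjustment is sketched below, assuming the per-polynomial model curves produced by the Monte Carlo simulation are available. The measured values, uncertainties and placeholder model curves are invented for the example; only the structure of the fit and of the zero-temperature extrapolation follows the text.

```python
import numpy as np

def model_curve(i, T):
    """Differential g shift (w.r.t. 1.8 uK) produced at temperature T (uK) by
    Zernike polynomial i with unit amplitude.  In the real analysis these curves
    come from the Monte Carlo simulation; a linear placeholder is used here."""
    return (np.asarray(T, dtype=float) - 1.8) * (-6.0) / (i + 1)

# Placeholder measurements: temperatures (uK), differential shifts (nm/s^2), errors.
T_meas = np.array([0.05, 0.10, 0.30, 0.65, 1.80, 3.0, 5.0, 7.0])
g_meas = np.array([45.0, 40.0, 30.0, 18.0, 0.0, -3.0, -5.0, -4.0])
sigma  = np.full_like(g_meas, 8.0)

n_poly = 5
A = np.column_stack([model_curve(i, T_meas) for i in range(n_poly)]) / sigma[:, None]
b = g_meas / sigma
amps, *_ = np.linalg.lstsq(A, b, rcond=None)        # fitted Zernike amplitudes

g_at_zero_T = sum(a * model_curve(i, 0.0) for i, a in enumerate(amps))
print(f"extrapolated differential shift at T = 0: {g_at_zero_T:.1f} nm/s^2")
```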
On the other hand, interatomic interactions in ultracold sources can induce significant phase shifts [21,22] and phase diffusion [23], leading to bias and loss of contrast for the interferometer. Nevertheless, the rapid decrease of the atomic density when interrogating the atoms in free fall reduces drastically the impact of interactions [24][25][26]. To investigate this, we have performed a differential measurement for two different atom numbers at the temperature of 650 nK. The number of atoms was varied from 25000 to 5000 by changing the efficiency of a microwave pulse in the preparation phase, which leaves the spatial distribution and temperature unchanged. We measured an unresolved difference of -7(12) nm/s 2 . This allows us to put an upper bound on the effect of interactions, which we find lower than 1 nm/s 2 per thousand atoms.
The uncertainty in the evaluation of the bias related to optical aberrations can be improved further by performing measurements at even lower temperatures, which will require in our set-up to improve the efficiency of the evaporative cooling stage. A larger number of atoms would allow to limit the degradation of the short term sensitivity and to perform measurements with shorter averaging times. More, absorption imaging with a vertical probe beam would allow for spatially resolved phase measurements across the cloud [13], which would allow for improving the reconstruction of the wavefront. The temperature can also be drastically reduced, down to the low nK range, using delta kick collimation techniques [27,28]. In addition to a reduced ballistic expansion, the use of ultracold atoms also offers a better control of the initial position and mean velocity of the source with respect to laser cooled sources, which suffer from fluctuations induced by polarisation and intensity variations of the cooling laser beams. Such an improved control reduces the fluctuations of systematic effects related to the transverse motion of the atoms, such as the Coriolis acceleration and the bias due to aberrations, and thus will improve the long term stability [26].
With the above-mentioned improvements, and after a careful re-examination of the accuracy budget [7], accuracies better than 10 nm/s 2 are within reach. This will make quantum sensors based on atom interferometry the best standards in gravimetry. Furthermore, the improved control of systematics and the resulting gain in stability will open new perspectives for applications, in particular in the field of geophysics [29]. Finally, the method proposed here can be applied to any atomic sensor based on light beamsplitters, which are inevitably affected by distortions of the lasers wavefronts. The improved control of systematics it provides will have significant impact in high precision measurements with atom interferometry, with important applications to geodesy [30,31], fundamental physics tests [14,32,33] and to the development of highest grade inertial sensors [34].
FIG. 1. (color online) Scheme of the experimental setup, illustrating the effect of wavefront aberrations. Due to their ballistic expansion across the Raman beam, the atoms sample different parasitic phase shifts at the three π/2 - π - π/2 pulses due to distortions in the wavefront (displayed in blue as a distorted surface). This leads to a bias, resulting from the average of the effect over all atomic trajectories, filtered by finite size effects, such as related to the waist and clear aperture of the Raman beam and to the finite detection field of view.
FIG. 2. (color online) Gravity measurements as a function of the atom temperature. The measurements, displayed as black circles, are performed in a differential way, with respect to a reference temperature of 1.8 µK (displayed as a red circle). The red line is a fit to the data with a subset of five Zernike polynomials and the filled area the corresponding 68% confidence area.
FIG. 3. (color online) Calculation of the impact of the size of the Raman beam waist (RB) and of the detection field of view (DFoV) on the gravity shift induced by a defocus as a function of the atomic temperature. The peak-to-peak amplitude of the defocus is 20 nm. The results correspond to four different cases, depending on whether the sizes of the Raman beam waist and detection field of view are taken as finite or infinite.
ACKNOWLEDGMENTS
We acknowledge the contributions from X. Joffrin, J. Velardo and C. Guerlin in earlier stages of this project. We thank R. Geiger and A. Landragin for useful discussions and careful reading of the manuscript. |
01764494 | en | [
"math.math-gr",
"math.math-gt"
] | 2024/03/05 22:32:13 | 2019 | https://hal.science/hal-01764494/file/MinimalStandardizers_HAL-1.pdf | María Cumplido
On the minimal positive standardizer of a parabolic subgroup of an Artin-Tits group
The minimal standardizer of a curve system on a punctured disk is the minimal positive braid that transforms it into a system formed only by round curves. In this article, we give an algorithm to compute it in a geometrical way. Then, we generalize this problem algebraically to parabolic subgroups of Artin-Tits groups of spherical type and we show that, to compute the minimal standardizer of a parabolic subgroup, it suffices to compute the pn-normal form of a particular central element.
Introduction
Let D be the disk in C with diameter the real segment [0, n + 1] and let D n = D \ {1, . . . , n} be the n-punctured disk. The n-strand braid group, B n , can be identified with the mapping class group of D n relative to its boundary ∂D n . B n acts on the right on the set of isotopy classes of simple closed curves in the interior of D n . The result of the action of a braid α on the isotopy class C of a curve C will be denoted by C α and it is represented by the image of the curve C under any automorphism of D n representing α. We say that a curve is non-degenerate if it is not homotopic to a puncture, to a point or to the boundary of D n , in other words, if it encloses more than one and less than n punctures. A curve system is a collection of isotopy classes of disjoint non-degenerate simple closed curves, that are pairwise non-isotopic.
Curve systems are very important as they allow to use geometric tools to study braids. From Nielsen-Thurston theory [START_REF] Thurston | On the geometry and dynamics of diffeomorphisms of surfaces[END_REF], every braid can be decomposed along a curve system, so that each component becomes either periodic or pseudo-Anosov. The simplest possible scenario appears when the curve is standard : Definition 1. A simple closed curve in D n is called standard if it is isotopic to a circle centered at the real axis. A curve system containing only isotopy classes of standard curves is called standard.
Every curve system can be transformed into a standard one by the action of a braid, as we shall see. Let B + n be the submonoid of B n of positive braids, generated by σ 1 , . . . , σ n-1 [START_REF] Artin | Theory of Braids[END_REF]. We can define a partial order ≼ on B n , called the prefix order, as follows: for α, β ∈ B n , α ≼ β if there is γ ∈ B + n such that αγ = β. This partial order endows B n with a lattice structure, i.e., for each pair α, β ∈ B n , their gcd α ∧ β and their lcm α ∨ β with respect to ≼ exist and are unique. Symmetrically, we can define the suffix order ≽ as follows: for α, β ∈ B n , β ≽ α if there is γ ∈ B + n such that γα = β. We will focus on B n as a lattice with respect to ≼, and we remark that B + n is a sublattice of B n . In 2008, Lee and Lee proved the following: Theorem 2 ([17, Theorem 4.2]). Given a curve system S in D n , its set of standardizers
St(S) = {α ∈ B + n : S α is standard}
is a sublattice of B + n . Therefore, St(S) contains a unique ≼-minimal element.
The first aim of this paper is to give a direct algorithm to compute the ≼-minimal element of St(S), for a curve system S. The algorithm, explained in Section 4, is inspired by Dynnikov and Wiest's algorithm to compute a braid given its curve diagram [START_REF] Dynnikov | On the complexity of braids[END_REF] and the modifications made in [START_REF] Caruso | Algorithmes et généricité dans les groupes de tresses[END_REF].
The second aim of the paper is to solve the analogous problem for Artin-Tits groups of spherical type. Definition 3. Let Σ be a finite set of generators and M = (m s,t ) s,t∈Σ a symmetric matrix with m s,s = 1 and m s,t ∈ {2, . . . , ∞} for s = t. The Artin-Tits system associated to M is (A, Σ), where A is a group (called Artin-Tits group) with the following presentation A = Σ | sts . . . ms,t elements = tst . . . ms,t elements ∀s, t ∈ Σ, s = t, m s,t = ∞ .
For instance, B n has the following presentation [START_REF] Artin | Theory of Braids[END_REF]:
B n = ⟨ σ 1 , . . . , σ n-1 | σ i σ j = σ j σ i for |i - j| > 1, σ i σ j σ i = σ j σ i σ j for |i - j| = 1 ⟩.
The Coxeter group W associated to (A, Σ) can be obtained by adding the relations s² = 1:
W = ⟨ Σ | s² = 1 ∀s ∈ Σ, sts . . . (m s,t elements) = tst . . . (m s,t elements), ∀s, t ∈ Σ, s ≠ t, m s,t ≠ ∞ ⟩.
If W is finite, the corresponding Artin-Tits group is said to have spherical type. We will just consider Artin-Tits groups of spherical type, assuming that a spherical type Artin-Tits system is fixed. If A cannot be decomposed as a direct product of non-trivial Artin-Tits groups, we say that A is irreducible. Irreducible Artin-Tits groups of spherical type are completely classified [START_REF] Coxeter | The complete enumeration of finite groups of the form r 2 i = (r i r j ) k ij = 1[END_REF].
Let A be an Artin-Tits group of spherical type. A standard parabolic subgroup, A X , is the subgroup generated by some X ⊆ Σ. A subgroup P is called parabolic if it is conjugate to a standard parabolic subgroup, that is, P = α -1 A Y α for some standard parabolic subgroup A Y and some α ∈ A. Notice that we may have P = α -1 A Y α = β -1 A Z β for distinct Y, Z ⊂ Σ and distinct α, β ∈ A. We will write P = (Y, α) to express that A Y and α are known data defining the parabolic subgroup P .
There is a natural way to associate a parabolic subgroup of B n to a curve system. Suppose that A = B n and let A X be the standard parabolic subgroup generated by {σ i , σ i+1 , . . . , σ j } ⊆ {σ 1 , . . . , σ n-1 }. Let C be the isotopy class of the circle enclosing the punctures i, . . . , j + 1 in D n . Then A X fixes C and we will say that A X is the parabolic subgroup associated to C. Suppose that there is some curve system C', such that C' = C α for some α ∈ B n . Then α -1 A X α fixes it and we say that α -1 A X α is its associated parabolic subgroup. The parabolic subgroup associated to a system of non-nested curves is the direct sum of the subgroups associated to each curve. Notice that this is a well defined subgroup of B n , as the involved subgroups commute. Therefore, we can talk about parabolic subgroups instead of talking about curves. Most of the results for curves on D n that can be translated in terms of parabolic subgroups can also be extended to every Artin-Tits group of spherical type. That is why parabolic subgroups play a similar role, in Artin-Tits groups, to the one played by systems of curves in B n .
Our second purpose in this paper is to give a fast and simple algorithm to compute the minimal positive element that conjugates a given parabolic subgroup to a standard parabolic subgroup. The central Garside element of a standard parabolic subgroup A X will be denoted by c X and is to be defined in the next section.
Given a generic parabolic subgroup P = (X, α), its central Garside element will be denoted by c P . We also define the minimal standardizer of the parabolic subgroup P = (X, α) to be the minimal positive element that conjugates P to a standard parabolic subgroup. The existence and uniqueness of this element will be shown in this paper. Keep in mind that the pn-normal form of an element is a particular decomposition of the form ab -1 , where a and b are positive and have no common suffix. The main result of this paper is the following:
Theorem 37 Let P = (X, α) be a parabolic subgroup. If c P = ab -1 is in pn-normal form, then b is the minimal standardizer of P .
Thus, the algorithm will take a parabolic subgroup P = (X, α) and will just compute the normal form of its central Garside element c P , obtaining immediately the minimal standardizer of P .
The paper will be structured in the following way: In Section 2 some results and concepts about Garside theory will be recalled. In Sections 3 and 4 the algorithm for braids will be explained. In Section 5 the algorithm for Artin-Tits groups will be described and, finally, in Section 6 we will bound the complexity of both procedures.
Preliminaries about Garside theory
Let us briefly recall some concepts from Garside theory (for a general reference, see [START_REF] Dehornoy | Gaussian Groups and Garside Groups, two Generalisations of Artin Groups[END_REF]). A group G is called a Garside group with Garside structure (G, P, ∆) if it admits a submonoid P of positive elements such that P ∩ P -1 = {1} and a special element ∆ ∈ P, called Garside element, with the following properties:
• There is a partial order ≼ in G, defined by a ≼ b ⇔ a -1 b ∈ P, such that for all a, b ∈ G there exists a unique gcd a ∧ b and a unique lcm a ∨ b with respect to ≼. This order is called the prefix order and it is invariant under left-multiplication.
• The set of simple elements [1, ∆] = {a ∈ G | 1 ≼ a ≼ ∆} generates G.
• ∆ -1 P∆ = P.
• P is atomic: If we define the set of atoms as the set of elements a ∈ P such that there are no non-trivial elements b, c ∈ P such that a = bc, then for every x ∈ P there is an upper bound on the number of atoms in a decomposition of the form x = a 1 a 2 • • • a n , where each a i is an atom.
The conjugate by ∆ of an element x will be denoted τ (x) = x ∆ = ∆ -1 x∆.
In a Garside group, the monoid P also induces a partial order invariant under rightmultiplication, the suffix order . This order is defined by a b ⇔ ab -1 ∈ P, and for all a, b ∈ G there exists a unique gcd (a ∧ b) and a unique lcm (a ∨ b) with respect to . We say that a Garside group has finite type if [1, ∆] is finite. It is well known that Artin-Tits groups of spherical type admit a Garside structure of finite type [START_REF] Brieskorn | Artin-gruppen und Coxeter-gruppen[END_REF][START_REF] Dehornoy | Gaussian Groups and Garside Groups, two Generalisations of Artin Groups[END_REF]. Moreover: We say that x
= ∆ k x 1 • • • x r is in left normal form if k ∈ Z, x i ∉ {1, ∆} is a simple element for i = 1, . . . , r, and x i x i+1 is in left normal form for 0 < i < r.
Analogously, x = x 1 • • • x r ∆ k is in right normal form if k ∈ Z, x i ∉ {1, ∆} is a simple element for i = 1, . . . , r, and x i x i+1 is in right normal form for 0 < i < r.
It is well known that the normal form of an element is unique [START_REF] Dehornoy | Gaussian Groups and Garside Groups, two Generalisations of Artin Groups[END_REF]Corollary 7.5]. Moreover, the numbers r and k do not depend on the normal form (left or right). We define the infimum, the canonical length and the supremum of x respectively as inf(x) = k, (x) = r and sup(x) = k +r.
Let a and b be two simple elements such that a • b is in left normal form. One can write its inverse as b -1 a -1 = ∆ -2 ∂ -3 (b)∂ -1 (a). This is in left normal form because ∂ -1 (b)∂(a) is in normal form by definition and τ = ∂ 2 preserves left normal forms. More generally (see [START_REF] Elrifai | Algorithms for positive braids[END_REF]
), if x = ∆ k x 1 • • • x r is in left normal form, then the left normal form of x -1 is x -1 = ∆ -(k+r) ∂ -2(k+r-1)-1 (x r )∂ -2(k+r-2)-1 (x r-1 ) • • • ∂ -2k-1 (x 1 )
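For instance, specializing this formula to the case k = 0 and r = 2 recovers the two-factor expression given above: (x 1 x 2 ) -1 = ∆ -2 ∂ -2(1)-1 (x 2 ) ∂ -2(0)-1 (x 1 ) = ∆ -2 ∂ -3 (x 2 ) ∂ -1 (x 1 ).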
For a right normal form, x = x 1 • • • x r ∆ k , the right normal form of x -1 is:
x -1 = ∂ 2k+1 (x r )∂ 2(k+1)+1 (x r-1 ) • • • ∂ 2(k+r-1)+1 (x 1 )∆ -(k+r)
Definition 7 ([5, Theorem 2.6]). Let a, b ∈ P. Then x = a -1 b is said to be in np-normal form if a and b have no non-trivial common prefix, i.e. a ∧ b = 1 with respect to ≼. Similarly, we say that x = ab -1 is in pn-normal form if a and b have no non-trivial common suffix, i.e. a ∧ b = 1 with respect to ≽.
Definition 8. Let ∆ k x 1 • • • x r with r > 0 be the left normal form of x. We define the initial and the final factor respectively as ι(x) = τ -k (x 1 ) and ϕ(x) = x r . We will say that x is rigid if ϕ(x) • ι(x) is in left normal form or if r = 0.
Definition 9 ([START_REF] Elrifai | Algorithms for positive braids[END_REF], [START_REF] Gebhardt | The cyclic sliding operation in Garside groups[END_REF] Definition 8]). Let ∆ k x 1 • • • x r with r > 0 be the left normal form of x. The cycling of x is defined as
c(x) = x ι(x) = ∆ k x 2 • • • x r ι(x).
The decycling of x is d(x) = x (ϕ(x) -1 ) = ϕ(x)∆ k x 1 . . . x r-1 . We also define the preferred prefix of x as p(x) = ι(x) ∧ ι(x -1 ).
The cyclic sliding of x is defined as the conjugate of x by its preferred prefix:
s(x) = x p(x) = p(x) -1 xp(x).
Let G be a Garside group. For x ∈ G, inf s (x) and sup s (x) denote respectively the maximal infimum and the minimal supremum in the conjugacy class x G .
• The super summit set [START_REF] Elrifai | Algorithms for positive braids[END_REF][START_REF] Picantin | The Conjugacy Problem in Small Gaussian Groups[END_REF]
of x is SSS(x) = {y ∈ x G | ℓ(y) is minimal in x G } = {y ∈ x G | inf(y) = inf s (x) and sup(y) = sup s (x)}.
• The set of sliding circuits of x is SC(x) = {y ∈ x G | s m (y) = y for some m ≥ 1}.
These sets are finite if the set of simple elements is finite and their computation is very useful to solve the conjugacy problem in Garside groups. They satisfy the following inclusions:
SSS(x) ⊇ U SS(x) ⊇ SC(x).
The braid group, B n
A braid with n strands can be seen as a collection of n disjoint paths in a cylinder, defined up to isotopy, joining n points at the top with n points at the bottom, running monotonically in the vertical direction.
Each generator σ i represents a crossing between the strands in positions i and i + 1 with a fixed orientation. The generator σ -1 i represents the crossing of the same strands with the opposite orientation. When considering a braid as a mapping class of D n , these crossings are identified with the swap of two punctures in D n (See Figure 1).
Figure 1: The braid σ_1 σ_2^{-1} and how it acts on a curve in D_3.
Remark 10. The standard Garside structure of the braid group
B n is (B n , B + n , ∆ n ) where ∆ n = σ 1 ∨ • • • ∨ σ n-1 = (σ 1 σ 2 • • • σ n-1 )(σ 1 σ 2 • • • σ n-2 ) • • • (σ 1 σ 2 )σ 1
The simple elements in this case are also called permutation braids [START_REF] Elrifai | Algorithms for positive braids[END_REF], because the set of simple braids is in one-to-one correspondence with the set of permutations of n elements. Later we will use the following result:
Lemma 11 ([11, Lemma 2.4]).
Let s be a simple braid. Strands j and j + 1 cross in s if and only if σ j ≼ s.
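As a small illustration of Lemma 11, the following Python sketch tests the condition σ_j ≼ s for a positive permutation braid. The encoding of s by the permutation it induces on strand positions (perm[i-1] is the final position of the strand starting at position i) is an assumption of this example, not a convention fixed by the paper.

```python
def sigma_is_prefix(perm, j):
    """Lemma 11 as a test: sigma_j is a prefix of the positive permutation braid
    encoded by `perm` if and only if the strands starting at positions j and j+1
    cross, i.e. their order is reversed at the bottom of the braid."""
    return perm[j - 1] > perm[j]

# Example: Delta_3 = sigma_1 sigma_2 sigma_1 reverses the strand order,
# so every sigma_j is a prefix of it.
delta3 = (3, 2, 1)
print(sigma_is_prefix(delta3, 1), sigma_is_prefix(delta3, 2))   # True True
```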
Detecting bending points
In order to describe a non-degenerate closed curve C in D n , we will use a notation introduced in [START_REF] Fenn | Ordering the braid groups[END_REF]. Recall that D n has diameter [0, n + 1] and that the punctures of D n are placed at 1, 2, . . . , n ∈ R. Choose a point on C lying on the real axis and choose an orientation for C. We will obtain a word W (C) representing C, on the alphabet {∪, ∩, 0, 1, . . . , n}, by running along the curve, starting and finishing at the chosen point. We write down a symbol ∪ for each arc on the lower half plane, a symbol ∩ for each arc on the upper half plane, and a number m for each intersection of C with the real segment (m, m + 1). An example is provided in Figure 2. For an isotopy class of curves C, W (C) is the word associated to a reduced representative C red , i.e., a curve in C which has minimal intersection with the real axis. C red is unique up to isotopy of D n fixing the real diameter setwise [START_REF] Fenn | Ordering the braid groups[END_REF], and W (C) is unique up to cyclic permutation and reversing.
Remark 12. Notice that if a curve C does not have minimal intersection with the real axis, then W (C) contains a subword, up to reversal and cyclic permutation, of the form p ∩ p or p ∪ p. Hence, the curve can be isotoped by "pushing" this arc in order not to intersect the real axis. This is equivalent to removing the subword mentioned before from W (C). In fact, we will obtain W (C) by removing all subwords of this kind from W (C). The process of removing p ∩ p (resp. p ∪ p) from W (C) is called relaxation of the arc p ∩ p (resp. p ∪ p).
Definition 13. Let C be a non-degenerate simple closed curve. We say that there is a bending point (resp. reversed bending point) of C at j if we can find in W (C), up to cyclic permutation and reversing, a subword of the form i ∪ j ∩ k (resp. i ∩ j ∪ k) for some 0 ≤ i < j < k ≤ n (Figure 3).
We say that a curve system has a bending point at j if one of its curves has a bending point at j.
Figure 3: A bending point at j in a curve C.
The algorithm we give in Section 4 takes a curve system S and "untangles" it in the shortest (positive) way. That is, it gives the shortest positive braid α such that S α is standard, i.e., the minimal element in St(S). Bending points are the key ingredient of the algorithm. We will show that if a curve system S has a bending point at j, then σ j is a prefix of the minimal element in St(S). This will allow to untangle S by looking for bending points and applying the corresponding σ j to the curve until no bending point is found. The aim of this section is to describe a suitable input for this algorithm and to show the following result. Proposition 14. A curve system is standard if and only if its reduced representative has no bending points.
Dynnikov coordinates
We have just described a non-degenerate simple closed curve in D n by means of the word W (C). There is a different and usually much shorter way to determine a curve system S in D n : its Dynnikov coordinates [8, Chapter 12]. The method to establish the coordinates of C is as follows. Take a triangulation of D n as in Figure 4 and let x i be the number of times the curve system S intersects the edge e i . The Dynnikov coordinates of the curve system are given by the t-uple (x 0 , x 1 , . . . , x 3n-4 ). There exists a reduced version of these coordinates, namely (a 0 , b 0 , . . . , a n-1 , b n-1 ), where
a_i = (x_{3i-1} - x_{3i}) / 2 ,   b_i = (x_{3i-2} - x_{3i+1}) / 2 ,   ∀ i = 1, ..., n - 2
and a 0 = a n-1 = 0, b 0 = -x 0 and b n-1 = x 3n-4 . See an example in Figure 5.
Figure 4: The triangulation of D_n, with punctures 1, 2, 3, . . . , n - 1, n and edges e_0, e_1, . . . , e_{3n-4}.
Furthermore, there are formulae determining how these coordinates change when applying σ_j^{±1} to the corresponding curve, for 0 < j < n. Proposition 15 ([7, Proposition 8.5.4]). For c = (a_0 , b_0 , . . . , a_{n-1} , b_{n-1} ), we have
c^{σ_k^{-1}} = (a'_0 , b'_0 , . . . , a'_{n-1} , b'_{n-1} ),
with a'_j = a_j , b'_j = b_j for j ∉ {k - 1, k}, and
a'_{k-1} = a_{k-1} + (δ⁺ + b_{k-1})⁺ , a'_k = a_k - (δ⁺ - b_k)⁺ , b'_{k-1} = b_{k-1} - (-δ')⁺ + δ⁺ , b'_k = b_k + (-δ')⁺ - δ⁺ , where δ = a_k - a_{k-1} , δ' = a'_k - a'_{k-1} and x⁺ = max(0, x).
We also have
c σ k = c λσ -1 k λ with (a 1 , b 1 , . . . , a n-1 , b n-1 ) λ = (-a 1 , b 1 , . . . , -a n-1 , b n-1 ).
Remark 16. Notice that the use of σ -1 k in the first equation above is due to the orientation of the strands crossings that we are taking for our braids (see Figure 1), which is the opposite of the orientation used in [START_REF] Dehornoy | Why are braids orderable?[END_REF].
(a_0 , b_0 , a_1 , b_1 , a_2 , b_2 , a_3 , b_3 ) = (0, -1, 1, -2, 3, -3, 0, 6).
Let us see how to detect a bending point of a curve system S with these coordinates. First of all, notice that there cannot be a bending point at 0 or at n. It is easy to check that there is a bending point at 1 if and only if x 2 < x 3 (Figure 6a). Actually, if R is the number of subwords of type 0 1 k for some 1 < k ≤ n, then x 3 = x 2 + 2R. Symmetrically, there is a bending point at n -1 if and only if x 3n-6 < x 3n-7 . A bending point at i, for 1 < i < n -1, is detected by comparing the coordinates a i-1 and a i (Figure 6b). Notice that arcs not intersecting e 3i-2 affect neither a i-1 nor a i , and arcs not intersecting the real line do not affect the difference a i-1 -a i . Hence, there is a bending point of S at i if and only if a i-1 -a i > 0. Using a similar argument we can prove that there is a reversed bending point of S at i if and only if a i-1 -a i < 0. Moreover, each bending point (resp. reversed bending point) at i increases (resp. decreases) by 1 the difference a i-1 -a i . We have just shown the following result: Lemma 17 (Bending point with Dynnikov coordinates). Let S be a curve system on D n with reduced Dynnikov coordinates (a 0 , b 0 , . . . , a n-1 , b n-1 ). For j = 1, . . . , n -1 there are exactly R ≥ 0 bending points (resp. reversed bending points) of S at j if and only if a j-1 -a j = R (resp. a j-1 -a j = -R) .
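Lemma 17 turns the detection of bending points into a comparison of consecutive a-coordinates. A direct Python transcription is sketched below; it assumes the reduced coordinates are passed as a flat list [a_0, b_0, . . . , a_{n-1}, b_{n-1}], and the sample call uses the coordinates of the example above.

```python
def bending_points(coords):
    """Return the positions j (1 <= j <= n-1) at which the curve system with
    reduced Dynnikov coordinates [a_0, b_0, ..., a_{n-1}, b_{n-1}] has at least
    one bending point, i.e. a_{j-1} - a_j > 0 (Lemma 17)."""
    a = coords[0::2]                      # a_0, a_1, ..., a_{n-1}
    return [j for j in range(1, len(a)) if a[j - 1] - a[j] > 0]

# Coordinates of the example above: (a_0,b_0,...,a_3,b_3) = (0,-1,1,-2,3,-3,0,6)
print(bending_points([0, -1, 1, -2, 3, -3, 0, 6]))   # -> [3]
```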
Lemma 18. Let S be a curve system as above. Then S is symmetric with respect to the real axis if and only if a i = 0, for 0 < i < n.
Proof. Just notice that a symmetry with respect to the real axis does not affect b-coordinates and changes the sign of every a i , for 0 < i < n.

Lemma 19. A curve system is standard if and only if it is symmetric with respect to the real axis.
Proof. For every m = 0, . . . , n, we can order the finite number of elements in S ∩ (m, m + 1) from left to right, as real numbers. Given an arc a b in W (S), suppose that it joins the i-th element in S ∩ (a, a + 1) with the j-th element in S ∩ (b, b + 1). The symmetry with respect to the real axis preserves the order of the intersections with the real line, hence the image a b of the above upper arc will also join the i-th element in S ∩ (a, a + 1) with the j-th element in S ∩ (b, b + 1). This implies that both arcs a b and a b form a single standard curve a b . As this can be done for every upper arc in S, it follows that S is standard.
Proof of Proposition 14. If the curve system is standard, then it clearly has no bending points. Conversely, if it has no bending points, by Lemma 17 the sequence a 0 , . . . , a n-1 is nondecreasing, starting and ending at 0, so it is constant. By Lemmas 18 and 19, the curve system is standard.
Standardizing a curve system
We will now describe an algorithm which takes a curve system S, given in reduced Dynnikov coordinates, and finds the minimal element in St(S). The algorithm will do the following: Start with β = 1. Check whether the curve has a bending point at j. If so, multiply β by σ j and restart the process with S σ j . A simple example is provided in Figure 7. The formal way is described in Algorithm 1. The minimality of the output is guaranteed by the following theorem, which shows that σ j is a prefix of the minimal standardizer in St(S), provided S has a bending point at j.

Theorem 20. Let S be a curve system with a bending point at j. Then σ j is a prefix of α, for every positive braid α such that S α is standard.
Figure 7: A simple example of how to find the minimal standardizer of a curve; in this example β = σ_2 σ_1 σ_1 σ_2 σ_1 σ_1.
To prove the theorem we will need a result from [3].

Definition 21. We will say that a simple braid s is compatible with a bending point at j if the strands j and j + 1 of s do not cross in s. That is, if σ_j is not a prefix of s (by Lemma 11).

Lemma 22 ([3, Lemma 8]). Let s 1 and s 2 be two simple braids such that s 1 s 2 is in left normal form. Let C be a curve with a bending point at j compatible with s 1 . Then, there exists some bending point of C s 1 compatible with s 2 .
Remark 23. The previous lemma holds also for a curve system, with the same proof.
Proof of Theorem 20. Suppose that α ∈ B + n is such that S α is standard and σ j is not a prefix of α. Let s 1 • • • s r be the left normal form of α. Notice that there is no ∆ p in the normal form. Otherwise, σ j would be a prefix of α, because it is a prefix of ∆. By Lemma 11, the strands j and j + 1 of s 1 do not cross because σ j is not a prefix of α. Thus, s 1 is compatible with a bending point at j and by Lemma 22, S s 1 has a bending point compatible with s 2 . By induction, S s 1 •••sm has a bending point compatible with s m+1 , for m = 2, . . . , r, where s r+1 is chosen to be such that s r • s r+1 is in left normal form. Hence, S α has a bending point, i.e., it is not standard, which is a contradiction.
In Algorithm 1 we can find the detailed procedure to compute the minimal element in St(S). Notice that, at every step, either the resulting curve has a bending point, providing a new letter of the minimal element in St(S), or it is standard and we are done. The process stops as B + n has homogeneous relations (actually, atomicity suffices to show that the process stops), so all positive representatives of the minimal element have the same length, which is precisely the number of bending points found during the process.
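Algorithm 1 can be transcribed almost literally into Python. In the sketch below, apply_sigma is assumed to be a user-supplied function implementing the coordinate update of Proposition 15; the sketch only illustrates the control flow and is not a reference implementation.

def standardize(coords, apply_sigma):
    """Sketch of Algorithm 1: repeatedly look for a bending point at j and apply
    sigma_j until the curve system is standard.

    coords       -- reduced Dynnikov coordinates (a_0, b_0, ..., a_{n-1}, b_{n-1})
    apply_sigma  -- function (coords, j) -> new coords, implementing Proposition 15
    returns the word of the minimal standardizer as a list of generator indices."""
    beta = []                                   # the positive braid word being built
    n = len(coords) // 2
    a = lambda c, j: c[2 * j]                   # a_j sits at the even positions
    j = 1
    while j < n:
        if a(coords, j) < a(coords, j - 1):     # bending point at j (Lemma 17)
            coords = apply_sigma(coords, j)
            beta.append(j)                      # record the letter sigma_j
            j = 1                               # restart the scan
        else:
            j += 1
    return beta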
Notice that Theorem 20 guarantees that the output of Algorithm 1 is a prefix of every standardizer of S. This provides an alternative proof of the existence and uniqueness of a minimal element in St(S).
Standardizing a parabolic subgroup
Now we will give an algorithm to find the minimal standardizer of a parabolic subgroup P = (X, α) of an Artin-Tits group A of spherical type. The existence and uniqueness of this element will be shown by construction.
Proposition 24 ([18]). A parabolic subgroup A X of an Artin-Tits group of spherical type is an Artin-Tits group of spherical type whose Artin-Tits system is (A X , X).
Proposition 25 ([2, Lemma 5.1, Theorem 7.1]). Let (A Σ , Σ) be an Artin-Tits system where A Σ is of spherical type. Then, a Garside element for A Σ is:
∆_Σ = ⋁_{s∈Σ} s (the least common multiple of the elements of Σ, with respect to either the prefix or the suffix order),
and the submonoid of positive elements is the monoid generated by Σ. Moreover, if A Σ is irreducible, then (∆ Σ ) e generates the center of A Σ , for some e ∈ {1, 2}.

Definition 26. Let A X be an Artin-Tits group of spherical type. We define its central Garside element as c X = (∆ X ) e , where e is the minimal positive integer such that (∆ X ) e ∈ Z(A X ). We also define c X,α := αc X α -1 .
Proposition 27 ([16, Proposition 2.1]). Let X, Y ⊆ Σ and g ∈ A. The following conditions are equivalent,
1. g -1 A X g ⊆ A Y ; 2. g -1 c X g ∈ A Y ;
3. g = xy where y ∈ A Y and x conjugates X to a subset of Y .
The above proposition is a generalization of [19, Theorem 5.2] and implies, as we will see, that conjugating standard parabolic subgroups is equivalent to conjugating their central Garside elements. This will lead us to the definition of the central Garside element for a non-standard parabolic subgroup as given in Proposition 34. In order to prove the following results, we need to define an object that generalizes to Artin-Tits groups of spherical type some operations used in braid theory:
Definition 28. Let X ⊂ Σ, t ∈ Σ. We define r_{X,t} = ∆_{X∪{t}} ∆_X^{-1}.

Remark 29. In the case t ∉ X, this definition is equivalent to the definition of positive elementary ribbon [Godelle, Definition 0.4]. Notice that if t ∈ X, r_{X,t} = 1. Otherwise, notice that ∆_{X∪{t}} is simple, and that a simple element cannot be written as a word with two consecutive repeated letters [2, Lemma 5.4]. As ∆_X can start with any letter of X, it follows that if t ∉ X, the only possible final letter of r_{X,t} is t. In particular, t is a suffix of r_{X,t}.
Proposition 30. There is a unique Y ⊂ X ∪ {t} such that r X,t X = Y r X,t .
Proof. Given Z ⊂ Σ, conjugation by ∆ Z permutes the elements of Z. Let us denote by Y the image of X under the permutation of X ∪ {t} induced by the conjugation by ∆ X∪{t} . Then
r_{X,t} X r_{X,t}^{-1} = ∆_{X∪{t}} ∆_X^{-1} X ∆_X ∆_{X∪{t}}^{-1} = ∆_{X∪{t}} X ∆_{X∪{t}}^{-1} = Y.
Artin-Tits groups of spherical type can be represented by Coxeter graphs. Recall that such a group, A, is defined by a symmetric matrix M = (m_{i,j})_{i,j∈S} and the finite set of generators Σ. The Coxeter graph associated to A is denoted Γ_A. The set of vertices of Γ_A is Σ, and there is an edge joining two vertices s, t ∈ Σ if m_{s,t} ≥ 3. The edge will be labelled with m_{s,t} if m_{s,t} ≥ 4. We say that the group A is indecomposable if Γ_A is connected and decomposable otherwise. If A is decomposable, then there exists a non-trivial partition Σ = X_1 ⊔ ⋯ ⊔ X_k such that A is isomorphic to A_{X_1} × ⋯ × A_{X_k}, where each A_{X_j} is indecomposable (each X_j is just the set of vertices of a connected component of Γ_X). Each A_{X_j} is called an indecomposable component of A.

Lemma 31. Let X, Y ⊂ Σ and let X = X_1 ⊔ ⋯ ⊔ X_n and Y = Y_1 ⊔ ⋯ ⊔ Y_m be the partitions of X and Y, respectively, inducing the indecomposable components of A_X and A_Y. Then, for every g ∈ A, the following conditions are equivalent:
1. g -1 A X g = A Y .
2. m = n and g = xy, where y ∈ A Y and the parts of Y can be reordered so that we have
x -1 X i x = Y i for i = 1, . . . , n.
3. m = n and g = xy, where y ∈ A Y and the parts of Y can be reordered so that we have
x -1 A X i x = A Y i for i = 1, . . . , n.
Proof. Suppose that g -1 A X g = A Y . By Proposition 27, we can decompose g = xy where y ∈ A Y and x conjugates the set X to a subset of the set Y . Since conjugation by y induces an automorphism of A Y , it follows that x conjugates A X isomorphically onto A Y , so it conjugates X to the whole set Y . Since the connected components of Γ X (resp. Γ Y ) are determined by the commutation relations among the letters of X (resp. Y ), it follows that conjugation by x sends indecomposable components of X onto indecomposable components of Y . Hence m = n and x -1 X i x = Y i for i = 1, . . . , n (reordering the indecomposable components of Y in a suitable way), as we wanted to show. Thus, statement 1 implies statement 2. Statement 2 implies 3 trivially and finally the third statement implies the first one as
A_X = A_{X_1} × ⋯ × A_{X_n} and A_Y = A_{Y_1} × ⋯ × A_{Y_n}.

Lemma 32. Let X, Y ⊆ Σ, g ∈ A. Then, g^{-1} A_X g = A_Y ⟺ g^{-1} c_X g = c_Y.
Proof. Suppose that g -1 c X g = c Y . Then, by Proposition 27, we have g -1 A X g ⊆ A Y and also gA Y g -1 ⊆ A X . As conjugation by g is an isomorphism of A, the last inclusion is equivalent to A Y ⊆ g -1 A X g. Thus, g -1 A X g = A Y , as desired.
Conversely, suppose that g -1 A X g = A Y . By using Lemma 31, we can decompose g = xy where y ∈ A Y and x is such that x -1 A X i x = A Y i , where A X i and A Y i are the indecomposable components of A X and A Y for i = 1, . . . , n. As the conjugation by x defines an isomorphism between A X i and A Y i , we have that
x^{-1} Z(A_{X_i}) x = Z(A_{Y_i}). Hence, we have x^{-1} c_{X_i} x = ∆_{Y_i}^k for some k ∈ Z, because the center of irreducible Artin-Tits groups of spherical type is cyclic (Proposition 25). Let c_{X_i} = ∆_{X_i}^{ℓ_1} and c_{Y_i} = ∆_{Y_i}^{ℓ_2}. As A_{X_i} and A_{Y_i} are isomorphic, ℓ_1 = ℓ_2. Also notice that in an Artin-Tits group of spherical type the relations are homogeneous and so k = ℓ_1 = ℓ_2, having x^{-1} c_{X_i} x = c_{Y_i}. Let ℓ = max{ℓ_i | c_{X_i} = ∆_{X_i}^{ℓ_i}} = max{ℓ_i | c_{Y_i} = ∆_{Y_i}^{ℓ_i}}, and denote d_{X_i} = ∆_{X_i}^ℓ and d_{Y_i} = ∆_{Y_i}^ℓ for i = 1, . . . , n. Notice that d_{X_i} is equal to either c_{X_i} or (c_{X_i})^2, and the same happens for each d_{Y_i}; hence x^{-1} d_{X_i} x = d_{Y_i} for i = 1, . . . , n. Then, as c_X = ∏_{i=1}^n d_{X_i} and c_Y = ∏_{i=1}^n d_{Y_i}, it follows that x^{-1} c_X x = c_Y. Therefore, g^{-1} c_X g = y^{-1} (x^{-1} c_X x) y = y^{-1} c_Y y = c_Y.
Lemma 33. Let P = (X, α) be a parabolic subgroup and A Y be a standard parabolic subgroup of an Artin-Tits group A of spherical type. Then we have
g -1 P g = A Y ⇐⇒ g -1 c X,α g = c Y . Proof. If P = (X, α), it follows that g -1 P g = A Y if and only if g -1 αA X α -1 g = A Y . By Lemma 32, this is equivalent to g -1 αc X α -1 g = c Y , i.e., g -1 c X,α g = c Y .
Proposition 34. Let P = (X, α) = (Y, β) be a parabolic subgroup of an Artin-Tits group of spherical type. Then c X,α = c Y,β and we can define c P := c X,α to be the central Garside element of P .
Proof. Suppose that g is a standardizer of P such that g -1 P g = A Z . By using Lemma 33, we have that c Z = g -1 c X,α g = g -1 c Y,β g. Thus, c X,α = c Y,β .
By Lemma 33, a positive standardizer of a parabolic subgroup P = (X, α) is a positive element conjugating c P to some c Y . Let
C + A Σ (c P ) = {s ∈ P | s = u -1 c P u, u ∈ A Σ }
be the set of positive elements conjugate to c P (which coincides with the positive elements conjugate to c X ). The strategy to find the minimal standardizer of P will be to compute the minimal conjugator from c P to C + A Σ (c P ). That is, the shortest positive element u such that u -1 c X,α u is positive.
Proposition 35. If x = ab -1 is in pn-normal form and x is conjugate to a positive element, then b is a prefix of every positive element conjugating x to C + A Σ (x).
Proof. Suppose that ρ is a positive element such that ρ^{-1} x ρ is positive. Then 1 ≼ ρ^{-1} x ρ. Multiplying from the left by x^{-1} ρ we obtain x^{-1} ρ ≼ ρ and, since ρ is positive, x^{-1} ≼ x^{-1} ρ ≼ ρ. Hence x^{-1} ≼ ρ or, in other words, ba^{-1} ≼ ρ. On the other hand, by the definition of pn-normal form, we have a ∧ b = 1, which is equivalent to a^{-1} ∨ b^{-1} = 1 [14, Lemma 1.3]. Multiplying from the left by b, we obtain ba^{-1} ∨ 1 = b. Finally, notice that ba^{-1} ≼ ρ and also 1 ≼ ρ. Hence b = ba^{-1} ∨ 1 ≼ ρ. Since b is a prefix of ρ for every positive ρ conjugating x to a positive element, the result follows.
Lemma 36. Let A_X be a standard parabolic subgroup and t ∈ Σ. If t is a suffix of α∆_X^k, then r_{X,t} is a suffix of α, for every k > 0.

Proof. Since the result is obvious for t ∈ X (r_{X,t} = 1), suppose t ∉ X. Trivially, ∆_X is a suffix of α∆_X^k. As t is also a suffix of α∆_X^k, so is ∆_X ∨ t. By definition, ∆_X ∨ t = ∆_{X∪{t}} = r_{X,t} ∆_X. Thus, r_{X,t} ∆_X is a suffix of α∆_X^k and then r_{X,t} is a suffix of α∆_X^{k-1}, because the suffix order is invariant under right-multiplication. As t is a suffix of r_{X,t} (see Remark 29), the result follows by induction.
Theorem 37. Let P = (X, α) be a parabolic subgroup. If c_P = ab^{-1} is in pn-normal form, then b is the ≼-minimal standardizer of P.

Proof. We know from Proposition 35 that b is a prefix of any positive element conjugating c P to a positive element, which guarantees its ≼-minimality. We also know from Lemma 33 that any standardizer of P must conjugate c P to a positive element, namely to the central Garside element of some standard parabolic subgroup. So we only have to prove that b itself conjugates c P to the central Garside element of some standard parabolic subgroup. We assume α to be positive, because there is always some k ∈ N such that ∆^{2k} α is positive and, as ∆^2 lies in the center of A, P = (X, α) = (X, ∆^{2k} α).
The pn-normal form of c P = αc X α -1 is obtained by cancelling the greatest common suffix of αc X and α. Suppose that t ∈ Σ is such that t is a suffix of both α and αc X .
If t ∉ X, then r_{X,t} ≠ 1 and by Lemma 36 we have that r_{X,t} is a suffix of α, i.e., α = α_1 r_{X,t} for some
α 1 ∈ A Σ . Hence, αc X α -1 = α 1 r X,t c X r -1 X,t α -1 1 = α 1 c X 1 α -1 1
for some X 1 ⊂ Σ. In this case, we reduce the length of the conjugator (by the length of r X,t ). If t ∈ X, t commutes with c X , which means that
αc X α -1 = α 1 tc X t -1 α -1 1 = α 1 c X 1 α -1 1 ,
where α 1 is one letter shorter than α and X 1 = X. We can repeat the same procedure for α i c X i α -1 i , where X i ⊂ Σ, t i ∈ Σ such that α i t i and α i c X i t i . As the length of the conjugator decreases at each step, the procedure must stop, having as a result the pn-normal form of c P , which will have the form:
c P = (α k c X k )α -1 k , for k ∈ N, X k ⊂ Σ.
Then, α k = b clearly conjugates c P to c X k , which is the central Garside element of a standard parabolic subgroup, so b is the ≼-minimal standardizer of P .
We end this section with a result concerning the conjugacy classes of elements of the form c P . As all the elements of the form c Z , Z ⊆ X, are rigid (Definition 8), using the next theorem we can prove that the set of sliding circuits of c P is equal to its set of positive conjugates.
Theorem 38 ([15, Theorem 1]). Let G be a Garside group of finite type. If x ∈ G is conjugate to a rigid element, then SC(x) is the set of rigid conjugates of x.
Corollary 39. Let P = (X, α) be a parabolic subgroup of an Artin-Tits group of spherical type. Then
C + A Σ (c P ) = SSS(c P ) = U SS(c P ) = SC(c P ) = {c Y | Y ⊆ Σ, c Y conjugate to c X }.
Proof. By Theorem 38, it suffices to prove that C + A Σ (c P ) is composed only of rigid elements of the form c Z . Let P' = (X, β) and suppose that c_{P'} ∈ C + A Σ (c P ). Let b be the minimal standardizer of c_{P'}. By Proposition 35, Theorem 37 and Lemma 33, b is the minimal positive element conjugating c_{P'} to C + A Σ (c_{P'}), which implies that b = 1, so P' is standard. Hence, all positive conjugates of c P are equal to c Y for some Y , therefore they are rigid.
Corollary 40. Let P = (X, α) be a parabolic subgroup of an Artin-Tits group of spherical type. Then the set of positive standardizers of P,

St(P) = {α ∈ A_Σ^+ | c_P^α = c_Y for some Y ⊆ Σ},

is a sublattice of A_Σ^+, i.e., it is closed under ∧ and ∨.

Proof. Let s_1 and s_2 be two positive standardizers of P and let α := s_1 ∧ s_2 and β := s_1 ∨ s_2. By Corollary 39 and, for example, [15, Proposition 7, Corollary 7 & 8], we have that c_P^α = c_Y and c_P^β = c_Z for some Y, Z ⊆ Σ. Hence α, β ∈ St(P), as we wanted to show.
Complexity
In this section we will describe the computational complexity of the algorithms which compute minimal standardizers of curves and parabolic subgroups. Let us start with Algorithm 1, which computes the minimal standardizer of a curve system. The complexity of Algorithm 1 will depend on the length of the output, which is the number of steps of the algorithm. To bound this length, we will compute a positive braid which belongs to St(S). This will bound the length of the minimal standardizer of S.
The usual way to describe the length (or the complexity) of a curve system consists in counting the number of intersections with the real axis, i.e., ℓ(S) = #(S ∩ R). For integers 0 ≤ i < j < k ≤ n, we define the following braid (see Figure 8):

s(i, j, k) = (σ_j σ_{j-1} ⋯ σ_{i+1})(σ_{j+1} σ_j ⋯ σ_{i+2}) ⋯ (σ_{k-1} σ_{k-2} ⋯ σ_{i+k-j})

Figure 8: Applying s(0, 3, 6).
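Since s(i, j, k) is just a product of descending runs of generators, its letters are easy to enumerate; the following throwaway Python helper (ours, not from the paper) does so and reproduces the word of Figure 8.

def s_braid(i, j, k):
    """Return the braid s(i, j, k) as a list of generator indices:
    (sigma_j ... sigma_{i+1})(sigma_{j+1} ... sigma_{i+2}) ... (sigma_{k-1} ... sigma_{i+k-j})."""
    word = []
    for m, top in enumerate(range(j, k)):      # the m-th factor starts at sigma_{j+m}
        word.extend(range(top, i + m, -1))     # and descends down to sigma_{i+m+1}
    return word

print(s_braid(0, 3, 6))
# [3, 2, 1, 4, 3, 2, 5, 4, 3], i.e. (s3 s2 s1)(s4 s3 s2)(s5 s4 s3) as in Figure 8,
# with (k - j) * (j - i) = 9 letters.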
Lemma 41. Applying s = s(i, j, k) to a curve system S, when i j k is a bending point, decreases the length of the curve system at least by two.
Proof. We will describe the arcs of the curves of S in a new way, by associating a real number c_p ∈ (0, n + 1) to each of the intersections of S with the real axis, where p is the position of the intersection with respect to the other intersections: c_1 is the leftmost intersection and c_{ℓ(S)} is the rightmost one. We will obtain a set of words representing the curves of S, on the alphabet {⌣, ⌢, c_1, . . . , c_{ℓ(S)}}, by running along each curve, starting and finishing at the same point. As before, we write down a symbol ⌣ for each arc on the lower half plane, and a symbol ⌢ for each arc on the upper half plane. We also define the following function that sends this alphabet to the former one:

L : {⌣, ⌢, c_1, . . . , c_{ℓ(S)}} ⟶ {⌣, ⌢, 0, . . . , n},

L(⌣) = ⌣, L(⌢) = ⌢, L(c_p) = ⌊c_p⌋.
Take a disk D such that its boundary ∂(D) intersects the real axis at two points, x 2 and x 3 , which are not punctures and do not belong to S. Consider another point x 1 , which should not be a puncture or belong to S, on the real axis such that L(x 1 ) < L(x 2 ). Suppose that there are no arcs of S on the upper-half plane intersecting the arc x 1 x 2 and there are no arcs of S on the lower-half plane intersecting the arc x 2 x 3 . We denote I 1 = (0, x 1 ), I 2 = (x 1 , x 2 ), I 3 = (x 2 , x 3 ) and I 4 = (x 3 , n + 1) and define |I t | as the number of punctures that lie in the interval I t .
We consider an automorphism of D_n, called d = d(x_1, x_2, x_3), which is the final position of an isotopy that takes D and moves it through the upper half-plane to a disk of radius ε centered at x_1, which contains no point c_p and no puncture, followed by an automorphism which fixes the real line as a set and takes the punctures back to the positions 1, . . . , n. This corresponds to "placing the interval I_3 between the intervals I_1 and I_2". Firstly, we can see in Figure 9 that the only modification that the arcs of S can suffer is the shifting of their endpoints. By hypothesis, there are no arcs in the upper half-plane joining I_2 with I_j for j ≠ 2, and there are no arcs in the lower half-plane joining I_3 with I_j for j ≠ 3. Any other possible arc is transformed by d into a single arc, so every arc is transformed in this way. Algebraically, take an arc of S,
c_a ⌢ c_b (resp. c_a ⌣ c_b), such that L(c_a) = ã and L(c_b) = b̃. Then, its image under d is c_{a'} ⌢ c_{b'} (resp. c_{a'} ⌣ c_{b'}), where

L(c_{p'}) = p̃ if c_p ∈ I_1 ∪ I_4,   p̃ + |I_3| if c_p ∈ I_2,   p̃ − |I_2| if c_p ∈ I_3,   for p = a, b.

Figure 9: How the automorphism d(x_1, x_2, x_3) acts on the arcs of C: (a) on the arcs in the upper half plane; (b) on the arcs in the lower half plane.
After applying this automorphism, the curve could fail to be reduced, in which case relaxation of unnecessary arcs could be done, reducing the complexity of S. Now, given a bending point i j k of S, consider the set
B = {c p c q c r | L(c p ) < L(c q ) < L(c r ) and L(c q ) = j}
and choose the element of B with greatest sub-index q, which is also the one with lowest p and r. Define x_1, x_2 and x_3 such that x_1 ∈ (c_{p-1}, c_p) ∩ (L(c_p), L(c_p) + 1), x_2 ∈ (c_q, c_{q+1}) ∩ (j, j + 1) and x_3 ∈ (c_{r-1}, c_r) ∩ (L(c_r), L(c_r) + 1). Then, the braid s(L(c_p), j, L(c_r)) is represented by the automorphism d(x_1, x_2, x_3) (see Figure 10). Notice that the choice of the bending point from B guarantees the non-existence of arcs of C intersecting x_1 ⌢ x_2 or x_2 ⌣ x_3. After the swap of I_2 and I_3, the arc c_q ⌢ c_r will be transformed into c_{q'} ⌢ c_{r'}, where L(c_{q'}) = L(c_{r'}) = L(c_r), and then relaxed, reducing the length of S at least by two.

Figure 10: Applying s(i, j, k) to a curve is equivalent to permuting its intersections with the real axis and then making the curve tight.
The automorphism s = s(i, j, k) involves at most (k − j) · (j − i) generators and this number is bounded by n²/4, because (k − j) + (j − i) ≤ n and 4uv ≤ (u + v)² for every u, v. Then, the output of our algorithm has at most ℓ(S)n²/8 letters, because we have proven that s reduces the length of the curve system by at least two at each step. Let us bound this number in terms of the input of the algorithm, i.e., in terms of reduced Dynnikov coordinates.
Definition 42. We say that there is a left hairpin (resp. a right hairpin) of C at j if we can find in W (C), up to cyclic permutation and reversing, a subword of the form i j -1 k (resp. i j k) for some i, k > j -1 (resp. i, k < j) (see Figure 11).
Proposition 43. Let S be a curve system on D_n represented by the reduced Dynnikov coordinates (a_0, b_0, . . . , a_{n-1}, b_{n-1}). Then

ℓ(S) ≤ ∑_{i=0}^{n-1} (2|a_i| + |b_i|).
Proof. Notice that each intersection of a curve C with the real axis corresponds to a subword of W (C) of the form i j k or i j k. If i < j < k the subword corresponds to a bending point or a reversed bending point respectively. If i, k > j, there is a left hairpin at j + 1. Similarly, if i, k < j, there is a right hairpin at j. Recall that Lemma 17 already establishes how to detect bending points with reduced Dynnikov coordinates. In fact, there are exactly R bending points (including reversed ones) at i if and only if |a_{i-1} − a_i| = R. We want to detect also hairpins in order to determine ℓ(S). Observe in Figure 11 that the only types of arcs that can appear in the region between the lines e_{3j-5} and e_{3j-2} are left or right hairpins and arcs intersecting both e_{3j-5} and e_{3j-2}. The arcs intersecting both e_{3j-5} and e_{3j-2} do not affect the difference x_{3j-5} − x_{3j-2}, whereas each left hairpin decreases it by 2 and each right hairpin increases it by 2. Notice that in the mentioned region there cannot be left and right hairpins at the same time. Then, there are exactly R left (resp. right) hairpins at j if and only if b_{j-1} = −R (resp. b_{j-1} = R). Hence, as a_0 = a_{n-1} = 0, we have

ℓ(S) = ∑_{i=1}^{n-1} |a_{i-1} − a_i| + ∑_{i=1}^{n} |b_{i-1}| ≤ ∑_{i=0}^{n-1} (2|a_i| + |b_i|).

Corollary 44. Let S be a curve system on D n represented by the reduced Dynnikov coordinates (a 0 , b 0 , . . . , a n-1 , b n-1 ). Then, the length of the minimal standardizer of S is at most
(1/8) · ∑_{i=0}^{n-1} (2|a_i| + |b_i|) · n².
Proof. By Lemma 41, the length of the minimal standardizer of S is at most ℓ(S)n²/8. Consider the bound for ℓ(S) given in Proposition 43 and the result will follow.
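For completeness, the bound of Corollary 44 is trivial to evaluate from the reduced coordinates; the following snippet (illustrative only) does so for the interleaved coordinate vector used above.

def standardizer_length_bound(coords):
    """Upper bound of Corollary 44 on the length of the minimal standardizer:
    (1/8) * sum_i (2|a_i| + |b_i|) * n^2."""
    a, b = coords[0::2], coords[1::2]
    n = len(a)
    curve_length_bound = sum(2 * abs(ai) + abs(bi) for ai, bi in zip(a, b))  # bounds l(S)
    return curve_length_bound * n * n / 8

print(standardizer_length_bound([0, -1, 1, -2, 3, -3, 0, 6]))   # 20 * 16 / 8 = 40.0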
Remark 45. To check that this bound is computationally optimal we need to find a case where at each step we can only remove a single bending point, i.e., we want to find a family of curve systems {S_k}_{k>0} such that the length of the minimal standardizer of S_k is quadratic on n and linear on ℓ(S_k). Let n = 2t + 1, t ∈ N. Consider the following curve system on D_n, S_0 = {t n}, and the braid α = s(0, t, n − 1). Now define S_k = (S_0)^{α^{-k}}. The curve S_k is called a spiral with k half-twists (see Figure 12) and is such that ℓ(S_k) = 2(k + 1). Using Algorithm 1, we obtain that the minimal standardizer of this curve is α^k, which has k · t² factors. Therefore, the number of factors of the minimal standardizer of S_k is of order O(ℓ(S_k) · n²).
To find the complexity of the algorithm which computes the minimal standardizer of a parabolic subgroup P = (X, α) of an Artin-Tits group A, we only need to know the cost of computing the pn-normal form of c_P. If x_r ⋯ x_1 ∆^{-p} with p > 0 is the right normal form of c_P, then its pn-normal form is (x_r ⋯ x_{p+1})(x_p ⋯ x_1 ∆^{-p}). Hence, we just have to compute the right normal form of c_P in order to compute the minimal standardizer. It is well known that this computation has quadratic complexity (for a proof, see [9, Lemma 3.9 & Section 6]). Thus, we have the following:

Proposition 47. Let P = (X, α) be a parabolic subgroup of an Artin-Tits group of spherical type, and let ℓ = ℓ(α) be the canonical length of α. Computing the minimal standardizer of P has a cost of O(ℓ²).
Definition 4. We define the right complement of a simple element a as ∂(a) = a^{-1}∆ and the left complement as ∂^{-1}(a) = ∆a^{-1}.

Remark 5. Observe that ∂² = τ and that, if a is simple, then ∂(a) is also simple, i.e., 1 ≼ ∂(a) ≼ ∆. Both claims follow from ∂(a)τ(a) = ∂(a)∆^{-1}a∆ = ∆, since ∂(a) and τ(a) are positive.

Definition 6. Given two simple elements a, b, the product a · b is said to be in left (resp. right) normal form if ab ∧ ∆ = a (resp. ab ∧ ∆ = b). The latter is equivalent to ∂(a) ∧ b = 1 (resp. a ∧ ∂^{-1}(b) = 1).

• The super summit set of x is SSS(x) = {y conjugate to x | inf(y) = inf_s(y) and sup(y) = sup_s(y)}.
• The ultra summit set of x [13, Definition 1.17] is USS(x) = {y ∈ SSS(x) | c^m(y) = y for some m ≥ 1}.
• The set of sliding circuits of x [15, Definition 9] is SC(x) = {y conjugate to x | s^m(y) = y for some m ≥ 1}, where s denotes cyclic sliding.

Algorithm 1: Standardizing a curve system.
Input: The reduced coordinates (a_0, b_0, . . . , a_{n-1}, b_{n-1}) of a curve system S on D_n.
Output: The ≼-minimal element of St(S).
  c = (a_0, b_0, . . . , a_{n-1}, b_{n-1}); β = 1; j = 1;
  while j < n do
    if c[a_j] < c[a_{j-1}] then
      c = c^{σ_j} (use Proposition 15);
      β = β · σ_j;
      j = 1;
    else
      j = j + 1;
  return β;

Figure 2: W(C) = 0 6 4 2 1 4 5 1.
Figure 6: Detecting bending points with Dynnikov coordinates: (a) a bending point at 1; (b) a bending point at i.
Figure 11: Detecting hairpins with Dynnikov coordinates.
Figure 12: The curve S_5.
Acknowledgements. This research was supported by a PhD contract funded by Université Rennes 1, Spanish Projects MTM2013-44233-P, MTM2016-76453-C2-1-P, FEDER and French-Spanish Mobility programme "Mérimée 2015". I also thank my PhD advisors, Juan González-Meneses and Bert Wiest, for helping and guiding me during this research work. |
01764547 | en | [
"info.info-cv"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01764547/file/texMesh.pdf | Mohamed Boussaha
email: mohamed.boussaha@ign.fr
Bruno Vallet
email: bruno.vallet@ign.fr
Patrick Rives
email: patrick.rives@inria.fr
LARGE SCALE TEXTURED MESH RECONSTRUCTION FROM MOBILE MAPPING IMAGES AND LIDAR SCANS
Keywords: Urban scene, Mobile mapping, LiDAR, Oriented images, Surface reconstruction, Texturing
is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
INTRODUCTION 1.1 Context
Mobile Mapping Systems (MMS) have become more and more popular to map cities from the ground level, allowing for a very interesting compromise between level of detail and productivity. Such MMS are increasingly becoming hybrid, acquiring both images and LiDAR point clouds of the environment. However, these two modalities remain essentially exploited independently, and few works propose to process them jointly. Nevertheless, such a joint exploitation would benefit from the high complementarity of these two sources of information:
• High resolution of the images vs high precision of the Li-DAR range measurement.
• Passive RGB measurement vs active intensity measurement in near infrared.
• Different acquisition geometries.
In this paper, we propose a fusion of image and LiDAR information into a single representation: a textured mesh. Textured meshes have been the central representation for virtual scenes in Computer Graphics, massively used in the video games and animation movies industry. Graphics cards are highly optimized for their visualization, and they allow a representation of scenes that holds both their geometry and radiometry. Textured meshes are now gaining more and more attention in the geospatial industry as Digital Elevation Models coupled with orthophotos, which were well adapted for high altitude airborne or space-borne acquisition, are not suited for the newer means of acquisition: closer range platforms (drones, mobile mapping) and oblique imagery.
We believe that this trend will accelerate, such that the geospatial industry will have an increasing need for efficient and high quality surface reconstruction and texturing algorithms that scale up to the massive amounts of data that these new means of acquisition produce.
This paper focuses on:
• using a simple reconstruction approach based on the sensor topology
• adapting the state-of-the-art texturing method of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] to mobile mapping images and LiDAR scans
We are able to produce a highly accurate surface mesh with a high level of detail and high resolution textures at city scale.
Related work
In this paper we present a visibility consistent 3D mapping framework to construct large scale urban textured mesh using both oriented images and georeferenced point cloud coming from a terrestrial mobile mapping system. In the following, we give an overview of the various methods related to the design of our pipeline.
From the robotics community perspective, conventional 3D urban mapping approaches usually propose to use LiDAR or camera separately but a minority has recently exploited both data sources to build dense textured maps [START_REF] Romanoni | Mesh-based 3d textured urban mapping[END_REF].
In the literature, both image-based methods [START_REF] Wu | Towards linear-time incremental structure from motion[END_REF][START_REF] Litvinov | Incremental solid modeling from sparse structure-from-motion data with improved visual artifacts removal[END_REF][START_REF] Romanoni | Incremental reconstruction of urban environments by edge-points delaunay triangulation[END_REF] and LiDAR-based methods [START_REF] Hornung | Octomap: an efficient probabilistic 3d mapping framework based on octrees[END_REF][START_REF] Khan | Adaptive rectangular cuboids for 3d mapping[END_REF] often represent the map as a point cloud or a mesh relying only on geometric properties of the scene and discarding interesting photometric cues, while a faithful 3D textured mesh representation would be useful not only for navigation and localization but also for photo-realistic accurate modeling and visualization.

Figure 1. The texturing pipeline.
The computer vision, computer graphics and photogrammetry communities have generated compelling urban texturing results. [START_REF] Sinha | Interactive 3d architectural modeling from unordered photo collections[END_REF]) developed an interactive system to texture architectural scenes with planar surfaces from an unordered collection of photographs using cues from structurefrom-motion. [START_REF] Tan | Large scale texture mapping of building facades. The International Archives of the Photogrammetry[END_REF] proposed an interactive tool for only building facades texturing using oblique images. [START_REF] Garcia-Dorado | Automatic urban modeling using volumetric reconstruction with surface graph cuts[END_REF] perform impressive work by texturing entire cities. Still, they are restricted to 2.5D scene representation and they also operate exclusively on regular block city structures with planar surfaces and treat buildings, ground, and buildingground transitions differently during texturing process. In order to achieve a consistent texture across patch borders in a setting of unordered registered views, [START_REF] Callieri | Masked photo blending: Mapping dense photographic data set on high-resolution sampled 3d models[END_REF][START_REF] Grammatikopoulos | Automatic multi-view texture mapping of 3d surface projections[END_REF] choose to blend these multiple views by computing a weighted cost indicating the suitability of input image pixels for texturing with respect to angle, proximity to the model and the proximity to the depth discontinuities. However, blending images induces strongly visible seams in the final model especially in the case of a multi-view stereo setting because of the potential inaccuracy in the reconstructed geometry.
While there exists a prominent work on texturing urban scenes, we argue that large scale texture mapping should be fully automatic without the user intervention and efficient enough to handle its computational burden in a reasonable time frame without increasing the geometric complexity in the final model. In contrast to the latter methods, [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] proposed to use the multi-view stereo technique [START_REF] Frahm | Building rome on a cloudless day[END_REF][START_REF] Furukawa | Towards internet-scale multi-view stereo[END_REF] to perform a surface reconstruction and subsequently select a single view per face based on a pairwise Markov random field taking into account the viewing angle, the proximity to the model and the resolution of the image. Then, color discontinuities are properly adjusted by looking up the vertex' color along all adjacent seam edges. We consider the method of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] as a base for our work since it is the first comprehensive framework for texture mapping that enables fast and scalable processing.
In our work, we abstain from the surface reconstruction step for multiple reasons. As pointed out above, methods based on structure-from-motion and multi-view stereo techniques usually yield less accurate camera parameters, hence the reconstructed geometry might not be faithful to the underlying model compared to LiDAR based methods [START_REF] Pollefeys | Detailed real-time urban 3d reconstruction from video[END_REF] which results in ghosting effect and strongly visible seams in the textured model. Besides, such methods do not allow a direct and automatic processing on raw data due to relative parameters tuning for each dataset and in certain cases their computational cost may become prohibitive. [START_REF] Caraffa | 3d octree based watertight mesh generation from ubiquitous data[END_REF] proposed a generic framework to generate an octree-cell based mesh and texture it with the regularized reflectance of the LiDAR. Instead, we propose a simple but fast algorithm to construct a mesh from the raw LiDAR scans and produce photo-realistic textured models. In Figure 1, we depict the whole pipeline to generate large scale high quality textured models leveraging on the georeferenced raw data. Then, we construct a 3D mesh representation of the urban scene and subsequently fuse it with the preprocessed images to get the final model.
The rest of the paper is organized as follows: In Section 2 we present the data acquisition system. A fast and scalable mesh reconstruction algorithm is discussed in Section 3. Section 4 explains the texturing approach. We show our experimental results in Section 5. Finally, in Section 6, we conclude the paper proposing out some future direction of research. The used LiDAR scanner is a RIEGL VQ-250 that rotates at 100 Hz and emits 3000 pulses per rotation with 0 to 8 echoes recorded for each pulse, producing an average of 250 thousand points per second in typical urban scenes. The sensor records information for each pulse (direction (θ, φ), time of emission) and echo (amplitude, range, deviation).
DATA ACQUISITION
The MMS is also mounted with a georeferencing system combining a GPS, an inertial unit and an odometer. This system outputs the reference frame of the system in geographical coordinates at 100Hz. Combining this information with the information recorded by the LiDAR scanner and its calibration, a point cloud in (x, y, z) coordinates can be constructed. In the same way, using the intrinsic and extrinsic calibrations of each camera, each acquired image can be precisely oriented. It is important for our application to note that this process ensures that images and LiDAR points acquired simultaneously are precisely aligned (depending on the quality of the calibrations).
SENSOR TOPOLOGY BASED SURFACE RECONSTRUCTION
In this section, we propose an algorithm to extract a large scale mesh on-the-fly using the point cloud structured as series of line scans gathered from the LiDAR sensor being moved through space along an arbitrary path.
Mesh extraction process
During urban mapping, the mobile platform may stop for a moment because of external factors (e.g. road sign, red light, traffic congestion . . . ), which results in massive redundant data at the same scanned location. Thus, a filtering step is mandatory to get an isotropic distribution of the line scans. To do so, we fix a minimum distance between two successive line scans and we remove all lines whose distance to the previous (unremoved) line is less than a fixed threshold. In practice, we use a threshold of 1 cm, close to the LiDAR accuracy.
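As a rough illustration of this resampling step (and not the production code of the pipeline), one can keep a scan line only when a representative point of it, for instance the sensor origin of the line, is far enough from the last kept line:

import numpy as np

def filter_scan_lines(line_origins, min_dist=0.01):
    """Keep only scan lines whose representative point is at least `min_dist`
    metres (1 cm here) away from the previously kept line.
    `line_origins` is an (N, 3) array with one point per line scan, in acquisition order.
    Returns the indices of the kept lines."""
    kept = [0]
    for i in range(1, len(line_origins)):
        if np.linalg.norm(line_origins[i] - line_origins[kept[-1]]) >= min_dist:
            kept.append(i)
    return kept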
Once the regular sampling is done, we consider the resulting point cloud in the sensor space where one dimension is the acquisition time t and the other is the θ rotation angle. Let θ_i be the angle of the i-th pulse and E_i the corresponding echo. In case of multiple echoes, E_i is defined as the last (furthest) one, and in case of no return, E_i does not exist so we do not build any triangle based on it. In general, the number N_p of pulses for a 2π rotation is not an integer, so E_i has six neighbors E_{i-1}, E_{i+1}, E_{i-n}, E_{i-n-1}, E_{i+n}, E_{i+n+1}, where n = ⌊N_p⌋ is the integer part of N_p. These six neighbors allow the construction of six triangles. In practice, we avoid creating the same triangle more than once by creating for each echo E_i the two triangles it forms with echoes of greater indices: (E_i, E_{i+n}, E_{i+n+1}) and (E_i, E_{i+n+1}, E_{i+1}) (if the three echoes exist), as illustrated in Figure 3. This allows the algorithm to incrementally and quickly build a triangulated surface based on the input points of the scans. In practice, the (non-integer) number of pulses N_p emitted during a 360 deg rotation of the scanner may slightly vary, so to add robustness we check if θ_{i+n} < θ_i < θ_{i+n+1} and, if it doesn't hold, increase or decrease n until it does.
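The triangle construction described above can be sketched as follows; echoes are indexed by pulse, None marks pulses without a return, and the adjustment of n when N_p drifts is omitted for brevity. This is our illustrative reading of the procedure, not the authors' implementation.

def sensor_topology_triangles(echoes, pulses_per_rev):
    """Build the triangle index list from the sensor topology.
    echoes[i] is the last echo of pulse i (or None if no return);
    pulses_per_rev is the (non-integer) number Np of pulses per rotation.
    For each echo E_i, the triangles (E_i, E_{i+n}, E_{i+n+1}) and
    (E_i, E_{i+n+1}, E_{i+1}) are created when all three echoes exist."""
    n = int(pulses_per_rev)            # integer part of Np
    triangles = []
    for i in range(len(echoes) - n - 1):
        e0, e1 = echoes[i], echoes[i + 1]
        en, en1 = echoes[i + n], echoes[i + n + 1]
        if e0 is None:
            continue
        if en is not None and en1 is not None:
            triangles.append((i, i + n, i + n + 1))
        if en1 is not None and e1 is not None:
            triangles.append((i, i + n + 1, i + 1))
    return triangles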
Mesh cleaning
The triangulation of 3D measurements from a mobile mapping system usually comes with several imperfections such as elongated triangles, noisy unreferenced vertices, holes in the model, redundant triangles . . . to mention a few. In this section, we focus on three main issues that frequently occur with mobile terrestrial systems and affect significantly the texturing results if not adequately dealt with.

Figure 3. Triangulation based on the sensor space topology.
Elongated triangles filtering
In practice, neighboring echoes in sensor topology might belong to different objects at different distances. This generates very elongated triangles connecting two objects (or an object and its background). Such elongated triangles might also occur when the MMS follows a sharp turn. We filter them out by applying a threshold on the maximum length of an edge before creating a triangle, experimentally set to 0.5m for the data used in this study.
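A minimal version of this filter, assuming the vertex positions are available as a NumPy array indexed like the triangles, could be:

import numpy as np

def drop_elongated(triangles, points, max_edge=0.5):
    """Discard triangles having at least one edge longer than `max_edge` metres
    (0.5 m here, as used for this dataset)."""
    keep = []
    for ia, ib, ic in triangles:
        a, b, c = points[ia], points[ib], points[ic]
        longest = max(np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))
        if longest <= max_edge:
            keep.append((ia, ib, ic))
    return keep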
Isolated pieces removal
In contrast with camera and eyes that captures light from external sources, the LiDAR scanner is an active sensor that emits light itself. This results in measurements that are dependent on the transparency of the scanned objects which cause a problem in the case of semitransparent faces such as windows and front glass. The laser beam will traverse these objects, creating isolated pieces behind them in the final mesh. To tackle this problem, isolated connected components composed by a limited number of triangles and whose diameter is smaller than a user-defined threshold (set experimentally) are automatically deleted from the final model.
Hole filling
After the surface reconstruction process, the resulting mesh may still contain a consequent number of holes due to specular surfaces deflecting the LiDAR beam, occlusions and the non-uniform motion of the acquisition vehicle. To overcome this problem we use the method of (Liepa et al., 2003).
The algorithm takes a user-defined parameter which consists of the maximum hole size in terms of number of edges and closes the hole in a recursive fashion by splitting it until it gets a hole composed exactly with 3 edges and fills it with the corresponding triangle.
Scalability
The interest in mobile mapping techniques has been increasing over the past decade as it allows the collection of dense and very accurate and detailed data at the scale of an entire city with a high productivity. However, processing such data is limited by various difficulties specific to this type of acquisition especially the very high data volume (up to 1 TB by day of acquisition (Paparoditis et al., 2012)) which requires very efficient processing tools in terms of number of operations and memory footprint. In order to perform an automatic surface reconstruction over large distances, memory constraints and scalability issues must be addressed. First, the raw LiDAR scans are sliced into N chunks of 10s of acquisition which corresponds to nearly 3 million points per chunk. Each recorded point cloud (chunk) is processed separately as explained in the work-flow of our pipeline presented in Figure 4, allowing a parallel processing and faster production. Yet, whereas the aforementioned filtering steps alleviate the size of the processed chunks, the resulting models remain unnecessarily heavy as flat surfaces (road, walls) may be represented by a very large number of triangles that could be drastically reduced without loosing in detail. To this end, we apply the decimation algorithm of (Lindstrom and[START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF][START_REF] Lindstrom | Evaluation of memoryless simplification[END_REF]. The algorithm proceeds in two stages. First, an initial collapse cost, given by the position chosen for the vertex that replaces it, is assigned to every edge in the reconstructed mesh. Then, at each iteration the edge with the lowest cost is selected for collapsing and replacing it with a vertex. Finally, the collapse cost of all the edges now incident on the replacement vertex is recalculated. For more technical details, we refer the reader to (Lindstrom and[START_REF] Lindstrom | Fast and memory efficient polygonal simplification[END_REF][START_REF] Lindstrom | Evaluation of memoryless simplification[END_REF].
TEXTURING APPROACH
This section presents the used approach for texturing large scale 3D realistic urban scenes. Based on the work of [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF], we adapt the algorithm so it can handle our camera model (with five perspective images) and the smoothing parameters are properly adjusted to enhance the results. In the following, we give the outline of this texturing technique and its requirements. To work jointly with oriented images and LiDAR scans acquired by a mobile mapping system, the first requirement is that both sensing modalities have to be aligned in a common frame. Thanks to the rigid setting of the camera and the LiDAR mounted on the mobile platform yielding a simultaneous image and Li-DAR acquisition, this step is no more required. However, such setting entails that a visible part of the vehicle appears in the acquired images. To avoid using these irrelevant parts, an adequate mask is automatically applied to the concerned images (back and front images) before texturing as shown in Figure 5.
Typically, texturing a 3D model with oriented images is a two-stage process. First, the optimal view per triangle is selected with respect to certain criteria, yielding a preliminary texture. Second, a local and global color optimization is performed to minimize the discontinuities between adjacent texture patches. The two steps are discussed in Sections 4.2 and 4.3.
View selection
To determine the visibility of faces in the input images, a pairwise Markov random field energy formulation is adopted to compute a labeling l that assigns a view li to be used as texture for each mesh face Fi:
E(l) = ∑_{F_i ∈ Faces} E_d(F_i, l_i) + ∑_{(F_i, F_j) ∈ Edges} E_s(F_i, F_j, l_i, l_j)     (1)

where

E_d = − ∫_{φ(F_i, l_i)} ||∇(I_{l_i})||_2 dp     (2)

E_s = [l_i ≠ l_j]     (3)
The data term E d (2) computes the gradient magnitude ||∇(I l i )||2 of the image into which face Fi is projected using a Sobel operator and sum over all pixels of the gradient magnitude image within face Fi's projection φ(Fi, li). This term is large if the projection area is large which means that it prefers close, orthogonal and in-focus images with high resolution. The smoothness term Es (3) minimizes the seams visibility (edges between faces textured with different images). In the chosen method, this regularization term is based on the Potts model ([.] the Iverson bracket) which prefers compact patches by penalizing those that might give severe seams in the final model and it is extremely fast to compute. Finally, E(l) (1) is minimized with graph-cuts and α-expansion [START_REF] Boykov | Fast approximate energy minimization via graph cuts[END_REF].
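As an illustration, the data term can be approximated per face by rasterising the projected polygon φ(F_i, l_i) and summing the Sobel gradient magnitude inside it; the sketch below uses OpenCV, assumes the projected 2D polygon of the face is already available, and is a simplified reading of equation (2) rather than the authors' code.

import cv2
import numpy as np

def data_term(image, polygon_2d):
    """Approximate E_d for one (face, view) pair: minus the sum of the Sobel
    gradient magnitude of `image` over the pixels covered by `polygon_2d`
    (an (m, 2) array with the pixel coordinates of the projected face)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillPoly(mask, [polygon_2d.astype(np.int32)], 1)   # rasterise the face projection
    return -float(grad_mag[mask == 1].sum())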
Color adjustment
After the view selection step, the obtained model exhibits strong color discontinuities due to the fusion of texture patches coming from different images and to the exposure and illumination variation especially in an outdoor environment. Thus, adjacent texture patches need to be photometrically adjusted. To address this problem, first, a global radiometric correction is performed along the seam's edge by computing a weighted average of a set of samples (pixels sampled along the discontinuity's right and left) depending on the distance of each sample to the seam edge extremities (vertices). Then, this global adjustment is followed by a local Poisson editing [START_REF] Pérez | Poisson image editing[END_REF] applied to the border of the texture patches. All the process is discussed in details in [START_REF] Waechter | Let there be color! Large-scale texturing of 3D reconstructions[END_REF] work.
Finally, the corrections are added to the input images, the texture patches are packed into texture atlases, and texture coordinates are attached to the mesh vertices.
EXPERIMENTAL RESULTS
Mesh reconstruction
In Figure 6, we show the reconstructed mesh based on the sensor topology and the adopted decimation process. In practice, we parameterize the algorithm such that the approximation error is below 3cm, which allows in average to reduce the number of triangles to around 30% of the input triangles.
Texturing the reconstructed models
In this section, we show some texturing result (Figure 7) and the influence of the color adjustment step on the final textured models (Figure 8). Before the radiometric correction, one can see several color discontinuities especially on the border of the door and on some parts of the road (best viewed on screen). More results are presented in the appendix to illustrate the high quality textured models in different places in Rouen, France.
Performance evaluation
We evaluate the performance of each step of our pipeline on a dataset acquired by Stereopolis II [START_REF] Paparoditis | Stereopolis ii: A multi-purpose and multi-sensor 3d mobile mapping system for street visualisation and 3d metrology[END_REF] during 17 km of acquisition in Rouen, France. In Table 1, we present the required input data to texture a chunk of acquisition (10s), the average number of views and the number of triangles after decimation. Figure 9 shows the timing of each step in the pipeline to texture the described setting. Using a 16-core Xeon E5-2665 CPU with 12GB of memory, we are able to generate a 3D mesh of nearly 6 million triangles in less than one minute, compared to the improved version of Poisson surface reconstruction [START_REF] Kazhdan | Screened poisson surface reconstruction[END_REF] where they reconstruct a surface of nearly 20000 triangles in 10 minutes. Moreover, in order to texture small models with few images (36 of size (768 × 584)) in a context of super-resolution, [START_REF] Goldlücke | A super-resolution framework for high-accuracy multiview reconstruction[END_REF] takes several hours (partially on GPU) compared to the few minutes we take to texture our huge models. Finally, the whole dataset is textured in less than 30 computing hours. The sensor mesh reconstruction is quite novel but very simple.
We believe that such a textured mesh can find multiple applications, directly through visualization of a mobile mapping acquisition, or more indirectly for processing jointly image and LiDAR data: urban scene analysis, structured reconstruction, semantization, ...
PERSPECTIVES
This work leaves however important topics unsolved, and most importantly the handling of overlaps between acquired data, at intersections or when the vehicle passes multiple times in the same scene. We have left this issue out of the scope of the current paper as it poses numerous challenges:
• Precise registration over the overlaps, sometimes referred to as the loop-closure problem.
• Change detection between the overlaps.
• Data fusion over the overlaps, which is strongly connected to change detection and how changes are handled in the final model.
Moreover, this paper proposed a reconstruction from LiDAR only, but we believe that the images hold a pertinent geometric information that could be used to complement the LiDAR reconstruction, in areas occluded to the LiDAR but not to the cameras (which often happens as their geometries are different). Finally, an important issue that is partially tackled in the texturation: the presence of mobile objects. Because the LiDAR and images are most of the time not acquired strictly simultaneously, mobile objects might have an incoherent position between image and LiDAR, which is a problem that should be tackled explicitly.
Figure 2. The set of images acquired by the 5 full HD cameras (left, right, up, in front, from behind).
Figure 4. The proposed work-flow to produce large scale models.
Figure 5. Illustration of the acquired frontal images processing.
Figure 6. Decimation of sensor space topology mesh.
Figure 9. Performance evaluation of a chunk of 10s of acquisition.
Table 1. Statistics on the input data per chunk.
ACKNOWLEDGEMENTS
We would like to acknowledge the French ANR project pLaT-INUM (ANR-15-CE23-0010) for its financial support.
APPENDIX
In this appendix, we show more texturing results obtained from the acquired data in Rouen, France. Due to memory constraints, we are not able to explicitly show the entire textured model (17Km). However, we can show a maximum size of 70s (350m) of textured acquisition (Figure 10). |
01764553 | en | [
"spi"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764553/file/doc00028903.pdf | Bruno Jeanneret
email: bruno.jeanneret@ifsttar.fr
Daniel Ndiaye
Sylvain Gillet
Rochdi Trigui
H&HIL: A novel tool to test control strategy with Human and Hardware In the Loop
With this work, the authors try to make HIL simulation more realistic with the introduction of the human driver in the loop. To succeed in this objective we develop a set of tools to easily connect a wide variety of real or virtual devices together: the driver of course, but also racing game joysticks, a real engine, a virtual drivetrain, a virtual driver environment, etc. The detailed approach represents a step forward before testing a control strategy or a new powertrain on a real vehicle. First results showed a good effectiveness and modularity of the tool.
I. INTRODUCTION
The transportation sector is responsible for a wide share of Energy consumption and pollutant emission everywhere in the world. The growth of environmental awareness is today among the most constringent drivers that the researchers and the manufacturers have to consider when designing environmental friendly solutions for drive trains, including electric, hybrid and fuel cell Vehicles. Nevertheless, most of the development and the evaluation of the new solutions still follow classical scheme that includes modeling and testing components and drivetrain under standard driving cycles or in the best case, using preregistered real world driving cycle [START_REF] Oh | Evaluation of motor characteristics for hybrid electric vehicles using the hil concept[END_REF], [START_REF] Trigui | Performance comparison of three storage systems for mild hevs using phil simulation[END_REF], [START_REF] Bouscayrol | Hardware in the loop simulation in "Control and Mechatronics[END_REF], [START_REF] Shidore | Phev 'all electric range' and fuel economy in charge sustaining mode for low soc operation of the jcs vl41m li-ion battery using battery hil[END_REF], [START_REF] Castaings | Comparison of energy management strategies of a battery/supercapacitors system for electric vehicle under real-time constraints[END_REF] and [START_REF] Verhille | Hardwarein-the-loop simulation of the traction system of an automatic subway[END_REF]. This methodology of design does not include the variability of the driving conditions generated by the two major factors that are the driver behavior and the infrastructure influence (elevation, turns, speed limitation, traffic jam, traffic lights, . . . ).
Moreover, the progress registered in the Intelligent Transportation Systems (ITS) makes it possible today to optimize the actual use of the vehicles (classical, EVs, HEVs) by capturing instantaneous information about the infrastructure (road profile, road signs), the traffic and also about the weather conditions. These informations are used for HEV and PHEVs to optimize their energy management and for mono-sources vehicles (classical, EV) to give inputs to the ADAS systems in order to perform a lower consumption and emission (ecodriving concepts for example). The simulation and testing using standard or preregistered driving cycles is therefore too limited to take into account all these aspects. The need is then to develop new simulation and testing schemes able to consider a more realistic modeling of the vehicle environment and to include the driver or its model in the simulation/testing loop.
The methodology presented in this paper is based on a progressive and modular approach for simulating and testing in a HIL configuration different types of drive trains while including the vehicle environment models and the driver. The modularity is developed considering two axes:
• virtual to real : The ability of the developed system ranges from all simulation configuration (SIL) to driver + hardware in the loop simulation (H&HIL)
• multiplatform: The communication protocol between different systems or models allows easy exchange of systems and simulation platforms (energetic and dynamic vehicle model, different driving simulator configurations, power plant in a HIL configuration, . . . ). The chosen protocol, namely the CAN (Controller Area Network), can also easily be used to properly address further steps like vehicle prototype design
In the following sections we will describe the developed concepts and tools. The last chapter presents two applications made with the tool. A conclusion lists applications that can be developped with the facility.
II. TOOL DESCRIPTION
A. Architecture of the tool
The main program, named MODYVES, aims at connecting a generic input and a driver to a generic output through a vehicle model. The main architecture of the framework is presented in figure 1. Inputs can be chosen among:
• a gamepad such as the Logitech G27 joystick or equivalent; Modyves uses the Simple DirectMedia Layer (SDL) library to provide low-level access to the device
B. Communication layer
The CAN protocol is widely used in the automotive industry, which is why we selected this kind of network for the communication between the different pieces of hardware. It is easy to implement, and another advantage is that the communication can be secured by checking the Rx time (time of the received frames) against a real-time clock.
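As an illustration of this Rx-time check, the sketch below shows what a receive watchdog could look like in Python. It is not the code of the tool (which relies on the vendor DLLs of the converters described next); it assumes the third-party python-can package, and the channel name, bitrate and timing values are placeholders.

```python
# Minimal sketch (not the authors' code): supervising CAN reception with a
# timeout, in the spirit of securing the link by checking the Rx time.
# Assumes the third-party `python-can` package; channel name, bitrate and
# periods are illustrative only.
import time
import can

RX_PERIOD_S = 0.001               # expected frame period (1 kHz loop, assumption)
RX_TIMEOUT_S = 5 * RX_PERIOD_S    # tolerated silence before declaring a fault

def monitor(bus: can.BusABC) -> None:
    last_rx = time.perf_counter()
    while True:
        msg = bus.recv(timeout=RX_TIMEOUT_S)   # blocking read with timeout
        now = time.perf_counter()
        if msg is None or (now - last_rx) > RX_TIMEOUT_S:
            # Missing or late frame: the application should switch to a safe state.
            raise RuntimeError("CAN Rx watchdog tripped")
        last_rx = now
        # ... decode msg.arbitration_id / msg.data and update the model ...

if __name__ == "__main__":
    # 'PCAN_USBBUS1' is the usual channel name for a PEAK USB adapter (assumption).
    with can.Bus(interface="pcan", channel="PCAN_USBBUS1", bitrate=500000) as bus:
        monitor(bus)
```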
Two different USB-to-CAN converters have been integrated in the tool: a PEAK module and a Systec module.
Both provide a device driver (Dynamic Link Library files) and header files to connect to the module and parametrise it, decode received frames and send transmitted ones.
C. Hardware
Two main hardware platforms have been integrated in our tool: a prototyping hardware commercialised by dSPACE, namely the MicroAutoBox (MABX-II), and a low-cost 32-bit microcontroller from Texas Instruments, the C2000 Peripheral Experimenter Kit equipped with an F28335 control card. Both are supported by The MathWorks: MATLAB and Simulink Coder for the dSPACE hardware, and MATLAB and Embedded Coder for the C2000.
Table I gives a short comparison between the two platforms. These two devices do not offer the same performance and are not suitable for the same kinds of applications.
One can notice the power of the dSPACE board (processor speed and size of RAM and Flash), but the major advantage of this platform is its combination with the RTI interface and the ControlDesk software, which enables a real-time project to be rapidly developed, debugged and validated.
On the other hand, the TI C2000 control card is well suited to a variety of automotive applications. It offers a good processor speed with a moderate but sufficient Flash memory to develop and embed a real-time application at a moderate price. Debugging is, however, far more complex than with the dSPACE product.
D. Software
Modyves is written in Python and is cadenced by a Windows timer, so depending on the computer characteristics and load, it can deviate from its theoretical execution period. Nevertheless, when connected to a real plant, the only critical parts running in Modyves are the driver behavior and the communication layer.
The IFSTTAR driving simulator is a static driving simulator based on a real Peugeot 308 and a software part named SIM2. The cockpit and all commands are unchanged in order to offer the most realistic driving environment to the driver. An embedded electronic card in the vehicle reads all the sensor values from the pedals, the gearbox and the steering wheel. It also controls the force feedback on the steering wheel. The electronic card drives the vehicle dashboard by sending CAN information through the OBD connector. SIM2, the IFSTTAR simulator software, contains various types of models: vehicle, road, sound, traffic and visual models. The road scene is displayed on 5 screens offering up to a 180° horizontal Field Of View. The IFSTTAR driving simulator is used in the field of human factor research, ergonomics studies, energy-efficient driving, advanced training and studies of "Human and Hardware in the Loop" (H&HIL).
VEHLIB is a Simulink library. A framework has been developed over the years around this library to integrate all the component models necessary to develop and simulate conventional, hybrid or all-electric vehicles [START_REF] Vinot | Model simulation, validation and case study of the 2004 THS of Toyota Prius[END_REF] [START_REF] Jeanneret | New Hybrid Concept Simulation Tools, evaluation on the Toyota Prius Car[END_REF]. VEHLIB is developed with a combined backward/forward approach. The forward models are able to run on real-time hardware through their respective connection blocks to the real environment [START_REF] Jeanneret | Mise en oeuvre d une commande temps reel de transmission hybride sur banc moteur[END_REF].
III. EXAMPLES OF APPLICATION
Figure 3 presents the different steps of integration, from pure simulation on a personal computer up to deployment on the final facility. This procedure has been successfully conducted for the IFSTTAR driving simulator. For simplicity, the vehicle model has been compiled on a C2000 hardware instead of linking the dynamic library to the simulator software. As presented in figure 1, the tool allows a large variety of tests and integrations depending on the effort and the final objective. Two applications are presented hereafter: the first one is a model-in-the-loop application with a G27 joystick, the second is a power-hardware-in-the-loop setup with a driving simulator and a virtual vehicle connected to a real engine. Both include a driver in the loop.
A. G27 Joystick and Model in the loop application
A first example introducing the human in the loop consists of a real-time application running in soft real-time mode (i.e. cadenced by a Windows timer). It is easy to implement and needs only a G27 joystick and a computer to run (see example 1 in figure II-D).
As mentioned earlier, this application uses a Windows timer to set the switching time of the application. In order to verify the real execution period of Modyves, the jitter has been monitored on a personal laptop (Intel Core i7 3610QM @ 2.3 GHz running Windows 7) without perturbation (no other program was running on the computer). The theoretical frequency is 1 kHz. The results are presented in figure 4. The mean step time is 0.001002 s, the maximum value is 0.0027 s (only one occurrence during this test, which lasted around 30 seconds) and the minimum value is 0.0009999 s. These values affect neither the driver perception nor the model behavior, because they are far from the time constants of these "systems". Consequently the deviation from the theoretical period is small enough for our application.
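For readers who want to reproduce this kind of measurement, the following minimal Python sketch runs a soft real-time loop and reports the same step-time statistics (mean, minimum, maximum). It is not the Modyves source; the loop rate and duration simply mirror the values quoted above.

```python
# Sketch of a soft real-time loop at 1 kHz with jitter statistics,
# similar in spirit to the measurement reported above (not the authors' code).
import time

PERIOD = 0.001            # 1 kHz theoretical period
N_STEPS = 30_000          # roughly 30 s of test, as in the reported run

def run():
    steps = []
    next_deadline = time.perf_counter() + PERIOD
    last = time.perf_counter()
    for _ in range(N_STEPS):
        # ... driver/communication work would go here ...
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)        # OS timer: actual resolution varies
        now = time.perf_counter()
        steps.append(now - last)
        last = now
        next_deadline += PERIOD
    print(f"mean {sum(steps)/len(steps):.6f} s, "
          f"min {min(steps):.6f} s, max {max(steps):.6f} s")

if __name__ == "__main__":
    run()
```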
B. Driver simulator with HIL application on engine test bench
This is the most complete situation described in figure II-D. In this case, the vehicle model (except the engine) runs on a hard real-time platform, a MicroAutoBox in this particular case. The latter communicates with the engine test bench and exchanges information with it, namely:
• send accelerator pedal position to the engine and rotation speed to the electric generator
• receive actual torque
The bench is presented in figure 7. It runs in the so-called throttle/speed mode. At each time step, the actual torque is measured on the bench, transmitted to the model and introduced into the simulated clutch. It passes through the different components up to the vehicle wheels. The longitudinal motion equation is then solved, which allows the speeds of the different shafts to be computed up to the engine speed, which is in turn transmitted to the bench as the target generator speed. At the same time, the accelerator position is sent to the engine ECU.
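The exchange described above can be summarised by the following schematic Python sketch, in which one coupling step reads the measured torque, advances the vehicle model, and returns the target generator speed and the pedal position. The Bench and Vehicle classes are hypothetical stand-ins; the real implementation runs on the MABX-II and communicates with the bench over CAN.

```python
# Schematic of the model/bench exchange described above, one time step per call.
# Bench and Vehicle are hypothetical placeholders, not the real bench API.
class Bench:
    def read_torque(self) -> float:            # actual engine torque [N m]
        return 80.0                             # placeholder measurement
    def send_generator_speed_target(self, w):   pass
    def send_accelerator_position(self, p):     pass

class Vehicle:
    def __init__(self):
        self.engine_speed = 100.0               # rad/s, placeholder state
    def step(self, clutch_torque: float, dt: float) -> float:
        # resolve the longitudinal motion equation (grossly simplified here)
        inertia, load_torque = 0.2, 60.0
        self.engine_speed += (clutch_torque - load_torque) / inertia * dt
        return self.engine_speed

def coupling_step(vehicle: Vehicle, bench: Bench, pedal: float, dt: float):
    torque = bench.read_torque()                       # 1. measure actual torque
    speed = vehicle.step(torque, dt)                   # 2. propagate to the wheels
    bench.send_generator_speed_target(speed)           # 3. return target dyno speed
    bench.send_accelerator_position(pedal)             # 4. pedal position to the ECU

if __name__ == "__main__":
    v, b = Vehicle(), Bench()
    for _ in range(1000):                              # 1 s at 1 kHz
        coupling_step(v, b, pedal=0.3, dt=0.001)
    print(f"engine speed after 1 s: {v.engine_speed:.1f} rad/s")
```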
The driving simulator is presented in figure 8. One can notice that fuel cut-off is effective in the engine simulation when the vehicle decelerates (figure 6) but is not present on the bench (figure 10), where cut-off is not enabled in the actual engine ECU.
IV. CONCLUSION
In this paper, a generic framework has been presented to develop and test mechatronic applications in a progressive way. The facility is used in the laboratory to test ADAS (Advanced Driver Assistance Systems) and makes it possible to take into account the driver behavior as well as a realistic vehicle environment. In this context, not only is the fuel consumption measured on a real engine, but engine emissions can also be measured thanks to the gas analyzers available in the laboratory, including CO, HC, NOx and particle measurements.
A number of applications could be performed with this facility:
• the coupled setting could be used to approach the real-use behavior of the vehicle and the ICE instead of performing only standard driving cycles. In fact, the new emission regulations consider pollutant measurement on a real track in a procedure called Real Driving Emissions (RDE) using portable devices (PEMS). In order to anticipate this phase, the coupled test bench could help in engine design and tuning to reduce real-world emissions
• the facility could embed ADAS systems. They can easily be implemented and tested in urban, road or highway contexts with a human driver in the loop. For example, assistance systems for eco-driving could be assessed in terms of near-real fuel consumption and pollutant emissions
• the simulator can be a platform to evaluate different degrees of driving delegation towards the autonomous vehicle. For example, it is easy to simulate a manual or automatic gearbox, or to test different kinds of speed regulators, speed limiters...
• hybrid vehicles can be emulated in the real-time model, or components can be introduced on the bench to emulate a hybrid vehicle. A clutch can also be physically present to test the all-electric mode of a parallel hybrid vehicle. With this kind of application, several energy management laws can be quickly implemented and tested in an almost real environment. A further step for the software would consist of generalizing it to allow the core module to connect to any kind of device, as described in figure 11.
Fig. 1. The Modyves framework
Fig. 2. VEHLIB in one H&HIL configuration
Fig. 3. The different steps of integration
Fig. 4. Jitter for a 1 kHz application
Fig. 5. Vehicle speed and driver actuators
Fig. 6.
Fig. 7. Engine on the test bench
Fig. 8. Driving simulator
Figures 9 and 10 illustrate the behaviour of the driver and the response of the engine on the test bench.
Fig. 9. Vehicle speed and driver actuators
Fig. 10.
Fig. 11. The engine for software integration
TABLE I. HARDWARE CHARACTERISTICS
https://www.libsdl.org/
http://www.peak-system.com/
http://www.systec-electronic.com/ |
01764560 | en | [
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764560/file/gr2017-pub00055873.pdf | Bernd Kister
Stéphane Lambert1
Bernard Loup
IMPACT TESTS ON SMALL SCALE EMBANKMENTS WITH ROCKERY -LESSONS LEARNED
Keywords: rockfall protection embankment (RPE), impact, rockery, block shape, ratio rotational to translational energy, freeboard, energy dissipation
In the project AERES (Analysis of Existing Rockfall Embankments of Switzerland) smallscale quasi-2D-experiments had been done with embankments with stones placed parallel to the slope and stones placed horizontally. The experiments showed that rotating cylinders acting as impactors may surmount an embankment with batter 2:1, even if the freeboard is chosen to 1.5 times the block diameter. So a slope with an inclination of about 60° and equipped with rockery in general does not guarantee that a freeboard of one block diameter will be sufficient as described by the Austrian standard. During the test series also a block with an octagonal cross section had been used. This block, with no or only very low rotation, on the other hand was not able to surmount an embankment with rockery and a freeboard of about 0.8 times the block diameter. The evaluation of test data showed additional that the main part of energy dissipation occurs during the first 6 ms of the impact process. At least 75% to 85% of the block's total kinetic energy will be transformed into compression work, wave energy and heat when the block hits the embankment.
INTRODUCTION
A common measure used in Switzerland to stabilize steep slopes of rockfall protection embankments is the use of rockery. According to the interviews done with employees of canton departments as well as with design engineers the rockery used in the past at the uphill slope of an embankment was constructed with a batter between 60° to 80°. But in general only less attention had been paid on the behavior of such natural stone walls during the impact of a block. The main reasons to use rockery was to limit the area, which is necessary for embankment construction, and/or to stop rolling blocks.
To check the experimental results of [START_REF] Hofmann | Bemessungsvorschlag für Steinschlagschutzdämme[END_REF] and to study the impact process of blocks impacting rockfall protection embankments (RPE) with a rockery cover at the uphill slope, small scale quasi-2D-experiments have been executed and analyzed at the Lucerne University of Applied Sciences and Arts (HSLU) during the project AERES [START_REF] Kister | Analysis of Existing Rockfall Embankments of Switzerland (AERES)[END_REF].
The load case "impact of a block onto an embankment" may be divided into two scenarios:
The embankment is punched-through by the block and the embankment is surmounted by the block. The first one is a question of stability of the construction itself while the second one concerns the fitness for purpose of a RPE. The most significant results of the tests done in the project AERES concerning the fitness for purpose of a RPE are shown below.
TEST CONDITIONS
Two types of rockery with different orientations of the stones at the "uphill" slope were studied. To create a relatively "smooth" rockery surface, the stones were placed parallel to the slope (Fig. 1a). For a "rough" rockery surface, the stones were placed horizontally, resulting in a graded slope (Fig. 1b). Additional impact tests were carried out on an embankment with a bi-linear slope with rockery at the lower part and soil at the upper part (Fig. 1c). The batter of the "rockery" was chosen to be 2:1, 5:2 and 5:1 in the tests. Three types of impactors were used: a concrete cylinder G with a ratio of rotational to translational energy > 0.3, a hollow cylinder GS with a triaxial acceleration sensor inside and a ratio of rotational to translational energy between 0.1 and 0.2, and a block OKT with an octagonal cross section and no or only very low rotation. The ratio of rotational to translational energy of block GS corresponds very well with the results of in-situ tests carried out with natural blocks [START_REF] Usiro | An experimental study related to rockfall movement mechanism[END_REF]. The impact translational velocity in most tests was between 6 m/s and 7 m/s. Transformed to a prototype embankment with a height of approx. 7 m, this corresponds to real-world block velocities of about 18 m/s to 21 m/s [START_REF] Kister | Analysis of Existing Rockfall Embankments of Switzerland (AERES)[END_REF].
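As a rough consistency check of this model-to-prototype conversion, the small script below applies Froude similitude (velocities scaling with the square root of the geometric scale). The 1:9 length scale is inferred from the quoted figures and is not stated explicitly in the paper.

```python
# Quick consistency check of the model-to-prototype velocity conversion,
# assuming Froude similitude (velocity scales with sqrt(length scale)).
# The 1:9 geometric scale is an inference, not a value given in the paper.
length_scale = 9.0                        # prototype / model (assumed)
for v_model in (6.0, 7.0):                # measured impact velocities [m/s]
    v_prototype = v_model * length_scale ** 0.5
    print(f"{v_model:.1f} m/s in the model ~ {v_prototype:.1f} m/s at prototype scale")
# prints ~18 and ~21 m/s, matching the values quoted above
```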
FREEBOARD
The statements of Hofmann & Mölk [START_REF] Hofmann | Bemessungsvorschlag für Steinschlagschutzdämme[END_REF] concerning the freeboard have been transferred to the Austrian technical guideline ONR 24810:2013 [2], which states that the freeboard for an embankment with riprap (resp. rockery) and a slope angle of 50° or more should be at least one block diameter. To determine the maximum climbing height of a block during an impact, and to obtain information about the influence of the roughness of the rockery surface, two impact
tests on embankment models with a batter of 2:1 but with different "rockery roughness" were carried out. For these tests the freeboard was chosen to be approximately 1.9 times the block diameter. This value is slightly less than the value of 2 times the block diameter specified in [2] for the freeboard of pure soil embankments, but significantly larger than the minimum value specified for embankments with rockery. The impact point was at a level where the embankment thickness is larger than three times the block diameter (Fig. 2), so according to [START_REF] Kister | Development of basics for dimensioning rock fall protection embankments in experiment and theory[END_REF] there was no risk that the embankment would be punched through. For the "rough" rockery surface the climbing height of block GS was 1.8 times the block diameter for the first impact and 1.55 times the block diameter for the second impact (Fig. 2). The "rough" surface of the rockery led to a larger climbing height of the block than the "smooth" surface, although the block velocities were very similar. The first impact of the block resulted in a larger climbing height than the second impact for both surface roughness types.
The tests showed that a slope with an inclination of about 60° and equipped with rockery in general does not guarantee that a freeboard of approx. one block diameter will be sufficient as described by the Austrian standard.
BLOCK SHAPE AND BLOCK ROTATION
During the test series the block OKT, with an octagonal cross section and no or only very low rotation, was not able to surmount an embankment with rockery if a crest to block diameter ratio of approx. 1.1 was chosen, even though the freeboard was only about 0.8 times the block diameter. Fig. 3 shows the trajectories of the concrete cylinder G with rotation and of the block OKT impacting an embankment with stones placed horizontally at the "uphill" slope, with a rockery batter of 5:2. The difference in the impact translational velocities of the two blocks was about 7%, which is within the measurement error (G: 5.9 m/s, OKT: 6.3 m/s). Block shape and block rotation therefore play a significant role in the impact process.
ENERGY DISSIPATION
The evaluation of the test data obtained in the project AERES showed that the main part of the energy dissipation occurs during the first 6 ms of the impact process. During this period the block translational velocity is reduced to less than half its value for all three types of impactors used in the project. Differences in the loss of block velocity and block energy between the three impactors mainly occur after this large drop of velocity and energy.
CONCLUSION
The following parameters were found to dominate the impact process and to determine whether the embankment is surmounted by a block or punched through: the total block energy, the ratio of rotational to translational block energy, the impact angle (a function of the block trajectory and the slope inclination), the shape of the block, and the embankment thickness at the impact point. These parameters are mainly responsible for the fitness for purpose of a rockfall protection embankment. The experiments have shown that there are interactions between these parameters which could not be resolved in detail with the existing experimental set-up. Further research has to be done to determine the freeboard in the case of blocks with a natural shape and a ratio of rotational to translational energy between the limits 0.1 and 0.2.
Fig. 1: Orientation of stones at the "uphill" slope: a) upright, b) horizontal, c) upright, upper slope without stones and reduced slope angle
Fig. 2: Max. climbing height CH_max of impactor GS, stones placed horizontally, freeboard FB = 1.9*2r: a) first impact: CH_max approx. 1.8*2r, b) second impact: CH_max approx. 1.55*2r, 2r = block diameter
Fig. 3: Influence of block shape and block rotation: trajectories of cylinder G (a) and block OKT (b), embankment with stones placed horizontally at the "uphill" slope, batter of rockery 5:2
1 Lucerne University of Applied Sciences and Arts, Technikumstrasse 21, CH-6048 Horw, Switzerland; since 2017: kister - geotechnical engineering & research, Neckarsteinacher Str. 4 B, D-Neckargemünd, +49 6223 71363, kister-ger@t-online.de
2 irstea, 2 rue de la Papeterie - BP 76, 38402 Saint-Martin-d'Hères cedex, France, +33 (0)4 76 76 27 94, stephane.lambert@irstea.fr
3 Federal Office for the Environment (FOEN), 3003 Bern, Switzerland, Tel. +41 58 465 50 98, bernard.loup@bafu.admin.ch |
01764616 | en | [
"phys.phys.phys-flu-dyn"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01764616/file/CnF.pdf | Pranav Chandramouli
email: pranav.chandramouli@inria.fr
Dominique Heitz
Sylvain Laizet
Etienne Mémin
Coarse large-eddy simulations in a transitional wake flow with flow models under location uncertainty
The focus of this paper is to perform coarse-grid large eddy simulation (LES) using recently developed sub-grid scale (SGS) models of cylinder wake flow at Reynolds number (Re) of 3900. As we approach coarser resolutions, a drop in accuracy is noted for all LES models but more importantly, the numerical stability of classical models is called into question. The objective is to identify a statistically accurate, stable sub-grid scale (SGS) model for this transitional flow at a coarse resolution. The proposed new models under location uncertainty (MULU) are applied in a deterministic coarse LES context and the statistical results are compared with variants of the Smagorinsky model and various reference data-sets (both experimental and Direct Numerical Simulation (DNS)). MULU are shown to better estimate statistics for coarse resolution (at 0.46% the cost of a DNS) while being numerically stable. The performance of the MULU is studied through statistical comparisons, energy spectra, and sub-grid scale (SGS) contributions. The physics behind the MULU are characterised and explored using divergence and curl functions. The additional terms present (velocity bias) in the MULU are shown to improve model performance. The spanwise periodicity observed at low Reynolds is achieved at this moderate Reynolds number through the curl function, in coherence with the birth of streamwise vortices.
Large Eddy Simulation, Cylinder Wake Flow
Introduction
Cylinder wake flow has been studied extensively, starting with the experimental works of Townsend (1; 2) through to the numerical works of Kravchenko and others (3; 4; 5). The flow exhibits a strong dependence on the Reynolds number Re = U D/ν, where U is the inflow velocity, D is the cylinder diameter and ν is the kinematic viscosity of the fluid. Beyond a critical Reynolds number Re ∼ 40 the wake becomes unstable, leading to the well-known von Kármán vortex street. The eddy formation remains laminar until it is gradually replaced by turbulent vortex shedding at higher Re. The shear layers remain laminar until Re ∼ 400, beyond which the transition to turbulence takes place up to Re = 10^5; this regime, referred to as the sub-critical regime, is the focus of this paper.
The transitional nature of the wake flow in the sub-critical regime, especially in the shear layers is a challenging problem for turbulence modelling and hence has attracted a lot of attention. The fragile stability of the shear layers leads to more or less delayed roll-up into von Kármán vortices and shorter or longer vortex formation regions. As a consequence significant discrepancies have been observed in near wake quantities both for numerical simulations [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF] and experiments [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF].
Within the sub-critical regime, 3900 has been established as a benchmark Re. The study of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] provides accurate experimental data-set showing good agreement with previous numerical studies contrary to early experimental datasets [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF]. The early experiments of Lourenco and Shih (9) obtained a V-shaped mean streamwise velocity profile in the near wake contrary to the U-shaped profile obtained by [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF]. The discrepancy was attributed to inaccuracies in the experiment -a fact confirmed by the studies of [START_REF] Mittal | Suitability of Upwind-Biased Finite Difference Schemes for Large-Eddy Simulation of Turbulent Flows[END_REF] and (4). Parnaudeau et al.'s [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] experimental database, which obtains the U-shaped mean profile in the near wake, is thus becoming useful for numerical validation studies. With increasing computation power, the LES data sets at Re = 3900 have been further augmented with DNS studies performed by [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF].
The transitional nature of the flow combined with the availability of validated experimental and numerical data-sets at Re = 3900 makes this an ideal flow for model development and comparison. The LES model parametrisation controls the turbulent dissipation. A good SGS model should ensure suitable dissipation mechanism. Standard Smagorinsky model [START_REF] Smagorinsky | General circulation experiments with the primitive equations[END_REF] based on an equilibrium between turbulence production and dissipation, has a tendency to overestimate dissipation in general [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF]. In transitional flows, where the dissipation is weak, such a SGS model leads to laminar regimes, for example, in the shear layers for cylinder wake. Different modifications of the model have been proposed to correct this behaviour. As addressed by [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF], who introduced relevant improvements, the model coefficients exhibit a strong dependency both on the ratio between the integral length scale and the LES filter width, and on the ratio between the LES filter width and the Kolmogorov scale. In this context of SGS models, coarse LES remains a challenging issue.
The motivation for coarse LES is dominated by the general interest towards reduced computational cost which could pave the way for performing higher Re simulations, sensitivity analyses, and Data Assimilation (DA) studies. DA has gathered a lot of focus recently with the works of ( 13), [START_REF] Gronskis | Inflow and initial conditions for direct numerical simulation based on adjoint data assimilation[END_REF], and [START_REF] Yang | Enhanced ensemble-based 4dvar scheme for data assimilation[END_REF] but still remains limited by computational requirement.
With the focus on coarse resolution, this study analyses the performance of LES models for transitional wake flow at Re 3900. The models under location uncertainty (16; 17) are analysed in depth for their performance at a coarse resolution and compared with classical models. The models are so called as the equations are derived assuming that the location of a fluid parcel is known only up to a random noise i.e. location uncertainty. Within this reformulation of the Navier-Stokes equations, the contributions of the subgrid scale random component is split into an inhomogeneous turbulent diffusion and a velocity bias which corrects the advection due to resolved velocity field. Such a scheme has been shown to perform well on the Taylor Green vortex flow [START_REF] Harouna | Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling[END_REF] at Reynolds number of 1600, 3000, and 5000. The new scheme was shown to outperform the established dynamic Smagorinky model especially at higher Re. However, this flow is associated to an almost isotropic turbulence and no comparison with data is possible (as it is a pure numerical flow). Here we wish to assess the model skills with respect to more complex situations (with laminar, transient and turbulent areas) and coarse resolution grids. We provide also a physical analysis of the solutions computed and compare them with classical LES schemes and experimental data. Although the models are applied to a specific Reynolds number, the nature of the flow generalises the applicability of the results to a wide range of Reynolds number from 10 3 -10 5 , i.e. up to the pivotal point where the transition into turbulence of the boundary layer starts at the wall of the cylinder. The goal is to show the ability of such new LES approaches for simulation at coarse resolution of a wake flow in the subcritical regime. Note that recently for the same flow configuration, [START_REF] Resseguier | Stochastic modelling and diffusion modes for proper orthogonal decomposition models and small-scale flow analysis[END_REF] have derived the MULU in a reduced-order form using Proper Orthogonal Decomposition (POD), successfully providing physical interpretations of the local corrective advection and diffusion terms. The authors showed that the near wake regions like the pivotal zone of the shear layers rolling into vortices are key players into the modelling of the action of small-scale unresolved flow on the resolved flow .
In the following, we will show that the MULU are able to capture, in the context of coarse simulation, the essential physical mechanisms of the transitional very near wake flow. This is due to the split of the SGS contribution into directional dissipation and velocity bias. The next section elaborates on the various SGS models analysed in this study followed by a section on the flow configuration and numerical methods used. A comparison of the elaborated models and the associated physics is provided in the results section. Finally, a section of concluding remarks follows.
Models under location uncertainty
General classical models such as Smagorinsky or Wall-Adaptive Local Eddy (WALE) viscosity model proceed through a deterministic approach towards modelling the SGS dissipation tensor. However, [START_REF] Mémin | Fluid flow dynamics under location uncertainty[END_REF] suggests a stochastic approach towards modelling the SGS contributions in the Navier-Stokes (NS) equation. Building a stochastic NS formulation can be achieved via various methods. The simplest way consists in considering an additional additive random forcing [START_REF] Bensoussan | Equations stochastiques du type Navier-Stokes[END_REF]. Other modelling considered the introduction of fluctuations in the subgrid models (21; 22). Also, in the wake of Kraichnan's work [START_REF] Kraichnan | The structure of isotropic turbulence at very high Reynolds numbers[END_REF] another choice consisted in closing the large-scale flow in the Fourier space from a Langevin equation (24; 25; 26). Lagrangian models based on Langevin equation in the physical space have been also successfully proposed for turbulent dispersion [START_REF] Sawford | Generalized random forcing in random-walk models of turbu[END_REF] or for probability density function (PDF) modelling of turbulent flows (28; 29; 30). These attractive models for particle based representation of turbulent flows are nevertheless not well suited to a large-scale Eulerian modelling.
In this work we rely on a different stochastic framework of the NS equation recently derived from the splitting of the Lagrangian velocity into a smooth component and a highly oscillating random velocity component (i.e. the uncertainty in the parcel location expressed as velocity) [START_REF] Mémin | Fluid flow dynamics under location uncertainty[END_REF]:
dX_t/dt = u(X_t, t) + σ(X_t, t) Ḃ
The first term, on the right-hand side represents the large-scale smooth velocity component, while the second term is the small-scale component. This latter term is a random field defined from a Brownian term function Ḃ = dB/dt and a diffusion tensor σ. The small-scale component is rapidly decorrelating at the resolved time scale with spatial correlations (which might be inhomogeneous and non stationary) fixed through the diffusion tensor. It is associated with a covariance tensor:
Q_ij(x, y, t, t') = E[ (σ(x, t) dB_t)(σ(y, t') dB_{t'})^T ] = c_ij(x, y, t) δ(t − t') dt.   (1)
In the following the diagonal of the covariance tensor, termed here the variance tensor, plays a central role; it is denoted as a(x) = c(x, x, t). This tensor is a (3 × 3) symmetric positive definite matrix with the dimension of a viscosity (m² s⁻¹). With such a decomposition, the rate of change of a scalar within a material volume is given through a stochastic representation of the Reynolds Transport Theorem (RTT) (16; 17). For an incompressible small-scale random component (∇•σ = 0) the RTT has the following expression:

d ∫_{V(t)} q = ∫_{V(t)} { d_t q + [ ∇•(q u*) − (1/2) Σ_{i,j=1}^{d} ∂_{x_i}(a_{ij} ∂_{x_j} q) ] dt + ∇q • σ dB_t } dx,   (2)
where the effective advection u* is defined as:

u* = u − (1/2) ∇ • a.   (3)
The first term on the right-hand side represents the variation of the quantity q with respect to time: d_t q = q(x, t+dt) − q(x, t). It is similar to the temporal derivative. It is important to note here that q is a non-differentiable random function that depends, among other things, on the particles driven by the Brownian component and flowing through a given location. The second term on the right-hand side stands for the scalar transport by the large-scale velocity. However, it can be noticed that this scalar advection is not purely a function of the large-scale velocity. Indeed, the large-scale velocity is here affected by the inhomogeneity of the small-scale component through a modified large-scale advection (henceforth termed the velocity bias u*), where the effect of the fluctuating component is taken into account via the small-scale velocity auto-correlations a = σσ^T. A similar modification of the large-scale velocity was also suggested in random-walk Langevin models by [START_REF] Macinnes | Stochastic particle dispersion modeling and the tracer-particle limit[END_REF], who studied various stochastic models for particle dispersion; they concluded that an artificially introduced bias velocity to counter particle drift was necessary to optimise the models for a given flow. In the framework of modelling under location uncertainty, this term appears automatically. The third term in the stochastic RTT corresponds to a diffusion contribution due to the small-scale components. This can be compared with the SGS dissipation term in LES modelling. This dissipation term corresponds to a generalization of the classical SGS dissipation term, which ensues in the usual context from the Reynolds decomposition and Boussinesq's eddy viscosity assumption. Here it represents the mixing effect exerted by the small-scale component on the large-scale component. Despite originating from a very different construction, in the following, for ease of reference, we keep designating this term as the SGS contribution. The final term in the equation is the direct scalar advection by the small-scale noise.
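To make the role of the velocity bias concrete, the following sketch evaluates u* = u − (1/2)∇·a on a uniform grid with simple numpy finite differences. It is only an illustration of the term: the flow solver used later in the paper relies on sixth-order compact schemes, and the grid and fields below are synthetic.

```python
# Sketch: evaluating the effective advection u* = u - 0.5 * div(a) on a
# uniform grid with numpy (illustration only; the production solver uses
# high-order compact schemes instead of np.gradient).
import numpy as np

def velocity_bias(u, a, dx):
    """u: (3, nx, ny, nz) resolved velocity; a: (3, 3, nx, ny, nz) variance
    tensor; dx: (dx, dy, dz) grid spacings. Returns u* = u - 0.5 * div(a)."""
    div_a = np.zeros_like(u)
    for j in range(3):                       # j-th component of div(a)
        for i in range(3):                   # sum_i d(a_ij)/dx_i
            div_a[j] += np.gradient(a[i, j], dx[i], axis=i)
    return u - 0.5 * div_a

# synthetic example
nx = ny = nz = 16
grid = np.meshgrid(*(np.linspace(0, 1, n) for n in (nx, ny, nz)), indexing="ij")
u = np.stack([np.sin(2 * np.pi * grid[0]),
              np.zeros((nx, ny, nz)),
              np.zeros((nx, ny, nz))])
a = 1e-3 * np.ones((3, 3, nx, ny, nz))       # constant a => zero divergence
u_star = velocity_bias(u, a, (1/(nx-1), 1/(ny-1), 1/(nz-1)))
print(np.allclose(u_star, u))                # True: no bias for a homogeneous a
```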
It should be noted that the RTT corresponds to the differential of the composition of two stochastic processes. The Ito formulae, which is restricted to deterministic functions of a stochastic process, does not apply here. An extended formulae know as Ito-Wentzell (or generalized Ito) formulae must be used instead [START_REF] Kunita | Stochastic Flows and Stochastic Differential Equations[END_REF].
Using the Stochastic RTT, the large-scale flow conservation equations can be derived (for the full derivation please refer to (16; 17)). The final conservation equations are presented below:
Mass conservation:
d_t ρ + ∇ • (ρ u*) dt + ∇ρ • σ dB_t = (1/2) ∇ • (a ∇ρ) dt,   (4)
which simplifies to the following constraints for an incompressible fluid:
∇ • σ = 0,    ∇ • u* = 0,   (5)
The first constraint maintains a divergence free small-scale velocity field, while the second imposes the same for the large smooth effective component.
We observe that the large-scale component, u, is allowed to be diverging, with a divergence given by ∇ • ∇ • a. As we shall see, this value is in practice quite low. This weak incompressibility constraint results in a modified pressure computation, which is numerically not difficult to handle. Imposing instead a stronger incompressibility constraint on u introduces an additional cumbersome constraint on the variance tensor (∇ • ∇ • a = 0). In this work we will rely on the weak form of the incompressibility constraint. The largescale momentum equation boils down to a large-scale deterministic equation after separation between the bounded variation terms (i.e. terms in "dt") and the Brownian terms, which is rigorously authorized -due to uniqueness of this decomposition. Momentum conservation:
[ ∂_t u + u ∇^T( u − (1/2) ∇ • a ) − (1/2) Σ_{ij} ∂_{x_i}( a_{ij} ∂_{x_j} u ) ] ρ = ρ g − ∇p + µ ∆u.   (6)
Similar to the deterministic version of the NS equation, we have the flow material derivative, the forces, and viscous dissipation. The difference lies in the modification of the advection which includes the velocity bias and the presence of the dissipation term which can be compared with the SGS term present in the filtered NS equation. Both the additional terms present in the stochastic version are computed via the auto-correlation tensor a. Thus to perform a LES, one needs to model, either directly or through the smallscale noise, the auto-correlation tensor. Two methodologies can be envisaged towards this: the first would be to model the stochastic small-scale noise (σ(X t , t) Ḃ) and thus evaluate the auto-correlation tensor. We term such an approach as purely 'stochastic LES'. The second method would be to model the auto-correlation tensor directly as it encompasses the total contribution of the small scales. This method can be viewed as a form of 'deterministic LES' using stochastically derived conservation equations and this is the approach followed in this paper. The crux of the 'deterministic LES' approach thus revolves around the characterisation of the auto-correlation tensor. The small-scale noise is considered subsumed within the mesh and is not defined explicitly. This opens up various possibilities for turbulence modelling. The specification of the variance tensor a can be performed through an empirical local velocity fluctuation variance times a decorrelation time, or by physical models/approximations or using experimental measurements. The options explored in this study include physical approximation based models and empirical local variance based models as described below. Note that this derivation can be applied to any flow model. For instance, such a modelling has been successfully applied to derive stochastic large-scale representation of geophysical flows by (17; 33; 34).
A similar stochastic framework arising also from a decomposition of the Lagrangian velocity has been proposed in [START_REF] Holm | Variational principles for stochastic fluid dynamics[END_REF] and analysed in [START_REF] Crisan | Solution properties of a 3d stochastic Euler fluid equation[END_REF] and [START_REF] Cotter | Stochastic partial differential fluid equations as a diffusive limit of deterministic Lagrangian multi-time dynamics[END_REF]. This framework leads to enstrophy conservation whereas the formulation under location uncertainty conserves the kinetic energy of a transported scalar [START_REF] Resseguier | Geophysical flows under location uncertainty, part I: Random transport and general models[END_REF].
Physical approximation based models:
Smagorinsky's work on atmospheric flows and the corresponding model developement is considered to be the pioneering work on LES modelling [START_REF] Smagorinsky | General circulation experiments with the primitive equations[END_REF]. Based on Boussinesq's eddy viscosity hypothesis which postulates that the momentum transfer caused by turbulent eddies can be modelled by an eddy viscosity (ν t ) combined with Prandtl's mixing length hypothesis he developed a model (Smag) for characterising the SGS dissipation.
ν t = C||S||, (7)
τ = C||S||S, (8)
where τ stands for the SGS stress tensor, C is the Smagorinsky coefficient defined as (C s ∆) 2 , where ∆ is the LES filter width,
||S|| = 1 2 [ ij (∂ x i u j + ∂ x j u i ) 2 ] 1 2
is the Frobenius norm of the rate of strain tensor, and
S ij = 1 2 ( ∂ ūi ∂x j + ∂ ūj ∂x i ). (9)
Similar to Smagorinsky's eddy viscosity model, the variance tensor for the formulation under location uncertainty can also be specified using the strain rate tensor. Termed in the following as the Stochastic Smagorinsky model (StSm), it specifies the variance tensor similar to the eddy viscosity in the Classical Smagorinsky model:
a(x, t) = C||S||I 3 , (10)
where I 3 stands for 3 × 3 identity matrix and C is the Smagorinsky coefficient.
The equivalency between the two models can be obtained in the following case (as shown by ( 16)):
The SGS contribution (effective advection and SGS dissipation) for the StSm model is:
u_j ∂_{x_j}(∂_{x_j} a_{kj}) + Σ_{i,j} ∂_{x_i}(a_{ij} ∂_{x_j} u_k) = u_j ∂_{x_j}(∂_{x_j} ||S|| δ_{kj}) + Σ_{i,j} ∂_{x_i}(||S|| δ_{ij} ∂_{x_j} u_k) = u_k ∆||S|| + ||S|| ∆u_k + Σ_j ∂_{x_j}||S|| ∂_{x_j} u_k,   (11)
and the SGS contribution for Smagorinsky model (∇ • τ ) is:
∇ • τ = Σ_j ∂_{x_j}(||S|| S) = Σ_j ∂_{x_j}( ||S|| (∂_{x_j} u_k + ∂_{x_k} u_j) ) = Σ_j ( ∂_{x_j}||S|| ∂_{x_j} u_k + ∂_{x_j}||S|| ∂_{x_k} u_j ) + ||S|| ∆u_k.   (12)
An equivalency can be drawn between the two models by adding Σ_j ∂_{x_j}||S|| ∂_{x_k} u_j − u_k ∆||S|| to the StSm model. The additional term may also be written as:
∂_{x_k} Σ_j ∂_{x_j}(||S||) u_j − Σ_j ∂_{x_j} ∂_{x_k}(||S||) u_j − u_k ∆||S||,   (13)
where the first term represents a velocity gradient which can be included within a modified pressure term as is employed for Smagorinsky model. The other two terms can be neglected for smooth enough strain rate tensor function. For smooth deformations both models are equivalent in terms of dissipation. It is important to note here that even if the effective advection is ignored in the StSm model, the two models still differ in a general case due to the first two terms in (13). Smagorinsky's pioneering work remains to date a popular model for LES, however it has certain associated drawbacks. The model assumes the existence of equilibrium between the kinetic energy flux across scale and the large scales of turbulence -this equilibrium is not established in many cases such as the leading edge of an airplane wing or turbulence with strong buoyancy. In addition, a form of arbitrariness and user dependency is introduced due to the presence of the Smagorinsky coefficient. This coefficient is assumed to be constant irrespective of position and time. Lilly [START_REF] Lilly | The representation of small scale turbulence in numerical simulation experiments[END_REF] suggests a constant coefficient value to be appropriate for the Smagorinsky model. However this was disproved by the works of [START_REF] Meyers | On the model coefficients for the standard and the variational multi-scale Smagorinsky model[END_REF], and ( 39) who shows that a constant value did not efficiently capture turbulence especially in boundary layers.
Numerous attempts were made to correct for the fixed constant such as damping functions [START_REF] Van Driest | The problem of aerodynamic heating[END_REF] or renormalisation group theory [START_REF] Yakhot | Renormalization group analysis of turbulence. I. Basic theory[END_REF]. Germano et al. [START_REF] Germano | A dynamic subgrid-scale eddy viscosity model[END_REF] provided a non ad-hoc manner of calculating the Smagorinsky coefficient varying with space and time using the Germano identity and an additional test filter t (termed as the Dynamic Smagorinsky (DSmag) model).
L_ij = T_ij − τ^t_ij,   (14)
where τ stands for the SGS stress filtered by the test filter t, T is the filtered SGS stress calculated from the test filtered velocity field, and L stands for the resolved turbulent stress. The Smagorinsky coefficient can thus be calculated as:
C_s² = < L_ij M_ij > / < M_ij M_ij >,   (15)

where

M_ij = −2∆² ( α² ||S^t|| S^t_ij − (||S|| S_ij)^t ),   (16)
and α stands for the ratio between the test filter and the LES filter. The dynamical update procedure removes the user dependancy aspect in the model, however it introduces unphysical values for the coefficient at certain instances. An averaging procedure along a homogenous direction is necessary to provide physical values for C s . However, most turbulent flows, foremost being wake flow around a cylinder, lack a homogenous direction for averaging. In such cases, defining the coefficient is difficult and needs adhoc measures such as local averaging, threshold limitation, and/or filtering methods to provide nominal values for C s . For the present study, we focus on the classical and dynamic variations of the Smagorinsky model -these model were used to study cylinder wake flow by ( 8), ( 42), [START_REF] Ouvrard | Classical and variational multiscale LES of the flow around a circular cylinder on unstructured grids[END_REF], among many others.
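The sketch below illustrates how the dynamic coefficient can be evaluated from equations (14)-(16) with a simple box test filter (scipy.ndimage.uniform_filter). It is a plain-averaging illustration only; as noted above, the simulations reported here additionally average along the spanwise direction, filter the coefficient and clip unphysical values.

```python
# Sketch of the dynamic procedure (14)-(16) with a box test filter; purely
# illustrative (grid, filter ratio and averaging choices are assumptions).
import numpy as np
from scipy.ndimage import uniform_filter

def strain(u, dx):
    g = np.array([[np.gradient(u[i], dx[j], axis=j) for j in range(3)]
                  for i in range(3)])
    return 0.5 * (g + g.transpose(1, 0, 2, 3, 4))        # S_ij

def dynamic_cs2(u, dx, delta, alpha=2.0):
    tf = lambda f: uniform_filter(f, size=int(alpha))    # test filter (box)
    S = strain(u, dx)
    nS = np.sqrt(np.sum(S**2, axis=(0, 1)))              # ||S|| (Frobenius norm)
    u_t = np.stack([tf(u[i]) for i in range(3)])         # test-filtered velocity
    S_t = strain(u_t, dx)
    nS_t = np.sqrt(np.sum(S_t**2, axis=(0, 1)))
    L = np.empty((3, 3) + u.shape[1:])
    M = np.empty_like(L)
    for i in range(3):
        for j in range(3):
            L[i, j] = tf(u[i] * u[j]) - u_t[i] * u_t[j]
            M[i, j] = -2.0 * delta**2 * (alpha**2 * nS_t * S_t[i, j]
                                         - tf(nS * S[i, j]))
    num = np.mean(np.sum(L * M, axis=(0, 1)))            # < L_ij M_ij >
    den = np.mean(np.sum(M * M, axis=(0, 1)))            # < M_ij M_ij >
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.standard_normal((3, 16, 16, 16))
    print(dynamic_cs2(u, dx=(0.1, 0.1, 0.1), delta=0.1))
```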
Local variance based models:
As the name states, the variance tensor can be calculated by an empirical covariance of the resolved velocity within a specified local neighbourhood. The neighbourhood can be spatially or temporally located giving rise to two formulations. A spatial neighbourhood based calculation (referred to as Stochastic Spatial Variance model (StSp)) is given as:
a(x, nδt) = 1/(|Γ| − 1) Σ_{x_i ∈ Γ(x)} ( u(x_i, nδt) − ū(x, nδt) )( u(x_i, nδt) − ū(x, nδt) )^T C_sp,   (17)
where ū(x, nδt) stands for the empirical mean around the arbitrarily selected local neighbourhood defined by Γ. The constant C sp is defined as [START_REF] Harouna | Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling[END_REF]:
C_sp = ( res / η )^{5/3} ∆t,   (18)
where res is the resolved length scale, η is the Kolmogorov length scale and ∆t is the simulation time step. A similar local variance based model can be envisaged in the temporal framework; however, it has not been analysed in this paper due to memory limitations.
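A direct way to evaluate (17) is sketched below: the local means over a 7 × 7 × 7 box are obtained with a uniform filter and combined into an empirical covariance, scaled by C_sp. Boundary handling and the numerical value of C_sp are illustrative choices, not those of the production code.

```python
# Sketch of the spatial-variance estimate (17): a(x) is taken as the empirical
# covariance of the resolved velocity over a local 7x7x7 box, scaled by C_sp.
# Boundary handling and the value of C_sp below are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def stsp_variance(u, box=7, c_sp=1.0):
    """u: (3, nx, ny, nz) resolved velocity. Returns a: (3, 3, nx, ny, nz)."""
    n = box**3                                            # |Gamma|
    mean = np.stack([uniform_filter(u[i], size=box) for i in range(3)])
    a = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            mean_prod = uniform_filter(u[i] * u[j], size=box)
            # local covariance: n/(n-1) * (<u_i u_j> - <u_i><u_j>)
            a[i, j] = c_sp * n / (n - 1) * (mean_prod - mean[i] * mean[j])
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.standard_normal((3, 24, 24, 24))
    a = stsp_variance(u, box=7, c_sp=1.0e-4)
    print(a.shape, float(a[0, 0].mean()))
```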
It is important to note that the prefix stochastic has been added to the MULU to differentiate the MULU version of the Smagorinsky model from its classical purely deterministic version. The model equations while derived using stochastic principles are applied in this work in a purely deterministic sense. The full stochastic formulation of MULU has been studied by [START_REF] Resseguier | Geophysical flows under location uncertainty, part I: Random transport and general models[END_REF].
Flow configuration and numerical methods
The flow was simulated using a parallelised flow solver, Incompact3d, developed by [START_REF] Laizet | High-order compact schemes for incompressible flows: A simple and efficient method with quasispectral accuracy[END_REF]. Incompact3d relies on a sixth order finite difference scheme (the discrete schemes are described in [START_REF] Lele | Compact finite difference schemes with spectral-like resolution[END_REF]) and the Immersed Boundary Method (IBM) (for more details on IBM refer to [START_REF] Gautier | A DNS study of jet control with microjets using an immersed boundary method[END_REF]) to emulate a body forcing. The main advantage of using IBM is the ability to represent the mesh in cartesian coordinates and the straightforward implementation of high-order finite difference schemes in this coordinate system. The IBM in Incompact3d has been applied effectively to cylinder wake flow by [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] and to other flows by [START_REF] Gautier | A DNS study of jet control with microjets using an immersed boundary method[END_REF], and (47) among others. A detailed explanation of the IBM as applied in Incompact3d, as well as its application to cylinder wake flow can also be found in [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF]. It is important to note that this paper focuses on the accuracy of the sub-grid models within the code and not on the numerical methodology (IBM/numerical schemes) of the code itself.
The incompressibility condition is treated with a fractional step method based on the resolution of a Poisson equation in spectral space on a staggered pressure grid combined with IBM. While solving the Poisson equation for the stochastic formulation, the velocity bias was taken into account in order to satisfy the stochastic mass conservation constraints. It can be noted that although the solution of the Poisson equation in physical space is computationally heavy, the same when performed in Fourier space is cheap and easily implemented with Fast Fourier transforms. For more details on Incompact3d the authors refer you to [START_REF] Laizet | High-order compact schemes for incompressible flows: A simple and efficient method with quasispectral accuracy[END_REF] and [START_REF] Laizet | Incompact3d: A powerful tool to tackle turbulence problems with up to O(105) computational cores[END_REF].
The flow over the cylinder is simulated for a Re of 3900 on a domain measuring 20D × 20D × πD. The cylinder is placed in the centre of the lateral domain at 10D and at 5D from the domain inlet. For statistical purposes, the centre of the cylinder is assumed to be (0, 0). A coarse mesh resolution of 241 × 241 × 48 is used for the coarse LES (cLES). The cLES discretisation is termed coarse as this resolution is ∼ 6.2% of the resolution of the reference LES of (7) (henceforth referred to as LES -Parn). In terms of Kolmogorov units (η), the mesh size for the cLES is 41η × (7η-60η) × 32η. The Kolmogorov length scale has been calculated based on the dissipation rate and viscosity, where the dissipation rate can be estimated as ∼ U³/L, where U and L are the characteristic velocity scale and the integral length scale. A size range for y is given due to mesh stretching along the lateral (y) direction, which provides a finer mesh in the middle. Despite the stretching, the minimum mesh size for the cLES is still larger than the mesh size of the particle image velocimetry (PIV) reference measurements of (7) (henceforth referred to as PIV -Parn). For all simulations, an inflow/outflow boundary condition is implemented along the streamwise (x) direction, with free-slip and periodic boundary conditions along the lateral (y) and spanwise (z) directions respectively; the size of the spanwise domain has been fixed to πD as set by (8), which was also validated by (7) to be sufficient with periodic boundary conditions. The turbulence is initiated in the flow by introducing a white noise in the initial condition. Time advancement is performed using the third-order Adams-Bashforth scheme. A fixed coefficient of 0.1 is used for the Smagorinsky models as suggested in the literature (43), while a spatial neighbourhood of 7 × 7 × 7 is used for the Stochastic Spatial model. For the dynamic Smagorinsky model, despite the lack of a clear homogeneous direction, a spanwise averaging is employed. In addition, the constant is filtered and a threshold on negative and large positive coefficients is also applied to stabilise the model. Note that the positive threshold is mesh dependent and needs user intervention to specify the limits.
The reference PIV [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] was performed with a cylinder of diameter 12 mm and 280 mm in length placed 3.5D from the entrance of the testing zone in a wind tunnel of length 100 cm and height 28 cm. Thin rectangular end plates placed 240 mm apart were used with a clearance of 20 mm between the plates and the wall. 2D2C measurements were carried out at a free stream velocity of 4.6 m s -1 (Re ∼ 3900) recording 5000 image pairs separated by 25 µs with a final interrogation window measuring 16 × 16 pixels with relatively weak noise. For more details about the experiment refer to [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF].
The high resolution LES of ( 7) was performed on Incompact3d on the same domain measuring 20D × 20D × πD with 961 × 961 × 48 cartesian mesh points. The simulation was performed with the structure function model of ( 49) with a constant mesh size. LES -Parn is well resolved, however, there is a distinct statistical mismatch between LES -Parn and PIV-Parn especially along the centre-line (see figure 1a and figure 1b). Literature suggests that the wake behind the cylinder at a Re ∼ 3900 is highly volatile and different studies predict slightly varied profiles for the streamwise velocity along the centre-line. The averaging time period, the type of model, and the mesh type all affect the centre-line velocity profile. As can be seen in figure 1a and figure 1b, each reference data set predicts a different profile/magnitude for the streamwise velocity profiles. The DNS study of [START_REF] Ma | Dynamics and lowdimensionality of a turbulent near wake[END_REF] does not present the centreline velocity profiles. This provided the motivation for performing a DNS study at Re ∼ 3900 to accurately quantify the velocity profiles and to reduce the mismatch between the existing experimental and simulation datasets. The DNS was performed on the same domain with 1537 × 1025 × 96 cartesian mesh points using Incompact3d with stretching implemented in the lateral (y) direction.
From figure 1a we can see that the DNS and the PIV of Parnaudeau are the closest match among the data sets while significant deviation is seen in the other statistics. In the fluctuating streamwise velocity profiles, the only other data sets that exist are of (50) who performed Laser Doppler Velocimetry (LDV) experiments at Re = 3000 and Re = 5000. Among the remaining data-sets (LES of Parnaudeau, PIV of Parnaudeau, and current DNS) matching profiles are observed for the DNS and PIV despite a magnitude difference. These curves also match the profiles obtained by the experiments of Norberg (50) in shape, i.e. the similar magnitude, dual peak nature. The LES of Parnaudeau [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] is the only data-set to estimate an inflection point and hence is not considered further as a reference. The lower energy profile of the PIV may be attributed to the methods used for calculating the vector fields which employ a large-scale representation of the flow via interrogation win- 1 concisely depicts all the important parameters for the flow configuration as well as the reference datasets. Wake flow around a cylinder was simulated in the above enumerated configuration with the following SGS models: Classic Smagorinsky (Smag), Dynamic Smagorinsky (DSmag), Stochastic Smagorinsky (StSm), and Stochastic Spatial (StSp) variance. In accordance with the statistical comparison performed by [START_REF] Beaudan | Numerical experiments on the flow past a circular cylinder at sub-critical Reynolds number[END_REF], first and second order temporal statistics have been compared at 3 locations (x = 1.06D (top), x = 1.54D(middle), and x = 2.02D(bottom)) in the wake of the cylinder. All cLES statistics are computed (after an initial convergence period) over 90,000 time steps corresponding to 270 non-dimensional time or ∼ 54 vortex shedding cycles. All statistics are also averaged along the spanwise (z) direction. The model statistics are evaluated against the PIV experimental data of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] and the DNS for which the data has been averaged over 400,000 time steps corresponding to 60 vortex sheddings. The work of [START_REF] Parnaudeau | Experimental and numerical studies of the flow over a circular cylinder at Reynolds number 3900[END_REF] suggests that at least 52 vortex sheddings are needed for convergence which is satisfied for all the simulations. In addition, spanwise averaging of the statistics results in converged statistics comparable with the PIV ones. Both DNS and PIV statistics are provided for all statistical comparison, however, the DNS is used as the principal reference when an ambiguity exists between the two references.
Results
In this section, we present the model results, performance analysis and physical interpretations. First, the cLES results are compared with the reference PIV and the DNS. The focus on centreline results for certain comparisons is to avoid redundancy and because these curves show maximum statistical deviation. This is followed by a characterisation and physical analysis of the velocity bias and SGS contributions for the MULU. The section is concluded with the computation costs of the different models.
Coarse LES
For cLES, the MULU have been compared with the classic and dynamic versions of the Smagorinsky model, the DNS, and PIV -Parn. Figure 2 and figure 3 depict the mean streamwise and lateral velocity respectively, plotted along the lateral (y) direction. In the mean streamwise velocity profile (see figure 2a), the velocity deficit behind the cylinder, depicted via the U-shaped profile in the mean streamwise velocity, is captured by all models. The expected downstream transition from a U-shaped to a V-shaped profile is seen for all the models; a delay in transition is observed for the Smag model, which biases the statistics at x = 1.54D and 2.02D. For the mean lateral component (see figure 3), all models display the anti-symmetric quality with respect to y = 0. The Smag model shows maximum deviation from the reference DNS statistics in all observed profiles. All models but Smag capture the profile well, while broadly the StSp and DSmag models better capture the magnitude. As a general trend, the Smag model can be seen to under-predict statistics while the StSm model over-predicts. A better understanding of the model performance can be obtained through Figures 4-6, which depict the second order statistics, i.e. the rms components of the streamwise (< u'u' >) and lateral (< v'v' >) velocity fluctuations and the cross-component (< u'v' >) fluctuations. The transitional state of the shear layer can be seen in the reference statistics by the two strong peaks at x = 1.06D in figure 4a. The magnitude of these peaks is in general under-predicted; however, the best estimate is given by the MULU. The DSmag and Smag models can be seen to under-predict these peaks at all x/D. This peak is eclipsed by a stronger peak further downstream due to the formation of the primary vortices (see figure 4b), which is captured by all the models.
The maxima at the centreline for figure 5 and the anti-symmetric structure for figure 6 are seen for all models. Significant mismatch is observed between the reference and the Smag/StSm models, especially in figures 5a and 6a. In all second-order statistics, the StSm model improves in estimation as we move further downstream. No such trend is seen for the StSp or DSmag models, while a constant under-prediction is seen for all Smag model statistics. This under-prediction could be due to the inherent over-dissipativeness of the Smagorinsky model, which smooths the velocity field. This is corrected by the DSmag/StSm models and in some instances over-corrected by the StSm model. A more detailed analysis of the two formulations under location uncertainty (StSm and StSp) is presented in section 4.2.
The smoothing for each model is better observed in the 3D isocontours of vorticity modulus (Ω) plotted in figure 7. Plotted at non-dimensional Ω = 7, the isocontours provide an understanding of the dominant vortex structures within the flow. While large-scale vortex structures are observed in all flows, the small-scale structures and their spatial extent seen in the DNS are better represented by the MULU. The over-dissipativeness of the Smag model leads to smoothed isocontours with reduced spatial extent. The large-scale vortex structures behind the cylinder exhibit the spanwise periodicity observed by Williamson (53) for cylinder wake flow at low Re ∼ 270. Inferred to be due to mode B instability by Williamson, this spanwise periodicity was associated with the formation of small-scale streamwise vortex pairs. It is interesting to observe here the presence of similar periodicity at higher Re -this periodicity will be further studied at a later stage in this paper.
A stable shear layer associated with higher dissipation is observed in the Smag model, with the shear layer instabilities beginning further downstream than for the MULU. An accurate shear layer comparison can be done by calculating the recirculation length (L_r) behind the cylinder. Also called the bubble length, it is the distance between the base of the cylinder and the point with null longitudinal mean velocity on the centreline of the wake flow. This parameter has been extensively studied due to its strong dependence on external disturbances in experiments and on numerical methods in simulations (54; 4). The effective capture of the recirculation length leads to the formation of a U-shaped velocity profile in the near wake, while the presence of external disturbances can lead to a V-shaped profile as obtained by the experiments of (9). Parnaudeau et al. (7) used this characteristic to effectively parameterise their simulations. The instantaneous contours can provide a qualitative outlook on the recirculation length based on shear layer breakdown and vortex formation. However, in order to quantify the parameter accurately, the mean and rms streamwise velocity fluctuation components were plotted in the streamwise (x) direction along the centreline (see figure 8a and figure 8b). The recirculation length for each model is tabulated in table 2. The StSp and DSmag models capture the size of the recirculation region with 0% error, while the StSm model underestimates the length by 5.9% and the Smag model overestimates it by 15.9%. The magnitude at the point of inflection is accurately captured by all the models (figure 8a).
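A minimal way to extract the recirculation length from the centreline profile is to locate the downstream zero crossing of the mean streamwise velocity and interpolate linearly; the sketch below is an illustration only, and the grid layout and variable names are assumptions rather than the study's actual post-processing.

```python
import numpy as np

def recirculation_length(x_over_D, U_centreline, x_base=0.5):
    """Estimate L_r/D from the mean streamwise velocity along the centreline.

    x_over_D     : streamwise coordinates x/D (cylinder centre at x/D = 0)
    U_centreline : time- and spanwise-averaged streamwise velocity at y = 0
    x_base       : x/D of the cylinder base (0.5 for a unit-diameter cylinder)
    """
    x = np.asarray(x_over_D)
    U = np.asarray(U_centreline)
    mask = x > x_base                      # only look downstream of the base
    x, U = x[mask], U[mask]
    # First index where U changes sign from negative to non-negative.
    sign_change = np.where((U[:-1] < 0.0) & (U[1:] >= 0.0))[0]
    if sign_change.size == 0:
        raise ValueError("No zero crossing found on the centreline.")
    i = sign_change[0]
    # Linear interpolation of the zero crossing between x[i] and x[i+1].
    x_zero = x[i] - U[i] * (x[i + 1] - x[i]) / (U[i + 1] - U[i])
    return x_zero - x_base
```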
For the rms centreline statistics of figure 8b, due to the ambiguity between references, the DNS is chosen for comparison purposes. However, the similar magnitude and dual-peak nature of the profile can be established through both references. This dual-peak nature of the profile was also observed in the experiments of (50), who concluded that, within experimental accuracy, the secondary peak was the slightly larger rms peak, as seen for the DNS. The presence of the secondary peak is attributed to the cross-over of mode B for LES-Parn in figure 1b, despite the simulation being within the transition regime.
The fluctuating centreline velocity profiles for the deterministic Smagorinsky models display an inflection point, unlike the references. The MULU display a hint of the correct dual-peak nature while under-predicting the magnitude, matching the PIV's large-scale magnitude rather than the DNS. Although the Smag model has a second peak magnitude closer to the DNS, the position of this peak is shifted farther downstream. This, combined with the inability of the model to capture the dual-peak nature, speaks strongly against the validity of the Smag model statistics. Further analysis can be done by plotting 2D isocontours of the streamwise fluctuating velocity behind the cylinder, as shown in figure 9. The isocontours are averaged in time and along the spanwise direction. The profiles show a clear distinction between the classical models and the MULU in the vortex bubbles just behind the recirculation region. The vortex bubbles refer to the region in the wake where the initial fold-up of the vortices starts to occur from the shear layers. The MULU match better with the DNS isocontours within this bubble as compared to the Smag or DSmag models. Along the centreline, the MULU under-predict the magnitude, as depicted by the lower magnitude dual peaks in figure 8b. As we deviate from the centreline, the match between the MULU and the DNS improves considerably. The mismatch of the isocontours in the vortex bubbles for the Smag and DSmag models with the DNS suggests that a higher magnitude for the centreline profile is not indicative of an accurate model.

The dual-peak nature of the streamwise velocity rms statistics shows a strong dependence on the numerical model and parameters. This can be better understood via the constant definition within the StSp model formulation (refer to the formulation of Kadri Harouna and Mémin). The constant requires the definition of the scale (res) of the simulation, which is similar to ∆ used in the classic Smagorinsky model, i.e. it defines the resolved length scale of the simulation. In the case of a stretched mesh, the definition of this res can be tricky due to the lack of a fixed mesh size. It can be defined as a maximum (max(dx, dy, dz)), a minimum (min(dx, dy, dz)) or an average ((dx * dy * dz)^(1/3)). A larger value of this parameter signifies a coarser mesh (i.e. a rough resolution), while a small value indicates a finer cut-off scale or a finer mesh resolution. When res is large, corresponding to a "PIV resolution", the centreline streamwise rms statistics display a dual-peak nature with a larger initial peak, similar to the PIV reference. A smaller value for res, corresponding to a "higher resolution LES", shifts this dual peak into a small initial peak and a larger second peak, similar to the DNS and of higher magnitude. The statistics shown above have been obtained with res defined as max(dx, dy, dz) to emulate the coarseness within the model.

Figures 10a and 10b show the power spectra of the streamwise and lateral velocity fluctuations calculated over time using probes at x/D = 3 behind the cylinder along the full spanwise domain. For the model power spectra, 135,000 time-steps were considered, corresponding to a non-dimensional time of 405 which encapsulates ∼ 81 vortex shedding cycles. The Hanning methodology is used to calculate the power spectra with an overlap of 50%, considering 30 vortex shedding cycles in each sequence.
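For reference, a Welch-type estimate with Hann windows and 50% overlap, as described above, can be computed along the lines of the following sketch; the probe layout, sampling step and variable names are assumptions for illustration, not the exact processing chain of the study.

```python
import numpy as np
from scipy import signal

def probe_spectrum(u_probe, dt, f_shedding, cycles_per_segment=30):
    """Power spectral density of a velocity probe signal.

    u_probe    : 1D array of velocity samples at a fixed point (e.g. x/D = 3)
    dt         : non-dimensional time step between samples
    f_shedding : vortex shedding frequency used to normalise the axis
    """
    u = np.asarray(u_probe) - np.mean(u_probe)   # work with the fluctuation
    fs = 1.0 / dt                                # sampling frequency
    # Segment length chosen so that each segment spans the requested
    # number of shedding cycles (here 30, matching the text above).
    nperseg = int(round(cycles_per_segment / (f_shedding * dt)))
    f, psd = signal.welch(u, fs=fs, window="hann",
                          nperseg=nperseg, noverlap=nperseg // 2)
    return f / f_shedding, psd                   # frequency axis in f/f_s
```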
The reference energy spectra (namely HWA) have been obtained from Parnaudeau et al. (7), while the DNS energy spectra have been calculated in the same way as for the cLES. The process of spectra calculation for both the references and the models is identical. All the values have been non-dimensionalised.
The fundamental harmonic frequency (f/f_s = 1) and the second harmonic frequency are captured accurately by all models in the v-spectra. Let us recall that the cLES mesh is coarser than the PIV grid. Twice the vortex shedding frequency is captured by the peak in the u-spectra at f/f_s ∼ 2 as expected; twice the Strouhal frequency is observed due to the symmetry condition at the centreline (Ma et al.). The HWA measurement has an erroneous peak at f/f_s ∼ 1, which was attributed to calibration issues and the cosine law by Parnaudeau et al. (7). All models match with both the reference spectra. One can observe a clear inertial subrange for all models, in line with the expected -5/3 slope. The models in order of increasing energy at the small scales are DSmag < Smag = StSp < StSm. For the StSm model, an accumulation of energy is observed at the smaller scales in the u-spectra, unlike for the StSp model. This suggests that the small-scale fluctuations seen in the vorticity or velocity contours for the StSp model (i.e. in figure 7) are physical structures and not a numerical accumulation of energy at the smaller scales known to occur for LES.
The statistical comparisons show the accuracy and applicability of the MULU. The next sub-section focuses on the physical characterisation of the MULU -SGS dissipation, velocity bias and their contributions are studied in detail.
Velocity bias characterisation
The functioning of the MULU is through the small-scale velocity autocorrelation a. The effect of this parameter on the simulation is threefold: firstly, it contributes to a velocity bias/correction which is a unique feature of the MULU. Secondly, this velocity correction plays a vital part in the pressure calculation to maintain incompressibility. Finally, it contributes to the SGS dissipation similar to classical LES models -this signifies the dissipation occurring at small scales. This threefold feature of the MULU is explored in this section.
The contribution of the velocity bias can be characterised by simulating the MULU (StSm and StSp) with and without the velocity bias (denoted by Nad, for no advection bias) and comparing the statistics (see figures 11a-11b). Only the centreline statistics have been shown for this purpose, as they display maximum statistical variation among the models and provide an appropriate medium for comparison. For the simulations without the velocity bias, the convective part in the NS equations remains purely a function of the large-scale velocity. In addition, the weak incompressibility constraint (5) is not enforced in the simulations with no velocity bias, and the pressure is computed only on the basis of the large-scale velocity. Similarly to the Smagorinsky model, where the gradients of the trace of the stress tensor are considered subsumed within the pressure term, the divergence of the velocity bias is considered subsumed within the pressure term. The simulation parameters and flow configuration remain identical to the cLES configuration.
The statistics show improvement in statistical correlation when the velocity bias is included in the simulation -all statistical profiles show improvement with inclusion of velocity bias but only the centre-line statistics have been shown to avoid redundancy. In the mean profile, the inclusion of velocity bias appears to correct the statistics for both models to match better with the reference. For the StSm model, there is a right shift in the statistics while the opposite is seen for the StSp model. The correction for the StSp model appears stronger than that for the StSm model. This is further supported by the fluctuation profile where without the velocity bias, the StSp model tends to the Smag model with an inflection point while the inclusion of a connection between the large-scale velocity advection and the small-scale variance results in the correct dual peak nature of the references. For the StSm model, figure 11b suggests a reduction in statistical correlation with the inclusion of the velocity bias -this is studied further through 2D isocontours.
Figure 12 plots 2D isocontours of the streamwise fluctuating profiles for the MULU. Once again, an averaging is performed in time and along the spanwise direction. A clear distinction between the models with and without velocity bias is again difficult to observe. However, on closer inspection within the vortex bubbles, we can see that including the velocity bias improves the agreement with the DNS by reducing the bubble size for the StSm model and increasing it for the StSp model. The higher magnitude prediction along the centreline seen for the StSm - Nad model could be the result of an overall bias of the statistics and not of an improvement in model performance; the presence of an inflection point in the profile further confirms the model inaccuracy. This error is corrected in the model with velocity bias. This corrective nature of the bias is further analysed.
For the StSp model, the simulation without the bias has a larger recirculation zone, i.e. it is "over dissipative", and this is corrected by the bias. This result supports the findings of (55), whose structure model, when employed in physical space, applies a similar statistical averaging procedure of square-velocity differences in a local neighbourhood. They found their model to also be over-dissipative in free-shear flows; it did not work for wall flows, as too much dissipation suppressed the development of turbulence, and it had to be turned off in regions of low three-dimensionality. To achieve that, Ducros et al. proposed the filtered-structure-function model, which removes the large-scale fluctuations before computing the statistical average. They applied this model with success to the large-eddy simulation and analysis of transition to turbulence in a boundary layer. For the StSp model, which also displays this over-dissipative quality (without velocity bias), the correction appears to be done implicitly by the velocity bias. Such a velocity correction is consistent with the recent findings of (19), who provided physical interpretations of the local corrective advection due to the turbulence inhomogeneity in the pivotal region of the near-wake shear layers where transition to turbulence takes place. The recirculation length for all cases is tabulated in table 3. Data are obtained from the centreline velocity statistics shown in figure 11a. The tabulated values further exemplify the corrective nature of the velocity bias, where we see an improved estimation of the recirculation length with the inclusion of the velocity bias. Also, a marginal improvement in statistical match, similar to figure 11a, is observed with the inclusion of the velocity bias for all lateral profiles (not shown here). It can be concluded that the inclusion of the velocity bias provides, in general, an improvement to the model.

The physical characteristics of the velocity bias (expressed henceforth as u* = (1/2) ∇ · a) are explored further. The bias u*, having the same units as velocity, can be seen as an extension of, or a correction to, the velocity. Extending this analogy, the divergence of u* is similar to "the divergence of a velocity field". This is the case in the MULU where, to ensure incompressibility, the divergence-free constraints (eq. (5)) are necessary. The stability and statistical accuracy of the simulations were improved with a pressure field calculated using the modified velocity, i.e. when the weak incompressibility constraint was enforced on the flow. This pressure field can be visualised as a true pressure field, unlike in the Smagorinsky model where the gradients of the trace of the stress tensor are absorbed in an effective pressure field.
Stretching the u* and velocity analogy, we can also interpret the curl of u* (∇ × u*) as a vorticity, or more specifically as a vorticity bias. The curl of u* plays a role in the wake of the flow, where it can be seen as a correction to the vorticity field. The divergence and curl of u* are features solely of the MULU, and their characterisation defines the functioning of these models. Figure 13 depicts the mean isocontour of ∇ · (u*) = 0.02 for the two MULU. This divergence function is included in the Poisson equation for the pressure calculation in order to enforce the weak incompressibility constraint. In the StSm model the contribution is strictly limited to within the shear layer, while in the StSp model the spatial influence extends far into the downstream wake. The stark difference in spatial range could be due to the lack of directional dissipation in the StSm model, which is modelled on the classic Smagorinsky model. This modelling results in a constant diagonal auto-correlation matrix, with the trace elements simplifying to a Laplacian of a (∆a) for ∇ · (u*). This formulation contains a no-cross-correlation assumption (zero non-diagonal elements in the auto-correlation matrix) as well as ignoring the directional dissipation contribution (constant diagonal terms provide equal SGS dissipation in all three principal directions). These Smagorinsky-like assumptions place a restriction on the form and magnitude of u* which are absent in the StSp model. The existence of cross-correlation terms in a for the StSp model results in a better defined and spatially well-extended structure for the divergence.
The importance of the cross-correlation terms is further amplified in the mean curl isocontour of u* (see figure 14), where once again a spatial limitation is observed for the StSm model. However, the more interesting observation is the presence of spanwise periodicity in the curl of u* for the StSp model. The curl parameter is analogous to vorticity and is in coherence with the birth of the streamwise vortices seen in figure 7; a spanwise periodicity is observed with a wavelength λ ∼ 0.8. Figure 15 superimposes this mean curl isocontour of u* for the StSp model on the mean vorticity isocontour of the DNS. While clear periodicity is not observed for the mean vorticity, alternate peaks and troughs can be seen which match with the peaks in the mean curl isocontour. The wavelength of this periodicity is comparable with the spanwise wavelength of approximately 1D of the mode B instabilities observed by Williamson (53) for Re ∼ 270. The footprint of mode B instabilities is linked to secondary instabilities leading to streamwise vortices, observed for Re ranging from 270 to ∼ 21000 (57). These results demonstrate the ability of the spatial variance model to capture the essence of the auto-correlation tensor.
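As an illustration of these diagnostics, the following sketch computes u* = (1/2) ∇ · a together with its divergence and curl; a uniform grid, the storage of a as a 3x3 tensor field and second-order differences are assumptions made for illustration, not a claim about the solver's actual implementation.

```python
import numpy as np

def velocity_bias_diagnostics(a, dx, dy, dz):
    """Compute u* = 0.5 * div(a), plus div(u*) and curl(u*).

    a : variance tensor field a_ij, array of shape (3, 3, nx, ny, nz)
    """
    spacing = (dx, dy, dz)

    def grad(f, axis):
        # central differences along the requested axis
        return np.gradient(f, spacing[axis], axis=axis)

    # u*_i = 0.5 * sum_j d(a_ij)/dx_j
    u_star = 0.5 * np.stack(
        [sum(grad(a[i, j], j) for j in range(3)) for i in range(3)]
    )
    # Divergence of u*
    div_u_star = sum(grad(u_star[i], i) for i in range(3))
    # Curl of u*: (d_y u*_z - d_z u*_y, d_z u*_x - d_x u*_z, d_x u*_y - d_y u*_x)
    curl_u_star = np.stack([
        grad(u_star[2], 1) - grad(u_star[1], 2),
        grad(u_star[0], 2) - grad(u_star[2], 0),
        grad(u_star[1], 0) - grad(u_star[0], 1),
    ])
    return u_star, div_u_star, curl_u_star
```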
The regions of the flow affected by the auto-correlation term can be characterised by plotting the contours of the SGS dissipation density ((∇u) a (∇u)^T) of the MULU, averaged in time and along the spanwise direction. These have been compared with the dissipation densities for the Smag and DSmag models ((∇u) ν_t (∇u)^T) (see figure 17). A 'reference' dissipation density has been obtained by filtering the DNS dissipation density at the cLES resolution (see figure 16e) and by plotting the difference. The StSp model density matches best with the DNS compared with all other models; a larger spatial extent and a better magnitude match for the dissipation density is observed. The high dissipation density observed just behind the recirculation zone is captured only by the StSp model, while all Smag models under-predict the density in this region. The longer recirculation zone of the Smag model can be observed in the density contours. A few important questions need to be addressed here. Firstly, the Smag model is known to be over-dissipative; however, in the density contours a lower magnitude is observed for this model. This is a case of cause and effect, where the over-dissipative nature of the Smag model smooths the velocity field, thus reducing the velocity gradients, which inversely affects the value of the dissipation density. Secondly, in the statistical comparison only a marginal difference is observed, especially between the DSmag and StSp models, while in the dissipation density contours we observe considerable differences. This is because the statistical profiles are a result of contributions from the resolved scales and the sub-grid scales. The dissipation density contours of figure 17 represent only a contribution of the sub-grid scales, i.e. the scales of turbulence characterised by the model. Thus, larger differences are observed in this case due to the focus on the scales of model activity. Finally, we observe in figure 9 that within the vortex bubbles behind the cylinder the MULU perform better than the Smag or DSmag models. For the StSp model, this improvement is associated with the higher magnitude seen within this region in the SGS dissipation density. For the StSm model, no such direct relation can be made with the SGS dissipation density. However, when we look at the resolved scale dissipation ((∇u) ν (∇u)^T) for the models (see figure 16), a higher density is observed in the vortex bubbles for this model. For the classical models, high dissipation is observed mainly in the shear layers. As the kinematic viscosity (ν) is the same for all models, the density maps are indicative of the smoothness of the velocity gradient. For the classical models we see a highly smoothed field, while for the MULU we see a higher density in the wake. This difference could induce the isocontour mismatch seen in figure 9. These results are consistent with the findings of (19), who applied the MULU in the context of a reduced order model and observed that the MULU play a significant role in the very near wake where important physical mechanisms take place.

For the MULU, the SGS contributions can be split into the velocity bias term (u ∇^T(-(1/2) ∇ · a)) and the dissipation term ((1/2) Σ_ij ∂_{x_i}(a_ij ∂_{x_j} u)). Figure 18 shows the contribution of the two via 3D isocontours (dissipation in yellow and velocity bias in red). The contribution of the velocity bias is limited for the StSm model, as expected, while in the StSp model it plays a larger role. The velocity bias in the StSp model is dominant in the near wake of the flow, especially in and around the recirculation zone. It is important to outline that this is the feature of the StSp model; the model captures the statistics accurately at only 0.37 times the cost of performing the DSmag model.

However, it is important to note that while the computational cost of the Smagorinsky models stays fixed despite changes in the model parameters, the cost of the StSp model strictly depends on the size of the local neighbourhood used. A smaller neighbourhood reduces the simulation cost but could lead to a loss of accuracy, and vice versa for a larger neighbourhood. The definition of an optimal local neighbourhood is one avenue of future research that could be promising. The StSm model, which also provides a comparable improvement on the classic Smag model, can be performed at 24% of the cost of the DSmag model. Thus, the MULU provide a low-cost (2/3rd reduction) alternative to the dynamic Smagorinsky model while improving the level of statistical accuracy.
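The SGS dissipation density maps discussed in this section amount to contracting the resolved velocity gradients with the variance tensor a (or with ν_t for the eddy-viscosity models); a possible post-processing sketch is given below, with a uniform grid and array layout assumed for illustration.

```python
import numpy as np

def sgs_dissipation_density(u, a, dx, dy, dz):
    """Scalar SGS dissipation density, i.e. the contraction (grad u) a (grad u)^T.

    u : resolved velocity, shape (3, nx, ny, nz)
    a : variance tensor a_ij, shape (3, 3, nx, ny, nz)
    For an eddy-viscosity model, pass a[i, j] = nu_t * delta_ij instead.
    """
    spacing = (dx, dy, dz)
    # g[i, j] = d u_i / d x_j (central differences)
    g = np.stack([
        np.stack([np.gradient(u[i], spacing[j], axis=j) for j in range(3)])
        for i in range(3)
    ])
    density = np.zeros(u.shape[1:])
    # Sum over i, j, k of (du_i/dx_j) a_jk (du_i/dx_k)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                density += g[i, j] * a[j, k] * g[i, k]
    return density
```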
Conclusion
In this study, cylinder wake flow in the transitional regime was simulated in a coarse mesh construct using the formulation under location uncertainty. The simulations were performed on a mesh 54 times coarser than that of the DNS study. The simulation resolution is of comparable size with the PIV resolution; this presents a useful tool for performing DA, wherein disparity between the two resolutions can lead to difficulties.
This study focused on the MULU, whose formulation introduces a velocity bias term in addition to the SGS dissipation term. These models were compared with the classic and dynamic Smagorinsky models. The MULU were shown to perform well with a coarsened mesh; the statistical accuracy of the spatial variance based model was, in general, better than that of the other compared models. The spatial variance based model and the DSmag model both captured accurately the volatile recirculation length. The 2D streamwise velocity isocontours of the MULU matched better with the DNS reference than those of the Smagorinsky models. Additionally, the physical characterisation of the MULU showed that the velocity bias improved the statistics, considerably so in the case of the StSp model. The analogy of the velocity bias with velocity was explored further through divergence and curl functions. The spanwise periodicity observed at low Re in the literature was observed at this higher Re with the StSp model through the curl of u* (analogous with vorticity), and through the mean vorticity, albeit noisily. The SGS contribution was compared with the Smagorinsky models, and the split between the velocity bias and the dissipation was also delineated through isocontours.
The authors show that the performance of the MULU under a coarse mesh construct could provide the computational cost reduction needed for performing LES of higher Re flows. The higher cost of the StSp model compared with Smagorinsky is compensated by the improvement in accuracy obtained at coarse resolution. In addition, the StSp model performs marginally better than the currently established DSmag model at just 37% of the cost of the DSmag model. This cost reduction could pave the way for different avenues of research such as sensitivity analyses, high Reynolds number flows, etc. Of particular interest is the possible expansion of Data Assimilation studies from the currently existing 2D assimilations (Gronskis et al.) or low Re 3D assimilations (Robinson) to a more informative 3D assimilation at realistic Re, making use of advanced experimental techniques such as tomo-PIV. Also, and more importantly, the simplistic definition of the MULU facilitates an
Figure 1: Mean streamwise velocity (a) and fluctuating streamwise velocity (b) in the streamwise direction along the centreline (y = 0) behind the cylinder for the reference data-sets. Legend: HWA - hot wire anemometry (7), K&M - B-spline simulations (case II) of Kravchenko and Moin (4), L&S - experiment of Lourenco and Shih (9), N - experiments of Norberg at Re = 3000 and 5000 (50), O&W - experiment of Ong and Wallace.
Figure 2: Mean streamwise velocity at 1.06D (top), 1.54D (middle), and 2.02D (bottom) in the wake of the circular cylinder.
Figure 3: Mean lateral velocity at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 4: Streamwise rms velocity (u'u') fluctuations at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 5: Lateral rms velocity (v'v') fluctuations at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 6: Rms velocity fluctuations cross-component (u'v') at 1.06D, 1.54D, and 2.02D in the wake of the circular cylinder.
Figure 7: 3D instantaneous vorticity iso-surface at Ω = 7.
Figure 8: Mean (a) and fluctuating (b) streamwise velocity profile in the streamwise direction along the centreline behind the cylinder.
Figure 9: 2D isocontours of time averaged fluctuating streamwise velocity (u'u').
Figure 10: Power spectra of the streamwise (a) and lateral (b) velocity component at x/D = 3 behind the cylinder.
Figure 11: Effect of the velocity bias on the centre-line mean (a) and fluctuating (b) streamwise velocity behind the cylinder.
Figure 12: Effect of the velocity bias on the 2D isocontour of time averaged fluctuating streamwise velocity (u'u').
Figure 15: 3D isocontour superposition of the mean curl of u* (blue) for the StSp model at ∇ × (u*) = 0.05 with the mean vorticity (yellow) for the DNS at Ω = 3. (a) Full scale view of the isocontour superposition with the outlined zoom area; (b) zoomed in view of the DNS mean vorticity isocontour; (c) zoomed in view of the isocontour superposition.
Figure 16: Sub-grid scale dissipation density in the wake of the cylinder; o stands for the original DNS dissipation and f stands for the filtered (to cLES resolution) DNS dissipation.
Figure 17: Resolved scale dissipation density in the wake of the cylinder for each model.
Figure 18: 3D SGS contribution iso-surface along the primary flow direction (x), with the dissipation iso-surface in yellow (at 0.002) and the velocity bias in red (at 0.001).
Table 1: Flow parameters.

             Re     nx × ny × nz     lx/D × ly/D × lz/D    ∆x/D    ∆y/D           ∆z/D    U∆t/D
cLES         3900   241×241×48       20×20×π               0.083   0.024-0.289    0.065   0.003
DNS          3900   1537×1025×96     20×20×π               0.013   0.0056-0.068   0.033   0.00075
PIV - Parn   3900   160×128×1        3.6×2.9×0.083         0.023   0.023          0.083   0.01
LES - Parn   3900   961×961×48       20×20×π               0.021   0.021          0.065   0.003
Table 2: Recirculation lengths for cLES.

Model    PIV - Parn   DNS    Smag   DSmag   StSm   StSp
L_r/D    1.51         1.50   1.75   1.50    1.42   1.50
Table 3: Recirculation lengths with and without velocity bias.

Model    PIV - Parn   DNS    StSm - Nad   StSm   StSp - Nad   StSp
L_r/D    1.51         1.50   1.42         1.42   1.58         1.50
01764622 | en | [
"phys.phys.phys-flu-dyn"
] | 2024/03/05 22:32:13 | 2018 | https://inria.hal.science/hal-01764622/file/Incompact3d_meet.pdf | LES -SGS Models
Smagorinsky Variants :
Classic:
  \nu_t = (C_s \Delta)^2 |S|    (1)

Dynamic:
  C_s^2 = -\frac{1}{2} \frac{L_{ij} M_{ij}}{M_{kl} M_{kl}}    (2)

where
  L_{ij} = T_{ij} - \hat{\tau}_{ij}    (3)
  M_{ij} = \hat{\Delta}^2 |\hat{S}| \hat{S}_{ij} - \widehat{\Delta^2 |S| S_{ij}}    (4)

WALE:
  \nu_t = (C_w \Delta)^2 \frac{(\varsigma^d_{ij} \varsigma^d_{ij})^{3/2}}{(S_{ij} S_{ij})^{5/2} + (\varsigma^d_{ij} \varsigma^d_{ij})^{5/4}}    (5)
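As a concrete illustration of the classic Smagorinsky closure listed above, a possible implementation on a uniform grid is sketched below; the grid handling, the value of the constant and the definition of the filter width are assumptions for illustration, not a definitive implementation of the solver.

```python
import numpy as np

def smagorinsky_viscosity(u, dx, dy, dz, c_s=0.1):
    """Classic Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 |S|.

    u : resolved velocity field, shape (3, nx, ny, nz)
    """
    spacing = (dx, dy, dz)
    # Velocity gradient tensor g[i, j] = d u_i / d x_j
    g = np.stack([
        np.stack([np.gradient(u[i], spacing[j], axis=j) for j in range(3)])
        for i in range(3)
    ])
    # Resolved strain rate S_ij = 0.5 * (g_ij + g_ji)
    s = 0.5 * (g + np.swapaxes(g, 0, 1))
    # Strain-rate norm |S| = sqrt(2 S_ij S_ij)
    s_norm = np.sqrt(2.0 * np.einsum("ij...,ij...->...", s, s))
    # Filter width taken here as the geometric mean of the grid spacings
    delta = (dx * dy * dz) ** (1.0 / 3.0)
    return (c_s * delta) ** 2 * s_norm
```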
MULU
NS formulation as derived in Mémin ("Fluid flow dynamics under location uncertainty"):

Mass conservation:
  d_t \rho_t + \nabla \cdot (\rho\, w^*)\, dt + \nabla \rho \cdot \sigma dB_t = \frac{1}{2} \nabla \cdot (a \nabla \rho)\, dt,    (7)
  w^* = w - \frac{1}{2} \nabla \cdot a    (8)

For an incompressible fluid:
  \nabla \cdot (\sigma dB_t) = 0, \qquad \nabla \cdot w = 0,    (9)

Momentum conservation:
  \rho \Big( \partial_t w + w \nabla^T \big( w - \tfrac{1}{2} \nabla \cdot a \big) - \tfrac{1}{2} \sum_{ij} \partial_{x_i} (a_{ij}\, \partial_{x_j} w) \Big) = \rho g - \nabla p + \mu \Delta w.    (10)
Modelling of a:

Stochastic Smagorinsky model (StSm):
  a(x, t) = C \|S\| I_3,    (11)

Local variance based models (StSp / StTe):
  a(x, n\delta t) = \frac{1}{|\Gamma| - 1} \sum_{x_i \in \eta(x)} \big( w(x_i, n\delta t) - w(x, n\delta t) \big) \big( w(x_i, n\delta t) - w(x, n\delta t) \big)^T C_{st}    (12)
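A sketch of how such a local empirical variance tensor could be assembled on a structured grid is given below; the cubic neighbourhood, the periodic boundary treatment implied by np.roll and the scaling constant are illustrative assumptions rather than the implementation used in the solver.

```python
import numpy as np

def local_variance_tensor(w, half_width=1, c_scale=1.0):
    """Empirical local variance tensor a_ij of the resolved velocity.

    w          : velocity field, shape (3, nx, ny, nz)
    half_width : half-size of the cubic neighbourhood eta(x)
    c_scale    : model constant (e.g. C_sp) multiplying the variance
    """
    a = np.zeros((3, 3) + w.shape[1:])
    offsets = range(-half_width, half_width + 1)
    neighbours = [(i, j, k) for i in offsets for j in offsets for k in offsets
                  if (i, j, k) != (0, 0, 0)]
    gamma = len(neighbours) + 1    # |Gamma|: neighbourhood size incl. the centre
    for (i, j, k) in neighbours:
        # Difference between the shifted field and the local value; np.roll
        # wraps periodically, which is only an approximation at boundaries.
        dw = np.roll(w, shift=(i, j, k), axis=(1, 2, 3)) - w
        for p in range(3):
            for q in range(3):
                a[p, q] += dw[p] * dw[q]
    return c_scale * a / (gamma - 1)
```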
VDA -General Formulation
Objective: estimate the unknown true state of interest x^t(t, x).

Formulation:
  \partial_t x(t, x) + M(x(t, x)) = q(t, x),    (13)
  x(t_0, x) = x_0^b + \eta(x),    (14)
  Y(t, x) = H(x(t, x)) + \epsilon(t, x),    (15)

where q(t, x) is the model error (covariance matrix Q), \eta(x) the background error on the initial condition (covariance matrix B), and \epsilon(t, x) the observation error (covariance matrix R).
VDA -General Formulation
Cost Function :
J(x_0) = \frac{1}{2} (x_0 - x_0^b)^T B^{-1} (x_0 - x_0^b) + \frac{1}{2} \int_{t_0}^{t_f} (H(x_t) - Y(t))^T R^{-1} (H(x_t) - Y(t))\, dt.
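As a toy illustration of evaluating this cost function for a small discrete state, one could proceed as in the sketch below; the quadrature rule, the operators and the names are assumptions for illustration, not the assimilation code itself.

```python
import numpy as np

def fourdvar_cost(x0, x0_b, B_inv, R_inv, model_step, obs_operator,
                  observations, n_steps, dt):
    """Discrete 4D-Var cost: background term plus summed observation misfits.

    x0, x0_b     : current and background initial states (1D arrays)
    B_inv, R_inv : inverse background / observation error covariance matrices
    model_step   : function advancing the state by one time step
    obs_operator : function mapping a state to observation space
    observations : dict {time index: observed vector Y}
    """
    dx = x0 - x0_b
    cost = 0.5 * dx @ B_inv @ dx
    x = x0.copy()
    for n in range(n_steps + 1):
        if n in observations:
            innov = obs_operator(x) - observations[n]
            # simple rectangle-rule weighting of the time integral
            cost += 0.5 * dt * innov @ R_inv @ innov
        if n < n_steps:
            x = model_step(x, dt)
    return cost
```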
4DVar -Adjoint Method
Evolution of the state of interest (x 0 ) as a function of time
\frac{\partial J}{\partial \eta} = -\lambda(t_0) + B^{-1} (\partial x(t_0) - \partial x_0).    (17)
Le Dimet, Francois-Xavier, and Olivier Talagrand. "Variational algorithms for analysis and assimilation of meteorological observations : theoretical aspects." Tellus A : Dynamic Meteorology and Oceanography 38.2 (1986) : 97-110.
Additional Control
Minimisation of J with respect to an additional incremental control parameter δu:

  \delta\gamma^{(i)} = \{ \delta x_0^{(i)}, \delta u^{(i)} \},
  J(\delta\gamma^{(i)}) = \frac{1}{2} \| \delta x_0^{(i)} + x_0^{(i)} - x_0^{(b)} \|^2_{B^{-1}} + \frac{1}{2} \int_{t_0}^{t_f} \| \delta u^{(i)} + u^{(i)} - u^{(b)} \|^2_{B_c} + \ldots    (18)

constrained by:
  \partial_t \delta x^{(i)} + \partial_x M(x^{(i)}) \cdot \delta x^{(i)} + \partial_u M(u^{(i)}) \cdot \delta u^{(i)} = 0    (19)
  a(x, t) = C \|S\| I_3,    (20)

Local variance based models (StSp / StTe):
  a(x, n\delta t) = \frac{1}{|\Gamma| - 1} \sum_{x_i \in \eta(x)} \big( w(x_i, n\delta t) - w(x, n\delta t) \big) \big( w(x_i, n\delta t) - w(x, n\delta t) \big)^T C_{sp}
Figure: velocity fluctuation profiles for turbulent channel flow at Re_τ = 395.
Plan: 1. Large Eddy Simulation (SGS Models, LES - Results); 2. Data Assimilation (Introduction to Data Assimilation, Code Formulation, VDA with LES - Results).

Data Assimilation - Types
Sequential approach: Cuzol and Mémin (2009), Colburn et al. (2011), Kato and Obayashi (2013), Combes et al. (2015).
Variational approach (VDA): Papadakis and Mémin (2009), Suzuki et al. (2009), Heitz et al. (2010), Lemke and Sesterhenne (2013), Gronskis et al. (2013), Dovetta et al. (2014).
Hybrid approach: Yang (2014).

4DVar - Incremental Optimisation
Evolution of the cost function (J(x_0)) as a function of time. Courtier, P., J. N. Thépaut, and Anthony Hollingsworth. "A strategy for operational implementation of 4D-Var, using an incremental approach." Quarterly Journal of the Royal Meteorological Society 120.519 (1994): 1367-1387.

4DVar - Flow Chart
4DVar incremental Data Assimilation using the Adjoint methodology.
Synthetic Assimilation at Re 3900
Optimisation parameter: U(x, y, z, t_0). Control parameters: U_in(1, y, z, t), C_st(x, y, z).
01764628 | en | [
"spi.meca.biom"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764628/file/ris00000775.pdf | A Chenel
C J F Kahn
K Bruyère
T Bège
K Chaumoitre
C Masson
Morphotypes and typical locations of the liver and relationship with anthropometry
Introduction
The liver is one of the most frequently injured organs in road or domestic accidents. To protect or repair it, companies and clinicians rely more and more on numerical finite element models. The liver is highly complex owing to its structure, its dual blood supply and its environment. It is located under the diaphragm, is partially covered by the thoracic cage and presents a high variability of shape. In order to improve finite element modelling of the liver, an anatomical customization must be done. Three aspects of the liver anatomy have been studied. First, some authors focused on the external shape of the liver [1][2][3]. Caix and Cubertafond [1] found two morphotypes according to the subject's morphology. Nagato et al. [2] divided the liver into six morphotypes depending on the costal and diaphragmatic impressions, or the development of one lobe in relation to the other. Studer et al. [3] identified two morphotypes based on the ratio of two geometrical characteristics of the liver. Secondly, some authors focused on the location of the liver in the thoracic cage [4][5]. The liver's location in different postures has been determined in vivo [4] and on cadaveric subjects [5]. Finally, the variability of the internal shape of the liver has been reported: some authors have described different segments based on the hepatic vessels, and particularly the hepatic veins, following Couinaud and Bismuth. The purpose of our study is a global analysis of the liver anatomy, quantifying at the same time its external shape, its internal vascular structure and its anatomical location, applied to livers reconstructed from 78 CT-scans, in order to identify liver morphotypes and typical locations in the thoracic cage. Moreover, we analyzed the ability of subject characteristics to predict these morphotypes and locations.
Materials and methods
Population - This study is based on 78 CT-scans from the Department of Medical Imaging and Interventional Radiology at Hôpital Nord in Marseille. These CT-scans were performed on patients between 17 and 95 years old, with no liver disease nor morphological abnormalities of the abdominal organs or the peritoneum.
Measurement of geometric and anthropometric parameters -
The 3D reconstructions of the liver, the associated veins and the thoracic cage were performed manually. A database was created with 53 geometrical characteristics per liver, qualifying its external geometry (following Serre et al.), its internal geometry, the diameters and angles of the veins and their first two bifurcations, and its location in the thoracic cage. Furthermore, anthropometric measurements were taken, such as the xiphoïd angle and the abdominal and thoracic perimeters. Lastly, data such as the subject's age and gender were known.
Statistical analysis - To homogenize the data, a transformation by logarithm, by the logarithm of the cubic root, or by the log shape ratio of Mosimann was used. To reduce the number of variables, a principal component analysis (PCA) was performed on the parameters characterizing the external geometry, the internal geometry, the veins geometry and the location of the liver. The first two dimensions were kept and two new variables were created by linear combination. An ascending hierarchical classification was then produced to determine the number of categories. Then, the partitioning around medoids method was chosen to classify the different individuals into categories. Lastly, ANOVAs followed by post-hoc Tukey (HSD) tests were performed to verify the existence of a relationship between the subject's anthropometry and the liver's morphotypes.
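A compact illustration of this kind of pipeline (PCA, hierarchical classification, partitioning around medoids, then ANOVA with Tukey HSD) is sketched below with common Python libraries (KMedoids comes from scikit-learn-extra); the data layout, variable names and parameter choices are assumptions for illustration, not the authors' actual scripts.

```python
import pandas as pd
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn_extra.cluster import KMedoids
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def classify_morphotypes(features: pd.DataFrame, anthropometry: pd.Series,
                         n_groups=4):
    """features: log-transformed geometrical characteristics, one row per liver."""
    # 1. PCA, keeping the first two dimensions
    scores = PCA(n_components=2).fit_transform(features.values)
    # 2. Ascending hierarchical classification (Ward) to inspect the group number
    tree = linkage(scores, method="ward")
    _ = fcluster(tree, t=n_groups, criterion="maxclust")
    # 3. Partitioning around medoids on the PCA scores
    groups = KMedoids(n_clusters=n_groups, random_state=0).fit_predict(scores)
    # 4. ANOVA + post-hoc Tukey HSD on an anthropometric measurement
    samples = [anthropometry[groups == g] for g in range(n_groups)]
    f_stat, p_value = stats.f_oneway(*samples)
    tukey = pairwise_tukeyhsd(anthropometry.values, groups)
    return groups, (f_stat, p_value), tukey
```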
Results and discussion
Four morphotypes were found and described by Fig. 1.
The first morphotype corresponds to a liver with a very small volume which presents as a small volume of the right lobe particularly segments 4 to 7. The associated veins globally have small diameters. A small angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80° and a thoracic perimeter under 75 cm.
The second morphotype corresponds to a liver with a small volume which manifests as a small volume of the right lobe, and particularly segments 5 to 7. The associated veins globally have small diameters. A large angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80° and a thoracic perimeter under 75 cm.
The third morphotype corresponds to a liver with a very large volume which presents as a very large volume of the right lobe, and particularly segments 4 to 6. The associated veins have large diameters. A large angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle over 80° and a thoracic perimeter over 75 cm.
The fourth morphotype corresponds to a liver with a large volume which manifests as a large volume of the right lobe, and particularly segments 4, 5 and 7. The associated veins have large diameters. A small angle between the two lobes can be noted. This kind of morphotype is noticed for subjects with a xiphoïd angle under 80°, a thoracic perimeter around 75 cm.
No statistical difference can be noted for the position of the liver in the thoracic cage. Only the position of one lobe to the other seems to vary.
Although the volume of the segments varies from one morphotype to another, the proportion of these segments, especially the fifth, seems stable, and the volumes of the segments are correlated with the hepatic volume (R² = 0.42 for segment 5). Moreover, the diameters of the veins seem correlated with the volume (R² = 0.33 for the portal vein, but only 0.17 for the vena cava).
01764647 | en | [
"shs.hist",
"shs.psy",
"shs.relig",
"shs.scipo"
] | 2024/03/05 22:32:13 | 1993 | https://hal.science/cel-01764647/file/PowersofUnreal.pdf | THE POWERS OF THE UNREAL: MYTHS AND IDEOLOGY IN THE USA
P. CARMIGNANI Université de Perpignan-Via Domitia
AN INTRODUCTION
As a starting-point, I'd like to quote the opinion of a French historian, Ph. Ariès, who stated in his Essais sur l'histoire de la mort en Occident (Paris, Le Seuil, 1975) that "pour la connaissance de la civilisation d'une époque, l'illusion même dans laquelle ont vécu les contemporains a valeur d'une vérité", which means that to get an idea of a society and its culture one needs a history and a para-history as well, para-history recording not what happened but what people, at different times, said or believed had happened. A famous novelist, W. Faulkner expressed the same conviction in a more literary way when he stated in Absalom Absalom that "there is a might have been which is more true than truth", an interesting acknowledgement of the power of myths and legends. This being said, I'd like now to say a few words about my basic orientations; the aim of this course is twofold :
-firstly, to introduce students to the technique of research in the field of American culture and society and give them a good grounding in the methodology of the classic academic exercise known as "analysis of historical texts and documents" ; -secondly, to analyze the emergence and workings of "l'imaginaire social" in the States through two of its most characteristic manifestations : Myths and Ideology. We'll see that every society generates collective representations (such as symbols, images etc.) and identification patterns gaining acceptance and permanence through such mediators or vehicles as social and political institutions (for instance, the educational system, the armed forces, religious denominations) and of course, the mass media (the Press, the radio, the cinema and, last but not least, television). They all combine their efforts to inculcate and perpetuate some sort of mass culture and ideology whose function is to hold the nation together and provide it with a convenient set of ready-made pretexts or rationalizations it often uses to justify various social or political choices.
DEFINITIONS OF KEY NOTIONS
A) Imagination vs. "the imaginary"
There is no exact English equivalent of the French word "l'imaginaire" or "l'imaginaire social"; however, the word "imaginary" does exist in English but chiefly as an epithet in the sense of "existing only in the imagination, not real" (Random House Dict.) and not as a substantive. It is sometimes found as a noun "the imaginary" as opposed to "the symbolic" in some works making reference to J. Lacan's well-known distinction between the three registers of "le réel, l'imaginaire et le symbolique", but its meaning has little to do with what we're interested in. For convenience sake, I'll coin the phrase "the imaginary" or "the social imaginary" on the model of "the collective unconscious" for instance to refer to our object of study. First of all, we must distinguish between "the imagination" and "the imaginary" though both are etymologically related to the word "image" and refer, according to G. Durand -the author of Les Structures anthropologiques de l'imaginaire -to "l'ensemble des images et des relations d'images qui constitue le capital pensé de l'homo sapiens", they do not share the same characteristics. IMAGINATION means "the power to form mental images of objects not perceived or not wholly perceived by the senses and also the power to form new ideas by a synthesis of separate elements of experience" (English Larousse). The IMAGINARY also implies the human capacity for seeing resemblances between objects but it also stresses the creative function of mind, its ability to organize images according to the subject's personality and psyche: as a local specialist, Pr. J. Thomas, stated: L'imaginaire est essentiellement un dynamisme, la façon dont nous organisons notre vision du monde, une tension entre notre conscience et le monde créant un lien entre le en-nous et le hors-nous [...] La fonction imaginaire apparaît donc comme voisine de la définition même du vivant, c'est-àdire organisation d'un système capable d'autogénération dans son adaptation à l'environnement, et dans le contrôle d'une tension rythmique (intégrant le temps) entre des polarisations opposées (vie/ mort, ordre/désordre, stable/dynamique, symétrie/dissymétrie, etc.) mais en même temps dans sa capacité imprévisible de création et de mutation [...] L'imaginaire assure ainsi une fonction générale d'équilibration anthropologique.
Thus, to sum up, if the imagination has a lot to do with the perception of analogies or resemblances between objects or notions, "the imaginary" is more concerned with binary oppositions and their possible resolution in a "tertium quid" i.e. something related in some way to two things but distinct from both.
B) Myth
Myth is a protean entity and none of the numerous definitions of myth is ever comprehensive enough to explain it away (cf. "Myth is a fragment of the soul-life, the dream-thinking of people, as the dream is the myth of the individual", Reuthven, 70). Etymologically, myth comes from the Greek "mythos". A mythos to the Greeks was primarily just a thing spoken, uttered by the mouth, a tale or a narrative, which stresses the verbality of myth and it essential relationship with the language within which it exists and signifies (parenthetically, it seems that many myths originate in some sort of word play cf. Oedipus = swollen foot. So bear in mind that the medium of myth is language: whatever myth conveys it does in and through language).
A myth also implies an allegoric and symbolic dimension (i.e. a latent meaning different from the manifest content) and it is a primordial "symbolic form" i.e. one of those things -like language itself -which we interpose between ourselves and the outside world in order to apprehend it.
It usually serves several purposes :
-to explain how something came into existence: it is "a prescientific and imaginative attempt to explain some phenomenon, real or supposed, which excites the curiosity of the mythmaker or observer" (K. R., 17)
-to provide a logical model capable of overcoming a contradiction (L. Strauss). In simpler terms, myths attempt to mediate between contradictions in human experience; they mediate a "coincidentia oppositorum" (cf. examples).
So to sum up, in the words of R. Barthes: "le mythe est un message qui procèderait de la prise de conscience de certaines oppositions et tendrait à leur médiation", in plain English, myth is a message originating in the awareness of certain oppositions, contradictions or polarities, and aiming at the mediation; myth is "a reconciler of opposites" or to quote G. Durand once more: "un discours dynamique résolvant en son dire l'indicible d'un dilemme" (Figures mythiques,306). Lastly, an essential feature of myth: it can be weakened but hardly annihilated by disbelief or historical evidence; myth is immune from any form of denial, whether experimental or historical (e.g. we still think of a rising and setting sun though we know it is a fallacy).
C) Ideology
The relationship between myth and ideology is obvious inasmuch as "toute idéologie est une mythologie conceptuelle dans laquelle les hommes se représentent sous une forme imaginaire leurs conditions d'existence réelles".
As far as language in general, and myth in particular, is a way of articulating experience, they both participate in ideology i.e. the sum of the ways in which people both live and represent to themselves their relationship to the conditions of their existence. Ideology is inscribed in signifying practices -in discourses, myths, presentations and representations of the way things are. Man is not only a social but also an "ideological animal". According to French philosopher L. Althusser, ideology is: un système (possédant sa logique et sa rigueur propres) de représentations (images, mythes, idées ou concepts selon les cas) doué d'une existence et d'un rôle historiques au sein d'une société donnée [...] Dans l'idéologie, qui est profondément inconsciente, même lorsqu'elle se présente sous une forme réfléchie, les hommes expriment, en effet, non pas leur rapport à leurs conditions d'existence, mais la façon dont ils vivent leur rapport à leurs conditions d'existence: ce qui suppose à la fois rapport réel et rapport vécu, imaginaire (Pour Marx,(238)(239)(240).
So between the individual and the real conditions of his existence are interposed certain interpretative structures, but ideology is not just a system of interpretation, it also assumes the function of a cementing force for society. According to Althusser, ideological practices are supported and reproduced in the institutions of our society which he calls "Ideological State Apparatuses" (ISA): their function is to guarantee consent to the existing mode of production. The central ISA in all Western societies is the educational system which prepares children to act consistently with the values of society by inculcating in them the dominant versions of appropriate behaviour as well as history, social studies and of course literature. Among the allies of the educational ISA are the family, the law, the media and the arts all helping to represent and reproduce the myths and beliefs necessary to enable people to live and work within the existing social formation. As witness its Latin motto "E Pluribus Unum" meaning "Out of many, one" or "One from many", America, like any nation in the making, was from the very beginning, confronted with a question of the utmost importance viz. how to foster national cohesion and achieve a unity of spirit and ideal. Before the Constitution there were thirteen separate, quasi independent States; in the words of D. Boorstin, "Independence had created not one nation but thirteen", which is paradoxical yet true since each former colony adopted a Constitution which in practice turned it into a sovereign state. However, the new States shared a common experience and set of values, and in the wake of Independence and throughout the XIX th century, the new country gradually developed a collective representation and a unifying force counterbalancing an obvious strain of individualism in the American character as well as holding in check certain centrifugal tendencies in the American experience; to quote just a few instances: the mobility of the population, its composite character, the slavery issue, sectional and regional differences, oppositions between the haves and have-nots are part of the disunifying forces that have threatened the concept as well as the reality of a single unmistakably American nationality and culture (the question of making a super identity out of all the identities imported by its constituent immigrants still besets America). For an examination and discussion of the genesis of the nation, the formation of the State, and the establishment of its model of recognized power, we'll have a look at the article by E. Marienstras "Nation, État, Idéologie".
However, even if the different people making up the USA have not coalesced into one dull homogeneous nation of look-alikes, talk-alikes and think-alikes, even if one can rightly maintain that there exist not one but fifty Americas (cf. the concept of "the American puzzle") there's no doubt that the USA succeeded in developing a national conciousness which is the spiritual counterpart of the political entity that came into being with the Declaration of Independence. The elaboration of a national identity was inseparable from the creation of a national ideology in the sense we have defined i. e. a coherent system of beliefs, assumptions, principles, images, symbols and myths that has become an organic whole and part and parcel of national consciousness. Let me remind you, at this stage, that my use of the concept, derived from L. Althusser, assumes that ideology is both a real and an imaginary relation to the world, that its rôle is to suppress all contradictions in the interest of the existing social formation by providing (or appearing to provide) answers to questions which in reality it evades. I'd like to point out as well, for the sake of honesty and argument, that some historians and social scientists might question the truth of my assumption: some consider that in view of the vastness and diversity of the New World it is absurd to speak of an American ideology and would subs-titute for it the concept of ideologies, in the plural; others claim that we have entered a post-mythical age or maintain, like D. Bell, the author of a famous book The End of Ideology (1960) that ideology no longer plays any rôle in Western countries, an opinion to which the fall of Communism has given new credence (but ironically enough two years later, in 1962, Robert E. Lane published a book entitled Political Ideology: Why The American Common Man Believes What He Does ?, which clearly shows that ideology is a moot point). Now, whatever such specialists may claim, there's no denying that the Americans take a number of assumptions for granted and, either individually or collectively, either consciously or unconsciously, often resort, in vindication of their polity (i.e. an organized society together with its government and administration), to a set of arguments or "signifiers" in Barthesian parlance, at the core of which lie the key notions of the American way of life and Americanism, two concepts about which there seems to be a consensus of opinion.
The American way of life is too familiar a notion to look into it in detail; everybody knows it suggests a certain degree of affluence and material well-being (illustrated by the possession of one or several cars, a big house with an impressive array of machines and gadgets etc.), and also implies a certain type of social relations based on a sense of community which does not preclude an obvious strain of rugged individualism and lastly, to strengthen the whole thing, an indestructible faith in freedom and a superior moral worth. As far as Americanism is concerned, it suggests devotion to or preference for the USA and its institutions and is the only creed to which Americans are genuinely committed. Although Americanism has been in common use since the late XVIII th century no one has ever been completely sure of its meaning, and it is perhaps best defined in contrast to its opposite Un-Americanism, i. e. all that is foreign to or opposed to the character, standards or ideals of the USA. Be that as it may, the concept of Americanism apparently rests on a structure of ideas about democracy, liberty and equality; through Americanism public opinion expresses its confidence in a number of hallowed institutions and principles: the Constitution, the pursuit of happiness, the preservation of individual liberty and human rights, a sense of mission, the free enterprise system, a fluid social system, a practical belief in individual effort, equality of opportunity, etc., in short a set of tenets that prompts the Americans' stock reply to those who criticize their country: "If you don't like this country, why don't you go back where you came from ?" a jingoistic reaction which is sometimes even more tersely expressed by "America: love it or leave it". Thus Americanism is the backbone of the nation and it has changed very little even if America has changed a lot. To sum up, the vindication of Americanism and the American way of life aims at reaffirming, both at home and abroad, the reality and permanence of an American identity and distinctiveness. However if such identity and specifity are unquestionable, they nonetheless pertain to the realm of the imaginary: why? There are at least two reasons for this : A) First of all, America is the outgrowth -not to say the child -of a dream i.e. the American Dream which has always been invoked by those in charge of the destiny of the American people whether a presidential candidate, a preacher or a columnist : "Ours is the only nation that prides itself upon a dream and gives its name to one: the American Dream", wrote critic L. Trilling. The Dream is the main framework of reference, it comes first and History comes next. One can maintain that from the very beginning of the settlement the Pilgrim Fathers and the pioneers settled or colonized a dream as well as a country. America originated in a twofold project bearing the marks of both idealism and materialism and such duality, as we shall see, was sooner or later bound to call for some sort of ideological patching up. At this stage, a brief survey of how things happened is in order: the first permanent settlement on American soil started in May 1607 in Virginia. The settlers, mainly adventurers, and ambitious young men employed by the Virginia Company of London were attracted by the lure of profit: they hoped to locate gold mines and a water route through the continent to the fabulous markets of Asia. 
A few decades later the colonists were reinforced by members of the loyalist country gentry who supported the King in the English Civil War (1642-52) -the Cavaliers, who deeply influenced the shaping of Antebellum South and gave Southern upper classes their distinctively aristocratic flavour.
In 1620, some five hundred miles to the North, another settlement -Plymouth Colony -was set up under the leadership of the famous Pilgrim Fathers, a group of Puritans who were dissatisfied with religious and political conditions in England. Unlike the planters of Virginia, the settlers of New England were motivated less by the search for profits than by ideological considerations. They sailed to America not only to escape the evils of England, but also to build an ideal community, what their leader J. Winthrop called "A Model of Christian Charity," to demonstrate to the world the efficacy and superiority of true Christian principles. So, the beginnings of America were marked by a divided heritage and culture: the Puritans in the North and the Cavaliers in the South, Democracy with its leveling effect, and Aristocracy with slavery as its "mudsill". And these two ways of life steadily diverged from colonial times until after the Civil War. Now I'd like to embark upon a short digression to show you an interesting and revealing instance of ideological manipulation: on Thanksgiving Day, i.e. the fourth Thursday in November, a national holiday, the Americans commemorate the founding of Plymouth Colony by the Pilgrim Fathers in 1620. This event has come to symbolize the birth of the American nation, but it unduly highlights the part taken by New England in its emergence. The importance that history and tradition attach to the Puritan community should not obliterate the fact that the colonization of the Continent actually started in the South 13 years before. Jamestown, as you know now, was founded in 1607 and one year before the "Mayflower" (the ship in which the Pilgrim Fathers sailed) reached Massachusetts, a Dutch sailing ship, named the "Jesus" (truth is indeed stranger than fiction) had already unloaded her cargo of 20 Negroes on the coast of Virginia. Small wonder then that in the collective consciousness of the American people, the Pilgrim Fathers, with their halo of innocence and idealism, overshadowed the Southerners guilty of the double sin of slavery and Secession.
B) The second reason is that the American socio-political experience, and consequently ideology, roots itself, for better or for worse, in "Utopia" (from Greek "ou"/not + "topos"/place; after Utopia by Sir Thomas More, 1516, describing an island in which ideal conditions existed; since that time the name has come to refer to any imaginary political or social system in which relationships between the individual and the State are perfectly adjusted). The early Puritan settlers in New England compared themselves with God's Chosen People of the Old Testament and America was seen as a second Promised Land where a New Jerusalem was to be founded ("We shall be as a city upon a hill...," proclaimed their leader, J. Winthrop). What the early settlers' experience brings to light is the role of the fictitious in the making of America: the Pilgrim Fathers modelled their adventure on what I am tempted to call a Biblical or scriptural script. The settlement of the American continent was seen as a re-enactment of various episodes of the Old Testament and was interpreted in biblical terms: for instance, the Pilgrim Fathers identified themselves with the Hebrews of Exodus who under the leadership of Moses fled Egypt for the Promised Land. The English Kings whose policies were detrimental to the Puritan community were compared to Pharaoh and the long journey across the Atlantic Ocean was interpreted as an obvious parallel with the wanderings of the Hebrews across the Sinai Desert. Even the Indian tribes, who made it possible for the early colonists to survive the hardships of settlement, were readily identified with the Canaanites, the enemies of the Hebrews, who occupied ancient Palestine. Another corollary of the Promised Land scenario was, as we have just seen, that the Pilgrim Fathers had the deep-rooted conviction that they were endowed with a double mission: spreading the Word of God all over the new continent and setting up a New Jerusalem and a more perfect form of government under the guidance of the Church placed at the head of the community (a theocracy). Parenthetically, the identification with the Hebrews was so strong that at the time of the Declaration of Independence some delegates suggested that Hebrew should become the official language of the New Republic! Thus the Pilgrim Fathers were under the impression of leaving the secular arena to enter the mythical one: they looked forward to an end to history, i.e. the record of what man has done, and this record is so gruesome that Byron called history "the devil's scripture". The Puritans planned to substitute God's scripture for the devil's: myth redeems history. The saga of the Pilgrim Fathers is evidence of the supremacy of the mythical or imaginary over the actual; it is also an illustration of the everlasting power of mythical structures to give shape to human experience: the flight from corrupt, sin-ridden Europe was assimilated to the deliverance of Israel from Egypt. Now it is worthy of note that if utopia means lofty ideals, aspiration, enterprise and a desire to improve the order of things, it also tends to degenerate and to content itself with paltry substitutes, makeshift solutions and vicarious experiences: as M. Atwood puts it, the city upon the hill has never materialized and: "Some Americans have even confused the actuality with the promise: in that case Heaven is a Hilton Hotel with a coke machine in it".
Such falling-off is illustrated by the evolution of the myth of the Promised Land which, as an ideological construction, served a double purpose: first of all, there is no doubt that this myth and its derivative (the idea of the Puritans as a Chosen People) reflected an intensely personal conviction and expressed a whole philosophy of life, but at the same time it is obvious that these religious convictions readily lent themselves to the furtherance of New England's political and economic interests. The consciousness of being God's chosen instruments in bringing civilization and true religion to the wilderness justified a policy of territorial expansion and war on the Indians: their culture was all but destroyed and the race nearly extinguished. As you all know, the Indians were forced into ever-smaller hunting-grounds as they were persuaded or compelled to give up their forests and fields in exchange for arms, trinkets or alcohol. This policy of removal culminated in the massacre of Wounded Knee and the harrowing episode of the Trail of Tears. So this is a perfect example of the way ideology works: in the present instance, it served as a cover for territorial expansion and genocide.
As a historian put it, "the American national epic is but the glorification of a genocide".
What the early days of settlement prove beyond doubt is that the American experiment took root in an archetypal as well as in a geographical universe, both outside history and yet at a particular stage in the course of history. The Pilgrim Fathers' motivations were metaphysical as well as temporal and the ideological discourse held by those founders was inscribed in the imaginary: it was rich in fables, symbols and metaphors that served as a system of interpretation or framework of references to give shape and meaning to their experience. Thus the American Dream was intimately related to the Sacred and eventually regarded as sacred. Later, under the influence of the writings of such philosophers as John Locke (1632-1704) or Benjamin Franklin (1706-1790), the American Dream was gradually remodelled and secularized but it has always kept a mystical dimension.
Nowadays, it is obvious that ideology in present-day America is mostly concerned with what the Dream has become and the most striking feature of it is its permanence -however changeable its forms and short-lived its manifestations may have been. It is of course an endless debate revealing great differences of attitude; some, like John Kennedy in A Nation of Immigrants (1964), maintaining the Dream has materialized ("The opportunities that America offered made the dream come true, at least for a good many; but the dream itself was in large part the product of millions of plain people beginning a new life in the conviction that life could indeed be better, and each new wave of immigration rekindled the dream") while others contend that it has vanished into thin air or again claim, like John M. Gill in his introduction to The American Dream, that the American Dream has not been destroyed because it has not materialized yet; it is just, in the words of the Negro poet Langston Hughes, "a dream deferred":
What happens to a dream deferred?
Does it dry up
like a raisin in the sun?
Or fester like a sore -
And then run?
Does it stink like rotten meat?
Or crust and sugar over -
like a syrupy sweet?
Maybe it just sags
like a heavy load.
Or does it explode?
Let America be America again.
Let it be the dream it used to be.
Let it be the pioneer on the plain
Seeking a home where he himself is free.
(America never was America to me.)
Let America be the dream the dreamers dreamed -
Let it be that great strong land of love
Where never Kings connive nor tyrants scheme
That any man be crushed by one above.
(It never was America to me.)
O, let my land be a land where liberty
Is crowned with no false patriotic wreath,
But opportunity is real, and life is free,
Equality is the air we breathe.
(There's never been equality for me
Nor freedom in this "homeland of the free.")
---------------
O, yes, I say it plain,
America never was America to me,
And yet, I swear this oath -
America will be!
The Dream is all the more enduring as it is seen as being deferred. Note as well that the definition of the Dream has continually changed as the notion of happiness evolved, but if some elements have disappeared, others have been included and the Dream still embodies a number of obsessions and phantasms that haunted the people of Massachusetts or New Jersey four centuries ago. Such perennity and resilience are most remarkable features and prove beyond doubt that myth cannot be destroyed by history. The main components of the American Dream are well-known; it is a cluster of myths where one can find, side by side or alternating with each other :
-the myth of Promise ("America is promises") in its quasi-theological form, America being seen as God's own country;
-the myth of Plenty: America = a land of plenty, a myth which originated in the Bible and then assumed more materialistic connotations;
-the myth of Adamic Innocence;
-a sense of mission, at first divine and then imperialistic (Manifest Destiny);
-the Frontier;
-the Melting-Pot, etc.
The list is by no means exhaustive and might include all the concepts at the core of Americanism such as the pursuit of happiness, equality of opportunity, freedom, self-reliance and what not.
However nebulous this series of elements may be, it played at one time or another in America -and for the most part still plays -the role of motive power or propelling force for the American experiment: this is what makes Americans tick! I shall now embark upon a more detailed examination of the major ones.
THE PROMISED LAND
The significance of the Promised Land varied according to what the settlers or immigrants expected to find in the New World. For some it was chiefly a religious myth (America was seen as a haven of peace for latter-day pilgrims); for those who were more interested in worldly things it was supposed to be an El Dorado, a legendary country said to be rich in gold and treasures; lastly, for a third category of people, it symbolized a prelapsarian world, a place of renewal and the site of a second golden age of humanity. As we have seen, most of the colonists who left Europe for the New World did so in the hope of finding a more congenial environment, and for the Pilgrim Fathers New England was a modern counterpart of the Biblical archetype and the success of the settlement was seen as evidence of their peculiar relation to God (a sign of divine election). But from the outset, the myth also served different purposes:
-it was used as propaganda material and a lure to stimulate immigration from Europe;
-it provided the colonists with a convenient justification for the extermination of the Indians;
-it offered an argument against British rule since God's Chosen People could not acknowledge any other authority but God's, which resulted in a theocratic organization of the colony.
What is worthy of note is that even if subsequent settlers did not share this explicitly religious outlook stemming from radical Protestantism, most of them did think of America as in some sense a gift of Divine Providence. But the secularization of the myth set in very early; in the late XVIIIth and early XIXth centuries, with the rise of capitalism and incipient industrialism which made living conditions worse for large numbers of people, a reaction set in which revived the pastoral ideal, a pagan version or new avatar of the concept of the Promised Land. The myth of a rustic paradise, as formulated by Rousseau for instance, postulates that the beauty of nature, the peace and harmony of the virgin forest have a regenerative, purifying and therapeutic effect both physically and morally.
Thomas Jefferson, 3rd President of the U.S., was the originator of the pastoral tradition in America: he maintained that the independent yeoman farmer was the true social foundation of democratic government: "Those who labor in the earth are the chosen people of God, if ever he had a chosen people," he wrote. If the pastoral myth played an obvious role in the conquest of the Continent, it was nonetheless a sort of rearguard action doomed to failure: the advance of progress and urbanization was irresistible, but the ideal of a life close to nature was to persist in the realm of fiction where it repeatedly crops up in the works of Cooper, Emerson, Twain or Thoreau.
However, the religious interpretation of the myth survived and continued at intervals to reassert itself through much of the XIXth century; see for instance the saga of the Mormons who trekked westward across the prairie to settle in Utah and found Salt Lake City, their Jerusalem. The motif of the journey out of captivity into a land of freedom also found an answering echo among Black slaves on Southern plantations; for some of them the dream did materialize when they managed to reach the Northern free states or Canada thanks to the "Underground Railroad", a secret organization helping fugitive slaves to flee to free territory. But the most important consequence of the idea of the Promised Land was the Messianic spirit that bred into Americans a sense of moral superiority and endowed them with the conviction that God had given them a world mission i.e. America's "Manifest Destiny", an imperialistic slogan launched by the journalist and political writer John L. O'Sullivan. According to the doctrine, it was the destiny of the U.S. to be the beacon of human progress, the liberator of oppressed peoples and consequently to expand across the continent of North America. The notion of "Manifest Destiny" is perfectly illustrative of the way ideology works and turns every principle or doctrine to its advantage: from a sense of mission in the field of religion, the concept evolved into a secular and imperialistic justification for territorial expansion. This self-imposed mission served as a most convenient pretext to justify acts of imperialistic intervention in the affairs of foreign countries, but one must also acknowledge that on occasion it also provided the moral basis for acts of altruism or generosity toward other nations. Thus there was a shift from spreading the Word of God to spreading the American model of government and way of life; the impetus or drive was kept but the goal was changed: spiritual militancy evolved into imperialism.
THE AMERICAN ADAM
As was to be expected, the myth of the Promised Land gave rise to a novel idea of human nature embodied by a new type of man, the American Adam, i.e. homo americanus having recaptured pristine innocence. Sinful, corrupt Europe was an unlikely place for the emergence of this new avatar of humanity, but the American wilderness, being a virgin environment, was to prove much more favorable to the advent of a mythic American new man. As St John de Crèvecoeur stated in his Letters from an American Farmer (1782): "The American is a new man, who acts upon new principles; he must therefore entertain new ideas and form new opinions". The forefather of the American Adam was "the natural man" or "the noble savage" of Locke's and Rousseau's philosophies, i.e. an ideal type of individual seen as the very opposite of the corrupt and degenerate social man. The American farmer, hedged in by the forest, partaking of none of the vices of urban life, came to be regarded as the very type of Adamic innocence. In the wake of Independence, the new country elaborated a national ideology characterized by a strong antagonism towards Europe and towards the past: cf. John L. O'Sullivan (1839):
Our National birth was the beginning of new history, the formation and progress of an untried political system, which separates us from the past and connects us with the future only; so far as regards the entire development of the rights of man, in moral, political and national life, we may confidently assume that our country is destined to be the great nation of futurity.
XIXth-century authors like Thoreau, Emerson or Cooper were the principal myth-makers: they created a collective representation, the American Adam, which they described as:
An individual emancipated from history, happily bereft of ancestry, untouched and undefiled by the usual inheritances of family and race; an individual standing alone, self-reliant and self-propelling, ready to confront whatever awaited him with the aid of his unique and inherent resources. It was not surprising, in a Bible-reading generation, that the new hero (in praise or disapproval) was most easily identified with Adam before the Fall. Adam was the first, the archetypal man. His moral position was prior to experience, and in his very newness he was fundamentally innocent. (R. W. Lewis)
As immigrants from Europe contaminated the east of the American continent, the western part of the country, being thinly populated and therefore unsullied, became the repository of American innocence. An interesting political development from the fear of European corruption and the myth of the American Adam was isolationism. It proceeded from the assumption that America was likely to be tainted in its dealings with foreign nations and resulted in the formulation of the Monroe Doctrine in 1823: a declaration enunciated by James Monroe (5 th President of the US) that the Americas were not to be considered as a field for European colonization and that the USA would view with displeasure any European attempt to intervene in the political affairs of American countries. It dominated American diplomacy for the next century, and came, in the late 19th century to be associated with the assertion of U.S. hegemony in Latin America. One of the objectives of the Monroe Doctrine was to preserve the moral purity of the nation. However with the sobering experiences of the Civil War, WWI and WWII, and above all the War in Vietnam, the myth lost some of its credibility for experience conclusively proved that the Americans did not belong to a radically different species. America was to be, in the words of M. Lerner, "an extended genesis" but it fizzled out with the outbreak of the War between brothers; then America entered or rather fell into History again: "We've had our Fall" said a Southern woman of letters (E. Welty, Flannery O'Connor?). Though it suffered severe setbacks, the myth of the American Adam remains deeply rooted in the American psyche and is a leitmotif in American fiction, in the Press or in political speeches. The very stereotype of the self-made man ("l'homme qui est le fils de ses oeuvres"), totally dedicated to the present and the future, testifies to the all-engrossing and abiding power of the Adamic idea in American life.
THE MELTING-POT (Facts and Fiction)
As we have seen, building a new polity required the development of a national sense of peoplehood, but in the U.S.A. the question of national identity was from the start inseparable from assimilation, i.e. America's ability to absorb unlimited numbers of immigrants, a process of massive cultural adaptation symbolized by the image of the "Melting-Pot". Thus the motto "E Pluribus Unum" sums up the essence of America's cosmopolitan faith, a conviction that this new country would bring unity out of diversity, but the national motto may assume two widely different meanings depending on whether one places greater stress on the "pluribus" or the "unum". Should "pluribus" be subordinated to and assimilated into "unum", or the other way round, i.e. should unity/"unum" be superseded by diversity/"pluribus"? Although the question is of crucial importance, its relevance is relatively recent: why? Simply because the original colonists all came from England and thus the American nationality was originally formed in a basically Anglo-Saxon mold.
As long as the settlers came from the British Isles, Germany and Northern Europe, i.e. were mostly Protestant in religion, the process of assimilation or melting-pot worked smoothly and resulted in the emergence of a culturally and politically dominant group which, though it also contained strong Celtic admixtures (the Welsh, the Scots and the Irish), came to be referred to as WASPs, an acronym formed from the initial letters of the words "White Anglo-Saxon Protestants". Now the melting-pot idea of immigrant assimilation and American nationality was first put forward by Michel-Guillaume Jean de Crèvecoeur in the oft-quoted passage from Letters from an American Farmer (1782):
What then is the American, this new man? He is either a European or the descendant of a European, hence that strange mixture of blood, which you will find in no other country. I could point out to you a family whose grandfather was an Englishman, whose wife was Dutch, whose son married a French woman, and whose present four sons have four wives of different nations. He is an American, who, leaving behind him all his ancient prejudices and manners, receives new ones from the new mode of life he has embraced, the new government he obeys, and the new rank he holds. He becomes an American by being received into the broad lap of our great Alma Mater. Here individuals are melted into a new race of men, whose labours and posterity will one day cause great changes in the world. (emphasis mine)
The term Melting-Pot, which remains the most popular symbol for ethnic interaction and the society in which it takes place, was launched by Israel Zangwill's play The Melting-Pot which had a long run in New York in 1909. Now what must be pointed out is that in spite of its liberality and tolerance, the cosmopolitan version of the melting-pot was far from being a catholic or universal process. It seemed obvious that from the outset some allegedly unmeltable elements such as the Indians or the Blacks would be simply excluded from the process. Besides, the Melting-Pot was first of all and still is a theory of assimilation. The idea that the immigrants must change was basic; they were, as Crèvecoeur put it, to discard all vestiges of their former culture and nationality to conform to what was at bottom an essentially Anglo-Saxon model. If, before the Civil War, the first big wave of immigrants from Ireland, Germany, Sweden and Norway was easily melted into a new race of men in the crucible of American society, in the 1880s, the second wave, an influx of Catholic people from the Mediterranean area, followed by Slavic people and Jews, strained the assimilationist capacity of the so-called melting-pot. The flood of immigrants whose life-styles and ways of thinking were conspicuously different from American standards raised the problem of mutation and assimilation; it also gave rise to a feeling of racism towards the newly-arrived immigrants. Xenophobia was then rampant and found expression in such movements as the Ku Klux Klan (the 2nd organization, founded in 1915 and professing Americanism as its object), Nativism (the policy of protecting the interests of native inhabitants against those of immigrants) and Know-Nothingism (from the answer "I know nothing" that the members of the organization were advised to give inquisitive people). The program of the Know-Nothing party called for the exclusion of Catholics and foreigners from public office and demanded that immigrants should not be granted citizenship until twelve years after arrival. From the 1880s on, increasing numbers of Americans came to doubt that the mysterious alembic of American society was actually functioning as it was supposed to: the melting-pot gave signs of overheating and the USA assumed the disquieting appearance of "AmeriKKKa". (Parenthetically, I'd like to point out that the strain of xenophobia has not disappeared from American culture; there have been several resurgences of the phenomenon, e.g.
McCarthyism "Red-hunting during the cold war and nearer to us various campaigns in favour of "100 percent Americanism"). To return to the XIX th century, the public outcry against overly liberal immigration policies and the increasing number of "hyphenated" Americans (i.e. Afro-American) led U.S Government to pass legislation restricting entry to the Promised Land (quotas, literacy tests, or the Exclusion Act in 1882 to put an end to Chinese immigration). That period brought to light the limitations and true nature of the melting-pot theory which was just a cover for a process of WASPification in an essentially Anglo-Saxon mold which was almost by definition and from the outset unable to assimilate heterogeneous elements sharply diverging from a certain standard. In the words of N. Glazer and D. Moynihan, "the point about the melting-pot is that it did not happen" i.e. it was just a fallacy and an ideological argument masking the domination of one social group, the Wasps, under the guise of universal principles.
What is the situation today? Immigration laws are a little more liberal and new Americans are still pouring in by the million (nearly 5 million immigrants were admitted from 1969 to 1979). The 70's were the decade of the immigrants and above all the decade of the Asian (refugees from the Philippines or Vietnam etc.). A new and interesting development is Cuban immigration, concentrating in Florida and the influx of illegal immigrants from Mexico, the Wetbacks, settling in the South-West. The most dramatic consequence of the presence of fast-growing communities of Cubans, Puerto Ricans, or Mexicans is the increasing hispanicization of some parts of the USA. Spanish is already the most common foreign language spoken in the States and in some cities or counties it may one day replace American English.
The last three decades were marked by a revival of ethnicity and the rise of new forms of ethnic militancy; the 60s witnessed not only an undeniable heightening of ethnic and racial consciousness among the Blacks (pride of race manifested itself in the purposeful promotion of black power, black pride, black history, and patriotism), the Hispanic Americans and the native Americans, but also an emphatic rejection of the assimilationist model expressed in the idea of the Melting-Pot. Nowadays, foreign-born Americans want the best of both worlds i.e. enjoy the benefits of the American system and way of life but at the same time preserve their customs, traditions and languages. They refuse to sacrifice their own cultural identities on the altar of Americanism and claim a right to a twofold identity.
By way of conclusion: the two decades from 1960 to 1980 witnessed a severe weakening of confidence in the American system, in the principles on which it was based and in the efficacy of its institutions. This crisis in confidence originated in the realization that in the words of Harold Cruse, "America is a nation that lies to itself about who and what it is. It is a nation of minorities ruled by a minority of one--it thinks and acts as if it were a nation of White-Anglo-Saxon Protestants".
The debunking of the melting-pot theory will hopefully pave the way for a different type of society: what seems to be emerging today is the goal of a society that will be genuinely pluralistic in that it will deliberately attempt to preserve and foster all the diverse cultural and economic interests of its constituent groups. The motto "E Pluribus Unum" is coming to seem more and more outdated and one may wonder whether the country's new motto should not be "Ex Uno Plures".
Conclusion
What must be pointed out, after this survey of some of the basic components of national consciousness and ideology, is that they constitute the motive power of the American experiment, what moves or prompts the American people to action and stimulates their imagination; in other words, it is what makes them tick.
A third feature of American ideology is that the ideals and goals it assigns to American people are consistently defined in terms of "prophetic vision", whether it be the vision of a brave new world, of a perfect society or whatever. One of the most forceful exponents of this prophetic vision was Thomas Paine, the political writer, who wrote in Common Sense (1776): "We, Americans, have it in our power to begin the world over again. A situation similar to the present, hath not happened since the days of Noah until now. The birthday of a new world is at hand".
Nearer to us, F. D. Roosevelt stated in 1937: "We have dedicated ourselves to the achievement of a vision" and we're all familiar with Martin L. King's famous opening lines: "I have a dream that one day the sons of former slaves and the sons of former slave-owners will be able to sit down together at the table of brotherhood" (August 1963 in Washington).
What is noteworthy -and the previous quotation is a case in point -is that there is an obvious relationship between American ideology and religion. As an observer put it: "America is a missionary institution that preaches mankind a Gospel". As we saw, American ideology, as embodied in the American Dream, is inseparable from the Sacred and buttressed by the three major denominations in the States: Protestantism, Catholicism, and Judaism. It must be borne in mind that myths, religions, ideology and of course politics are in constant interplay: they often overlap and interpenetrate, cf. Tocqueville: "I do not know whether all Americans have faith in their religion, but I am sure that they believe it necessary to the maintenance of republican institutions."
An opinion borne out by President Eisenhower's contention that "Our Government makes no sense unless it is founded in a deeply religious faith -and I don't care what it is". It seems that in the States the attitude toward religion is more important than the object of devotion; the point is to show one has faith in something -whether God or the American way of life does not really matter.
In the words of a sociologist, "we worship not God but our own worshiping" or to put it differently, the Americans have faith in faith and believe in religion. Thus the nation has always upheld the idea of pluralism of belief and freedom of worship. The State supports no religion but even nowadays religion is so much part of American public life that there seems to be a confusion between God and America, God's own country: dollar banknotes bear the inscription "In God we trust" and the President takes the oath on the Bible. Public atheism remains rare: it is regarded as intellectual, radical, un-American and is accompanied by social disapproval. Since 1960 church attendance has declined steadily, but experimentation with new forms of worship still continues, as witness the increasing number of sects of every description vying with the three main religious groups viz. Protestants, Roman Catholics and Jews.
It is a well-known aspect of religious life in the USA that an American church is in many ways very similar to a club: it is a center of social life and an expression of group solidarity and conformity. People tend to change religious groups or sects according to their rise in social status or their moving into a new neighbourhood. Little emphasis is laid on theology, doctrine or religious argument: morality is the main concern. Tocqueville once remarked: "Go into the churches (I mean the Protestant ones) you will hear morality preached, of doctrine not a word...". The observation is still valid, for churches and religious denominations are expressions of group solidarity rather than of rigid adherence to doctrine. However, this religious dimension is so firmly entrenched in the minds of Americans that Hubert Humphrey, a presidential candidate, campaigned for "the brotherhood of man under the fatherhood of God", something unthinkable on the French political scene, and President Jimmy Carter was a Baptist preacher. The American people do have their common religion and that religion is the system familiarly known as the American way of life. By every realistic criterion the American way of life is the operative faith of the American people, for the American way of life is at bottom a spiritual structure of ideas and ideals, of aspirations and values, of beliefs and standards: it synthesizes all that commends itself to the Americans as the right, the good and the true in actual life. It is a faith that markedly influences, and is influenced by, the official religions of American society. The American way of life is a kind of secularized Puritanism and so is democracy which has been erected into a "superfaith" above and embracing the three recognized religions; cf. J. P. Williams: "Americans must come (I am tempted to substitute 'have come') to look on the democratic ideal as the Will of God" so that the democratic faith is in the States the religion of religions and religion, in its turn, is something that reassures the American citizen about the essential rightness of everything American, his nation, his culture and himself. So, to conclude this series of observations, one can maintain that the Americans are "at one and the same time, one of the most religious and most secular of nations".
If one of the functions of religion is, among other things, to sanctify the American way of life, if democracy can be seen as a civic religion, then the core of this religion is faith in the Constitution as well as in Law and Order. Without going into too much detail, I'd like to point out that the implications and connotations of the two terms are quite different from those they have in other cultures. Law, for instance, is endowed with a prestige that comes from the Bible through its association with Mosaic Law and British tradition (the Common Law), which accounts for its sacred, self-evident nature and its being seen as a "transcendental category": cf. M. L. King's statement that "an unjust law is a human law that is not rooted in eternal law or in natural law". Despite King's lofty conception, American law embodies many of the moralisms and taboos of the American mind and aims at enforcing an order that is dear to the establishment. At the apex of the American legal system stands the Supreme Court as interpreter of the Constitution which enshrines the nation's cohesive force and lends itself to idolization. The Constitution is America's covenant and its guardians, the justices of the Supreme Court, are touched with its divinity.
Lastly, among the key values underpinning American ideology, there's common sense, the very foundation of the American Revolution as witness Thomas Paine's pamphlet. Common sense or sound practical judgment is akin to what R. Barthes used to call Doxa (i.e. "current opinion, false self-evidence, that is to say the masks of ideology, the plausible, what goes without saying: the hallmark of ideology is that it always tries to pass off as natural what is profoundly cultural or historical", L-J Calvet, R. Barthes). Thus there is in American ideology an enduring relationship between the notion of common sense and that of the common man, a stereotype, the main constituent of the middle-class and mainstream America, the backbone of the system. The high valuation of the "common man", endowed with all the virtues that are dear to Americanism, dates back to Jacksonian democracy; the common man has undergone a series of avatars: the frontiersman, the farmer of the Middle-West, the man in the street, and lastly the middle-class citizens forming the so-called silent majority which is, in spite of its name, quite vocal in the defense of its interests and values and considers itself the guardian of normalcy. But, paradoxically enough, what must be emphasized is the complementary link between the mass of ordinary people advocating common sense and belonging to the middle-classes and hero-worship, the cult of individuals out of the common run. American culture has given rise to an impressive gallery of national or comic strip heroes such as Kit Carson, Davy Crockett, Paul Bunyan, Superman or Batman... In the same way, the perfect President is the one whose life follows the well-known pattern set by such national heroes as Jackson or Lincoln, i.e. a trajectory leading the individual from the log-cabin to the White House, a fate which is evidence of the openness of American society and equality of opportunity.
All the elements we have reviewed account for the remarkable stability of American ideology: I grant that there have been periods when that very ideology was questioned -the Americans are currently undergoing one of these cyclical crises in confidence -but national consensus, though somewhat shattered, is still going strong. Ideology continues to play its traditional role of cementing force aiming both at neutralizing all potential conflicts or disruptive tensions and at revitalizing the key values of Americanism. Its flexibility is its main asset and accounts for the multiple adjustments it has resorted to in order to ward off all threats to the system: Populism, Progressivism, the Square Deal, the Fair Deal, and the New Frontier -"to get America moving again" -were all attempts to avert disintegration. It is the same fundamental ideology that underpins the particular position on this or that issue that the Republicans, the Democrats, the Liberals or even the Radicals may take up. In spite of great differences of opinion and interests, there's general agreement, with the usual qualifications, on such basic principles as the defence of:
-Americanism, set up as a universal model;
-a regime of free enterprise and free competition;
-a free world;
-national safety;
-American leadership.
It is worthy of note that opposition to the system and criticisms levelled against Americanism, the consumer society or the society of alienation, are more often than not inspired by the same values and ideals that its opponents accuse American society of having forfeited; besides, those who challenge traditional values and the goals of official culture i.e. adherents to the so-called counter-culture, whether it be the youth culture, the drug culture, the hippie movement and flower children, can seldom conceive of any lasting alternative to the American way of life.
The wiles and resilience of national ideology are such that it never fails to absorb or "recuperate" subversive practices by turning them to its advantage. As I said earlier, the USA is currently undergoing a period of self-doubt and loss of confidence; in spite of some outstanding achievements in the field of foreign policy, there's a rising tide of discontent and disenchantment at home. Some Americans have come to question their country's ability to materialize the promise and the dream upon which America was founded. Is the age of ideology passing away to give way to the age of debunking and demythologizing? The question is uppermost in the national consciousness but, as far as I am concerned, it will go unanswered.
01764716 | en | ["sdv.bv.bot", "sde.be"] | 2024/03/05 22:32:13 | 2011 | https://amu.hal.science/hal-01764716/file/Ait%20Said_et_al_2011.pdf
Samir Ait Said, Catherine Fernandez, Stéphane Greff, Arezki Derridj, Thierry Gauquelin, Jean-Philippe Mevy (email: jean-philippe.mevy@univ-provence.fr)
Inter-population variability of leaf morpho-anatomical and terpenoid patterns of Pistacia atlantica Desf. ssp. atlantica growing along an aridity gradient in Algeria
Keywords:
Three Algerian populations of female Pistacia atlantica shrubs were investigated in order to check whether their terpenoid contents and morpho-anatomical parameters may characterize the infraspecific variability. The populations were sampled along a gradient of increasing aridity from the Atlas mountains into the northwestern Central Sahara.
As evidenced by Scanning Electron Microscopy, tufted hairs could be found only on seedling leaves from the low aridity site as a population-specific trait preserved also in culture. Under common garden cultivation seedlings of the high aridity site showed a three times higher density of glandular trichomes compared to the low aridity site. Increased aridity resulted also in reduction of leaf sizes while their thickness increased. Palisade parenchyma thickness also increases with aridity, being the best variable that discriminates the three populations of P. atlantica.
Analysis of terpenoids from the leaves carried out by GC-MS reveals the presence of 65 compounds. The major compounds identified were spathulenol (23 µg g-1 dw), α-pinene (10 µg g-1 dw), verbenone (7 µg g-1 dw) and β-pinene (6 µg g-1 dw) in leaves from the low aridity site; spathulenol (73 µg g-1 dw), α-pinene (25 µg g-1 dw), β-pinene (18 µg g-1 dw) and γ-amorphene (16 µg g-1 dw) in those from the medium aridity site; and spathulenol (114 µg g-1 dw), α-pinene (49 µg g-1 dw), germacrene D (29 µg g-1 dw) and camphene (23 µg g-1 dw) in leaves from the high aridity site. Terpene concentrations increased with the degree of aridity: the highest mean concentrations of monoterpenes (136 µg g-1 dw), sesquiterpenes (290 µg g-1 dw) and total terpenes (427 µg g-1 dw) were observed at the most arid site and are, respectively, 3-, 5- and 4-fold higher than at the least arid site. Spathulenol and α-pinene can be taken as chemical markers of aridity. Drought-discriminating compounds present in low but detectable concentrations are δ-cadinene and β-copaene. The functional roles of the terpenoids found in P. atlantica leaves and the principles of their biosynthesis are discussed with emphasis on the mechanisms of plant resistance to drought conditions.
Introduction
Plants respond to environmental variations, particularly to water availability through morphological, anatomical and biochemical adjustments that help them cope with such variations [START_REF] Lukovic | Histological characteristics of sugar beet leaves potentially linked to drought tolerance[END_REF]. Plants are adapted to drought stress by developing xeromorphic characters based mainly on reduction of leaf size [START_REF] Trubat | Plant morphology and root hydraulics are altered by nutrient deficiency in Pistacia lentiscus L[END_REF] and increase in thickness of cell walls, a more dense vascular system, greater density of stomata and an increased development of palisade tissue at the expense of the spongy tissue [START_REF] Bussotti | Structural and functional traits of Quercus ilex in response to water availability[END_REF][START_REF] Bacelar | Immediate responses and adaptative strategies of three olive cultivars under contrasting water availability regimes: changes on structure and chemical composition of foliage and oxidative damage[END_REF][START_REF] Syros | Leaf structural dynamics associated with adaptation of two Ebenus cretica ecotypes[END_REF].
Terpenes are one of the most diverse families of chemical compounds found in the plant kingdom and they play several roles in plant defense and communication [START_REF] Kirby | Biosynthesis of plant isoprenoids: perspectives for microbial engineering[END_REF]. In response to drought conditions, significant changes in terpene emissions were shown in many Mediterranean species (Ormeño et al., 2007a;[START_REF] Lavoir | Drought reduced monoterpene emissions from the evergreen Mediterranean oak Quercus ilex: results from a throughfall displacement experiment[END_REF]. Similar results were reported regarding the occurrence of terpenic components from Erica multiflora and Globularia alypum [START_REF] Llusià | Net ecosystem exchange and whole plant isoprenoid emissions by a Mediterranean shrubland exposed to experimental climate change[END_REF]. It has also been shown that monoterpenes and sesquiterpenes have a role in protecting plants from thermal damage (Peñuelas [START_REF] Pe Ñuelas | Linking photorespiration, monoterpenes and thermotolerance in Quercus[END_REF][START_REF] Loreto | Impact of ozone on monoterpene emissions and evidence for an isoprene-like antioxidant action of monoterpenes emitted by Quercus ilex leaves[END_REF][START_REF] Llusià | Airborne limonene confers limited thermotolerance to Quercus ilex[END_REF][START_REF] Pe Ñuelas | Linking isoprene with plant thermotolerance, antioxidants and monoterpene emissions[END_REF]. Terpenes are recognized as being relatively stable and also as precursors of numerous potential physiological components, including growth regulators [START_REF] Byrd | Narrow hybrid zone between two subspecies of big sagebrush, Artemisia tridentata (Asteraceae). VIII. Spatial and temporal pattern of terpenes[END_REF]. Another property of these compounds is their great variability over time and with the geographic distribution of species, as shown by many studies in the literature [START_REF] Lang | Abies alba Mill -differentiation of provenances and provenance groups by the monoterpene patterns in the cortex resin of twigs[END_REF][START_REF] Staudt | Seasonal variation in amount and composition of monoterpenes emitted by young Pinus pinea trees -implications for emission modeling[END_REF][START_REF] Hillig | A chemotaxonomic analysis of terpenoid variation in Cannabis[END_REF][START_REF] Smelcerovic | Essential oil composition of Hypericum L. species from Southeastern Serbia and their chemotaxonomy[END_REF]. As a result, many studies relate terpenic constituents to plant systematics and population issues [START_REF] Adams | Systematics of multi-seeded eastern hemisphere Juniperus based on leaf essential oils and RAPD DNA fingerprinting[END_REF][START_REF] Naydenov | Structure of Pinus nigra Arn. populations in Bulgaria revealed by chloroplast microsatellites and terpenes analysis: provenance tests[END_REF].
The genus Pistacia (Anacardiaceae) consists of at least eleven dioecious species [START_REF] Zohary | A monographic study of the genus Pistacia[END_REF][START_REF] Kokwaro | Notes on the Anacardiaceae of Eastern Africa[END_REF] that all intensely produce terpenes. There are three wild Pistacia species in Algeria: P. atlantica Desf. ssp. atlantica which exhibits high morphological variability [START_REF] Belhadj | Analyse de la variabilité morphologique chez huit populations spontanées de Pistacia atlantica en Algérie[END_REF], P. lentiscus L. and less frequently P. terebinthus L. Pistacia atlantica is considered to be an Irano-Turanian species which is distributed from south-west Asia to north-west Africa [START_REF] Zohary | A monographic study of the genus Pistacia[END_REF]. In Algeria, it occurs in the wild from sub-humid environments to extreme Sahara sites [START_REF] Monjauze | Note sur la régénération du Bétoum par semis naturels dans la place d'éssais de Kef Lefaa[END_REF][START_REF] Quézel | Ecologie et biogéographie des forêts du bassin méditerranéen[END_REF][START_REF] Benhassaini | The chemical composition of fruits of Pistacia atlantica Desf. subsp. atlantica from Algeria[END_REF]. As a thermophilous xerophyte, P. atlantica grows on dry stony or rocky hillsides, edges of fields, roadsides, near the base of dry stone walls and in other similar habitats [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF]. The species grows well on clay or silty soils, although it can also thrive on calcareous rocks where roots develop inside cracks. Hence, P. atlantica has wide ecological plasticity, as also shown by [START_REF] Belhadj | Comparative morphology of leaf epidermis in eight populations of atlas pistachio (Pistacia atlantica Desf., Anacardiaceae)[END_REF] through leaf epidermis analysis. For all these reasons, P. atlantica is used in re-planting projects in Algeria, but only a few studies have been carried out on the infraspecific variability of this plant.
Regarding the phytochemistry of P. atlantica, essential oils from samples harvested in Greece [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF] and Morocco [START_REF] Barrero | Chemical composition of the essential oils of Pistacia atlantica Desf[END_REF] were described. Recently also a study was published describing essential oils and their biological properties from P. atlantica harvested in Algeria [START_REF] Gourine | Chemical composition and antioxidant activity of essential oil of leaves of Pistacia atlantica Desf. from Algeria[END_REF]. However, to the best of our knowledge, there is no detailed study on the relationship between the phytochemistry of P. atlantica and its ecological conditions of growth.
The aim of this work is to investigate the intraspecific diversity of three populations of P. atlantica growing wild in arid zones of Algeria through terpenoid analysis and leaf morpho-anatomical traits. We also examined the possible links that may exist between plant chemical composition and aridity conditions of these three locations.
Material and methods
Sampling sites
Pistacia atlantica Desf. ssp. atlantica was harvested in June 2008 from three Algerian sites chosen along a Northeast-Southwest transect of increasing aridity: Oued-Besbes (Medea) - low aridity, Tilghemt (Laghouat) - medium aridity and Beni-Ouniff (Bechar) - high aridity (Fig. 1). Specimens were deposited at the herbarium of the University of Provence Marseille and referred to as Mar-PA1-2008, Mar-PA2-2008 and Mar-PA3-2008 for the locations of Medea, Laghouat and Bechar, respectively. Ecological factors of the sampling sites are described in Table 1.
For all the sites, sampling was carried out during the fructification stage in order to take into account the phenological shift due to local climatic conditions. Ten healthy female individuals of the same age were chosen per site. Plant density and soil conditions were similar across the different sites.
Leaf morphology and anatomy
From each of the three locations, ten female trees were selected and thirty fully sun-exposed leaves were harvested per tree. Once harvested, these leaves were carefully dried and kept in a herbarium prior to the biometric measurements: leaf length and width, petiole length, rachis length and terminal leaflet length and width.
For the anatomical parameters, cross sections were prepared across the middle part of three fresh leaflets per leaf and stained with carmino-green, and the thicknesses of the abaxial and adaxial epidermis, the cuticle, the palisade and spongy parenchyma and the total leaflet were measured by light microscopy.
Scanning Electron Microscopy (SEM) of seedling leaves
Seeds were collected in August 2008 at the Medea and Bechar sites. After germination, the seedlings were transplanted into pots filled with peat and sand, then kept in a growth chamber at a constant temperature of 25 °C. The photoperiod was set at 11/13 h and the light irradiance was 500 µmol photons m-2 s-1. After 11 months of culture, eight plants from each location were randomly selected, and three leaves per plant were harvested and carefully dried prior to SEM observations. Micromorphological observations were carried out on three leaflet samples (adaxial and abaxial surfaces) per leaf. These were gold-coated before observation with a scanning electron microscope (FEI XL30 ESEM, USA).
Terpenoid extraction
Mature and sun-exposed leaves were harvested in the field and dried in the dark at ambient temperature until constant weight; then 100 g per tree were ground and stored until use. Shade-drying has no significant effect on the qualitative composition of volatile oils compared to fresh material, sun-drying or oven-drying at 40 or 45 °C [START_REF] Omidbaigi | Influence of drying methods on the essential oil content and composition of Roman chamomile[END_REF][START_REF] Sefidkon | Influence of drying and extraction methods on yield and chemical composition of the essential oil of Satureja hortensis[END_REF][START_REF] Ashafa | Effect of drying methods on the chemical composition of essential oil from Felicia muricata leaves[END_REF]. The extraction method consisted of suspending the leaf dry matter in dichloromethane at a ratio of 1:2 (w/v) for 30 min, under constant shaking at room temperature. 50 µl of dodecane (5 mg ml-1) were added as internal standard for quantification.
Quantitative and qualitative analysis of terpenoids
Extracts were filtered through RC syringe filters (regenerated cellulose, 0.45 µm, 25 mm; Phenomenex, Le Pecq, France) and then analyzed with a Hewlett Packard® GC 6890 gas chromatograph coupled to a 5973 Network mass selective detector. The system was fitted with an HP-5MS capillary column (30 m, 0.25 mm i.d., 0.25 µm film thickness). 2 µl of extract were injected through an ALS 7683 automatic injector in splitless mode. The purge was set at 50 ml min-1 after 1 min. The injection temperature was maintained at 250 °C. Helium was used as carrier gas at a constant flow rate of 1 ml min-1 throughout the run. The oven temperature, initially set at 40 °C, was increased to 270 °C at a rate of 4 °C min-1 and then held constant for 5 min. The MSD transfer line heater was maintained at 280 °C.
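The text does not state how peak areas were converted into concentrations. The sketch below shows the usual single-point internal-standard calculation against the dodecane spike; the relative response factor of 1.0, the peak areas and the extracted leaf mass are all assumptions introduced for illustration and are not taken from the paper.

```python
def concentration_ug_per_g(peak_area, is_area, is_mass_ug=250.0,
                           sample_mass_g=1.0, response_factor=1.0):
    """Analyte amount relative to the dodecane internal standard (single-point,
    relative response factor assumed equal to 1), expressed per g dry weight."""
    analyte_mass_ug = (peak_area / is_area) * is_mass_ug / response_factor
    return analyte_mass_ug / sample_mass_g

# 50 µl of dodecane at 5 mg ml-1 correspond to 250 µg of internal standard added.
areas = {"alpha-pinene": 1.2e6, "spathulenol": 2.9e6}   # invented peak areas
is_area = 2.5e6                                          # invented IS peak area
for name, area in areas.items():
    print(name, round(concentration_ug_per_g(area, is_area, sample_mass_g=2.0), 1))
```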
Terpenes were identified by comparison of their arithmetic indices (AI) and mass spectra with those obtained from authentic samples and from the literature [START_REF] Adams | Identification of Essential Oil Components by Gas Chromatography/Mass Spectrometry[END_REF].
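For readers unfamiliar with the arithmetic index, the sketch below implements the linear retention-index formula of van den Dool and Kratz that underlies the AI values tabulated by Adams; the n-alkane retention times used here are invented, since the paper does not report its alkane calibration series.

```python
def arithmetic_index(rt, alkane_rts):
    """Linear (van den Dool & Kratz) retention index for temperature-programmed GC.
    alkane_rts maps the carbon number of each n-alkane to its retention time (min)."""
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= rt <= t_next:
            return 100 * n + 100 * (rt - t_n) / (t_next - t_n)
    raise ValueError("retention time falls outside the alkane series")

alkanes = {9: 10.80, 10: 13.90, 11: 17.20}      # invented retention times (min)
print(round(arithmetic_index(11.79, alkanes)))  # ~932, the AI listed for alpha-pinene in Adams
```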
Statistical analysis
The data were analyzed with a one-way ANOVA model. The Newman-Keuls test was used to test for significant differences in monoterpene, sesquiterpene and total terpene concentrations and in the morpho-anatomical measurements among the three populations. In order to evaluate the information contained in the chemical data, a Principal Component Analysis was carried out. The statistical analyses were performed using the R statistical software and the package "ade4".
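The analyses were run in R with the ade4 package; the following Python sketch reproduces an equivalent workflow on simulated data, with Tukey's HSD standing in for the Newman-Keuls post-hoc test, which is not available in the common Python libraries. All numbers are invented and only mirror the structure of the data set (10 trees per site, 30 extracts, 49 compounds).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
sites = np.repeat(["low", "medium", "high"], 10)           # 10 trees per site
total_terp = np.concatenate([rng.normal(m, 20, 10) for m in (113, 250, 427)])

# One-way ANOVA on total terpene concentration, then pairwise comparisons
groups = [total_terp[sites == s] for s in ("low", "medium", "high")]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(total_terp, sites))

# PCA on a simulated 30-sample x 49-compound table, scaled to unit variance
X = rng.lognormal(mean=2.0, sigma=1.0, size=(30, 49))
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)   # compare with the 62% and 15% reported for Axes 1 and 2
```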
Results
Morpho-anatomical measurements
Among the biometric parameters studied, leaf length and width as well as terminal leaflet length and width statistically discriminate the three populations of P. atlantica (Table 2). The population from the most arid site shows the smallest leaf and terminal leaflet sizes. However, the number of leaflet pairs increases with aridity. Regarding the anatomical data, the thickness of the palisade parenchyma is the major discriminating variable, and it increases with aridity (Table 3).
SEM observations
The epidermis of seedling leaves has markedly sinuous walls in both the Medea and Bechar populations. The abaxial and adaxial leaf surfaces of each population are covered with two types of trichomes, elongated hairs and glandular trichomes. The former are essentially located at the midrib of the adaxial leaf surface (Fig. 2A) and at the rachis, forming parallel rows (Fig. 2B). The latter (Fig. 2C) are distributed over the entire leaf surface (essentially the abaxial surface), with a high density (18.31 ± 0.29 mm-2) in plants whose seeds were sampled from the population of the most arid site (Bechar).
The trichome density of plants raised from seeds sampled from the population that grows under less arid conditions (Medea) was 6.15 ± 0.21 mm-2 when both seedling lots were cultivated in the same environment (Fig. 2D and E). The Medea population could further be discriminated by the presence of tufted hairs, which were never observed in the Bechar population of P. atlantica, either in seedlings or in adult plants (Fig. 2F).
Terpenoid analysis
Forty-nine compounds were identified in P. atlantica leaves (Table 4). Among these, twenty-two were monoterpenes (8 hydrocarbons and 14 oxygenated) and twenty-five were sesquiterpenes (16 hydrocarbons and 9 oxygenated). In the high aridity site, the major compounds identified were spathulenol (114 µg g⁻¹ dw), α-pinene (49 µg g⁻¹ dw), germacrene D (29 µg g⁻¹ dw) and camphene (23 µg g⁻¹ dw), while in the low aridity site spathulenol (23 µg g⁻¹ dw), α-pinene (10 µg g⁻¹ dw), verbenone (7 µg g⁻¹ dw) and β-pinene (6 µg g⁻¹ dw) were the dominant constituents. For the medium aridity site, situated between these two extreme conditions of aridity, spathulenol (73 µg g⁻¹ dw), α-pinene (25 µg g⁻¹ dw), β-pinene (18 µg g⁻¹ dw) and γ-amorphene (16 µg g⁻¹ dw) were the main terpenes found.
The quantitative analysis showed significant differences in monoterpene, sesquiterpene and total terpene concentrations of the P. atlantica leaves according to the sites investigated (Fig. 3). Three distinct groups were obtained (Newman-Keuls test, 5% level). Terpene concentrations increase with the degree of aridity. The highest mean concentrations of monoterpenes (136 µg g⁻¹ dw), sesquiterpenes (290 µg g⁻¹ dw) and total terpenes (427 µg g⁻¹ dw) were observed in the high aridity site, whereas the corresponding values were 57 µg g⁻¹ dw, 57 µg g⁻¹ dw and 113 µg g⁻¹ dw at the low aridity site.
Multivariate analysis was applied to the terpenoid contents of the 30 solvent extracts. Fig. 4 shows the two-dimensional mapping of the Principal Component Analysis, which comprises 77% of the total inertia. Axis 1 represents 62% of the information and is characterized on the positive side by thuja-2,4(10)-diene and on the negative side by a group of compounds, essentially tricyclene, α-pinene, camphene, isoborneol acetate, β-cubebene, β-copaene, germacrene D, δ-cadinene and spathulenol. Axis 2, representing 15% of the information, is characterized on the negative side by β-pinene and terpinen-4-ol.
Positions of the individual samples from leaf extractions in the two-axes space show an overall homogeneity between leaf extracts belonging to the same study site (Fig. 5). Three main groups, characterized by the geographical provenances, can be distinguished. The first group is situated on the positive side of Axis 1 and includes samples from individuals of the low aridity site. The second group is located on the negative side of Axis 1 and includes all individuals of the high aridity site. The third group, situated on the negative side of Axis 2, between the points related to samples from the two extreme sites, includes mostly samples from individuals of the medium aridity site. These three groups are clearly separated along Axis 1, which can be interpreted as indicating the aridity gradient. The most discriminating variables encompass α-pinene, spathulenol, δ-cadinene and β-copaene.
Discussion
Increase of epidermis, cuticle, palisade parenchyma and total leaf thickness with the degree of aridity may enhance survival and growth of P. atlantica by improving water relations and providing higher protection for the inner tissues in the high aridity site. Such patterns were observed in many species submitted to water stress (e.g., [START_REF] Bussotti | Structural and functional traits of Quercus ilex in response to water availability[END_REF][START_REF] Bacelar | Immediate responses and adaptative strategies of three olive cultivars under contrasting water availability regimes: changes on structure and chemical composition of foliage and oxidative damage[END_REF][START_REF] Guerfel | Impacts of water stress on gas exchange, water relations, chlorophyll content and leaf structure in the two main Tunisian olive (Olea europaea L.) cultivars[END_REF]. Also, a pronounced decrease of leaf size reduces transpiration in sites where water is scarce, as also reported for other plants [START_REF] Huang | Leaf morphological and physiological responses to drought and shade in two Populus cathayana populations[END_REF][START_REF] Macek | Morphological and ecophysiological traits shaping altitudinal distribution of three Polylepis treeline species in the dry tropical Andes[END_REF]. The high morpho-anatomical plasticity of Pistacia atlantica in response to aridity may explain its wide ecological distribution in northern Africa. Trichomes are considered as important taxonomic characters [START_REF] Krak | Trichomes in the tribe Lactuceae (Asteraceae) -taxonomic implications[END_REF][START_REF] Salmaki | Trichome micromorphology of Iranian Stachys (Lamiaceae) with emphasis on its systematic implication[END_REF][START_REF] Shaheen | Diversity of foliar trichomes and their systematic relevance in the genus Hibiscus (Malvaceae)[END_REF]. The absence of tufted hairs in Bechar population suggests the existence of genetic differences between the populations studied.
Regarding the phytochemistry of P. atlantica, no data were reported before on extractable terpenoids composition of the pistacia leaves. However, qualitative and quantitative analyses of essential oils from leaves of P. atlantica were reported by several authors. Oils from female plants originating from Greece contained myrcene (17.8-24.8%), sabinene (7.8-5.2%) and terpinene (6-11.6%) as major components [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF]. Some compounds found in our samples like ␥-amorphene, p-mentha-1,3,5-triene, cis-and trans-sabinene hydrate, ␣-campholenic aldehyde, trans-verbenol, myrtenal, myrtenol, verbenone, ␣muurolene and spathulenol were not found in leaves of P. atlantica from Greece. A provenance from Morocco whose sex was not specified was rich in terpinen-4-ol (21.7%) and elemol (20.0%) [START_REF] Barrero | Chemical composition of the essential oils of Pistacia atlantica Desf[END_REF]. These compounds were found in small amounts (less than 1.1%) also in our samples. Recently, [START_REF] Gourine | Chemical composition and antioxidant activity of essential oil of leaves of Pistacia atlantica Desf. from Algeria[END_REF] have identified 31 compounds from samples harvested at Laghouat with -pinene (19.1%), ␣-terpineol (12.8%), bicyclogermacrene (8.2%) and spathulenol (9.5%) as the principal molecules. Qualitative and quantitative differences between literature data and our results may be explained by such factors as sex of the plants [START_REF] Tzakou | Volatile metabolites of Pistacia atlantica Desf. from Greece[END_REF], period of plant collection [START_REF] Barra | Characterization of the volatile constituents in the essential oil of Pistacia lentiscus L. from different origins and its antifungal and antioxidant activity[END_REF][START_REF] Gardeli | Essential oil composition of Pistacia lentiscus L. and Myrtus communis L.: evaluation of antioxidant capacity of methanolic extracts[END_REF][START_REF] Hussain | Chemical composition, antioxidant and antimicrobial activities of basil (Ocimum basilicum) essential oils depends on seasonal variations[END_REF]), plant competition (Orme ño et al., 2007b), position of leaves in the trees [START_REF] Gambliel | Terpene changes due to maturation and canopy level in douglas-fir (Pseudotsuga-menziesii) flush needle oil[END_REF][START_REF] Barnola | Intraindividual variations of volatile terpene contents in Pinus caribaea needles and its possible relationship to Atta laevigata herbivory[END_REF], soil nutrient availability [START_REF] Yang | Effects of ammonium concentration on the yield, mineral content and active terpene components of Chrysanthemum coronarium L. in a hydroponic system[END_REF][START_REF] Orme Ño | Production and diversity of volatile terpenes from plants on calcareous and siliceous soils: effect of soil nutrients[END_REF][START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF] and water availability [START_REF] Turtola | Drought stress alters the concentration of wood terpenoids in Scots pine and Norway spruce seedlings[END_REF][START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF]. Moreover, according to the method of extraction used, recovering the true components of the plant in vivo still remains a matter of debate. 
Indeed through hydrodistillation, thermal hydrolysis in acid medium may be a source of artifacts in terms of the essential oil composition [START_REF] Adams | Cedar wood oil -analysis and properties[END_REF].
However, the chemical analysis indicated that there are significant differences between the three populations, which were analyzed by the same method. These differences comprise both the quantitative and the qualitative composition of the terpenoids. Spathulenol and α-pinene are the dominant compounds that clearly discriminate quantitatively the three stations. Although identified in minor contents in samples from the low and medium aridity stations, thuja-2,4(10)-diene, p-mentha-1,3,5-triene, nopinone and trans-3-pinocarvone were not registered in samples from the high aridity station. This raises the question of the role of individual terpenoid components in plant responses to aridity and the central issue of the phenotypic/genotypic diversity of the investigated populations.
Allelopathic properties of α-pinene are reported in the literature. This hydrocarbon monoterpene inhibits radicle growth of several species, enhances root solute leakage and increases the levels of malondialdehyde, proline and hydrogen peroxide, indicating lipid peroxidation and induction of oxidative stress [START_REF] Singh | alpha-Pinene inhibits growth and induces oxidative stress in roots[END_REF]. It is likely that the high content of α-pinene found in the leaves from the driest site may influence interspecific competition for water resources. For all sites investigated, the understory diversity was low, composed mainly of Ziziphus lotus. Hence, α-pinene might play direct and indirect roles in P. atlantica responses to drought situations.
Spathulenol is an azulenic sesquiterpene alcohol that occurs in several plant essential oils [START_REF] Mévy | Composition of the volatile constituents of the aerial parts of an endemic plant of Ivory Coast, Monanthotaxis capea (E. G. & A. Camus) Verdc[END_REF][START_REF] Cavar | Chemical composition and antioxidant and antimicrobial activity of two Satureja essential oils[END_REF]. Azulenes are also known as allelochemicals [START_REF] Inderjit | Principles and Practices in Plant Ecology: Allelochemical Interactions[END_REF]. Especially their bactericidal activity has been proven as well as their function as plant growth regulator precursors [START_REF] Muir | Azulene derivatives as plant growth regulators[END_REF][START_REF] Konovalov | Natural azulenes in plants[END_REF]. Azulene is a polycyclic hydrocarbon, consisting of an unsaturated five member ring linked to an unsaturated seven member ring. This molecule absorbs red light 600 nm for the first excited state transition and UVA 330 nm light for the second excited state transition producing a dark blue color in aqueous medium [START_REF] Tetreault | Control of the photophysical properties of polyatomic molecules by substitution and solvation: the second excited singlet state of azulene[END_REF]. The high content of spathulenol found from leaves collected in the high arid station may be interpreted as a defense mechanism against deleterious effects of biotic interactions and UV-light during summer.
Our results are in accordance with several authors who reported increased terpene concentrations in plants under high temperature and water stress conditions [START_REF] Llusià | Changes in terpene content and emission in potted Mediterranean woody plants under severe drought[END_REF][START_REF] Loreto | Impact of ozone on monoterpene emissions and evidence for an isoprene-like antioxidant action of monoterpenes emitted by Quercus ilex leaves[END_REF][START_REF] Pe Ñuelas | Linking isoprene with plant thermotolerance, antioxidants and monoterpene emissions[END_REF][START_REF] Llusià | Net ecosystem exchange and whole plant isoprenoid emissions by a Mediterranean shrubland exposed to experimental climate change[END_REF]. For instance, 54 and 119% increases of total terpene contents under drought treatment were recorded from Pinus halepensis and Quercus ilex, respectively [START_REF] Blanch | Drought, warming and soil fertilization effects on leaf volatile terpene concentrations in Pinus halepensis and Quercus ilex[END_REF]. Because monoterpene biosynthesis is strictly dependent on photosynthesis [START_REF] López | Allelopathic potential of Tagetes minuta terpenes by a chemical, anatomical and phytotoxic approach[END_REF] the increase of their content along with aridity suggests an involvement of specific metabolic pathways that sustain photosynthesis in harsh environmental conditions. In our study, the high thickness of palisade parenchyma can be mentioned in favor of this assumption. On the other hand, monoterpenes act as plant chloroplast membrane stabilizers and protectors against free radicals due to their lipophily and the presence of double bonds in their molecules (Pe ñuelas [START_REF] Pe Ñuelas | Linking photorespiration, monoterpenes and thermotolerance in Quercus[END_REF][START_REF] Chen | Inhibition of monoterpene biosynthesis accelerates oxidative stress and leads to enhancement of antioxidant defenses in leaves of rubber tree (Hevea brasiliensis)[END_REF]. Hence, the increase of monoterpenes may be considered as a regulatory feedback loop that protects photosynthesis machinery from oxidative and thermal damages.
Glandular trichomes are one of the most common secretory structures that produce and store essential oil in plants [START_REF] Covello | Functional genomics and the biosynthesis of artemisinin[END_REF][START_REF] Giuliani | Insight into the structure and chemistry of glandular trichomes of Labiatae, with emphasis on subfamily Lamioideae[END_REF][START_REF] Biswas | Essential oil production: relationship with abundance of glandular trichomes in aerial surface of plants[END_REF]. The high terpenoid contents in Bechar population could be related to the high density of glandular trichomes in this population, which would be also in accordance with other results found by several authors [START_REF] Mahmoud | Cosuppression of limonene-3hydroxylase in peppermint promotes accumulation of limonene in the essential oil[END_REF][START_REF] Fridman | Metabolic, genomic, and biochemical analyses of glandular trichomes from the wild tomato species Lycopersicon hirsutum identify a key enzyme in the biosynthesis of methylketones[END_REF][START_REF] Ringer | Monoterpene metabolism. Cloning, expression, and characterization of (-)-isopiperitenol/(-)-carveol dehydrogenase of peppermint and spearmint[END_REF].
δ-Cadinene and β-copaene are two compounds found in low contents (0.5-3.8 and 1-4.9 µg g⁻¹ dw, respectively) and, like spathulenol and α-pinene, are correlated with the increased aridity the populations are experiencing. Except for antibacterial effects [START_REF] Townsend | Antisense suppression of a (+)-delta-cadinene synthase gene in cotton prevents the induction of this defense response gene during bacterial blight infection but not its constitutive expression[END_REF][START_REF] Bakkali | Biological effects of essential oils -a review[END_REF], no information is available about specific ecological roles of β-copaene and δ-cadinene. It should be noted that they are germacrene D derivatives [START_REF] Bülow | The role of germacrene D as a precursor in sesquiterpene biosynthesis: investigations of acid catalyzed, photochemically and thermally induced rearrangements[END_REF], a compound found in high concentration in Cupressus sempervirens after long-term water stress [START_REF] Yani | The effect of a long-term waterstress on the metabolism and emission of terpenes of the foliage of Cupressus sempervirens[END_REF]. Also, the content of germacrene D in Pistacia lentiscus was shown to increase fourfold during the summer season compared to spring [START_REF] Gardeli | Essential oil composition of Pistacia lentiscus L. and Myrtus communis L.: evaluation of antioxidant capacity of methanolic extracts[END_REF].
The different terpenoids can be appreciated as aridity markers characterizing the three P. atlantica populations. It is not clear whether they are constitutively synthesized or induced by the environmental conditions. Morphological data of leaves indicate that the three populations significantly differ. Scanning electronic microscopy of leaves of seedlings from the high aridity and low aridity provenances grown under controlled conditions reveals that the two populations keep their morphological differences with respect to trichome typology and density. Hence it is likely that the three populations investigated indeed are genetically different. Therefore the chemical variability observed might be as well genetically based. This should be tested in the future by submitting clones selected from the three populations to the same conditions of drought.
Fig. 1. Geographical location of the investigated P. atlantica populations.
Fig. 2. Scanning electron micrographs showing epidermis and trichomes of P. atlantica seedling leaves. (A) Midrib of adaxial leaf surface, covered by elongated trichomes. Bar = 200 µm. (B) Elongated trichomes in parallel rows. Bar = 10 µm. (C) Glandular trichome. Bar = 20 µm. (D and E) Low density of glandular trichomes in Medea population (D) compared to Bechar population (E). Bar = 500 µm. (F) Tufted hairs at the adaxial leaf surface in Medea population. Bar = 50 µm.
Fig. 3. Variance analysis of monoterpene, sesquiterpene and total terpene contents found in female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Means of n = 10 with standard errors, p < 0.05.
Fig. 4. Correlation of occurrences of terpenoid compounds (µg g⁻¹ dw) from female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria; shown are only those terpenoids among which high correlation could be found.
Fig. 5. Two-dimensional PCA of Pistacia atlantica ssp. atlantica individual samples originating from low (la), medium (ma) and high (ha) aridity sites in Algeria.
Table 1. Ecological factors of the Pistacia atlantica collection sites, selected to define the aridity gradient.
Site | Mean annual precipitation (mm) | Maximal temperature M (°C) of the driest month | Drought duration in months (Bagnouls and Gaussen, 1953) | Emberger's pluviothermic quotient Q2 | Latitude | Elevation (m)
Medea (low aridity) | 393.10 | 31.00 | 4 | 15.40 | 36°11-36°22 north, 3°00-3°10 east | 720
Laghouat (medium aridity) | 116.60 | 39.40 | 10 | 04.34 | 28°00 north, 3°00 east | 780
Bechar (high aridity) | 57.70 | 40.70 | 12 | 02.36 | 31°38-32°03 north, 1°13-2°13 west | 790
Table 2. Morphological data (cm) of female P. atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 30 measurements per tree with standard errors.
Leaf biometry (cm) Low aridity site (Medea) Medium aridity site (Laghouat) High aridity site (Bechar) p
Leaf length 9.63 ± 0.19 a 9.17 ± 0.17 b 8.92 ± 0.18 c <0.001
Leaf width 7.61 ± 0.16 a 7.16 ± 0.14 b 6.65 ± 0.17 c <0.001
Rachis length 4.09 ± 0.10 a 3.78 ± 0.07 b 3.72 ± 0.08 b <0.001
Petiole length 2.13 ± 0.04 2.11 ± 0.05 2.05 ± 0.06 >0.05
Terminal leaflet length 3.41 ± 0.03 a 3.29 ± 0.03 b 3.14 ± 0.02 c <0.001
Terminal leaflet width 1.58 ± 0.03 a 1.49 ± 0.01 b 1.45 ± 0.02 c <0.001
Number of leaflet pairs 3.09 ± 0.07 b 3.12 ± 0.08 b 3.26 ± 0.10 a <0.05
Table 3. Anatomical data (µm) of female P. atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 30 measurements per plant (3 replicates per leaf) with standard errors.
Leaf anatomy (µm) Low aridity site (Medea) Medium aridity site (Laghouat) High aridity site (Bechar) p
Abaxial cuticle 4.98 ± 0.08 b 5.99 ± 0.12 a 6.08 ± 0.13 a <0.001
Adaxial cuticle 4.32 ± 0.06 b 4.88 ± 0.16 a 4.91 ± 0.12 a <0.01
Abaxial epidermis 12.70 ± 0.18 b 12.69 ± 0.18 b 14.07 ± 0.20 a <0.001
Adaxial epidermis 13.26 ± 0.16 b 13.30 ± 0.14 b 13.45 ± 0.18 a <0.05
Palisade parenchyma 64.66 ± 1.5 c 72.77 ± 1.38 b 95.76 ± 1.42 a <0.001
Spongy parenchyma 98.67 ± 1.64 102.45 ± 1.81 106.34 ± 1.86 >0.05
Leaf thickness 198.53 ± 2.78 c 212.08 ± 2.97 b 240.61 ± 3.51 a <0.001
Table 4. Concentrations of terpenoids (µg g⁻¹ dw) found in female Pistacia atlantica ssp. atlantica leaves from low, medium and high aridity sites in Algeria. Mean of 10 extractions per site with standard errors.
Group Compounds AI Compound content in leaves (µg g⁻¹ dw) and location
Low aridity Medium aridity High aridity p
Hydrocarbon 1 Tricyclene 914 1.2 ± 0.2 b 2.4 ± 0.4 b 8.7 ± 0.5 a <0.001
monoterpenes 2 α-Pinene 926 10.0 ± 0.4 c 24.5 ± 0.8 b 49.4 ± 1.0 a <0.001
3 Camphene 941 3.1 ± 0.5 b 5.5 ± 0.9 b 23.2 ± 1.1 a <0.001
4 Thuja-2,4(10)-diene 948 1.0 ± 0.1 a 0.6 ± 0.0 b - <0.001
5 β-Pinene 971 6.5 ± 2.3 b 18.1 ± 0.9 a 12.6 ± 0.7 ab <0.001
6 Mentha-1,3,5-triene, p- 1007 0.7 ± 0.1 a 0.1 ± 0.0 b - <0.001
7 Cymene, p- 1023 0.7 ± 0.1 b 1.9 ± 0.2 a 0.2 ± 0.0 b <0.001
8 γ-Terpinene 1058 0.6 ± 0.3 ab 1.6 ± 0.3 a 0.3 ± 0.0 b <0.001
Oxygenated 9 Sabinene hydrate, cis-(IPP vs OH) 1067 0.4 ± 0.2 b 1.6 ± 0.3 a 0.2 ± 0.0 b <0.001
monoterpenes 10 NI 1088 0.6 ± 0.1 a 0.5 ± 0.0 a 0.2 ± 0.0 b <0.001
11 Sabinene hydrate, trans-(IPP vs OH) 1098 0.6 ± 0.2 b 1.5 ± 0.2 a 0.1 ± 0.0 b <0.001
12 NI 1101 5.8 ± 1.5 3.9 ± 0.5 4.9 ± 0.4 >0.05
13 α-Campholenic aldehyde 1125 1.9 ± 0.4 2.1 ± 0.3 1.9 ± 0.3 >0.05
14 Nopinone 1133 0.3 ± 0.1 - - <0.05
15 Pinocarveol, trans- 1138 1.8 ± 0.4 ab 1.5 ± 0.1 b 3.2 ± 0.4 a <0.01
16 Verbenol, trans- 1146 6.0 ± 1.6 3.9 ± 0.6 6.1 ± 0.8 >0.05
17 3-Pinocarvone, trans- 1157 1.2 ± 0.4 ab 1.9 ± 0.4 a - <0.001
18 Pinocarvone 1161 0.8 ± 0.1 b 0.7 ± 0.1 ab 1.2 ± 0.2 a <0.05
19 Terpinen-4-ol 1177 1.3 ± 0.3 b 3.8 ± 0.4 a 1.3 ± 0.2 b <0.001
20 Myrtenal 1194 0.4 ± 0.2 b 0.6 ± 0.2 b 1.4 ± 0.2 a <0.001
21 Myrtenol 1197 1.4 ± 0.3 1.9 ± 0.4 1.5 ± 0.2 >0.05
22 Verbenone 1208 7.0 ± 1.7 3.9 ± 0.7 5.1 ± 0.9 >0.05
23 Carveol, trans 1221 0.8 ± 0.2 b 0.4 ± 0.1 ab 0.9 ± 0.1 a <0.05
24 Borneol, iso-, acetate 1285 2.6 ± 0.5 b 3.9 ± 0.4 b 13.9 ± 0.7 a <0.001
Hydrocarbon 25 δ-Elemene 1337 1.4 ± 0.6 b 14.0 ± 3.7 a 22.0 ± 1.8 a <0.001
sesquiterpenes 26 α-Cubebene 1349 0.3 ± 0.0 b 0.7 ± 0.2 b 1.5 ± 0.2 a <0.001
27 α-Copaene 1375 0.2 ± 0.0 b 0.5 ± 0.1 ab 1.0 ± 0.1 a <0.001
28 β-Bourbonene 1383 0.9 ± 0.2 1.2 ± 0.2 0.8 ± 0.2 >0.05
29 β-Cubebene 1389 0.2 ± 0.0 b 0.5 ± 0.1 ab 0.8 ± 0.1 a <0.001
30 β-Elemene 1392 0.1 ± 0.0 b 0.5 ± 0.1 ab 0.8 ± 0.2 a <0.001
31 β-Ylangene 1418 1.6 ± 0.2 b 9.0 ± 1.3 a 7.2 ± 0.9 a <0.001
32 β-Copaene 1429 0.5 ± 0.1 b 1.2 ± 0.1 b 3.8 ± 0.6 a <0.001
33 γ-Elemene 1433 0.5 ± 0.0 b 2.6 ± 0.8 b 7.4 ± 0.9 a <0.001
34 Guaia-6,9-diene 1438 0.3 ± 0.1 b 2.0 ± 0.2 a 1.9 ± 0.4 a <0.001
35 NI 1444 0.1 ± 0.0 b 0.6 ± 0.1 b 1.3 ± 0.2 a <0.001
36 NI 1453 0.2 ± 0.1 b 1.4 ± 0.3 a 2.2 ± 0.3 a <0.001
37 Caryophyllene, 9-epi- 1461 0.8 ± 0.2 b 4.0 ± 0.6 a 3.8 ± 0.3 a <0.001
38 NI 1470 0.8 ± 0.4 0.4 ± 0.0 1.1 ± 0.1 >0.05
39 Germacrene D 1482 3.0 ± 0.4 b 5.2 ± 1.0 b 29.0 ± 2.9 a <0.001
40 γ-Amorphene 1496 1.8 ± 0.6 b 15.5 ± 2.8 a 20.5 ± 2.2 a <0.001
41 α-Muurolene 1501 0.3 ± 0.0 b 3.7 ± 0.4 a 0.9 ± 0.1 b <0.001
42 γ-Cadinene 1515 0.3 ± 0.0 b 0.7 ± 0.1 b 2.0 ± 0.3 a <0.001
43 δ-Cadinene 1524 1.0 ± 0.1 b 2.0 ± 0.2 b 4.9 ± 0.6 a <0.001
Oxygenated 44 Cubebol 1518 1.1 ± 0.1 b 1.2 ± 0.1 ab 1.7 ± 0.1 a <0.01
sesquiterpenes 45 NI 1527 0.7 ± 0.4 1.4 ± 0.2 1.6 ± 0.3 >0.05
46 Elemol 1552 0.7 ± 0.1 b 2.1 ± 0.6 b 5.8 ± 0.9 a <0.001
47 NI 1557 1.0 ± 0.2 b 2.0 ± 0.6 ab 3.1 ± 0.7 a <0.05
48 NI 1568 0.4 ± 0.0 0.5 ± 0.2 0.8 ± 0.2 >0.05
49 Spathulenol 1581 23.2 ± 1.1c 72.9 ± 1.9 b 114.4 ± 2.2 a <0.001
50 NI 1586 3.5 ± 0.8 b 10.1 ± 0.6 a 3.4 ± 0.3 b <0.001
51 NI 1590 0.4 ± 0.1 b 1.2 ± 0.3 ab 2.4 ± 0.5 a <0.01
52 Salvial-4(14)-en-1-one 1595 0.8 ± 0.1 b 1.5 ± 0.2 ab 2.4 ± 0.3 a <0.001
53 NI 1609 0.6 ± 0.2 b 1.1 ± 0.2 ab 1.9 ± 0.5 a <0.05
54 NI 1615 1.4 ± 0.2 b 2.8 ± 0.3 b 5.1 ± 0.8 a <0.001
55 NI 1620 0.7 ± 0.2 1.0 ± 0.5 0.8 ± 0.1 >0.05
56 Germacrene D-4-ol 1623 0.3 ± 0.0 b 0.8 ± 0.2 b 2.1 ± 0.3 a <0.001
57 γ-Eudesmol 1634 0.2 ± 0.0 b 0.7 ± 0.1 b 1.3 ± 0.2 a <0.001
58 NI 1641 1.4 ± 0.3 b 9.5 ± 1.2 a 12.8 ± 1.3 a <0.001
59 α-Muurolol 1645 0.3 ± 0.1 b 1.1 ± 0.1 ab 1.6 ± 0.3 a <0.001
60 Cedr-8(15)-en-10-ol 1650 0.5 ± 0.1 b 1.3 ± 0.3 ab 2.7 ± 0.4 a <0.001
61 β-Eudesmol 1653 0.4 ± 0.0 b 2.0 ± 0.3 b 4.7 ± 0.5 a <0.001
62 NI 1657 2.0 ± 0.2 b 3.7 ± 0.8 b 10.2 ± 1.3 a <0.001
63 NI 1677 0.7 ± 0.3 b 2.3 ± 0.3 a 0.7 ± 0.1 a <0.001
Others 64 Hex-3-en-1-ol benzoate, (Z)- 1572 tr 0.5 ± 0.1 1.0 ± 0.1
65 Actinolide, dihydro- 1530 2.0 ± 0.3 2.8 ± 0.3 2.3 ± 0.1
NI: non-identified; AI: arithmetic index of [START_REF] Adams | Identification of Essential Oil Components by Gas Chromatography/Mass Spectrometry[END_REF] calculated with the formula of [START_REF] Van Den Dool | A generalization of the retention index system including linear temperature programmed gas-liquid partition chromatography[END_REF]; tr: trace.
Acknowledgments
The authors gratefully acknowledge F. Torre for statistical analysis, R. Zergane of Beni Slimane and people of Laghouat and Bechar forestry conservation for their help in plant collection, and A. Tonetto for Scanning Electron Micrographs. The French and Algerian Inter-university Cooperation is also gratefully acknowledged for funding this work. |
01764851 | en | [
"sdu.ocean"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01764851/file/Costa_etal_PLoSoneREVISED3_CalculationBetweennessMarineConnectivity.pdf | Andrea Costa
email: andrea.costa@pusan.ac.kr
Anne A Petrenko
Katell Guizien
Andrea M Doglioli
On the Calculation of Betweenness Centrality in Marine Connectivity Studies Using Transfer Probabilities
Betweenness has been used in a number of marine studies to identify portions of sea that sustain the connectivity of whole marine networks. Herein we highlight the need of methodological exactness in the calculation of betweenness when graph theory is applied to marine connectivity studies based on transfer probabilities. We show the inconsistency in calculating betweeness directly from transfer probabilities and propose a new metric for the node-to-node distance that solves it. Our argumentation is illustrated by both simple theoretical examples and the analysis of a literature data set.
Introduction
In the last decade, graph theory has increasingly been used in ecology and conservation studies [START_REF] Moilanen | On the limitations of graph-theoretic connectivity in spatial ecology and conservation[END_REF] and particularly in marine connectivity studies (e.g., [START_REF] Treml | Modeling population connectivity by ocean currents, a graph theoretic approach for marine conservation[END_REF] [3] [4] [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF]).
Graphs are a mathematical representation of a network of entities (called nodes) linked by pairwise relationships (called edges). Graph theory is a set of mathematical results that permit to calculate different measures to identify nodes, or set of nodes, that play specific roles in a graph (e.g., [START_REF] Bondy | Graph theory with applications[END_REF]). Graph theory application to the study of marine connectivity typically consists in the representation of portions of sea as nodes. Then, the edges between these nodes represent transfer probabilities between these portions of sea.
Transfer probabilities estimate the physical dispersion of propagula [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] [START_REF] Berline | A connectivity-based ecoregionalization of the Mediterranean Sea[END_REF] [10] [START_REF] Jonsson | How to select networks of marine protected areas for multiple species with different dispersal strategies[END_REF], nutrients or pollutants [START_REF] Doglioli | Development of a numerical model to study the dispersion of wastes coming from a marine fish farm in the Ligurian Sea (Western Mediterranean)[END_REF], particulate matter [START_REF] Mansui | Modelling the transport and accumulation of floating marine debris in the Mediterranean basin[END_REF], or other particles either passive or interacting with the environment (see [START_REF] Ghezzo | Connectivity in three European coastal lagoons[END_REF] [START_REF] Bacher | Probabilistic approach of water residence time and connectivity using Markov chains with application to tidal embayments[END_REF] and references therein). As a result, graph theory already proved valuable in the identification of hydrodynamical provinces [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF], genetic stepping stones [START_REF] Rozenfeld | Network analysis identifies weak and strong links in a metapopulation system[END_REF], genetic communities [START_REF] Kininmonth | Determining the community structure of the coral Seriatopora hystrix from hydrodynamic and genetic networks[END_REF], sub-populations [START_REF] Jacobi | Identification of subpopulations from connectivity matrices[END_REF], and in assessing Marine Protected Areas connectivity [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF].
In many marine connectivity studies, it is of interest to identify specific portions of sea where a relevant amount of the transfer across a graph passes through. A well-known graph theory measure is frequently used for this purpose: betweenness centrality. In the literature, high values of this measure are commonly assumed to identify nodes sustaining the connectivity of the whole network. For this reason a high value of betweenness has been used in the framework of marine connectivity to identify
Our scope in the present letter is to highlight some errors that can occur in implementing graph theory analysis. Especially we focus on the definition of edges when one is interested in calculating the betweenness centrality and other related measures.
We also point out two papers in the literature in which this methodological inconsistency can be found: [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF].
In Materials and Methods we introduce the essential graph theory concepts for our scope. In Results we present our argument on the base of the analysis of a literature data set. In the last Section we draw our conclusions.
Materials and Methods
A simple graph G is a couple of sets (V, E), where V is the set of nodes and E is the set of edges. The set V represents the collection of objects under study that are pair-wise linked by an edge a ij , with (i,j) ∈ V , representing a relation of interest between two of these objects. If a ij = a ji , ∀(i,j) ∈ V , the graph is said to be 'undirected', otherwise it is 'directed'. The second case is the one we deal with when studying marine connectivity, where the edges' weights represent the transfer probabilities between two zones of sea (e.g., [3] [4] [5] [START_REF] Rossi | Hydrodynamic provinces and oceanic connectivity from a transport network help desining marine reserves[END_REF]).
If more than one edge in each direction between two nodes is allowed, the graph is called multigraph. The number of edges between each pair of nodes (i,j) is then called multiplicity of the edge linking i and j.
The in-degree of a node k, deg + (k), is the sum of all the edges that arrive in k:
deg + (k) = i a ik . The out-degree of a node k, deg -(k)
, is the sum of all the edges that start from k: deg -(k) = j a kj . The total degree of a node k, deg(k), is the sum of the in-degree and out-degree of k: deg(k
) = deg + (k) + deg -(k).
In a graph, there can be multiple ways (called paths) to go from a node i to a node j passing by other nodes. The weight of a path is the sum of the weights of the edges composing the path itself. In general, it is of interest to know the shortest or fastest path σ ij between two nodes, i.e. the one with the lowest weight. But it is even more instructive to know which nodes participate to the greater numbers of shortest paths. In fact, this permits to measure the influence of a given node over the spread of information through a network. This measure is called betweenness value of a node in the graph. The betweenness value of a node k, BC(k), is defined as the fraction of shortest paths existing in the graph, σ ij , with i = j, that effectively pass through k,
σ ij (k), with i = j = k: BC(k) = i =k =j σ ij (k) σ ij (1)
with (i,j,k) ∈ V . Note that the subscript i = k = j means that betweenness is not influenced by direct connections between the nodes. Betweenness is then normalized by the total number of possible connections in the graph once excluded node k:
(N -1)(N -2)
, where N is the number of nodes in the graph, so that 0 ≤ BC ≤ 1.
Although betweenness interpretation is seemingly straightforward, one must be careful in its calculation. In fact betweenness interpretation is sensitive to the node-to-node metric one chooses to use as edge weight. If, as frequently the case of the marine connectivity studies, one uses transfer probabilities as edge weight, betweenness loses its original meaning. Based on additional details -personally given by the authors PLOS 2/10 of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF]-on their methods, this was the case in those studies. In those cases, edge weight would decrease when probability decreases and the shortest paths would be the sum of edges with lowest value of transfer probability. As a consequence, high betweenness would be associated to the nodes through which a high number of improbable paths pass through. Exactly the opposite of betweenness original purpose.
Hence, defining betweenness using Equation 1(the case of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF]) leads to an inconsistency that affects the interpretation of betweenness values.
Alternative definitions of betweenness accounting for all the paths between two nodes and not just the most probable one have been proposed to analyze graphs in which the edge weight is a probability [START_REF] Newman | A measure of betweenness centrality based on random walks[END_REF] and avoid the above inconsistency.
Herein, we propose to solve the inconsistency when using the original betweenness definition of transfer probabilities by using a new metric for the edge weights instead of modifying the betweenness definition. The new metric transforms transfer probabilities a ij into a distance in order to conserve the original meaning of betweenness, by ensuring that a larger transfer probability between two nodes corresponds to a smaller node-to-node distance. Hence, the shortest path between two nodes effectively is the most probable one. Therefore, high betweenness is associated to the nodes through which a high number of probable paths pass through.
In the first place, in defining the new metric, we need to reverse the order of the probabilities in order to have higher values of the old metric a ij correspond to lower values of the new one. In the second place we also consider three other facts: (i) transfer probabilities a ij are commonly calculated with regards to the position of the particles only at the beginning and at the end of the advection period; (ii) the probability to go from i to j does not depend on the node the particle is coming from before arriving in i; and (iii) the calculation of the shortest paths implies the summation of a variable number of transfer probability values. Note that, as the a ij values are typically calculated on the base of the particles' positions at the beginning and at the end of a spawning period, we are dealing with paths whose values are calculated taking into account different numbers of generations. Therefore, the transfer probabilities between sites are independent from each other and should be multiplied by each other when calculating the value of a path. Nevertheless, the classical algorithms commonly used in graph theory analysis calculate the shortest paths as the summation of the edges composing them (e.g., the Dijkstra algorithm, [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] or the Brandes algorithm [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]).
Therefore, these algorithms, if directly applied to the probabilities at play here, are incompatible with their independence.
A possible workaround could be to not use the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] and use instead the 10 th algorithm proposed in [START_REF] Brandes | On variants of shortest-path betweenness centrality and their generic computation[END_REF]. Therein, the author suggests to define the betweenness of a simple graph via its interpretation as a multigraph. He then shows that the value of a path can be calculated as the product of the multiplicities of its edges. When the multiplicity of an edge is set equal to the weight of the corresponding edge in the simple graph, one can calculate the value of a path as the product of its edges' weights a ij . However, this algorithm selects the shortest path on the basis of the number of steps (or hop count) between a pair of nodes (Breadth-First Search algorithm [START_REF] Moore | The shortest path through a maze[END_REF]). This causes the algorithm to fail in identifying the shortest path in some cases. For example, in Fig 1 it would identify the path ACB (2 steps with total probability 1 × 10 -8 ) when, instead, the most probable path is ADEB (3 steps with total probability 1 × 10 -6 ). See Table 1 for more details.
However, by changing the metric used in the algorithms, it is possible to calculate the shortest path in a meaningful way with the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. In particular, we propose to define the weight of an edge between two nodes i and j as:
Fig 1.
Example of graph in which the 10 th algorithm in [START_REF] Brandes | On variants of shortest-path betweenness centrality and their generic computation[END_REF] would fail to identify the shortest path between A and B (ADEB) when using a ij as metric.
d ij = log 1 a ij ( 2
)
This definition is the composition of two functions: h(x) = 1/x and f (x) = log(x).
The use of h(x) allows one to reverse the ordering of the metric in order to make the most probable path the shortest. The use of f (x), thanks to the basic properties of logarithms, allows the use of classical shortest-path finding algorithms while dealing correctly with the independence of the connectivity values. In fact, we are de facto calculating the value of a path as the product of the values of its edges.
It is worth mentioning that the values d ij = ∞, coming from the values a ij = 0, do not influence the calculation of betweenness values via the Dijkstra and Brandes algorithms. Note that d ij is additive:
d il + d lj = log 1 a il •a lj = log 1 aij = d ij ,
for any (i,l,j) ∈ V thus being suitable to be used in conjunction with the algorithms proposed by [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. Also, note that both a ij and d ij are dimensionless. Equation 2 is the only metric that allows to consistently apply the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] to transfer probabilities. Other metrics would permit to make the weight decrease when probability increases: for example, 1
-a ij , 1/a ij , -a ij , log(1 -a ij ).
However, the first three ones do not permit to account for the independence of the transfer probabilities along a path. Furthermore, log(1 -a ij ) takes negative values as 0 ≤ a ij ≤ 1. Therefore, it cannot be used to calculate shortest paths because the algorithms in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF] would either endlessly go through a cycle (see Fig 2a and
Results
The consequences of the use of the raw transfer probability (a ij ) rather than the distance we propose (d ij ) are potentially radical. To show this, we used 20 connectivity matrices calculated for [START_REF] Guizien | Vulnerability of marine benthic metapopulations: implications of spatially structured connectivity for conservation practice[END_REF]. They were calculated from Lagrangian simulations using a Table 1. Paths and respective probabilities, weights and hop count for the graph in In particular matrix #1 was obtained after a period of reversed (eastward) circulation. Indeed, this case of circulation is less frequent than the westward circulation [START_REF] Petrenko | Barotropic eastward currents in the western Gulf of Lion, north-western Mediterranean Sea, during stratified conditions[END_REF]. Matrices #14, #10 and #13 correspond to a circulation pattern with an enhanced recirculation in the center of the gulf. Finally, matrices #2, #3, #5, #6, #8, #9, #14, #16, #18, #19, #20 correspond to a rather mixed circulation with no clear pattern. The proportions of particles coming from an origin node and arriving at a settlement node after 3, 4 and 5 weeks were weight-averaged to compute a connectivity PLOS 5/10
-a ij ) Figure 2a ADEDE. . . DEB → 0 → -∞ ACFB (1 × 10 -3 ) 3 = 1 × 10 -9 -3 × 10 -3 Figure 2b ADEFB (1 × 10 -3 ) 4 = 1 × 10 -12 -4 × 10 -3 ACB (1 × 10 -3 ) 2 = 1 × 10 -6 -2 × 10 -3
matrix for larvae with a competency period extending from 3 to 5 weeks. Furthermore, it is expected to have a positive correlation between the degree of a node and its betweenness (e.g., [START_REF] Valente | How correlated are network centrality measures?[END_REF] and [START_REF] Lee | Correlations among centrality measures in complex networks[END_REF]). However, we find that the betweenness values, calculated on the 20 connectivity matrices containing a ij , have an average correlation coefficient of -0.42 with the total degree, -0.42 with the in-degree, and -0.39 with the out-degree. Instead, betweenness calculated with the metric of Equation 2 has an average correlation coefficient of 0.48 with the total degree, 0.45 with the in-degree, and a not significant correlation with the out-degree (p-value > 0.05). Fig 4, betweenness values of the 32 nodes calculated using the two node-to-node distances a ij and log(1/a ij ) are drastically different between each other. Moreover, in 10 out of 20 connectivity matrices, the correlation between node ranking based on betweenness values with the two metrics were not significant. In the 10 cases it was (p-value < 0.05), the correlation coefficient was lower than 0.6 (data not shown). Such partial correlation is not unexpected as the betweenness of a node with a lot of connections could be similar when calculated with a ij or d ij if among these connections there are both very improbable and highly probable ones, like in node 21 in the present test case. Furthermore, it is noticeable that if one uses the a ij values (Fig 4a ), the betweenness values are much more variable than the ones obtained using d ij (Fig 4b). This is because, in the first case, the results depend on the most improbable connections that, in the ocean, are likely to be numerous and unsteady.
As an example, in
As we show in
Conclusion
We highlighted the need of methodological exactness inconsistency in the betweenness calculation when graph theory to marine transfer probabilities. Indeed, the inconsistency comes from the need to reverse the probability when calculating shortest paths. If this is not done, one considers the most improbable paths as the most probable ones. We showed the drastic consequences of this methodological error on the analysis of a published data set of connectivity matrices for the Gulf of Lion [START_REF] Guizien | Vulnerability of marine benthic metapopulations: implications of spatially structured connectivity for conservation practice[END_REF].
On the basis of our study, it may be possible that results in [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] and [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF] might also be affected. A re-analysis of [START_REF] Kininmonth | Graph theoretic topology of the Great but small Barrier Reef world[END_REF] would not affect the conclusions drawn by the authors about the small-world characteristics of the Great Barrier Reef as that is purely topological characteristics of a network. About [START_REF] Andrello | Low connectivity between Mediterranean Marine Protected Areas: a biophysical modeling approach for the dusky grouper: Epinephelus Marginatus[END_REF], according to Marco Andrello (personal communication), due to the particular topology of the network at study, which forces most of the paths -both probable or improbable-to follow the Mediterranean large-scale steady circulation (e.g., [START_REF] Pinardi | The physical and ecological structure and variability of shelf areas in the Mediterranean Sea[END_REF]). As a consequence, sites along the prevalent circulation pathways have high betweenness when using either a ij or d ij . However, betweenness values of sites influenced by smaller-scale circulation will significantly vary according to the way of calculating betweenness.
To solve the highlighted inconsistency, we proposed the use of a node-to-node metric that provides a meaningful way to calculate shortest paths and -as a consequencebetweenness, when relying on transfer probabilities issued from Lagrangian simulations and the algorithm proposed in [START_REF] Dijkstra | A note on two problems in connexion with graphs[END_REF] and [START_REF] Brandes | A faster algorithm for betweenness centrality[END_REF]. The new metric permits to reverse the probability and to calculate the value of a path as the product of its edges and to account for the independence of the transfer probabilities. Moreover, this metric is not limited to the calculation of betweenness alone but is also valid for the calculation of every graph theory measure related to the concept of shortest paths: for example, shortest cycles, closeness centrality, global and local efficiency, and average path length [START_REF] Costa | Tuning the interpretation of graph theory measures in analyzing marine larval connectivity[END_REF].
1 ADEB ( 1 × 10 - 2 ) 3 = 1 × 10 Fig 2 .
1110231102 Figure 1
Fig 3 we show the representation of the graph corresponding to matrix #7. The arrows starting from a node i and ending in a node j represent the direction of the element a ij (in Fig 3a) or d ij (in Fig 3b). The arrows' color code represents the magnitude of the edges' weights. The nodes' color code indicates the betweenness values calculated using the metric a ij (in Fig 3a) or d ij (in Fig 3b). In Fig 3a the edges corresponding to the lower 5% of the weights a ij are represented. These are the larval transfers that, though improbable, are the most influential in determining high betweenness values when using a ij as metric. In Fig 3b the edges corresponding to the lower 5% of the weights d ij are represented. These are the most probable larval transfers that -correctly-are the most influential in determining high betweenness values when using d ij as metric. While in Fig 3a the nodes with highest betweenness are the nodes 31 (0.26), 27 (0.25) and 2 (0.21); in Fig 3b the nodes with highest betweenness are nodes 21 (0.33), 20 (0.03) and 29 (0.03).
Fig 3 .Fig 4 .
34 Fig 3. Representation of matrix #7 from [21], the right side colorbars indicate the metric values. a) Results obtained by using a ij as edge weight, b) results obtained by using d ij as edge weight. In a) the lowest 5% of edges weights are represented. In b) the lowest 5% of edges weights are represented. Note the change in the colorbars' ranges.
Table 2 )
2 or choose the path with more edges (see Fig 2b and Table2), hence arbitrarily lowering the value of the paths between two nodes.
Acknowledgments
The authors thank Dr. S.J. Kininmonth and Dr. M. Andrello for kindly providing the code they used for the betweenness calculation in their studies. The first author especially thanks Dr. R. Puzis for helpful conversations. Andrea Costa was financed by a MENRT Ph.D. grant. The research leading to these results has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under Grant Agreement No. 287844 for the project 'Towards COast to COast NETworks of marine protected areas (from the shore to the high and deep sea), coupled with sea-based wind energy potential' (COCONET). The project leading to this publication has received funding from European FEDER Fund under project 1166-39417. |
01764854 | en | [
"sdv.mp.bac",
"sdv.bbm.bc",
"sde"
] | 2024/03/05 22:32:13 | 2017 | https://amu.hal.science/hal-01764854/file/Favre_et_al_2017.pdf | Laurie Favre
Annick Ortalo-Magne
Stéphane Greff
Thierry Pérez
Olivier P Thomas
Jean-Charles Martin
Gérald Culioli
Discrimination of Four Marine Biofilm-Forming Bacteria by LC-MS Metabolomics and Influence of Culture Parameters
Keywords: marine bacteria, biofilms, metabolomics, liquid chromatography-mass spectrometry, MS/MS networking, ornithine lipids, polyamines
Most marine bacteria can form biofilms, and they are the main components of biofilms observed on marine surfaces. Biofilms constitute a widespread life strategy, as growing in such structures offers many important biological benefits. The molecular compounds expressed in biofilms and, more generally, the metabolomes of marine bacteria remain poorly studied. In this context, a nontargeted LC-MS metabolomics approach of marine biofilm-forming bacterial strains was developed. Four marine bacteria, Persicivirga (Nonlabens) mediterranea TC4 and TC7, Pseudoalteromonas lipolytica TC8, and Shewanella sp. TC11, were used as model organisms. The main objective was to search for some strainspecific bacterial metabolites and to determine how culture parameters (culture medium, growth phase, and mode of culture) may affect the cellular metabolism of each strain and thus the global interstrain metabolic discrimination. LC-MS profiling and statistical partial least-squares discriminant analyses showed that the four strains could be differentiated at the species level whatever the medium, the growth phase, or the mode of culture (planktonic vs biofilm). A MS/MS molecular network was subsequently built and allowed the identification of putative bacterial biomarkers. TC8 was discriminated by a series of ornithine lipids, while the P. mediterranea strains produced hydroxylated ornithine and glycine lipids. Among the P. mediterranea strains, TC7 extracts were distinguished by the occurrence of diamine derivatives, such as putrescine amides.
■ INTRODUCTION
All biotic or abiotic surfaces immersed in the marine environment are subjected to colonization pressure by a great diversity of micro-and macroorganisms (e.g., bacteria, diatoms, micro-and macroalgae, invertebrate larvae). This so-called "marine biofouling" generates serious economic issues for endusers of the marine environment. Biofouling drastically alters boat hulls, pipelines, aquaculture, and port structures, [START_REF] Yebra | Antifouling technology-Past, present and future steps towards efficient and environmentally friendly antifouling coatings[END_REF] thus affecting fisheries and the maritime industry by reducing vessel efficiency and increasing maintenance costs. [START_REF] Schultz | Economic impact of biofouling on a naval surface ship[END_REF] Among fouling organisms, bacteria are well known for their significant pioneer role in the process of colonization. [START_REF] Railkin | Marine Biofouling: Colonization Processes and Defenses[END_REF] They are commonly considered as the first colonizers of immersed surfaces. They organize themselves in communities called biofilms, forming complex structures of cells embedded in an exopolymeric matrix. [START_REF] Stoodley | Biofilms as complex differentiated communities[END_REF] Thousands of bacterial strains are present in marine biofilms, and bacterial cell concentration is higher than in planktonic samples isolated from the same environment. Such an organization confers a special functioning to the prokaryotic community: [START_REF] Flemming | Biofilms: an emergent form of bacterial life[END_REF] (i) it provides a better resistance to exogenous stresses, (ii) it allows nutrients to accumulate at the surface, and (iii) it can constitute a protective system to predation. [START_REF] Matz | Marine biofilm bacteria evade eukaryotic predation by targeted chemical defense[END_REF] Moreover, the composition of the community and its biochemical production have been shown to impact the settlement of other organisms and thus the maturation of the biofouling. [START_REF] Lau | Roles of bacterial community composition in biofilms as a mediator for larval settlement of three marine invertebrates[END_REF][START_REF] Dobretsov | Facilitation and inhibition of larval attachment of the bryozoan Bugula neritina in association with monospecies and multi-species biofilms[END_REF] From a chemical point of view, marine bacteria are known to produce a wide array of specialized metabolites exhibiting various biological activities. [START_REF] Blunt | Marine natural products[END_REF] Among them, a vast number of compounds serve as protectors in highly competitive environments, and others have specific roles in physiology, communication, or constitute adaptive responses to environmental changes. [START_REF] Dang | Microbial surface colonization and biofilm development in marine environments[END_REF] Therefore, obtaining broad information on the metabolic status of bacterial strains isolated from marine biofilms and correlating it with external parameters is of high interest. Such knowledge constitutes a prerequisite for further studies on the overall understanding of these complex ecological systems.
With the recent developments of metabolomics, it is now possible to obtain a snapshot view, as complete and accurate as possible, of a large set of metabolites (i.e., small organic molecules with M w < 1500 Da) in a biological sample reflecting the metabolic state of the cells as a result of the specificity of their genetic background and an environmental context. Nuclear magnetic resonance spectroscopy or hyphenated techniques such as liquid chromatography (LC) or gas chromatography (GC) coupled to mass spectrometry are commonly used as analytical tools for metabolomics studies. Liquid chromatography-mass spectrometry (LC-MS) has the advantage to analyze a large pool of metabolites with high sensitivity and resolution, even without derivatization. [START_REF] Zhou | LC-MS-based metabolomics[END_REF] In comparative experiments, metabolomics applied to bacteria allows the identification of biomarkers able to differentiate strains. To date, a limited number of metabolomics studies have focused on marine bacteria, and only few of them are related to the effects of physiological and culture parameters on bacterial metabolism. [START_REF] Romano | Exo-metabolome of Pseudovibrio sp. FO-BEG1 analyzed by ultra-high resolution mass spectrometry and the effect of phosphate limitation[END_REF][START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF][START_REF] Takahashi | Metabolomics approach for determining growth-specific metabolites based on Fourier transform ion cyclotron resonance mass spectrometry[END_REF][START_REF] Brito-Echeverría | Response to adverse conditions in two strains of the extremely halophilic species Salinibacter ruber[END_REF] With the main objectives to search for some strain-specific bacterial metabolites and to assess the influence of culture parameters on the strain metabolism, this study intended: (i) to evaluate the LC-MS-based discrimination between the metabolome of four marine biofilm-forming bacterial strains depending on different extraction solvents and culture conditions and (ii) to putatively annotate the main discriminating compounds (Figure 1).
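As an illustration only, the multivariate workflow described here (unsupervised PCA followed by supervised PLS-DA) can be sketched as below; the feature table, file name and class labels are hypothetical, PLS-DA is emulated with a PLS regression on one-hot class membership, and this is not the software actually used in the study.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical LC-MS feature table: one row per extract, aligned m/z-retention time features,
# plus a 'strain' label (e.g. TC4, TC7, TC8, TC11)
df = pd.read_csv("lcms_feature_table.csv")        # placeholder file name
X = StandardScaler().fit_transform(df.drop(columns="strain"))
y = pd.get_dummies(df["strain"]).to_numpy()       # one-hot class matrix for PLS-DA

pca_scores = PCA(n_components=2).fit_transform(X)       # unsupervised overview
plsda = PLSRegression(n_components=2).fit(X, y)         # supervised PLS-DA
plsda_scores = plsda.transform(X)                       # scores used for class-separation plots
```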
The four marine strains studied herein are all Gram-negative bacteria isolated from natural biofilms: Persicivirga (Nonlabens) mediterranea TC4 and TC7 belong to the phylum Bacteroidetes, while Pseudoalteromonas lipolytica TC8 and Shewanella sp. TC11 are γ-proteobacteria. They were selected on the basis of their biofilm-forming capability when cultivated in vitro and their ease of growth. [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] The first two strains (TC4 and TC7) were specifically chosen to evaluate the discriminative potential of our metabolomics approach, as they belong to the same species. Because marine bacteria require high-salt culture media, and to obtain an efficient extraction of intracellular metabolites, liquid-liquid extraction with medium-polarity solvents was specifically selected. For the analytical conditions, C18 reversed-phase HPLC columns are widely used for LC-MS profiling. [START_REF] Kuehnbaum | New advances in separation science for metabolomics: Resolving chemical diversity in a post-genomic era[END_REF] Such a separation process provides satisfactory retention of analytes of medium to low polarity but does not properly retain more polar compounds. In the present study, analyses were performed on a phenyl-hexyl stationary phase to detect a large array of bacterial metabolites. The recently developed core-shell stationary phase was applied here for improved efficiency. [START_REF] Gritti | Performance of columns packed with the new shell Kinetex-C 18 particles in gradient elution chromatography[END_REF] For MS detection, although high-resolution mass spectrometry (HRMS) is mainly used in metabolomics, a low-resolution mass spectrometer (LRMS) was first chosen to assess the potential of the metabolomics approach to discriminate between the bacteria. A cross-platform comparison including HRMS was subsequently undertaken to assess the robustness of the method. Finally, HRMS and MS/MS data were used for the metabolite annotation. The resulting data were analyzed by multivariate statistical methods, including principal component analysis (PCA) and supervised partial least-squares discriminant analysis (PLS-DA). Unsupervised PCA models were first used to evaluate the separation between bacterial strains, while supervised PLS-DA models allowed us to increase the separation between sample classes and to extract information on discriminating metabolites.
■ EXPERIMENTAL SECTION
Reagents
Ethyl acetate (EtOAc), methanol (MeOH), and dichloromethane (DCM) used for the extraction procedures were purchased from VWR (Fontenay-sous-Bois, France). LC-MS analyses were performed using LC-MS-grade acetonitrile (ACN) and MeOH (VWR). Milli-Q water was generated by the Millipore ultrapure water system (Waters-Millipore, Milford, MA). Formic acid of mass spectrometry grade (99%) was obtained from Sigma-Aldrich (St. Quentin-Fallavier, France).
Bacterial Strains, Culture Conditions, and Metabolite Extraction
Persicivirga (Nonlabens) mediterranea TC4 and TC7 (TC for Toulon Collection), Pseudoalteromonas lipolytica TC8, and Shewanella sp. TC11 strains were isolated from marine biofilms harvested on artificial surfaces immersed in the Mediterranean Sea (Bay of Toulon, France, 43°06′23″ N, 5°57′17″ E). [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] All strains were stored at -80 °C in 50% glycerol medium until use and were grown in Väätänen nine salt solution (VNSS) at 20 °C on a rotator/shaker (120 rpm) to obtain synchronized bacteria in postexponential phase. A cell suspension was used as starting inoculum to prepare planktonic and sessile cultures. Depending on the experiment, these cultures were performed in two different nutrient media: VNSS or marine broth (MB) (BD, Franklin Lakes, NJ), always at the same temperature of 20 °C. In the case of planktonic cultures, precultured bacteria (10 mL) were suspended in culture medium (50 mL) at 0.1 absorbance unit (OD600) and placed in 250 mL Erlenmeyer flasks. Strains were grown in a rotary shaker (120 rpm). Medium turbidity was measured at 600 nm (Genesys 20 spectrophotometer, Thermo Fisher Scientific, Waltham, MA) every hour for the determination of growth curves before metabolite extraction. Cultures were then extracted according to the OD600 value correlated to the growth curve. For sessile conditions, precultured planktonic cells were suspended in culture medium (10 mL) at 0.1 absorbance unit (OD600) in Petri dishes. After 24 or 48 h of incubation, the culture medium was removed and biofilms were physically recovered by scraping. The resulting mixture was then extracted.
Metabolite extractions were performed with EtOAc, cold MeOH, or a mixture of cold MeOH/DCM (1:1 v/v). For each extraction, 100 mL of solvent was added to the bacterial culture. The resulting mixture was shaken for 1 min and then sonicated for 30 min at 20 °C. For samples extracted with EtOAc, the organic phase was recovered and concentrated to dryness under reduced pressure. Samples extracted with MeOH or MeOH/DCM were dried in vacuo. Dried extracts were then dissolved in MeOH at a concentration of 15 mg/mL. Samples were transferred to 2 mL HPLC vials and stored at -80 °C until analysis.
For all experiments, bacterial cultures, extraction, and sample preparation were carried out by the same operator.
Metabolic Fingerprinting by LC-MS
LC-ESI-IT-MS Analyses. The bacterial extracts were analyzed on an Elite LaChrom (VWR-Hitachi, Fontenay-sous-Bois, France) chromatographic system coupled to an ion trap mass spectrometer (Esquire 6000, Bruker Daltonics, Wissembourg, France). Chromatographic separation was achieved on an analytical core-shell reversed-phase column (150 × 3 mm, 2.6 μm, Kinetex Phenyl-Hexyl, Phenomenex, Le Pecq, France) equipped with a guard cartridge (4 × 3 mm, SecurityGuard Ultra Phenomenex) and maintained at 30 °C. The injected sample volume was 5 μL. The mobile phase consisted of water (A) and ACN (B), both containing 0.1% of formic acid. The flow rate was 0.5 mL/min. The elution gradient started at 20% B for 5 min, increased to 100% B over 20 min, and was held at 100% B for 10 min; it then returned to 20% B in 0.1 min and was maintained for 9.9 min. The electrospray interface (ESI) parameters were set as follows: nebulizing gas (N2) pressure at 40 psi, drying gas (N2) flow at 8 L/min, drying temperature at 350 °C, and capillary voltage at 4000 V. Mass spectra were acquired in the full-scan range m/z 50 to 1200 in positive mode, as this mode provided a higher number of metabolite features after filtering and a better discrimination between clusters in the multivariate statistics. Data were handled with Data Analysis (version 4.3, Bruker Daltonics).
UPLC-ESI-QToF-MS Analyses. The UPLC-MS instrumentation consisted of a Dionex Ultimate 3000 Rapid Separation (Thermo Fisher Scientific) chromatographic system coupled to a QToF Impact II mass spectrometer (Bruker Daltonics). The analyses were performed using an analytical core-shell reversed-phase column (150 × 2.1 mm, 1.7 μm, Kinetex Phenyl-Hexyl with a SecurityGuard cartridge, Phenomenex) with a column temperature of 40 °C and a flow rate of 0.5 mL/min. The injection volume was 5 μL. Mobile phases were water (A) and ACN (B), each containing 0.1% (v/v) of formic acid. The elution gradient (A:B, v/v) was as follows: 80:20 from 0 to 1 min, to 0:100 in 7 min and held for 4 min, then back to 80:20 at 11.5 min and held for 2 min. The capillary voltage was set at 4500 V (positive mode), and the nebulizing parameters were set as follows: nebulizing gas (N2) pressure at 0.4 bar, drying gas (N2) flow at 4 L/min, and drying temperature at 180 °C. Mass spectra were recorded from m/z 50 to 1200 at a mass resolving power of 25 000 (full width at half-maximum, fwhm, at m/z 200) and a frequency of 2 Hz. Tandem mass spectrometry analyses were performed by collision-induced dissociation (CID) with a collision energy of 25 eV. A solution of formate/acetate forming clusters was automatically injected before each sample for internal mass calibration, and the mass spectrometer was calibrated with the same solution before each sequence of samples. Data handling was done using Data Analysis (version 4.3).
Quality Control. For each sequence, a pool sample was prepared by combining 100 μL of each bacterial extract. The pool sample was divided into several HPLC vials that were used as quality-control samples (QCs). Samples of each condition were randomly injected to avoid any possible time-dependent changes in LC-MS chromatographic fingerprints. To ensure analytical repeatability, the QCs were injected at the beginning, at the end, and every four samples within each sequence run. Cell-free control samples (media blanks) were prepared in the same way as cultures with cells, and they were randomly injected within the sequence. These blanks allowed the subsequent subtraction of contaminants or components coming from the growth media. Moreover, to assess sample carry-over of the analytical process, three solvent blanks were injected for each set of experiments before the first QC and after the last QC.
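To make the injection scheme concrete, the short R sketch below builds one possible run order following these rules; the sample names, the number of samples, and the random seed are illustrative placeholders rather than the actual sequence used.

## Illustrative construction of an injection sequence: randomized sample/blank order,
## a QC injection at the start, after every four injections, and at the end.
## Names, counts, and the seed are placeholders.
set.seed(42)
samples <- paste0("S", sprintf("%02d", 1:24))   # bacterial extracts
blanks  <- paste0("MediumBlank", 1:3)           # cell-free control samples

run <- sample(c(samples, blanks))               # random injection order
sequence <- "QC"
for (i in seq_along(run)) {
  sequence <- c(sequence, run[i])
  if (i %% 4 == 0) sequence <- c(sequence, "QC")
}
if (tail(sequence, 1) != "QC") sequence <- c(sequence, "QC")
## Solvent blanks would additionally be placed before the first QC and after the last one.
print(sequence)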
Data Preprocessing and Filtering. LC-MS raw data were converted into netCDF files with a script developed within the Data Analysis software and preprocessed with the XCMS software (version 1.38.0) under the R 3.1.0 environment. Peak picking was performed with the "matchedFilter" method for HPLC-IT-MS data and the "centWave" method for UPLC-QToF-MS data. The other XCMS parameters were as follows: "snthresh" = 5, retention time correction with the obiwarp method ("profstep" = 0.1), peak grouping with "bw" = 5 for ion trap data, "bw" = 2 for QToF data and "mzwidth" = 0.5 for ion trap data, and "mzwidth" = 0.015 for QToF data, gap filling with default parameters. [START_REF] Patti | Meta-analysis of untargeted metabolomic data from multiple profiling experiments[END_REF] To ensure data quality and remove redundant signals, three successive filtering steps were applied to the preprocessed data using an in-house R script. The first was based on the signal/noise (S/N) ratio to remove signals observed in medium blanks (S/N set at 10 for features matching between samples and medium blanks). The second removed signals based on the coefficient of variation (CV) of the variable intensities in the QCs (cutoff set at 20%). A third filtering step was applied using the autocorrelation coefficient (cutoff set at 80%) between variables sharing the same retention time in the extract samples.
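A minimal R sketch of this preprocessing and filtering chain is given below for the ion-trap data, assuming the classic xcmsSet interface of XCMS 1.x. File paths, the sample-naming patterns used to flag blanks and QCs, and the 2 s retention-time bin of the redundancy filter are illustrative assumptions, not the original in-house script.

## Sketch of the XCMS 1.x workflow described above (HPLC-IT-MS parameters).
## Paths and sample-name patterns are placeholders; thresholds follow the text.
library(xcms)

files <- list.files("netCDF", pattern = "\\.cdf$", full.names = TRUE)

xset <- xcmsSet(files, method = "matchedFilter", snthresh = 5)  # peak picking
xset <- retcor(xset, method = "obiwarp", profStep = 0.1)        # RT alignment
xset <- group(xset, bw = 5, mzwid = 0.5)                        # grouping (ion-trap values)
xset <- fillPeaks(xset)                                         # gap filling (defaults)

peaks <- peakTable(xset)              # data.frame: mz, rt, ..., one column per sample
samp  <- sampnames(xset)
ints  <- as.matrix(peaks[, samp])

is_blank  <- grepl("blank", samp, ignore.case = TRUE)   # medium blanks
is_qc     <- grepl("qc",    samp, ignore.case = TRUE)   # pooled QC injections
is_sample <- !(is_blank | is_qc)

## Filter 1: signal at least 10-fold higher in samples than in medium blanks
keep_sn <- rowMeans(ints[, is_sample, drop = FALSE]) >
           10 * rowMeans(ints[, is_blank, drop = FALSE])

## Filter 2: CV of the intensities in the QCs below 20%
cv_qc   <- apply(ints[, is_qc, drop = FALSE], 1, function(x) sd(x) / mean(x))
keep_cv <- cv_qc < 0.20

idx <- which(keep_sn & keep_cv)

## Filter 3: among features sharing a retention time (binned here to 2 s),
## keep one representative of each group correlated above r = 0.8
keep_ac <- unlist(lapply(split(idx, round(peaks$rt[idx] / 2)), function(i) {
  kept <- i[1]
  for (j in i[-1]) {
    r <- cor(ints[j, is_sample], t(ints[kept, is_sample, drop = FALSE]))
    if (all(abs(r) < 0.8)) kept <- c(kept, j)
  }
  kept
}))

final_features <- peaks[sort(keep_ac), c("mz", "rt", samp)]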
MS/MS Networking. The molecular network was generated on the Internet platform GNPS (http://gnps.ucsd.edu) from MS/MS spectra. Raw data were converted into .mzXML format with DataAnalysis. Data were filtered by removing MS/MS peaks within ±17 Da of the m/z of the precursor ions. Only the top six peaks were conserved within each 50 Da window. Data were clustered using MS-Cluster with a tolerance of 1 Da for precursor ions and of 0.5 Da for MS/MS fragment ions to create a consensus spectrum. Consensus spectra containing fewer than two spectra were eliminated. The resulting spectra were compared with those of the GNPS spectral bank. The molecular network was then generated and previewed directly on GNPS online. Data were imported and treated offline with Cytoscape (version 3.4.0). MS/MS spectra were represented as nodes, and two nodes were connected when their spectral similarity was high (cosine score (CS) > 0.65) and at least four common ions were detected. The thickness of the connections was proportional to the CS.
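For illustration, the similarity measure underlying the network edges can be sketched as a cosine score between two peak lists. The toy R function below is a simplified version under stated assumptions (square-root intensity weighting, a fixed fragment tolerance, no precursor-shifted matching as in GNPS's modified cosine, and no one-to-one peak assignment); it is not the GNPS scoring code itself.

## Simplified cosine score between two MS/MS spectra (illustrative only).
## Each spectrum is a two-column matrix: m/z and intensity.
cosine_score <- function(spec1, spec2, tol = 0.5) {
  i1 <- sqrt(spec1[, 2])                       # square-root intensity weighting
  i2 <- sqrt(spec2[, 2])
  # pairs of peaks matching within the fragment tolerance (a peak may be counted
  # more than once here; real implementations use a one-to-one greedy assignment)
  m <- which(abs(outer(spec1[, 1], spec2[, 1], "-")) <= tol, arr.ind = TRUE)
  if (nrow(m) == 0) return(list(score = 0, n_matched = 0))
  score <- sum(i1[m[, 1]] * i2[m[, 2]]) / (sqrt(sum(i1^2)) * sqrt(sum(i2^2)))
  list(score = score, n_matched = nrow(m))
}

## Two consensus spectra would be linked in the network when score > 0.65
## and n_matched >= 4, following the parameters given above.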
Annotation of Biomarkers. Variables of importance were identified from the multivariate statistical analyses (see the Statistical Analyses section). They were then subjected to annotation by searching the most probable molecular formula with the "smartformula" package of DataAnalysis and by analyzing their accurate masses and their fragmentation patterns in comparison with the literature data. Other data available online in KEGG (www.genome.jp/kegg), PubChem (https://pubchem.ncbi.nlm.nih.gov), ChemSpider (www.chemspider.com), Lipid Maps (http://www.lipidmaps.org), Metlin (https://metlin.scripps.edu/), and GNPS (http://gnps.ucsd.edu) were also used for complementary information.
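The accurate-mass check behind this annotation step can be reproduced in a few lines of R: the theoretical m/z of a candidate ion formula is computed from monoisotopic atomic masses and compared with the measured value in ppm. This is an illustrative helper, not the "smartformula" routine; the example uses VIP no. 1 from Table 2, and the small difference from the tabulated error likely reflects sign and rounding conventions.

## Theoretical m/z of a singly charged cation and ppm deviation from a measured value.
## Monoisotopic atomic masses; the electron mass is subtracted for the cation.
monoiso <- c(C = 12.0000000, H = 1.0078250319, N = 14.0030740052, O = 15.9949146221)
e_mass  <- 0.00054858

ion_mz <- function(counts, charge = 1) {
  (sum(monoiso[names(counts)] * counts) - charge * e_mass) / charge
}
ppm_error <- function(measured, theoretical) (measured - theoretical) / theoretical * 1e6

## VIP no. 1: measured m/z 677.5806, candidate ion formula C41H77N2O5 (OL C18:1, C18:1)
theo <- ion_mz(c(C = 41, H = 77, N = 2, O = 5))
round(theo, 4)                        # ~677.5827
round(ppm_error(677.5806, theo), 1)   # about -3 ppm; Table 2 reports 3.8 ppm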
Statistical Analyses. Simca 13.0.3 software (Umetrics, Umeå, Sweden) was used for all multivariate data analyses and modeling. Data were log10-transformed and mean-centered. Models were built on principal component analysis (PCA) or on partial least-squares discriminant analysis (PLS-DA). PLS-DA allowed the determination of discriminating metabolites using the variable importance on projection (VIP). The VIP score indicates the contribution of a variable to the discrimination between all of the classes of samples. Mathematically, these scores are calculated for each variable as a weighted sum of squares of the PLS weights. The mean VIP value is one, and usually VIP values over one are considered significant. A high score is in agreement with a strong discriminatory ability and thus constitutes a criterion for the selection of biomarkers. All of the models evaluated were tested for overfitting with permutation tests and cross-validation analysis of variance (CV-ANOVA). The descriptive performance of the models was determined by the R²X (cumulative) (perfect model: R²X(cum) = 1) and R²Y (cumulative) (perfect model: R²Y(cum) = 1) values, while their prediction performance was measured by the Q² (cumulative) (perfect model: Q²(cum) = 1) and p (CV-ANOVA) (perfect model: p = 0) values and a permutation test (n = 150). The permuted models should not be able to predict the classes: the R² and Q² values at the Y-axis intercept must be lower than the R² and Q² values of the nonpermuted model.
Data Visualization. The heatmap representation was obtained with the PermutMatrix software. [START_REF] Caraux | PermutMatrix: a graphical environment to arrange gene expression profiles in optimal linear order[END_REF] Dissimilarity was calculated with the squared Pearson correlation distance, while Ward's minimum variance method was used to obtain the hierarchical clustering.
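As a transparent stand-in for the Simca workflow, the sketch below reproduces the same steps in R with the Bioconductor package ropls (log10 transform, mean-centering, PLS-DA with a permutation test, VIP extraction) and the heatmap distance used by PermutMatrix. The simulated matrix, the +1 offset before the log transform, and the parameter choices are illustrative assumptions; the published models were built in Simca, and the validated ones grouped TC4 and TC7 into a single class.

## PLS-DA / VIP sketch with the 'ropls' package, on simulated stand-in data
## (12 samples = 3 replicates x 4 strains, 155 features, log-normal intensities).
library(ropls)
set.seed(1)

X <- matrix(rlnorm(12 * 155, meanlog = 8), nrow = 12)
strain <- factor(rep(c("TC4", "TC7", "TC8", "TC11"), each = 3))

X <- log10(X + 1)    # log10 transform (offset added here only to avoid log10(0))

## PLS-DA, mean-centering only, 150 permutations as in the text
plsda <- opls(X, strain, predI = 2, scaleC = "center", permI = 150)
getSummaryDF(plsda)            # R2X(cum), R2Y(cum), Q2(cum), permutation p-values, ...

vip <- getVipVn(plsda)         # variable importance on projection
head(sort(vip, decreasing = TRUE), 17)   # e.g., the 17 features with the highest VIP

## Heatmap-style clustering with the squared Pearson correlation distance and
## Ward's minimum variance criterion (as done in PermutMatrix)
d_feat <- as.dist(1 - cor(X)^2)          # distance between features
hc     <- hclust(d_feat, method = "ward.D2")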
■ RESULTS AND DISCUSSION
Selection of the Metabolite Extraction Method (Experiment #1)
Metabolomics allows the analysis of many metabolites simultaneously detected in a biological sample. To ensure that the resulting metabolomic profiles characterize the widest range of metabolites of high relevance, the metabolite extraction protocol must be nonselective and highly reproducible. [START_REF] Kido Soule | Environmental metabolomics: Analytical strategies[END_REF] Therefore, the biological material must be studied after simple preparation steps to prevent any potential degradation or loss of metabolites. In microbial metabolomics, the first step of the sample preparation corresponds to quenching to avoid alterations of the intracellular metabolome, which is known for its fast turnover. [START_REF] De Jonge | Optimization of cold methanol quenching for quantitative metabolomics of Penicillium chrysogenum[END_REF] In this study, the first objective was therefore to develop an extraction protocol for LC-MS-based metabolome profiling (exo- and endometabolomes) of marine bacteria, applicable to any strain cultivated either planktonically or under sessile conditions and easily transposable to complex natural biofilms. For this purpose, liquid-liquid extraction was selected because this process allows quenching and extraction of the bacterial culture in a single step. The second issue is linked to the high salinity of the extracts, which implies a desalting step. For these reasons, EtOAc, MeOH, and MeOH/DCM (1:1) were selected as extraction solvents for this experiment.
For the first experiment, common culture conditions were selected. The four bacterial strains were grown planktonically in single-species cultures (in VNSS medium), each in biological triplicate, and extracted once they reached their stationary phase (at t = t5, Supporting Information Figure S1) and before the decline phase. For each sample, the whole culture was extracted using a predefined set of experimental conditions and analyzed by LC-(+)-ESI-IT-MS. The optimal solvent was selected based on (i) the number of features detected on LC-MS profiles after filtering, (ii) the ability to discriminate the bacterial strains by multivariate analyses, and (iii) the ease of implementation of the experimental protocol.
In such a rich culture medium, data filtering constitutes a key requirement because bacterial metabolites are masked by components of the culture broth (e.g., peptone, starch, yeast extract). Moreover, such a process was essential to reduce false positives and redundant data for the subsequent statistical analyses. First, for each solvent, treatment of all chromatograms with the XCMS package gave a primary data set with 3190 ± 109 metabolite features (Supporting Information Figure S2). A primary filtering between variables present in both bacterial extracts and blank samples removed >80% of the detected features, which were attributed to culture medium components, solvent contamination, or instrumentation noise. After two additional filtering steps, one based on the CV of the variable intensities and the other on the autocorrelation across samples between variables sharing the same retention time, a final list of 155 ± 22 m/z features was obtained.
The resulting data showed a different number of detected metabolite features depending on the extraction solvent (Supporting Information Figure S3): MeOH/DCM yielded a higher number of metabolites for TC4 and TC11, while EtOAc was the most effective extraction solvent for TC7 and TC8. This result was expected because previous work showed that the extraction method has a strong effect on the detected microbial metabolome, with the physicochemical properties of the extraction solvent being one of the main factors behind the observed discrepancies. [START_REF] Duportet | The biological interpretation of metabolomic data can be misled by the extraction method used[END_REF][START_REF] Shin | Evaluation of sampling and extraction methodologies for the global metabolic profiling of Saccharophagus degradans[END_REF] The extraction parameters had an effect not only on the number of detected features but also on their concentration. [START_REF] Canelas | Quantitative evaluation of intracellular metabolite extraction techniques for yeast metabolomics[END_REF] The LC-MS data sets were analyzed by PCA and PLS-DA to evaluate the potential of the method to discriminate among the bacterial strains according to the extraction solvent system. PCA revealed a clear interstrain separation on the score plots (Figure 2a and Supporting Information Figure S4a,b). For each solvent, samples from TC4 and TC7, on the one hand, and from TC8 and TC11, on the other hand, were clearly distinguished on the first component, which accounted for 56-72% of the total variance. The second component, with 12 to 29%, allowed the distinction between TC8 and TC11 and, to a lesser extent, between TC4 and TC7.
To find discriminating biomarkers, PLS-DA was also applied to the LC-MS data (one model per solvent condition and one class per strain). For each extraction solvent, the resulting score plots showed three distinct clusters corresponding to the two P. mediterranea strains (TC4 and TC7), TC8, and TC11, respectively (data not shown). The PLS-DA four-class models gave R²Xcum and R²Ycum values of 0.951-0.966 and 0.985-0.997, respectively, showing the consistency of the obtained data, and Q²Ycum values of 0.820-0.967, estimating their predictive ability (Table 1). Nevertheless, the p values (>0.05) obtained from the cross-validation indicated that the bacterial samples were not significantly separated according to the strain, while the R intercept values (>0.4) obtained from a permutation test (n = 150) showed overfitting of the models. Taking these results into account, three-class PLS-DA models grouping the TC4 and TC7 strains into a single class were constructed. The resulting R²Xcum (0.886-0.930), R²Ycum (0.974-0.991), and Q²Ycum (0.935-0.956) values attested to the quality of these improved models. In addition, a permutation test (n = 150) allowed the successful validation of the PLS-DA models: R intercept values (<0.4, except for MeOH/DCM) and Q intercept values (<-0.2) indicated that no overfitting was observed, while p values (<0.05) showed that the three groups fitted by the models were significantly different (Table 1, Supporting Information Figure S4c-e). Samples extracted with MeOH and EtOAc yielded higher-quality and more robust PLS-DA models for the strain discrimination than those obtained after extraction with MeOH/DCM. For all of these reasons, EtOAc was selected for metabolome extraction in the subsequent experiments. These results were in accordance with the use of a similar protocol in recent studies dealing with the chemical profiling of marine bacteria. [START_REF] Lu | A highresolution LC-MS-based secondary metabolite fingerprint database of marine bacteria[END_REF][START_REF] Bose | LC-MS-based metabolomics study of marine bacterial secondary metabolite and antibiotic production in Salinispora arenicola[END_REF][START_REF] Vynne | chemical profiling, and 16S rRNA-based phylogeny of Pseudoalteromonas strains collected on a global research cruise[END_REF] Three culture parameters (culture medium, phase of growth, and mode of culture) were then analyzed sequentially to evaluate their respective impact on the interstrain metabolic discrimination.
Impact of the Culture Medium (Experiment #2)
The influence of the culture medium on the metabolome of marine bacteria has been poorly investigated. [START_REF] Brito-Echeverría | Response to adverse conditions in two strains of the extremely halophilic species Salinibacter ruber[END_REF][START_REF] Canelas | Quantitative evaluation of intracellular metabolite extraction techniques for yeast metabolomics[END_REF][START_REF] Bose | LC-MS-based metabolomics study of marine bacterial secondary metabolite and antibiotic production in Salinispora arenicola[END_REF][START_REF] Djinni | Metabolite profile of marine-derived endophytic Streptomyces sundarbansensis WR1L1S8 by liquid chromatography-mass spectrometry and evaluation of culture conditions on antibacterial activity and mycelial growth[END_REF] To ascertain that the chemical discrimination of the bacterial strains studied was not medium-dependent, a second culture broth was used. This second set of experiments was designed as follows: the four bacterial strains were cultivated in parallel in VNSS and MB media (each in biological triplicates) until they reached the stationary phase (t = t5, Supporting Information Figure S1), and their organic extracts (extraction with EtOAc) were analyzed by LC-MS. Like VNSS, MB is a salt-rich medium widely used for marine bacterial cultures. The number of metabolites detected and the chemical discrimination between the bacterial strains were then determined for this set of samples. First, the number of metabolites obtained after the three filtering steps was similar for both culture media (Supporting Information Figure S5), and all of the detected m/z features were common to both media. This result showed the robustness of the filtering method because the chemical compositions of both culture media are highly different (Supporting Information Table S1). Indeed, MB contains more salts, and, in terms of organic components, higher amounts of yeast extract and peptone, while starch and glucose are specific ingredients of VNSS. A small difference was observed on the PCA score plots obtained with samples from a single strain cultured in these two different media, but the low number of samples did not allow the validation of the corresponding PLS-DA models (data not shown). Whatever the medium, an obvious clustering pattern for each of the four strains was observed on the PCA score plots when all of the samples were considered (Supporting Information Figure S6a). Four- and three-class PLS-DA models were constructed to evaluate the discrimination capacity of the method. As demonstrated for VNSS cultures (Supporting Information Figure S4c), the PLS-DA three-class model obtained with the bacteria grown in MB (Table 1 and Supporting Information Figure S6b) also showed a clear separation between the groups (p < 0.05), and it was statistically validated by a permutation test. When the whole data set (VNSS and MB) was analyzed, the resulting PLS-DA models (Table 1), which passed cross-validation and the permutation test, indicated that the bacterial samples could be efficiently discriminated at the species level.
On the basis of the PLS-DA score plot, TC8 was the bacterial strain that showed the highest metabolic variation between the two culture media (Figure 2b). It is now well established that changing bacterial culture media not only affects the metabolome quantitatively but also has a significant impact on the expression of distinct biosynthetic pathways. Such an approach, named OSMAC (One Strain-MAny Compounds), has been used in recent years to increase the number of secondary metabolites produced by a single microbial strain. [START_REF] Bode | Big effects from small changes: Possible ways to explore Nature's chemical diversity[END_REF] In the present study, some intrastrain differences were observed between cultures in the two media, but they did not prevent a clear interstrain discrimination. Therefore, these results showed that this method allowed the discrimination between samples of the three marine biofilm-forming bacterial species, even when they were grown in distinct media.
Impact of the Growth Phase (Experiment #3)
Growth of bacteria in suspension, as planktonic microorganisms, follows a typical curve with a sequence of a lag phase, an exponential phase (multiplication of cells), a stationary phase (stabilization), and a decline phase. To date, only a few studies have focused on differences in the metabolome of microorganisms along their growth phase, [START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF][START_REF] Drapal | The application of metabolite profiling to Mycobacterium spp.: Determination of metabolite changes associated with growth[END_REF][START_REF] Jin | Metabolomics-based component profiling of Halomonas sp. KM-1 during different growth phases in poly(3-hydroxybutyrate) production[END_REF] and most of these analyses were performed by NMR and GC-MS. These different culture phases are related to the rapid bacterial response to environmental changes and thus to different metabolic profiles. To determine the impact of this biological variation on the discrimination between bacterial cell samples, the metabolome content was analyzed (in biological triplicates) for the four strains grown in VNSS at five time points of their growth curve: two during the exponential phase (t1 and t2), one at the end of the exponential phase (t3), and two during the stationary phase (t4 and t5) (Supporting Information Figure S1). All aliquots were treated with the selected extraction protocol, followed by analysis with LC-MS. The data obtained for the four strains were preprocessed, filtered, and then analyzed by PCA and PLS-DA. As shown in Figure S1, the strains grew differently, as indicated by OD600 changes: the exponential phase of all of the strains started directly after inoculation and lasted 3 h for TC8 and TC11 and 8 h for TC4 and TC7, respectively. After filtering, the data showed that, among all of the strains, TC8 produced the highest number of metabolites in all phases, while TC7 was always the least productive. The number of metabolites detected was higher for TC8 during the stationary phase, while it was slightly higher for TC11 during the exponential phase, and no significant differences were noticed for TC4 and for TC7 (Supporting Information Figure S7). For each strain, most of the detected m/z signals were found in both growth phases, but more than two-thirds of them were present in higher amounts during the stationary phase.
To determine if this method was also able to differentiate between the phases of growth, PLS-DA models were then constructed for each bacterial strain with the LC-MS profiles (Supporting Information Table S2). For TC8, bacterial cultures were clearly discriminated according to their growth phase, as shown on the corresponding PLS-DA score plot (Supporting Information Figure S8a). The constructed PLS-DA model was well fitted to the experimental data: it consisted of four components, and the first two explained almost 75% of the variation. The first dimension showed a significant separation between cultures harvested at the beginning and the middle of the exponential phase (t1 and t2), the end of this same growth phase (t3), and the stationary phase (t4 and t5), while the second one emphasized the discrimination of cultures collected at the end of the exponential phase (t3) from the others. With a less pronounced separation between samples of the exponential phase, a similar pattern was observed for TC4 and, to a lesser extent, for TC11 (Supporting Information Figure S8b,c). For TC7, no PLS-DA model highlighted significant differences between samples according to the growth phase (data not shown). Finally, the discrimination between all of the bacterial species harvested during the two growth phases (five time points) was analyzed. The resulting PLS-DA model explained >78% of the variance of the data set (Table 1 and Figure 2c). Here again, the metabolome of the TC8 strain showed the greatest variability, but a clear discrimination between the metabolomes of the four bacterial strains was observed whatever the phase of growth. In accordance with their taxonomic proximity, the two P. mediterranea strains (TC4 and TC7) were closely related.
It is now well-established that drastic changes may occur in bacterial metabolic production at the transition from the exponential phase to the stationary phase. This phenomenon is often due to lowered protein biosynthesis, which induces the biosynthetic machinery to switch from a metabolic production mainly dedicated to cell growth during the exponential phase toward an alternative metabolism producing a new set of compounds during the stationary phase. [START_REF] Alam | Metabolic modeling and analysis of the metabolic switch in Streptomyces coelicolor[END_REF][START_REF] Herbst | Label-free quantification reveals major proteomic changes in Pseudomonas putida F1 during the exponential growth phase[END_REF] However, in contrast with well-studied model microorganisms, several marine bacteria undergo a stand-by step between these two growth phases. [START_REF] Sowell | Proteomic analysis of stationary phase in the marine bacterium "Candidatus Pelagibacter ubique[END_REF][START_REF] Gade | Proteomic analysis of carbohydrate catabolism and regulation in the marine bacterium Rhodopirellula baltica[END_REF] For each strain, our results showed that most of the changes between the growth phases corresponded to the upregulation of a large part of the metabolites during the stationary phase. This trend was already observed in previous studies, [START_REF] Drapal | The application of metabolite profiling to Mycobacterium spp.: Determination of metabolite changes associated with growth[END_REF][START_REF] Jin | Metabolomics-based component profiling of Halomonas sp. KM-1 during different growth phases in poly(3-hydroxybutyrate) production[END_REF] but opposite results have also been described, depending on the metabolome coverage or the microorganism studied. [START_REF] Zech | Growth phasedependent global protein and metabolite profiles of Phaeobacter gallaeciensis strain DSM 17395, a member of the marine Roseobacterclade[END_REF] These literature data were also in accordance with the different behavior of each of the four strains when their metabolomes (as covered by the extraction and analytical procedures used here) were investigated at different time points of the growth curve. The chemical discrimination of these bacteria was thus not dependent on their growth phase. Overall, because the chemical diversity seemed to be higher during the stationary phase, this growth phase was chosen for the rest of the study.
Impact of the Mode of Culture (Experiment #4)
The bacterial strains were isolated from marine biofilms developed on artificial surfaces immersed in situ. [START_REF] Brian-Jaisson | Identification of bacterial strains isolated from the Mediterranean sea exhibiting different abilities of biofilm formation[END_REF] In addition to their ease of growth in vitro, these strains were chosen for their propensity to form biofilms. The intrinsic differences between the metabolisms of planktonic and biofilm cells and the impact of these two modes of culture on the interstrain discrimination were analyzed by LC-MS profiling of three of the bacteria. Indeed, due to the chemical similarity of both P. mediterranea strains, only TC4 was used for this experiment. For this purpose, these strains were cultured in triplicate in planktonic (at five points of their growth curve) and biofilm modes (at two culture times: 24 and 48 h). This difference in growth time between the two culture modes was due to the slow-growing nature of biofilms. To compare the two modes of culture accurately, the development of biofilms was performed under static conditions and in the same medium as that used for planktonic growth (VNSS). For each strain, PLS-DA models were constructed and showed a clear discrimination between samples according to their culture mode, with total variances ranging from 52 to 59% (Supporting Information Figure S9). PLS-DA models with good-quality parameters were obtained, and validation values indicated that they could be regarded as predictive (Supporting Information Table S2). Moreover, a similar number of m/z features upregulated specifically in one of the two culture modes was detected for each strain. When dealing with the interstrain discrimination for bacteria cultured as biofilms, the three strains were clearly separated on the PCA score plot, and the two main components accounted for 59% of the total variance (Supporting Information Figure S10a). The corresponding PLS-DA model showed a similar trend and gave good results, indicating that this model could distinguish the three strains (Table 1 and Supporting Information Figure S10b). When the full data set (biofilms and planktonic cultures) was analyzed, the same pattern was further noticed, with one cluster per strain on the PCA score plot (Supporting Information Figure S11). A PLS-DA model was built and demonstrated again, after validation, a good separation among all of the strains (Table 1 and Figure 2d).
To date, the few metabolomics studies undertaken on biofilms were mostly based on NMR, which is limited by intrinsic low sensitivity. [START_REF] Yeom | 1 H NMR-based metabolite profiling of planktonic and biofilm cells in Acinetobacter baumannii 1656-2[END_REF][START_REF] Ammons | Quantitative NMR metabolite profiling of methicillin-resistant and methicillin-susceptible Staphylococcus aureus discriminates between biofilm and planktonic phenotypes[END_REF] More specifically, only two studies have used a metabolomic approach with the aim of analyzing marine bacterial biofilms. [START_REF] Chandramouli | Proteomic and metabolomic profiles of marine Vibrio sp. 010 in response to an antifoulant challenge[END_REF][START_REF] Chavez-Dozal | Proteomic and metabolomic profiles demonstrate variation among free-living and symbiotic vibrio f ischeri biofilms[END_REF] It is now well-established that in many aquatic environments most of the bacteria are organized in biofilms, and this living mode is significantly different from its planktonic counterpart. [START_REF] Hall-Stoodley | Bacterial biofilms: from the Natural environment to infectious diseases[END_REF] Deep modifications occur in bacterial cells at various levels (e.g., gene expression, proteome, transcriptome) during the transition from free-living planktonic to biofilm states. [START_REF] Sauer | The genomics and proteomics of biofilm formation[END_REF] Biofilm cells have traditionally been described as metabolically dormant with reduced growth and metabolic activity. Additionally, cells in biofilms show a higher tolerance to stress (e.g., chemical agents, competition, and predation). On the basis of these data, a liquid culture alone does not allow a full understanding of the ecological behavior or the realistic response to a specific challenge in the case of benthic marine bacteria. For the TC4, TC8, and TC11 strains, PLS-DA models allowed an unambiguous distinction between biofilm and planktonic samples at different ages. As described in the literature for other bacteria, these results agreed with a significant metabolic shift between the two modes of culture whatever the strain and the culture time. Considering the biofilm samples and the whole set of samples, our results demonstrated that chemical profiling by LC-MS followed by PLS-DA analysis led to a clear discrimination between the three strains. Therefore, the interstrain metabolic differences are more significant than the intrastrain differences inherent to the culture mode.
Comparison of the Analytical Platforms and Identification of Putative Biomarkers
The data collected during the first part of this study did not allow the annotation of the biomarkers. In this last part, both accurate MS and MS/MS data were obtained from a limited pool of samples (four strains, EtOAc as extraction solvent, planktonic cultures in VNSS until the stationary phase) with a UPLC-ESI-QToF instrument. After extraction and filtering, the data obtained from the LC-HRMS profiles were subjected to chemometric analyses. The resulting PCA and PLS-DA score plots (Supporting Information Figure S12 and Figure 3a) were compared with those obtained with the same set of samples on the previous LC-LRMS platform (Figure 2a and Supporting Information Figure S4c). For both platforms, the PCA score plots exhibited a clear discrimination between the four strains, with a separation of the two pairs TC4/TC7 and TC8/TC11 on the first component and between the strains of each pair on the second component. The main difference lies in the total variance accounted for by the first two components, which was lower in the case of the HRMS platform (64% instead of 85% for the LRMS platform). These results demonstrate the robustness of the method.
The subsequent step was to build a supervised discrimination model using PLS-DA for the UPLC-QToF data. As already described for the HPLC-IT-MS data, the resulting three-class PLS-DA model led to a proper differentiation of the three bacterial groups (Table 1). Moreover, despite the different chromatographic conditions (HPLC vs UPLC) and mass spectrometry instruments (ESI-IT vs ESI-QToF), the two platforms gave similar results, and the same conclusion was reached with other sets of samples. In addition, the samples used for the study of the impact of the growth phase and the mode of culture on the TC8 strain were also analyzed on the HRMS platform, and the resulting PLS-DA model was similar to those obtained on the LC-LRMS platform (Supporting Information Figures S8a and S13).
In a second step, the aim was to identify putative biomarkers for each bacterial strain. Metabolome annotation is often considered a bottleneck in metabolomics data analysis, and it is even more challenging for poorly studied species. For this reason, a molecular network was constructed based on the MS/MS data (Figure 4). This analysis has the main advantage of organizing mass spectra by fragmentation similarity, making it easier to annotate compounds of the same chemical family. [START_REF] Watrous | Mass spectral molecular networking of living microbial colonies[END_REF] The molecular network constructed with a set of data including all of the strains (EtOAc as extraction solvent, planktonic cultures in VNSS until the stationary phase) highlighted several clusters. At the same time, the most discriminating m/z features in the PLS-DA model (Figure 3b,c) were selected based on their VIP score, which resulted in 17 compounds with a VIP value equal to or higher than 3 (Table 2). The molecular formula of each VIP was proposed based on accurate mass measurement, true isotopic pattern, and fragmentation analysis. A detailed analysis of the VIPs and the molecular network revealed that most of these discriminating metabolites constitute cluster A (Figure 4). These chemical compounds were specific to TC8, on the one hand, and to TC4 and TC7, on the other hand. Interestingly, all of these specific compounds showed a similar fragmentation pattern with a characteristic fragment ion at m/z 115. A bibliographic review allowed us to propose ornithine-containing lipids (OLs) as good candidates for this chemical group. OLs are widespread among Gram-negative bacteria, more rarely found in Gram-positive ones, and absent in eukaryotes and archaea. [START_REF] Moore | Elucidation and identification of amino acid containing membrane lipids using liquid chromatography/highresolution mass spectrometry[END_REF] These membrane lipids contain an ornithine headgroup linked to a 3-hydroxy fatty acid via its α-amino moiety and a second fatty acid chain (also called "piggyback" fatty acid) esterified to the hydroxyl group of the first fatty acid. In some bacteria the ester-linked fatty acid can be hydroxylated, usually at the C-2 position. [START_REF] Geiger | Amino acid-containing membrane lipids in bacteria[END_REF] OLs show a specific MS fragmentation pattern used in this study for their identification. Characteristic multistage MS fragmentation patterns include the sequential loss of H2O (from the ornithine part), the piggyback acyl chain, and the amide-linked fatty acid. [START_REF] Zhang | Characterization of ornithine and glutamine lipids extracted from cell membranes of Rhodobacter sphaeroides[END_REF] This characteristic mode of fragmentation leads to headgroup fragment ions at m/z 159
(C6H11N2O3), 141 (C6H9N2O2), 133 (C5H13N2O2), 115 (C5H11N2O2), and 70 (C4H8N). On that basis, the HRMS/MS fragmentation of VIP no. 1 (m/z 677) is proposed in Figure 5, and the same pattern was observed for the other OLs (Table 2). In Gram-negative bacteria, membranes are mainly composed of polar lipids, frequently phospholipids such as phosphatidylethanolamine (PE). In this work, this type of lipid was detected in the four strains (cluster E, Figure 4), but several studies have shown that under phosphorus starvation, which is common in marine environments, the production of nonphosphorus polar lipids such as OLs may increase significantly. [START_REF] Yao | Heterotrophic bacteria from an extremely phosphate-poor lake have conditionally reduced phosphorus demand and utilize diverse sources of phosphorus[END_REF][START_REF] Sandoval-Calderoń | Plasticity of Streptomyces coelicolor membrane composition under different growth conditions and during development[END_REF] Moreover, because of their zwitterionic character, OLs have been speculated to play a crucial role in the membrane stability of Gram-negative bacteria and, more broadly, in the adaptation of the membrane in response to changes in environmental conditions. Under the culture conditions used in this study, OLs were produced by three of the strains but not by Shewanella sp. TC11. In the same way, components of cluster B specifically produced by the Bacteroidetes strains (TC4 and TC7) were identified as hydroxylated OLs (HOLs). These compounds showed the same MS fragmentation pattern as their nonhydroxylated analogs, while a supplementary loss of H2O was observed at the beginning of the MS fragmentation pathway. HOLs have been described as metabolites specifically produced by bacteria under stress (e.g., temperature,49 pH50): the occurrence of an additional hydroxyl group seems to be involved in membrane stability via an increase in strong lateral interactions between their components. [START_REF] Nikaido | Molecular basis of bacterial outer membrane permeability revisited[END_REF] HOLs were mainly described in α-, β-, and γ-proteobacteria and Bacteroidetes. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] In our study, HOLs were only detected in the LC-MS profiles of the Bacteroidetes strains (TC4 and TC7) but not in those of the γ-proteobacteria (TC8 and TC11). Concerning the position of the additional hydroxyl group in these derivatives, the absence of characteristic fragment ions for ornithine headgroup hydroxylation [START_REF] Moore | Elucidation and identification of amino acid containing membrane lipids using liquid chromatography/highresolution mass spectrometry[END_REF] indicated that this group was linked to one of the two fatty acids (at the two-position). This structural feature was in agreement with the fact that hydroxylation of the ornithine headgroup in HOLs has only been observed in α-proteobacteria and not in Bacteroidetes. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] TC4 and TC7 were also clearly discriminated from the other strains through another class of metabolites putatively identified on the basis of their HRMS/MS data as glycine lipids (GLs) and close derivatives, namely, methylglycine or alanine lipids (cluster C, Figure 4). These compounds are structurally similar to OLs, the main difference being the replacement of the ornithine unit by a glycine one.
They showed a similar fragmentation sequence and were specifically characterized by headgroup fragment ions at m/z 76 (C2H6NO2) for GLs and m/z 90 (C3H8NO2) for methylglycine or alanine lipids. This last class of compounds needs to be further confirmed by purification and full structure characterization. From a chemotaxonomic point of view, GLs are valuable compounds because they have only been described
from Bacteroidetes and thus seem to be biomarkers of this bacterial group. [START_REF] Sohlenkamp | Bacterial membrane lipids: Diversity in structures and pathways[END_REF] Finally, when considering the discrimination between the two P. mediterranea strains, TC7 specifically produced a variety of lipids tentatively assigned as N-acyl diamines by HRMS/MS (cluster D, Figure 4). More precisely, a fragmentation pattern common to most of the compounds of cluster D showed the occurrence of a diamine backbone with an amide-linked fatty acid and yielded fragment ions at m/z 89 (C4H13N2) and 72 (C4H10N) characteristic of the putrescine headgroup. [START_REF] Voynikov | Hydroxycinnamic acid amide profile of Solanum schimperianum Hochst by UPLC-HRMS[END_REF] Several other members of this cluster showed similar fragmentation pathways but with fragment ions in accordance with slight variations of the chemical structure of the headgroup (N-methylation, hydroxylation). In the case of TC7, compounds with a hydroxylated headgroup were specifically overexpressed. According to literature data, polyamines are commonly found in most living cells, and, among this chemical family, putrescine constitutes one of the simplest members. [START_REF] Michael | Polyamines in eukaryotes, bacteria, and archaea[END_REF] This diamine is widespread among bacteria and is involved in a large number of biological functions. [START_REF] Miller-Fleming | Remaining mysteries of molecular biology: The role of polyamines in the cell[END_REF] Interestingly, putrescine can be formed either directly from ornithine (ornithine decarboxylase) or indirectly from arginine (arginine decarboxylase) via agmatine (agmatine deiminase). Taking into account the few studies on MS fragmentation of natural N-acyl diamines and the absence of commercially available standards, further structural investigations are required to fully characterize this class of bacterial biomarkers and to establish a possible biosynthetic link with OLs. Finally, some other specific clusters stood out in the molecular network, but the corresponding compounds could not be affiliated with a known chemical family. Conversely, molecular components of the nondiscriminative cluster F were putatively identified as cyclic dipeptides. This type of compound exhibits a wide range of biological functions, and cyclic dipeptides are involved in chemical signaling in Gram-negative bacteria with a potential role in interkingdom communication. [START_REF] Ryan | Diffusible signals and interspecies communication in bacteria[END_REF][START_REF] Ortiz-Castro | Transkingdom signaling based on bacterial cyclodipeptides with auxin activity in plants[END_REF][START_REF] Holden | Quorum-sensing cross talk: Isolation and chemical characterization of cyclic dipeptides from Pseudomonas aeruginosa and other Gram-negative bacteria[END_REF]
■ CONCLUSIONS
We described a metabolomics approach applied to the assessment of the effects of several culture parameters, such as culture medium, growth phase, or mode of culture, on the metabolic discrimination between four marine biofilm-forming bacteria. The developed method, based on a simple extraction protocol, could differentiate bacterial strains cultured in organic-rich media. Depending on the culture parameters, some significant intrastrain metabolic changes were observed, but overall these metabolome variations were always less pronounced than the interstrain differences. Finally, several classes of biomarkers were putatively identified via HRMS/MS analysis and molecular networking. Under the culture conditions used (not phosphate-limited), OLs were thus identified as specifically produced by three of the bacteria, while HOLs and GLs were only detected in the two Bacteroidetes strains.
Our study provides evidence that such an analytical protocol is useful to explore more deeply the metabolome of marine bacteria under various culture conditions, including cultures in organic-rich media and biofilms. This efficient process gives information on the metabolome of marine bacterial strains that is complementary to the data provided by genomic, transcriptomic, and proteomic analyses of the regulatory and metabolic pathways of marine bacteria involved in biofilms. A broader coverage of the biofilm metabolome will also require the examination of polar extracts, even if high salt contents drastically limit the analysis of polar compounds in marine bacterial cultures and environmental biofilm samples.
As an important result, bacterial acyl amino acids, and more broadly membrane lipids, can be used as efficient biomarkers not only for chemotaxonomy but also directly for studies directed toward bacterial stress response. Indeed, a targeted analysis of GLs would be efficient to estimate the occurrence of Bacteroidetes in complex natural biofilms, while OLs and HOLs would be valuable molecular tools to evaluate the response of bacteria to specific environmental conditions.
Moreover, to get closer to real environmental conditions, future work in this specific field of research should address more ecologically relevant questions. To this end, metabolomics studies involving multispecies cocultures or bacterial cultures supplemented with signal compounds (e.g., N-acyl-homoserine lactones, diketopiperazines) may be considered in the future and linked to similar data obtained from natural biofilms.
■ SUPPORTING INFORMATION
Figure S1. Growth stages of the four bacterial strains. Figure S2. Number of m/z features detected for the four bacteria depending on the extraction solvent. Figure S3. Venn diagrams showing unique and shared metabolites for the four bacterial strains in each extraction condition. Figure S4. PCA and PLS-DA score plots of the four bacterial strains in each extraction condition. Figure S5. Number of m/z features detected for the four bacterial strains depending on the culture media. Figure S6. PCA score plots of the four bacterial strains cultured in two media and PLS-DA score plots of the four bacterial strains cultured in MB. Figure S7. Number of m/z features detected for each bacterium at five time points of the growth curve. Figure S8. PLS-DA score plots of TC8, TC4, and TC11 at five time points of their growth curve. Figure S9. PLS-DA score plots of TC7, TC8, and TC11 cultured in planktonic and biofilm modes. Figure S10. PCA and PLS-DA score plots of TC7, TC8, and TC11 cultured in biofilms. Figure S11. PCA score plots of TC7, TC8, and TC11 cultured in biofilms and planktonic conditions. Figure S12. PCA score plots (LC-HRMS) of the four bacterial strains. Figure S13. PLS-DA score plots (LC-HRMS) of TC8 at five time points of its growth curve. Table S1. Composition of the MB and VNSS culture media. Table S2. Parameters of the PLS-DA models used for the intrastrain discrimination depending on different culture conditions. (PDF)
■ AUTHOR INFORMATION
Corresponding Author *Tel: (+33) 4 94 14 29 35. E-mail: culioli@univ-tln.fr.
ORCID
Olivier P. Thomas: 0000-0002-5708-1409
Gérald Culioli: 0000-0001-5760-6394
Notes
The authors declare no competing financial interest.
■ ACKNOWLEDGMENTS
This study was partly funded by the French "Provence-Alpes-Côte d'Azur (PACA)" regional council (Ph.D. grant to L.F.). We are grateful to R. Gandolfo for the kind support of the French Mediterranean Marine Competitivity Centre (Pôle Mer Méditerranée) and thank J. C. Tabet, R. Lami, G. Genta-Jouve, J. F. Briand, and B. Misson for helpful discussions. LC-HRMS experiments were acquired on the regional platform MALLA-BAR (CNRS and PACA supports). Dedicated to Professor Louis Piovetti on the occasion of his 75th birthday.
■ ABBREVIATIONS
ACN, acetonitrile; CS, cosine score; CV-ANOVA, cross-validation analysis of variance; DCM, dichloromethane; EtOAc, ethyl acetate; GC, gas chromatography; GL, glycine lipid; GNPS, global natural product social molecular networking; HOL, hydroxylated ornithine lipid; HRMS, high resolution mass spectrometry; KEGG, Kyoto encyclopedia of genes and genomes; LC, liquid chromatography; LC-ESI-IT-MS, liquid chromatography-electrospray ionization ion trap tandem mass spectrometry; LC-MS, liquid chromatography-mass spectrometry; LRMS, low resolution mass spectrometry; MB, marine broth; MeOH, methanol; NMR, nuclear magnetic resonance; OL, ornithine lipid; PCA, principal component analysis; PE, phosphatidylethanolamine; PLS-DA, partial least-squares discriminant analysis; TC, Toulon collection; UPLC-ESI-QToF-MS, ultraperformance liquid chromatography-electrospray ionization quadrupole time-of-flight tandem mass spectrometry; VIP, variable importance on projection; VNSS, Väätänen nine salt solution
Figure 1. Overview of the experimental workflow used for the discrimination of the four marine bacterial strains and for the putative identification of relevant biomarkers.
Figure 2. (a) PCA score plot obtained from LC-LRMS profiles of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). (b) PLS-DA score plot obtained from LC-MS profiles of the four bacterial strains (extraction with EtOAc, stationary phase) cultured planktonically in two media (VNSS and MB). (c) PLS-DA score plot obtained from LC-MS profiles of the four bacterial strains (extraction with EtOAc, planktonic cultures in VNSS) at five time points of their growth curve. (d) PLS-DA score plot obtained from LC-MS profiles of three of the strains (extraction with EtOAc) cultured in biofilms (two time points; dark symbols) and planktonic conditions (five time points; colored symbols) in VNSS.
Table 1. Summary of the Parameters for the Assessment of the Quality and of the Validity of the PLS-DA Models Used for the Discrimination of the Bacterial Strains According to Different Culture or Analysis Conditions
Figure 3. (a) PLS-DA score plot obtained from LC-HRMS profiles of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). (b) PLS-DA loading plots with the most contributing mass peaks (VIPs) numbered from 1 to 17. (c) Heatmap of the 17 differential metabolites with VIP values ≥3.0 from the PLS-DA model. Detailed VIPs description is given in Table 2.
Figure 4. Molecular networks of HRMS fragmentation data obtained from cultures of the four bacterial strains (extraction with EtOAc, stationary phase, planktonic cultures in VNSS). AL: alanine lipid, GL: glycine lipid, HOL: hydroxylated ornithine lipid, HMPL: hydroxylated and methylated putrescine lipid, LOL: lyso-ornithine lipid, MPL: methylated putrescine lipid, OL: ornithine lipid, OPL: oxidized putrescine lipid, PE: phosphatidylethanolamine, PL: putrescine lipid.
Figure 5 .
5 Figure 5. (a) HRMS mass spectra of Pseudoalteromonas lipolytica TC8 ornithine lipid at m/z 677.5785 (VIP no. 1). (b) Proposed fragmentation of VIP no. 1. (The elemental composition of fragment ions is indicated and the corresponding theoretical value of m/z is given in parentheses.)
■ ASSOCIATED CONTENT
Supporting Information
Figure S1. Growth stages of the four bacterial strains. Figure S2. Number of m/z features detected for the four bacteria depending on the extraction solvent. Figure S3. Venn diagrams showing unique and shared metabolites for the four bacterial strains in each extraction condition. Figure S4. PCA and PLS-DA score plots of the four bacterial strains in each extraction condition. Figure S5. Number of m/z features detected for the four bacterial strains depending on the culture media. Figure S6. PCA score plots of the four bacterial strains cultured in two media and PLS-DA score plots of the four bacterial strains cultured in MB. Figure S7. Number of m/z features detected for each bacterium at five time points of the growth curve. Figure S8. PLS-DA score plots of TC8, TC4, and TC11 at five time points of their growth curve. Figure S9. PLS-DA score plots of TC7, TC8, and TC11 cultured in planktonic and biofilm modes. Figure S10. PCA and PLS-DA score plots of TC7, TC8, and TC11 cultured in biofilms. Figure S11. PCA score plots of TC7, TC8, and TC11 cultured in biofilms and planktonic conditions. Figure S12. PCA score plots (LC-HRMS) of the four bacterial strains. Figure S13. PLS-DA score plots (LC-HRMS) of TC8 at five time points of its growth curve. Table S1. Composition of the MB and VNSS culture media. Table S2. Parameters of the PLS-DA models used for the intrastrain discrimination depending on different culture conditions. (PDF)
Table 2. List of the Biomarkers (VIP Value ≥3) Identified by LC-HRMS for the Discrimination of the Four Bacterial Strains. (a) Constructor statistical match factor obtained by comparison of the theoretical and observed isotopic pattern. (b) Total intensity of the explained peaks with respect to the total intensity of all peaks in the fragment spectrum peak list. (c) HOL: hydroxylated ornithine lipid, LOL: lyso-ornithine lipid, OL: ornithine lipid, PE: phosphatidylethanolamine.
VIP number; m/z; RT (s); VIP value; formula; mass error (ppm); mσ (a); I expl (%) (b); MS/MS fragment ions (relative abundance in %); putative identification (c)
1 677.5806 438 4.0 C 41 H 77 N 2 O 5 3.8 4.9 63.5 659 (3) d , 413 (16) e , 395 (62) f , 377 (62) g , 159 (4) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (25) h OL (C18:1, C18:1)
2 625.5501 425 4.0 C 37 H 73 N 2 O 5 2.0 3.6 93.6 607 (2) d , 387 (11) e , 369 (41) f , 351 (44) g , 159 (6) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (24) h OL (C16:0, C16:0)
3 651.5653 429 3.7 C 39 H 75 N 2 O 5 2.7 5.9 61.7 633 (3) d , 413 (6) e , 395 (24) f , 377 (24) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (21) h OL (C18:1, C16:0)
4 611.5354 426 3.7 C 36 H 71 N 2 O 5 0.6 2.8 62.5 593 (2) d , 387 (12) e , 369 (43) f , 351 (48) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (25) h OL (C16:0, C15:0)
5 627.5304 408 3.5 C 36 H 71 N 2 O 6 0.4 2.4 81.7 609 (<1) d , 591 (<1) i , 387 (8) f , 369 (58) g , 351 (62) j , 159 (7) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (16) h HOL (C16:0, C15:0)
6 641.5462 417 3.4 C 37 H 73 N 2 O 6 0.2 1.7 80.5 623 (1) d , 605 (<1) i , 401 (7) f , 383 (50) g , 365 (52) j , 159 (6) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (15) h HOL (C17:0, C15:0)
7 597.5184 418 3.3 C 35 H 69 N 2 O 5 2.8 1.8 57.7 579 (2) d , 387 (2) e , 369 (8) f , 351 (10) g , 159 (4) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (36) h OL (C16:0, C14:0)
8 613.5149 398 3.3 C 35 H 69 N 2 O 6 0.2 1.3 78.9 595 (1) d , 577 (<1) i , 387 (5) f , 369 (33) g , 351 (37) j , 159 (7) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (22) h HOL (C16:0, C14:0)
9 623.5339 425 3.2 C 37 H 71 N 2 O 5 2.9 1.5 66.8 605 (2) d , 385 (3) e , 367 (12) f , 349 (12) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (24) h OL (C16:1, C16:0)
10 639.5299 403 3.0 C 37 H 71 N 2 O 6 1.2 10.5 81.0 621 (1) d , 603 (<1) i , 399 (<1) f , 381 (58) g , 363 (56) j , 159 (7) h , 141 (3) h , 133 (6) h , 115 (100) h , 70 (21) h HOL (C17:1, C15:0)
11 621.5182 409 3.0 C 37 H 69 N 2 O 5 3.0 8.6 61.0 603 (2) d , 385 (15) e , 367 (54) f , 349 (52) g , 159 (5) h , 141 (3) h , 133 (6) h , 115 (100) h , 70 (36) h OL (C16:1, C16:1)
12 387.3218 294 3.0 C 21 H 43 N 2 O 4 -0.3 12 103.6 369 (3) d , 351 (5) i , 159 (1) h , 141 (3) h , 133 (7) h , 115 (100) h , 70 (62) h LOL (C16:0)
13 653.5808 446 3.0 C 39 H 77 N 2 O 5 2.9 7.1 63.0 635 (2) d , 415 (8) e , 397 (28) f , 379 (29) g , 159 (5) h , 141 (3) h , 133 (5) h , 115 (100) h , 70 (35) h OL (C18:0, C16:0)
14 649.5498 430 3.0 C 39 H 73 N 2 O 5 2.5 2.6 66.0 631 (2) d , 413 (7) e , 395 (27) f , 377 (27) g , 159 (4) h , 141 (3) h , 133 (4) h , 115 (100) h , 70 (23) h OL (C18:1, C16:1)
15 440.2769 340 3.0 C 20 H 43 NO 7 P 0.2 15.2 76.6 299 (100) k PE (C15:0)
16 401.3373 305 3.0 C 22 H 41 N 2 O 4 0.1 16.0 81.5 383 (4) d , 365 (6) i , 159 (2) h , 141 (4) h , 133 (7) h , 115 (100) h , 70 (60) h LOL (C17:0)
17 413.5187 304 3.0 C 23 H 45 N 2 O 4 0.2 13.6 80.5 395 (4) d , 377 (6) i , 159 (2) h , 141 (4) h , 133 (10) h , 115 (100) h , 70 (51) h LOL (C18:1)
(d) [M + H - H2O]+. (e) [M + H - RCOH]+. (f) [M + H - H2O - RCOH]+. (g) [M + H - 2 H2O - RCOH]+. (h) Other typical OL ion fragments. (i) [M + H - 2 H2O]. (j) [M + H - 3 H2O - RCOH]+. (k) [M + H - C2H8NO4P]+.
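The neutral-loss assignments above (e.g., [M + H - H2O]+ and the ornithine head-group ions at m/z 133 and 115) can be checked with simple monoisotopic arithmetic. The short sketch below is illustrative only and is not part of the authors' workflow; the atomic masses are standard monoisotopic values and the example uses VIP no. 1 (m/z 677.5785) from Table 2.

```python
# Illustrative check of neutral-loss fragment masses (not part of the original study's pipeline).
# Monoisotopic atomic masses in Da.
H = 1.007825
C = 12.000000
N = 14.003074
O = 15.994915

def mass(c=0, h=0, n=0, o=0):
    """Monoisotopic mass of a CcHhNnOo fragment."""
    return c * C + h * H + n * N + o * O

H2O = mass(h=2, o=1)                          # ~18.0106
ornithine_MH = mass(c=5, h=12, n=2, o=2) + H  # protonated ornithine, ~133.098 (approximating [M+H]+ with +H)

precursor = 677.5785                          # [M+H]+ of OL (C18:1, C18:1), VIP no. 1

print(round(precursor - H2O, 4))              # ~659.568 -> matches the fragment reported at m/z 659
print(round(ornithine_MH, 4))                 # ~133.098 -> head-group ion reported at m/z 133
print(round(ornithine_MH - H2O, 4))           # ~115.087 -> base peak reported at m/z 115
```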
01681621 | en | [ "sde", "sdu.stu", "sdu.envi" ] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01681621/file/Geijzendorffer_2017_EnvSciPol_postprint.pdf | Ilse R Geijzendorffer
email: geijzendorffer@tourduvalat.org
Emmanuelle Cohen-Shacham
Anna F Cord
Wolfgang Cramer
Carlos Guerra
Berta Martín-López
Ecosystem Services in Global Sustainability Policies
Keywords: Aichi Targets, human well-being, indicators, monitoring, reporting, Sustainable Development Goals.
Introduction
Multiple international policy objectives aim to ensure human well-being and the sustainability of the planet, whether via sustainable development of society or via biodiversity conservation, e.g. the Sustainable Development Goals (SDGs) and the Convention on Biological Diversity (CBD) Aichi Targets. To evaluate progress made towards these objectives and to obtain information on the efficiency of implemented measures, effective monitoring schemes and trend assessments are required [START_REF] Hicks | Engage key social concepts for sustainability[END_REF]. Whereas the CBD has been reporting on progress towards its objectives in Global Outlooks since 2001, a first list of indicators has only recently been launched.
There is broad consensus that pathways to sustainability require a secure supply of those ecosystem services that contribute to human well-being (Fig. 1; [START_REF] Griggs | Policy: Sustainable development goals for people and planet[END_REF][START_REF] Wu | Landscape sustainability science: ecosystem services and human well-being in changing landscapes[END_REF]). The ecosystem service concept is an important integrative framework in sustainability science [START_REF] Liu | Systems integration for global sustainability[END_REF], even if the term ecosystem services is not often explicitly mentioned in policy objectives. Nevertheless, a number of specific ecosystem services are mentioned in documents relating to the different objectives stated in the SDGs and Aichi Targets. For example, there is an explicit mention of the regulation of natural hazards in SDG 13 and of carbon sequestration in Aichi Target 15. Especially for the poorest people, who most directly depend on access to ecosystems and their services [START_REF] Daw | Applying the ecosystem services concept to poverty alleviation: the need to disaggregate human well-being[END_REF][START_REF] Sunderlin | Livelihoods, forests, and conservation in developing countries: An Overview[END_REF], information on the state of and trends in ecosystem services should be highly relevant [START_REF] Wood | Ecosystems and human well-being in the Sustainable Development Goals[END_REF]. Trends in biodiversity and ecosystem services, and their impact on human well-being and sustainability, must be studied using an integrated approach [START_REF] Bennett | Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability[END_REF][START_REF] Liu | Systems integration for global sustainability[END_REF]. The SDG ambitions could potentially offer key elements for this integration. Most assessments use a pragmatic approach to select indicators for ecosystem services, often focusing only on those indicators and ecosystem services for which data are readily available. Although this helps to advance knowledge on many aspects of ecosystem services, it may not provide the knowledge required to monitor progress towards sustainability [START_REF] Hicks | Engage key social concepts for sustainability[END_REF]. Regions characterized by a high vulnerability of ecosystem services supply and human well-being, such as the Mediterranean Basin [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF], require information on trends in all aspects of ecosystem services flows, including the impact of governance interventions and pressures on social-ecological systems.
Considerable progress has been made in developing integrative frameworks and definitions for ecosystem services and the quantification of indicators (e.g. [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF][START_REF] Maes | An indicator framework for assessing ecosystem services in support of the EU Biodiversity Strategy to[END_REF], but it is unclear to which extent the current state of the art in ecosystem services assessments is able to provide the information required for monitoring the SDGs and the Aichi Targets. Since the publication of the Millennium Ecosystem Assessment in 2005, multiple national ecosystem services assessments have been undertaken, such as the United Kingdom National Ecosystem Assessment (UK National Ecosystem Assessment, 2011), the Spanish NEA [START_REF] Santos-Martín | Unraveling the Relationships between Ecosystems and Human Wellbeing in Spain[END_REF] or the New Zealand assessment [START_REF] Dymond | Ecosystem services in New Zealand[END_REF]. Furthermore, in the context of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), regional and global assessments are planned for 2018 and 2019, respectively. The ecosystem services indicators used in these national, regional and global assessments could also provide relevant information for monitoring the progress towards these global sustainability objectives.
The main goal of the present study is to explore to what extent the ecosystem services concept has been incorporated in global sustainability policies, particularly the SDGs and the Aichi Targets. For this objective, we i) assessed the information on ecosystem services currently recommended to monitor the progress on both policy documents and ii) identified which information on ecosystem services can already be provided on the basis of the indicators reported in national ecosystem assessments. Based on these two outputs, we iii) identified knowledge gaps regarding ecosystem services for monitoring the progress on global policy objectives for sustainability.
Material and methods
Numerous frameworks exist to describe ecosystem services (e.g., [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF][START_REF] Maes | An indicator framework for assessing ecosystem services in support of the EU Biodiversity Strategy to[END_REF], but there is general agreement that a combination of biophysical, ecological and societal components is required to estimate the flow of actual benefits arriving to the beneficiary. In line with the ongoing development of an Essential Ecosystem Services Variable Framework in the scope of the Global Earth Observation Biodiversity Observation Network (GEO BON), we used a framework that distinguishes variables of ecosystem services flows (Tab. 1): the ecological potential for ecosystem services supply (Potential supply), and the societal co-production (Supply), Use of the service, Demand for the service as well as Interests and governance measures for the service (Tab. 1, adapted from [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. We hereafter refer to these variables with capitals to increase the readability of the text. Using this framework, we i) identified and ranked the frequency at which specific ecosystem services are mentioned, within and across the selected policy documents [START_REF] Cbd | Decision document UNEP/CBD/COP/DEC/X/2; Quick guides to the Aichi Biodiversity Targets, version 2[END_REF]United Nations, 2015a); ii) reviewed indicators currently used for reporting on the Aichi Targets (Global Outlook) and iii) reviewed the 277 indicators currently being used in national ecosystem assessments, to identify any existing information gaps.
Only monitoring data that feed all the variables of this framework allow trends to be detected and changes in ecosystem services flows to be interpreted. One example relevant for the SDGs is a food deficit indicator (e.g. insufficient calorie intake per capita). An increase in this deficit in a specific country would indicate the need for additional interventions. However, depending on the cause of the increased deficit, some interventions are more likely to be effective than others. For example, the food deficit could be caused by a change in demand (e.g. increased population numbers), in the service supply (e.g. agricultural land abandonment), or in the ecological potential to supply services (e.g. degradation of soils).
We structured our analysis of indicators by distinguishing between indirect and direct indicators (Tab. 1). While direct indicators assess an aspect of an ecosystem service flow (e.g. tons of wheat produced), indirect indicators provide proxies or only partial information (e.g. hectares of wheat fields under organic management) necessary to compute the respective indicator. Our review does not judge the appropriateness or robustness of the respective indicator (as proposed by [START_REF] Hák | Sustainable Development Goals: A need for relevant indicators[END_REF], nor did we aim to assess whether the underlying data source was reliable or could provide repeated measures of indicators over time. We only looked at the type of information that was described for each of the ecosystem services mentioned in the policy objectives and the type of indicators proposed for reporting on these policies.
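To make the classification scheme concrete, the sketch below shows one possible way to encode each reviewed indicator as a record combining the ecosystem service, its category, the flow variable it informs (Potential supply, Supply, Use, Demand or Interest) and whether it is direct or indirect. This is a hypothetical illustration of the bookkeeping, not the spreadsheet actually used in the review.

```python
# Hypothetical record structure for the indicator review (illustrative only).
from dataclasses import dataclass
from collections import Counter

@dataclass
class IndicatorRecord:
    indicator: str       # e.g. "tons of wheat produced"
    service: str         # e.g. "Crops"
    category: str        # "provisioning", "regulating" or "cultural"
    flow_variable: str   # "Potential supply", "Supply", "Use", "Demand" or "Interest"
    direct: bool         # True = direct indicator, False = proxy/indirect indicator

records = [
    IndicatorRecord("tons of wheat produced", "Crops", "provisioning", "Supply", True),
    IndicatorRecord("hectares of wheat under organic management", "Crops", "provisioning", "Supply", False),
]

# Tallies of the kind reported in Table A.2 can then be derived directly:
by_variable = Counter(r.flow_variable for r in records)
print(by_variable)
```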
The data for reporting on the SDGs is currently provided by national statistical bureaus and we therefore wanted to identify which ecosystem services indicators might be available at this level. To get a first impression, we reviewed the indicators used in 9 national ecosystem assessments and the European ecosystem assessment.
A network analysis was used to determine the associations between i) ecosystem services within the SDGs and the CBD Aichi Targets, ii) the variables of ecosystem services flows and proposed indicators for both policies and iii) the categories of ecosystem services and the components of the ecosystem service flow, in the indicators used in national and the European ecosystem assessments. The network analysis was performed using Gephi [START_REF] Bastian | Gephi: an open source software for exploring and manipulating networks[END_REF] and their visualization was subsequently produced using NodeXL (https://nodexl.codeplex.com/, last consulted January 13 th 2017).
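The study performed the network analysis in Gephi and produced the visualizations with NodeXL; as a rough scripted equivalent, the sketch below builds a weighted graph between policy goals and ecosystem service categories from a frequency table and exports it in a Gephi-readable format. The node names and weights are placeholders, not the actual data.

```python
# Scripted analogue of the Gephi workflow (illustrative; placeholder data).
import networkx as nx

# (policy goal, ecosystem service category, number of mentions)
mentions = [
    ("SDG 2", "provisioning", 8),
    ("SDG 6", "regulating", 5),
    ("Aichi Goal B", "provisioning", 7),
]

G = nx.Graph()
for goal, category, weight in mentions:
    G.add_node(goal, kind="policy goal")
    G.add_node(category, kind="ES category")
    G.add_edge(goal, category, weight=weight)

# Node size proportional to the number of ties (degree), as in the published figures.
degrees = dict(G.degree())
print(degrees)

nx.write_gexf(G, "policy_es_network.gexf")  # file can be opened in Gephi
```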
Managed Supply
Type and quantity of services supplied by the combination of the Potential supply and the impact of interventions (e.g., management) by people in a particular area and over a specific time period.
Capacity [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF], supply [START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF], service capacity [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF]; supply capacity of an area [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF]; actual ecosystem service provision [START_REF] Guerra | Mapping Soil Erosion Prevention Using an Ecosystem Service Modeling Framework for Integrated Land Management and Policy[END_REF]; ecosystem functions under the impact of "land management" [START_REF] Van Oudenhoven | Framework for systematic indicator selection to assess effects of land management on ecosystem services[END_REF]; Service Providing Unit-Ecosystem Service Provider Continuum [START_REF] Harrington | Ecosystem services and biodiversity conservation: concepts and a glossary[END_REF].
Harvested biomass; potential pressures that a managed landscape can absorb; extent of landscape made accessible for recreation.
Modelled estimates of harvestable biomass under managed conditions; soil cover vegetation management; financial investments in infrastructure.
Use
Quantity and type of services used by society.
Flow [START_REF] Schröter | Ecosystem Service Supply and Vulnerability to Global Change in Europe[END_REF][START_REF] Schröter | Accounting for capacity and flow of ecosystem services: A conceptual model and a case study for Telemark, Norway[END_REF]; service flow [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF]; "demand" (match and demand aggregated into one term) [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF][START_REF] Crossman | A blueprint for mapping and modelling ecosystem services[END_REF].
Biomass sold or otherwise used; amount of soil erosion avoided while exposed to eroding pressures; number of people actually visiting a landscape.
Estimations of biomass use for energy by households; reduction of soil erosion damage; distance estimates from nearby urban areas.
Demand
Expression of demands by people in terms of actual allocation of scarce resources (e.g. money or travel time) to fulfil their demand for services, in a particular area and over a specific time period.
Stakeholder prioritisation of ecosystem services [START_REF] Martín-López | Trade-offs across valuedomains in ecosystem services assessment[END_REF], service demand [START_REF] Villamagna | Capacity, pressure, demand, and flow: A conceptual framework for analyzing ecosystem service provision and delivery[END_REF], demand [START_REF] Burkhard | Mapping ecosystem service supply, demand and budgets[END_REF].
Prices that people are willing to pay for biomass; amount of capital directly threatened by soil erosion; time investment, travel distances and prices people are willing to pay to visit a landscape.
Computation of average household needs; remaining soil erosion rates; survey results on landscape appreciation.
Interests
An expression of people's interests for certain services, in a particular area and over a specific time period. These tend to be longer wish-lists of services without prioritisation.
Identification of those important ecosystem services for stakeholders' well-being [START_REF] Martín-López | Trade-offs across valuedomains in ecosystem services assessment[END_REF]; beneficiaries with assumed demands [START_REF] Bastian | The five pillar EPPS framework for quantifying, mapping and managing ecosystem services[END_REF].
Subsidies for bio-energy; endorsement of guidelines for best practices for soil management; publicity for outdoor recreation.
Number of people interested in green energy; number of farmers aware of soil erosion; average distance of inhabitants to green areas.
Identification of ecosystem services in the SDGs and Aichi Targets
Two international policy documents were selected for review: the SDGs (United Nations, 2015a) and the CBD Aichi Targets (CBD, 2013). Both documents have global coverage and contain objectives on sustainable development, related to maintaining or improving human well-being and nature. The classification of ecosystem services used in this paper is based on [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], which matched best with the terminology of policy documents and the national assessments.
For each policy document, we determined the absolute and relative frequency at which an ecosystem service was mentioned. This frequency was also used to produce a relative ranking of ecosystem services, within and across these policy documents. Although the SDGs and the Aichi Targets include several statements on specific ecosystem services (e.g. food production, protection from risks), the term "ecosystem services" is not often mentioned. In the SDGs, for instance, ecosystem services explicitly occur only once (Goal 15.1). In contrast, "sustainable development or management" and "sustainable use of natural resources" are mentioned several times, although not further specified. While the latter could be interpreted to mean that the use of nature for provisioning purposes should not negatively affect regulating services, we preferred to remain cautious and not make this assumption, when reviewing the policy documents. We are therefore certain that we underestimate the importance of knowledge on ecosystem services regarding the different policy objectives.
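A minimal sketch of this counting step is given below: it tallies how often each service is mentioned per document, derives relative frequencies, and ranks the services within each document. The mention lists are placeholders; the actual tallies are those reported in Table 3 and Table A.1.

```python
# Illustrative frequency count and ranking of ecosystem service mentions (placeholder data).
from collections import Counter

# Each entry is one mention of a service identified while reading a policy document.
mentions = {
    "SDGs": ["Capture fisheries", "Crops", "Capture fisheries", "Water purification"],
    "Aichi Targets": ["Crops", "Water purification", "Timber"],
}

ranking = {}
for document, services in mentions.items():
    counts = Counter(services)
    total = sum(counts.values())
    relative = {s: round(n / total, 2) for s, n in counts.items()}
    # Rank 1 = most frequently mentioned service in this document.
    ranking[document] = sorted(counts, key=counts.get, reverse=True)
    print(document, dict(counts), relative)

print(ranking)
```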
Proposed ecosystem services indicators for the SDGs and Aichi Targets
In addition to the ecosystem services directly mentioned in the policy objectives, we also reviewed the type of information on ecosystem services proposed to monitor the progress towards the policy objectives. To this end, we used the 2015 UN report (United Nations, 2015b) for the SDGs. For the Aichi Targets, we focused on the recently proposed (but still under development) indicator list [START_REF] Cbd | Report of the ad hoc technical expert group on indicators for the strategic plan for biodiversity 2011-2020[END_REF] and on the indicators recently used in the Global Biodiversity Outlook 4 (CBD, 2014).
Review of national ecosystem services assessments
Although many authors propose indicators for ecosystem services (e.g. Böhnke-Hendrichs et al., 2013;[START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], not all indicators can be used for monitoring, due to lack of available data at the relevant scale or because current inventories do not provide sufficient time series for trend assessment.
For the CBD reporting, continuous efforts are made to provide monitoring information at global level, for instance via the use of Essential Biodiversity Variables (e.g. [START_REF] O'connor | Earth observation as a tool for tracking progress towards the Aichi Biodiversity Targets[END_REF]. Reporting for the SDGs, however, will heavily rely on the capacity of national statistical bureaus to provide the required data (ICSU, ISSC, 2015).
To estimate the type of ecosystem services indicators that might be available at national level, we selected national ecosystem assessment reports that were openly available and written in one of the seven languages mastered by the co-authors (i.e. English, Spanish, Portuguese, Hebrew, French, German and Dutch). Nine assessments fulfilled these criteria (see Tab. 2). We complemented them with the European report [START_REF] Maes | Mapping and Assessment of Ecosystems and their Services: Trends in ecosystems and ecosystem services in the European Union between 2000 and[END_REF], which is considered to be a baseline reference for upcoming national assessments in European member states. The selection criteria resulted in the inclusion of 9 national assessments from three continents, but with a bias towards European and developed countries.
Results and discussion
Ecosystem services mentioned in policy objectives
The need for information on ecosystem services from all three categories (i.e. provisioning, regulating and cultural) is mentioned in both policies, and reflects earlier suggestions on the integrative nature of the policy objectives on sustainable development, especially for the SDGs (Le [START_REF] Blanc | Towards Integration at Last? The Sustainable Development Goals as a Network of Targets: The sustainable development goals as a network of targets[END_REF]. Among the 17 SDGs and the 20 Aichi Targets, 12 goals and 13 targets respectively, relate to ecosystem services. Across both policy documents, all ecosystem service categories are well covered, the top 25% of the most cited ecosystem services being: Natural heritage and diversity, Capture fisheries, Aquaculture, Water purification, Crops, Livestock and Cultural heritage & diversity (Table 3). In the SDGs, provisioning services are explicitly mentioned 29 times, regulating services 33 times and cultural services 23 times. In the Aichi Targets, provisioning services are explicitly mentioned 29 times, regulating services 21 times and cultural services 13 times.
When considering the different ecosystem service categories, SDG 2 (end hunger, achieve food security and improved nutrition, and promote sustainable agriculture) and Aichi Goal B (reduce the direct pressures on biodiversity and promote sustainable use) heavily rely on provisioning services, with the latter also relying on regulating services (Fig. 2). Cultural services are more equally demanded over a range of policy objectives, with the service Natural heritage & diversity being the most demanded ecosystem service (see Tab. A.1).
Recent reviews of scientific ecosystem services assessments (e.g. [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]; Lee and Lautenbach, 2016) demonstrate that easily measurable ecosystem services (i.e. most of the provisioning services) or ecosystem services that can be quantified through modelling (i.e. many of the regulating services) are most often studied, whereas cultural ecosystem services are much less represented, despite their importance for global sustainability policies. The reason for this knowledge gap is partly theoretical (e.g. a lack of agreement on how to monitor and measure these services) and partly because the assessment of cultural services in particular requires a multi-disciplinary approach (e.g. landscape ecologists, environmental anthropologists, or environmental planners), which is difficult to achieve (Hernández-Morcillo et al. 2013; [START_REF] Milcu | Cultural ecosystem services: a literature review and prospects for future research[END_REF]). The development of cultural services indicators would benefit from a truly interdisciplinary dialogue, which should take place at both national and international levels to capture cultural differences and spatial heterogeneity. The capacity building objectives of IPBES could provide an important global incentive to come to a structured, multi-disciplinary and coherent concept of cultural services.
Proposed ecosystem services indicators
The analysis of the proposed indicators for reporting on both policy objectives (n=119) demonstrated that in total 43 indicators represented information on Potential supply with the other variables being represented by indicators in the 15-24 range (Fig. 3A). This bias towards supply variables is remarkable for the Aichi Targets (Fig. 3A). Another observed pattern is that the variables Demand and Interest are more often represented by proposed indicators for the SDGs than for the Aichi Targets (i.e. demand 11 versus 5 and interest 13 versus 4, respectively). The results therefore provide support for the claim that the SDGs aim to be an integrative policy framework (Le [START_REF] Blanc | Towards Integration at Last? The Sustainable Development Goals as a Network of Targets: The sustainable development goals as a network of targets[END_REF], at least in the sense that the proposed indicators for SDGs demonstrate a more balanced inclusion of ecological and socio-economic information.
A comparison of the number of ecosystem services that are relevant for the SDGs with the total number of indicators proposed for monitoring, however, reveals that balanced information from the indicators is unlikely to concern all ecosystem services (Figure 3). The proposed indicators cover all five variables for only one SDG target (i.e. SDG 15: "Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss"). Among the Aichi Targets, none of the Strategic Goals was covered by indicators representing all five variables. The frequencies at which ecosystem services are represented in the policy reports are surprisingly low (Figure 3B). In an ideal situation, each of the ecosystem services would be covered by indicators representing all five variables (i.e. a frequency value of 1). Our results demonstrate a highest frequency value of 0.4 for SDG target 13 ("Take urgent action to combat climate change and its impacts"), caused by several indicators representing only two variables (i.e. Demand and Interest). The SDG list of indicators is kept short on purpose to keep reporting feasible, but if the indicators and data were available through national or global platforms (e.g. IPBES, World Bank), a longer list of readily updated indicators might not be so problematic. Despite the identified value of information on ecosystem services as presented in section 3.1, it seems that entire ecosystem service flows (from Potential supply to Interest) are poorly captured by the proposed and (potentially) used indicators. The information recommended for the Aichi Targets shows a strong bias towards the supply side of the ecosystem services flow (i.e. Potential supply and Supply), whereas this seems more balanced for the SDGs. However, the overall information demanded is very low, given the number of services that are relevant for the policies (Fig. 3). Variables linked to social behaviour and ecosystem services consumption (i.e. Demand and Use) and governance (i.e. Interest) are much less represented in the Aichi Targets, and this bias is reinforced when looking at the indicators actually used. As the SDG reporting is based on information from national statistical bureaus, we can wonder whether their data will show a similar bias or not, as the data sources used can be of a different nature (e.g. some indicators may come from national censuses). Results from section 3.3 make it clear that if SDG reports rely only on national ecosystem assessments for their information, they will likely show the same bias as found in the Aichi Target reports. To obtain more balanced information for the SDGs, national statistical bureaus would be ideally placed to add complementary social and economic data on the other variables.
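The 0.4 value can be read as a coverage ratio. One plausible form of the standardisation described for Fig. 3B, assumed here because the exact formula is not spelled out in the text, averages over the nES services linked to a target the share of the five flow variables covered by at least one proposed indicator:

```latex
f_{\mathrm{target}} = \frac{1}{n_{ES}} \sum_{s=1}^{n_{ES}} \frac{v_s}{5}, \qquad v_s \in \{0, 1, \dots, 5\}
```

Under this reading, indicators for SDG 13 that inform only Demand and Interest give v_s = 2 for its associated service(s), hence f = 2/5 = 0.4.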
Ecosystem service information in national assessments
The analysis of the national ecosystem assessments demonstrates that a significant amount of information on ecosystem services flows is available at national level (Appendix A, Tab. A.4). It has to be noted that, as the analysed national ecosystem assessments underrepresent developing and non-European countries, the available information at global level might be significantly lower. However, some national reports may not have been detected or included in our review, for instance because we did not find them on the internet or because they were not written in any of the languages mastered by the authors.
The available knowledge in the selected ecosystem assessments on ecosystem services flows shows, however, a considerable bias towards Supply information on provisioning services and Potential supply information for regulating services. Cultural ecosystem services as well as Use, Demand and Interest variables are not well covered in national assessments. In addition, only for some ecosystem services (e.g., Timber, Erosion Regulation, Recreation) information is available for all relevant ecosystem services variables (Fig. A.2).
In total, we identified 277 ecosystem services indicators in the ten selected ecosystem services assessments (Tab. A.2). Within these 277 indicators, most provide information on provisioning services (126, 45%), whereas 121 indicators provide information on regulating services (44%). The remaining 30 indicators (11%) provide information on cultural services. Based on the network analysis, we can clearly see that indicators used for provisioning services mostly represent information on the Supply variable, whereas indicators used for regulating services mostly represent the Potential supply variable (Fig. 4).
Figure 4. Relative representation of the indicators used in the analysed national ecosystem assessments, according to ecosystem services category (provisioning, regulating or cultural services) and the ecosystem service variables (Potential supply, Supply, Use, Demand or Interest). The line width indicates the frequency at which indicators of a certain ecosystem service category were used to monitor any of the components of the ecosystem services flow. The size of the nodes is proportional to the number of ties that a node has.
Among the 277 indicators, 39 did not provide a measure of service flow, but rather of the pressure (e.g. amount of ammonia emission) or of the status quo (e.g. current air quality). None of these measures provide information on the actual ecosystem service flow; they rather reflect the response to a pressure. The status quo can be considered to result from the interplay between the exerted pressure and the triggered ecosystem services flow. Among the 39 indicators, 38 were used to quantify regulating services, leaving a total of 83 indicators to quantify variables of regulating ecosystem services flows.
The 238 indicators of ecosystem service flows are almost equally divided between direct and indirect indicators, namely 124 versus 114, respectively (Tab. A.2). The distribution of the indicators within the different ecosystem service categories differs. Among the different variables, Interest is least represented by the different indicators. The pattern is most pronounced for provisioning services, where there is relatively little information available on Demand and Interest (Fig. 4). For regulating services, most information seems available on the Potential supply side of the ecosystem services flow (Fig. 4).
The cultural ecosystem services category has the lowest number of indicators used for monitoring the ecosystem service flow (Tab. A.2). Regardless of general patterns, indicators are available only for very few services, for all five variables (Fig. A.2). For the top 25% services most frequently mentioned in the policies, there is a similar bias towards indicators on Supply (Tab. A.3), mainly stemming from the provisioning services crop and livestock (Tab. A.4), whereas no indicators were included for the ecosystem service Natural heritage and natural diversity.
As already acknowledged by IPBES, capacity building is needed to increase the number of readily available indicators for ecosystem services at national and global levels. The capacity to monitor spatially explicit dynamics of ecosystem services, including multiple variables of the ecosystem services flow simultaneously, could benefit from the application of process-oriented models (e.g. [START_REF] Bagstad | Spatial dynamics of ecosystem service flows: A comprehensive approach to quantifying actual services[END_REF][START_REF] Guerra | An assessment of soil erosion prevention by vegetation in Mediterranean Europe: Current trends of ecosystem service provision[END_REF]), the use of remote sensing for specific variables (e.g. [START_REF] Cord | Monitor ecosystem services from space[END_REF]), or the alignment with census-based social and economic data (e.g. Hermans-Neumann et al., 2016).
Recommendations for improvement towards the future
The biased information on ecosystem service flows hampers an evaluation of progress on sustainable development. If policy reports are not able to identify whether trends in supply, consumption and demand of ecosystem services align, it will be difficult to identify if no one is left behind [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. Apart from the results of the structured analysis, three other issues emerged from the review, which we want to mention here to raise awareness and stimulate inclusion of these issues in further scientific studies.
First, trade-offs play a crucial role in the interpretation of the sustainability of developments related to human well-being [START_REF] Liu | Systems integration for global sustainability[END_REF][START_REF] Wu | Landscape sustainability science: ecosystem services and human well-being in changing landscapes[END_REF] and often include regulating services [START_REF] Lee | A quantitative review of relationships between ecosystem services[END_REF]. Interestingly, in the case of the SDGs, where the objective of sustainable development is a key concept, no indicators are proposed to monitor whether the impacts of progress on some objectives (e.g. industry development mentioned in Target 16) might negatively affect progress towards another objective (e.g. water availability and water quality mentioned in Target 6). Without monitoring of tradeoffs between objectives and underlying ecosystem services, it will be difficult to determine whether any progress made can be considered sustainable for improving human well-being [START_REF] Costanza | The UN Sustainable Development Goals and the dynamics of well-being[END_REF][START_REF] Nilsson | Policy: Map the interactions between Sustainable Development Goals[END_REF]. Reporting on global sustainability policies would greatly benefit from the development and standardisation of methods to detect trends in trade-offs between ecosystem services, and between ecosystem services and other pressures. The ongoing IPBES regional and global assessments could offer excellent opportunities to develop comprehensive narratives that include the interactions between multiple ecosystem services and between them and drivers of change. Global working groups on ecosystem services from GEO BON2 and the Ecosystem Services Partnership3 can render ecosystem services data and variables usable in a wide set of monitoring and reporting contexts by developing frameworks connecting data to indicators and monitoring schemes.
Second, the applied framework of variables of ecosystem service flows did not allow for an evaluation of the most relevant spatial and temporal scales, or of indicator units. Most ecosystem services are spatially explicit and show spatial and temporal heterogeneity that requires information on both ecological and social aspects of ecosystem services flows (e.g. [START_REF] Guerra | An assessment of soil erosion prevention by vegetation in Mediterranean Europe: Current trends of ecosystem service provision[END_REF][START_REF] Guerra | Mapping Soil Erosion Prevention Using an Ecosystem Service Modeling Framework for Integrated Land Management and Policy[END_REF]). To monitor progress towards the Aichi Targets, the tendency to date has been to develop indicators and variables that could be quantified at global level, with the framework of Essential Biodiversity Variables being a leading concept [START_REF] O'connor | Earth observation as a tool for tracking progress towards the Aichi Biodiversity Targets[END_REF][START_REF] Pereira | Essential Biodiversity Variables[END_REF][START_REF] Pettorelli | Framing the concept of satellite remote sensing essential biodiversity variables: challenges and future directions[END_REF]. Although indicators with global coverage can be very effective in communicating and convincing audiences of the existence of specific trends (e.g. the Living Planet Index), they are not likely to provide sufficient information to inform management or policy decisions at local or national scales. For the SDGs, which are at a much earlier stage of development than the Aichi Targets, data will be provided at national level by national statistical bureaus (ICSU, ISSC, 2015), which may better suit national decision makers deciding on the implementation of interventions. The current approach of reporting on SDG progress at national level may also allow easier integration of information on ecosystem services available from national assessments. Although the number of available national ecosystem assessments is still rising, developing countries are currently underrepresented. Developing national assessments in these countries is therefore an important step towards credible reporting on the Aichi Targets and the SDGs.
Third, national ecosystem assessments would ideally provide information at the spatio-temporal scale and unit most relevant for the ecosystem services at hand [START_REF] Costanza | Ecosystem services: Multiple classification systems are needed[END_REF][START_REF] Geijzendorffer | The relevant scales of ecosystem services demand[END_REF]. This would allow for the identification of people who do not have enough access to particular ecosystem services (e.g. gender related, income related) at a sub-national level. The assessment of progress in human well-being for different social actors within the same country, requires alternative units of measurement than national averages for the whole population in order to appraise equity aspects [START_REF] Daw | Applying the ecosystem services concept to poverty alleviation: the need to disaggregate human well-being[END_REF][START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]. Further, although the setting of the SDGs was done by national governments, achieving sustainable development requires the engagement of multiple social actors operating at local level. Some of these local actors (e.g. rural or indigenous communities, low-income neighbourhoods, migrants or women) play a relevant role in achieving the SDGs, because they are more vulnerable to the impact of unequal access to and distribution of ecosystem services.
Although some of the indicators and objectives of SDGs mention particular actor groups (e.g. women), the representation of vulnerable groups will require special attention throughout the different targets and ecosystem services.
Conclusion
This study demonstrates that information from all ecosystem services categories is relevant for the monitoring of the Aichi Targets and the SDGs. It identifies a bias in the information demand as well as in the information available from indicators at national level towards supply related aspects of ecosystem services flows, whereas information on social behaviour, use, demand and governance implementation is much less developed.
The National statistical bureaus currently in charge of providing the data for reporting on the SDGs could be well placed to address this bias, by integrating ecological and socio-economic data. In addition, IPBES could potentially address gaps between national and global scales, as well as improve coverage of ecosystem services flows. As its first assessments of biodiversity and ecosystem services are ongoing, IPBES is still adapting its concepts. To live up to its potential role, IPBES needs to continue to adapt concepts based on scientific conceptual arguments and not based on current day practical constraints, such as a lack of data, or political sensitivities. This manuscript demonstrates the importance of data and indicators for global sustainability policies and which biases we need to start readdressing, now.
Appendix A: The frequency at which ecosystem services are mentioned per target, in the policy documents. The review of the national assessment reports showed no indicators explicitly linked to the Natural heritage and natural diversity service (Table S3). We might consider that some aspects of this service may be captured by other cultural services, such as the appreciation by tourists or knowledge systems.
However, the interpretation of this specific service is generally considered to be very difficult. Many consider that the intrinsic value of biodiversity, although very important, cannot be regarded as an ecosystem service, since the direct benefit for human well-being is not evident, but rather as an ecological characteristic [START_REF] Balvanera | Quantifying the evidence for biodiversity effects on ecosystem functioning and services: Biodiversity and ecosystem functioning/services[END_REF][START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF]. To include the Natural heritage and natural diversity service in our review, we considered that only information on biodiversity aspects for which human appreciation was explicitly used as a criterion should be included in this particular ecosystem service. This means that general patterns in species abundance (e.g. Living Planet Index), habitat extent or the presence of red-list species were considered as important variables for biodiversity only if they supported specific ecological functions (e.g. mangrove extent for life cycle maintenance by providing nurseries for fish), but not as indicators for the supply of the natural heritage service in general.
Figure 1. Contribution of ecosystem services to human well-being, with direct contributions indicated with black arrows and indirect contributions by dotted arrows. Figure adapted from Wu (2013).
Fig 2. Relative importance of ecosystem service categories for the different policy objectives. The line width indicates the frequency at which a certain ecosystem service category was mentioned in relation to a specific goal of the SDGs or Aichi Targets (goals for which no relation to ecosystem services was found are not shown). The size of the nodes is proportional to the number of ties that a node has.
Figure 3. Relative importance of each of the ecosystem services variables (Potential supply, Supply, Use, Demand and Interest) recommended for the monitoring of the global sustainability objectives. (A) The number of proposed and used indicators for the reporting on the progress of the sustainability goals in policy documents per ecosystem service variable. (B) Relative frequencies (0-1) at which information from variables is represented by indicators per policy target. Frequency values are standardized for the total number of services linked to an individual policy target (nES); the legend indicates nSDGs and nAichi, the total number of proposed indicators for each ES variable per policy programme, respectively. Policy targets which did not mention ecosystem services are not included in the figure. Legend: Potential supply (nSDGs=13; nAichi=30); Supply (nSDGs=7; nAichi=14); Use (nSDGs=10; nAichi=3); Demand (nSDGs=11; nAichi=5); Interest (nSDGs=13; nAichi=4).
Figure A.1: Degree (the number of connections) per ecosystem service across both policy documents.
Table 1: Evaluation framework for the indicators on ecosystem service flows (adapted from [START_REF] Geijzendorffer | Improving the identification of mismatches in ecosystem services assessments[END_REF]). While direct indicators can be used to immediately assess the needed information, indirect indicators provide proxies or only partial information necessary to compute the respective indicator.
Information component: Definition; Related terms used in other papers; Examples of direct indicators; Examples of indirect indicators.
Potential Supply
Estimated supply of ecosystem services based on ecological and geophysical characteristics of ecosystems, taking into account the ecosystem's integrity, under the influence of external drivers (e.g., climate change or pollution).
Ecosystem functions (de Groot et al., 2002); ecosystem properties that support ecosystem functions (van Oudenhoven et al., 2012).
Modelled estimates of harvestable biomass under natural conditions; potential pressures that an ecosystem can absorb; landscape aesthetic quality.
Qualitative estimates of land cover type contributions to biomass growth; species traits (e.g. root growth patterns); landscape heterogeneity of land cover types.
Table 2: Ecosystem service assessments considered in the analysis.
Included countries; Reference
Belgium (Stevens, 2014)
Europe (Maes et al., 2015)
Finland http://www.biodiversity.fi/ecosystemservice s/home, last consulted January 13 th 2017
New Zealand (Dymond, 2013)
South Africa (Reyers et al., 2014)
South Africa, Tanzania and Zambia (Willemen et al., 2015)
Spain (Santos-Martín et al., 2013)
United Kingdom (UK National Ecosystem Assessment, 2011)
Table 3. Frequency at which the different ecosystem services were mentioned in both policy documents. Presented ecosystem services frequency scores are for the SDGs per target (n=126) and for the Aichi Targets per target (n=20).
Ecosystem services SDGs Aichi Targets
Provisioning services (total) 29 29
Crops 4 3
Energy (biomass) 2 1
Fodder 0 1
Livestock 4 3
Fibre 0 2
Timber 0 3
Wood for fuel 2 1
Capture fisheries 8 3
Aquaculture 5 3
Wild foods 2 3
Biochemicals/medicine 0 3
Freshwater 2 3
Regulating services (total) 33 21
Global climate regulation 0 2
Local climate regulation 3 1
Air quality regulation 2 0
Water flow regulation 5 2
Water purification 5 3
Nutrient regulation 0 3
Erosion regulation 3 3
Natural hazard protection 6 1
Pollination 1 2
Pest and disease control 2 2
Regulation of waste 6 2
Cultural services (total) 23 13
Recreation 4 0
Landscape aesthetics 0 0
Knowledge systems 2 3
Religious and spiritual experiences 0 1
Cultural heritage & cultural diversity 4 3
Natural Heritage & natural diversity 13 6
Table A.1. Overall ranking of the frequency at which ecosystem services were mentioned across both the SDGs and the Aichi Targets. The top 25% most frequently mentioned ecosystem services are highlighted in bold. Ecosystem service categories are Provisioning (P), Regulating (R) and Cultural (C).
Ecosystem service category; Ecosystem services; SDGs ranking; Aichi Targets ranking; Combined ranking
C Natural heritage & natural diversity 1 1 1
P Capture fisheries 2 8 2
P Aquaculture 6 8 3.5
R Water purification 6 8 3.5
P Crops 9,5 8 6
P Livestock 9,5 8 6
C Cultural heritage & cultural diversity 9,5 8 6
R Erosion regulation 12,5 8 8,5
R Regulation of waste 3,5 17,5 8,5
R Water flow regulation 6 17,5 10
P Wild foods 17 8 12
P Freshwater 17 8 12
C Knowledge systems 17 8 12
R Natural hazard protection 3,5 23,5 14
P Timber 25,5 8 16
P Biochemicals/medicine 25,5 8 16
R Nutrient regulation 25,5 8 16
R Pest and disease control 17 17,5 18
R Local climate regulation 12,5 23,5 19
C Recreation 9,5 28 20
R Pollination 21 17,5 21
P Energy (biomass) 17 23,5 22,5
P Wood for fuel 17 23,5 22,5
P Fibre 25,5 17,5 24
R Global climate regulation 25,5 17,5 25
R Air quality regulation 17 28 26
P Fodder 25,5 23,5 27,5
C Religious and spiritual experiences 25,5 23,5 27,5
C Landscape aesthetics 25,5 28 29
Table A.2. Number of indicators identified from national ecosystem assessments, presented per ecosystem service category (provisioning, regulating or cultural services), ecosystem service variable (Potential Supply, Supply, Use, Demand or Interest) or indicator type (direct or indirect). For regulating services, 39 additional indicators describing pressures and states were identified.
Direct; Indirect; Potential Supply; Supply; Use; Demand; Interest
Total 124 114 59 89 46 31 13
Provisioning 82 43 22 61 31 8 3
Regulating 26 57 34 19 5 18 7
Cultural 16 14 3 9 10 5 3
Potential Supply 19 40
Supply 45 44
Use 40 6
Demand 17 14
Interest 3 10
Table A.3.
Number of indicators identified from ecosystem services assessments for the top 25% of ecosystem services recommended by the reviewed policies, presented per ecosystem service variable
(Potential Supply, Supply, Use, Demand or Interest) or indicator type (direct or indirect).
1 https://www.cbd.int/gbo/, last consulted on the 22nd of April 2017
2 http://geobon.org/working-groups/, last consulted on the 22nd of April 2017
3 http://es-partnership.org/community/workings-groups/, last consulted on the 22nd of April 2017
4 www.livingplanetindex.org/home/index, last consulted on the 22nd of April 2017
Acknowledgements
We thank the two anonymous reviewers for their suggestions, which have led to an improved final version of the manuscript. This work was partly supported by 7th Framework Programmes funded by the European Union the EU BON (Contract No. 308454) and the OPERAs project (Contract No.308393). It contributes to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). This study contributes to the work done within the GEO BON working group on Ecosystem Services and the Mediterranean Ecosystem Services working group of the Ecosystem Services Partnership.
[START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF], but based on the indicators found in the selected ecosystem services assessments, we made small adjustments: 1) for livestock the definition remained the same, but we changed the name for clarity in the table; 2) noise reduction, soil quality regulation and lifecycle maintenance were absent from [START_REF] Kandziora | Interactions of ecosystem properties, ecosystem integrity and ecosystem service indicators-A theoretical matrix exercise[END_REF] and were added; 3) we split natural hazard regulation in two: flood risk regulation and coastal protection; and 4) we separated recreation and tourism. |
01444016 | en | [ "sde.be" ] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01444016/file/Titeux_2016_GCB_postprint.pdf | Nicolas Titeux
email: nicolas.titeux@ctfc.es
Klaus Henle
Jean-Baptiste Mihoub
Adrián Regos
Ilse R Geijzendorffer
Wolfgang Cramer
Peter H Verburg
Lluís Brotons
Biodiversity scenarios neglect future land use changes. Running head: Land use changes and biodiversity scenarios
Keywords: Biodiversity projections, climate change, ecological forecasting, land cover change, land system science, predictive models, species distribution models, storylines
Efficient management of biodiversity requires a forward-looking approach based on scenarios that explore biodiversity changes under future environmental conditions. A number of ecological models have been proposed over the last decades to develop these biodiversity scenarios. Novel modelling approaches with strong theoretical foundation now offer the possibility to integrate key ecological and evolutionary processes that shape species distribution and community structure. Although biodiversity is affected by multiple threats, most studies addressing the effects of future environmental changes on biodiversity focus on a single threat only. We examined the studies published during the last 25 years that developed scenarios to predict future biodiversity changes based on climate, land use and land
Introduction
Biodiversity plays an important role in the provision of ecosystem functions and services [START_REF] Mace | Biodiversity and ecosystem services: a multilayered relationship[END_REF][START_REF] Bennett | Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability[END_REF]Oliver et al., 2015a). Yet, it is undergoing important decline worldwide due to human-induced environmental changes [START_REF] Collen | Monitoring change in vertebrate abundance: the living planet index[END_REF][START_REF] Pimm | The biodiversity of species and their rates of extinction, distribution, and protection[END_REF]. Governance and anticipative management of biodiversity require plausible scenarios of expected changes under future environmental conditions [START_REF] Sala | Global Biodiversity Scenarios for the Year 2100[END_REF][START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Larigauderie | Biodiversity and ecosystem services science for a sustainable planet: the DIVERSITAS vision for 2012-20[END_REF]. A forward-looking approach is essential because drivers of biodiversity decline and their associated impacts change over time. In addition, delayed mitigation efforts are likely more costly and timeconsuming than early action and often fail to avoid a significant part of the ecological damage [START_REF] Cook | Using strategic foresight to assess conservation opportunity[END_REF][START_REF] Oliver | The pitfalls of ecological forecasting[END_REF]. Hence, biodiversity scenarios are on the agenda of international conventions, platforms and programmes for global biodiversity conservation, such as the Convention on Biological Diversity (CBD) and the Intergovernmental Platform for Biodiversity & Ecosystem Services (IPBES) [START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF]; Secretariat of the Convention on Biological Diversity, 2014; [START_REF] Díaz | The IPBES Conceptual Frameworkconnecting nature and people[END_REF][START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF].
An increasing number of ecological models have been proposed over the last decades to develop biodiversity scenarios [START_REF] Evans | Predictive systems ecology[END_REF][START_REF] Kerr | Predicting the impacts of global change on species, communities and ecosystems: it takes time[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF]. They integrate and predict the effects of the two main factors that will determine the future of biodiversity:
(1) the nature, rate and magnitude of expected changes in environmental conditions and (2) the capacity of organisms to deal with these changing conditions through a range of ecological and evolutionary processes (Figure 1). Most modelling approaches rely on strong assumptions about the key processes that shape species distribution, abundance, community structure or ecosystem functioning [START_REF] Kearney | Mechanistic niche modelling: combining physiological and spatial data to predict species' ranges[END_REF][START_REF] Evans | Modelling ecological systems in a changing world[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF], with only few studies considering the adaptation potential of the species. Hence, recent work has mainly focused on improving the theoretical foundation of ecological models [START_REF] Evans | Predictive systems ecology[END_REF][START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF]Harfoot et al., 2014a;Zurell et al., 2016).
Yet, the credibility of developed biodiversity scenarios remains severely limited by the assumptions used to integrate the expected changes in environmental conditions into the ecological models.
Biodiversity scenarios draw upon narratives (storylines) of environmental change that project plausible socio-economic developments or particularly desirable future pathways under specific policy options and strategies [START_REF] Van Vuuren | Scenarios in global environmental assessments: key characteristics and lessons for future use[END_REF][START_REF] O'neill | The roads ahead: Narratives for shared socioeconomic pathways describing world futures in the 21st century[END_REF] (Figure 1). Although biodiversity is affected by multiple interacting driving forces (Millennium Ecosystem Assessment, 2005;[START_REF] Mantyka-Pringle | Interactions between climate and habitat loss effects on biodiversity: a systematic review and meta-analysis[END_REF][START_REF] Settele | Biodiversity: Interacting global change drivers[END_REF], most biodiversity scenarios are based on environmental change projections that represent a single threat only [START_REF] Bellard | Combined impacts of global changes on biodiversity across the USA[END_REF]. With a literature survey on the biodiversity scenarios published during the last 25 years, we show here a dominant use of climate change projections and a relative neglect of future changes in land use and land cover. The emphasis on the impacts of climate change reflects the urgency to deal with this threat as it emerges from studies, data and reports such as those produced by the Intergovernmental Panel on Climate Change (IPCC) [START_REF] Tingley | Climate change must not blow conservation off course[END_REF][START_REF] Settele | Terrestrial and inland water systems[END_REF]. The direct destruction or degradation of habitats are, however, among the most significant threats to biodiversity to date (Millennium Ecosystem Assessment, 2005;[START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF][START_REF] Newbold | Global effects of land use on local terrestrial biodiversity[END_REF][START_REF] Newbold | Global patterns of terrestrial assemblage turnover within and among land uses[END_REF] and not including them raises concerns for the credibility of biodiversity scenarios. Habitat destruction and
degradation result from both changes in the type of vegetation or human infrastructures that cover the land surface (i.e. land cover) and changes in the manner in which humans exploit and manage the land cover (i.e. land use) [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF][START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]. The lack of coherent and interoperable environmental change projections that integrate climate, land use and land cover across scales constitutes a major research gap that impedes the development of credible biodiversity scenarios and the implementation of efficient forward-looking policy responses to biodiversity decline. We identify key research challenges at the crossroads between ecological and environmental sciences, and we provide recommendations to overcome this gap.
Climate and land use/cover changes are important drivers of biodiversity decline
Biodiversity decline results from a number of human-induced drivers of change, including land use/cover change, climate change, pollution, overexploitation and invasive species [START_REF] Pereira | Global biodiversity change: the bad, the good, and the unknown[END_REF][START_REF] Leadley | Progress towards the Aichi Biodiversity Targets: An Assessment of Biodiversity Trends, Policy Scenarios and Key Actions[END_REF]. [START_REF] Ostberg | Three centuries of dual pressure from land use and climate change on the biosphere[END_REF] have recently estimated that climate and land use/cover changes have now reached a similar level of pressure on the biogeochemical and vegetation-structural properties of terrestrial ecosystems across the globe, but during the last three centuries land use/cover change has exposed 1.5 times as many areas to significant modifications as climate change. The relative impacts of these driving forces on biodiversity have also been assessed at the global scale. In its volume on state and trends, the Millennium Ecosystem Assessment (2005) reported that land use/cover change in terrestrial ecosystems has been the most important direct driver of changes in biodiversity and ecosystem services in the past 50 years. Habitat destruction or degradation due to land use/cover change constitute an on-going threat in 44.8% of the vertebrate populations included in the Living Planet Index (WWF, 2014) for which threats have been identified, whereas climate change is a threat in only 7.1% of them. A query performed on the website of the IUCN Red List of Threatened species (assessment during the period 2000-2015) indicates that more than 85% of the vulnerable or (critically) endangered mammal, bird and amphibian species in terrestrial ecosystems are affected by habitat destruction or degradation (i.e. residential and commercial development, agriculture and aquaculture, energy production and mining, transportation and service corridors, and natural system modification) and less than 20% are affected by climate
change and severe weather conditions (see also [START_REF] Pereira | Global biodiversity change: the bad, the good, and the unknown[END_REF]. Interactions between multiple driving forces, such as climate, land use and land cover changes, may further push ecological systems beyond tipping points [START_REF] Mantyka-Pringle | Interactions between climate and habitat loss effects on biodiversity: a systematic review and meta-analysis[END_REF][START_REF] Oliver | Interactions between climate change and land use change on biodiversity: attribution problems, risks, and opportunities[END_REF] and are key to understanding biodiversity dynamics under changing environmental conditions [START_REF] Travis | Climate change and habitat destruction: a deadly anthropogenic cocktail[END_REF][START_REF] Forister | Compounded effects of climate change and habitat alteration shift patterns of butterfly diversity[END_REF][START_REF] Staudt | The added complications of climate change: understanding and managing biodiversity and ecosystems[END_REF][START_REF] Mantyka-Pringle | Climate change modifies risk of global biodiversity loss due to land-cover change[END_REF].
Emphasis on climate change impacts in biodiversity scenarios
Available projections of climate and land use/cover changes [START_REF] Van Vuuren | Scenarios in global environmental assessments: key characteristics and lessons for future use[END_REF][START_REF] O'neill | The roads ahead: Narratives for shared socioeconomic pathways describing world futures in the 21st century[END_REF] are used to inform on future environmental conditions for biodiversity across a variety of spatial and temporal scales (de Chazal & Rounsevell, 2009) (Figure 1). Many studies have predicted the consequences of expected climate change on biodiversity [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF][START_REF] Staudinger | Biodiversity in a changing climate: a synthesis of current and projected trends in the US[END_REF][START_REF] Pacifici | Assessing species vulnerability to climate change[END_REF]. For instance, future climate change is predicted to induce latitudinal or altitudinal shifts in species ranges with important effects on ecological communities [START_REF] Maes | Predicted insect diversity declines under climate change in an already impoverished region[END_REF][START_REF] Barbet-Massin | The effect of range changes on the functional turnover, structure and diversity of bird assemblages under future climate scenarios[END_REF], to increase the risks of species extinction [START_REF] Thomas | Extinction risk from climate change[END_REF][START_REF] Urban | Accelerating extinction risk from climate change[END_REF] or to reduce the effectiveness of conservation areas [START_REF] Araújo | Climate change threatens European conservation areas[END_REF]. Projections of land use/cover change have been used to predict future changes in suitable habitats for a number of species [START_REF] Martinuzzi | Future land-use scenarios and the loss of wildlife habitats in the southeastern United States[END_REF][START_REF] Newbold | Global effects of land use on local terrestrial biodiversity[END_REF], to predict future plant invasions [START_REF] Chytrý | Projecting trends in plant invasions in Europe under different scenarios of future land-use change[END_REF], to estimate potential future extinctions in biodiversity hotspots [START_REF] Jantz | Future habitat loss and extinctions driven by land-use change in biodiversity hotspots under four scenarios of climate-change mitigation[END_REF] or to highlight the restricted potential for future expansion of protected areas worldwide [START_REF] Pouzols | Global protected area expansion is compromised by projected land-use and parochialism[END_REF]. [START_REF] Visconti | Socio-economic and ecological impacts of global protected area expansion plans[END_REF] estimated the coverage of suitable habitats for terrestrial mammals under future land use/cover change and based on global protected areas expansion plans. They showed that such plans might not constitute the most optimal conservation action for a large proportion of the studied species and that alternative strategies focusing on the most threatened species will be more efficient.
Climate and land use/cover change projections have also been combined in the same modelling framework to address how climate change will interplay with land use/cover change in driving the future of biodiversity [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Martin | Testing instead of assuming the importance of land use change scenarios to model species distributions under climate change[END_REF][START_REF] Ay | Integrated models, scenarios and dynamics of climate, land use and common birds[END_REF][START_REF] Saltré | How climate, migration ability and habitat fragmentation affect the projected future distribution of European beech[END_REF][START_REF] Visconti | Projecting Global Biodiversity Indicators under Future Development Scenarios[END_REF]. For instance, future refuge areas for orang-utans have been identified in Borneo
under projected climate change, deforestation and suitability for oil-palm agriculture [START_REF] Struebig | Anticipated climate and land-cover changes reveal refuge areas for Borneo's orang-utans[END_REF]. [START_REF] Alkemade | GLOBIO3: A Framework to Investigate Options for Reducing Global Terrestrial Biodiversity Loss[END_REF] used land use/cover change, climate change and projections of other driving forces to predict the future impacts of different global-scale policy options on the composition of ecological communities. Recently, it has been shown that the persistence of drought-sensitive butterfly populations under future climate change may be significantly improved if semi-natural habitats are restored to reduce fragmentation (Oliver et al., 2015b).
We searched published literature from 1990 to 2014 to estimate the yearly number of studies that developed biodiversity scenarios based on climate change projections, land use/cover change projections or the combination of both types of projections. A list of 2,313 articles was extracted from the search procedure described in Table 1. We expected a number of articles within this list would only weakly focus on the development of biodiversity scenarios based on climate and/or land use/cover change projections and therefore, we randomly sampled articles within this list (sample size: N=300). We then carefully checked their titles and abstracts to allocate each of them to one of the following categories:
1. Article reporting on the development of biodiversity scenarios based only on climate change projections
2. Article reporting on the development of biodiversity scenarios based only on land use/cover change projections
3. Article reporting on the development of biodiversity scenarios based on the use of climate and land use/cover change projections
4. Article reporting on the development of biodiversity scenarios based on other types of environmental change projections
5. Article not reporting on the actual development of biodiversity scenarios
We considered that articles reported on the development of biodiversity scenarios when they produced predictions of the response of biodiversity to future changes in environmental conditions.
We calculated for each year between 1990 and 2014 the proportions of studies allocated to each of the five categories among the random sample of articles. We used a window size of 5 years and we calculated two-sided moving averages of the yearly proportions along the 25-year long time series.
With this approach, we smoothed out short-term fluctuations due to the limited sample size and we highlighted the long-term trend in the proportions of articles allocated to the different categories.
We used these smoothed proportions estimated from the sample of articles and the total number of 2,313 articles extracted from the search procedure to estimate the yearly numbers of articles during 1990-2014 that reported on the development of biodiversity scenarios and that used climate change projections (category 1), land use/cover change projections (category 2) and both types of environmental change projections (category 3).
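To illustrate the estimation step just described, the short sketch below implements one plausible reading of it in Python with pandas: yearly category proportions are computed from the sampled articles, smoothed with a two-sided 5-year moving average, and then scaled by the 2,313 articles returned by the search to obtain estimated yearly article numbers per category. The input file name, the column names and the whole-sample normalisation are illustrative assumptions, not a reproduction of the authors' actual analysis.

```python
# Minimal sketch, assuming a CSV "sampled_articles.csv" with one row per
# randomly sampled article (N = 300) and two columns: "year" (1990-2014)
# and "category" (1-5). These names and the pandas implementation are
# assumptions made for illustration only.
import pandas as pd

TOTAL_ARTICLES = 2313        # articles returned by the full search procedure
YEARS = range(1990, 2015)    # 25-year time series
CATEGORIES = range(1, 6)     # the five categories defined above

sample = pd.read_csv("sampled_articles.csv")

# Number of sampled articles per year and category (years or categories with
# no sampled article are filled with zero).
counts = (sample.groupby(["year", "category"]).size()
                .unstack(fill_value=0)
                .reindex(index=YEARS, columns=CATEGORIES, fill_value=0))

# Proportion of the whole sample falling in each year/category cell.
proportions = counts / len(sample)

# Two-sided (centred) moving average over a 5-year window, applied to each
# category along the time series to smooth out short-term fluctuations
# (shorter windows are used at the edges of the series).
smoothed = proportions.rolling(window=5, center=True, min_periods=1).mean()

# Scale by the total number of articles to estimate the yearly number of
# studies using climate projections only (1), land use/cover projections
# only (2) or both types of projections (3).
estimated_counts = smoothed[[1, 2, 3]] * TOTAL_ARTICLES
print(estimated_counts.round(1))
```

Multiplying the smoothed whole-sample proportions by the total number of retrieved articles is only one way to translate the sampled categories into yearly article numbers; the published analysis may instead have normalised proportions within each year.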
Our survey revealed that the number of studies that have included the expected impacts of future land use/cover change on biodiversity falls behind in comparison with the number of studies that have focused on the effects of future climate change (Figure 2). Among the studies published during the period 1990-2014 and that drew upon at least one of these two driving forces to develop biodiversity scenarios, we estimated that 85.2% made use of climate change projections alone and that 4.1% used only projections of land use/cover change. Climate and land use/cover change projections were combined in 10.7% of the studies. A sensitivity analysis was carried out and indicates that the number of articles for which we checked the titles and abstracts was sufficient to reflect those proportions in a reliable way (Appendix S1 and Figure S1). The imbalance between the use of climate and land use/cover change projections has increased over time in the last 25 years and has now reached a maximum (Figure 2).
Where biodiversity scenarios lack credibility
Disregarding future changes in land use or land cover when developing biodiversity scenarios assumes that their effects on biodiversity will be negligible compared to the impacts of climate change. Two main reasons are frequently brought forward when omitting to include the effects of land use/cover change in biodiversity scenarios: (1) the available representations of future land use/cover
change are considered unreliable or irrelevant for addressing the future of biodiversity (e.g. [START_REF] Stanton | Combining static and dynamic variables in species distribution models under climate change[END_REF] and (2) climate change could outpace land use and land cover as the greatest threat to biodiversity in the next decades (e.g. [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF]. Here, we build on these two lines of arguments to discuss the lack of credibility of assuming unchanged land use/cover in biodiversity scenarios and to stress the need for further development of land use/cover change projections.
Available large-scale land use/cover change projections are typically associated with a relatively coarse spatial resolution and a simplified thematic representation of the land surface [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]. This is largely due to the fact that most of these projections have been derived from integrated assessment models which simulate expected changes in the main land cover types and their impacts on climate through emission of greenhouse gases (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]Harfoot et al., 2014b). A strong simplification of the representation of land use and land cover is inevitable due to the spatial extent and computational complexity of these models. Some studies have implemented downscaling methods based on spatial allocation rules to improve the representation of landscape composition in large-scale projections [START_REF] Verburg | Downscaling of land use change scenarios to assess the dynamics of European landscapes[END_REF]. Because their primary objective is to respond to the pressing need to assess future changes in climatic conditions and to explore climate change mitigation options, such downscaled projections use, however, only a small number of land cover types and are, consequently, of limited relevance for addressing the full impact of landscape structure and habitat fragmentation on biodiversity (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]Harfoot et al., 2014b).
In addition, much of land system science has focused on conversions between land cover types (e.g.
from forest to open land through deforestation), but little attention has been paid to capture some of the most important dimensions of change for biodiversity that result from changes in land use within, and not only between, certain types of land cover (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF]. Changes in land management regimes (e.g. whether grasslands are mown or grazed) and intensity of use (e.g. through wood harvesting or the use of fertilizers, pesticides and irrigation in cultivated areas) are known to strongly impact biodiversity [START_REF] Pe'er | EU agricultural reform fails on biodiversity[END_REF] and are expected to cause unprecedented habitat modifications in the next decades (Laurance,
2001; [START_REF] Tilman | Forecasting Agriculturally Driven Global Environmental Change[END_REF]. For instance, management intensification of currently cultivated areas [START_REF] Meehan | Agricultural landscape simplification and insecticide use in the Midwestern United States[END_REF] rather than agricultural surface expansion will likely provide the largest contribution to the future increases in agricultural production [START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]. These aspects of land use change remain poorly captured and integrated into currently available projections [START_REF] Rounsevell | Challenges for land system science[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF]. Furthermore, the frequency and sequence of changes in land use and land cover, or the lifespan of certain types of land cover, interact with key ecological processes and determine the response of biodiversity to such changes [START_REF] Kleyer | Mosaic cycles in agricultural landscapes of Northwest Europe[END_REF][START_REF] Watson | Land-use change: incorporating the frequency, sequence, time span, and magnitude of changes into ecological research[END_REF]. Although methods have become available to represent the dynamics and the expected trajectories of the land system [START_REF] Rounsevell | Challenges for land system science[END_REF], these temporal dimensions of change are still rarely incorporated in land use/cover change projections [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). This lack of integration between ecological and land system sciences limits the ability to make credible evaluations of the future response of biodiversity to land use and land cover changes in interaction with climate change (de Chazal [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). In turn, this makes it hazardous to speculate that the expected rate and magnitude of climate change will downplay the effects of land use/cover change on biodiversity in the future. There is no consensus on how the strength of future climate change impact should be compared to that of other threats such as changes in land use and land cover [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]. Some of the few studies that included the combined effect of both types of drivers in biodiversity scenarios have stressed that, although climate change will severely affect biodiversity at some point in the future, land use/cover change may lead to more immediate and even greater biodiversity decline in some terrestrial ecosystems [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Pereira | Scenarios for global biodiversity in the 21st century[END_REF][START_REF] Visconti | Projecting Global Biodiversity Indicators under Future Development Scenarios[END_REF]. 
For example, considerable habitat loss is predicted in some regions during the next few decades due to increasing pressures to convert natural habitats into agricultural areas [START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF]. The rapid conversion of tropical forests and natural grasslands for agriculture, timber production and other land uses [START_REF] Laurance | Saving logged tropical forests[END_REF] is expected to have more significant impacts on biodiversity than climate in the near future [START_REF] Jetz | Projected impacts of climate and land-use change on the global diversity of birds[END_REF][START_REF] Laurance | Biodiversity scenarios: projections of 21st century change in biodiversity and associated ecosystem services[END_REF]. Again, most of these studies focused on changes
that will emerge from conversions between different types of land cover and only few of them addressed the future impacts of land use change within certain types of land cover. For instance, the distribution changes of broad habitat types were predicted under future climate, land use and CO 2 change projections in Europe and it was shown that land use change is expected to have the greatest effects in the next few decades [START_REF] Lehsten | Disentangling the effects of land-use change, climate and CO2 on projected future European habitat types[END_REF]. In this region, effects of land use change might lead to both a loss and a gain of habitats benefitting different aspects of biodiversity. This will likely happen through parallel processes of intensification and abandonment of agriculture that offer potential for recovering wilderness areas [START_REF] Henle | Identifying and managing the conflicts between agriculture and biodiversity conservation in Europe -A review[END_REF][START_REF] Queiroz | Farmland abandonment: threat or opportunity for biodiversity conservation? A global review[END_REF]. These immediate effects of land use/cover changes on biodiversity deserve further attention with regard to the ecological forecast horizon, i.e. how far into the future useful predictions can be made [START_REF] Petchey | The ecological forecast horizon, and examples of its uses and determinants[END_REF]. Immediate changes in land use/cover may significantly alter the ability of ecological systems to deal with the impacts of climate change that are expected to be increasingly severe in the future [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]. Hence, ecological predictions that neglect the immediate effects of land use/cover changes and only focus on the effects of climate change in a distant future may be largely uncertain. It is therefore needed to identify appropriate time horizons for biodiversity scenarios, with increased reliance on those associated with greater predictability and higher policy relevance [START_REF] Petchey | The ecological forecast horizon, and examples of its uses and determinants[END_REF].
Climate change will exert severe impacts on the land system, but the way humans are managing the land will also influence climatic conditions, so that both processes interact with each other. For instance, deforestation and forest management constitute a major source of carbon loss with direct impacts on the carbon cycle and indirect effects on climate [START_REF] Pütz | Long-term carbon loss in fragmented Neotropical forests[END_REF][START_REF] Naudts | Europe's forest management did not mitigate climate warming[END_REF].
Climate change mitigation strategies include important modifications of the land surface such as the increased prevalence of biofuel crops. This mitigation action may pose some conflicts between important areas for biodiversity conservation and bioenergy production [START_REF] Alkemade | GLOBIO3: A Framework to Investigate Options for Reducing Global Terrestrial Biodiversity Loss[END_REF][START_REF] Fletcher | Biodiversity conservation in the era of biofuels: risks and opportunities[END_REF][START_REF] Meller | Balance between climate change mitigation benefits and land use impacts of bioenergy: conservation implications for European birds[END_REF]. In integrated assessment models or other global land use models, such interactions are often restricted to impacts of climate change on crop productivity and shifts in potential production areas. These models neglect a wide range of human adaptive responses
to climate change in the land system [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF], such as spatial displacement of activities [START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF]) that may pose a significant threat to biodiversity [START_REF] Estes | Using changes in agricultural utility to quantify future climate-induced risk to conservation[END_REF]. Increased attention to the feedback effects between climate and land use/cover changes is therefore needed to help assessing the full range of consequences of the combined impacts of these driving forces on biodiversity in the future.
Both climate and land use/cover changes are constrained or driven by large-scale forces linked to economic globalization, but the actual changes in land use/cover are largely determined by local factors [START_REF] Lambin | The causes of land-use and land-cover change: moving beyond the myths[END_REF][START_REF] Lambin | Global land use change, economic globalization, and the looming land scarcity[END_REF][START_REF] Rounsevell | Challenges for land system science[END_REF]. Modifications in the land system are highly location-dependent and a reflection of the local biophysical and socioeconomic constraints and opportunities [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF]. In Europe, observed changes in agricultural practices in response to increased market demands and globalization of commodity markets include the intensification of agriculture, the abandonment of marginally productive areas, and the changing scale of agricultural operations. These processes occur at the same time but at different locations across the continent [START_REF] Henle | Identifying and managing the conflicts between agriculture and biodiversity conservation in Europe -A review[END_REF][START_REF] Stürck | Simulating and delineating future land change trajectories across Europe[END_REF][START_REF] Van Vliet | Manifestations and underlying drivers of agricultural land use change in Europe[END_REF].
Hence, land use/cover change and its impacts on biodiversity are highly scale-sensitive processes: they show strongly marked contrasts from one location to the other [START_REF] Tzanopoulos | Scale sensitivity of drivers of environmental change across Europe[END_REF]. Many subtle changes that are locally or regionally significant for biodiversity may be seriously underestimated in the available land use/cover change projections because they are occurring below the most frequently used spatial, temporal and thematic resolution of analysis in large-scale land use models [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]. Most statistical downscaling approaches based on spatial allocation rules neglect such scale-sensitivity issues and therefore fail to represent landscape composition and structure to appropriately address the local or regional impacts of land use/cover changes on biodiversity [START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF].
A multi-scale, integrated approach is therefore required to unravel the relative and interacting roles of climate, land use and land cover in determining the future of biodiversity across a range of temporal and spatial scales. A good example of this need is the prediction of the impacts of changes in
disturbance regimes, such as fire, for which idiosyncratic changes may be expected in particular combinations of future climate and land use/cover changes [START_REF] Brotons | How fire history, fire suppression practices and climate change affect wildfire regimes in Mediterranean landscapes[END_REF][START_REF] Regos | Predicting the future effectiveness of protected areas for bird conservation in Mediterranean ecosystems under climate change and novel fire regime scenarios[END_REF].
A way forward for biodiversity scenarios
Most large-scale land cover change projections are derived from integrated assessment models. They are coherent to some extent with climate change projections because they are based on the same socio-economic storylines. This coherence is useful for studying the interplay between different driving forces. Integrated assessment models capture human energy use, industrial development, agriculture and main land cover changes within a single modelling framework. However, their original, primary objective is to provide future predictions of greenhouse gas emissions. It is therefore important to recognise that these models are not designed to describe the most relevant aspects of land use and land cover changes for (changes in) biodiversity [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF]Harfoot et al., 2014b). Here, we provide two recommendations to increase the ecological relevance of land use/cover change projections: (1) reconciling local and global land use/cover modelling approaches and (2) incorporating important ecological processes in land use/cover models.
Novel and flexible downscaling and upscaling methods to reconcile global-, regional-and local-scale land use modelling approaches are critically required and constitute one of the most burning issues in land system science [START_REF] Letourneau | A land-use systems approach to represent land-use dynamics at continental and global scales[END_REF][START_REF] Rounsevell | Challenges for land system science[END_REF][START_REF] Verburg | The representation of landscapes in global scale assessments of environmental change[END_REF]. An important part of the land use modelling community focuses on the development of modelling and simulation approaches at local to regional scales where human decision-making and land use/cover change processes are incorporated explicitly [START_REF] Rounsevell | Towards decision-based global land use models for improved understanding of the Earth system[END_REF]. These models offer potential to include a more detailed representation of land use/cover trajectories than integrated assessment models. Beyond the classification of dominant land cover types, they inform on land use, intensity of use, management regimes, and other dimensions of land use/cover changes (van Asselen [START_REF] Van Asselen | Land cover change or land-use intensification: simulating land system change with a global-scale land change model[END_REF]). An integration of scales will provide the opportunity to better represent the interactions between local trajectories and global dynamics [START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF] and to deal more explicitly with scale-sensitive factors such as land use/cover changes [START_REF] Tzanopoulos | Scale sensitivity of drivers of environmental change across Europe[END_REF]. To achieve this integration, a strengthened connection between ecological and land use modelling communities is
needed as it would ensure that the spatial, temporal and thematic representation of changes in land use models matches with the operational scale at which biodiversity respond to these changes. Harfoot et al. (2014b) recently suggested development needs for integrated assessment models and recommended the general adoption of a user-centred approach that would identify why ecologists need land use/cover change projections and how they intend to use them to build biodiversity scenarios. Although we believe such an approach will also be needed to ensure the ecological relevance of integrating the different scales of analysis in land use models, this will only be successful if ecologists increase their use of already available land use/cover change projections and suggest concrete modifications to improve their ecological relevance [START_REF] De Chazal | Land-use and climate change within assessments of biodiversity change: a review[END_REF][START_REF] Martin | Testing instead of assuming the importance of land use change scenarios to model species distributions under climate change[END_REF]. To address the scale-sensitivity issue thoroughly, we should also move beyond the current emphasis on large and coarse scale of analysis in global change impact research and increase our recognition for studies examining the local and regional effects of climate and land use/cover changes on biodiversity.
Ecological processes in marine, freshwater or terrestrial ecosystems remain poorly incorporated in existing integrated assessment models and other land use models (Harfoot et al., 2014b). Ecological processes in natural and anthropogenic ecosystems provide essential functions, such as pollination, disease or pest control, nutrient or water cycling and soil stability, that exert a strong influence on land systems through complex mechanisms [START_REF] Sekercioglu | Ecosystem consequences of bird declines[END_REF][START_REF] Klein | Importance of pollinators in changing landscapes for world crops[END_REF].
Incorporating these processes at appropriate spatial and temporal scales in land use models constitutes an important challenge, but it would considerably increase the ecological realism of these models and, in turn, their ability to predict emergent behaviour of the future ecosystems and the related biodiversity patterns (Harfoot et al., 2014b). Therefore, we urge the need for strengthened interactions between different scientific communities to identify (1) which ecological processes are relevant in driving land use/cover dynamics and (2) how and at which scales these processes could be incorporated in land use models to predict the trajectories of socio-ecological systems.
A successful implementation of our two recommendations does not solely depend on collaborative scientific efforts, but it also requires societal agreement and acceptance. The dialogue with and engagement of stakeholders, such as policy advisers and NGOs, within a participatory modelling framework [START_REF] Voinov | Modelling with stakeholders[END_REF] will be key to agreeing on a set of biodiversity-oriented storylines and desirable pathways at relevant spatial and temporal scales for decision-making processes in biodiversity conservation and management. An improved integration of the expertise and knowledge from social science into the development and interpretation of the models may allow a better understanding of likely trajectories of land use/cover changes. Moreover, such an integration would provide a better theoretical understanding and practical use of social-ecological feedback loops in form of policy and management responses to changes in biodiversity and ecosystem services, which in turn will impact future land use decisions and trajectories.
The priority given to investigating future climate change impacts on biodiversity most likely reflects how the climate change community has attracted attention during the last decades. The availability of long-term time series of climatic observations in most parts of the world and the increasing amount of science-based, spatially explicit climatic projections derived from global and regional circulation models have clearly stimulated the development of studies focusing on the impacts of climate change [START_REF] Tingley | Climate change must not blow conservation off course[END_REF]Harfoot et al., 2014b). Under the World Climate Research Programme (WCRP), the working group on coupled modelling has established the basis for climate model diagnosis, validation, inter-comparison, documentation and accessibility [START_REF] Overpeck | Climate Data Challenges in the 21st Century[END_REF]. The requirements for climate policy, mediated through the IPCC, have further mobilized the use of a common reference in climate observations and simulations by the scientific community. The set of common future emission scenarios (SRES) released in 2000 [START_REF] Nakicenovic | Special Report on Emission Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change[END_REF], the more recent representative concentration pathways (RCPs) [START_REF] Van Vuuren | The representative concentration pathways: an overview[END_REF], and the fact that these
can be shared easily have played a major role in mobilizing the scientific community to use climate change projections in biodiversity scenarios. Work is underway to facilitate open access to land use/cover change time series and projections, but clear and transparent documentation of land use model representations, uncertainties and differences is also needed and should be understandable and interpretable by a broad interdisciplinary audience (Harfoot et al., 2014b).
The IPCC has also clearly demonstrated that an independent intergovernmental body is an appropriate platform for attracting the attention of the non-scientific community. Many actors now perceive climate change as an important threat to ecosystem functions and services. This emphasis can be heard in the media and among policy makers, such as during the United Nations conferences on climate change. As a response to the increasing societal and political relevance of climate change, research efforts have been mostly directed towards climate change impact assessments [START_REF] Herrick | Land degradation and climate change: a sin of omission?[END_REF]. From this observed success of the IPCC and the climate change community, it becomes evident that an independent body is needed for mobilizing the scientific and non-scientific communities to face the significant challenge of developing biodiversity-oriented references for land use and land cover change projections. With its focus on multi-scale, multi-disciplinary approaches, the working programme of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES) [START_REF] Inouye | IPBES: global collaboration on biodiversity and ecosystem services[END_REF][START_REF] Díaz | The IPBES Conceptual Frameworkconnecting nature and people[END_REF][START_REF] Lundquist | Engaging the conservation community in the IPBES process[END_REF] is offering a suitable context to stimulate collaborative efforts for taking up this challenge. In line with [START_REF] Kok | Biodiversity and ecosystem services require IPBES to take novel approach to scenarios[END_REF], we therefore encourage IPBES to strengthen its investment in the development and use of interoperable and plausible projections of environmental changes that will allow to better explore the future of biodiversity.
Conclusion
Neglecting the future impacts of land use and land cover changes on biodiversity and focusing on climate change impacts only is not a credible approach. We are concerned that such an overemphasis on climate change reduces the efficiency of identifying forward-looking policy and management responses to biodiversity decline. However, the current state of integration between ecological and land system sciences impedes the development of a comprehensive and well-balanced research agenda addressing the combined impacts of future climate, land use and land cover changes on biodiversity and ecosystem services. We recommend addressing two key areas of developments to increase the ecological relevance of land use/cover change projections: (1) reconciling local and
global land use/cover modelling approaches and (2) incorporating important ecological processes in land use/cover models. A multi-disciplinary framework and continuing collaborative efforts from different research horizons are needed and will have to build on the efforts developed in recent years by the climate community to agree on a common framework in climate observations and simulations.
It is now time to extend these efforts across scales in order to produce reference environmental change projections that embrace multiple pressures such as climate, land use and land cover changes. IPBES offers a timely opportunity for taking up this challenge, but this independent body can only do so if adequate research efforts are undertaken.
Figure captions
Figure 1. Biodiversity scenarios: a predictive tool to inform policy-makers on expected biodiversity responses (after [START_REF] Bellard | Impacts of climate change on the future of biodiversity[END_REF] with minor modifications) to future human-induced environmental changes. A great variety of ecological models integrate the nature, rate and magnitude of expected changes in environmental conditions and the capacity of organisms to deal with these changing conditions to generate biodiversity scenarios [START_REF] Thuiller | A road map for integrating eco-evolutionary processes into biodiversity models[END_REF].
Figure 2. Relative neglect of future land use and land cover change in biodiversity scenarios.
Acknowledgements
N.T., K.H., J.B.M., I.R.G., W.C. and L.B. acknowledge support from the EU BON project (no. 308454, FP7-ENV-2012, European Commission, Hoffmann et al., 2014). N.T. and L.B. were also funded by the TRUSTEE project (no. 235175, RURAGRI ERA-NET, European Commission). N.T., A.R. and L.B. were also supported by the FORESTCAST project (CGL2014-59742, Spanish Government). I.R.G. and W.C. contribute to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). P.H.V.
received funding from the GLOLAND project (no. 311819, FP7-IDEAS-ERC, European Commission). We thank Piero Visconti and one anonymous reviewer for useful comments on a previous version of this paper.
Iyengar L, Jeffries B, Oerlemans N). WWF International, Gland, Switzerland. Zurell D, Thuiller W, Pagel J et al. (2016) Benchmarking novel approaches for modelling species range dynamics. Global Change Biology, accepted, doi: 10.1111/gcb.13251.
Supporting Information captions
Appendix S1. Sensitivity analysis to examine the effect of sample size in the literature survey.
Figure S1. Effect of sample size in the literature survey.
Tables Table 1. We used Boolean operators "AND" to combine the different queries and we refined the obtained results using "Articles" as Document Type and using "Ecology" or "Biodiversity conservation" as Web of Science Categories. We also tested if the parameters that we used in the query #3 might potentially underestimate the number of studies focusing on land use/cover change. To do so, we tried to capture land use/cover change in a broader sense and we included additional parameters in the query #3 as follows: ("climat* chang*" OR "chang* climat*") OR ("land use chang*" OR "land cover chang*" OR "land* system* chang*" OR "land* chang*" OR "habitat loss*" OR "habitat degradation*" OR "habitat chang*" OR "habitat modification*"). We refined the results as described above and we obtained a list of 2,388 articles, that is, only 75 additional articles compared to the search procedure with the initial query #3 (see main text). Hence, the well-balanced design of the search procedure as described in the table does not underestimate the use of land use/cover change projections compared to climate change projections in biodiversity scenarios studies. |
01444653 | en | [ "sde.be" ] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01444653/file/huggel_etal_resubm_final.pdf | Christian Huggel
email: christian.huggel@geo.uzh.ch
Ivo Wallimann-Helmer
Dáithí Stone
email: dstone@lbl.gov
Wolfgang Cramer
email: wolfgang.cramer@imbe.fr
Reconciling justice and attribution research to advance climate policy
The Paris Climate Agreement is an important step for international climate policy, but the compensation for negative effects of climate change based on clear assignment of responsibilities remains highly debated. Both from a policy and science perspective, it is unclear how responsibilities should be defined and on what evidence base. We explore different normative principles of justice relevant to climate change impacts, and ask how different forms of causal evidence of impacts drawn from detection and attribution research could inform policy approaches in accordance with justice considerations. We reveal a procedural injustice based on the imbalance of observations and knowledge of impacts between developed and developing countries. This type of injustice needs to be considered in policy negotiations and decisions, and efforts be strengthened to reduce it.
The Paris Agreement 1 of the United Nations Framework Convention on Climate Change (UNFCCC) is considered an important milestone in international climate policy. Among the most critical points during the Paris negotiations were issues related to climate justice, including the question about responsibilities for the negative impacts of anthropogenic climate change. Many developing countries continued to emphasize the historical responsibility of the developed world. On the other hand, developed countries were not willing to bear the full burden of climate responsibilities, reasons among others being the current high levels of greenhouse gas emissions and substantial financial power of some Parties categorized as developing countries (i.e. Non-Annex I) in the UNFCCC. Many Annex I Parties were particularly uncomfortable with the issue of 'Loss and Damage' (L&D), which is typically defined as the residual, adverse impacts of climate change beyond what can be addressed by mitigation and adaptation [START_REF] Warner | Loss and damage from climate change: local-level evidence from nine vulnerable countries[END_REF][START_REF] Okereke | Working Paper 19 pp[END_REF] . Although L&D is now anchored in the Paris Agreement in a separate article (Article 8) [START_REF] Cramer | Adoption of the Paris Agreement[END_REF] , questions of responsibility and claims for compensation of negative impacts of climate change basically remain unsolved.
Claims for compensation, occasionally also called climate 'reparations' [START_REF] Burkett | Climate Reparations[END_REF] , raise the question of who is responsible for which negative climate change impacts, how to define such responsibilities and on the basis of what type of evidence. Scientific evidence has become increasingly available from recent studies and assessments, termed "detection and attribution of climate change impacts", revealing numerous discernable impacts of climate change on natural, managed and human systems worldwide [START_REF] Rosenzweig | Detection and attribution of anthropogenic climate change impacts[END_REF][START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF][START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] . In some cases, these impacts have been found to be substantial, but often the effects of multiple non-climatic drivers ('confounders') acting on natural and especially human and managed systems (e.g. land-use change, technical developments) have either been greater than the effect of climate change or have rendered attempts to determine the relative importance thereof difficult. A significant portion of attribution research has focused on the effects of increased atmospheric greenhouse gas concentrations on extreme weather events, yet usually without adopting an impacts perspective [START_REF]Attribution of extreme weather events in the context of climate change. 144[END_REF] . Recent studies have therefore emphasized the need for a more comprehensive attribution framework that considers all components of risk (or L&D), including vulnerability and exposure of assets and values in addition to climate hazards [START_REF] Huggel | Loss and damage attribution[END_REF] . Other contributions have discussed the role of attribution analysis for adaptation and L&D policies [START_REF] Allen | The blame game[END_REF][START_REF] Pall | Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000[END_REF][START_REF] Hulme | Attributing weather extremes to 'climate change' A review[END_REF][START_REF] James | Characterizing loss and damage from climate change[END_REF] .
How detection and attribution research could inform, or engage with climate policy and justice debates is currently largely unclear. Some first sketches of a justice framework to address the assignment of responsibility for L&D have recently been developed [START_REF] Thompson | Ethical and normative implications of weather event attribution for policy discussions concerning loss and damage[END_REF][START_REF] Wallimann-Helmer | Justice for climate loss and damage[END_REF] . However, the question of which type of evidence would best cohere with each of the various concepts of justice has not been addressed despite its importance for the achievement of progress in international climate policy.
In this Perspective we explore the different concepts and dimensions of normative justice research relevant to issues of climate change impacts (see Textbox 1). We adopt a normative perspective and analyze how the application of principles of justice can inform respective political and legal contexts.
We study the extent to which different forms of scientific evidence on climate change impacts, including detection and attribution research (see Textbox 2), can contribute to, or inform, the respective justice questions and related policy debates. Normative principles of justice define who is morally responsible for an impact and how to fairly distribute the burdens of remedy. In the political and in particular in the legal context liability defines an agent's legal duties in case of unlawful behavior [START_REF] Hayward | Climate change and ethics[END_REF] . Liability of an agent for climate change impacts defines a legal duty to pay for remedy of the negative effects. Liability can comprise compensation for L&D but also, for instance, include fines [START_REF] Hart | Causation in the law[END_REF][START_REF] Honoré | Stanford Encyclopedia of Philosophy[END_REF] .
In the following we first address questions of liability and compensation and why a potential implementation faces many hurdles on the scientific, political and legal level. We then consider the role that recognition of moral responsibilities for climate change impacts could play in fostering political reconciliation processes. Third, we explore the feasibility of the principle of ability to assist (or pay) and focus on risk management mechanisms as a response to immediate and preventive needs. Finally, we address the uneven distribution of knowledge about impacts across the globe as assessed in the 5 th Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and reveal an additional injustice on a procedural level with important further implications for policy and science.
BEGIN TEXT BOX 1: Justice principles relevant for climate change impacts
International climate policy is loaded with moral evaluations. The fact that emissions of greenhouse gases from human activities lead to climate change is not morally blameworthy as such. In order to assess emissions as ethically relevant it is necessary to evaluate their consequences based on normative principles. The level at which climate change is "dangerous" in an ethically significant sense has to be defined. Similarly, normative principles become relevant when differentiating responsibilities in order to deal with the adverse effects of climate change [START_REF] Hayward | Climate change and ethics[END_REF][START_REF] Mckinnon | Climate justice in a carbon budget[END_REF][START_REF] Pachauri | Climate ethics: Essential readings[END_REF] . In climate policy, as reflected in normative climate justice research, the following principles are relevant for establishing who bears responsibility for climate change impacts and for remedying those impacts:
Polluter-Pays-Principle (PPP): It is commonly accepted that those who have contributed or are contributing more to anthropogenic climate change should shoulder the burdens of minimizing and preventing climate change impacts in proportion to the magnitude of their contribution to the problem. From a PPP perspective, it is not only high-emitting developed countries that are called upon to share the burden and assist low-emitting communities facing climate change risks, but also high-emitting developing countries [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Gardiner | Ethics and Global Climate Change[END_REF][START_REF] Shue | Global Environment and International Inequality[END_REF] .
Beneficiary-Pays-Principle (BPP):
The BPP addresses important ethical challenges emerging from the PPP such as that some people have profited from past emissions, yet have not directly contributed to anthropogenic climate change. The BPP claims that those benefitting from the high emissions of others (e.g. their ancestors or other high-emitting co-citizens) are held responsible to assist those impacted by climate change irrespective of whether they themselves caused these emissions [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Halme | Carbon Debt and the (In)significance of History[END_REF][START_REF] Gosseries | Historical Emissions and Free-Riding[END_REF][START_REF] Baatz | Responsibility for the Past? Some Thoughts on Compensating Those Vulnerable to Climate Change in Developing Countries[END_REF] .
Ability-to-Pay-Principle (APP):
The PPP and BPP both establish responsibilities irrespective of the capacity of the duty-bearers to contribute to climate change measures or reduce emissions. This can result in detrimental situations for disadvantaged high emitters and beneficiaries, be they individuals or countries. Following the APP, only those capable of carrying burdens are responsible to contribute to climate change measures or emission reductions [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Shue | Global Environment and International Inequality[END_REF][START_REF] Caney | Cosmopolitan Justice, Responsibility, and Global Climate Change[END_REF] .
In this Perspective, we deal with the APP under the label of "Ability-to-Assist-Principle" (AAP) in order to broaden the perspective beyond monetary payments toward consideration of assistance with climate change impacts more generally. Furthermore, we do not address the difference between the PPP and the BPP because to a large extent the sets of duty-bearers identified by the two principles overlap. None of the above principles provides any natural guidance on the threshold for emissions in terms of quantity or historical date at which they become a morally relevant contribution to dangerous climate change.
END TEXT BOX 1: Justice principles relevant for climate change impacts
BEGIN TEXT BOX 2: Evidence that climate change has impacted natural and human systems
Scientific evidence that human-induced climate change is impacting natural and human systems can come in a number of forms, each having different applications and implications [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . We draw here an analogy to U.S. environmental litigation [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF] where typically two types of causation are relevant: "general causation" refers to the question of whether a substance is capable of causing a particular damage, injury or condition, while "specific causation" refers to a particular substance causing a specific individual's injury.
In the line of general causation, evidence for the potential existence of anthropogenic climate change impacts is relatively abundant (for more examples and references see the main text). Long-term monitoring may, for instance, reveal a trend toward more frequent wildfires in an unpopulated area.
These observations may have little to say about the relevance of climate change, or of emissions for that climate change, but they can be useful for highlighting the potential urgency of an issue.
Another form of evidence may come from a mechanistic understanding of how a system should respond to some change in its environmental conditions. The ranges of plant and animal species may, for instance, shift polewards in response to an observed or expected warming. In this case, the relevance to human-induced climate change may be explicit, but it remains unclear whether the range shifts have indeed occurred.
In order to be confident that an impact of anthropogenic climate change has indeed occurred, more direct evidence is required, akin to "specific causation" in U.S. environmental litigation [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF] . The most complete set of information for understanding past changes in climate and its impacts, commonly referred to as "detection and attribution", combines observational and mechanistic evidence by confronting predictions of recent changes based on our mechanistic understanding with observations of long-term variations [START_REF] Stone | The challenge to detect and attribute effects of climate change on human and natural systems[END_REF] . These analyses address two questions: the first, detection, examines whether the natural or human system has indeed been affected by anthropogenic climate change, as opposed to changes that may be related to natural climate variability or non-climatic factors. The second, attribution, estimates the magnitude of the effect of anthropogenic climate change as compared to the effect of other factors. These other factors (also termed 'confounders') might be considered external drivers of the observed change (e.g. deforestation driving land-cover changes).
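This two-step logic can be summarized in a schematic decomposition of an observed change; the notation below is our own illustration and not a formalism used in the cited assessments:

\[
\Delta Y_{\mathrm{obs}} = \Delta Y_{\mathrm{ACC}} + \Delta Y_{\mathrm{nat}} + \Delta Y_{\mathrm{conf}} + \varepsilon ,
\]

where \(\Delta Y_{\mathrm{obs}}\) is the observed change in an impact variable, \(\Delta Y_{\mathrm{ACC}}\) the contribution of anthropogenic climate change, \(\Delta Y_{\mathrm{nat}}\) the contribution of natural climate variability, \(\Delta Y_{\mathrm{conf}}\) the contribution of non-climatic confounders, and \(\varepsilon\) residual noise. In this schematic reading, detection asks whether \(\Delta Y_{\mathrm{ACC}}\) is distinguishable from zero given \(\Delta Y_{\mathrm{nat}}\), \(\Delta Y_{\mathrm{conf}}\) and \(\varepsilon\); attribution estimates the magnitude of \(\Delta Y_{\mathrm{ACC}}\) relative to the other terms.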
Impacts of multi-decadal trends in climate have now been detected in many different aspects of natural and human systems across the continents and oceans of the planet [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . Analysis of the relevant climate trends suggests that anthropogenic emissions have played a major role in at least two thirds of the impacts induced by warming, but few of the impacts resulting from precipitation trends can yet be confidently linked to anthropogenic emissions [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] . Overall, research on detection and attribution of climate change impacts is still emerging and there remain few studies available that demonstrate a causal link between anthropogenic emissions, climate trends and impacts.
END TEXT BOX 2: Evidence that climate change has impacted natural and human systems
Liability and compensation
Compensation of those who suffer harm by those responsible for it, and more specifically by those responsible for the negative impacts of climate change, represents a legitimate claim from the perspective of normative justice research [START_REF] Shue | Global Environment and International Inequality[END_REF][START_REF] Goodin | Theories of Compensation[END_REF][START_REF] Miller | Global justice and climate change: How should responsibilities be distributed[END_REF][START_REF] Pogge | World poverty and human rights: Cosmopolitan responsibilities and reforms[END_REF] . In their most common understanding, principles such as the PPP or BPP provide the justice framework to identify those responsible for climate change impacts and establish a basis for liability and compensation (see Textbox 1). However, issues of compensation have not yet been sufficiently clarified and remain contested in international climate policy. Driven by pressure exerted by countries such as the U.S., the notion that L&D involves or provides a basis for liability and compensation has been explicitly excluded in the decisions taken in Paris 2015 [START_REF] Cramer | Adoption of the Paris Agreement[END_REF] . L&D has previously been thought to require consideration of causation, as well as of deviations from some (possibly historical) baseline condition [START_REF] Verheyen | Beyond Adaptation-The legal duty to pay compensation for climate change damage[END_REF] . The Paris Agreement and related discussions have not offered any clarity about what type of evidence would be required for claims of liability and compensation to be legitimate, either from a normative perspective considering different principles of justice (see Textbox 1) or in relation to legal mechanisms under international policy. Liability and compensation represent the strongest and most rigid reference frame to clarify who is responsible to remedy climate change impacts, but they also involve major challenges in terms of both policy and science, as we outline below. Liability and compensation require clarification of whether impacts are due to natural climate variability or to anthropogenic climate change, since no one can be morally blamed or held legally liable for negative impacts wholly resulting from natural climate variability [START_REF] Page | Distributing the Burdens of Climate Change[END_REF][START_REF] Gardiner | Ethics and Global Climate Change[END_REF][START_REF] Caney | Cosmopolitan Justice, Responsibility, and Global Climate Change[END_REF] . Accordingly, and as further detailed below, we suggest that here the strongest scientific evidence in line with specific causation is required, i.e. detection and attribution (see Textbox 2).
Figure 1 sketches a detection and attribution framework as it has been developed in recent research [START_REF] Stone | The challenge to detect and attribute effects of climate change on human and natural systems[END_REF][START_REF] Hansen | Linking local impacts to changes in climate: a guide to attribution[END_REF] and assessments [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . It reflects the relation between climatic and non-climatic drivers and detected climate change impacts in both natural and human systems at a global scale. As a general guideline, changes in many physical, terrestrial and marine ecosystems are strongly governed by climatic drivers such as regional changes in average or extreme air temperature, precipitation, or ocean water temperature. Due to the high likelihood of a major anthropogenic role in observed trends in these regional climate drivers, there is potential for high confidence in detection and attribution of related impacts of anthropogenic climate change [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] .
The negative impacts of climate change potentially relevant for liability and compensation usually concern human systems, and for these, climatic drivers are typically less important than for natural systems: any anthropogenic climate effect can be outweighed by the magnitude of socio-economic changes, for instance considered in terms of exposure and vulnerability (e.g. expansion of exposed assets or people, or increasingly climate-resilient infrastructure). As a consequence, as documented in the IPCC AR5 and subsequent studies [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF][START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF] , there is currently only low confidence in the attribution of a major climate change role in impacts on human systems, except for polar and high mountain regions where livelihood conditions are strongly tied to climatic and cryospheric systems (Fig. 1).
In order to establish confidence in the detection of impacts, long-term, reliable, high-quality observations, as well as better process understanding, are crucial for both natural and human systems. Assuming that some substantial level of confidence will be required for issues of liability and compensation, we need to recognize that a very high bar is set by requiring high-quality observations over periods of several decades. Precisely these requirements are likely one of the reasons why studies of detection and attribution of impacts to anthropogenic climate change are still rare.
In the context of liability and compensation, a separate pathway to climate policy is being developed through climate litigation under existing laws. In some countries, such as the U.S., climate litigation has been used to advance climate policy, but so far only a small fraction of lawsuits have been concerned with questions of rights and liabilities related to damage or tort due to climate change impacts [START_REF] Peel | Climate Change Litigation[END_REF][START_REF] Markell | An Empirical Assessment of Climate Change In The Courts: A New Jurisprudence Or Business As Usual?[END_REF] . In the U.S., where by far the most such lawsuits are documented worldwide, several cases seeking to impose monetary penalties or injunctive relief on greenhouse gas emitters have been brought to court, but so far all of them have ultimately failed [START_REF] Wilensky | Climate change in the courts: an assessment of non-U.S. climate litigation[END_REF] . One of the most prominent lawsuits is known as California v. General Motors, where the State of California claimed monetary compensation from six automakers for damage due to climate change under the tort liability theory of public nuisance.
Damages specified for California included reduced snow pack, increased coastal erosion due to rising sea levels, and increased frequency and duration of extreme heat events. As with several other lawsuits, the case was dismissed on the grounds that non-justiciable political questions were raised.
Further legal avenues that have been taken and researched with respect to the negative impacts of climate change include human rights in both domestic and international law [START_REF] Verheyen | Beyond Adaptation-The legal duty to pay compensation for climate change damage[END_REF][START_REF] Mcinerney-Lankford | Human Rights and Climate Change: A Review of the International Legal Dimensions[END_REF][START_REF] Posner | Climate Change and International Human Rights Litigation: A Critical Appraisal[END_REF] .
Generally, currently available experience cannot sufficiently clarify what type of evidence would be needed in court to defend a legal case on climate change liability. However, there is useful precedent from litigation over harm caused by exposure to toxic substances, where specific causation is typically required [START_REF] Farber | Basic Compensation for Victims of Climate Change[END_REF] . In our context, this therefore translates into detection and attribution of impacts of anthropogenic climate change.
Overall, experience so far indicates that the hurdles are considerable, and they may range from aspects of justiciability, to the proof required for causation, to the applicability of the no-harm rule established in international law or of extraterritoriality in human rights law [START_REF] Schleiter | Proving Causation in Environmental Litigation[END_REF][START_REF] Mcinerney-Lankford | Human Rights and Climate Change: A Review of the International Legal Dimensions[END_REF][START_REF] Weisbach | Negligence, strict liability, and responsibility for climate change[END_REF][START_REF] Maldonado | The impact of climate change on tribal communities in the US: displacement, relocation, and human rights[END_REF] .
Based on these challenges and on the analysis of precedents from cases of harm due to exposure to toxic substances, some scholars favor ex-ante over ex-post compensation and refer to experiences with monetary disaster funds used to compensate affected victims [START_REF] Farber | Basic Compensation for Victims of Climate Change[END_REF] . It is interesting to note that ex-ante compensation is claimed in one of the very few lawsuits on climate change liability that have been accepted by a court. In this currently ongoing legal case at a German court, a citizen of the city of Huaraz in Peru is suing RWE, a large German energy producer, for its cumulative emissions causing an increased local risk of floods from a glacier lake in the Andes that formed as glaciers receded. Specific causation is likely required for this case, but additional difficulty arises from proving the relation between harm to an individual and emissions. From an attribution point of view, governments are in a better position to claim compensation than individuals because damages due to climate change can be aggregated over time and space across their territory and/or economic interests [START_REF] Grossman | Adjudicating climate change: state, national, and international approaches[END_REF] .
In conclusion, at the current state of legal practice, political discussions and available scientific evidence, significant progress in terms of liability and compensation seems rather unlikely in the near future. Politically, creating a monetary fund in line with considerations of ex-ante compensation may yet be the most feasible mechanism. In the following, we present two alternative approaches to achieve justice in relation to climate change impacts.
Recognition of responsibilities and reconciliation
As a first alternative we refer to the notion that legitimate claims of justice may extend beyond questions of liability and compensation, involving instead restorative justice, and more specifically recognition and acknowledgement of moral responsibilities for climate change impacts [START_REF] Thompson | Ethical and normative implications of weather event attribution for policy discussions concerning loss and damage[END_REF] . Following from that, we argue that recognition of responsibilities would be a first important step in any process of reconciliation.
Reconciliation is often discussed in the context of normative restorative (or transitional) justice research, which typically relates to the aftermath of violence and repression [START_REF] May | Restitution, and Transitional Justice. Moral[END_REF][START_REF] Roberts | Encyclopedia of Global Justice[END_REF] . In this context it is argued that recognition of wrongs is important in order to attain and maintain social stability [START_REF] Eisikovits | Stanford Encyclopedia of Philosophy[END_REF] . In the case of the negative effects of climate change, recognition could play a similar role. However, since the most negative effects of climate change will occur at least several decades from now, ex-ante recognition of responsibilities for climate change impacts would be required to support maintaining social stability. Recognition of responsibilities is neither the final step nor does it exclude the possibility of compensation, but we suggest it can represent a fundamental element in the process, especially where recovery has limitations. This is particularly the case when impacts of climate change are irreversible, such as the submersion of low-lying islands, permafrost thawing in the Arctic, or the loss of glaciers in mountain regions [START_REF] Bell | Environmental Refugees: What Rights? Which Duties?[END_REF][START_REF] Byravan | The Ethical Implications of Sea-Level Rise Due to Climate Change[END_REF][START_REF] Heyward | New Waves in Global Justice[END_REF][START_REF] Zellentin | Climate justice, small island developing states & cultural loss[END_REF] .
On the level of scientific evidence, recognition of responsibilities as a first step in a reconciliation process implies clarification of those who caused, or contributed to, negative impacts of anthropogenic climate change, and of those who suffer the damage and losses. If the goal is a practical first step in a reconciliation process between those generally contributing to and those generally being impacted by climate change, rather than experiencing a specific impact, then we argue that basic understanding of causation (i.e. general causation) could provide sufficient evidence.
Understanding of general causation (see Textbox 2) can rely on multiple lines of evidence collected from observations, modeling or physical understanding, but not all are necessarily required, nor do they all have to concern the exact impact and location in question [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . According to physical understanding, for instance, warming implies glacier shrinkage and thus changes in the contribution of ice melt to river runoff [START_REF] Kaser | Contribution potential of glaciers to water availability in different climate regimes[END_REF][START_REF] Schaner | The contribution of glacier melt to streamflow[END_REF] or formation and growth of glacier lakes with possible lake outburst floods and associated risks [START_REF] Iribarren Anacona | Hazardous processes and events from glacier and permafrost areas: lessons from the Chilean and Argentinean Andes[END_REF][START_REF] Allen | hydrometeorological triggering and topographic predisposition[END_REF] . As another example, given the sensitivity of crops such as grapes or coffee to changes in temperature, precipitation, and soil moisture [START_REF] Jaramillo | Climate Change or Urbanization? Impacts on a Traditional Coffee Production System in East Africa over the Last 80 Years[END_REF][START_REF] Hannah | Climate change, wine, and conservation[END_REF][START_REF] Moriondo | Projected shifts of wine regions in response to climate change[END_REF] , we can expect that yield, quality, phenology, pests and diseases, planting site suitability and possibly supply chains may be affected [START_REF] Laderach | The Economic, Social and Political Elements of Climate Change[END_REF][START_REF] Holland | Climate Change and the Wine Industry: Current Research Themes and New Directions[END_REF][START_REF] Webb | Earlier wine-grape ripening driven by climatic warming and drying and management practices[END_REF][START_REF] Baca | An Integrated Framework for Assessing Vulnerability to Climate Change and Developing Adaptation Strategies for Coffee Growing Families in Mesoamerica[END_REF] . However, our understanding will be limited with respect to the exact magnitude of these impacts, especially along cascades of impacts from crop production to food supply. Further challenges arise from ongoing adaptation in human and managed systems, in particular for agricultural systems as demonstrated in recent studies [START_REF] Lobell | Climate change adaptation in crop production: Beware of illusions[END_REF][START_REF] Lereboullet | Socio-ecological adaptation to climate change: A comparative case study from the Mediterranean wine industry in France and Australia[END_REF] . Thus, while we suggest that understanding of general causation could serve the reconciliation processes, the value and limitations of this sort of evidence may vary among different types of impacts, and such evidence is not likely to be sufficient to attain justice in the full sense. In climate policy, as the Paris negotiations have shown, many countries do in fact recognize some moral responsibility for impacts of climate change, but are reluctant to define any legal implications thereof in more detail.
Against this background, we believe that explicit recognition of moral responsibilities for climate change impacts plays a significant role in fostering cooperation among the Parties to the UNFCCC.
The ability to assist principle and risk management
Discourses on global justice provide the grounds for a second alternative beyond liability and compensation. A number of scholars offer arguments to distinguish between responsibilities to assist and claims for compensation from those liable for harm [START_REF] Miller | Holding Nations Responsible[END_REF][START_REF] Young | Responsiblity and Global Justice: A Social Connection Model[END_REF][START_REF] Jagers | Dual climate change responsibility: on moral divergences between mitigation and adaptation[END_REF][START_REF] Miller | National Responsibility and Global Justice[END_REF] . The Ability-to-Assist-Principle (AAP) is in line with the APP (see Textbox 1) and assumes an assignment of responsibilities proportional to economic, technological and logistic capacities. With regard to climate change impacts specifically, we argue that prioritizing the ability to assist is supported in the following contexts [START_REF] Wallimann-Helmer | Justice for climate loss and damage[END_REF][START_REF] Jagers | Dual climate change responsibility: on moral divergences between mitigation and adaptation[END_REF][START_REF] Wallimann-Helmer | Philosophy, Law and Environmental Crisis / Philosophie, droit et crise environnementale[END_REF] : when a projected climate impact is severe and immediate help is needed; when it is unclear whether the party causing a negative impact did something morally wrong; or when the party responsible for the impact is not able to provide full recovery. It is important to note that prioritizing the AAP does not mean that the PPP and BPP should be dismissed altogether. Rather, we think that the AAP is more plausible and feasible in the aforementioned contexts than the other justice principles.
In the context of climate change impacts, we suggest that AAP includes an ex-ante component to facilitate prevention of and preparedness for L&D. Many different mechanisms exist to meet responsibilities to assist in the aforementioned sense and context, including reconstruction, programs to strengthen preparedness and institutions responsible for risk management, or technology transfer. Most of these mechanisms can be accommodated under the perspective of integrative risk management [START_REF] Mechler | Managing unnatural disaster risk from climate extremes[END_REF] .
Appropriate identification and understanding of risks, and of how risks change over time, is an important prerequisite for risk management. In the IPCC AR5, risk is defined as a function of (climate) hazard, exposure of assets and people, and their vulnerability [START_REF] Oppenheimer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . For the climate hazard component of risks, extreme weather events are a primary concern. A large number of studies have identified observed trends in extreme weather, both globally [START_REF] Hansen | Perception of climate change[END_REF][START_REF] Hartmann | Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change[END_REF][START_REF] Westra | Global Increasing Trends in Annual Maximum Daily Precipitation[END_REF] and regionally [START_REF] Skansi | Warming and wetting signals emerging from analysis of changes in climate extreme indices over South America[END_REF][START_REF] Donat | Reanalysis suggests long-term upward trends in European storminess since 1871[END_REF] , and have examined their relation to anthropogenic climate change [START_REF] Bindoff | Climate Change 2013: The Physical Science Basis[END_REF][START_REF] Otto | Attribution of extreme weather events in Africa: a preliminary exploration of the science and policy implications[END_REF] . Particularly challenging and debated is the attribution of single extreme weather events to anthropogenic climate change [START_REF]Attribution of extreme weather events in the context of climate change. 144[END_REF][START_REF] Bindoff | Climate Change 2013: The Physical Science Basis[END_REF][START_REF] Otto | Reconciling two approaches to attribution of the 2010 Russian heat wave[END_REF][START_REF] Trenberth | Attribution of climate extreme events[END_REF][START_REF] Seneviratne | Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change[END_REF] . On the other hand, disaster risk studies focusing on L&D due to extreme weather events have generally concluded that the observed strong increase in monetary losses is primarily due to changes in exposure and wealth [START_REF] Bouwer | Have Disaster Losses Increased Due to Anthropogenic Climate Change? Bull[END_REF][START_REF] Barthel | A trend analysis of normalized insured damage from natural disasters[END_REF][START_REF] Ipcc | Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change[END_REF] , with a dynamic contribution from vulnerability [START_REF] Mechler | Understanding trends and projections of disaster losses and climate change: is vulnerability the missing link?[END_REF] . For instance, for detected changes in heat-related human mortality, changes in exposure, health care or physical infrastructure and adaptation are important drivers and often outweigh the effects of climate change [START_REF] Christidis | Causes for the recent changes in cold-and heatrelated mortality in England and Wales[END_REF][START_REF] Oudin Åström | Attributing mortality from extreme temperatures to climate change in Stockholm, Sweden[END_REF][START_REF] Arent | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF][START_REF] Smith | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] .
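As a purely illustrative sketch of this risk framing, the toy code below combines the three AR5 risk components into a single index; the multiplicative form, the variable names and the numbers are our own simplifying assumptions rather than an IPCC formula, and real risk assessments are far more elaborate.

# Toy illustration of the hazard-exposure-vulnerability framing of risk.
# All names and values are hypothetical; this is not an IPCC method.
from dataclasses import dataclass

@dataclass
class RegionRisk:
    name: str
    hazard: float         # e.g. annual probability of a damaging extreme event (0-1)
    exposure: float       # normalized exposed assets and people (0-1)
    vulnerability: float  # fraction of exposed value lost if the event occurs (0-1)

    def risk_index(self) -> float:
        # Risk rises with each component; if any component is zero, risk is zero.
        return self.hazard * self.exposure * self.vulnerability

regions = [
    RegionRisk("coastal city A", hazard=0.10, exposure=0.8, vulnerability=0.3),
    RegionRisk("mountain valley B", hazard=0.05, exposure=0.4, vulnerability=0.6),
]

for r in regions:
    print(f"{r.name}: risk index = {r.risk_index():.3f}")

In this framing, attribution analysis is mainly concerned with how anthropogenic climate change has altered the hazard term, whereas the observed growth in losses discussed above is driven largely by changes in the exposure and vulnerability terms.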
Yet risk management should not only be concerned with the impacts of extreme weather events but also with the negative effects of gradual climate change on natural, human and managed systems. Based on the assessment of the IPCC AR5, concern for unique and threatened systems has mounted for Arctic, marine and mountain systems, including Arctic marine ecosystems, glaciers and permafrost, and Arctic indigenous livelihoods [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . Impacts of gradual climate change are often exacerbated by extreme events, thus enhancing risks and complicating attribution [START_REF] Huggel | Potential and limitations of the attribution of climate change impacts for informing loss and damage discussions and policies[END_REF] . Furthermore, impacts of climate change usually occur within a context of multiple non-climatic drivers of risk. Effective identification of specific activities to reduce risk may require estimation of the relative balance of the contributions of climatic and non-climatic drivers. However, understanding of general causation, for instance in the form of process-based understanding, may not provide sufficient precision to distinguish the relative importance of the various drivers; in that case, more refined information generated through detection and attribution analysis may be required. This, however, implies the availability of long-term data, which is limited in many developing countries.
In the context of international climate policy, assistance provided to strengthen risk management is largely uncontested and is supported in many documents [START_REF]Lima Call for Climate Action[END_REF] . Hence, political feasibility, the justice basis and potential progress in scientific evidence make risk management a promising vehicle for addressing climate change impacts.
Injustices from the imbalance of climate and impact monitoring
Depending on the approaches outlined in the previous sections, observational monitoring of climate and impacts can be of fundamental importance in order to provide the necessary causal evidence, and to satisfy justice claims posed by many Parties. In this light, it is informative to consider the distribution of long-term climate observations, as well as that of the detected and attributed impacts as assessed by the IPCC AR5 [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . As Figure 2 shows, the distribution of both long-term recording weather stations and observed impacts of climate change is unequal across the globe. Observations of non-climatic factors, which are important to assess the magnitude of impacts of climatic versus non-climatic factors, are not shown in Figure 2 but are likely to show a similar imbalanced pattern.
The analysis of attributed impacts based on the IPCC AR5 6 reveals that more than 60% of the attributed impacts considered come from the 43 Annex I countries, while the 154 Non-Annex I countries account for less than 40% (Fig. 3). This imbalance is even larger if the least developed countries (LDC) and the countries of the Small Island Developing States (SIDS) (80 countries together) are considered, for which less than 20% of globally detected and attributed impacts are reported.
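The underlying tally is straightforward; the sketch below illustrates the kind of aggregation behind Figure 3 using hypothetical placeholder records rather than the actual AR5 impact database.

# Hypothetical illustration of aggregating attributed impacts by country group
# and attribution confidence (the records below are invented placeholders).
from collections import Counter

# Each record: (country_group, attribution_confidence); in the real analysis,
# LDC and SIDS countries are a subset of the Non-Annex I group.
impacts = [
    ("Annex I", "high"), ("Annex I", "medium"), ("Annex I", "high"),
    ("Non-Annex I", "medium"), ("Non-Annex I", "low"),
    ("LDC/SIDS", "low"),
]

totals = Counter(group for group, _ in impacts)
grand_total = sum(totals.values())

for group, n in totals.items():
    print(f"{group}: {n} impacts ({100 * n / grand_total:.0f}% of total)")

for (group, conf), n in sorted(Counter(impacts).items()):
    print(f"{group} / {conf} confidence: {n} impacts")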
While different identified impacts in the IPCC AR5 reflect different degrees of aggregation (e.g. aggregating phenological shifts across species on a continent into a single impact unit), this aggregation tends to be amplified in Annex I countries, and thus understates the geographical contrast in terms of available evidence between developed and developing countries. Additionally, Non-Annex I, LDC and SIDS countries generally have a higher proportion of impacts with very low and low confidence in attribution to climate change whereas Annex I countries have more impacts with high confidence in attribution. The assignment of confidence thereby typically relates, among other things, to the quality and duration of available observational series [START_REF] Hegerl | Good practice guidance paper on detection and attribution related to anthropogenic climate change[END_REF][START_REF] Adler | The IPCC and treatment of uncertainties: topics and sources of dissensus[END_REF][START_REF] Ebi | Differentiating theory from evidence in determining confidence in an assessment finding[END_REF] ; this also holds for the attribution of observed climate trends to greenhouse gas emissions [START_REF] Stone | Rapid systematic assessment of the detection and attribution of regional anthropogenic climate change[END_REF] . This imbalance thus reflects an unequal distribution of monitoring for physical as well as for biological, managed and human systems.
The results of this analysis imply new kinds of injustices involved in the approaches discussed above.
Whichever approach is chosen, the unequal distribution of observed and attributed impacts, and of the confidence in assessments, implies an unjustified disadvantage for those most in need of assistance. The more impacts are detected and their attribution to climate change is clarified, the better it is understood (i) what responsibilities would have to be recognized, (ii) what the appropriate measures of risk management might be, and (iii) what would represent appropriate methods of compensation for negative climate change effects on natural and human systems. In this respect many Non-Annex I countries seem to be disadvantaged as compared to Annex I countries. This disadvantage represents a form of procedural injustice in negotiating and deciding when, where and what measures are taken. Hence, the point here is not the potentially unfair outcomes of negotiations but the fairness of the process of negotiating itself. The imbalance of the distribution of detected and attributed impacts was in fact an issue during the final IPCC AR5 government approval process [START_REF] Hansen | Global distribution of observed climate change impacts[END_REF] , indicating concern that voices from some actors and parties might be downplayed or ignored due to lack of hard evidence for perceived impacts.
Against this background, we argue in line with a version of the APP (the AAP) that countries with appropriate economic, technological and logistic capacities should enhance their support for countries with limited resources or capacity along two lines of action and policy: i) to substantially improve monitoring of a broad range of climate change impacts on natural and human systems; ii) to strengthen local human resources and capacities in countries facing important climate change impacts to a level that ensures an adequate quality and extent of monitoring and scientific analysis. This proposal is fully in line with the UNFCCC and decisions taken at recent negotiations including COP21 [START_REF] Cramer | Adoption of the Paris Agreement[END_REF][START_REF]Lima Call for Climate Action[END_REF] , and with actions and programs underway in several Non-Annex I countries, hence strongly increasing its political feasibility. The lack of monitoring and observations has long been recognized, but the related procedural injustice has not received much discussion. Our analysis intends to provide the justice basis and context to justify strengthening these efforts.
However, even if such efforts are substantially developed in the near future, a major challenge remains in how to cope with non-existing or low-quality observational records of the past decades in countries where no corresponding monitoring had been in place. Reconstruction of past climate change impacts and events by exploiting historical satellite data, on-site field mapping, historical archives, etc. may be able to recover missing data to some extent. Different and diverse forms of knowledge existing in various regions and localities can be of additional value but need to be evaluated in their respective context to avoid simplistic comparisons of, for instance, scientific versus local knowledge [START_REF] Reyes-García | Local indicators of climate change: the potential contribution of local knowledge to climate research[END_REF] . Substantial observational limitations, however, will likely remain, and the implications for the aforementioned approaches toward justice need to be seriously considered.
Developing evidence for just policy
In this Perspective we have discussed different approaches towards justice regarding negative climate change impacts. We argued that depending on the approach chosen, different kinds of evidence concerning detection and attribution of climate change impacts are needed. Establishing liabilities in a legal or political context to seek compensation sets the highest bar, and we suggest that it requires detection and attribution in line with specific causation. However, the level of scientific evidence currently available rarely supports high confidence in linking impacts to emissions, except for some natural and human systems related to the mountain and Arctic cryosphere and the health of warm-water corals. Hence, claims for compensation based on liabilities will likely continue to encounter scientific hurdles, in addition to various political and legal hurdles.
Understanding the role of climate change in trends in impacted natural and human systems at a level of evidence currently available can still effectively inform other justice principles which in our view are politically much more feasible, namely recognition of responsibilities and ability to assist.
Attribution research can clarify responsibilities and thus facilitate their recognition; and it can enhance the understanding of drivers of risks as a basis for improved risk management. More rigorous implementation of risk management is actually critical to prevent and reduce future L&D.
Whether recognition of responsibilities and the APP / AAP are politically sufficient to facilitate ex-ante compensation, for instance through the creation of a monetary fund for current or future victims of climate change impacts, remains to be seen.
Finally, the imbalance of observed and attributed climate change impacts leaves those countries most in need of assistance (i.e. SIDS and LDC countries) with relatively poor evidence in support of appropriate risk management approaches or of any claim for liability and related compensation in international climate policy or in court. We have argued that evidence in line with general causation may be sufficient for recognition of responsibilities, and hence this may well speak in favor of this justice approach, considering the aforementioned limitations in observations and attribution.
Recognition of responsibilities cannot represent the final step to attain justice, however, and we therefore suggest that two issues remain crucial: i) procedural injustice resulting from an imbalance of detected and attributed impacts should be considered as a fundamental issue in negotiations and decision making in international climate policy; and ii) monitoring of climate change impacts in natural and human systems, and local capacities in developing countries, need to be substantially strengthened. Efforts taken now will be of critical value for the future when climate change impacts are expected to be more severe than experienced so far.
Figure captions

Figure 1: A schematic detection and attribution framework for impacts on natural and human systems. The left part (in light grey) indicates the different impacts and the respective level of confidence in detection and attribution of a climate change influence as assessed in the IPCC Working Group II 5th Assessment Report (AR5) 6 . Boxes with a thick (thin) outline indicate a major (minor) role of climate change as assessed in [ 6 ] (note that this IPCC assessment did not distinguish between natural and anthropogenic climate change in relation with impacts). The right part (in darker grey) of the figure identifies important climatic and non-climatic drivers of detected impacts at global scales. The attribution statements for the climatic drivers are from IPCC WGI AR5 77 and refer to anthropogenic climate change. Trends in the graphs are all for global drivers and represent, from top to bottom, the following: TAS: mean annual land air temperature 98 ; TXx (TNn): hottest (coldest) daily maximum (minimum) temperature of the year 99 ; TOS: sea surface temperature 100 (all units are degrees Celsius and anomalies from the 1981-2010 global average); SIC: northern hemisphere sea ice coverage 100 (in million km 2 ); Popul: total world population (in billions); GDP: global gross domestic product (in 2005 USD); Life exp. and health expend.: total life expectancy at birth and public health expenditure (% of GDP) (Data sources: The World Bank, World Bank Open Data, http://data.worldbank.org/).

Figure 2: World map showing the distribution of Global Historical Climatology Network (GHCN) stations and the number of detected impacts as assessed in the IPCC WGII AR5 [START_REF] Cramer | Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects[END_REF] . It distinguishes between Annex I countries (in red colors), Non-Annex I countries (in green colors), and regions not party to the UNFCCC (grey colors). The GHCN is the largest publicly available collection of global surface air temperature station data. The shaded regions correspond to the regional extent of relevant climatic changes for various impacts, rather than of the impacts themselves, as determined in [START_REF] Hansen | Assessing the observed impact of anthropogenic climate change[END_REF]; a few impacts are not included due to insufficient information for defining a relevant region.

Figure 3: Distribution of attributed climate change impacts in physical, biological and human systems as assessed in the IPCC WGII AR5 6 , showing an imbalance between Annex I, Non-Annex I, and Least Developed Countries (LDC) and Small Island Developing States (SIDS). Three confidence levels of attribution are distinguished. Note that LDC and SIDS are also part of Non-Annex I countries.
Acknowledgements

C. H. was supported by strategic funds by the Executive Board and Faculty of Science of the University of Zurich. I. W.-H. acknowledges financial support by the Stiftung Mercator Switzerland and the University of Zurich's Research Priority Program for Ethics (URPP Ethics). D.S. was supported by the US Department of Energy Office of Science, Office of Biological and Environmental Research, under contract number DE-AC02-05CH11231. W.C. contributes to the Labex OT-Med (no. ANR-11-LABX-0061) funded by the French Government through the A*MIDEX project (no. ANR-11-IDEX-0001-02). We furthermore appreciate the collaboration with Gerrit Hansen on the analysis of the distribution of climate change impacts.